HOW TO REDESIGN A COLLEGE COURSE USING NCAT'S METHODOLOGY

VI. How to Create Small within Large

When most people think about the relationship between size and educational quality, they more or less take for granted that small is better. Whether it’s Mark Hopkins sitting on a log with a single student or the U.S. News & World Report rankings, a low student-faculty ratio—and its corollary, small class size—is assumed to be an indicator of high quality. In an ideal world, all classes would be small. In the real world, offering small classes inevitably increases instructional costs. Is it possible to resolve this familiar trade-off between cost and quality?

One of the key characteristics of most course redesigns is large class size. Some redesigns begin with large lecture sections and retain those large sizes in the redesign; others reduce the number of sections offered and create larger classes; still others combine all sections into one large section. As discussed in Chapter V, larger sections can reduce costs because fewer faculty are needed to prepare and deliver the course; at the same time, course redesign increases student learning. The idea that it is possible to increase learning while increasing class size (or maintaining already large sections) goes against common assumptions about quality held by most in higher education as well as by the public at large. Because that idea is counterintuitive to most people, we address it specifically in this chapter. The main idea is to create small within large: to focus on individual students within a large class.

Teams and Group Work

Q: What are examples of using teams or small group work in large lecture sections?

A: The main idea is to divide large lecture sections into small groups and to involve students in active, collaborative learning activities during and/or outside class time, both face-to-face and online. Following are examples.

  • To facilitate active learning in large geology lectures of more than 150 students, students were given many opportunities to solve problems collaboratively with those around them in the form of think-pair-share questions, graded work, and graded in-lecture assignments. The activities greatly improved attendance and encouraged active participation in class, since students could collaborate with one another before turning in their work for grades. The assignments consisted of easy-to-grade, multiple-choice questions, but they relied on students’ interpreting information as opposed to memorizing facts. Often, the questions involved data, plots, images, and scenarios the students had to interpret before selecting their answers.
  • Students in a large, 220-student astronomy class were divided into small learning teams of 10 to 15 students each. The instructor provided an overview of the week’s activities at a weekly meeting of the full class. Then about a dozen discussion questions were posted online, ranging from factual questions testing basic knowledge to complex questions requiring that students draw conclusions, to questions intended to elicit controversy. Midweek, students met in teams for one hour to prepare answers collaboratively and to carry out inquiry-based team projects. Each team was supervised by an undergraduate learning assistant. Teams posted written answers to all questions on the course website. At the third weekly class meeting, the instructor led a discussion session directing questions not to individual students but to the learning teams. Before the meeting, the instructor reviewed all of the posted written answers to a given question, thereby allowing the discussion time to be devoted to questions with dissonant answers among teams.
  • Small-group activities provided a strategy for exposing students to psychology course concepts. Working in pairs, students could use their books, discuss the questions with each other, and reinforce learning with other students. Faculty and undergraduate learning assistants observed that small-group work was more productive when students worked in dyads: more in-depth discussions occurred, and more completed worksheets were submitted. The worksheets included essay responses so that students did more than answer simple questions or blindly choose responses. The essay format prompted more discussion and comments on small-group assignments.
  • All 930 students enrolled in a fine arts course were divided into peer learning teams of six students each. The teams engaged in online discussions that required students to analyze two short essays in preparation for producing their own short essays. The discussions increased interaction among students, created an atmosphere of active learning, and developed students’ critical-thinking skills. Newly created positions called preceptors, most of whom had BAs in English, interacted with students via e-mail, monitored student progress, led online discussions, and graded critical-analysis essays. Each preceptor worked with 10 peer learning teams, or a total of 60 students. When asked how it felt to be a student in a large, online class, students responded, “I’m not in a large class; I’m in a class of six.”
  • Required weekly discussion posts demanded engagement with primary source readings that was both broader and deeper than in the traditional offerings of a history course. Students were required to make a minimum of three discussion posts each week in response to questions and comments pertaining to assigned primary source readings. This meant that each student had to “speak up” every week and offer a set of coherent thoughts in a virtual discussion group. That requirement was a great improvement over the traditional classroom format, in which only a minority of students engaged in discussion. Moderation of the discussion groups by virtual preceptors and the instructors of record enhanced the quantity and quality of instructor feedback. In addition, transforming the colloquial English of spoken classroom remarks into standard written English improved the quality of student discussion, sharpened writing skills, and increased the amount of written work students submitted during the semester compared with traditional courses. Because discussion groups focused on the analysis of primary sources and the integration of those interpretations into textbook and lecture material, students were exposed to a more sophisticated style of learning.
  • Class size in women’s studies was increased from 150 or 200 to 400. In the traditional 200-student sections, group discussion was very difficult. In the redesign, part of the lecture time was replaced with required online student activities and discussion, and students in the large lectures were broken into smaller, 40-person communities, each facilitated by an undergraduate learning assistant and a graduate teaching assistant. The redesigned course enhanced quality by increasing student engagement: students were asked to interact actively with the material and with their peers and to apply course concepts to real-life examples. Working in small groups, students completed a series of five discussion boards, which involved participating in discussions around course topics, completing individual and group activities such as taking virtual field trips, and examining real data on women’s issues. Students also completed a series of three experiential assignments that required them to learn by doing. For example, during a unit on gender roles, students were asked to play “toy store detectives” in order to analyze the messages about gender embedded in children’s toys. Talking in front of a group can be an intimidating experience for many students, but the online format allowed them anonymity and let them compose their thoughts before making a post.
  • Students in large lecture sections of about 90 students in a management course were divided into groups of three. Each group participated in 10 online discussions throughout the semester. The discussions pertained to the slides posted for the coming week’s assigned text chapter. Each group member contributed to the discussion by asking two questions about the slides and by answering the two questions posed by each of the other group members. Two days before the class meeting on the chapter, the group sent the instructor an electronic report summarizing its decision as to which question and answer best captured the discussion. Students were also advised to be prepared to discuss their questions and answers if called upon in the face-to-face session. During the large lecture, students took group quizzes, which increased student-to-student exchange, and they discussed the material more actively with the entire class because of their pre-class online interactions.

Q: How do we ensure that all members of the group participate equally—that is, make the same contributions to the group work?

A: Although plenty of literature shows that collaborative learning can be very effective, it does not follow that students will engage in the practice automatically. A few will, but many students need prodding to overcome their ingrained habit of studying alone.

Here is an example of a successful plan that ensures equal participation. To ensure that learning team members actually worked together, 40 percent of a student’s score in the course was attributed not to the student’s individual performance but to the team’s performance. (The remaining 60 percent was based on the student’s performance on quizzes and examinations.) The scores for written and oral answers to discussion questions were attributed not to individuals but to the team. Thus, every student on a team had an incentive to help every other student prepare good written and oral answers to the discussion questions. Likewise, grades for collaborative homework projects were assigned to teams, not individuals.

Members of the learning teams were permitted to divide the cumulative team score among themselves as they saw fit. A password-protected facility on the team home page allowed each team member to rate each teammate on performance. Each student could see his or her average performance rating by the rest of the team (but not ratings by individuals) and could compare that rating with the average rating of all members of the team. Then the team scores were divided among the members according to a simple algorithm based on the ratings.
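To make the division rule concrete, here is a minimal sketch, in Python, of one plausible such “simple algorithm”: each member’s share of the cumulative team score is weighted by his or her average peer rating. NCAT does not publish the actual algorithm, so the function name, the rating scale, and the proportional-weighting rule are illustrative assumptions.

    # Hypothetical sketch of a peer-rating-based division of a team score.
    # The proportional-weighting rule below is an assumption, not NCAT's
    # published algorithm.
    def divide_team_score(team_score, ratings):
        """Divide a cumulative team score among members in proportion
        to each member's average rating from teammates.

        team_score -- total points earned by the team
        ratings    -- dict mapping member name to the list of ratings
                      (e.g., on a 1-5 scale) received from teammates
        """
        averages = {m: sum(r) / len(r) for m, r in ratings.items()}
        total = sum(averages.values())
        return {m: team_score * avg / total for m, avg in averages.items()}

    # A four-member team that earned 400 cumulative points:
    ratings = {
        "Ana":   [5, 5, 4],  # ratings Ana received from her three teammates
        "Ben":   [4, 4, 4],
        "Chris": [2, 3, 3],
        "Dana":  [5, 4, 5],
    }
    for member, share in divide_team_score(400.0, ratings).items():
        print(member, round(share, 1))  # Ana 116.7, Ben 100.0, Chris 66.7, Dana 116.7

Under such a rule, a member rated below the team average receives less than an equal share, which is precisely the peer-pressure incentive the scheme relies on.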

The system worked remarkably well. Before posting the results of the team ratings, the instructor asked the supervising undergraduate learning assistants whether the students had rated each other fairly; 90 percent of the time, the assistants said the students’ mutual ratings conformed almost exactly to their own perceptions of the students’ performance. (Ten percent of the time, the assistants recommended that the instructor mitigate a low rating for one or two individuals, which the instructor did.) Because the students within a learning team knew each other personally, they could and did exert powerful peer pressure to perform. The students perceived the system as fair.

Student-Response Systems (Clickers)

Q: What are examples of the effective use of clickers?

A: Student-response systems (clickers) provide two important benefits: they increase student engagement with the course, and they provide immediate feedback for the instructor about how well students comprehend the course material. Following are examples of the effective use of clickers.

  • A student-response system (clickers) was used in large psychology lecture sections (400 students) to promote participation and regular attendance in the redesign. Ten percent of the course grade was based on class participation, calculated as the number of times a student clicked in out of the total number of opportunities to do so (a sketch of this calculation follows this list). Instructors incorporated three to five clicker questions into each day’s PowerPoint slides, written in a style and at a challenge level similar to the exam questions. Students viewed the clickers favorably, with a majority of respondents agreeing “somewhat” or “strongly” that clickers promoted their understanding of course material and helped them connect with the instructor and with the material. Focus groups revealed that clickers were most effective when used to solicit student responses to challenging questions in class.
  • To facilitate active learning in large physics lecture sections (100 or 250 students), a classroom response system (clickers) was used to pose conceptual questions that students answered after consulting with a small group of peers. Among other things, the technology enabled instructors to uncover and correct student misconceptions. The team took a you-need-to-be-there-and-you-need-to-be-engaged attitude toward the system, which made the course more interactive and had a positive impact on class attendance and student attitudes (responses counted toward the course grade).
  • All students in a psychology course were required to purchase a clicker and to bring it to each face-to-face class. The instructors incorporated questions from future tests into the lecture, and students answered during class via their clickers. The results gave the instructor immediate feedback on whether students’ understanding of identified difficult material improved after classroom demonstrations and discussion. If needed, the instructor could then employ peer instruction or further demonstrations and discussion until clicker responses showed students performing at an acceptable level on the quiz items. Clickers were also used to monitor participation during each class period, which counted toward the overall course grade. That information was also used to follow up via e-mail with students who missed a class and to reach out to students who had not been attending on a regular basis.
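Following is the sketch promised in the first bullet above: participation points are simply the proportion of clicker opportunities a student responded to, scaled to the weight the score carries in the course grade. The function name and the 10-point maximum (mirroring the 10 percent grade weight) are illustrative assumptions.

    # Minimal sketch of the clicker participation score described above:
    # points = max_points * (clicks recorded / total opportunities).
    # The 10-point maximum mirrors the 10 percent grade weight; names are
    # illustrative assumptions.
    def participation_points(clicks_recorded, total_opportunities, max_points=10.0):
        if total_opportunities == 0:
            return 0.0
        return max_points * clicks_recorded / total_opportunities

    # A student who clicked in on 68 of 80 questions over the semester:
    print(participation_points(68, 80))  # 8.5 of 10 participation points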

Individualized Instruction via Online Tutorials

Q: How can online tutorials transform a large course into a class of one?

A: Interactive tutorials that include simulations and exercises replace standard presentation formats, thereby giving students needed practice and supporting greater engagement with the material. Students can access course materials as often as needed. Tutorials allow the learning experience to be individualized for each student—something impossible to achieve with the one-size-fits-all lecture model.

The selection of online learning materials needs to be a thoughtful process. There are dozens of commercial and noncommercial products that claim to be interactive and cutting-edge but turn out to be a glorified set of PowerPoint presentations or flashcards. Unless the quality of online tutorials is high, students may see them as an unchallenging waste of time. We address criteria for choosing software at greater length in Chapter X.

Following are examples of effective online tutorial use.

  • A chemistry redesign made heavy use of Web-based tutorial modules in a large course comprising 350 to 450 students per section. Each module led a student through a topic in 6 to 10 interactive pages. When the student completed the tutorial, a debriefing section presented a series of questions that tested whether the student had mastered the content of that module. Students found the online tutorials very helpful; they particularly liked the ability to link directly from a problem they had difficulty with to a tutorial that helped them learn the concepts needed to solve it. Many reported that they found the online material much more accessible than the textbook. Because students came to class prepared to ask questions after completing the tutorials, and because the tutorials helped structure the discussion sections, instructors needed less preparation time. Tutorials also provided an effective substitute for faculty time otherwise spent preparing and delivering lectures: when the team did less lecturing and counted on the tutorials to provide a major fraction of the instruction, students were not at a disadvantage.
  • Spanish redesign projects universally employed the strategy of using technology where prior research indicated it was most effective and using class time where it was most effective. The result was a combination of class sessions focused on oral skills development and online tutorials that taught reading, listening, writing, grammar, and vocabulary. Putting such exercises online left more time in class for communicative activities. Students came to class having already studied and completed various mechanical, self-grading exercises, which let instructors focus on directing interactive activities instead of teaching grammar and other skills. All videos accompanying the elementary Spanish textbook were placed online. Not having to show the videos in class was another important improvement over the traditional course: students had already watched each video before coming to class, leaving more time to discuss it during class. The textbook and workbook exercises previously in paper format were moved online, along with directions for use and model answers. Students received immediate, automated feedback and detailed grammatical explanations about their work. Exercises were divided between practice exercises that could be taken as many times as needed and quizzes that could be taken only once for a grade.
  • Almost all NCAT mathematics redesigns were built around a commercial instructional software package. The availability of the software enabled each institution to avoid spending funds on software development and instead to direct all resources toward support of student learning. The software was versatile—supporting verbal, visual, and discovery-based learning styles—and could be accessed anytime at home or in a lab. Students found the software easy to use and reached a comfort level with it quickly. They especially liked the instant feedback they received when working problems and the Guided Solutions available when their answers were incorrect. Tutorials took over the main instructional role in most math redesigns. The software also let instructors see the work students were actually doing, so they could easily monitor students’ progress.
  • Easy online access to materials and resources increased learner time on task in an English composition redesign. Grammar review sites and quizzes—including the support site for the New Century Handbook, the CLAST online textbook, Cttc.comnet.edu/grammar, Academic.com, and the Texas Information Literacy Tutorial (TILT)—provided individualized remediation based on diagnostic information. Students also had access to textbook companion website materials that assisted with writing principles, writing mechanics, and reading comprehension. Students could access information around the clock and as often as they needed to do so. By conducting some instruction online instead of in class, faculty increased the amount of class time spent on the writing process. Outside class, students could submit midstage drafts to tutors at commercial online tutoring service Smarthinking and/or to college e-responders. Those round-the-clock services provided students with prompt, constructive feedback on writing assignments. The fast feedback and online assistance let students make the right changes and improved the quality of student writing. During class, the labor management aspects of the course website let the faculty provide students with individual assistance throughout class time, focusing on the needs of each student and supporting a diversity of learning styles.
  • A statistics redesign used StatTutor, an automated, intelligent tutoring system developed at Carnegie Mellon University. StatTutor facilitated understanding of statistical ideas and analytical techniques by helping students construct useful knowledge representations and thereby develop effective problem-solving skills. It contained a specific outline of steps, or scaffolding, to follow in solving problems and gave immediate feedback, tracking individual students as they went through lab exercises. StatTutor provided feedback when students pursued an unproductive path, and it closely assessed individual students’ acquisition of statistical-inference skills—in effect providing an individual tutor for each student. StatTutor also supported a dynamic model of problem solving in lab exercises by asking students to choose and categorize relevant variables and select the appropriate statistical package tools, thus making labs and homework more open-ended, exploratory, and active.

Mastery Quizzing

Quizzing is an effective tool that compels students to review material. Used by many teachers in a variety of disciplines from the primary grades through graduate school, quizzing is perhaps the most universally recognized way to get students to prepare for class. It deals with students individually and lets them correct their individual misunderstandings in the process. We have found that, when used appropriately, Web-based quizzing is an effective and efficient pedagogical tool and a major contributor to improved student learning.

Q: What is the most effective way to use quizzing?

A: Quizzes should be required rather than voluntary. If students do not have to take quizzes, many of them will not bother, if only because students do not like the idea of being evaluated. And if students do not take the quizzes, they cannot benefit from the feedback that tells them where their understanding is incorrect.

Quizzes should be low stakes. They should be treated as interactive exercises rather than evaluations. Besides reducing the anxiety associated with evaluation, low-stakes quizzes give students an index of what they need to study. The point value associated with taking quizzes should be less than that associated with other evaluative tools such as exams and papers. This reduces the stressfulness of quizzing, making a quiz less like an evaluation and more like an opportunity for students to gain feedback on what they need to study more carefully.

Students should be allowed—in fact, encouraged—to take quizzes repeatedly so that they can master the material. Consistent with the idea that a quiz is a learning tool rather than an evaluation tool, repeated attempts facilitate student mastery. The highest grade—not the first, most recent, or average grade—should then be accepted as evidence of ability. If students are graded on their first attempt, they see the quiz as an evaluation rather than a learning tool. If they are graded on the most recent score, there is a disincentive to continue taking the quiz (to practice) after an acceptable grade has been achieved. If they are graded on the average, students are unlikely to take the quiz repeatedly, if only because one bad score can dramatically reduce their chances of doing well.
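The contrast among the four policies is easy to see in a short sketch. The snippet below, in Python, grades one invented record of five attempts under each policy; the scores are for illustration only.

    # Contrast of the four grading policies discussed above for repeated
    # quiz attempts; the scores are invented for illustration.
    attempts = [55, 70, 85, 92, 78]  # one student's five attempts

    policies = {
        "first":       attempts[0],                    # punishes early practice
        "most recent": attempts[-1],                   # discourages practice after a good score
        "average":     sum(attempts) / len(attempts),  # one bad score drags the grade down
        "highest":     max(attempts),                  # rewards eventual mastery
    }
    for policy, grade in policies.items():
        print(policy, round(grade, 1))  # first 55, most recent 78, average 76.0, highest 92

Only the highest-grade policy leaves the student free to keep practicing with nothing to lose.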

Students should have the opportunity to see—immediately after completing each quiz—how many and which questions they answered correctly and incorrectly. Consistent with the importance of immediacy of reinforcement, this allows students to see how they did while they still remember why they answered questions the way they did. Ideally, for each question answered incorrectly, feedback should include information on where to find the correct answer: an indicator of the page to turn to or, better, a link to a Web-based image of the page(s) to review. The advantage of a Web-based link is that it makes the process of quizzing more interactive and more like a study tool.

Quizzes should be due frequently. In keeping with the idea that massed practice is less effective than spacing learning throughout the semester, quizzes should be due on a regular basis (once, twice, or three times a week) throughout the semester—not only before exams.

Q: How should quiz questions be organized?

A: Item selection should be randomized to make it harder for students to cheat. If every student sees the same quiz items in the same order, students will compare notes and prepare answers to the questions rather than understand the material. For the same reason, there should be several different versions of each quiz item.

The order of the questions (whether following the order in which material is covered in the text or randomly arranged) is unrelated to the efficacy of quizzing. Instructors who prefer to make their quizzes more difficult by randomizing the question order should be encouraged to do so.

The number of questions that should appear on a quiz should be based on what the course instructors consider to be appropriate for the class. We have found that quizzes with 15 to 25 items work well. The 15 to 25 items should be drawn from a quiz pool of 100 to 200 questions per quiz assignment to ensure that students are taking different quizzes with each attempt.

For multiple-choice questions, the order of the answers should be scrambled when possible (i.e., when the question does not have all of the above, A and B, or other order-dependent options). Scrambling keeps students from memorizing answer positions and focuses them on the correct answer itself. Because spelling matters in grading short-answer questions, students may understand a concept yet answer incorrectly; we therefore discourage short-answer questions in quizzing unless spelling is part of the course learning objectives (e.g., in foreign-language courses). Essay questions may be appropriate for quizzing but should be used sparingly, if only because of the time and effort required to grade them.
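Putting the pieces of this answer together, here is a minimal sketch of quiz assembly under the rules above: draw 15 to 25 items at random from a 100-to-200-question pool, and scramble the answer order of multiple-choice items except when an option such as all of the above depends on position. The data structures and names are illustrative assumptions, not a particular vendor’s API.

    # Minimal sketch of quiz assembly under the rules above. Question and
    # option formats are illustrative assumptions.
    import random

    # Options whose meaning depends on answer position; leave these unscrambled.
    ORDER_PINNING = {"all of the above", "a and b", "none of the above"}

    def build_quiz(pool, n_items=20):
        """Sample n_items questions without replacement from the pool and
        scramble answer order where no option is position-dependent."""
        quiz = random.sample(pool, n_items)
        for question in quiz:
            options = question["options"]
            if not any(opt.lower() in ORDER_PINNING for opt in options):
                random.shuffle(options)
        return quiz

    # A 150-question pool for one quiz assignment:
    pool = [
        {"text": "Question %d" % i, "options": ["w", "x", "y", "z"]}
        for i in range(150)
    ]
    quiz = build_quiz(pool, n_items=20)
    print(len(quiz))  # 20 distinct items; each attempt draws a different set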

Q: Should you use test banks provided by commercial publishers?

A: Most publishing companies provide test banks in conjunction with their textbooks. Often, answers provide guided feedback linked to the textbook; for example, students can click and see a PDF of a page they need to study. Instructors need to screen questions from publisher test banks before including them in quizzes. Including all items provided by the publisher without reviewing them is not a good idea, if only because many of the items are not good questions: some are inconsistent with course goals, and others may not be important enough to include.

Modularization

Many students get to the end of a course having mastered a large percentage of the material but not enough to pass, and they are then forced to repeat the entire course. Others are required to take a developmental course because of low placement scores when they actually lack only a small part of the course content. Course modularization offers institutions a way to treat students as individuals and accommodate partial learning: students study only what they don’t know, letting them make more rapid progress.

Q: How can modularization be used to reduce the number of incompletes and/or failures?

A: Any course can be modularized by dividing it into distinct segments and assigning one credit for successful completion of one module, two credits for two modules, and so on. By requiring students to demonstrate a passing level of proficiency in one module before proceeding to the next, severe deficiencies can be identified and corrected early, resulting in a lower failure/withdrawal rate. In the traditional format, many students fall behind and feel compelled to withdraw. In a modularized format, students who complete, say, 60 percent of the material receive some credit rather than failing the course; and rather than reenrolling for the entire course, they can take the remaining credits in the subsequent semester. That strategy has enabled redesign teams to eliminate one-fourth of course repetitions, thereby opening slots for additional students every year.
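A minimal sketch makes the credit arithmetic concrete. Assuming one credit per module and an 80 percent mastery threshold (the threshold, like the names below, is an illustrative assumption), a student’s earned credits are the count of consecutive modules mastered:

    # Minimal sketch of modular credit: one credit per consecutive module
    # mastered. The 0.80 threshold is an illustrative assumption.
    PASS_THRESHOLD = 0.80

    def credits_earned(module_scores):
        credits = 0
        for score in module_scores:
            if score < PASS_THRESHOLD:
                break  # the student must master this module before proceeding
            credits += 1
        return credits

    # A student who mastered 3 of 5 modules earns 3 credits instead of an F
    # and reenrolls next semester for only the remaining 2 credits.
    print(credits_earned([0.91, 0.85, 0.82, 0.64, 0.0]))  # -> 3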

Q: How can modularization be used to combine multiple courses into one?

A: A computer programming redesign combined two introductory courses—one the primary entry point for computer science majors and the other a less technical version of the same course for non-majors—into one course organized in modules. The modules covered particular aspects of computer programming at five different levels of subject mastery and skill acquisition. Non-majors had to demonstrate mastery through level three; computer science majors, through level five. Course credit was variable depending on the number of modules successfully mastered and the level of skill mastery the student attained. Students who had difficulty with the higher levels could change majors and receive course credit without having to drop the course and repeat modules already mastered. And non-majors who developed an interest in becoming computer science majors could go further than originally planned to meet the more stringent requirements.
