How to Redesign a College-Level or Developmental Math Course Using the Emporium Model

VIII. How to Compare Completion Rates

Completion rates refer to the percentage of students who begin a course and finish it with a grade of C or better. This measure (sometimes referred to as a pass rate) is generally accepted in higher education as an indicator of student “success” in a course.
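
To make the calculation concrete, here is a minimal Python sketch; the grade roster is hypothetical, purely for illustration.

```python
# Minimal sketch: the completion (pass) rate is the share of students who
# began the course and finished with a C or better. Roster is hypothetical.
final_grades = ["A", "B", "C", "C", "D", "F", "W", "B", "A", "W"]

passing = {"A", "B", "C"}  # C or better counts as completion
# Withdrawals (W) stay in the denominator because those students began the course.
completion_rate = sum(1 for g in final_grades if g in passing) / len(final_grades)

print(f"Completion rate: {completion_rate:.0%}")  # 60% for this roster
```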

Completion rates are not the same as measures of student learning. Assessment of learning refers to direct and comparable measures of student learning outcomes; completion rates refer to final grades.

Q: Why are grades not comparative measures of student learning?

A: Pass rates (grades of C or better) in traditional courses are not reliable indicators of student learning and almost universally suffer from inconsistent grading practices. Students in traditional courses are assessed in a variety of ways, which leads to grading differences across sections. Inconsistencies include (1) curving grades, (2) failing to establish common standards for topic coverage (in some sections, entire topics are not covered, yet students pass), (3) having no clear guidelines for awarding partial credit, (4) allowing students to fail a required final exam yet still pass the course, and (5) failing to provide training and oversight for instructors, especially part-time ones.

NCAT has frequently observed improved student learning outcomes, supported by clear assessment data, coupled with decreased completion rates. This pattern is typically a sign of prior grade inflation.

Examples

  • At Florida Gulf Coast University (FGCU), redesign students succeeded at a much higher level than traditional students on module-exam objective questions, which tested content knowledge (85% vs. 72%). Yet when final grades were compared, 22% of students in the traditional course received a D or an F or withdrew, versus 29% in the redesigned course. Upon further investigation, the FGCU team discovered that different standards for passing the course had been applied: the adjuncts who taught the traditional course curved their module exam grades, often by as much as 15 to 20 points (the effect of such a curve on pass rates is sketched after this list).
  • At SUNY Potsdam, average scores on comparable questions graded by the same rubric improved from 2.22 in the traditional course to 2.58 in the redesigned course, and correct responses to common multiple-choice questions increased from 55% to 76%. Yet student success rates (grades of C or better) declined from 73% of traditional students to 61% of redesign students. Because generally less demanding adjunct faculty no longer teach the course and grading has become more uniform, the team believes that past grades were higher because the grading was easier.
  • At Alcorn State University, the average of mid-term and final exam scores in College Algebra from the fall 2008 traditional sections was compared with that of the fall 2009 redesigned sections. Students in the redesigned course performed significantly better: the fall 2008 traditional average was 55.89, while the fall 2009 redesigned average was 66.16. Even though students earned better scores on the common exams, the DFW rate in fall 2009 (47%) was higher than in fall 2008 (22%). The conflict between improved test scores and lower completion rates most likely arose because the redesigned course used uniform grading methods across sections, whereas instructors previously had more grading flexibility, possibly leading to grade inflation.
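
The common thread in these examples can be shown in a few lines of Python. This is a minimal sketch, not data from any of the institutions above: the scores, the passing cutoff of 70, and the 15-point curve are all hypothetical, chosen to echo the FGCU case.

```python
# Minimal sketch: identical exam scores yield very different pass rates
# depending on grading practice. All numbers are hypothetical.
scores = [85, 72, 68, 61, 55, 90, 48, 74, 66, 59]

def pass_rate(scores, curve=0):
    """Share of students at or above 70 (a C) after adding a flat curve."""
    return sum(1 for s in scores if s + curve >= 70) / len(scores)

print(f"Uniform grading: {pass_rate(scores):.0%}")                  # 40%
print(f"With a 15-point curve: {pass_rate(scores, curve=15):.0%}")  # 90%
```

Nothing about the students' knowledge changes between the two lines; only the grading practice does, which is why pass rates alone cannot serve as a measure of learning.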

Q: Why would one want to look at comparative completion rates as well as comparative measures of student learning?

A: It is important for students both to master the content of the math course and to complete the course. It is possible to demonstrate increased student learning through redesign (e.g., a final-exam mean that rises from 50 percent to 70 percent), but if only 20 percent of students take the final exam, you have a problem despite the demonstrated increase in student learning outcomes, as the sketch below illustrates.
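
Here is a minimal Python sketch of that scenario; the enrollment figure, the exam scores, and the simplifying assumption that a passing exam score alone earns a C are all hypothetical.

```python
# Minimal sketch: a strong final-exam mean can coexist with a weak
# completion rate when few students reach the exam. Numbers are hypothetical.
enrolled = 100
exam_scores = [70, 75, 80, 65, 72] * 4   # only 20 of 100 students sit the final

exam_mean = sum(exam_scores) / len(exam_scores)
passers = sum(1 for s in exam_scores if s >= 70)   # assumed C cutoff of 70
completion_rate = passers / enrolled               # non-takers cannot complete

print(f"Final-exam mean (takers only): {exam_mean:.1f}")         # 72.4
print(f"Completion rate (all enrolled): {completion_rate:.0%}")  # 16%
```

The exam mean looks healthy, but four out of five enrolled students never completed the course; both numbers are needed to tell the real story.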
Table of Contents

Introduction
I. Essential Elements
II. Improving Essentials
III. Getting Ready
IV. The Lab
V. Course Policy Decisions
VI. Instructional Costs
VII. Learning Assessment
VIII. Completion Rates
IX. Faculty Concerns
X. Student Participation
XI. Planning/Implementing
XII. A Written Plan
XIII. Consensus

Appendices:
Assessment Planning
Assessment Reporting
Completion Reporting
Cost Planning Tool (CPT)
CPT Instructions
Scope of Effort Worksheet
Scope of Effort Instructions
Student Learning Results
Completion Results
Cost Reduction Results