Improving the Quality of Student Learning

Carnegie Mellon University

Based on available data about learning outcomes from the course pilot, what were the impacts of redesign on learning and student development?

Note: Because of the amount of data collected in spring 2001 and the effort required to analyze it, that process is not finished. The results will be posted when they are available.

During the 2000-2001 academic year, our StatTutor cognitive tutor for statistics was used by more than 400 students taking introductory statistics in the fall and spring semesters. The prototype deployment of StatTutor modules was in the fall of 2000. Those modules targeted labs that taught statistical concepts we knew, from our previous studies of introductory statistics at Carnegie Mellon, to be difficult for introductory students: correlation, boxplots, scatterplots, contingency tables, the chi-squared statistic, and t-tests. As a limited check on how students who had used the StatTutor modules performed on problems requiring knowledge of these concepts, we tracked performance on the final-exam questions testing them. The results for the fall 2000 classes are summarized in this table:

Question #   Topic                   % Correct   # StatTutor Units
2            Correlation             87          2
6            Boxplots                81          2
1            Scatterplots            88          2
4            Contingency Tables      86          2
5            Chi-Squared Statistic   67          1
7            T-Tests                 57          1

Because our system is a computerized learning environment, we have been able to collect extensive data from the course pilot. Specifically, each time a student (or pair of students) sat down at a computer to work on a data-analysis problem with StatTutor, a complete record of their interactions was logged in that student's history file. For each session where StatTutor was used, professors were provided with summary sheets detailing the names of students who attended the lab and information on their performance: the number of questions each student attempted to answer, where errors occurred, and how many of the questions requiring TA approval of the answers each student attempted. Complete log files of the students' interactions with StatTutor were collected and used, along with pre- and post-test questions, to assess student learning outcomes. More of these interactions were logged in the labs during the spring semester than in the fall.
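To give a concrete sense of how such per-session summary sheets could be produced from the history files, here is a minimal sketch in Python. The log format (a CSV with "student" and "event" columns) and the file name are assumptions made for illustration; StatTutor's actual history-file format is not described in this report.

    # Hypothetical sketch: aggregate one lab session's StatTutor log into
    # per-student summary counts. The CSV columns ("student", "event") are
    # invented for illustration, not StatTutor's actual log format.
    import csv
    from collections import defaultdict

    def summarize_session(log_path):
        summary = defaultdict(lambda: {"attempted": 0, "errors": 0, "ta_approvals": 0})
        with open(log_path) as f:
            for row in csv.DictReader(f):
                record = summary[row["student"]]
                if row["event"] == "question_attempt":
                    record["attempted"] += 1
                elif row["event"] == "error":
                    record["errors"] += 1
                elif row["event"] == "ta_approval_request":
                    record["ta_approvals"] += 1
        return dict(summary)

    # e.g., print one summary line per student who attended the lab
    for student, counts in summarize_session("lab3_session.log").items():
        print(student, counts)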

We have produced summary files from these logs for preliminary analyses, but those analyses are still being conducted, so there are no results to report yet. A noteworthy feature of this data set is that the data from students' interactions with StatTutor have been integrated with a variety of other course-related data (e.g., exam grades, scores on specific test questions) and demographic data (e.g., a student's year in college, chosen major) to assess how different aspects of the course, and different types of students, are affected by using StatTutor.
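As a rough illustration of that integration step, the following sketch joins per-student tutor summaries with course and demographic records on a shared student identifier. Every file name and column name below is an assumption for illustration, not the project's actual schema.

    # Hedged sketch of merging StatTutor summaries with course records and
    # demographics; all file and column names below are hypothetical.
    import pandas as pd

    tutor  = pd.read_csv("stattutor_summary.csv")  # student_id, attempts, errors, ...
    course = pd.read_csv("course_records.csv")     # student_id, exam_grade, q5_score, ...
    demog  = pd.read_csv("demographics.csv")       # student_id, year, major, ...

    merged = tutor.merge(course, on="student_id").merge(demog, on="student_id")

    # e.g., compare average tutor error counts across majors
    print(merged.groupby("major")["errors"].mean())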

It is important to note, however, that these data will reflect outcomes associated with using the StatTutor in a regular college course where all students were given access. We are supplementing these data with data from an experiment to be conducted in the near future. This experiment will investigate the learning outcomes of two groups of students—those using StatTutor and those using a scaled-down problem-solving environment (that does not have the scaffolding support or feedback features of StatTutor). The students participating in this experiment will be selected to be similar to students who take the target Introductory Statistics course. Unlike the course, however, they will be randomly assigned to receive either StatTutor or its scaled-down version for problem-solving practice. In this way, we can assess the added value of StatTutor relative to a more typical problem-solving practice environment used in “reformed” introductory statistics courses.

December 2001 Update: A version of the experiment described in the preceding paragraph was conducted during summer 2001.

To evaluate StatTutor in a more controlled environment, we designed an experiment in which people could get a fairly intensive statistics experience in a relatively short amount of time. Participants with no prior (formal) statistics training were recruited for pay. They were asked to attend five sessions, two to three hours per session, and were assigned to work with StatTutor when solving problems. In sessions 1-4, the participants watched videotaped lectures and worked through sequences of problems. In addition, at the beginning of session 1 and during session 5, participants completed several paper-and-pencil tests. Also during session 5, participants worked on additional open-ended data-analysis problems on the computer, without feedback.

Some of the results of the paper-and-pencil tests were as follows:

One paper-and-pencil assessment was a multiple-choice test covering the basic skills and concepts of exploratory data analysis, including questions on identifying study designs, selecting appropriate analyses, and drawing conclusions from the results. Participants' scores increased by an average of 3.65 out of 16 items (22.8 percentage points), a significant improvement, t(19) = 5.877, p < .001. Another paper-and-pencil assessment asked participants to read through sample problems and classify them into groups on whatever basis they felt was reasonable. By analyzing participants' categories, we found a significant pre-post shift in the way participants classified the problems: before the experiment, they tended to base their classifications on the subject matter of the problems; after the experiment, they tended to base them on the appropriate exploratory analysis, t(19) = 4.11, p < .001.
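For readers who want to see how a paired-samples comparison like the one above is computed, here is a minimal sketch. The pre/post scores are made-up placeholder values for 20 hypothetical participants (hence df = 19); they are not the actual study data.

    # Illustrative paired t-test mirroring the pre/post comparison reported
    # above. Scores are invented placeholders, NOT the study's data.
    from scipy import stats

    pre  = [5, 7, 6, 4, 8, 5, 6, 7, 5, 6, 4, 7, 8, 5, 6, 7, 5, 4, 6, 7]
    post = [9, 10, 11, 7, 12, 8, 10, 11, 9, 9, 8, 11, 12, 10, 9, 11, 8, 9, 10, 11]

    t_stat, p_value = stats.ttest_rel(post, pre)  # paired, two-sided
    mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

    print(f"mean gain: {mean_gain:.2f} of 16 items ({mean_gain / 16:.1%})")
    print(f"t({len(pre) - 1}) = {t_stat:.3f}, p = {p_value:.4g}")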

Perhaps the most dramatic results of the summer experiments revolve around a central theme of this entire project: using StatTutor as a tool to help students achieve a level of statistical literacy that was not deemed possible in the course prior to its redesign. Specifically, one of the goals of adding StatTutor was to help students go beyond following the right steps once they had been given data and an associated statistical method to apply to that data. The step beyond is to identify the appropriate statistical method to use when presented with just the situation to be analyzed and some questions to answer about that situation. Prior to the redesign of the course, students were typically given the method to be applied along with the data and the questions to be answered about the data. The scaffolding provided by StatTutor appears to help students learn how to make good decisions about both which method to apply and how to apply it.

In the experiments conducted last summer, students were given open-ended quiz problems that described a real-world situation (drawn from subject areas such as medicine, economics, and education). The nature of these open-ended questions is such that students who are just guessing (or making choices for bad reasons) can make a number of errors as they work through an answer. For example, they can make an error in the initial choice of statistical method, e.g., choosing to construct a contingency table when the variables involved and the nature of the problem call for a test of correlation. Even after choosing the right analysis method, they can make errors in applying that method to the particular case. The number of errors they make can then be taken as a measure of what they have learned.
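The error measure itself is simple arithmetic: total errors divided by the number of opportunities to select an analysis (a student can err repeatedly on a single opportunity, so the rate can exceed 1). Here is a minimal sketch, with an event-record format invented for illustration:

    # Minimal sketch of the errors-per-selection-opportunity measure.
    # The event records below are invented for illustration.
    def errors_per_opportunity(events):
        errors = sum(1 for e in events if e["type"] == "error")
        opportunities = sum(1 for e in events if e["type"] == "selection_opportunity")
        return errors / opportunities if opportunities else 0.0

    quiz_events = [
        {"type": "selection_opportunity"},  # asked to pick an analysis
        {"type": "error"},                  # e.g., chose a contingency table
        {"type": "error"},                  # e.g., then chose a boxplot
        {"type": "selection_opportunity"},  # second problem, answered correctly
    ]
    print(errors_per_opportunity(quiz_events))  # -> 1.0 (2 errors / 2 opportunities)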

In an earlier study of Carnegie Mellon students who had taken a full semester of introductory statistics (prior to the redesign of the course), Lovett found that those students made more than 9 errors per selection opportunity when answering such questions. [Lovett, M. C. (2001). "A collaborative convergence on studying reasoning processes: A case study in statistics." In S. Carver & D. Klahr (Eds.), Cognition and Instruction: Twenty-five Years of Progress (pp. 347-384).] In contrast, the students who participated in the summer study using StatTutor made only 0.73 errors on average per opportunity to select an appropriate analysis. Moreover, students who had not experienced StatTutor lessons tended to choose statistical methods based on the content of the case they were given, e.g., choosing a boxplot analysis for a medical example just because they had seen a prior medical example where boxplots were used. Those with instruction from StatTutor lessons more often chose a method for the right reasons: the nature of the analysis requested and the variables involved.

Although these tests were given to different groups, the fact that the groups were similar in makeup, combined with the dramatic difference in the number of errors they made, suggests that the use of StatTutor is making a substantial difference in adding a kind of statistical literacy that was not part of the course prior to its redesign.

Additional evidence for the same conclusion can be found in the number of errors made on similar open-ended questions in three labs using StatTutor in the regular classes during the spring of 2001. In those three labs, students made only 1.3, 1.5, and 1.1 errors on average per opportunity to select the appropriate analysis. Again, this contrasts sharply with the average of 9 errors that Lovett found in her earlier study of introductory statistics students at Carnegie Mellon.

Finally, there is more evidence of the impact of StatTutor in the updated numbers for the chart given above in our first report. With the spring 2001 numbers added, that chart now looks like this:

Question #   Topic                   % Correct (Fall 00)   % Correct (Spring 01)   # StatTutor Units
2            Correlation             87                    86                      2
6            Boxplots                81                    87                      2
1            Scatterplots            88                    85                      2
4            Contingency Tables      86                    85                      2
5            Chi-Squared Statistic   67                    70                      1
7            T-Tests                 57                    74                      1

This chart shows the percentage of correct answers to final-exam questions on the topics indicated in each row, along with the number of StatTutor lessons that dealt with those topics. One point from the initial report needs clarification: these are questions about choosing the right statistical method, and they were not asked on exams prior to the course redesign because of the difficulty students had in making such decisions. On prior exams, students were simply given the method to use along with the data; before the fall of 2000, the exams tested mostly the students' application skills rather than their skill in choosing the right statistical analysis for a particular case study. Yet choosing the right method to apply is very much something the faculty wanted the students to learn. As these numbers indicate, the StatTutor approach appears to be helping students achieve this additional kind of statistical literacy even in the first course in statistics. These results are consistent with those of last summer's experiment.
