First, Measure Something . . .

By Dr. Carol A. Twigg, President and CEO
National Center for Academic Transformation

Everyone in higher education seems to be talking about accountability. Some (mainly public policy makers) think it’s a good idea. Others (just about everyone else in higher education) think it’s a bad idea. Still others (higher education associations and accrediting agencies) are caught in the middle, wanting to respond to legitimate requests for accountability yet wanting to resist requests that impinge on institutional autonomy.

Accountability implies measurement. One must be accountable for something to someone, and one must measure the “something.” Yes, but what is the “something” and who is the “someone”? Well, in this case, it seems pretty clear that the “someone” is policy makers serving as a proxy for the public. So the “someone” is not as much of a problem as the “something.” Here’s where things begin to get a bit fuzzy.

Echoing the predominant view, Paul Lingenfelter, Executive Director of SHEEO, has observed, “In postsecondary education the task of establishing criteria and data for effectiveness is substantially more complicated [than in elementary and secondary education] due to its many diverse missions. Postsecondary institutions provide remedial instruction for adults, and they develop the skills required to analyze blood samples, computer software, literature, and history. Their graduates are expected to teach children, to write newspaper articles, to manage small and multi-national businesses, to provide psychotherapy, and to design and build skyscrapers and telecommunication satellites. In addition, some institutions are charged with expanding knowledge as well as transmitting it. They conduct research and train successive generations of investigators.”

What you are going to measure depends on what you want to achieve. In the private sector, what you measure is simple: profit, the bottom line. Businesses succeed or fail based on their ability to deliver what their customers want and to do so better than their competitors. The American auto industry is in the midst of learning this lesson the hard way.

I daresay that the international economy is as complicated as Paul’s description of postsecondary education, but in the private sector, accountability is clear. The “someone” is stockholders, and the “something” is profit.

Even in parts of the not-for-profit world, we have examples of effective measurement. Medicine immediately springs to mind. Medicine is based on research and careful observation of practice. The bottom line is clear: curing illness. That’s what everyone in the health professions is trying to do. If a new approach or a new drug is discovered that cures or palliates a health problem, every medical practitioner begins to use it immediately. Doctors read journals and talk to other doctors about what works and what doesn’t work, and we all benefit from a community of practice.

So what’s higher education’s “bottom line”? Well, surely it must be student learning. Ah, but that complexity problem . . .

In 1995, Bruce Johnstone, former Chancellor of the State University of New York, received funding from the Ford Foundation to organize what he called a Learning Productivity Network to address the need for higher education to become more productive for the sake of students, parents, and taxpayers alike. At the initial meeting of the network—a panoply of higher education leaders—a debate immediately broke out. One well-respected participant said, “How can we talk about learning productivity—you can’t measure learning productivity because we don’t know what learning is.” His point, I believe, was that postsecondary learning is so complicated that we can’t possibly measure it, much less improve it.

My response that day was, “That’s ridiculous! Every day college faculty members measure and evaluate learning—in tests, assignments, exams, and so on.” Everyone who “practices” higher education measures and certifies learning all the time. We award credit hours and degrees as a certification of learning in every subject that we teach. So I don’t think the issue is “complexity.”

The issue is, of course, consistency. When professor X gives a grade of Y in organic chemistry, does it mean the same thing as when professor Z gives a grade of Y, whether at the same institution or at a different one? I agree that consistency is a problem. But perhaps the desire for consistency is getting in the way of making any real progress on assessment. Are we letting perfect get in the way of good?

I suggest that if we began to use the measures that we have—imperfect as they may be—we would begin to make progress on improving student learning. Grades given by college professors across the country are sufficient to award degrees and certificates, and while they are far from perfect as a consistent measure of student learning, they represent a good start.

Here’s a simple example. As part of the application processes of the Program in Course Redesign and the Roadmap to Redesign, hundreds of institutions described the success rates in the introductory courses they wanted to improve through redesign. From those applications, we have a pretty good idea of the percentages of students who are unsuccessful in these courses by sector—on average, about 15% at R1s, 30-40% at comprehensives and 50-60% at community colleges.

We also know that some institutions do a better job than others in regard to student success in these courses. If states and/or systems and/or institutions began by systematically capturing and reporting the percentages of students who fail to complete core courses, they would have a far better understanding of the state of student learning than they do today. It’s not perfect, but it’s a good start.
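To make that concrete, here is a minimal sketch, in Python, of what capturing and reporting might look like. The course records, the sector labels, and the use of a C-or-better grade as the success threshold are illustrative assumptions, not NCAT definitions; the point is simply that the arithmetic is straightforward once the data are collected.

    # Illustrative sketch: aggregate course completion rates by sector.
    # The records and the "C or better" success threshold are assumptions
    # made for this example, not an official definition.
    from collections import defaultdict

    course_records = [
        # (sector, course, students enrolled, students completing with C or better)
        ("research university", "College Algebra", 1200, 1020),
        ("comprehensive", "College Algebra", 900, 585),
        ("community college", "College Algebra", 700, 315),
        ("community college", "English Composition", 650, 360),
    ]

    by_sector = defaultdict(lambda: [0, 0])  # sector -> [enrolled, successful]
    for sector, course, enrolled, successful in course_records:
        print(f"{course} ({sector}): {successful / enrolled:.0%} successful")
        by_sector[sector][0] += enrolled
        by_sector[sector][1] += successful

    print("\nBy sector:")
    for sector, (enrolled, successful) in by_sector.items():
        print(f"{sector}: {successful / enrolled:.0%} successful "
              f"({enrolled - successful} of {enrolled} students unsuccessful)")

A state or system could run the same calculation across every institution’s core courses; collecting and comparing the data, not the arithmetic, is the hard part.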

Think what the impact would be if all of us in higher education started to evaluate our efforts in improving teaching and learning by at least comparing grades. If we began to capture, report and compare student success rates in the context of our diverse efforts in higher education—and act on what we learned—how would our practices be different than they are today? Let’s consider some examples from applications of information technology to improve teaching and learning, NCAT’s particular interest.

  • During the ’80s and ’90s, just about every college and university (and indeed some states and systems) established some kind of academic computing unit with the goal of improving teaching and learning. As time has gone by, these units have increased in size, scope, complexity and budget, and they now constitute sizable institutional investments. Are these units generating a return on that investment? How do we measure their impact? Do we measure the number of faculty members who use their services, or do we measure how that use translates into improved student learning?
  • Many entities give grants to harness the power of information technology to improve teaching and learning. Some are national (foundations and government agencies); some are state-based; others are institution-based. Can the National Science Foundation or FIPSE or any of the private foundations summarize which applications of technology supported by their grants have had the most impact on student learning? If the answer is no, why not? And why do they continue to give money to projects that have no apparent impact on student learning?
  • Members of the higher education community are engaged in a number of special projects that use information technology to improve teaching and learning. Some are national—indeed, international—in scope. MERLOT, the Open Courseware initiative and their local derivatives are spending lots of dollars from foundations, institutions and state governments. Can we demonstrate the impact of these initiatives on student learning? Are we even trying to do it?

Suppose, in each instance, we began by asking faculty members to report what difference particular applications of technology made in their courses, as evidenced by improved student learning. Even if the measures were not perfect, we’d begin to make progress. We’d stop funding things that have no impact, and we’d start to spend our time and effort on those that can make a real difference. We’d start arguing about validity and reliability in the context of doing something to improve student learning. If we don’t start looking at the impact of what we’re doing on our bottom line, how can we know whether our investment in instructional technology is a good one?

The higher education community is filled with unproven assumptions about what works best to improve student learning. Can you imagine if doctors conducted their practices as we in higher education do? Ignoring what research we have? Ignoring it when colleagues at a similar institution (or in one’s own department!) achieve dramatic gains in student learning? By beginning to capture and report something as simple as successful completion rates, we can start to identify promising practices and, one hopes, stop treating students with the educational equivalent of bleeding patients to cure them!

We would also start to get at the consistency problem. Once comparative grading practices were made public, the argument about the validity and reliability of grades would begin in earnest.

First, measure something . . .
