NCAT Monograph

Quality Assurance for Whom?
Providers and Consumers in Today’s Distributed Learning Environment

By Carol A. Twigg


Preface
Part 1: Quality from the Provider's Perspective
Part 2: Quality from the Consumer's Perspective
Conclusion: Quality Assurance in a Disaggregated World
Notes
Symposium Participants

Preface

On July 13–14, 2000, a group of sixteen higher education leaders gathered at the Sagamore Hotel in Lake George, New York, to participate in an invitational symposium. The topic was "Preserving Quality in Distributed Learning Environments." This was the third of the Pew Symposia in Learning and Technology, whose purpose is to conduct an ongoing national conversation about issues related to the intersection of learning and technology.

According to Carole Cotton of CCA Consulting, a market research firm, ninety-four percent of all colleges and universities are either currently (63%) or planning to be (31%) engaged in distance and distributed learning. Some believe that this extraordinary growth is outstripping the existing quality assurance capacities of state agencies, accrediting associations, and similar groups. Others counter that distance learning is a long-established form of higher education and that quality assurance practices for distance education are essentially the same as those used for traditional, on-campus education. Regardless, the advent of distance and distributed learning has raised numerous questions about quality assurance. How do established distance learning institutions ensure quality? What more needs to be done? How do quality assurance agencies view the distinction between on- and off-campus teaching and learning?

This symposium explored the topic of quality assurance in distributed learning in an effort to provide some answers. Participants in the symposium fell into two categories. The first were leaders from accrediting associations, the federal government, and other policy-oriented associations; the second were campus practitioners who are actively engaged in developing and implementing online programs and who are thus grappling with quality assurance issues on a daily basis. By joining those with a policy perspective and with responsibility for quality assurance on a macro scale and those with a practical perspective and with responsibility for quality assurance on a micro scale, we hoped to arrive at a point of understanding that would have a positive impact on both theory and practice.

We confined our discussion to college-level, credit-bearing teaching and learning experiences and excluded noncredit courses and programs in order to keep a focus on higher education’s primary domain. We also tried not to redefine existing quality assurance systems. Although these quality assurance systems are not perfect—and they can certainly be improved—they more or less work, for many institutions, for states, and for the federal government. We also know that the predominant quality assurance organizations are attempting to improve their processes (e.g., placing a growing emphasis on learning outcomes and encouraging greater flexibility in the application of current standards to nontraditional organizations). Neither did we try to solve practical problems for the accreditation process (e.g., how to conduct reviews in these new environments when teams lack the necessary skills or experience). Rather than replicate the "what’s wrong with accreditation" discussion heard elsewhere, we raised these issues only when they were explicitly related to distributed learning environments.

George Connick, president emeritus of the Education Network of Maine, has pointed out that any discussion about quality in a distributed learning environment must first ask: From whose perspective are we considering quality? If we are looking at quality from the viewpoint of most traditional higher education institutions, we are likely to get a very different answer from that offered by students studying via technology, especially at a distance.

As a consequence, this paper, like the symposium discussion, is organized into two parts. The first part focuses on questions and issues of quality assurance viewed largely from the perspective of institutions and agencies. It explores the nature of the problem that distributed learning seems to present for traditional quality assurance practice. The second part focuses on quality assurance from the point of view of consumers, primarily students but also employers and graduate and professional schools. The accreditation process and many of the quality assurance methods used in the academy pre-date the consumer culture that has become widely accepted in today’s society. How are consumer needs different from those of institutions and quality assurance agencies in a distributed learning environment? One thing is clear: when we turn to questions of quality assurance at the course level, where most consumers interact with online learning, we find chaos. Yet as this paper suggests, we can find a way out of this chaotic situation by meeting both the providers’ and the consumers’ needs.

A few words about terminology are in order. Throughout this paper, the terms distance learning, distance education, distributed learning, and online learning are used more or less interchangeably. At times, the use of distance learning seems appropriate because the issues under discussion most frequently concern off-campus (distance) versus on-campus learning. At other times, particularly when describing the new higher education environment, the phrase distributed learning more clearly expresses the changing nature (and the blending) of all forms of higher education. In any event, the reader should not draw unwarranted conclusions from a particular usage. This paper, like the discussion in Lake George, builds on the good work of the individuals who participated, both virtually and in real time, in the symposium. Before our meeting, a number of them submitted written answers to a series of questions, and their responses, elaborated by the discussion, have been included in this paper. Although not every participant will agree with every statement in this paper, both the discussion and our general conclusions have been captured.

The goal of the Pew Symposia is to approach topics related to learning and technology from a public-interest perspective. Many constituencies bring self-interested agendas to discussions about technology: administrators worry about facing competitors; faculty worry about keeping jobs; and vendors worry about selling particular hardware and software. So too do different segments of the higher education community bring competing agendas that often reflect political considerations first and quality concerns second. The Pew Symposia are intended to produce thoughtful analyses and discussions that serve the larger good. Please let us know if we have met that goal in our approach to this very important and somewhat contentious issue.

Part 1: Quality from the Provider’s Perspective

THE NATURE OF THE PROBLEM

Many members of the higher education community approach the issue of quality assurance in distance learning not as a desired end but as a problem that needs to be solved. The "problem" expresses itself in three different but related ways.

1. Distance learning requires new, separate quality assurance standards because it is different.

Many people believe that distance learning is so different from classroom-based education that new—and separate—standards of quality are needed. Matthew Pittinsky, Blackboard Inc. chairman, and Bob Chase, National Education Association president, asked in the introduction to a recent study of this issue, "How can a teaching/learning process that deviates so markedly from what has been practiced for hundreds of years embody quality education?"1 The recently drafted "Guidelines for the Evaluation of Electronically Offered Degree and Certificate Programs," a joint product of the Council of Regional Accrediting Commissions and the Western Cooperative for Educational Telecommunications (WCET), introduced the issue as follows: "New delivery systems test conventional assumptions, raising fresh questions as to the essential nature and content of an educational experience and the resources required to support it. As such they present extraordinary and distinct challenges to the eight regional accrediting commissions which assure the quality of the great majority of degree-granting institutions of higher learning in the United States."2

The higher education community has developed several quality indicators that are so well understood and accepted that many institutional quality assurance programs simply imbed them. Quality equals a tenured full-time faculty member with a Ph.D. teaching the course. Quality equals courses and degree programs offered by and on a residential campus. Quality equals students learning by sitting in the same room with a professor. When it comes to distance education, however, the picture is not as clear.

2. Distance education programs have low (or no) quality standards.

Many people, particularly those who lack firsthand familiarity with distance learning, are frankly suspicious of distance education and think that distance education programs have either low standards or even no standards. The American Federation of Teachers (AFT) stated: "Still, a good number of educators remain skeptical [of distance learning]. Believing that teaching and learning are inherently social processes, these educators consider ‘same-time same-place’ interaction central to a successful educational experience."3

Some people are more than uncomfortable. Those concerned with consumer protection sometimes presume that distance learning is more susceptible to fraud and abuse than traditional education. Others are suspicious of the motives of those engaged in distance learning. Are institutions developing distance-learning programs to fulfill their core values or for other reasons? The image of distance education as a "cash cow" is a powerful one. One symposium participant asked, "Is this really a mainstream thing, or is it just the part that has to pay for itself?"

Clearly many, if not most, people have a preconceived "model" of distance learning. One model views distance education as disconnected from the faculty because some distance offerings have historically been managed by departments such as Extended Studies or Continuing Education. Students may graduate by taking courses offered mostly by adjunct faculty. Thus many people conclude that distance education programs are left outside of the formal faculty structures that oversee quality—they are not particularly "owned"—and that the mechanisms of internal quality assurance do not apply.

Others counter that this conclusion derives from how distance education was frequently conducted in the past and that today’s distance learning programs are becoming fully integrated into campus life. As an example, the University of Illinois at Urbana-Champaign (UIUC) now views distance learning as part of its central mission to serve the people of the state of Illinois, as part of the core values of the institution. UIUC’s master’s degree program in library science is offered online as a "scheduling option." This program is the same as the one offered on campus: the same faculty teach on campus and online; students meet the same entrance requirements; faculty evaluation is the same for faculty on campus and for those online. New hires are told they will teach on campus as well as online. Illinois has moved from the idea that "distance education is of poor quality" to a conviction that "distance education is now mainstream."

Those symposium participants less familiar with the distance-learning scene questioned whether UIUC represents an ideal situation, one that is out of the ordinary. Some believe that the majority of institutions are operating distance programs as "cash cows," using fewer resources to bring in additional income to the institution. Those participants with extensive experience in the field countered that UIUC is not an exception. On those campuses seriously engaged in online learning—versus those merely talking about it—the integration exemplified by UIUC is typical.

The term distributed learning has evolved specifically to describe this integration and to move people away from seeing a split between on- and off-campus use of technology in academic programs. Distributed learning encompasses both on- and off-campus online teaching and learning. The term had its origins in the networking community, where experts talk about distributed intelligence on the network, for example, in contrast to the central intelligence of the mainframe computer. The term suggests that learning is being distributed throughout the network. Consequently, the kind of either/or (on/off-campus) distinction that the term "distance learning" suggests is no longer appropriate.

Clearly, much of the concern about distance education stems from the fact that many people in higher education are not familiar with it. They need to go through a process that will bring them to the same comfort level they now have with traditional higher education. This means that all parties with an interest in higher education—including legislators and policy-makers—need to be educated.

3. There is no consensus on distance learning quality.

Many people believe that there is no consensus on what constitutes good practice in distance education. Regional accrediting bodies, they assert, have varying levels of specificity when it comes to defining high-quality distance learning. Institutions and state systems are devising their own standards based on their reading of the accrediting bodies, the literature, and so on. Because distance learning reaches beyond local and regional boundaries, many feel that some commonly accepted standards are needed to ensure adequate protection for student consumers. Do we have a common understanding of the indicators of quality in a distributed learning environment? If so, what are its components?

PRINCIPLES AND PRACTICES

In the early 1990s, the WCET developed "Principles of Good Practice for Electronically Offered Academic Degree and Certificate Programs" (http://www.wiche.edu/telecom/projects/balancing/principles.htm), which has been widely circulated and adopted by states, regional accrediting associations, and others.

Since that first list was produced, many other groups have developed similar statements:

  • The American Distance Education Consortium (ADEC), an international consortium of state universities and land-grant institutions, provides high-quality, economical distance education programs and services via the latest and most appropriate information technologies. ADEC has developed the "ADEC Guiding Principles for Distance Learning" (http://www.adec.edu/admin/papers/distance-learning_principles.html) and the "ADEC Guiding Principles for Distance Teaching and Learning" (http://www.adec.edu/admin/papers/distance-teaching_principles.html).
  • A joint task force of the American Council on Education and The Alliance: An Association for Alternative Programs for Adults produced "Guiding Principles for Distance Learning in a Learning Society."
  • The Instructional Telecommunications Council (ITC), an affiliated council of the American Association of Community Colleges established in 1977, provides leadership, information, and resources to expand and enhance distance learning through the effective use of technology. ITC’s new monograph series "Quality Enhancing Practices in Distance Education" (http://www.itcnetwork.org/quality.html) provides case studies containing best practices in community college distance education including, for example, teaching, student services, accreditation, and assessment.
  • The American Federation of Teachers (AFT) recently published "Distance Education: Guidelines for Good Practice" (http://www.aft.org/higher_ed/downloadable/distance.pdf). Based on a 1999 survey of two hundred AFT members who are distance education practitioners, these guidelines attempt to go deeper than previous guidelines reviewed by the AFT.

In cooperation with the WCET, the Council of Regional Accrediting Commissions (C-RAC) recently published the draft document "Guidelines for the Evaluation of Electronically Offered Degree and Certificate Programs" (http://www.wiche.edu/telecom/Guidelines.htm), which updates and elucidates the WCET’s earlier statement.

Clearly a lot of thought and work by countless individuals has gone into developing these statements. Those familiar with all of them will find a remarkable degree of congruence among them. As a way of confirming how much these statements have in common, the symposium participants spent a good deal of time discussing a recent study commissioned by the National Education Association (NEA) and Blackboard Inc. and conducted by the Institute for Higher Education Policy (IHEP). That study, entitled "Quality on the Line: Benchmarks for Success in Internet-Based Distance Education" (http://www.ihep.com/quality.pdf), first reviewed all of the existing principles, guidelines, and benchmarks that address best practices in distributed learning and combined them into a single list of forty-five "benchmarks."

The researchers then tested the efficacy of that list by interviewing leading practitioners in the field, asking them three questions:

  1. To what extent are these benchmarks being incorporated into their existing practice?
  2. Are there additional benchmarks, not found in the literature, that contribute to quality?
  3. How important are the benchmarks to the institution’s faculty, administrators, and students?

In that process, the researchers dropped thirteen benchmarks, added three, and combined those that overlapped. The result is a list of twenty-four benchmarks that are "essential to ensure quality in Internet-based distance education." This list is reproduced in Figure 1.

Symposium participants were asked five questions:

  1. Are these benchmarks sufficient to meet the need for commonly accepted standards of good practice? What, if anything, is missing from these statements?
  2. These principles of good practice are basically process-oriented and resemble current accreditation practices. How do we know that institutions and organizations in fact apply them? How do we know that these principles contribute to high-quality outcomes?
  3. How should these standards be applied in new institutional configurations?
  4. Are these principles any different from principles of good practice in on-campus programs? If so, in what ways?
  5. Are these statements sufficiently consumer-oriented?

Their discussion of these five questions follows.

1. Are these benchmarks sufficient to meet the need for commonly accepted standards of good practice? What, if anything, is missing from these statements?

Participants agreed that the IHEP benchmarks go a long way in demonstrating that there is consensus about what constitutes good practice. Only two problems were identified. The first is relatively minor: this list, like many of those IHEP consolidated, tends to "mix and mush" the targets of analysis. Some statements address only programs, others address courses, and still others address "learning experiences" and "non-formal educational programs." Consistency and clarity concerning the organizational level being addressed would improve any statement about quality indicators in distance learning.

The second problem is more substantive. Although these statements are called benchmarks, several symposium participants observed that they are more like principles of good practice on the way to becoming benchmarks. We often think that we are talking about best practices when we are really talking about adequate practices. Rather than having benchmarks, we rely on a pass/fail model. Benchmarks, on the other hand, imply specific measures of high—or the best—quality and gradations moving toward those measures.

These statements do not say, "You should have these outcomes." They say only, "You should have outcomes." Furthermore, they do not say anything about the level at which students or institutions ought to be performing on the particular outcomes chosen. For example, one principle calls for faculty assistance in transitioning from the classroom to the online environment, but it does not say what characterizes excellent faculty orientation in an online context. Rather than simply noting that a requirement exists, we need to demonstrate how well it works.

There is a choice to be made about how narrowly or broadly we frame our interest in quality. Are we concerned about quality assurance mainly in terms of minimum standards for consumer protection, or are we interested in creating incentives for quality improvement, incentives that will make the market work better? If the latter, we are led to another set of consumer information issues. Several of the symposium participants pointed out that we in higher education are willing to "settle" way too soon, accepting a level of performance that is erratic. There is no concept of "world-class" (which is where the term benchmarking comes from)—of meeting or exceeding customer expectations, ideas that are used in the business world. Rarely is there that kind of drive in our industry. How do we get beneath the veneer of "we do it" to "we do it well"? How do we bring the concept of "world-class" into higher education?

In response to the question about what is missing from the list, several participants noted that the IHEP study excluded certain benchmarks on the grounds that a quality course does not require their being present. That may indeed be true. Yet those that were excluded are, in the view of many symposium participants, ones that will lead to higher-quality practices because they are more learner-centered and because they incorporate pedagogical approaches of proven effectiveness.

Two benchmarks that were dropped from the teaching/learning category involved collaboration:

  • Courses are designed to require students to work in groups utilizing problem-solving activities in order to develop topic understanding.
  • Course materials promote collaboration among students.

Two others in that category involved designing modular and mastery learning techniques:

  • Courses are separated into self-contained segments (modules) that can be used to assess student mastery before moving forward in the course or program.
  • The modules/segments are of varying lengths determined by the complexity of learning outcomes.

Three benchmarks that were dropped from the course development category involved paying attention to student learning styles:

  • During course development, the various learning styles of students are considered.
  • Assessment instruments are used to ascertain the specific learning styles of students, which then determine the type of course delivery.
  • Courses are designed with a consistent structure, easily discernable to students of varying learning styles.

Symposium participants observed that dropping these indicators reinforces the notion that these statements represent a minimal-standards approach much like current accreditation processes.

Finally, the IHEP list does not include what might be called political issues. Participants noted that the discussion about quality in distance learning is taking place amid tremendous turmoil on our campuses, a result of the changing nature of higher education. Faculty and administrators are locking horns over organizational issues such as part-time faculty, governance, and commercialization. In many cases, the fight over distance learning, often couched in quality assurance terms, is part of that struggle.

For example, maximum class size is frequently mentioned as a quality indicator for online courses. The IHEP report does not include a benchmark for class size because the researchers found that there are a wide variety of opinions regarding the optimum faculty-student ratio. Some suggest that there should be a maximum size of, say, 20 to 25 students, and others recommend that the first online course a faculty member teaches should be capped at a relatively low enrollment. Yet many faculty have found that appropriate interaction and good student outcomes can be achieved in courses with large enrollments, and they are successfully offering online courses with hundreds of students.

What matters is the pedagogical design. An explicit goal of the Pew Grant Program in Course Redesign is to find ways to accommodate larger numbers of students. Most of these projects rely heavily on software. UIUC, for example, has tripled enrollment in its foreign language courses by using Mallard, a UIUC-developed intelligent-assessment software program that automates the grading of homework exercises and quizzes. Rio Salado College has doubled the number of students in its online mathematics courses by building them around high-quality software products from Academic Systems.

The IHEP report states, "It could be argued that maximum class size relates more to faculty course workload than student outcomes."4 One organization rightfully concerned about faculty workload, the American Federation of Teachers (AFT), notes the University of Illinois Faculty Seminar on Distance Education’s recommendation favoring smaller faculty/student ratios. The AFT guidelines do not, however, endorse a hard-and-fast rule; rather, they recommend that class size should be established through normal faculty channels (e.g., through collective bargaining). At the same time, the AFT document denounces practices in which teaching faculty "operate from workbooks based on a prefabricated curriculum that the faculty member had little role in developing." The AFT urges that additional compensation be provided to faculty for teaching online courses and advocates that faculty members retain control over use and reuse of course materials. Like these examples, many of the "standards" included in the AFT guidelines have little to do with academic quality and much to do with advancing faculty interests.5 Whether one agrees or disagrees with these positions, few would argue that they are essential to high-quality distance learning.

In conclusion, participants at the symposium generally supported the idea of confining "political" issues to political documents and omitting them as indicators of high quality.

2. These principles of good practice are basically process-oriented and resemble current accreditation practices. How do we know that institutions and organizations in fact apply them? How do we know that these principles contribute to high-quality outcomes?

A complicating factor that affects the discussion of quality assurance in distance learning is that it is taking place within a context of dissatisfaction with current higher education quality assurance processes in general. Some see the attention focused on distance learning as an opportunity to correct the inadequacies of the current quality assurance system, leading others to wonder whether there is an unfair attempt to create a double standard.

Generally, symposium participants agreed that the IHEP standards will pass the test for many, if not most, regulatory agencies. Some participants, however, expressed concern that the benchmarks reinforce the idea of minimal standards instead of focusing on student learning outcomes. As one participant noted, the problem is that the list is concerned too much with what we have been doing on campus and not enough with student learning and product.

Inputs versus outputs

Licensing authorities and accrediting agencies have long assumed that institutions with certain attributes (e.g., president, board, full-time faculty) had the capacity to carry out various degree-granting educational missions. Current quality reviews are based primarily on examining institutional "inputs": the capacity and resources of institutions and programs. In many ways, accreditation has historically been based on an act of faith: if certain capacity and resource conditions are present, student learning takes place.

The unbundling of services, an implicit attribute of distributed learning provision, poses new challenges for determining which capacity and resource factors are essential. Those educators attached to input measures become nervous when distance-learning programs appear to eliminate many of the capacity and resource conditions of higher education (e.g., full-time faculty and physical campuses). How will we define credits without seat time? How will we define degrees without full-time core faculty? Others believe that distributed learning will leave accreditors with nothing on which to base quality judgments other than student achievement. We will finally be forced to address student learning as a central indicator of quality. Those who want to move quality assurance in the direction of assessing learning outcomes see this as a good thing.

Problems arise, however, in two cases: (1) when some insist that standards for distributed learning be not only different from but higher than those for traditional education (i.e., an insistence on assessment of learning outcomes for distributed learning programs versus assessment of capacity and resources for campus-based programs); and (2) when some assert that an examination of capacity and resource conditions has little or no importance in distributed learning environments.

External pressures for accountability

Despite the fact that almost everything we do in U.S. higher education seems to be examined externally, many symposium participants noted the trend toward even greater demands for external certification as a way to ensure quality. In some states, for example, students wanting to become teachers must take state tests, rather than institutional exams, in order to be certified. Institutions are then ranked by their students’ scores on these state licensing exams. These developments represent an extension of current practice in other professional fields like law, engineering, nursing, and accounting, which already have some form of external validation. Soon we may see testing of all students. Many states, including Washington, Colorado, and Illinois, are talking about exit exams at every level of higher education. Even though many educators question whether these common exams are a good way to assess learning, most agree that these exams will probably happen more often rather than less.

All symposium participants agreed that these trends toward greater external certification largely indicate a lack of confidence about how well higher education is doing. Generally, degree acquisition, graduation, and grades are no longer viewed as adequate indicators of competency. In addition, the pressure for external exams often reflects the frustration that many outside of our community feel about the enormous sums being spent for U.S. higher education.

As a result, a new industry that certifies competency is emerging. In the information technology field, Java or Microsoft certifications are at least comparable to a degree and are perhaps even more important. If you want to hire a Cisco engineer, you hire someone with a Cisco certificate rather than with a computer science degree. If such competency certification works and is effective in this field, one could envision this strategy being applied to the bachelor’s degree, with national standards derived from competency. And once we accept the idea of competency, the question arises of whether a student even needs to complete courses in order to receive a degree. If a student can pass all competency tests, why should he or she have to take classes? Will we discover that the bachelor’s degree has become so useless and meaningless that we will use a bank of tests to certify abilities in certain areas? In any event, there is little doubt that this trend toward competency certification will expand.

Challenges to peer review

Current quality assurance processes rely on peer review, especially faculty peer review. Today many are asking whether this approach passes muster for either face or content validity. For those who are not members of the "club," peer review looks a lot like the fox in the henhouse. Those who have looked closely at the peer review process have serious questions about how well prepared the peer review teams are to provide a valid evaluation of the institution they are reviewing. Can a process owned by the industry (as accreditation is) provide legitimate quality assurance, or must the process be governed from outside?

Distributed learning environments further complicate these issues. What does it mean to be a peer? And can peer review be sustained in this new, more complex environment? Is it reasonable to expect faculty to bundle their quality assurance responsibilities in an unbundled world? Is it possible? For example, one problem the regional accrediting associations face is the lack of trained evaluators who have both a knowledge base and an experiential base on which to make judgments about online education. There are so few "peers" to evaluate the different applications of distributed learning that individuals at the institutions being evaluated frequently have to teach the evaluators. If peer evaluation is worth preserving in higher education, we need to develop sufficient training and supplementary materials to assist in shifting the culture and managing the current lack of knowledge among evaluators.

The increased diversity of learning providers and learning experiences suggests an increased diversity in the quality assurance system. Alternatives to peer review include the use of professional staff trained in quality assessment. For example, higher education currently relies on external quality processes to determine whether or not institutions will be bonded for facilities based on whether the proposed construction project looks like a good or bad investment. This process has external as well as internal validity. Could we create a similar external quality assurance process for the academic environment?

Other alternatives include internal centralized structures for controlling quality. Many organizations, including some colleges, have developed such approaches: the British Open University, the U.S. Army, the University of Phoenix, various corporations, and Rio Salado College, for example. One might speculate as to whether this development at new kinds of institutions is an extension of what occurred several decades ago when the university model did not translate very well to regional colleges and community colleges. By taking course design to a central team that "tells" the faculty member (often an adjunct) what and how to present, this process appears to pass the responsibility for quality from the faculty member to the institution.

3. How should these standards be applied in new institutional configurations?

New configurations of traditional institutions (e.g., virtual universities) and new forms of higher education organizations raise ongoing questions about accountability for academic quality. Distributed learning is characterized by the ability to disaggregate and reaggregate practically every aspect of the higher education enterprise. Vastly expanded opportunities for outsourcing—not just areas like food services and facilities management but every aspect of the academic program—tend to make people nervous. As more and more institutions begin to outsource core functions, many people begin to ask whether or not the institution is the appropriate unit of analysis for quality assurance processes.

Even if the institution remains the primary unit of analysis, other questions arise. What is the core institution in these new environments? Is there a core that should not be outsourced? Can one outsource the vice-president for academic affairs, the board of trustees, the faculty? Who is in control of the curriculum? The responsibility for the quality of the academic program has traditionally been vested in the faculty. Does the distributed learning environment challenge that fundamental responsibility? Does the unbundling of faculty roles jeopardize the capacity of "faculty" to fulfill that responsibility?

Many institutions are partnering with companies that provide technological support for both students and faculty. How do we know that the support provided is quality support? We need to develop criteria to enable institutions to make good outsourcing choices. If we continue to expect faculty to be responsible for quality control, are new structures or new understandings of quality assurance required to enable the faculty to continue to fulfill this responsibility? New approaches to faculty development for effective oversight might include training in such things as evaluating interactive courseware and reviewing contractual arrangements for reusable educational products.

In addition to institutional outsourcing, many different consortial arrangements are emerging. Who is responsible for quality in a consortium? In the Colorado Electronic Community College, for example, multiple institutions provide courses within an established degree program structure and agree to cross-list or transfer all courses. Students can earn a degree online from any of the participating colleges. Consortia like this work because common course and degree program content has been mandated at the state level, through the legislature or through the state higher education executive body, or because the participating institutions have agreed on the curriculum.

These consortia appear to challenge the traditional quality assurance model because no individual campus appears to be completely in control of its curriculum. The question of accountability becomes terribly important. Noting that none of the virtual university consortia offers degrees, symposium participants agreed that the institution is still the responsible party in these new arrangements, since it remains the degree-granting authority. Even though the teaching and the student support may be distributed among the participants, one institution will ultimately offer the degree, and the faculty of that institution are still responsible for overseeing the curriculum for their degrees.

The heart of the issue of how quality standards should be applied can be summarized as follows. How do we maintain academic integrity when the educational process is made up of many pieces? If we are worried about integrity, one entity needs to take responsibility for quality assurance and that should be the institution. If we build a new home and the plumber messes up, we hold the general contractor responsible. Similarly, in higher education, the focus should be on the entity that hires the third-party provider. For an institution to maintain its integrity, everything endorsed by the institution becomes the institution’s responsibility.

Other pressures are also undermining the institution’s control of the curriculum. Licensing authorities and accrediting agencies have long assumed that as long as institutions require students to get passing grades in a certain number of general education and major courses, those institutions have "standards." Distributed learning programs present the obvious question about whether adding up Carnegie units still suffices as a set of standards.

Increasingly, students are "going to college" in many different ways, cobbling together courses from multiple providers. The distributed learning environment merely escalates a well-established trend. Although institutional oversight of the curriculum at the degree program level remains in place, clearly individual courses do not receive the same level of scrutiny and are transferred, to some extent, on faith. Transfer courses are reviewed by the registrar’s office, not by the faculty. The institution exercises very little true oversight. Holding single institutions responsible for the quality of a college degree that is assembled from four or five different institutions is less and less possible. How do we ensure some sense of integrity for the whole when the degree is becoming a fairly disjointed sum of the parts? If students are the ones who "bundle" offerings from multiple providers, who is the controlling authority for quality assurance?

Finally, new configurations raise the question about whether there should be different standards for different missions rather than common standards across all institutions and organizations. In the context of the diversity of approaches to distributed learning environments, the variety of organizations offering learning opportunities, the proliferation of versions of what a baccalaureate degree means, the blurring of boundaries between education and training, and the rising importance of individual certification programs, some people suggest that we should be moving toward different standards for different missions.

4. Are these principles any different from principles of good practice in on-campus programs? If so, in what ways?

Several of the symposium participants observed that the IHEP list does not look much different from a list of principles of good practice for on-campus learning. The characteristics of a good face-to-face course, for example, are the same as those of a high-quality distance-learning course. Student support, faculty support, reliable infrastructures, effective evaluation—all are required to ensure a high-quality learning environment whether on or off campus.

Current quality assurance practices, which consist largely of measuring inputs, occur at three levels. First, college faculty, both individually and collectively, are the primary ensurers of quality. They establish goals for the learning experience (syllabus), manage the process to enable students to acquire the learning (delivery), and evaluate the results of that process by making qualitative judgments about each student’s learning (grades). College faculty ensure quality individually at the course level and collectively at the program level. This is true in both traditional and distance learning settings.

Next, institutions are the secondary ensurers of quality, overseeing these departmental processes. They determine—through hiring practices, credential examinations and personnel reviews—that the faculty are qualified to teach particular subjects. Institutions also ask whether all programs and departments carry out appropriate quality reviews. Again, this is true in both traditional and distance settings.

Finally, external quality assurance organizations (e.g., regional accreditors, state agencies, specialized accreditors) are the tertiary ensurers of quality, by overseeing these institutional processes. They ask whether the other two levels of quality assurance work in practice. Do institutions carry out appropriate quality reviews? Again, this is true in both traditional and distance settings.

Should we have different standards for distance learning? Should we have higher standards for distance learning? If we follow our "academic common sense," the answer seems clear. The growth of distance learning is raising questions about existing quality assurance processes because it challenges the assumption that judgments can be based purely on input measures, and it serves as a further impetus to move to outcomes-based quality assurance. As one symposium participant noted, every new act of evaluation highlights insufficiencies in our old ways of operating.

Although certain aspects of our current quality assurance practices may be inadequate, most—if not all—of the ongoing concerns about these practices are not related to distance education. To establish a double standard that looks the other way when classroom lecturers limit the level of student interaction to roll calls while requiring distance educators to achieve a high level of online interaction would be unfair. Furthermore, since the distinction between on- and off-campus learning is blurring and will continue to blur, our "academic common sense" would suggest that if new forms of quality assurance are needed, they are needed for all aspects of the educational experience, not just for distance learning.

5. Are these statements sufficiently consumer-oriented?

What do consumers—students, parents, employers, and others—want to know about quality? First, they want to know that the institution, the program, or the course is "as good as" others, that each conforms to "generally accepted practice" in the profession, and that each meets minimal or threshold levels of quality. For this level of quality assurance, current practices would appear to be adequate if the focus is on student learning and not on assumptions about learning (e.g., assuming that a robust governance process leads to high-quality student learning). Second, consumers want to know that the principles of good practice exemplified by the IHEP list are, in fact, being practiced. Is this a list of practices that all are willing to salute but that they carry out unevenly, or are these principles embodied in the day-to-day life of the institution?

More important, consumers want comparative information. They want to know how to differentiate between the hundreds, indeed thousands, of possibilities available to them. By analogy, a prospective buyer needs to know that a car runs, but what the buyer really wants to know is whether it runs better than the others. That’s the common-sense definition of quality assurance.

Our present quality assurance processes have been created by professionals for professionals. This is understandable, especially when many believe that students cannot make judgments about what constitutes high-quality education because they have not been trained to develop appropriate criteria. Many students do not know what their educational objectives are or why they are taking certain courses. In many instances, they have made certain choices simply because they needed credits and a particular course was available at a particular time. Since students do not have enough of a basis to make judgments, so the reasoning goes, "we" have to make the judgments for them. But when we make those judgments, we do it according to our rules, not according to what students may need or want. The best example of this phenomenon is, of course, the utter unwillingness of those in higher education to provide qualitative rankings that compare institutions and offerings.

As long as higher education has been placebound, students have had a limited number of choices available to them. The virtual world, however, opens up unlimited possibilities for collegiate study. Knowing that hundreds of institutions follow general principles of good practice like those on the IHEP list will not help students make a wise choice.

CONCLUSION

Quality assurance in U.S. higher education targets learning at three levels: (1) the institution, (2) the program or major, and (3) the course. Regional accreditation focuses on institutional quality assurance, emphasizing capacity and resources. Accreditation ensures that the internal processes that are presumed to lead to quality outcomes (e.g., qualified faculty and staff, adequate resources, curriculum oversight by faculty) are in place.

Whereas capacity and process measures appear to be both important and reasonable at the institutional or degree level, they do not appear to be sufficient at the program level. Hence specialized and professional program accreditation has arisen to provide greater specificity and differentiation among accredited institutions. Specialized accreditation organizations such as the International Association for Management Education, the Accreditation Board for Engineering and Technology, and the National League for Nursing Accrediting Commission focus on the major, with a heavy emphasis on disciplinary/professional peer review by colleagues in the field. This level of review involves greater specificity than the first.

Currently, our primary sources of information about quality assurance and our regulatory frameworks target institutional and program levels. For those students seeking a degree via distributed learning, traditional measures of quality assurance at the institutional and program level may indeed be sufficient. In those online programs that offer complete degrees, students must be admitted to the program first. In such programs, there is an attempt to integrate and apply campus-based knowledge to online course offerings, infusing that knowledge base into the online environment. For example, the Sloan ALN Consortium and Illinois Online, both of which are program-oriented, include information on course completion, number of graduates, class size, program costs, articulation agreements, and so on.

At the program level, it seems clear that students will continue to make choices based on the reputation of the institution. People choose institutions because of the environment, and online environments become important to people for the same reason. The same kind of prestige factor that we associate with traditional campuses will develop in the online environment because of the quality of the community: the students, the faculty, and the staff. As long as most students take most of their courses at one institution, institutional and program quality assurance processes appear to be sufficient.

Yet in a distributed learning environment, where students face many choices, still greater differentiation is required. What is missing is a process of quality assurance aimed at the course level. The lack of evaluation at the course level is particularly critical as students continue to mix and match courses from multiple institutions.

Both regional and specialized accreditors are generally hesitant to look at course quality, a primary point of interest for consumers. There are indeed practical problems—primarily insufficient resources—in implementing these finer levels of quality assurance. If we agree that the course needs to be added as a unit of analysis, how do we construct a quality assurance process that is doable? In part two of this monograph, we turn to a consideration of an alternative that can complement our traditional quality assurance processes, one that focuses on quality assurance from the student’s perspective.

Part 2: Quality from the Consumer’s Perspective

THE NATURE OF THE PROBLEM

What information do consumers need in order to make intelligent choices among the bewildering array of new and unfamiliar options available in the distributed learning environment? In the context of the related shifts toward privatization and the entry of new, for-profit providers into the education and training arena, the issues of consumer information and consumer protection take on even greater importance.

Do consumers approach the issue of quality assurance in the same way as do providers? Participants at the symposium were asked to undertake the following exercise to illustrate the problem from the consumer’s point of view:

Assume that you are a student looking for the "best" marketing course that is available online—a course that you can afford and that you can transfer to your home institution. What would you want to know?

Three well-regarded Web sites, which restrict their course listings to regionally accredited higher education institutions, were suggested as sources; the results of a search using one of them, the DistanceLearn Database, follow.

THE RESULTS

The output of a search for marketing courses using the DistanceLearn Database—the largest of its kind, with more than 18,000 courses currently listed—is reproduced in Figure 2. The search yielded the proverbial "firehose of information." Approximately 240 undergraduate courses are listed, a deluge of data that most consumers would find daunting to sort through. One symposium participant noted, "I am lost in the page-upon-page, course-upon-course list." This exercise strongly suggests that regional accreditation may be a necessary but not sufficient condition to determine quality from the student’s point of view.

In addition to the quantity problem, symposium participants identified a number of other deficiencies (a sketch of a more consumer-oriented listing record follows the list):

  1. Courses are listed by institution. If a student is looking for an introductory marketing course rather than a specific marketing course (e.g., "Marketing on the Web"), sorting by topic rather than by institution would be more useful.
  2. Courses are listed by course number. Even to students at the home campus, identifiers like ECO 221 and BUS104 have little meaning.
  3. There is no differentiation regarding enrollment requirements. Does a student need to matriculate at the institution or in the specific degree program (and can the student get in?) in order to enroll in the course, or can a student enroll regardless of status? To answer these questions, one must look at individual course descriptions.
  4. Do courses require face-to-face meetings? If a student can’t travel to Boise, Idaho, and the course requires on-campus examinations, he or she would want to eliminate that course as an option. Again, one must look at individual course descriptions to find this information, and frequently such information is missing.
  5. Prerequisite requirements are unclear. Some courses, for example, require "junior standing in business." Does that mean this standing is required at the student’s home institution or at the offering institution? If the latter, see the enrollment requirements noted in #3 above.
  6. Often, the contact information provided does not list someone who can answer students’ questions directly. At one fully online institution, the president is listed as the contact. In many other cases, the admissions office is listed.
  7. Course requirements are not explicit. The database includes a category, "Course includes." Its purpose is to allow institutions to list assignments and exams so that prospective students can gain a sense of what kind and how much work is involved in the course. All too frequently in this category, one finds the response, "no data given."
  8. There is no information about the track record of the course. What is the DFW (drop, failure, withdrawal) rate? Were students successful in subsequent courses?
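
To make these gaps concrete, the sketch below (in Python) shows the kind of structured, consumer-oriented course record that would answer the questions above. It is purely illustrative: the field names (for example, requires_campus_visits, dfw_rate) are assumptions introduced here, not fields of the DistanceLearn Database or any other existing listing service.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CourseListing:
    """A hypothetical consumer-oriented record for an online course listing."""
    institution: str
    title: str                       # plain-language title, not just an identifier like "BUS 104"
    subject: str                     # e.g., "Introductory Marketing", so listings can be sorted by topic
    open_enrollment: bool            # can a non-matriculated student enroll?
    requires_campus_visits: bool     # are face-to-face meetings or proctored on-campus exams required?
    prerequisites: list[str] = field(default_factory=list)   # stated in plain language, with the institution at which they apply
    contact_person: Optional[str] = None    # someone who can actually answer a prospective student's questions
    workload_summary: Optional[str] = None  # assignments and exams, so students can gauge the effort involved
    dfw_rate: Optional[float] = None        # drop/failure/withdrawal rate, if the institution reports it

def listing_is_usable(course: CourseListing) -> bool:
    """A listing helps a prospective student only if the basics are actually
    filled in rather than left as 'no data given'."""
    return course.contact_person is not None and course.workload_summary is not None
```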

As one symposium participant summed it up, the single biggest problem of these Web sites is not their complexity but their inadequacy. Most course listings indicate that the institution has the information the student needs, but the student would need to call someone in order to find it.

In addition to the lack of information provided, the Web sites suffer from two other important problems from the perspective of quality assurance: (1) no information about the quality of the courses is provided (e.g., do the institutions use the IHEP principles of good practice?); and (2) no comparative information is provided (how would a student know which is the best marketing course for meeting his or her needs?). The impact of the lack of qualitative information is not limited to students; other stakeholders would like to be able to make judgments as well. Employers, for example, want to be able to sort among various offerings in order to make recommendations to their employees. From the consumer’s point of view, we have a long way to go.

Satisficing

When approaching the issue of quality assurance from a consumer decision-making point of view, one is struck by the relative nature of the words good and quality. Classical decision-making theory assumes that the consumer is "all-knowing," with perfect knowledge of all options. As people begin to make choices about their own resource allocations (time, money, energy, or other resources), they begin to gather information about how to make a more "informed" decision. Because it is not realistically possible to gather "all" information (it would use too much time or energy, and people have to make choices about their resources here also), consumers engage in a process of what consumer economists call "satisficing"—finding a satisfactory solution while recognizing there may be more than one solution. This differs from finding the optimum solution.

Consequently, notions of "quality" include a range of preferences that may be hierarchical or ordered; the ordering or hierarchy will vary depending on the situation and the resource mix of the individual making the choice. Applying this concept to courses suggests that finding the best marketing course is not the goal. Finding a satisfactory course that meets one’s preferences is a more realistic goal.
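
To illustrate the distinction, here is a minimal sketch in Python. The courses, costs, ratings, and thresholds are invented for illustration only; the point is that the satisficing consumer stops at the first acceptable option rather than exhaustively searching for the single best one.

```python
# Each option: (course_name, cost_in_dollars, transfers_to_home_institution, rating_1_to_5)
# The data are hypothetical.
courses = [
    ("Marketing 101 at College A", 450, True, 3.8),
    ("Marketing on the Web at College B", 900, False, 4.9),
    ("Principles of Marketing at College C", 500, True, 4.1),
]

def optimize(options):
    """The textbook 'all-knowing' consumer: examine every option and pick the highest rated."""
    return max(options, key=lambda course: course[3])

def satisfice(options, max_cost=600, min_rating=3.5):
    """A satisficing consumer: accept the first option that is affordable,
    transferable, and 'good enough' -- and then stop searching."""
    for option in options:
        name, cost, transfers, rating = option
        if cost <= max_cost and transfers and rating >= min_rating:
            return option
    return None  # no satisfactory option found within these preferences

print(optimize(courses))   # the single 'best' course by rating, regardless of cost or transferability
print(satisfice(courses))  # a satisfactory course, found without comparing every alternative
```

The satisficing search returns a different course from the optimizing one, and it would return a different course again for a consumer with a different resource mix (a larger budget, say), which is exactly the relativity of "quality" described above.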

The need for tools

After viewing the output of this exercise, one symposium participant asked whether there are tools that can be provided to potential students to help them satisfice—help them assess whether the subject matter, the content, the delivery, the interaction with faculty, and so on will best serve their needs. Conversely, are there tools that would allow institutions to measure the effectiveness of their own courses and programs as well as those they might like to import from other institutions and organizations?

TECHNOLOGY: THE CAUSE AND THE SOLUTION

If technology is the "cause" of the problem—creating a bewildering array of online course choices—perhaps technology can contribute to the solution. Symposium participants next considered three popular commercial Web sites to see if their approaches might suggest ways to solve the problem of undifferentiated information overload. Each of these sites includes sophisticated software that enables multiple parties—including consumers, providers, and experts—to submit and review data about products, services, and transactions. What follows is a brief description of the key features of each Web site. (If you are unfamiliar with them, you may want to spend some time exploring each site.)

Amazon.com (http://www.amazon.com)

Amazon.com is well known as an online bookstore but has now expanded to offer many other products. An important attribute of the site is that it allows consumers to gain qualitative information about the products offered.

Using books as an example, visitors to the site have the following choices:

  • I have read this book, and I want to review it.
  • I am the author, and I want to comment on my book.
  • I am the publisher, and I want to comment on this book.
  • Correct errors and omissions in this listing.

Amazon.com offers narrative reviews plus a five-star rating system. Two types of reviews are presented: editorial (expert) reviews (e.g., published book reviews) and customer (consumer) reviews. Using the book The Perfect Storm as an example, as of February 7, 2001, about thirty expert reviews and 719 consumer reviews of this book were posted. (At the time of the symposium, 616 consumers had reviewed it.) The average customer rating for The Perfect Storm is four stars. One can read the full text of each review or can view the star system to gain a quick summary of customer responses.

Any visitor can rate the book on the five-star system. According to MovieLens, a University of Minnesota collaborative filtering Web site, several studies have shown that 1-to-5 consumer rating systems correspond with the systems used by "professional" critics. Providing additional rating intervals (for example, 1-to-10 or half-star ratings) does not improve the accuracy of the results.

Customer reviews are also ranked. Each time you read a review on the site, you are asked whether the information was helpful or not, and your vote is tabulated. The customers who write the most helpful reviews are deemed "Top Reviewers." The icon that appears next to some reviewers’ names is an at-a-glance way to see how helpful a reviewer is. The lower the number on the icon, the more helpful votes the reviewer has received.
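To make the tabulation concrete, here is a minimal sketch in Python, assuming a simple list of reviews; the reviewer names, vote counts, and field names are illustrative inventions, not Amazon.com's actual data model or software.

    # Hypothetical sketch of a five-star average and a helpfulness-based reviewer ranking.
    # Illustrative only; this is not Amazon.com's code or data model.
    from collections import defaultdict

    reviews = [
        {"reviewer": "reader_a", "stars": 5, "helpful_votes": 12},
        {"reviewer": "reader_b", "stars": 3, "helpful_votes": 40},
        {"reviewer": "reader_c", "stars": 4, "helpful_votes": 7},
    ]

    # Average customer rating, rounded to the nearest star for the summary display.
    average_stars = round(sum(r["stars"] for r in reviews) / len(reviews))

    # Rank reviewers by the total number of "this was helpful" votes they have received.
    votes_by_reviewer = defaultdict(int)
    for r in reviews:
        votes_by_reviewer[r["reviewer"]] += r["helpful_votes"]
    top_reviewers = sorted(votes_by_reviewer, key=votes_by_reviewer.get, reverse=True)

    print(average_stars)       # 4
    print(top_reviewers[0])    # reader_b, the reviewer with the most helpful votes

The point of the sketch is simply that both the summary star rating and the "Top Reviewer" ranking are straightforward tabulations of consumer input; no survey research is required.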

 

eBay (http://www.ebay.com)

eBay is "the world’s first, biggest and best person-to-person online trading community. It’s your place to find the stuff you want, to sell the stuff you have and to make a few friends while you’re at it." eBay offers qualitative information about the trading process through its Feedback Forum. This forum allows you to rate both the buyer and the seller, a process that produces a "Feedback Profile."

Every eBay user has a Feedback Profile consisting of comments from other traders—an official "reputation." If you are a buyer, checking a seller’s Feedback Profile before you make a bid is one of the smartest and safest moves you can make. This Feedback Profile answers many questions about how a seller does business. Is the seller highly recommended by other buyers? Does he or she sell quality merchandise?

If you are a seller, reviewing Feedback Profiles of buyers can be helpful too. You can find out if a buyer is known as a great customer who provides fast payment. You can also see what bidders are looking for in a good seller. By exercising good business practices, you will earn positive testimonials within the eBay community. The more positive feedback you receive, the more stellar your reputation becomes!

Narrative comments are about one line in length. Sellers can also respond to negative comments.

Next to a member’s user ID, you will find a number in parentheses. This number is his or her Feedback Rating. For example, "Skippy (125)" means that a member’s user ID is Skippy and that the member has received 125 feedback comments from other eBay members. You can leave multiple comments in a member’s Feedback Profile, but they’ll count only once. This makes the system fair. No one person can "tip the scales" in either the positive or the negative feedback direction.

Members receive +1 point for each positive comment, 0 points for each neutral comment, and -1 point for each negative comment. Stars are awarded for achieving a particular Feedback Profile (a brief computational sketch follows the list below):

  • A yellow star represents a Feedback Profile of 10 to 99.
  • A turquoise star represents a Feedback Profile of 100 to 499.
  • A purple star represents a Feedback Profile of 500 to 999.
  • A red star represents a Feedback Profile of 1,000 to 9,999.
  • A shooting star represents a Feedback Profile of 10,000 or higher.
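To illustrate the arithmetic just described, here is a minimal sketch in Python, assuming a list of (member, comment score) pairs; the member names and data are invented for illustration, and this is not eBay's actual software.

    # Hypothetical sketch of an eBay-style Feedback Rating; illustrative only.
    def feedback_rating(comments):
        """comments: (member_id, score) pairs, where score is +1, 0, or -1.
        Each member counts only once, so repeat comments cannot tip the scales."""
        one_per_member = {}
        for member, score in comments:
            one_per_member.setdefault(member, score)   # keep one comment per member
        return sum(one_per_member.values())

    def star_for(rating):
        # Star tiers as listed above.
        if rating >= 10000: return "shooting star"
        if rating >= 1000:  return "red star"
        if rating >= 500:   return "purple star"
        if rating >= 100:   return "turquoise star"
        if rating >= 10:    return "yellow star"
        return "no star yet"

    rating = feedback_rating([("buyer1", +1), ("buyer2", +1), ("buyer2", +1), ("buyer3", -1)])
    print(rating, star_for(rating))   # 1 no star yet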

Does a high Feedback Rating mean that an eBay member has a great reputation? Not necessarily. In most cases, a high Feedback Rating is good news, but a member’s Feedback Profile should always be checked for any negative remarks. It’s best not to judge a user on his or her Feedback Rating alone.

 

Zagat.com (http://www.zagat.com)

Zagat.com contains the "most trusted and authoritative dining information online" and "delivers the dish on more than 20,000 restaurants, bistros, cafes, coffeehouses, diners, hotels and takeout joints" in forty-four cities worldwide. Zagat.com offers succinct and accurate feedback on the entire dining experience, including surveyors’ comments and a thirty-point food, decor, and service rating, plus cost estimates compiled from millions of annual surveyor reviews.

Anyone can rate a restaurant on the quality of its food, decor, and service by choosing a number from 0 to 3, as follows:

0 = fair to poor
1 = good
2 = very good
3 = excellent

To produce the familiar Zagat 0-30 ratings, each reviewer's 0-3 ratings are averaged with those of other voters and multiplied by 10 to eliminate the decimal point. A reviewer can also add a descriptive comment of roughly sixty-five words or fewer.
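The conversion is simple enough to express in a few lines of Python; this is only a sketch of the arithmetic described above, not Zagat's software, and the sample votes are invented.

    # Hypothetical sketch of the Zagat-style 0-3 to 0-30 conversion; illustrative only.
    def zagat_score(votes):
        """votes: individual ratings on the 0-to-3 scale (0 = fair to poor ... 3 = excellent)."""
        return round(sum(votes) / len(votes) * 10)   # average, then scale to the 0-30 figure

    print(zagat_score([3, 2, 3, 3, 2]))   # 26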

After selecting a particular city, one can search for restaurants by entering search criteria:

  • A minimum rating for food
  • A minimum rating for decor
  • A minimum rating for service
  • Maximum cost
  • Neighborhood
  • Cuisine
  • Special feature (e.g., open on Sunday, credit cards accepted, outstanding views, romantic spots, meet for a drink)

One can also display restaurants according to the following categories:

  • Top Food by Cuisine
  • Additional Good Values
  • Best Buys
  • Most Popular
  • Top Decor
  • Top Food
  • Top Outdoor
  • Top Romantic
  • Top Rooms
  • Top Service
  • Top Views

System characteristics

What are some of the characteristics of these systems?

  • Preferences. Each site offers a way to sort through all of the listings and display the output according to one’s preferences (e.g., "I’m looking for a Miami restaurant that is open on Sunday and whose maximum meal price is $40").
  • Consumer input. Each site offers a way for the consumer to express his or her views—as a free-form narrative (Amazon), as a one-line narrative (eBay and Zagat), and/or through a ranking system (Amazon and Zagat).
  • Expert input. One site offers a way for the expert to express his or her views (Amazon).
  • Ranking. Each site offers a simple way for the user to see a summary of consumer reviews—as an aggregate number of positives (eBay), as a five-point ranking system average (Amazon), or as a ranking system that combines multiple factors (Zagat).

In all cases, the software enables easy input and tabulation of the data. No research studies or surveys need to be conducted.

 

Analogies with higher education course listings

To draw analogies from these approaches—and from other consumer-oriented publications and organizations—for higher education course listings, we need to distinguish between what might be called "expert products" and "polling products."

If you want a recommendation about which product to buy, you might want to consult an expert in the field. That’s a function that magazines like Car and Driver and Sound & Vision perform. What characterizes "expert products" such as cars, boats, appliances, and electronics? First, experts can evaluate these items because there are relatively few products. Second, price is a factor in the buying decision, reducing the "universe" of items even further.

Another approach is to poll users or consumers of a particular product or service and tabulate their opinions. That’s what the Zagat guides and consumer-ratings services like J.D. Power and Associates do, rating "polling products" such as restaurants, hotels, and airlines. Consumer input, especially when tabulated according to specific factors, can produce valuable information.

For other models, many in higher education also look to publications like Consumer Reports or Good Housekeeping, which test every product before giving a seal of approval. These processes, however, are not analogous to the higher education situation, since no group of experts could possibly evaluate the ever-growing number of online courses in hundreds of subject areas. Any "expert" can use and evaluate twenty toasters; no expert can enroll in and evaluate twenty marketing courses.

CHARACTERISTICS OF A STUDENT SYSTEM

Symposium participants then discussed the following question: If we wanted to build on the ideas offered by these three dot-coms to construct a system suited to the needs of higher education’s students, what would be its characteristics?

First, participants agreed that such a system should focus on the course level. It is very difficult for students to "compare" institutions and programs; after all, they typically receive only one degree. Furthermore, established accreditation processes do a good job of ensuring quality at the institutional level, and established specialized accreditors do a good job of ensuring quality at the program level. The major quality assurance gap in distributed learning exists at the course level. Students can easily evaluate and compare their experiences with different courses.

Second, symposium participants agreed that the system should include the following features:

  • Preferences: a way to sort all of the listings according to one’s preferences (e.g., "I am looking for a marketing course that I can enroll in as a non-matriculated student and whose maximum cost is $200")
  • Consumer input: a way for students to express their views, both in narrative form and as part of a rating system
  • Ranking: a simple way to see a summary of student reviews as a ranking based on a combination of factors
  • Expert input: a way to include experts’ views

Preferences

Technology allows us to find out what is important to consumers (their preferences) and then to customize the output displayed as a result. A student does not need to see all marketing courses that are available online but only those that meet his or her preferences.

All online course databases should allow the user to display the output according to the following choices (a brief filtering sketch follows the list below):

  • Subject matter: Is the course categorized by academic area?
  • Level: Is the course offered at the graduate, undergraduate lower division, or undergraduate upper division level?
  • Delivery method or media: Is the course offered via videotape, World Wide Web, broadcast television, or some other method?
  • Cost: What are the tuition and fees for the course?
  • Prerequisites required: Are prior courses required for entrance to the course?
  • Campus visits required: Must one go to campus for any aspect of the course?

  • Enrollment requirements: Does one have to be a matriculated student at the institution offering the course?
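As a minimal sketch of how such preference-based display might work, the following Python fragment filters a toy course list; the field names, sample courses, and the matches() helper are illustrative assumptions, not an existing database schema or product.

    # Hypothetical sketch of preference-based filtering of an online course listing.
    courses = [
        {"subject": "marketing", "level": "undergraduate upper division", "medium": "Web",
         "cost": 180, "campus_visits": False, "open_enrollment": True},
        {"subject": "marketing", "level": "graduate", "medium": "videotape",
         "cost": 450, "campus_visits": True, "open_enrollment": False},
    ]

    def matches(course, preferences, max_cost=None):
        """A course matches if it meets every stated preference and fits the budget."""
        fits_budget = max_cost is None or course["cost"] <= max_cost
        return fits_budget and all(course.get(key) == value for key, value in preferences.items())

    # "A marketing course I can enroll in as a non-matriculated student, for $200 or less."
    hits = [c for c in courses if matches(c, {"subject": "marketing", "open_enrollment": True},
                                          max_cost=200)]
    print(hits)   # only the first course qualifies

The student sees only the courses that meet his or her preferences rather than every marketing course available online.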

Consumer input

It is clear that consumers want rankings in order to differentiate online course offerings. Many higher education providers also support that idea, but two primary objections typically arise when members of our community think about rankings. First, there seems to be little consensus about the factors that should be used to create those rankings. Second, many believe that since students do not have enough of a basis to make judgments, "we" (the experts) should make the judgments for them. We also object to third-party rankings, like those of U.S. News & World Report, because we believe that the wrong factors are used to generate these rankings.

Clearly, no group of experts can evaluate the hundreds or thousands of available courses, despite higher education’s preference for peer (expert) evaluation. Furthermore, experts cannot conduct comparative evaluations because they have not taken the courses. The most that experts can do is to evaluate the course "content" in the form of the syllabus and learning activities. Because expert evaluation of every course is a logistical impossibility, we fall back on assessing the capacity of the institution, the "institutional surround," to deliver the course. Does the institution have demonstrated ability to offer an online or distance learning course? Can the institution provide evidence that it is able to provide the services needed? Evidence of quality is typically demonstrated by reports on process (how the institution conducts its business—e.g., the teams of people involved in course development) and resources.

A superior way of demonstrating capacity would be to measure the results. Rather than assessing the capacity of the campus bookstore to deliver materials to distance students, for example, why not ask the students if they received their materials in a timely fashion? Including students’ responses to a series of structured questions as part of each course allows us to find out what is really happening in the course rather than assuming what is happening.

Ranking

In addition, rather than asking students whether or not they "liked" the course, we should ask them specific, prestructured questions. These questions should be designed to take into account the professionals’ perspective—those things that experts believe are necessary to ensure high quality. These questions would "operationalize" agreed-upon principles of good practice. Responses to these questions would generate an overall "satisfaction index" similar to the star rating systems used on the dot-com sites. Students could also have the opportunity to add a narrative comment if they wished.

Participants at the symposium spent a good deal of time generating a list of questions that students should be asked. In our earlier discussion of the IHEP benchmarks, we saw that there is indeed a high degree of consensus in higher education regarding principles of good practice in distance learning, and the symposium participants’ questions reflected that consensus. Figure 3 lists the questions that could be posed to students. The questions are organized according to the IHEP categories, though two IHEP categories ("Faculty Support" and "Evaluation and Assessment") were excluded, and two additional categories of interest to students ("Value" and "Flexibility and Convenience") have been added. Like visitors to the dot-com sites, students would respond to each question using a 1-to-5 scale.

Symposium participants also discussed the following possible problems regarding the course-ranking questions for students.

Are these too many questions? This list includes twenty-four questions. Some participants noted that in surveys, people will not answer more than a threshold number of questions, probably about twenty. In this instance, the questions are organized and displayed to illustrate their correspondence with the IHEP listing, but clearly they could be reworked to arrive at the optimal number and organization.

How should the results be displayed? In keeping with the goal of establishing a "satisfaction index," there are several possible ways to display the output (a brief computational sketch follows the list below):

  • A star system: use x number of points to equal y number of stars.
  • A weighted star system: weight factors differently according to their relative importance, and calculate a "score."
  • Subcategory scores: display scores for each subcategory (perhaps course structure, course delivery, and student support), similar to the Zagat subcategories of "food, decor, and service."
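As a minimal sketch of the second and third display options, the following Python fragment computes a weighted overall score and Zagat-like subcategory scores from 1-to-5 responses; the subcategories, weights, and sample responses are illustrative assumptions, not figures endorsed by the symposium.

    # Hypothetical sketch of a weighted "satisfaction index"; weights and data are illustrative.
    # Each student answers prestructured questions on a 1-to-5 scale, grouped by subcategory.
    responses = {
        "course structure": [5, 4, 4],
        "course delivery":  [4, 3, 5],
        "student support":  [2, 3, 3],
    }

    # Relative importance of each subcategory (an assumption made only for this example).
    weights = {"course structure": 0.4, "course delivery": 0.4, "student support": 0.2}

    subcategory_scores = {category: sum(scores) / len(scores)
                          for category, scores in responses.items()}

    overall = sum(weights[c] * subcategory_scores[c] for c in subcategory_scores)
    stars = round(overall)   # collapse the weighted score back to a 1-to-5 star display

    print(subcategory_scores)          # per-subcategory averages, like Zagat's food/decor/service
    print(round(overall, 2), stars)    # 3.87 4

How to weight the factors, and which factors to include, is exactly the kind of question on which the higher education community would first need to reach consensus.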

How valid are student ratings? One symposium participant pointed out that a large body of research has consistently shown that student ratings and expert ratings coincide. Furthermore, the literature on the validity of student self-ratings says that self-reports are at their best when (a) the questions are very clear, (b) they concern behaviors (e.g., "I spent x hours doing this") and attitudes (e.g., "I was treated well and got my complaints resolved"), and (c) they require little inference on the part of the respondent (avoiding questions such as "this course would be good for a person like X").

What about courses that are not always taught by the same faculty member? Several participants commented that online courses are so linked with the particular instructor that course rankings would be instructor-specific and could not be generalized to cover subsequent offerings of the course. Others replied that these questions are targeted at institutional standards that should be in place for all faculty members. If a course receives poor ratings, this is an indicator that the institution needs to take corrective action. Surely the institution is responsible if it offers a poorly taught online course? This methodology forces institutions to look at how courses are taught throughout the institution and raises the bar for quality control. It also benefits those institutions that maintain a high level of consistency in course development, design, and delivery.

Expert input

Rather than relying on experts to assess capacity—which would be unnecessary in this system, since students would be testifying to actual results—we need expert input to focus on providing evidence of effectiveness, especially evidence of learning outcomes.

Input from two kinds of experts is desirable: those who are external to the institution offering the course and those who are from inside the institution. Those external to the institution include the following:

  • Employers, who can supply data about student success on the job
  • Graduate and professional schools, which can supply data about student success in future study
  • Formal quality assurance organizations (regional accreditors, specialized accreditors, state agencies), which can collect data on learning outcomes
  • Consumer protection organizations, which can conduct independent studies regarding the quality of student experiences in online courses

Data provided from inside the institution could include the following:

  • Completion rates, grade distributions, class size
  • Reports from instructors of follow-on courses (success rates in subsequent related courses)
  • Pass rates on standardized examinations
  • Studies by external teams of course quality
  • Support ratios, delivery times
  • Longitudinal studies
  • Explanations for poor performance (e.g., we replaced our old server with a new one that works)

IMPLEMENTATION ISSUES

Such a consumer-driven system could be implemented on multiple levels, including on an institution’s Web site, on a consortial virtual-campus Web site, on a newly created nonprofit or commercial Web site, and so on. Regional accrediting agencies and specialized accrediting organizations could decide to require student input in a standard format as part of their ongoing quality assurance processes. Individual institutions could use this system, or parts of this system, to develop a better feedback model for evaluating the quality of their own offerings. Although it is clear that such a system could have multiple applications, most of the symposium participants’ attention focused on the need to create some kind of mechanism—external to institutions—that would offer comparative quality information. As one participant said, there is a need to keep the emphasis on what the student wants to know rather than on what the institution wants to know for quality improvement. Symposium participants were genuinely interested in two things: (1) finding ways to change higher education’s quality assurance processes in a fundamental way, and (2) creating a system that both informs the marketplace and improves it.

Participants also recognized that regardless of the purpose for which such a system is developed, it would inevitably be used for other purposes. Although the intended audience may be students, others could use such a system. One participant warned of the potential misuse of this information in a governmental or regulatory context. Because multiple audiences may use such a system, we must be sure that the indices suit the needs of both students and experts.

The just-in-time, embedded evaluation methodology used by the dot-com sites is a potentially powerful device for higher education because the primary reviewers are consumers. Several symposium participants observed that the biggest hurdle to achieving a system that distinguishes legitimately between gradations of quality—rather than creating yet another pass/fail scheme—is gaining acceptance from a higher education system that uniformly detests official qualitative comparisons. For many in higher education, this idea may be too threatening. How can we open ourselves up to different levels of judgment in addition to letting experts determine the definition of good distance education? We need to stimulate these bottom-up models in order to bring in new perspectives. Many symposium participants stressed that if we end up with another pass/fail quality assurance system, it—like current accreditation practice—will be minimalist in nature and thus not accepted by the external community as very useful or contemporary.

In addition, many symposium participants felt strongly that an independent entity—neither the institution nor a government-based agency—would be the best organizer of such a system. As one participant observed, part of Amazon’s credibility is that people are evaluating a product that Amazon has not created. Even though institutions may use part of the methodology—for internal improvement purposes, for example—in all likelihood they would not allow negative comments about their institution to be published. At the same time, those at the symposium generally agreed that the higher education community needs to play a strong role in developing a consumer-based system or someone else—for example, U.S. News & World Report—will.

If an independent entity does decide to develop such a Web-based service, which courses should be included on its Web site? Two different points of view were expressed at the symposium. The first answer is that all courses offered by any accredited institutions should be included. Just as U.S. News & World Report considers all institutions of higher education in its rankings, so should a Web-based service include all online courses. Indeed, this would naturally weed out minor efforts in distance education when institutions neglect to provide good data about their courses or fail to attract student reviews. The second answer is to implement a kind of subscription service that would initially screen or limit the number of courses that would be included, an expanding "club" model. Since course data has to come from the providers, this model would provide an alternative to a market-driven option. Rather than trying to attract all institutions, this service would include those that want to be evaluated and that meet certain eligibility standards such as conformance to the IHEP benchmarks. The goal would be to identify a smaller number of courses from first-class providers, which would also participate in the funding of the operation.

Finally, symposium participants turned to the question of the relationship between student-informed systems and our traditional quality assurance processes. The student-informed course evaluation will eventually reflect on programs and institutions as well, the domain of accreditation. If we agree that the course is an important unit of analysis, what is the role of accrediting agencies in evaluating courses? Some in higher education feel that we do not have good information even about programs and institutions from a consumer perspective. One symposium participant commented that she had purchased every college guide that is published by third parties and had reviewed the information on a dozen accredited institutions. In several cases, the information in the guides was better than our traditional evaluations because their reviews were based on student information. We currently do not use student input in the accrediting process, primarily because we do not know how to do so. This technology-based methodology could be an enormously useful and powerful way to encourage institutions to draw information from students. Accrediting agencies need to spend more time thinking both about how to use student information and about how to disclose more qualitative course, program, and institutional information to assist the public in a more systematic way.

 

Conclusion: Quality Assurance in a Disaggregated World

Before the symposium, one of the participants asked the following questions:

For the purposes of ensuring quality, does it matter whether distance learning (or technology-mediated instruction) is viewed as "just an alternative delivery system" or as "a fundamental change in the higher education enterprise"? There are two points of view.

Those who argue that it is just an alternative mode of delivery claim that we are paying too much attention to distance learning and that it does not represent a significant change for the higher education enterprise. They argue that if distance learning is an alternative delivery system, little change is needed in external quality review as provided through accreditation. Those who argue that it represents an alternative education enterprise claim that it is a fundamental shift in the nature of higher education. They go on to say that this will call for significant modifications of quality review through accreditation.

What is the likely future for the structure of American higher education? Is Peter Drucker right—that the number of traditional institutions will radically diminish? Are others right—that distance learning is overstated and overrated and will not have nearly the impact that Drucker suggests? Whether the number of institutions grows or shrinks, does the advent of distributed learning suggest major change "inside the envelope," or will the envelope itself change?

Some believe that the real fundamental change in higher education resulting from the impact of distance learning will not be on the structure of higher education. Rather, it will be a "stealth change"—sustaining the structure—but fundamentally altering what takes place within the structure (e.g., a radically altered faculty role, a new concept of classroom, etc.). The most likely scenario is that we will emerge with "hybrids": site-based and distance-based learning environments intermingled. If this scenario is correct, it implies the continuation of current quality assurance processes with their focus on institutions, capacity, and processes, with some minor alterations.

Others have a far different view of the future. Elsewhere, Bob Heterick and I have described the creation of what we call a "global learning infrastructure," which consists of far more than education as usual on the Internet. These excerpts from "The Public Policy Implications of a Global Learning Infrastructure" describe what we think the digital future holds for higher education:

Technology enables us to disaggregate the place, the content, the delivery, and judgments about the quality of education. Disaggregation unbundles the instructional process. By separating instruction from assessment, teaching from degree-granting, content development from content delivery, and service from compliance, traditional roles are redefined and new ones emerge.

The Internet enhances choice and challenges regulation. The Internet expands learning opportunities. Distance learning technologies enable learners to access education whenever and wherever they want. Online experiences offer educational opportunities to millions of learners previously constrained by time, location, and other factors.

The Internet lowers the threshold of entry to the higher education marketplace for new commercial and non-profit educational providers by eliminating many barriers. The development of ever more effective electronic modes of delivering education at a distance and the explosive growth of networks will continue to erode the geographic hegemony of higher education and continue to challenge current state regulatory mechanisms. Students will be more likely to select educational institutions based on offerings, convenience, and price than on geography.

Interactive multimedia and other technologies will change how we think about providers and whom we regard as providers. Learning resources that were once only available through education institutions will appear in retail stores in the form of multimedia software and other computer-based courseware. Consumers will be able to purchase learning products independently and learn at their convenience, collectively spending millions of dollars on education each year. This purchasing power will have a tremendous impact on who controls learning.

Education will no longer take place within the silos of individual institutions (or even their virtual equivalents). Instead, education will occur within a dynamic global marketplace of customers and suppliers. With its emphasis on creativity and competition, this marketplace will enable a wide range of players—universities, media, publishers, content specialists, technology companies—to market, sell, and deliver educational services online.

The vision of a global learning infrastructure—a student-centric, virtual, global web of educational services—contrasts with the bricks-and-mortar, campus-centric university of today. It even goes beyond the paradigm of the virtual university, which remains modeled on individual institutions. The global learning infrastructure will encompass a flourishing marketplace of educational services where millions of students interact with a vast array of individual and institutional suppliers delivered via the Internet. It is being developed in phases, but will ultimately cross all institutional, state, and national borders.6

How you view these alternate scenarios of the impact of information technology and the Internet on higher education will directly influence your perspective on many of the quality assurance issues raised in this monograph. What should be clear to all is that the topic of quality assurance in distributed learning environments is a moving target that will require continued attention by all parties concerned about higher education. This monograph has tried to describe the state of the art on this topic, with a view toward suggesting what needs to be done in the immediate future. It is but a small beginning in arriving at the new paradigms that will need to be developed in order to ensure quality in online learning for our future students and for society as a whole.

Notes

  1. Institute for Higher Education Policy (IHEP), "Quality on the Line: Benchmarks for Success in Internet-Based Distance Education," April 2000, p. 7, http://www.ihep.com/quality.pdf (accessed February 2001).
  2. Council of Regional Accrediting Commissions (C-RAC), "Statement of the Regional Accrediting Commissions on the Evaluation of Electronically Offered Degree and Certificate Programs and Guidelines for the Evaluation of Electronically Offered Degree and Certificate Programs," September 2000, p. 1, http://www.wiche.edu/telecom/Guidelines.htm (accessed February 2001).
  3. American Federation of Teachers (AFT), Higher Education Department, "Distance Education: Guidelines for Good Practice," May 2000, p. 5, http://www.aft.org/higher_ed/downloadable/distance.pdf (accessed February 2001).
  4. IHEP, "Quality on the Line," p. 19.
  5. AFT, "Distance Education."
  6. Robert C. Heterick, Jr., James R. Mingle, and Carol A. Twigg, "The Public Policy Implications of a Global Learning Infrastructure," November 1997, http://www.educause.edu/nlii/keydocs/policy.htm (accessed February 2001).

 

Symposium Participants

Philip Altbach
Professor, Educational Administration
Boston College

Michael A. Baer
Senior Vice President, Programs and Analysis
American Council on Education

Steven D. Crow
Executive Director
North Central Association of Colleges and Schools

Thomas M. Duffy
Chief Learning Officer
UNext.com LLC

Judith S. Eaton
President
Council for Higher Education Accreditation

Russ Edgerton
Director, Pew Forum on Undergraduate Learning
The Education Trust

Chere C. Gibson
Professor
University of Wisconsin-Madison

Carolyn Jarmon
Associate Director
Center for Academic Transformation

Sally Johnstone
Director
Western Cooperative for Educational Telecommunications

David Longanecker
Executive Director
WICHE

Jamie P. Merisotis
President
The Institute for Higher Education Policy

Burks Oakley II
Associate Vice President for Academic Affairs
University of Illinois at Urbana-Champaign

Marianne R. Phelps
Director
U.S. Department of Education

Carol A. Twigg
Executive Director
Center for Academic Transformation

Jack M. Wilson
Co-Director, Severino Center for Technological Entrepreneurship
Rensselaer Polytechnic Institute

Ralph A. Wolff
Executive Director
WASC, The Senior College Commission

Virtual Participants

Robert C. Albrecht
Chancellor Emeritus
Western Governors University

George Connick
President
Distance Education Publications

Robert C. Heterick, Jr.
Former President
Educom

Jean Avnet Morse
Executive Director
Middle States Association of Colleges and Schools

RAPPORTEURS

Patricia Bartscherer
Program Manager
Center for Academic Transformation

Susan Oaks
Assistant Professor and Area Coordinator
SUNY Empire State College

 

Quality Assurance for Whom? Providers and Consumers in Today’s Distributed Learning Environment by Carol A. Twigg

© The Pew Learning and Technology Program 2001
Sponsored by a grant from the Pew Charitable Trusts.

Center for Academic Transformation, Rensselaer Polytechnic Institute
Dean’s Suite, Pittsburgh Building
110 8th Street, Troy, NY 12180
518-276-6519 (voice)
518-695-5633 (fax)
