Performance-based approaches to reforming US higher education quality assurance

Given the significant increase in federal funding and the even greater increase in family contributions to higher education over the past decade, as well as total outstanding student debt of $1.2 trillion, it is not surprising that there are increasing demands for regulation to focus more on student and financial outcomes. Concerns about outcomes and efficiency often lead regulators to consider performance-based reforms. Performance-based approaches to regulation rely on measurable proxies of the desired outcomes to evaluate the regulated entity. The evaluations may be made public to encourage consumers to “vote with their feet,” or they may be used by the regulator to distribute rewards and consequences.

This section describes some of the performance-based approaches to regulation and quality assurance that institutions and their regulators have adopted over the past decade. It focuses, in particular, on the obstacles the Obama Administration faced in implementing an outcomes-based college ratings and federal financing system, which are indicative of the larger challenges inherent in using a performance-based quality assurance system for such a heterogeneous sector. At the same time, these efforts have pushed the debate around norms of transparency and coordination in quality assurance, laying the groundwork for potential future reforms that rely on common data and definitions.

Prior to the Obama Administration’s efforts, some institutions began to create voluntary accountability and quality assurance frameworks based largely on outcomes. For example, in 2007, under pressure from federal regulators to provide more transparency about costs and outcomes, a group of higher education institutions belonging to the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU) created their own “Voluntary System of Accountability” (VSA) to enable the public to better understand the structure and impact of these institutions. Through the system, which now has approximately 400 members, each institution contributes a “college portrait” describing college costs, an income-based estimate of the financial aid available to applicants, and data on student outcomes, campus life and experiences, and other information relevant to applicants’ decisions. The college portrait has been moderately useful in providing information to prospective students, but critics argue that the information has not been presented in a user-friendly way, and that the use of data from standardized assessments offered too simplistic a view of learning across institutions.

Similarly, in 2011, the American Association of Community Colleges launched the Voluntary Framework for Accountability (VFA), which defines metrics for community-college-specific outcomes such as term-to-term retention rates, share of students who start in developmental courses, progress towards college level work, and data on transfers. The primary purpose of the data is to help institutions better understand how well they are serving their students, but the system was also designed to provide information to policymakers and the public.

These voluntary efforts have been accompanied by an increasing focus on outcomes in state funding. Historically, states have allocated appropriations to higher education based on student enrollments. But over the past decade, many states have adopted an approach known as “outcomes-based funding” or “performance-based funding” (PBF), which uses formulas that identify particular outcome targets and “rewards” institutions for meeting them. PBF models tie a percentage of state appropriations to specific policy objectives, giving universities a financial incentive to achieve them. Currently, twenty-six states have PBF policies for their publicly funded institutions, and many other states are considering this approach.

Most PBF policies tend to emphasize outcomes such as graduation rates and retention rates, but states include many different objectives in their regulations. In Tennessee, the law rewards institutions that increase graduation rates overall and among selected groups that tend to graduate at lower rates. In Ohio, institutions that produce STEM graduates receive incentive funding. And in Pennsylvania, the law provides incentives for improvements in many areas, including fundraising and enrollment of first-generation students. In general, the incentive is small in relation to universities’ budgets, usually constituting only about one to five percent of the total. As a result, to date, the impact of such approaches has been mixed, and studies have found only small changes in outcomes in states with this approach. In addition, some studies have found that PBF policies produce unintended consequences such as greater restrictions on student admissions, which lead to a decrease in the diversity of student populations.
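The mechanics of such formulas can be sketched in a few lines. The figures, metric names, and weights below are invented for illustration (actual state formulas vary widely); the only detail taken from the text above is that the at-risk share is typically one to five percent of the appropriation.

```python
# Hypothetical illustration of an outcomes-based (PBF) funding formula.
# An institution receives a guaranteed base plus a performance pool that
# is paid out only for the outcome targets it actually meets.

def pbf_allocation(appropriation: float,
                   performance_share: float,
                   targets_met: dict[str, bool],
                   weights: dict[str, float]) -> float:
    """Split an appropriation into a base amount and a weighted
    performance pool. Weights are assumed to sum to 1.0."""
    base = appropriation * (1 - performance_share)
    pool = appropriation * performance_share
    earned = sum(weights[m] for m, met in targets_met.items() if met)
    return base + pool * earned

# A campus with a $100M appropriation and 3% at risk that meets
# two of three weighted targets keeps its base plus 80% of the pool.
total = pbf_allocation(
    100_000_000, 0.03,
    targets_met={"graduation_rate": True, "stem_degrees": True,
                 "first_gen_enrollment": False},
    weights={"graduation_rate": 0.5, "stem_degrees": 0.3,
             "first_gen_enrollment": 0.2},
)
print(round(total))  # 99400000: $97M base plus $2.4M of the $3M pool
```

Because the at-risk share is so small, even missing a target entirely costs this hypothetical campus well under one percent of its budget, which is consistent with the modest behavioral effects the studies cited above report.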

While regional accreditors have historically not used statistical outcome measures or minimum standards for accreditation, most national and programmatic accreditors set threshold requirements for metrics like completion rates, exam pass rates, and employment rates. Regional accreditors recently announced plans to use minimum graduation rates to direct further review of poor-performing institutions. And some have incorporated assessments of more broadly defined competencies into accreditation—though not without pushback. For example, in a 2013 redesign, the Western Association of Schools and Colleges (WASC) incorporated into its standards a set of core competencies in five skill areas. To be accredited, institutions must “describe how the curriculum addresses each of the five core competencies, explain their learning outcomes in relation to those core competencies, and demonstrate, through evidence of student performance, the extent to which those outcomes are achieved.” Originally, WASC planned to have institutions compare their expectations for degree recipients to the Lumina Foundation’s Degree Qualifications Profile. The redesign effort was pared back after leaders at many institutions argued that the use of a common framework would lead to homogenization and increase the burden of compliance.

At the federal level, the Obama Administration undertook several performance-based reforms in the higher education sector. One, which became known as the “gainful employment” rule, was introduced in 2012; it imposes tighter regulations on for-profit institutions, requiring them to demonstrate that students who pursue credentials in their programs secure jobs upon graduation that compensate them for their investment. The rule responded to criticisms that too many students at for-profit schools take on unsustainable debt in exchange for degrees and certificates that carry limited value in the job market. Under the gainful employment regulations, programs whose graduates have annual loan payments greater than 12 percent of total earnings and greater than 30 percent of discretionary earnings in two out of any three consecutive years lose eligibility for federal student aid for a minimum of three years. The regulations also require institutions to provide disclosures to current and prospective students about their programs’ performance on key metrics, like earnings of former students, graduation rates, and debt accumulation of student borrowers.
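The eligibility test just described is mechanical enough to express directly. The sketch below uses only the two thresholds and the two-out-of-three-years trigger from the text above; the poverty-guideline figure and the simplified definition of discretionary earnings are assumptions for illustration, not the regulation’s exact terms.

```python
# Simplified sketch of the gainful-employment debt-to-earnings test.
# Assumption: discretionary earnings are modeled as earnings above
# 150% of a single-person poverty guideline (an illustrative figure).

POVERTY_GUIDELINE = 12_060  # assumed value, for illustration only

def fails_de_test(annual_loan_payment: float, annual_earnings: float) -> bool:
    """A program-year fails when payments exceed BOTH 12% of total
    earnings and 30% of discretionary earnings."""
    discretionary = max(annual_earnings - 1.5 * POVERTY_GUIDELINE, 0)
    over_total = annual_loan_payment > 0.12 * annual_earnings
    over_disc = discretionary == 0 or annual_loan_payment > 0.30 * discretionary
    return over_total and over_disc

def loses_eligibility(yearly_failures: list[bool]) -> bool:
    """Ineligible if the test is failed in two out of any three
    consecutive years."""
    return any(sum(yearly_failures[i:i + 3]) >= 2
               for i in range(len(yearly_failures) - 2))

# $4,000/yr in payments on $25,000 in earnings is 16% of total earnings
# and roughly 58% of discretionary earnings, so the year fails;
# two failing years within a three-year window ends aid eligibility.
print(fails_de_test(4_000, 25_000))            # True
print(loses_eligibility([True, False, True]))  # True
```

Note that both thresholds must be exceeded: a graduate whose payments are under 12 percent of total earnings passes regardless of the discretionary calculation.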

In January 2017, the Department of Education released data showing that more than 800 programs had failed to meet accountability standards for the gainful employment rule. Ninety-eight percent of these programs were offered by for-profit institutions. Because of the restrictions these regulations could place on for-profit institutions, the gainful employment regulations have been the subject of withering dispute between the Education Department and the for-profit sector. The original rules were declared invalid by a federal district court, but so far, the revised rules have withstood legal attack. Their fate under the Trump Administration is far from clear.

The Obama Administration also increased its enforcement activities against institutions in financial distress. Perhaps the most prominent example is the Education Department’s removal of financial aid eligibility from the Corinthian Colleges system. The system, plagued by financial instability resulting from declining enrollments and legal battles over false advertising and unlawful debt collection practices, was forced to close after the Department placed it on “hold” status (preventing it from accessing the federal financial aid system). Strengthened standards for assessing institutional viability led to the closure of several small institutions and greater oversight of other institutions in financial peril, as well. And, in 2016, the Department terminated its recognition of the Accrediting Council for Independent Colleges and Schools, which accredited 245 institutions, most of which were for-profit (including several Corinthian Colleges locations). The decision, prompted by pervasive “compliance problems,” marks the first time that the federal government has officially denied recognition to an established accreditor.

Finally, the Obama Administration tried to promote transparency in the sector by providing more information to students and their families. In August 2013, President Obama announced that he would create a “college rating system” to assess the nation’s higher education institutions on their cost of attendance, student graduation rates, and graduates’ post-college earnings. The plan was to use these ratings to inform the allocation of federal funding, including financial aid, to institutions.

The tortured development of this effort—and its ultimate outcome—is indicative of the challenges involved in a performance-based regulatory approach to the higher education sector. When President Obama first announced his rating proposal, he called for reforms to federal higher education financing that would reward colleges that offer low tuition, provide “value” (defined as programs that had high graduation rates), enable graduates to obtain good-paying jobs, and give access to low-income students. “What we want to do is rate them on who’s offering the best value so students and taxpayers get a bigger bang for their buck,” Obama explained. “Colleges that keep their tuition down and are providing high-quality education are the ones that are going to see their taxpayer funding go up. It’s time to stop subsidizing schools that are not producing good results.”

Although their responses to the former president’s proposal differed in tone and substance, higher education institutions around the country were vocal critics of the proposal, and their assessments were representative of more general critiques of performance-based regulation. Higher education leaders argued that such a rating system would be impossible to create because higher education is too diverse and has too many goals; the “value” of education was difficult to meaningfully quantify. In one characteristic argument, David Warren, director of the National Association of Independent Colleges and Universities, asserted that “private, independent college leaders do not believe it is possible to create a single metric that can successfully compare the broad array of American higher education institutions without creating serious unintended consequences.” Any rating system, Warren argued, would reflect policymakers’ choices more than those of individual students.

Although the Obama Administration claimed that the proposal would distinguish among different types of schools, higher education leaders and their lobbyists expressed concerns that such a proposal would further exacerbate the divide between the elite schools—where students from mostly wealthy backgrounds graduate at high rates and secure well-paying employment—and the many institutions that provide open access and have lower graduation and employment outcomes. “It’s not fair or reasonable, really, to rate institutions on their performance without consideration of the nature of their student body,” argued Peter McPherson, president of the Association of Public and Land-grant Universities. Furthermore, according to critics, an exclusive focus on limited metrics, such as earnings data, could result in colleges neglecting programs in low-paying occupations such as teaching and nursing.

Higher education leaders also questioned the ability of the government to gather and manage accurate data on these complicated factors. “Several of the data points that the Department is likely to include in a rating system—such as retention and graduation rates, default rates and earnings data—are flawed,” argued Molly Corbett Broad, president of the American Council on Education. “The Department of Education’s retention and graduation rates, for example, count as a dropout any student who transfers from one institution to another, regardless of whether they complete their education at another institution,” she continued. Furthermore, at the time, federal graduation rate calculations only included first-time, full-time students, leaving out most students who attend community colleges and for-profit schools.

In the summer of 2015, after more than two years of discussions with higher education institutions, educational advocates, and congressional leaders, the administration pivoted away from the idea of creating a rating system. Instead, the Department of Education released the “College Scorecard,” an online system providing a considerable amount of institution-level data on students’ academic, employment, and financial outcomes at the nation’s colleges and universities. “The College Scorecard aligns incentives for institutions with the goals of their students and community,” a White House statement reads. “Although college rankings have traditionally rewarded schools for rejecting students and amassing wealth instead of giving every student a fair chance to succeed in college, more are incorporating information on whether students graduate, find good-paying jobs, and repay their loans.” A policy briefing published with the Scorecard notes both the challenges of comparing institutional performance across the sector and the importance of baseline expectations and shared values.

The college rating saga revealed both the growing push for outcome-related transparency and accountability and the challenges inherent in performance-based regulation. Even if there is a general consensus on aspirations like accessibility, affordability, and quality, defining those goals concretely, measuring them meaningfully, and then applying them uniformly to the highly heterogeneous world of higher education creates its own kinds of problems—both technical and political. Furthermore, absent oversight of process, institutions may seek to game a narrow set of high-stakes outcome metrics, while failing to advance the underlying goals for which they are a measurable proxy, or undermining other goals not captured by the metrics (like broad and equitable access to higher education). Finally, performance-based regulation reinforces—intentionally—competition among higher education institutions, exacerbating incentives for institutions to keep effective practices to themselves.
