Education Market Failure? The Cost of Regulations on Private School Choice Programs

An Analysis of the Louisiana Scholarship Program Achievement Findings

“The approach has been to regard any market failure, however minor, as a sufficient excuse for government intervention. The market has failed, therefore the government should step in. But this is a basic error, because it involves a double standard. There is not only such a thing as a market failure, there is also such a thing as a government failure. The cure may be worse than the disease.”

-Nobel Laureate Milton Friedman on the harms of regulations

The question of market failure has been raised of late with respect to the Louisiana Scholarship Program (LSP), a voucher program enacted and launched in 2008 to help low-income students in “low-performing” Louisiana public schools. Two recent studies of the LSP report initial negative student achievement results that do not align with the greater body of research showing positive or neutral outcomes from school choice programs in other cities and states.

We do not dispute the initial outcomes in Louisiana; rather, we seek here to offer potential explanations for what appear to be anomalous negative results within the larger body of research conducted to date on school choice programs across America.

First conceived by Milton Friedman in 1955, school choice options, such as vouchers and education savings accounts, give parents the freedom to choose the best learning environment for their children with the funding that would have been spent on their children in public school. This ensures each child has access to an excellent education, regardless of race, income, background or ZIP Code.

There currently are 61 school choice programs operating in 30 states and the District of Columbia, and there have been 18 nationally recognized, random assignment studies to determine the effect of the programs on student achievement. Of those studies, which were conducted using the “gold standard” of social science, 14 found the programs in question had a positive effect on academic performance for all or some participants. Two determined the programs had a neutral effect. Now, two studies of Louisiana’s voucher program find the program had a negative effect on participants’ test scores in its first two years.

These results are one part of Dr. Greg Forster’s A Win-Win Solution: The Empirical Evidence on School Choice, which we recently updated and released. The national report compiles results from rigorous empirical studies that examine the academic outcomes of school choice students, the academic effect of competition on public schools, the fiscal impact of school choice on taxpayers and government, racial segregation in schools and the effect of school choice on civic values and practices.

The National Bureau of Economic Research (NBER) released in December 2015 the first nationally recognized random assignment study ever to find that a school voucher program (the LSP) had a negative effect on student achievement in its first year. Just two months later, the University of Arkansas released a similar gold standard study covering the first two years of the same program. Though the University of Arkansas report’s first-year findings were consistent with NBER’s, the program’s second year showed minor academic improvements that may offer insight into the anomaly.

The University of Arkansas study offers a two-year window into the program by examining two groups of randomly sampled students who are essentially identical save one variable: a school voucher. One group of students applied and won the voucher lottery (the test group); the other applied but did not win the voucher lottery (the control group):

  • After two years of enrollment, LSP scholarship users scored 0.34 standard deviations lower than the control group in math, a statistically significant point estimate; this is equivalent to roughly 13 months of learning for students using vouchers to attend private schools;
  • After two years of enrollment, LSP scholarship users scored 0.18 standard deviations (roughly six months of learning) lower than the control group in English Language Arts (ELA), but this point estimate was not statistically significant, meaning that we cannot rule out no effect or a positive effect;
  • The effects in the second year were smaller than those in the first year, implying that the treatment group may have improved after the first year;
  • These results were similar for subgroups of students based on gender and race/ethnicity.
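The “months of learning” figures in the bullets above come from translating effect sizes (in standard deviation units) through an assumed rate of annual learning growth. A minimal sketch of that arithmetic, using a hypothetical benchmark of about 0.26 standard deviations of growth per 10-month school year (the studies’ actual conversion benchmark may differ, and it evidently varies somewhat between the math and ELA figures reported):

```python
def effect_to_months(effect_sd, sd_per_year=0.26, months_per_year=10):
    """Convert an effect size in standard deviation units to approximate
    months of learning, given an assumed annual growth benchmark.
    The 0.26 SD/year default here is illustrative, not from the studies."""
    return effect_sd / sd_per_year * months_per_year

# Second-year LSP point estimates from the University of Arkansas study:
print(effect_to_months(0.34))  # math deficit: roughly 13 months
print(effect_to_months(0.18))  # ELA deficit: roughly 7 months
```

Under this illustrative benchmark, the 0.34 SD math deficit works out to roughly 13 months; the 0.18 SD ELA figure comes out near seven months rather than the roughly six reported, which simply reflects that the studies’ own conversion assumptions differ slightly from the one sketched here.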

These findings, as noted above, are unusual when compared to more than a dozen gold standard reports showing positive academic achievement for voucher students. Why, then, did the recent studies of the first two years of the LSP generate negative results?

It would be premature to proclaim the Louisiana program an educational market failure based on these initial data, particularly given that researchers have yet to untangle the reasons for these findings, and it may turn out that some or all of the causes can be remedied. For instance, over-regulation may have played a role in limiting the scope and quality of the program. Alternatively, it may be the case that the diverse curricula among private schools don’t align with the state tests.

Moreover, these results give us an opportunity to analyze and make improvements not just to LSP, but also to the design of all educational choice programs.

Researchers continue to examine the empirical reasons for Louisiana’s negative outcomes but already have identified seven initial potential causes, of which policymakers should take note.

How Do You Solve a Problem Like Louisiana?

The University of Arkansas researchers offer four potential explanations for Louisiana’s negative results:

  • Given that the LSP is the first statewide voucher program systematically studied (other rigorously studied programs serve smaller samples of students in urban school districts), the program’s scale may have played a role;
  • Private schools participating in the LSP may have had little time to prepare for incoming students because the timeframe for program implementation was short;
  • As the LSP requires students to have attended poorly performing public schools (no other voucher program has this requirement), participating schools may have been inadequately prepared to educate high-need students;
  • The group of participating private schools in the LSP was of lower quality, on average, than the groups of private schools participating in other voucher programs. This is plausible in light of another study showing that fewer than one-third of private schools agreed to participate in the LSP.

The NBER study also offers indirect insight into the low-quality private school phenomenon. According to the report, “LSP-eligible schools experienced rapid enrollment declines relative to other nearby private schools before entering the program.” This finding, together with the low willingness of private schools to participate and expand (only about one-third of the state’s private schools participate in the program), gives us reason to suspect the LSP faces supply-side problems. Another study surveyed school leaders in Florida, Indiana, and Louisiana and found that fewer than a quarter of Louisiana school leaders planned to expand the number of spots available to voucher students.

This pattern suggests the program disproportionately attracted certain types of private schools, such as schools struggling to keep students enrolled, relative to the high-quality schools that do not have this enrollment issue. Participation in the program may also be adversely affected by excessively low voucher payments. Payments are capped at the sending district’s per-student expenditure.

Though it’s hard to say with certainty the reasons for the negative results (researchers undoubtedly will be digging into this question), we want to highlight three potential explanations in addition to the four previously mentioned from the University of Arkansas researchers:

  • Over-regulation Theory: Excessive regulatory burdens deter quality schools from entering the program, while schools struggling to stay open (e.g., those with declining enrollments) are willing to sacrifice autonomy for an additional source of funding. The difficulty with this theory is that, if true, it explains the first-year results of Mills and Wolf’s study but not the second-year improvement. Alternatively, if under-regulation is at play, it explains the second-year results but not the first-year results.
  • Non-aligned Test Theory: State exams are aligned with public school curricula rather than with private schools’ diverse curricula. This would seem more consistent with the Arkansas study’s results: the bump in the second year might be due to participating schools aligning their curricula with the tests.
  • Regulatory Barrier Theory: Strict and often arbitrary regulation by state and federal officials may have acted as a barrier to entry or a deterrent for higher-quality private schools.

We conclude that the NBER and Arkansas studies of the LSP are rigorous and legitimate and should not be discounted. But it’s important that they be used properly and interpreted within a larger body of research, namely to learn lessons about what’s wrong with the program and to recommend improvements.

Researchers have yet to understand why the LSP has negative effects for participating students. In fact, the principal investigators are planning to address that critical question in the future.

Given the context provided above, however, these two studies certainly offer no rationale for dismantling all school choice programs or for other states to hold back on creating their own programs. It’s simply unreasonable to point to them as evidence that school choice doesn’t work when, in fact, a much larger body of other evidence suggests that it works in many other places.