
School Choice FAQs

Does school choice have a positive academic impact on participating students?

Yes.

Studies conducted since the late 1990s convincingly show that school choice is an effective intervention and public policy for boosting student achievement.

Fifteen studies have examined the use of school vouchers by employing a method called a randomized controlled trial (RCT), considered the gold standard in the social sciences. Twelve of those studies found statistically significant gains in academic achievement for some or all voucher students. Only two studies have detected negative effects, both of which examine the initial impacts of Louisiana’s statewide voucher program. One study’s findings were inconclusive because they were not statistically significant.

Random-assignment methods used in RCTs allow investigators to isolate the effects of school vouchers from other student characteristics. Because students are randomly assigned to a “treatment” group and a “control” group, the two groups are, on average, the same or very similar in all their characteristics, both those we can measure and those we cannot, such as motivation and family background. This ensures that the only systematic difference between the two groups is that one receives the intervention and the other does not. Students who applied for vouchers were entered into lotteries that produced random “winners,” determining who would receive a voucher and who would remain in public schools; this allows researchers to track the “treatment” and “control” groups.

Random-assignment studies are relatively rare in educational research. However, when voucher programs are oversubscribed, a random lottery typically is used to determine which students will be offered vouchers. Applicants who are offered vouchers as a result of the lottery are a naturally occurring random-assignment treatment group, and applicants who are not offered vouchers are the control group. Both groups are made up of students whose parents applied to participate in the program; they are separated only by the result of the lottery. The essential distinction is whether or not one received the “treatment,” which is the school voucher in this case.
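
To make the lottery logic concrete, here is a minimal sketch in Python, using entirely made-up numbers rather than data from any actual voucher program. It shows how random assignment yields two groups that start out alike on average and how the simple difference in their later average outcomes estimates the effect of being offered a voucher.

    import random
    import statistics

    random.seed(42)

    # Hypothetical applicant pool: each applicant gets an unobserved "baseline"
    # score standing in for ability, motivation, family background, and so on.
    applicants = [random.gauss(50, 10) for _ in range(2000)]

    # An oversubscribed lottery randomly offers vouchers to half of the applicants.
    random.shuffle(applicants)
    offered = applicants[:1000]      # lottery winners ("treatment" group)
    not_offered = applicants[1000:]  # lottery losers ("control" group)

    # Random assignment makes the groups nearly identical on average,
    # even on characteristics no one ever measured.
    print(round(statistics.mean(offered), 1), round(statistics.mean(not_offered), 1))

    # Pretend, purely for illustration, that the voucher offer adds 2 points
    # to later test scores.
    offered_scores = [b + 2 + random.gauss(0, 5) for b in offered]
    not_offered_scores = [b + random.gauss(0, 5) for b in not_offered]

    # The difference in group means then recovers roughly that 2-point effect.
    print(round(statistics.mean(offered_scores) - statistics.mean(not_offered_scores), 1))

Because nothing but the lottery separates the two groups, that difference can be read as the effect of the voucher offer without adjusting for family background, motivation, or anything else.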

Highly respected researchers have conducted gold-standard RCT studies in five large cities: Milwaukee; Charlotte; Washington, D.C.; New York City; and Dayton. Additionally, the two most recent studies examined Louisiana’s statewide voucher program.

EVIDENCE: Gold-standard studies find vouchers improve learning or attainment.

Fifteen studies of school voucher programs have used random-assignment methods.1 In 12 of those studies, some or all of the students in the voucher (treatment) group achieved better academic outcomes than the control group. Those positive results achieved a high level of statistical significance, meaning we can be very confident that the better results in the treatment group were caused by vouchers rather than by random chance. Four studies showed statistically significant academic gains for all students in the treatment group (voucher program participants).
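
As a rough illustration of what “statistically significant” means in this context, the toy calculation below (which assumes the SciPy library is available and uses invented scores, not results from any of the fifteen studies) runs a standard two-sample t-test, asking how likely a gap this large would be if vouchers actually had no effect.

    import random
    from scipy import stats  # provides the standard two-sample t-test

    random.seed(0)

    # Invented post-test scores; the voucher group averages slightly higher by construction.
    voucher_scores = [random.gauss(52, 10) for _ in range(500)]
    control_scores = [random.gauss(50, 10) for _ in range(500)]

    gap = sum(voucher_scores) / 500 - sum(control_scores) / 500
    t_stat, p_value = stats.ttest_ind(voucher_scores, control_scores)

    # A p-value below the conventional 0.05 threshold means a gap this large
    # would rarely arise from random chance alone, i.e. the difference is
    # "statistically significant."
    print(f"gap = {gap:.2f} points, p = {p_value:.4f}")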

In eight studies, the positive results for vouchers achieved statistical significance only among large student subgroups, rather than in the population as a whole. For example, in some cases the positive results for voucher students are only statistically significant for black students, who made up the majority of voucher users in those programs. These studies do not find any negative voucher effects on any student groups, and they find statistically significant voucher benefits for most students.

Just one study of the fifteen produced no statistically significant results. However, researchers have identified a number of serious violations of proper scientific methods in that study.2 Had those flaws been corrected, the study’s estimates would have achieved statistical significance.

In addition, we can see from the evaluation of the D.C. voucher program that a school choice “treatment” had a positive impact on graduation rates. Specifically, the 2013 article by Wolf et al. found that students who had been randomly selected to receive vouchers graduated at a rate of 82 percent, which was 12 percentage points higher than students randomly selected not to receive vouchers. Even better, students who actually used the vouchers had graduation rates that were 21 percentage points higher. Virtually identical effects were found for students who came from public schools that had been deemed failing.
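
The gap between the 12-point and 21-point figures reflects the standard distinction between the effect of being offered a voucher and the effect of actually using one. A common back-of-the-envelope way to relate the two estimates is to divide the offer effect by the share of lottery winners who actually used their vouchers; the sketch below uses a take-up rate that is assumed purely for illustration, and the D.C. evaluation’s own adjustment may differ.

    # Hypothetical Bloom-style adjustment: scale the effect of being OFFERED a
    # voucher up to the effect of USING one. The 57% take-up rate is an
    # assumption for illustration, not a figure reported by the D.C. evaluation.
    offer_effect = 12        # percentage-point gain in graduation from the offer
    assumed_take_up = 0.57   # assumed share of lottery winners who used a voucher

    user_effect = offer_effect / assumed_take_up
    print(round(user_effect))  # roughly 21 percentage points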

The two studies that detect negative effects examined the Louisiana Scholarship Program (LSP). One study looked only at the program’s first-year effects and found that students who won a school-level random lottery and used a voucher experienced a substantial decrease in academic achievement relative to first-time applicants who lost the lottery and remained in public schools. However, the authors suggest that the design of the program may have had the unintended consequence of attracting a set of private schools struggling to maintain enrollment, and that these schools appear to have provided lower educational quality than the public schools.

The study that looked at the LSP’s second-year effects found that lottery-winning voucher users experienced large and statistically significant negative impacts on math achievement in their first year of the program, but that those impacts were less negative in their second year of participation. These students also experienced statistically significant negative impacts on English Language Arts achievement in their first year; that negative impact dissipated to insignificance by their second year. The authors stated that the results suggest the negative impacts of the program may dissipate over time.

Random Assignment Studies Finding Vouchers Impacted Student Achievement

Statistically significant increases for all students: 4
Statistically significant increases for subgroups: 8
Not statistically significant/neutral: 1
Statistically significant decreases for subgroups: 0
Statistically significant decreases for all students: 2

 

Random Assignment Voucher Studies

Authors | Area | Report Year | Years Studied | Voucher Benefit?
Mills and Wolf | Louisiana | 2016 | 2011–12 to 2013–14 | Negative: Math
Abdulkadiroglu, Pathak, and Walters | Louisiana | 2015 | 2011–12 to 2012–13 | Negative: Math and Reading
Bitler, et al. | New York, NY | 2015 | N/A | Subgroups: Math
Chingos and Peterson | New York, NY | 2015 | 1996–97 to 2006–07 | Subgroups: Graduation Rates
Wolf, et al. | Washington, D.C. | 2013 | 2004–05 to 2008–09 | All Students: Graduation Rates; Subgroups: Reading
Jin, Barnard, and Rubin | New York, NY | 2010 | 1997–98 | Subgroups: Reading, Math
Cowen | Charlotte, NC | 2008 | 1999–00 | All Students: Reading, Math
Krueger and Zhu | New York, NY | 2004 | N/A | N/A
Barnard, et al. | New York, NY | 2003 | 1997–98 | Subgroups: Math
Howell and Peterson | New York, NY | 2002 | 1997–98 to 1999–00 | Subgroups: Reading, Math
Howell and Peterson | Washington, D.C. | 2002 | 1998–99 to 1999–00 | All Students: Reading, Math
Howell and Peterson | Dayton, OH | 2002 | 1998–99 to 1999–00 | Subgroups: Reading, Math
Greene | Charlotte, NC | 2001 | 1999–00 | All Students: Reading, Math
Greene, Peterson, and Du | Milwaukee, WI | 1998 | 1990–91 to 1993–94 | All Students: Reading, Math
Rouse | Milwaukee, WI | 1998 | 1990–91 to 1993–94 | All Students: Math

*We define a “study” as a unique set of one or more data analyses, published together, of a single school choice program. “Unique” means using data and analytic specifications not identical to those in previously reported studies. A “publication” is a means of reporting results to the public by report, paper, article, book, or book chapter. By this definition, all data analyses on a single school choice program that are reported in a single publication are taken together as one “study,” but analyses studying separate programs are taken as distinct studies even if they are published together.

 

Suggested Citation
“Does School Choice Have a Positive Academic Impact on Participating Students?,” Friedman Foundation for Educational Choice, last modified Feb. 26, 2015, http://www.edchoice.org/school_choice_faqs/does-school-choice-have-a-positive-academic-impact-on-participating-students.

 

Notes

1. Jay P. Greene, Paul E. Peterson, and Jiangtao Du, “School Choice in Milwaukee: A Randomization Experiment,” in Learning from School Choice, ed. Peterson and Bryan C. Hassel (Washington, D.C.: Brookings Institution, 1998), pp. 335-56; Cecilia E. Rouse, “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” Quarterly Journal of Economics 113, no. 2 (May 1998), pp. 553-602, doi:10.1162/003355398555685; Greene, “Vouchers in Charlotte,” Education Next 1, no. 2 (Summer 2001), pp. 55-60, http://educationnext.org/vouchersincharlotte/; William G. Howell and Peterson, The Education Gap: Vouchers and Urban Schools, rev. ed. (2002; repr., Washington, D.C.: Brookings Institution, 2006); John Barnard, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin, “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association 98, no. 462 (June 2003), pp. 299-323; Alan B. Krueger and Pei Zhu, “Another Look at the New York City School Voucher Experiment,” American Behavioral Scientist 47, no. 5 (Jan. 2004), pp. 658-98, doi:10.1177/0002764203260152; Joshua M. Cowen, “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte,” Policy Studies Journal 36, no. 2 (May 2008), pp. 301-15, doi:10.1111/j.1541-0072.2008.00268.x; Hui Jin, John Barnard, and Rubin, “A Modified General Location Model for Noncompliance with Missing Data: Revisiting the New York City School Choice Scholarship Program using Principal Stratification,” Journal of Educational and Behavioral Statistics 35, no. 2 (Apr. 2010), pp. 154-73, doi:10.3102/1076998609346968; Patrick J. Wolf, Brian Kisida, Babette Gutmann, Michael Puma, Nada Eissa, and Lou Rizzo, “School Vouchers and Student Outcomes: Experimental Evidence from Washington, DC,” Journal of Policy Analysis and Management 32, no. 2 (Spring 2013), pp. 246-70, doi:10.1002/pam.21691; Matthew M. Chingos and Peterson, “Experimentally Estimated Impacts of School Vouchers on College Enrollment and Degree Attainment,” Journal of Public Economics 122 (Feb. 2015), pp. 1-12, doi:10.1016/j.jpubeco.2014.11.013; Marianne Bitler, Thurston Domina, Emily Penner, and Hilary Hoynes, “Distributional Analysis in Educational Evaluation: A Case Study from the New York City Voucher Program,” Journal of Research on Educational Effectiveness 8, no. 3 (July–Sept. 2015), pp. 419-50, doi:10.1080/19345747.2014.921259; Atila Abdulkadiroglu, Parag A. Pathak, and Christopher R. Walters, “School Vouchers and Student Achievement: First-Year Evidence from the Louisiana Scholarship Program,” NBER Working Paper 21839 (Cambridge, MA: National Bureau of Economic Research, 2015), http://www.nber.org/papers/w21839; Jonathan N. Mills and Patrick J. Wolf, The Effects of the Louisiana Scholarship Program on Student Achievement After Two Years, Louisiana Scholarship Program Evaluation Report 1 (Fayetteville: Univ. of Ark., School Choice Demonstration Project, 2016), http://educationresearchalliancenola.org/files/publications/Report-1-LSP-Y2-Achievement.pdf.

2. The authors invented an idiosyncratic method for classifying students by race, then arbitrarily applied that definition to black students but not to other students. They also added to the data set new students for whom information was missing, reducing the quality of the study’s data. When data for a given factor are missing for all students (as in the Charlotte studies), researchers simply have to go without it. However, it makes no sense to add students with missing data to a sample that already contains plenty of students for whom those data are present. In addition, the authors were highly selective in their choice of statistical models; they had to use just the “right” model to prevent the positive results for vouchers from being statistically significant. Caroline M. Hoxby, “School Choice and School Competition: Evidence from the United States,” Swedish Economic Policy Review 10, no. 2 (2003), pp. 9-65, http://www.regeringen.se/content/1/c6/09/52/71/66cbb4f6.pdf; Peterson and Howell, “Voucher Research Controversy,” Education Next 4, no. 2 (Spring 2004), pp. 73-78, http://educationnext.org/voucherresearchcontroversy.
