Friday Freakout: Why Doug Harris Is Wrong About the Empirical Evidence on School Choice

Tulane University Professor Douglas N. Harris gets the research wrong in his response to a Wall Street Journal editorial praising school choice. The Journal referenced the fourth edition of my A Win-Win Solution report, which reviews the empirical research on school choice programs. Harris mischaracterizes the research and my report.

This iteration of Win-Win reviews over 100 findings from empirical studies of the effects of educational choice on academic outcomes for students and schools, taxpayer savings, segregation in schools and civic values and practices. The Wall Street Journal editorial included this statement: “A meta-analysis last year by the Friedman Foundation found that 14 of 18 empirical studies analyzing programs in which students were chosen at random by lottery found positive academic outcomes.”

EdChoice is always open to feedback on our research publications; you can find our contact information here. And I for one always enjoy a robust public debate! But we have to get the facts right, and Harris doesn’t. Here’s what he had to say—and why he’s on the wrong track.

HARRIS: (1) The Friedman report lists 18 studies of academic outcomes. But there are only six separate cities/states that have had randomized voucher programs; this means 12 of the 18 studies are similar analyses of the exact same students. The Friedman Foundation, now known as EdChoice, has double-counted these studies.

Well, he is correct about one thing: for 20 years, EdChoice was the Friedman Foundation, and we’re proud to carry forward Milton and Rose’s intellectual legacy of educational choice.

Now about those studies. First, I was very transparent about the way I counted research findings. The protocols for including a study are clearly laid out and defended at the beginning of the report. The rules I used were perfectly reasonable, and I note that Harris does not even pretend to offer an argument against them.

If Harris thinks it’s somehow illegitimate to count multiple studies of a single program as multiple studies, rather than pretending only one study has been conducted, that objection reveals his own ignorance, not a flaw in the report. As all good scientists know, replication is the essence of science. One study never proves anything; it is only the replication of a finding across multiple studies that creates real confidence. Hence it is crucial to know how many studies of a given program have been conducted.

Oh, and Harris has the number of locations wrong. There have been random-assignment studies of eight programs in seven locations, not six (see p. 14 of the report).

HARRIS: (2) EdChoice’s unusual standards for including studies have the effect of excluding two large, rigorous “quasi-experiments” (of voucher programs in Indiana and Ohio) that reflect poorly on vouchers.

These studies were not included in Win-Win mainly because I do not (yet) have a time machine. The Ohio study was released after Win-Win was written. And the Indiana study, although it has been referenced in the media, hasn’t actually been published yet. Preliminary claims about findings have been shared with the press before publication, but we can’t evaluate the merits until we see the study.

It’s funny: whenever studies finding positive effects from school choice get early press coverage, opponents of choice shriek that nothing is valid until it’s peer-reviewed! Apparently they’re allowed to grab at whatever half-informed headlines serve their purposes, but we’re not allowed to cite anything until it’s too old to be valuable.

However, to be fair, it’s true that even if they’d been available, these studies would not have been included in Win-Win. Harris is right that they do not meet the study inclusion criteria on participant effects because they were not based on random assignment.

Harris is wrong, however, to say that this is “unusual.” To the contrary, focusing on random-assignment studies is an accepted practice in reviews of school choice research. As I explained clearly and transparently in Win-Win, this focus is justified by the superior methodological quality of these studies.

Finally, if Harris wants to look beyond just the random-assignment research, that would require us to look at all relevant studies, not just a couple of studies Harris cherry-picked. As I explain in Win-Win, that’s a huge field to sort through.

HARRIS: (3) The review counts as positive any study that finds a positive effect for any racial or other subgroup of students. However, if you separate students into enough groups, someone is bound to benefit. Researchers call this the “multiple comparisons” problem.
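For readers unfamiliar with the term, the worry Harris raises is easy to state: run enough separate significance tests at the conventional 5 percent level, and the odds of at least one spurious “positive” finding climb quickly even when nothing is really there. Here is a minimal sketch of that arithmetic in Python, using purely hypothetical numbers and assuming independent tests:

```python
# Sketch of the "multiple comparisons" problem Harris invokes.
# With k independent tests at significance level alpha, the chance of at
# least one false positive is 1 - (1 - alpha)**k, even when the true
# effect is zero for every subgroup. All numbers are illustrative only.

alpha = 0.05  # conventional significance threshold

for k in (1, 2, 4, 10, 20):
    familywise_error = 1 - (1 - alpha) ** k
    print(f"{k:2d} subgroup tests -> P(at least one false positive) = {familywise_error:.2f}")
```

That is the concern in the abstract; here is why it has little purchase in this case.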

The studies reviewed in Win-Win did not “separate students into enough groups” until they found a positive result. The datasets available to researchers typically contain limited background information on students, so subgroup analyses tend to focus on a few factors, with free and reduced-price lunch status, ethnicity and low academic performance among those most commonly reported.

I do count a study as having found a positive effect if it finds a positive effect for a given subgroup of participants and no negative effects on anyone else. (I would have done the same for a negative subgroup finding, but no studies have found any.) If some student groups benefited while others were not visibly harmed, isn’t that a positive outcome? Do positive results for an ethnic group or for low-performing students not matter, so long as other groups didn’t see worse results? Policymakers and stakeholders use the information in these studies, whether it’s overall or for a specific subgroup, to help them design new programs and improve existing ones. It’s unfair to throw out positive results for some groups because there was no visible effect on other groups or across the aggregate population.
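To make that tally rule concrete, here is a minimal sketch of how it works as I have described it; this is an illustration of the protocol, not code from the report, and the example effect values are invented:

```python
def classify_study(subgroup_effects):
    """Win-Win tally rule as described above: a study counts as positive
    if any subgroup shows a statistically significant gain and no subgroup
    shows a significant loss; the negative case is symmetric.
    `subgroup_effects` holds only the signed, significant effects found."""
    has_gain = any(effect > 0 for effect in subgroup_effects)
    has_loss = any(effect < 0 for effect in subgroup_effects)
    if has_gain and not has_loss:
        return "positive"
    if has_loss and not has_gain:
        return "negative"   # per the report, no study has triggered this
    if has_gain and has_loss:
        return "mixed"      # hypothetical; not an observed case
    return "no visible effect"

print(classify_study([0.12]))  # gain for one subgroup -> "positive"
print(classify_study([]))      # nothing significant  -> "no visible effect"
```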

HARRIS: Academic researchers would never carry out an analysis like this. In reality, there are eight programs with rigorous evidence—four positive, three negative, and one with no effect. But if these results are weighted based on the number of students in each study—a standard research practice—the overall effect of vouchers on student achievement in the studies is negative, substantially so. Even if we threw out the three negative results, the four positive ones are limited to a specific student subgroup: African Americans in urban school systems.

Harris’s main claims are incorrect here. But first, I’d be remiss if I didn’t point out to Harris (and to the Wall Street Journal, which made the same error) that Win-Win is not a meta-analysis. It is a systematic review of the available empirical studies. A meta-analysis pools data from different studies into a single combined estimate; a systematic review catalogs and evaluates the studies individually. This matters because Harris is demanding I use methods appropriate to meta-analyses, or at least to some meta-analyses (the fact that every school choice program is different makes pooling methods problematic when studying choice). More importantly, I do not use meta-analysis methods for the deeply nefarious reason that I was not conducting, and never claimed to be conducting, a meta-analysis.
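To see why the distinction is more than semantics, here is a minimal sketch of the student-count weighting Harris describes. Every effect size and sample size below is invented for illustration; none comes from an actual voucher study:

```python
# A meta-analysis pools effect sizes into one weighted estimate; a
# systematic review (like Win-Win) tallies and describes findings study
# by study. All numbers below are hypothetical.

studies = [
    # (effect_size, n_students) -- invented values
    (0.15, 500),
    (0.10, 800),
    (0.20, 300),
    (-0.05, 5000),   # one very large study
]

total_n = sum(n for _, n in studies)
weighted = sum(effect * n for effect, n in studies) / total_n
unweighted = sum(effect for effect, _ in studies) / len(studies)

print(f"Unweighted mean effect: {unweighted:+.3f}")  # three of four studies positive
print(f"Student-weighted mean:  {weighted:+.3f}")    # sign flipped by the big study
```

The sign flips because the single large study dominates the pooled average. Whether that kind of pooling makes sense when every program differs in design is precisely the methodological question, and it is a meta-analytic question, not one a systematic review claims to answer.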

If Harris is looking for a meta-analysis, a group of University of Arkansas researchers released one in May 2016 looking at 19 studies of 11 school choice programs around the world. It found overall positive and statistically significant achievement effects of using school vouchers.

Now to the more important claims. Harris references eight programs for which we have “rigorous evidence.” There are, in fact, eight programs for which we have random-assignment studies. Of these, only one, in Louisiana, has negative findings. One, in Toledo, Ohio, shows no visible effect. The other six have positive effects. (One study purported to find no visible effect from one of these programs, but that study’s bizarre methods have been discredited and other research finds positive effects, as I described in Win-Win.)

Harris is also wrong to say that positive effects are limited to African-American students, or to any specific subgroup. The D.C. voucher program produced big gains in graduation rates for all students in random-assignment research. A 2002 random-assignment study of an earlier, privately funded voucher program in D.C. also found positive academic effects across all participating students. Two random-assignment studies of vouchers in Charlotte, N.C., and two random-assignment studies of vouchers in Milwaukee also found gains across all participating students.

Do you know where Harris could have found all this information? On page 14 of Win-Win, where I laid it out in a nice, very easy-to-read graphic. Keeping track of these kinds of things is why research reviews that are not meta-analyses are helpful, at least to those who read them.

Now comes my favorite part. Toward the end of his rebuttal, Harris notes that the Ohio study he referenced also found positive results for students who were eligible for a voucher but did not use one, an outcome that was also true in 31 of 33 studies included in Win-Win. (There has actually been another positive study since Win-Win came out, so fire up the time machine and make that 32 of 34 studies!)

But Harris is quick to discount the importance of this finding. His excuse is that the same could be said of other kinds of choice programs, like charters.

That’s quite a concession to school choice! If choice programs improve outcomes even for students who never leave their public schools, that is the finding with the widest reach: far more students would remain in public schools than would exit using school choice, even if universal choice were available. The fact that school choice consistently improves public schools, and that it is the only education policy with such a consistent track record across a large body of high-quality empirical studies, is one of the strongest arguments in its favor.

*Opinions expressed by our guest bloggers are their own and do not necessarily reflect those of EdChoice.