The holy grail of research is the causal claim. If you can prove that A causes B, not merely that A is related to B or tends to occur alongside B, you can cure diseases, create rockets that leave our atmosphere, and design social programs that help people live better lives.
In medicine, physics, or chemistry, determining causation is comparatively easy. Medical trials take place under carefully regulated circumstances. Patients are sequestered away, have their diets and behaviors regulated, and are kept in a “double blind,” meaning that neither they nor the researchers studying them know if they have received real medicine or a placebo. Experiments in the physical sciences use elements that behave in predictable ways. If you’re wondering how iron and sulfur will react when mixed, every time you test it, the atomic mass of iron will be 55.845 and the atomic mass of sulfur will be 32.065. You can verify your results a thousand times, and provided that your method is the same, you will get the same results.
The same is not true for social scientists. Because human beings are free to take advantage of programs in education, healthcare, or anything else to whatever degree they wish, and because so many other factors can influence human behavior, it can be hard to tell whether a particular program causes particular results.
Take, for example, a program to get more people to vote. An interested party wants to drive up voter turnout, so they pay to rent buses to ferry people to the polls. All day, volunteers on the buses count the number of people who ride, and they count 2,500 riders. At the end of the day, the organizer says, “Our program got 2,500 people to vote today.” People might clap and cheer, but is that true? It’s hard to say. Maybe those folks were going to vote anyway and just wanted a free ride.
In order to isolate causality in social science, researchers need to try to mimic what their colleagues do in medicine—create two equivalent groups of people, one that receives the “treatment” of the policy and one that does not.
One strategy, called a “matched comparison,” is to look out into the population and compare people who use a program to similar people who do not. In our bus example, a careful researcher might look at the 2,500 riders and “match” each of them to a similar person in census records who did not ride the bus. The non-riders could be matched on race, gender, and income, and live in a similar area. The researcher could then compare the rate at which the riders voted to the rate at which their matched non-riders voted, and attribute the difference to the program. Solves the problem, right?
Not necessarily. Maybe the people who chose to ride the bus differ from those who did not in ways that are not captured in census records. If, for example, those who chose to ride the bus had some greater underlying motivation to vote than those who stayed behind, the bus probably didn’t matter.
The best thing a researcher can do is try to create two equal groups before the “treatment” begins. To make sure they are equal, researchers can randomly assign people to each group. How? Well, the researcher could work with the busing program to encourage people to sign up to have a bus come to their house. Then, they can hold a lottery wherein half of the people on the signup list get the bus and half don’t. Then the only thing that makes the bus riders different from the non-bus riders is the offer of a ride. Comparing the voting rates of each group can isolate the effect of the program.
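The logic of the lottery can be illustrated with a small simulation. This is a hypothetical sketch, not real data: the motivation scores, the voting-probability formula, and the assumed 10-point bus effect are all invented for illustration. Because the lottery assigns the bus at random, the treated and control groups have the same mix of motivation levels, so the difference in turnout recovers the assumed effect.

```python
import random

random.seed(0)

def vote_prob(motivation, got_bus):
    # Chance of voting rises with underlying motivation; the bus adds
    # an assumed 10-point boost (a made-up effect size for illustration).
    return min(1.0, 0.3 + 0.4 * motivation + (0.10 if got_bus else 0.0))

# Hypothetical signup list: each person has an unobserved motivation
# level drawn uniformly from 0 to 1.
signups = [random.random() for _ in range(100_000)]

# Lottery: randomly offer the bus to roughly half of the signups.
treated, control = [], []
for m in signups:
    (treated if random.random() < 0.5 else control).append(m)

def turnout(group, got_bus):
    votes = sum(random.random() < vote_prob(m, got_bus) for m in group)
    return votes / len(group)

# Because assignment was random, motivation is balanced across groups,
# and the turnout gap estimates the true effect of the bus offer.
effect = turnout(treated, True) - turnout(control, False)
print(f"Estimated effect of the bus offer: {effect:.3f}")
```

Running this yields an estimate close to the assumed 0.10 effect; a naive comparison of volunteers to non-volunteers, by contrast, would also pick up the motivation gap between the two groups.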
The same is true for education, particularly school choice. Students who choose to participate in voucher programs might differ from those who do not in motivation, parental involvement, perseverance, or any number of other unmeasured ways that could affect their performance in school. Using randomized controlled trials allows researchers to compare students whose only difference is winning or losing a lottery. This nets out all of those unobserved differences and yields a true causal claim.
Luckily, in most cases, school voucher programs have more people wanting to participate than slots available. This has allowed researchers to hold lotteries and randomly assign students to receive vouchers. That research is well summarized in Greg Forster’s A Win-Win Solution: The Empirical Evidence on School Choice.
ABOUT THE AUTHOR
is a research fellow in education policy studies at the American Enterprise Institute.