Ricardo Hausmann argues against RCTs as a way to test development interventions. Instead of an RCT that distributes tablets to schools, he advocates the following approach:
Consider the following thought experiment: We include some mechanism in the tablet to inform the teacher in real time about how well his or her pupils are absorbing the material being taught. We free all teachers to experiment with different software, different strategies, and different ways of using the new tool. The rapid feedback loop will make teachers adjust their strategies to maximize performance.
Over time, we will observe some teachers who have stumbled onto highly effective strategies. We then share what they have done with other teachers.
This approach has a big advantage over a randomized trial, because the design can adapt to circumstances that are specific to a given classroom. And, Hausmann asserts, it will yield approaches whose effectiveness is unconfounded by reverse causality or selection bias.
Clearly, teachers will be confusing correlation with causation when adjusting their strategies; but these errors will be revealed soon enough as their wrong assumptions do not yield better results.
Is this true? If so, we don’t need to run any more slow, unwieldy randomized trials. We can just ask participants themselves what works! Unfortunately, the idea that participants will automatically understand how well a program works is false. Using data from the Job Training Partnership Act (JTPA), Smith, Whalley, and Wilcox show that JTPA participants’ perceived benefits from the program were unrelated to the actual effects measured in an RCT. Instead, they seem to reflect simple before-after comparisons of outcome variables, which is a common mistake in program evaluation. The problems with before-after comparisons are particularly bad in a school setting, because all student performance indicators naturally trend upward with age and grade level.
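To see how misleading this is in a school setting, here is a toy simulation (my own illustration, not from the Smith, Whalley, and Wilcox paper; every number is made up). Even when the program’s true effect is zero, a before-after comparison “finds” a large gain, because test scores rise with age and grade anyway; randomly assigned controls experience the same trend, so the experimental comparison strips it out.

```python
# Toy illustration: before-after comparisons vs. an RCT when scores trend upward anyway.
# All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

baseline = rng.normal(50, 10, n)   # baseline test scores
natural_growth = 8.0               # gain every pupil gets from another year of age and school
true_effect = 0.0                  # suppose the program does nothing at all

treated = rng.random(n) < 0.5      # random assignment, as in an RCT
endline = baseline + natural_growth + true_effect * treated + rng.normal(0, 5, n)

# A participant's "perceived benefit": my score went up, so the program must have worked
before_after = endline[treated].mean() - baseline[treated].mean()

# The experimental estimate: compare treated and control pupils at endline
rct_estimate = endline[treated].mean() - endline[~treated].mean()

print(f"Before-after 'effect': {before_after:.1f}")   # ~8: pure time trend
print(f"RCT estimate:          {rct_estimate:.1f}")   # ~0: the true effect
```

The entire before-after “gain” is just the natural growth term; the experimental comparison correctly recovers an effect of zero.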
An adaptive, evolutionary learning process can generate a high-quality program – but it cannot substitute for rigorously evaluating that program. Responding to Hausmann, Chris Blattman says “In fact, most organizations I know have spent the majority of budgets on programs with no evidence whatsoever.” This is true even of organizations that do high-quality learning using the rapid feedback loops described by Hausmann: many of the ideas generated by those processes don’t have big enough effects to be worth their costs.
That said, organizations doing truly effective interventions do make use of these kinds of rapid feedback loops and nimble, adaptive learning processes. They begin by looking at what we already know from previous research, come up with ideas, get feedback from participants, and do some internal M&E (monitoring and evaluation) to see how things are going, repeating that process to develop a great program. Then they start doing small-scale evaluations – picking a non-randomized comparison group to rule out simple time trends, for example. If the early results look bad, they go back to the drawing board and change things. If they look good, they move to a bigger sample and a more rigorous identification strategy.
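As a concrete, entirely hypothetical illustration of that early, non-randomized step, here is a minimal difference-in-differences check that nets out a time trend shared by pilot and comparison schools. The numbers are invented and the comparison group is not randomly chosen.

```python
# Minimal sketch (hypothetical numbers, not any real organization's data): use a
# non-randomized comparison group to net out a time trend common to both groups.
pilot_baseline, pilot_endline = 42.0, 61.0   # mean scores in pilot schools
comp_baseline, comp_endline = 40.0, 49.0     # mean scores in nearby untreated schools

pilot_gain = pilot_endline - pilot_baseline  # 19 points: program effect plus time trend
comp_gain = comp_endline - comp_baseline     # 9 points of time trend alone

did_estimate = pilot_gain - comp_gain        # ~10 points plausibly attributable to the program
print(f"Naive before-after gain:   {pilot_gain:.1f}")
print(f"Difference-in-differences: {did_estimate:.1f}")
```

This only removes a trend that is common to both sets of schools; if the pilot schools were picked because they were already improving, the estimate is still biased, which is exactly why promising internal results should graduate to a larger sample and a randomized design.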
An example of how this works can be found in Mango Tree Educational Enterprises Uganda, which developed a literacy program over several years of careful testing and piloting. Before moving to an initial RCT, they collected internal data on both their pilot schools and untreated schools nearby, which showed highly encouraging results. The results from the first-stage RCT, available in this paper I wrote with Rebecca Thornton, are very impressive.
My impression is that most organizations wait far too long to even start collecting data on their programs, and when the results don’t look good, they are too committed to their approach to really be adaptive. The fundamental issue here is that causal inference is hard. That’s why it took human beings hundreds of thousands of years to discover the existence of germs. Social programs face the same problems of noisy data, omitted variables, strong time trends, and selection bias – and arguably fare even worse on those dimensions. As a result, no matter how convincing a program is, and how excellent its development process, we still need randomized experiments to know how effective it is.
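To make the selection-bias problem concrete, here is one last toy simulation (again, every number is invented): when higher-ability pupils are more likely to enroll, comparing participants to non-participants badly overstates the program’s effect, while random assignment recovers it.

```python
# Toy illustration of selection bias with made-up numbers.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

ability = rng.normal(0, 1, n)
true_effect = 2.0

# Self-selection: higher-ability pupils are more likely to sign up
enrolls = rng.random(n) < 1 / (1 + np.exp(-ability))
score_obs = 50 + 5 * ability + true_effect * enrolls + rng.normal(0, 3, n)
naive = score_obs[enrolls].mean() - score_obs[~enrolls].mean()

# Random assignment breaks the link between ability and participation
assigned = rng.random(n) < 0.5
score_rct = 50 + 5 * ability + true_effect * assigned + rng.normal(0, 3, n)
experimental = score_rct[assigned].mean() - score_rct[~assigned].mean()

print(f"Naive participant vs. non-participant gap: {naive:.1f}")         # far above 2
print(f"Experimental estimate:                     {experimental:.1f}")  # ~2, the true effect
```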