We recently got access to preliminary data on math exam scores from the randomized evaluation of the NULP. There are no effects of the program on average math scores. Even though that’s a null result, it’s a pretty exciting finding – let me explain why.
Below is a preliminary graph of the math results by study arm and grade level. This is for the main treated cohort of students, so P1 (first grade) is from 2014, P2 is 2015, P3 is 2016, and P4 is 2017. Because the exam changes over time, I am just showing the percent correct. The exam also got harder at higher grade levels, so you don't see progress from year to year here, even though fourth-graders can definitely answer harder questions than first-graders. There are some subtasks where a year-to-year comparison is potentially possible, but even those subtasks got harder over time.
There are clearly no treatment effects on math scores in any grade. A regression analysis confirms this pattern.
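For readers curious what that regression looks like, here is a minimal sketch of the comparison it performs. The data below are simulated with a true treatment effect of zero; they are not the NULP data, and the sample size and score distribution are invented for illustration. Regressing percent-correct on a treatment dummy recovers the treatment-control difference in mean scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustration only: none of these numbers come from the NULP data.
n = 500
treat = rng.integers(0, 2, size=n)              # 1 = NULP school, 0 = control
score = 40 + 0 * treat + rng.normal(0, 15, n)   # true treatment effect = 0
score = np.clip(score, 0, 100)                  # percent correct is bounded

# OLS of score on a constant and the treatment dummy. With a single binary
# regressor, the slope coefficient equals the treatment-control mean difference.
X = np.column_stack([np.ones(n), treat])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Conventional (homoskedastic) standard error for the treatment coefficient;
# the actual analysis would account for the study's clustered randomization.
resid = score - X @ beta
sigma2 = resid @ resid / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se
print(f"estimated effect = {beta[1]:.2f} points, t = {t_stat:.2f}")
```

With a true effect of zero, the estimate hovers near zero and the t-statistic is small, which is the pattern the actual regressions show in every grade.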
Why would we have expected any effects? My own prior was a combination of three factors. I’ll explain each, and then what I think now:
- Advocates of the “reading to learn” model argue that building reading skills helps you learn other things, so we should see positive spillovers from reading skills onto math.
However, it’s not clear how much reading is really going on in math classes in northern Uganda, so maybe this is not a concern.
- The “Heckman equation” model argues that soft-skill investments early in life are critical for later-life gains. That might suggest a null effect here, since nothing the NULP does directly targets soft skills. If everything works through the soft-skill channel, interventions that skip it will have limited positive spillovers and their gains will not persist.
The counterargument, of course, is that the model does not strictly predict that interventions outside the soft-skill channel cannot help.
- If teachers are time-constrained, emphasizing reading more could lead to negative spillovers from the NULP onto non-targeted subjects. This is potentially a major concern – for example, Fryer and Holden (2012) find that incentivizing performance on math tests leads to improvements in math ability but declines in reading ability.
While this is a legitimate concern, it looks like the NULP did not suffer from this problem.
Now that we have the results, I think #1 is probably not a practical consideration in this context and at this grade level. #2 just doesn’t make strong predictions. So that leaves us with #3 as the only viable theory.
This is great news, because we have evidence that the NULP does not cause significant declines in performance on other subjects. That addresses a common question people have about the huge reading gains from the program that are documented in Kerwin and Thornton (2018). Did they happen because teachers stopped teaching math, or put less effort into it? We now know the answer is “no”. That bodes very well for the potential benefits of scaling up the NULP approach across Uganda and beyond.
This post was originally published on the Northern Uganda Literacy Program blog, and is cross-posted here with permission.