Do field experiments put the method before the question?

Chris Blattman has another post – his strongest and most pointed yet – telling people to get out of field experiments because the market is crowded:

Most field experiments have the hallmarks of a bad field research project. There are four:

  1. Takes a long time. Anything requiring a panel survey a year or two apart, or a year of setup time, suffers from this problem.
  2. Risky. There are a hundred reasons why any ambitious project may fail, and many do.
  3. Expensive. This is driven by any kind of primary data collection, but especially panel or tracking surveys, and especially any Africa or conflict/post-conflict research.
  4. High exit costs. This is where experiments excel. If your historical data collection, formal theory, or secondary dataset isn’t working for you, you can put it aside. If your field experiment goes poorly, not only are you stuck with it to the bitter end, but it will take more not less time.

These are all important considerations for any research project, but I was more struck by his aside that he is “suspicious whenever someone puts the method before their question.” Do people running field experiments put the method first? I would argue that they do so less than folks who write (credible) non-experimental social science papers.

The procedure for writing a paper based on a field experiment is 1) think of something you’d like to study, and 2) try to come up with an experiment that lets you study it. What about non-experimental papers? Academic lore holds that the current process for economics grad students is 1) sit in a room for four years trying to think of a natural experiment* that happened somewhere, and 2) write a paper about whatever that natural experiment is. This is why, for example, we know a lot about the financial returns to education for students who would drop out of school if not for rules that force them to stay until age 17, or the benefits of getting a GED for someone who barely passes the necessary exam.

Let’s take a concrete example: the price elasticity of labor supply. Should we care about the labor supply of cab drivers? Trick question – it doesn’t matter whether taxi driver labor supply is interesting! What’s important is that variations in weather mean that their effective wage changes at random, so we can study their labor supply. That’s where the light is.
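
For readers who want the definition: the elasticity in question is the standard one – the notation below is added here, not taken from the original post –

```latex
\varepsilon \;=\; \frac{\partial L / L}{\partial w / w} \;=\; \frac{\partial \ln L}{\partial \ln w}
```

where L is hours worked and w is the hourly wage: the percent change in labor supplied per one-percent change in the wage.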

In contrast, experiments let us pick our topic and then study it. For example, Jessica Goldberg ran an experiment studying the exact same issue (how labor supply responds to changes in wages) but with a representative sample of Malawians, doing the most common kind of paid work in the country (informal agricultural labor). This kind of work is also common in much of sub-Saharan Africa. Her method – a field experiment – let her pick the topic of her research, and as a result what she studied is the most important category of labor across a wide region.

I’m not saying that Camerer et al.’s cab driver paper isn’t good research, or even that Goldberg’s paper is better. My claim is much simpler, and very hard to dispute: the former paper’s topic was driven far more by its method (finding a useful instrument) than the latter’s was by its method (designing a targeted experiment).
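
As an aside for readers who want to see the mechanics, here is a minimal simulation of what “finding a useful instrument” buys you, in the spirit of the weather-shifts-wages story above. It is only a sketch: every variable name and parameter value is invented for illustration, and it does not reproduce either paper’s actual design.

```python
# A minimal sketch of the IV logic behind "finding a useful instrument."
# Hypothetical setup: weather shifts the effective wage as-good-as-randomly,
# so it can instrument for the wage in a labor-supply regression. All names
# and parameter values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
true_elasticity = 0.3  # assumed wage elasticity of labor supply

rain = rng.normal(size=n)            # instrument: random weather shock
taste_for_work = rng.normal(size=n)  # unobserved confounder
log_wage = 0.5 * rain + 0.4 * taste_for_work + rng.normal(size=n)
log_hours = (true_elasticity * log_wage
             + 0.6 * taste_for_work + rng.normal(size=n))

def ols(y, x):
    """Bivariate OLS of y on x with a constant; returns (fitted, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, beta[1]

# Naive OLS is biased upward: taste_for_work raises both wages and hours.
_, ols_est = ols(log_hours, log_wage)

# Two-stage least squares: stage 1 keeps only the weather-driven part of
# the wage; stage 2 regresses hours on that exogenous variation.
wage_hat, _ = ols(log_wage, rain)
_, iv_est = ols(log_hours, wage_hat)

print(f"true elasticity: {true_elasticity:.2f}")
print(f"OLS estimate:    {ols_est:.2f}  (confounded)")
print(f"2SLS estimate:   {iv_est:.2f}  (approximately recovers the truth)")
```

The contrast is the point: the estimate’s credibility comes entirely from the instrument being as-good-as-random, which is why the search for such variation, rather than the question itself, so often ends up driving the topic.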

There are exceptions to this pattern – sometimes a government agency or an NGO has an experiment they want to run that falls into your lap, for example, and some IV-driven research starts from a question and then searches for an instrument. In general, however, the claim that the experimental method comes before the topic is misleading. That’s a key advantage of running experiments: an RCT lets us choose where to shine the light instead of constantly standing under streetlamps.

I suspect this isn’t what Blattman was driving at – there are topics where observational research is more appropriate (or even the only option, e.g. almost anything in international trade) and we shouldn’t stop studying them just because we can’t do RCTs on them. Nevertheless, the knee-jerk assumption that RCTs are methods-driven rather than topic-driven is pretty common, and, I think, wholly misguided.

5 thoughts on “Do field experiments put the method before the question?”

  1. Hey now, I resemble that, um, asterisk in your second paragraph!

    As a counter-example to your last point, I relate the following indirect quotation from someone who might be considered an authority: “As a methodological matter, if you’re going to be working in [developing country X], you should randomize something.” I took Blattman’s aside to be literally addressing the situation as described: a student walks into his office and starts out with “I want to do a field experiment for my dissertation” – in short, starting the conversation with a method, not an actual topic of interest. Sort of like the anecdote from Dan Silverman in our third-year seminar: “I was at [somewhere or other] and student meeting after student meeting started with, ‘I’ve got a model.'”

    None of which is to claim that I actually disagree with your main point 🙂

    1. That asterisk was going to explain what a natural experiment was, but it is instead left as an exercise for the reader.

      I’ll concede that Blattman is kind of an outlier within development (and I’m guessing he enjoys his newfound role of anti-RCT iconoclast – almost everyone else who agrees with him is in an older generation of economists). So this has evolved into a defense against other applied microeconomists.

      I never really made the link before just now, but I am definitely one of those people with a model (and have been described as such to third parties). Worst of both worlds?

  2. More than anything, this is just that economists don’t like methodology-driven research. Economists would much rather hear “I have a question” than “I have a dataset/experiment/model.” (This is of course something I struggle with as well, having worked on fail(ing) projects that fall into the first and third bins.)

    1. This is a totally reasonable attitude – I’m definitely skeptical when our fetish for identification pulls us away from real questions of interest and toward questions that are easy to answer. But I’m convinced the problem is concentrated in IV-based empirical research, not experiments. Consider, for example, the history of the Vietnam draft lottery instrument: http://www.technologyreview.com/article/508381/the-natural-experimenter/ Clearly a case of convenience. Maybe it’s a question we were interested in anyway? I’m not sure.

      1. I guess ideally you have a question and then find the perfect way to answer it. Part of the “credibility” stuff is narrowing the range of questions: we limit ourselves to questions that we can answer (near) perfectly. But still economists want the focus on the question rather than the way of answering it.

        The obvious flaw here is that most economists have a relatively narrow toolkit, and so the questions that they try to answer will be limited by their methodological competence.
