The only thing I know about Africa is it's far

About halfway through my current 48-hour-long (!) voyage to Mulanje District in Southern Malawi, I am reminded of Chris Rock’s classic rant about how little he was taught about Africa in school, and how goddamned far away it is.

The only thing I know about Africa is it’s far – Africa is far, far away. Africa is like a 35-hour flight. So you know that boat ride was real long.

Here’s a link to the beginning of the whole bit about education, from his Bring the Pain standup act. It’s all great, but the part about Africa starts at 40:25. And Rock isn’t totally off-base. Sure, you can get to some parts of Africa from the US pretty quickly, but going to many parts of the continent takes a crazy long time.

Indeed, the most recent leg of my journey, from Atlanta’s scenic Hartsfield-Jackson International Airport to O.R. Tambo on the outskirts of Johannesburg, is the fourth-longest nonstop commercial flight in the world, and the longest operated by a US carrier. In all, I will spend some 27 hours on planes and another 14 or so in airports to get to my destination (plus several hours of other transit). This route, which gets me directly to Blantyre’s Chileka Airport, avoids up to a day of extra travel time and a long bus ride from Lilongwe down south.

A lot of this could clearly be done more quickly, in theory. For example, Johannesburg is an airline hub for Africa because it’s a big city that’s had a major airport for a long time, not because it’s a conveniently central location a la America’s Salt Lake City. And the effective remoteness of places like Southern Malawi has real consequences for people other than itinerant development economists: it raises prices of imports and weakens the region’s ability to export goods at a profit.

Alice Walton demonstrates how you should report on the crappy public health story of the week

Sue Dynarski links to a typically awful news article on a correlational public health study, along the lines of the “chocolate prevents cancer!” garbage that inspired this blog’s name. This one is about a claimed link between coffee and early mortality in people under 55. What’s striking about this case is that I had already bookmarked an article by Alice Walton on the same study that does a great job of presenting the results for a mass audience.

Here’s what Alice gets right:

1) Emphasize the preliminary and uncertain nature of the results. Walton does this consistently, from the very beginning:

Those of us under 55 who drink a lot of coffee – more than four cups per day – may be at greater risk of an early death. [emphasis added]

2) Put the study in the context of the existing literature. She points out that these results are inconsistent with the “mishmosh of coffee studies all pointing at different outcomes”:

But perhaps most uplifting of all is to remember that findings from a number of earlier studies contradict the new one and suggest that coffee is actually, at least on average, good for us. In fact, one recent study in The New England Journal of Medicine, following some 400,000 people, suggested that drinking up to six cups per day is actually linked to reduced mortality from all causes – 10% for men and 15% for women.

3) Talk about the theoretical mechanism behind the result – or the lack thereof:

One problem is that no one really knows what mechanism/s could explain the coffee-death link. Some are candidates, however: There’s coffee’s ability to boost epinephrine (adrenalin) levels in the body, its inhibition of insulin function (though this is controversial), and the fact that it may raise blood pressure and homocysteine levels, which are both known to increase heart risk (though since heart disease was not increased in the study, these seem less likely).

4) Distinguish between simple correlations and actual measurement of causal relationships. Chip Lavie, the study’s author, is totally up front about this. He’s not really an outlier, either – most researchers who do this kind of work realize that their results are a guide to future research rather than established, incontrovertible facts. The real culprits in overselling these correlational results are university PR departments.

Also keep in mind, the current study only points to a correlation, not cause-and-effect. And it only measured coffee consumption at one time-point, not many throughout the years. There could be a lot of other things at play. “It is impossible to know if this association is causal or just an association,” says Lavie, “so one does not want to over-state or over-hype the dangers of drinking more than 28 cups per week, although I personally will make an effort to keep my cups at 3 or less most of the time.”

5) Take the effect size with the crazy huge magnitude out of the lede, if you mention it at all. Dynarski points out that the 20%-50% range cited by the US News article on the coffee study is completely implausible:

Here is a sniff test of the magnitude of this estimate: a similar, correlational analysis showed that light smoking (less than half a pack a day) is associated with an increase in all-cause mortality of 30%. Heavy smoking (more than half a pack a day), an increase of 80%. These magnitudes are in the same ballpark as the coffee study, which immediately suggests to me that the coffee estimates are absurd.

To Walton’s credit, she puts that number deep in the article’s text and caveats it heavily. The news media needs more science writers like her, and far fewer who just copy-paste official press releases and add some hyperbole at the top.

Why do so many more people do research in Malawi than in Haiti?

Eric Chyn passes along this Marginal Revolution post in which Tyler Cowen asks “to what extent is the choice of venue for study due to what I will call ‘social science infrastructure’?” By “social science infrastructure” Cowen means having a pool of experienced field workers, plus a population that is accustomed to being studied and other less-tangible factors:

I don’t mean roads and bridges. I mean having trained armies of local assistants, data gathering and processing facilities, populations which are used to signing informed consent forms, medical clinics which understand how to work with social scientists and register data, and other less visible assets.

This list hits on some important factors, but it is missing at least a few key items:

  1. Existing data, in large quantities and available to the public. I have a colleague who works mostly with secondary data and was born and raised in Haiti, but doesn’t do research on the country because there’s nothing to work with. Less data means fewer existing papers, a smaller literature to draw on when framing your research (and selling it as publication-worthy!), and harder-to-run power calculations (see the sketch after this list). Running an experiment on a population that has never been studied is an exercise in wasting your time and money.
  2. Experience in the country, both on an individual basis and on the part of one’s colleagues and advisor. It’s much easier to do research in a place where you know the language, have friends, understand aspects of the culture, can get around, and so forth. A very close proxy for having these things personally is knowing someone who already does.
  3. Ongoing projects to work on. These are a great way to get experience in the country, to pilot survey questions, and even to run mini-experiments. This isn’t only for grad students – faculty tack their questions onto surveys and jump into collaborations as well.
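
Since power calculations came up in the first item, here is a minimal sketch, in Python, of why prior data matters for them. The formula is the standard one for a two-arm comparison of means; the baseline standard deviation and target effect size are made-up numbers of the kind you would normally pull from an existing survey.

```python
# A minimal sketch (my own illustration, not from any specific study): sizing a
# two-arm RCT requires a guess at the outcome's variability, which is exactly
# the kind of thing existing data provides.
from scipy.stats import norm

def sample_size_per_arm(sd, min_detectable_effect, alpha=0.05, power=0.80):
    """Required n per arm for a two-sided test comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the test size
    z_beta = norm.ppf(power)           # critical value for the desired power
    return 2 * ((z_alpha + z_beta) * sd / min_detectable_effect) ** 2

# Hypothetical: outcome measured in the standard-deviation units of a pilot (sd = 0.9),
# and we want to detect a 0.2 SD effect.
print(round(sample_size_per_arm(sd=0.9, min_detectable_effect=0.2)))  # ~318 per arm
```

The required sample size scales with the square of the outcome’s standard deviation, so a bad guess at that number – which is all you have in an unstudied setting – can easily double your budget or leave you badly underpowered.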

Cowen essentially answers his own question by noting that a large number of field RCTs are set in Western Kenya. Malawi is another development research hotspot, and Northern Uganda seems to be growing as one as well (and I have worked in both). Social science infrastructure is essentially the whole reason why there is so much geographic concentration of development research.

But he missed out on asking the broader question: why are there such huge gaps in social science infrastructure across countries? The answer is suggested by the three components I listed above. What ties them all together is path dependence. At some point fairly long ago, people decided to start doing research in country X. The reasons for this are some mix of the obvious practical issues noted by Cowen (security, language, etc.) and totally idiosyncratic things. This then makes the next set of projects drastically easier: all of a sudden people know someone working in a country, they have contacts to ask about whom to hire as employees, they can pilot their work while interning on another study, and so forth. Then those people have colleagues and students, and the cycle continues.

In the case of Malawi, one of the key early events was the beginning of the MDICP, a longitudinal study of contraceptive use and sexual health behaviors. A tremendous share of the foreign social science researchers who work in Malawi have close connections to that project (including myself – my advisor worked on the 2004 wave, and used it as a platform for her job market experiment). The same goes for the human capital needed to do research – to pick one example, IKI emerged from the local MDICP research staff.

It’s hard to overstate the strength of the path dependence effect in development research. The difficulty of blazing your own trail is a huge barrier to getting work done. From my perspective, the marginal cost of doing a project in Malawi is a tiny fraction of doing the same thing in Haiti. On the benefit side of the ledger, I’d agree that more people should study Haiti, but economics in particular puts strikingly little weight on the geographic origin of a given dataset. To first order, the economics profession thinks American data is all you need to study anyway; getting people to take any developing-country data seriously can be a challenge, so it’s not surprising that there’s a limited payoff to doing research in a new or under-studied locale.

Africa's next technological revolution?

Nancy Marker passes on this NPR piece about residents of Kenya’s Mathare slum using satellite images and GPS to put their community on the map and lay claim to their property. This is a hugely encouraging development even if it goes no further than this: activists are able to use these maps to shame officials into fixing problems, and to give homeowners some assurance that if they allow pipes to be laid across their plots, their claims to the land will still be recognized.

Even better, it could be one of the first steps toward what I think might be the next stunning technological leap for sub-Saharan Africa (and much of the developing world). Fifteen years ago, “stunning technological leap” and “Africa” rarely appeared in a single paragraph. Then cell phones happened:

Mobile phones allowed Africans to work around the perennial problems of poor infrastructure, badly-regulated utilities, and bureaucratic gridlock that had kept virtually all of them from having a telephone. The continent essentially skipped over home telephones entirely, moving directly to mobile phones. Africa now has more mobile subscribers than the US, and Kenya is the world leader in mobile phone payments.* Ever since the advent of Africa’s cell phone miracle, I have been pondering what made it happen and where the next huge breakthrough will happen.

The key element seems to be pent-up demand for a given service that is constrained (maybe by infrastructure costs, or by badly-functioning bureaucracies). When a technology appears that allows the constraint to be bypassed, a huge boom occurs. This fits the adoption of mobile phones to bypass landlines, and the adoption of M-Pesa to bypass formal banking (since banks are hard to reach and expensive to use). It also matches the emerging spread of mobile-phone internet access in Africa: people want to use the internet for a variety of reasons, but physical computers require large fixed costs and a decent amount of infrastructure, hence the move directly to phone-based internet.

I see telling hints of a similar pattern in terms of maps and street addresses. Anybody who has spent time in the developing world has seen how common it is for streets to have no name, or for the name to be unknown to most people. Even if some kind of neighborhood or street name exists, many buildings have no numbers, and the house and building numbers that do exist are very poorly documented. This makes finding places you haven’t visited before, or receiving mail or other shipments, a nightmare. The cost of fixing this system would be exorbitant – you would need to unify all the disparate numbering patterns and names already in use, and try to prevent new residences from popping up without acquiring appropriate numbers. In places like Mathare, you’d have new structures built before you even finished mapping and naming the community.

So this appears to be a case of high infrastructure costs preventing people from obtaining usable addresses. But the very system of addresses is a strange, pre-modern relic. Plug an address into a web mapping service, and it has to guess what the actual coordinates of that address are so it can give you a map. For my childhood home, Google Maps currently points to the wrong driveway. Moreover, adding new addresses can be problematic: what if there aren’t enough spaces between numbers? How to indicate multiple units in one complex? In the future, I suspect that we will hand out the GPS coordinates of our homes as often as we give people our street numbers (and by hand out, I mean send directly using a smartphone app).

Africans, many of whom lack addresses and need to walk all the way into the center of town to do things like receive mail, have an opportunity to get there first – skipping over street addresses entirely. High-quality GPS receivers already have a 3-meter resolution, enough for all but the most densely-packed slums, and advances in the system will allow ever-greater precision. If ground-based enhancements are added then even current technology allows centimeter-level precision. I’m not yet sure where the money is in promoting this innovation, but if I were an entrepreneur looking at growth areas in African markets, I’d begin with GPS coordinates.
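
As a rough illustration of how little precision a coordinates-as-address scheme actually needs, here is a small sketch in Python. The haversine formula is standard; the specific coordinates and the five-decimal-place shift are made-up numbers for illustration only.

```python
# A rough illustration (my own, not from the NPR piece): how many decimal places
# of latitude/longitude correspond to the ~3-meter resolution mentioned above.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    r = 6_371_000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical point near Nairobi; shift the latitude by 0.00003 degrees.
lat, lon = -1.2833, 36.8167
print(round(haversine_m(lat, lon, lat + 0.00003, lon), 1))  # ~3.3 meters
# Roughly five decimal places already pin a structure down to GPS-receiver resolution.
```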

*For my technologically-deprived US readers, I should clarify that in Kenya you can use your phone to pay for things and transfer money, and this has been possible since 2007. For anybody reading this outside the US, I should note that we still use paper checks to send money and pay for things; it’s like time-traveling back to the 1930s.

Fact-checking Trading Places

With modest embellishments, the movie’s depiction of commodities trading is accurate and realistic, according to the latest Planet Money episode. I was surprised.

The end of the episode also has a neat little explanation of how Billy Ray and Winthorpe’s short-selling scheme works. It’s also described in this article. The key detail is that you can sell a promise to deliver FCOJ (or anything else being traded) at some future date and price. Buyers who expect the market price to rise above the agreed-on price will jump at the chance to take the other side – and if the price falls instead, you can buy back your obligation cheaply and pocket the difference. The protagonists, knowing that the rally in orange juice is based on faulty information, win big and simultaneously bankrupt Duke & Duke. Awesome.
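
For the arithmetic behind “win big,” here is a toy sketch in Python. The prices and number of contracts are made up (they are not the movie’s actual figures); the 15,000-pound lot size is the standard FCOJ futures contract, but treat it as illustrative.

```python
# Toy numbers (not the movie's actual prices) illustrating the short side of a
# futures trade: sell high while everyone expects a shortage, buy back after the crash.
contracts = 100               # hypothetical number of FCOJ contracts sold short
pounds_per_contract = 15_000  # standard FCOJ futures lot size (illustrative)
sell_price = 1.40             # $/lb agreed on before the real crop report
buy_back_price = 0.30         # $/lb after the crop report comes out

profit = contracts * pounds_per_contract * (sell_price - buy_back_price)
print(f"${profit:,.0f}")      # $1,650,000 on these made-up numbers
```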

Negative externalities in marriage markets and polygamy

Democracy in America says that it’s time to think about legalizing polygamous marriages:

If the state lacks a legitimate rationale for imposing on Americans a heterosexual definition of marriage, it seems pretty likely that it likewise lacks a legitimate rationale for imposing on Americans a monogamous definition of marriage. Conservatives have worried that same-sex marriage would somehow entail the ruination of the family as the foundation of society, but we have seen only the flowering of family values among same-sex households, the domestication of the gays. Whatever our fears about polyamorous marriage, I suspect we’ll find them similarly ill-founded.

I’m an economist (well, a grad student, but still). I don’t claim to know the state’s rationale for anything, let alone marriage policy. My take is that policy tends to emanate from what the median voter wants, rather than from any kind of cost-benefit analysis. But if the question is about the costs and benefits of gay marriage and polygamy, then indeed that calculus is quite different: polygamous marriage differs sharply from homosexual marriage in that it causes a damaging imbalance in marriage markets.

To fix concepts, let’s be clear that when people say “polygamy” they mean “polygyny”, or the pairing of one husband with multiple wives. Polygyny is far more common, both across human history and today. It is sustainable only in fast-growing populations, for reasons that are obvious if you think about them for a second: the sex ratio among US adults is 1.00, meaning there is almost exactly one man for every woman. That means everyone can get married if we all happen to be straight and monogamous (and have sufficiently malleable standards for mates). Suppose the population were a fixed size, and the number of men exceeded the number of women by just one lonely guy. He’d be lonely indeed: the marriage market would not clear – you can only marry one person – and he’d be desperate. He would bid the effective price of a male partner down, way down, in order to get married at any cost. This would leave women with all the bargaining power, and still leave one man unmarried.*

If the population is growing, and men marry down in age, then polygyny can work out just fine. There’s always a new crop of young women to marry and everyone can find a spouse, even if some men take four wives. But population growth is very low in most of the developed world, which probably has much to do with its low rates of polygyny. In the modern US, any appreciable number of polygynous marriages would leave huge numbers of men out in the cold.
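
To put rough numbers on “huge numbers of men out in the cold,” here is a back-of-the-envelope sketch in Python. The population size, the share of polygynous men, and the number of wives per polygynous husband are all made-up figures, assuming a fixed population with an even sex ratio.

```python
# A back-of-the-envelope sketch (my own numbers): the surplus of unmarriageable men
# that polygyny creates in a fixed population with a sex ratio of 1.00.
men, women = 1_000, 1_000   # hypothetical stable population
share_polygynous = 0.10     # suppose 10% of men take multiple wives
wives_each = 4

polygynous_men = int(men * share_polygynous)             # 100
women_in_plural_marriages = polygynous_men * wives_each  # 400
women_left = women - women_in_plural_marriages           # 600
men_left = men - polygynous_men                          # 900
print(men_left - women_left)                             # 300 men who cannot marry at all
```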

This would have negative effects on men in marriages as well, as those excluded from marriage fight for partners: “Don’t want to marry me? What if I raise the kids and also hold down three jobs?” Sometimes these desperate attempts actually create polyandry: one of my favorite courses in college was taught by a professor whose research showed that polygyny in pre-modern China led to multiple-husband marriages. The fact that fundamentalist Mormons “discard their surplus boys” is not some random horrible thing they do, unrelated to their marriage practices. It is an essential component of polygynous marital patterns. The men in control of fundamentalist LDS communities do it in order to keep the marriage market favorable for themselves.

This is fundamentally different from gay marriage. Homosexuals and bisexuals are a small share of the US population, and homosexual identity is roughly equally common among men and women. That means that legalizing gay marriage, inasmuch as it encourages people to leave the heterosexual marriage market**, will not lead to imbalance. Legalizing polygamous marriages will.

The state has a compelling policy interest in discouraging plural marriages, inasmuch as such marriages are overwhelmingly polygynous and not polyandrous. A classical liberal would argue that you have the right to do whatever you like as long as you do not hurt others – this is why economists think that markets should operate free of government interference, most of the time. But sometimes what you do in a market harms others. If you build a bar next to my house, the noise affects me: it imposes a negative externality. In principle, even if all the abuses of women and children that so often accompany American polygamy were erased, the very institution of polygamy damages society as a whole. By leaving the remainder of the marriage market imbalanced, it harms not just the men left out of marriages but every male.

*This is not hypothetical – it’s a real issue that actually happens. Kerwin Charles documented it in some black communities in the US, where high rates of incarceration among men have left women at a severe disadvantage. As any aspiring medical resident can tell you, matching markets are brutal. Any imbalance between the two sides makes life very unpleasant.
**It also seems unlikely that homosexuals would marry the opposite sex in the absence of a ban on their marrying, whereas people forbidden to marry multiple people will likely still marry one.

Not whether but how much

Last week I was lucky enough to attend the Hewlett Foundation’s Quality Education in Developing Countries (QEDC) conference in Uganda, which brought together both Hewlett-funded organizations running education interventions and outside researchers tasked with evaluating the projects. (My advisor and I are working with Mango Tree Uganda to evaluate their Primary Literacy Project.) Evaluation was one of the central themes of the conference, with a particular focus on learning from randomized controlled trials (RCTs). While RCTs are clearly the gold standard for evaluations nowadays, we nevertheless had a healthy discussion of their limitations. One area that got a lot of discussion was that while randomized trials are great for measuring the impact of a program, they typically tell you less about why a program did or did not work well.

We didn’t get into a more fundamental reason that RCTs are seeing pushback, however: the fact that they are framed as answering yes/no questions. Consider the perspective of someone working at an NGO considering an RCT framed that way. In that case a randomized trial is a complicated endeavor that costs a lot of effort and money and has only two possible outcomes: either you (1) learn that your intervention works, which is no surprise and life goes on as usual, or you (2) get told that your program is ineffective. In the latter case, you’re probably inclined to distrust the results: what the hell do researchers know about your program? Are they even measuring it correctly? Moreover, the results aren’t even particularly useful: as noted above, learning that your program isn’t working doesn’t tell you how to fix it.

This yes/no way of thinking about randomized trials is deeply flawed – they usually aren’t even that valuable for yes/no questions. If your question is “does this program we’re running do anything?” and the RCT tells you “no”, what it’s really saying is that no effect can be detected given the size of the sample used for the analysis. That’s not the same as telling you that your program doesn’t work: the point estimate is still the best available guess of the effect size given the data you collected; the “no” just means that guess is too imprecise to rule out an effect of zero.

It is true that running a randomized trial will get you an unbiased answer to the “yes” side of the yes/no does-this-work question: if you find a statistically significant effect, you can be fairly confident that it’s real. But it also tells you a whole lot more. First off, if properly done it will give you a quantitative answer to the question of what a given treatment does. Suppose you’re looking at raising vaccination rates, and the treatment group in your RCT has a rate that is 20 percentage points higher than the control group, significant at the 0.01 level. That’s not just “yes, it works”, it’s “it does about this much”. This is the best possible estimate of what the program is doing, even if it isn’t statistically significant. Better yet, RCTs also give you a lower and an upper bound on what that how-much figure is. If your 99% confidence interval is 5 percentage points on either side, then you know with very high confidence that your program’s effect is no less than 15 percentage points (but no more than 25).*
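
Here is a minimal sketch, in Python, of what that “how much plus bounds” reading looks like in practice. The sample sizes and vaccination rates are hypothetical; the interval is the usual normal approximation for a difference in proportions.

```python
# A minimal sketch with hypothetical numbers: point estimate and 99% confidence
# interval for a treatment-control difference in vaccination rates.
from math import sqrt
from scipy.stats import norm

n_t, n_c = 1_200, 1_200   # assumed sample sizes per arm
p_t, p_c = 0.70, 0.50     # vaccination rates: treatment 20 pp above control

diff = p_t - p_c
se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
z = norm.ppf(0.995)       # two-sided 99% interval
lo, hi = diff - z * se, diff + z * se
print(f"effect = {diff:.2f}, 99% CI = [{lo:.2f}, {hi:.2f}]")  # ~[0.15, 0.25]
# The headline is the 20-point estimate and its bounds, not just a significance star.
```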

I think a lot of implementers’ unease about RCTs would be mitigated if we focused more on the magnitudes of measured impacts instead of on significance stars. “We can’t rule out a zero effect” is uninformative, useless, and frankly a bit hostile – what we should be talking about is our best estimate of a program’s effect, given the way it was implemented during the RCT. That alone won’t tell us why a program had less of an impact than we hoped, but it’s a whole lot better than just a thumbs down.

*Many of my stats professors would want to strangle me for putting it this way. 99% refers to the share of identically constructed confidence intervals that would contain the true effect of the program, if you ran your experiment repeatedly. This is different from there being a 99% chance of the effect being in a certain range: the effect is a fixed value, so it’s either in the interval or not. It’s the confidence intervals that vary randomly, not the true value being estimated, so the probability statement is about the interval-generating procedure rather than about the effect itself. If that sounds like pure semantics to you, well, you’re not alone.
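
If the distinction still feels slippery, a quick simulation makes it concrete. This is my own illustrative sketch (hypothetical true effect and sample sizes): across many repeated experiments, roughly 99% of the intervals cover the fixed true effect.

```python
# Illustrative simulation: the "99%" is a property of the interval-generating
# procedure across repeated experiments, not of any single interval.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true_effect, n, reps = 0.20, 1_200, 5_000  # hypothetical effect, per-arm n, repetitions
z = norm.ppf(0.995)

covered = 0
for _ in range(reps):
    treat = rng.binomial(1, 0.50 + true_effect, n)
    control = rng.binomial(1, 0.50, n)
    diff = treat.mean() - control.mean()
    se = np.sqrt(treat.mean() * (1 - treat.mean()) / n +
                 control.mean() * (1 - control.mean()) / n)
    covered += (diff - z * se <= true_effect <= diff + z * se)

print(covered / reps)  # ~0.99: the intervals vary from experiment to experiment
```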

Are GMOs per se unethical? I doubt it

I had an interesting conversation at a barbeque last weekend at which a lot of the attendees were in Michigan’s SNRE program (“snerds”, in the campus lingo), and we got into talking about GMO foods. Snerds mostly dislike GMOs, whereas I tend to think they’re a good thing. One thing I tried to do was get at the range of different factors that cause people to oppose them, because I think they are too often confounded. I was especially interested in separating the question of ethics from all the other things people worry about. Here’s what we came up with:

  1. Monsanto. They produce a lot of GMO foods and seeds. I don’t know a ton about their business practices but they sound like jerks and monopolists.
  2. Pesticide and herbicide use. Apparently you can use genetic engineering* to make crops that are more robust to these, and then they get used more, leading to overuse. I have heard the opposite claim as well – that GMO crops let us use less of these noxious chemicals.
  3. Unintended consequences. Who knows what could happen if these things get out into the wild?**
  4. Substitution away from other beneficial farming practices. If people use GMOs, they won’t move toward multicropping, which has ancillary benefits.
  5. Ethics. It is unethical to create organisms in the lab by combining genes from multiple species.

Ethics was the one point where the anti-GMO folks and I fundamentally disagreed. Not the ethics of what Monsanto does, which sound awful, but the basic ethics of modifying life. As I put it, I don’t see using fish genes to modify the genetic code of tomatoes as unethical in any basic sense, whereas one person I was chatting with absolutely did. In particular, she claimed that it was unethical to combine genes from different species.

Now, I’ve always been confused by a lot of what gets called ethics. For example, I once took a test about a (very unrealistic) scenario where you can either choose to kill one person or choose to let five die. The test varied the description, but I chose the same answer every time and apparently my answer – which seems like the only defensible one to me – is one that only 10% of people will ever pick. But my take is that there should be some general principle that underlies judgments of what is ethical and what is not. And I can’t see one driving the belief that adding genes from one species to another is unethical, for three reasons:

First, we already do tons of genetic modification of organisms, which very few people call unethical. If you want to create a pug, for example, the strategy is to: a) breed lots of small dogs; b) wait for mutants to show up with weird smushed faces that make it hard for them to breathe; and c) cross-breed those mutants to isolate that gene. We didn’t go get it from another species, but we waited for it to show up via mutation, which seems fundamentally identical. That might sound unethical (and maybe it is – most pugs I’ve met seem miserable) but if you replace weird faces with a herding instinct, you’ve got border collies.***

Second, we’re just talking about moving around chemicals in sequences of DNA. Most biochemistry isn’t inherently ethical or unethical – but specific acts, like reproducing smallpox, might be unethical, while producing a drug to suppress HIV infection might be very ethical.

Third, “species” is not a well-defined concept. Some taxonomists have put endless effort into defining where one species stops and another begins, but as Darwin pointed out in The Origin of Species****, there are no bright lines demarcating species. All living things exist on a gradient of relatedness, and there’s really no reasonable way to say when one species ends and another begins. My interlocutor said she was comfortable with the traditional definition of species: two groups of animals are of different species if, when they reproduce together, they produce infertile young. By this definition, however, grizzly bears and polar bears are the same species – which might leave neither eligible for endangered species protection.

One thing I want to separate here is the ethics of doing genetic modification of organisms from unethical acts during or resulting from the process. For example, cross-breeding dogs to develop new breeds is not unethical, but creating a breed with horrible congenital problems would be.

Now, it’s not impossible to defend the ethical claim that it’s wrong to modify one animal’s genome by using genes from another’s. But you’d have to come up with a definition of species you’re willing to stick to, and then you’d have to take the idea that this is unethical as a first principle: no mixing of kinds allowed. And that seems like an arbitrary rule, out of place in an ethical framework dedicated to preventing harm.

*What happened to calling this stuff “genetic engineering”, by the way? It sounds a lot more futuristic than “GMO”.
**There is one case in which we do know the answer, and that is concern over “terminator” genes that make later generations of an organism infertile. For simple reasons of natural selection, there is no risk of such genes becoming dominant in the gene pool. We can worry about a lot of stuff with GMOs, but all our crops ceasing to reproduce is not an issue.
***Some people claim that the herding instinct is actually a modified version of the hunting instinct, in which case the mutation is actually the part where they don’t kill certain prey.

****Technically On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. It’s a remarkably readable and fully-developed book – it covers lots of subtle details of evolution that I would have guessed were sorted out fairly recently.

How to stop eating away at your health capital when you're busy (or abroad)

Economists like to frame everything in terms of economics. One fairly hot topic these days is “health capital”, a stock into which we invest (by avoiding disease, eating well, exercising, and so forth) and which later pays a return (more success in school, higher wages, and what have you). Doug Almond’s fascinating paper about Ramadan is one demonstration of how big these returns can be: a small change in in utero nutrition can have massive effects on eventual physical and mental health, and impact eventual wealth as well. Health capital will also depreciate over time if you don’t maintain it: if you want to stay healthy you have to keep eating well and exercising.

When I go to the developing world to collect data, the cost of maintaining my health capital goes way up. It’s often difficult to find space or time to exercise, and depreciation (from pollution, water-borne illnesses, stress, excessive UV radiation, etc.) is so high that it often feels like I’m actively spending out of the principal of my endowment. Eight months collecting data in Malawi? That’ll be 5% of your overall health, please.
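
For what it’s worth, the framing has a simple law of motion behind it. Here is a toy sketch in Python (my own numbers, not from the Almond paper or anywhere else): health capital next period is what survives depreciation plus whatever you invest, and when depreciation jumps and investment collapses, you really are drawing down the principal.

```python
# Toy sketch of the health-capital framing: H_{t+1} = (1 - depreciation) * H_t + investment.
def health_next(h, investment, depreciation):
    return (1 - depreciation) * h + investment

h = 100.0  # hypothetical starting stock of health capital
# Hypothetical fieldwork months: elevated depreciation, almost no investment.
for month in range(8):
    h = health_next(h, investment=0.1, depreciation=0.008)
print(round(h, 1))  # ~94.6 – roughly the "5% of your overall health" joked about above
```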

I’m sure I’m not unique in this problem: most people lose access to their preferred diets or exercise regimens for various reasons at some point, and being short of the time, space, or equipment needed for a workout is a common complaint. I’ve never had much in the way of a solution to this problem before, but this NYT piece points to a new workout that takes only seven minutes – and is summarized in a single nifty diagram.

A seven-minute miracle workout is no surprise: I have various friends who, between them, swear by at least half a dozen different magic workouts. The unexpected thing here is that this one appears to be based on (some) actual science. The linked article is a review, but it references previous research that not only suggests the approach may work, but actually appears to test it directly.*

I plan to try this out during my next trip to Africa – tentatively scheduled for mid-June.

*They overstate their case a bit, making claims that are supported by references to similar exercises. This just seems to be crying out for an experiment.

Should we keep providing foreign aid through governments?

The default process for providing foreign aid is to direct the money through country governments. I’ve long had my doubts about doing things this way: most of the problems that people like Bill Easterly and Dambisa Moyo attribute to aid really boil down to the fact that, when aid money is directed to governments, it becomes a fungible, capturable resource that is quite similar to natural resource proceeds.

Now it seems there are increasing challenges to that default from across the development world. In a great interview about cost effectiveness, Bill Gates highlights Somalia, which has no functioning government but fairly high vaccination coverage, and Nigeria, which has much less success with vaccines:

Well, in Somalia they’ve given up using the government. The money goes through the NGOs. Whereas in Nigeria they’ve designed a system where the federal government buys the vaccines, the state government provides the electricity, and the one level down below that provides the salaries. It’s just a bad design. You know, the north of India has very poor vaccination rates, so we picked a state up there with 80 million people and we drove it from 30 percent to 80 percent. But they had a really good chief health minister and the federal government was providing lots of money and lots of good technocrats, so the skills were there, as long as you employed them in the right kind of system.

Gates takes a very nuanced view: it’s not that funding vaccinations through governments can’t work, but the conditions need to be right. Ethiopia has done well, but Nigeria has not. He’s also not saying that the difficulty of running vaccination programs through the Nigerian government is the only reason they have failed in Northern Nigeria – widespread urban legends about vaccines have played a major role – but the failings of the government are still an important factor.

In the latest edition of the Development Drums podcast (transcript here) Daron Acemoğlu and James Robinson discuss their book Why Nations Fail. I’m not broadly in agreement with their take on development, mainly because I don’t see their findings as actionable. But I agree strongly with the policy implication that Acemoğlu highlights, which is that providing aid through governments can be bad because it can support extractive institutions:

Daron Acemoğlu
But it’s a better formula than saying whoever is in power, we’re going to hand the power, the money to them.

Owen Barder
Is it implicit in what you are saying that some of the current aid modalities, particularly government to government aid, tend to reinforce the elites?

James Robinson
Absolutely, yes. I mean, I’d say our view was that you know, at the end of the day, that’s probably – if you asked where do all these development problems come from in Africa, are they created by the perverse incentives generated by the aid industry? Our answer to that would be no: they are much more deeply rooted in the history of these societies and you know, so sure you can find examples where aid kept in power, you know, Mobutu for another five years and he wouldn’t otherwise have been there but what did you get instead, you know?

We might not be able to do too much to promote beneficial institutions, but it’s pretty clear we can (and unfortunately do) support crappy ones, not by giving foreign aid, but by doing so through horrible governments. Don’t like what Mobutu is doing to the Congo? You don’t have to cut the people of the Congo off, just their government.

The obvious question is why we were giving aid through governments in the first place. There must be a reason, and any change to the process needs to consider the benefits of the current default as well as the costs. The basic argument I’ve heard, as Owen Barder put it during the Development Drums podcast, is “the thinking of providing aid through the governments is to try to build a stronger social contract between citizens and the state.” This claim is crying out for quantification: how much social contract strengthening actually occurs when aid goes through governments? Is that a valuable end in itself, and if so how much do people value it? What about eventual benefits of other kinds – do they happen? How much?

Given all the light being shed on the downsides of automatically sending aid through governments, it’s no longer enough to have qualitative evidence that sending aid through governments promotes their legitimacy. We need to know how much, and whether it’s worth the cost.

Hat tip: Amanda Stype for the Gates interview.