
The Anti-Politics Machine by James Ferguson

Everyone familiar with Effective Altruism knows that “good intentions aren’t enough.”

If you want your charitable giving to mean something, you also need to measure your favorite program’s effects with good statistical data.

But we don’t always clarify that good intentions and accurate data still aren’t enough. You also need to know that you’ve collected the right data and asked the right questions, and these are both much, much harder than the introductory effective altruist material tends to let on.

I first picked up James Ferguson’s The Anti-Politics Machine a year ago, expecting to read about a failed development project that could have benefited from an evidence-based approach. But instead I found an intervention that could have been backed by every experiment in the world and still would have fallen apart, a program so profoundly shaped by the lens of “development economics” that its practitioners misinterpreted almost every facet of what they were doing.

In the interest of space, I’ll focus on the two ideas from the book that have shaped my thinking the most -- the bundle of assumptions and biases Ferguson calls “development discourse”, and his “anti-politics machine” critique of technocratic decision-making. I will end with some more subjective takeaways for the Effective Altruism (EA) movement.

I. Development Discourse

“The statistics are wrong, but always wrong in the same way; the conceptions are fanciful, but it is always the same fantasy.” [1]

In 1975, the World Bank released a report on Lesotho: a tiny, mountainous country surrounded on all sides by the much larger nation of South Africa. This report, written by and for outside “development experts”, set out to identify problems holding the country back that could be solved by simple, technical “interventions”.

The report portrayed Lesotho as a country “virtually untouched by modern economic development” after gaining independence, whose “traditional subsistence peasant society” had been disrupted by depleted soil and failed crops, compelling most of its young men to find work in nearby South Africa.[2] To solve this problem, the report made a variety of recommendations, many of which involved boosting agricultural productivity and connecting farmers in Lesotho to better markets to sell their crops and especially their livestock.

Was this a fair characterization of the situation? Ferguson, as an anthropologist studying the “development experts”, wants us to ask a slightly different question: how does the development community construct what it considers to be knowledge? When this report passed through committees, what tests did it have to pass to be accepted as accurate? What kinds of errors is the system set up to catch, and how does it catch them?

Ferguson’s description of the process is somewhat cynical, and some of the errors would be caught today, particularly the basic statistical ones. Since 1990, when the book was published, the rise of randomized experiments has revolutionized development economics. We now have a much firmer grasp of how to test causality, how to quantify effect sizes, and how to make reasonable statistical claims. In other words, the data points our narratives are built around have become much, much more reliable.

But the process by which we go from data to narrative is still relatively casual and poorly theorized, and this step was the main source of the World Bank group’s troubles. In the data-gathering process, they noted several facts:

  1. Most of the population in rural Lesotho grew crops, but they did not make very much income from them.
  2. More than 60% of the area’s young men were working in mines in nearby South Africa and sending back remittances.
  3. Many families had large flocks of underfed cattle. Even when money was tight, the team rarely observed cattle sales.

Each of these is true, but they need to be set within a larger narrative to inform the work of “development.” Instead of doing the ethnographic and historical work it would take to understand Lesotho’s particular history, political system, and culture, the World Bank’s team of experts substituted their notions of what “less-developed countries” are like[3] -- notions shaped as much by countries very different from Lesotho, and by the array of “solutions” the team had to offer, as by anything to do with the context they’d set out to study.

To the economists writing the report, the fact that cattle were not being sold was a clear sign of a market failure -- surely this meant that either the cattle were too low quality to sell, or the population did not have access to markets. Since this story (1) made sense of a data point that was otherwise confusing, (2) fit with economists’ intuitions for why a person might not sell their “product”, and (3) lent itself to being solved with standard tools (programs to improve cattle quality! programs to connect people with markets!), it seemed like a natural and parsimonious explanation of the facts. So natural and parsimonious, in fact, that the authors don’t seem to have thought to check whether it was actually true.

The same sort of jumps happened in interpreting the other two facts. The authors assumed that, as a “less-developed country”, Lesotho’s rural economy was driven by agriculture. The fact that income from crops was low was therefore a sign that the population were primarily “subsistence” farmers who could be “developed” with access to better agricultural tools. As further evidence for this theory, the failure of agriculture was a perfect explanation for why young men had recently (it was assumed) been forced to travel across the border for work.

While Ferguson focuses on Thaba-Tseka, I think the “development discourse” lens he describes is easiest to understand by imagining how it might function in a very different context. If the same World Bank economists were to study an American suburb, they might learn that:

  1. Most households grow fruits or vegetables in their yards, but make little profit by selling them.
  2. The majority of household breadwinners have a fairly long commute to work, which they complain endlessly about.
  3. People pour money into valuable assets they call “Roth IRAs”, but do not seem to sell them even when they could use the cash.

Should the World Bank conclude that suburban America is populated by subsistence farmers whose inability to grow good crops has forced them into long commutes? Are Roth IRAs merely a “product” that young Americans hold onto because they’re too hard to sell?

Through careful ethnographic work (using the research method of actually talking to people), Ferguson discovers that, while the three facts we noted were true, the report’s interpretation of them was rooted entirely in fantasy. Much like our hypothetical suburbanites, the people of Lesotho’s Thaba-Tseka region grew food, but they did not consider farming their main priority or source of income. Indeed, even in the best of years a typical household would grow less than half of the food it consumed, with most money coming from remittances sent back from the mines, or from services (or beer) sold to those with mining income.

The practice of mining in South Africa, far from being a recent response to deteriorating agriculture, had been a central part of Lesotho’s economy for more than a century. Villagers’ economic complaints rarely referenced agriculture (since they did not see themselves primarily as farmers), but rather focused on the labor practices and immigration laws they felt allowed the mining companies to exploit their work and keep them in poverty.

Similarly, the decision not to sell cattle was part of a complex traditional arrangement Ferguson labels “the bovine mystique”, in which cattle functioned both as a means of paying bridewealth and as a way of saving for retirement when one was finally too old or injured to work in the mines. The system itself was controversial among the villagers, but it’s difficult to imagine that access to markets was a primary concern when Ferguson’s interviewees knew 1) where the market was, 2) how to get there, and 3) the going price of cattle on a near-daily basis.

What I find so striking about the World Bank report (and Ferguson’s devastating deconstruction of it) is not that it’s a bad piece of development writing, but that, nearly fifty years later, it’s still a thoroughly unremarkable one. Economic development papers and talks tend to have a fairly fixed structure:

  1. A quick introduction to “the setting”
  2. Carefully gathered numerical data, with a variety of statistical arguments and robustness checks to show that one or two “main results” have been accurately reported
  3. A story that plausibly explains these numbers (either a potential mechanism for an effect, or an explanation of why the effect turned out to be null)

If these stories are challenged, it is not because there is no actual evidence for them, but because an economist in the audience has thought of their own preferred theory. If the speaker can find some data point that contradicts the questioner’s idea, this is thought to “confirm” the original story. Since audience members (who often have little specific knowledge of the region) are unlikely to ask questions like “what if this village just has an incredibly complicated set of social conventions around cattle that prevents their sale even without market barriers in place?” or “do the region’s economic challenges have more to do with this very specific regulation in South African immigration law?”, plausible-sounding stories that explain one or two numerical data points tend to gain traction in the literature whether or not they have anything to do with reality. Mike McGovern famously noted this trend in a review of two of Paul Collier’s books, writing:

“Much of the intellectual heavy lifting in these books is in fact done at the level of implication or commonsense guessing. And the common sense is surely not that of the inhabitants of the countries being dissected, but that of the highly educated elite located primarily in Western Europe and North America. In those passages where Collier does lay out the thinking behind his explanations, they are always coherent and plausible, but the chain of causal relations makes it evident how fragile these models typically are.” [4]

The World Bank report’s fundamental misdiagnosis of the challenges Lesotho faced formed the basis for a series of failed “development initiatives”, most notably the Thaba-Tseka Development Project, a joint venture funded by the Canadian International Development Agency, the World Bank, the Government of Lesotho, and the UK Overseas Development Ministry.

The project focused on providing technical solutions to the “problems” the World Bank report had identified: better agricultural techniques, easier access to markets, and increased government capacity to provide public goods. Each piece faced serious problems in execution, largely because interventions shown to have the sorts of “positive effects” randomized experiments might demonstrate elsewhere in Africa were not necessarily well suited to Lesotho’s unforgiving, mountainous terrain.

But even more seriously, the project was so enveloped in “development discourse” that nobody thought to question whether they were working on problems their “recipients” cared about, or merely the ones the “tools of development” were capable of solving. As Ferguson writes, “The promise that crop farming could be revolutionized through the application of a well-known package of technical inputs was so firmly written into the project’s design that it was difficult for those on the scene to challenge it, or even to confront it.” [5]

Perhaps the only thing that has changed since Ferguson wrote is that we now have better tools to identify these failures: the development literature continues to be littered with failed trials and interventions based on unchecked assumptions. One of the most famous is the British Department for International Development’s £90 million Tuungane project, whose Congolese incarnation sought to rebuild village governing institutions that the country’s civil war had destroyed. One of the most convincing explanations of its failure[6] is that it may not have been necessary to begin with: the implementers do not seem to have checked whether the institutions had actually been weakened by violence, and baseline reports indicated that residents were relatively satisfied with village governance before the project even started![7] More research is needed to clarify the situation -- research which might have been useful to carry out before spending £90 million on a “fix”.

Part of this, perhaps, comes from the usual overconfidence that other social scientists like to accuse economists of. But there are much bigger systemic problems at play. Development work tends to run on short timelines: grad students and postdocs need to publish quickly for their careers to advance, NGO funding runs on roughly five-year cycles, and charities (particularly in “high-risk” areas) face extremely high employee turnover rates. This limits the accumulation of institutional knowledge while pushing practitioners away from the time-intensive process of understanding a particular context and toward “getting results quick.”

Similarly, the recent introduction of experimental evidence to the development field is a wondrous thing, but the revolution has to continue: randomized experiments can tell us about the effect an intervention had somewhere, yet even the best methods of applying that kind of evidence to a specific context remain somewhat arbitrary and subjective. As EA begins to fund more complex (but potentially more effective) interventions, a key step will be to get a more systematic handle on how to gather evidence about specific places -- countries, states, even villages -- and how to match the tools we have to the people who might benefit from them.

II. The Trouble with Technocrats

“But even if the project was in some sense a ‘failure’ as an agricultural development project, it is indisputable that many of its ‘side effects’ had a powerful and far-reaching impact on the Thaba-Tseka region. [...] Indeed, it may be that in a place like Mashai, the most visible of all the project’s effects was the indirect one of increased Government military presence in the region” [8] 

As the program continued to unfold, the development officials became more and more disillusioned -- not with their own choices, but with the people of Thaba-Tseka, who they perceived as petty, apathetic, and outright self-destructive. A project meant to provide firewood failed because locals kept breaking into the woodlots and uprooting the saplings. An experiment in pony-breeding fell apart when “unknown parties” drove the entire herd of ponies off of cliffs to their deaths. Why, Ferguson’s official contacts bemoaned, weren’t the people of Thaba-Tseka committed to their own “development”?

Who could possibly be opposed to trees and horses? Perhaps, the practitioners theorized, the people of Thaba-Tseka were just lazy. Perhaps they “didn’t want to be better.” Perhaps they weren’t in their right minds or had made a mistake. Perhaps poverty makes a person do strange things.

Or, as Ferguson points out, perhaps their anger had something to do with the fact that the best plots of land in the village had been forcibly confiscated to make room for wood and pony lots, without any sort of compensation. The central government was all too happy to help find land for the projects, which it took from political enemies and put under the control of party elites, especially when it could use a legitimate anti-poverty program as cover. In Ferguson’s words, the development project was functioning as an “anti-politics machine” the government could use to pass off political power moves as “objective” solutions to technical problems.

A local student’s term paper captured the general discontent:

“In spite of the superb aim of helping the people to become self-reliant, the first thing the project did was to take their very good arable land. When the people protested about their fields being taken, the project promised them employment. [...] It employed them for two months, found them unfit for the work, and dismissed them. Without their fields and without employment they may turn up to be very self-reliant. It is rather hard to know.” [9]

Two things stand out to me from this story. First, the “development discourse” lens focused the practitioners’ attention on a handful of technical variables (quantity of wood, quality of pony) and kept them from noticing repercussions they hadn’t thought to measure.

This is a serious problem, because “negative effects on things that aren’t your primary outcome” are pretty common in the development literature. High-paying medical NGOs can pull talent away from government jobs. Foreign aid can worsen ongoing conflicts. Unconditional cash transfers can hurt neighbors who didn’t receive the cash. And the literature we have is implicitly conditioned on “only examining the variables academics have thought to look at” -- surely our tools have rendered other effects completely invisible!

Second, the project organizers somewhat naively ignored the political goals of the government they’d partnered with, and therefore the extent to which these goals were shaping the project.

Lesotho’s recent political history had been tumultuous. The Basotho National Party (BNP), having won the pre-independence elections of 1965 and led the country into independence in 1966, refused to give up power after losing the 1970 elections to the Basutoland Congress Party (BCP). Blaming the election results on “communists”, BNP Prime Minister Leabua Jonathan declared a state of emergency and began a campaign of terror, raiding the homes of opposition figures and funding paramilitary groups to intimidate, arrest, and even kill anyone who spoke up against BNP rule.

This had significant effects in Thaba-Tseka, where “villages [...] were sharply divided over politics, but it was not a thing which was discussed openly” due to a fully justified fear of violence.[10] The BNP, correctly sensing the presence of a substantial underground opposition, placed “development committees” in each village, which served primarily as local wings of the national party. These committees spied on potential supporters of the now-outlawed BCP and had deep connections to paramilitary “police” units.

When the Thaba-Tseka Development Project started, its international backers partnered directly with the BNP leadership, reasoning that sustainable development and public goods provision could only happen through a government whose role they viewed as primarily bureaucratic. As a result, nearly every decision had to make its way through the village development committees, which used the project to pursue their own goals: jobs and project funds found their way primarily to BNP supporters, while the “necessary costs of development” always seemed to be paid by opposition figures.

The funding coalition ended up paying for a number of projects that reinforced BNP power, from establishing a new “district capital” (which conveniently also served as a military base) to constructing new and better roads linking Thaba-Tseka to the district and national capitals (primarily helping the central government tax and police an opposition stronghold). Anything that could be remotely linked to “economic development” became part of the project as funders and practitioners failed to ask whether government power might have alternate, more concerning effects.

As we saw earlier, the population being “served” saw this much more clearly than the “servants”, and started to rebel against a project whose “help” seemed to be aimed more at consolidating BNP control than meeting their own needs. When they ultimately resorted to killing ponies and uprooting trees, project officials infatuated with “development” were left with “no idea why people would do such a thing,” completely oblivious to the real and lasting harm their “purely technical decisions” had inflicted.[11] 

Have any EA projects had this sort of unexpected political side effect? I think it’s genuinely hard to tell without further research, but the possibility is frightening. (There’s been a little bit of research on the quantitative side -- recent work[12] found, for instance, that GiveDirectly’s 2014 unconditional cash transfer trial increased community participation but did not change voting patterns, so at least in 2014 the Kenyan government wasn’t using the program to stay in power. Was this the right question to test? I am not sure, especially without a more qualitative survey to see if there are other avenues we should be worried about.)

III. Takeaways for Effective Altruism

So what do we do as effective altruists? I see three key takeaways.

The first is a clear need for more qualitative research. GiveWell makes some qualitative judgments about charities, but Ferguson’s work illustrates the need for qualitative evaluation of the interventions themselves to see if the underlying studies have captured all of the “right” variables.

Randomized experiments are really good at testing hypotheses, but by their very nature they can’t tell you about variables you didn’t decide ahead of time to measure. Are there significant side effects (positive or negative) we’ve missed from massive malaria net distributions? I don’t know, but if so they are not likely to be discovered by a bunch of Americans and Europeans sitting in a room and trying to guess the best things to measure. Rather, they’re probably already known (or suspected) by the people experiencing them, and a first step to finding out is going and asking them. (A second step is finding the right people to do the asking -- real expertise in qualitative research is a rare and valuable skill.)

Of course, qualitative research is messy and sometimes the people you interview are wrong or have other agendas. So once we have an “on-the-ground” hypothesis or concern, there will often be good reason to use a randomized trial or quasi-experimental method to test it or try to understand how much of a concern it might be! This sort of interdisciplinary approach is starting to gain traction in academia, but it has yet to be seriously applied in the EA sphere.

There’s another angle to this: Ferguson’s most incisive insights arise not from studying the people being “served”, but from studying the development practitioners themselves. Other social scientists have continued this trend, from McGovern’s An Anthropologist Among the Mandarins and Robinson’s How Different Social Scientists Think to Marchais, Bazuzi, and Lameke’s The Data is Gold, and We Are The Gold-Diggers and Omar Bah’s webcomic Mzungus in Development and Governments. Each new work illuminates the research process in new ways, and provides tools both to do better research and to identify potential weaknesses in the pre-existing literature.

I think one of the highest impact investments an Effective Altruist fund could make right now would be to hire a handful of trained anthropologists (or other outside experts in qualitative research / ethnography) to hang out in places like GiveWell or the Machine Intelligence Research Institute for a few years and really study how effective altruism works as a system. How are decisions being made, and how is evidence being used to make them? What does “EA discourse” help make visible and which problems and concerns does it hide from our view? How do the positionalities of typical EA researchers affect their views of what’s important or what’s plausible?

I have my guesses, and I’m sure you have yours. But I had my guesses about development economics, too, and I missed nearly everything Ferguson (and the authors mentioned two paragraphs up) uncovered. What more are we missing?

The second is an emphasis on local context. As funding gaps for “low hanging fruit” like malaria disappear, EA is going to have to focus on more complicated interventions, which are likely to be fairly context-specific -- after all, why should an agriculture program that works in the flattest parts of the Sahel be expected to work the same way in the Maloti Mountains?

Writing about several of the Thaba-Tseka project’s failed arms, Ferguson notes:

“Tanzania may be very different from Lesotho on the ground, but, from the point of view of a development agency’s head office, both may be simply ‘the Africa desk’. In the Thaba-Tseka case, at least, the original project planners knew little about Lesotho’s specific history, politics, and sociology; they were experts on ‘livestock development in Africa,’ and drew largely on experience in East Africa.” [13]

For any sort of context-specific intervention to work, an intimate knowledge of the specific history, needs, and geography of individual villages and regions is necessary. The development world has slowly taken steps in this direction, but it’s not clear to me that the EA community has a clear way of acquiring, accessing, or working with this information.

I don’t think there’s a magic bullet for this problem, but in the long run any solution will probably need to involve a) on-the-ground, qualitative research and b) real representation in the EA network from the areas EA organizations are interested in working in. The development industry has a shameful history of infantilizing and ignoring the opinions of “locals”[14], and I think the conversations I’m starting to see in EA about diversity and representation of different parts of the Global South need to continue if we’re going to build enough serious knowledge of local contexts to direct funding effectively.

The third is a continued need to take politics seriously. This is one of the most challenging issues in charitable giving: when is it okay to work with a government doing terrible things to deliver humanitarian aid? To what extent does an NGO feeding the hungry lend its legitimacy to or cover for an authoritarian regime’s misdeeds?  I don’t have anything close to a full answer (and I don’t think anyone does), but Ferguson’s work exposes a possibility I hadn’t thought of before, in which “technical” and “apolitical” projects can expand the power of the state in unforeseen and potentially dangerous ways.

After writing The Anti-Politics Machine, Ferguson largely gave up on the idea of charitable or state-based aid. (Understandably, I think, given that he spent most of a decade watching its most horrific side effects first-hand.) It’s ironic, then, that I think his book’s practical value is greatest to those of us who still hold onto hope in its possibilities. May we have ears to hear the voices telling us where our work has fallen short, and eyes to see what it could become.