When people hear of research and the scientific method, international aid may not be the first thing that comes to mind. For the Millennium Challenge Corporation, however, it is the first and only thing on its mind.
It is a common belief that aid to foreign countries is a waste: money falls into the wrong hands, and volunteer efforts prove fruitless. Over the past decade, studies have been conducted to test this belief and assess the effectiveness of aid initiatives, whatever the verdict may be. Some analysts hope the outcomes of these experiments will help guide policy.
One of the biggest studies conducted thus far comes from the US foreign-aid agency, the Millennium Challenge Corporation. One of MCC's larger projects focuses on farmer training in countries such as Armenia, El Salvador, and Ghana. After much observation, the MCC reported that the skills and education taught to the farmers did help them sell more products, but did little to actually reduce their poverty levels, for reasons the agency cannot yet explain but is now at least aware of.
How exactly do organizations such as the MCC and universities use the scientific method to study the effectiveness of aid? Think back to elementary science. The most basic experiment has two groups, an experimental group and a control group, both chosen at random. In development research, these 'groups' are actually groups of people: communities, villages, families. The experimental group is enrolled in the aid project (for example, one testing the effectiveness of bed nets in preventing the spread of malaria): one group is given the nets while the other is not. This part of the process has created some uproar within the research community. Jeffrey Sachs, a sustainable-development economist at Columbia University, finds such trials unethical, since they withhold much-needed assistance from a group of people for the sole purpose of data collection. Other scientists see the entire concept of analyzing aid programs as destructive, because it may prematurely cut a new program without giving it the chance to grow. Rachel Glennerster, a director of the Abdul Latif Jameel Poverty Action Lab at MIT, sees it differently. For her, randomized controlled trials are in fact underused and are the most effective of the available options for weeding out failing programs.
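For readers unfamiliar with the design, the randomized assignment described above can be sketched in a few lines of Python. This is purely illustrative: the village names and malaria-incidence rates below are invented for the example, not data from any actual trial.

```python
import random

def assign_groups(units, seed=0):
    """Randomly split a list of units (e.g. villages) into
    equal-sized treatment and control groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def average_effect(treatment_outcomes, control_outcomes):
    """Estimate the program's effect as the difference in mean
    outcomes between the two groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_outcomes) - mean(control_outcomes)

# Ten hypothetical villages, half of which receive bed nets.
villages = [f"village_{i}" for i in range(10)]
treated, control = assign_groups(villages, seed=42)

# Hypothetical malaria-incidence rates measured after the trial:
treated_rates = [0.12, 0.10, 0.15, 0.11, 0.09]
control_rates = [0.25, 0.22, 0.28, 0.24, 0.26]

# A negative effect means lower incidence in the treated villages.
effect = average_effect(treated_rates, control_rates)
```

Because the split is random, any systematic difference in outcomes can be attributed to the program rather than to pre-existing differences between the groups; this is the property that lets evaluators like MCC or J-PAL draw causal conclusions.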
So what is to be done? Using such research methods gives organizations and donors a better look at what works and what doesn’t, a necessity for any entity to survive and grow. But should researchers be able to ‘randomly’ control the very survival of other human beings just to ensure an effective policy? When a perfected and efficient policy could ensure the survival of hundreds of thousands of people, then perhaps the answer is yes.
Even when the data are in and the findings published, how will policymakers and researchers become aware of them? The International Initiative for Impact Evaluation (3ie), a nonprofit based in Washington, D.C., plans to launch a database that aims to remove bias from reviews of projects. Reports, both positive and negative, will be listed through the organization and made available to registered members seeking data to improve or analyze foreign-aid policies.
Such efforts are vital for any humanitarian organization that wishes to legitimize its ideas and goals. Without the money of donors, projects will go nowhere. Without evidence of success, there will be no donors. While the randomized research conducted by the MCC and similar groups may rest on ethically unstable ground, it provides them the sort of evaluations they need to improve their tactics and build successful initiatives. Even in terms of basic science, "negative results are integral to the research process…it is important for researchers and donors to become more tolerant of them," despite the instinctual fear of losing funding.
– Deena Dulgerian