Tag Archives: rcts

Tom Kane on Education RCTs

“If our goal is to change behaviour and drive policies towards more effective solutions, what we have done so far is a complete failure. The people who are running the What Works Clearinghouse don’t even have a theory [of how evidence would affect policy], or to the extent that they have a theory, it’s been proven wrong. … We’re just deluding ourselves if we think the 5-year, $15 million studies are having any impact whatsoever.”

That’s Tom Kane (somewhat echoing Lant) on the Education Next podcast. His preferred alternative to the RCT-plus-systematic-review approach, though, has nothing to do with crawling the design space. Rather, it’s doing much more quick-turnaround quasi-experimental research using the multitudes of outcomes data now being collected in the US for teacher and school accountability purposes.

Posted in Aid & Development | Comments closed

Metrics, evidence, and RCTs in global health: An Interview with Vincanne Adams

Vincanne Adams, PhD, is Professor of Medical Anthropology and Vice Chair of the Department of Anthropology at UCSF. She is the author of numerous books. Read More

Posted in Featured Content, General Global Health, Hub Originals | Comments closed

Effective Altruism, RCTs, NGOs, & the Government End-Game

Good Ventures just gave a $25 million unrestricted grant to GiveDirectly on the advice of GiveWell. That’s a lot of good news in one sentence, but it’s not even the best part. GiveWell buried the lede when, around paragraph 20, it mentions: “GiveDirectly plans to discuss partnerships with the following types of institutions: donor aid agencies; developing country governments (national and local). (For example, several governors in Kenya have already approached GiveDirectly about running cash transfer programs in their counties.)”

That’s what it’s all about. To really get sustainability and scale in social policy you need government involvement. That’s why the best NGOs combine immediate direct service delivery, in places where government just doesn’t have the capacity to deliver; support to interested governments to build that capacity for the longer term, often at the local level, where administrators struggle to actually implement well-designed central policy documents; and innovation in new models of service delivery that governments might later adopt, of which GiveDirectly is clearly a strong example.

Posted in Aid & Development | Comments closed

New evidence on (lack of) external validity

“Site selection bias” can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized controlled trials involving 8.6 million households across the United States. Predictions based on rich microdata from the first 10 replications substantially overstate efficacy in the next 101 sites. Several mechanisms caused this positive selection. For example, utilities in more environmentalist areas are more likely to adopt the program, and their customers are more responsive to the treatment.

Posted in Aid & Development | Comments closed

Ethical concerns with cervical cancer screening trials in India

Recently, the Indian Journal of Medical Ethics published an article by Dr. Eric Suba regarding ethical and scientific controversies about large-scale longitudinal randomized trials of various … Read More

Posted in Cancer, Featured Content, General Global Health, Hub Originals, Maternal & Reproductive Health, Social, Women & Children | Comments closed

5 Ways to Improve Your Impact Evaluation

Impact evaluations are supposed to tell us what works in development, and a lot of time and money goes into them. It’s unfortunate, then, when they fail to report their results clearly. One of the things I found most shocking, looking through a large database of impact evaluations, was how often academic papers omitted information that is critical for interpreting a study’s results and figuring out how well they might apply to other contexts. This blog post draws on data from over 400 studies that AidGrade found in the course of its meta-analyses. Here are five embarrassing things many papers neglect to report: 1) Attrition. It’s normal for some people to drop out of a study.

Posted in Aid, Aid & Development, HIV/AIDS, Hub Selects, Infectious Disease, Malaria, Politics, Publications | Comments closed

Evidence-based policy-making US-style

Based on our rough calculations, less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely. … Since 1990, the federal government has put 11 large social programs, collectively costing taxpayers more than $10 billion a year, through randomized controlled trials, the gold standard of evaluation. Ten out of the 11—including Upward Bound and Job Corps—showed “weak or no positive effects.”

Just in case you thought that there was any danger of the whole results agenda and RCT-fetishism taking over in American politics.

Posted in Aid & Development | Comments closed

Nicholas Kristof and Aid

By Arvind Subramanian – I am a big admirer of Nick Kristof, of the passion and concern that animate his books and columns, and of the must-do-can-do spirit that they embody. But sometimes his soft heart gets ahead of his hard head, leading to misleading and intellectually insupportable advocacy of foreign aid. A good example is today’s column.

Posted in Policy & Systems | Comments closed