"Evidence-based" Philanthropy Gone Wrong: The Myth of How Small Schools Failed

Philanthropists these days often talk about “evidence-based” or “effective” philanthropy. They view the philanthropy of old as misguided, too often driven by the donor’s whims rather than by evidence about what works. The thinking goes that, just as success in business or finance depends on a relentless focus on results, philanthropy should bring that same evidence-based approach to solving social problems.

Evidence-based philanthropy is a wonderful idea, one that we ourselves constantly strive to achieve at the Laura and John Arnold Foundation. But foundation officers should be aware of an insidious danger: thinking that they are making “evidence-based” decisions when, in fact, the evidence is not rigorous or reliable at all. Indeed, poor-quality “evidence” can flatly contradict more rigorous evidence.

A good example of this is the Small Schools Initiative, which launched smaller high schools all around the country with the support of the Bill & Melinda Gates Foundation. The conventional wisdom is that the Small Schools Initiative was abandoned after the evidence showed that shrinking the size of American high schools didn’t work (and, indeed, that school districts should beware of meddlesome philanthropists). But the most rigorous evidence consistently shows that the Small Schools Initiative had strong positive effects, especially for poor minority students. The real story here is that not all evidence is the same. If foundations want to be evidence-based, they have to pay close attention to the quality of the evidence, and should demand randomized trials wherever possible.

Rewind to how this all started. About 14 years ago, many philanthropists and school districts decided that high schools often turn into dropout factories because they are too large and impersonal, and that high schools would be more effective if they were smaller and more tailored to the needs and interests of individual students. Cities across the country (including New York, Chicago, Indianapolis, Oakland, and Sacramento) started closing large high schools and launching smaller schools instead.

Here’s where inadequate “evidence” came into play. The initiative was initially evaluated by a couple of large research firms, and the results seemed disappointing (see here and here). But those evaluations relied on the weakest possible evidence: mere “descriptive statistics” reporting how students at given schools were doing compared to district averages (see pages 11, 21, or 26 of the final report, for example).

The problem is that students and families can select which schools they want to attend. If some parents choose smaller schools in the hope of getting a struggling student on the right track, that choice will contaminate any comparison with students whose parents were happy with their existing schools and did not move. It becomes hard to tell whether a difference in outcomes is due to the school itself or to the kinds of families who chose it.

In other words, the initial evaluations did not really provide “evidence” at all, not in any rigorous sense.
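To see how self-selection can make good schools look bad, here is a minimal simulation sketch with entirely made-up numbers: a hypothetical district in which the small schools genuinely raise achievement, but the students who enroll in them start out weaker. The naive comparison of the small-school average to the district average, the kind of descriptive statistic the early evaluations relied on, comes out negative anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical district of 10,000 students; all numbers are made up for illustration.
n = 10_000
baseline = rng.normal(50, 10, n)               # prior achievement on an arbitrary scale

# Struggling students are more likely to be moved to a small school:
# the probability of enrolling falls as baseline achievement rises.
p_small = 1 / (1 + np.exp((baseline - 45) / 5))
small = rng.random(n) < p_small

# Suppose the small schools truly help, adding 3 points on average.
true_effect = 3.0
outcome = baseline + np.where(small, true_effect, 0.0) + rng.normal(0, 5, n)

# What "descriptive statistics" report: the small-school average vs. the district average.
naive_gap = outcome[small].mean() - outcome.mean()
print(f"True effect of the small schools:      +{true_effect:.1f}")
print(f"Small-school mean minus district mean: {naive_gap:+.1f}")
# Because weaker students sorted into the small schools, the naive comparison
# comes out negative even though the schools genuinely helped.
```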

Before long, though, much more rigorous evidence started to come in, all of it showing that the Small Schools Initiative was an impressive success, especially given that it was serving such disadvantaged students. A team of prestigious economists from MIT and Duke found that smaller high schools produced “large and consistent score gains” on New York State tests, an “increase in credit accumulation, attendance, and graduation rates,” and made students “considerably more likely to enroll in college and . . . less likely to require remediation in reading and writing.”

Another team of economists funded by the U.S. government’s Institute of Education Sciences independently studied the Chicago Small High School Initiative, where almost all of the students were poor minorities. The researchers found that “small schools students are substantially more likely to persist in school and eventually graduate.”

Meanwhile, the research firm MDRC was hired to look at the New York Small Schools of Choice program. The New York study would be based on the most rigorous design: randomized lotteries that were used throughout New York City to give high school students a shot at going to the school of their choice.
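Why does a lottery fix the selection problem? Because chance, not parental choice, decides who gets in: among applicants to an oversubscribed school, lottery winners and losers are alike on average, so the difference in their outcomes can be credited to the school itself. Here is a minimal sketch of that logic, again with made-up numbers; it illustrates the general idea behind a lottery-based comparison, not MDRC’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same made-up setup as before: 10,000 students, with weaker students more
# likely to apply to a small school, and a true school effect of +3 points.
n = 10_000
baseline = rng.normal(50, 10, n)
applied = rng.random(n) < 1 / (1 + np.exp((baseline - 45) / 5))

# The lottery: among applicants, admission is decided by pure chance.
won = rng.random(n) < 0.5
enrolled = applied & won
outcome = baseline + np.where(enrolled, 3.0, 0.0) + rng.normal(0, 5, n)

# Compare lottery winners with lottery losers, applicants only.
lottery_estimate = outcome[applied & won].mean() - outcome[applied & ~won].mean()
print(f"Winners vs. losers among applicants: {lottery_estimate:+.1f}")
# Both groups come from the same self-selected pool of applicants, so the
# difference isolates the school effect (about +3, up to sampling noise).
```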

In 2010, MDRC released the first round of results, tracking more than 20,000 students at over 100 schools. Importantly, these schools “served a population that almost exclusively comprised low-income students of color.” Students who enrolled in small schools were substantially less likely to receive failing grades, more likely to accumulate credits on time, more likely to attend school, and a full 6.8 percentage points more likely to graduate, a difference that is “roughly one-third the size of the gap in graduation rates between white students and students of color.”

MDRC then released reports in 2012, 2013, and 2014 that followed additional students over longer periods of time. If anything, the results kept getting stronger: by 2013, three cohorts of students had shown an average 9.5 percentage point improvement in their graduation rates, and the effect was even stronger (13.5 points) for black male students, a group that has traditionally been hard to reach. Moreover, the 2014 update found that small-school graduates were 8.4 percentage points more likely to have enrolled in college, and once again the effect was stronger for black males (11.3 percentage points).

In short, bad evidence showed that the Small Schools Initiative had failed, while good evidence showed that it was amazingly successful for poor minority students. This dramatic contradiction highlights the danger for foundations that want to be “evidence-based.” Not everything with a line graph counts as “evidence,” and philanthropists need to be more wary of research firms, academics, or evaluation officers who are peddling descriptive charts rather than the results of rigorous randomized trials. Unless they distinguish good evidence from bad, philanthropists will be led in the wrong direction.

Stuart Buck is Vice President for Research Integrity at the Laura and John Arnold Foundation.