If you want to bring in the big bucks for your nonprofit, it helps to show hard evidence that your group is having an impact. Right?
Well, yes and no.
There have been a number of studies lately on just how much such evidence matters to donors. The results are surprising: they suggest that "proving" what a good job your group is doing doesn't necessarily convince people to support that work.
Caroline Fiennes analyzed the results of some recent studies for Third Sector. Here are a couple of highlights from the piece:
A paper published last year reported on an experiment at a US charity, Freedom From Hunger. It divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects – except that the final paragraph stated (truthfully) that "rigorous scientific methodologies" had shown the positive impact of FFH's work.
Donations were barely affected. The mention or omission of scientific rigour had no effect at all on whether someone donated. It also had only a tiny effect on the total amount raised. People who had supported that charity infrequently were not swayed. However, people who had previously given a lot – more than $100 – were prompted by the material on effectiveness to increase their gifts by an average of $12.98 more than those in the control group.
The last point is key: This study suggests that evidence of impact has an affirming effect on existing donors, prompting them to expand their giving. We see this all the time in our reporting: Donors will make an initial investment, see how it goes, and then give at a higher level if they think their money is being well spent. And many do want to see evidence.
Fiennes next moves on to ratings:
A separate study in Kentucky looked at whether donors give more when there is an independent assessment of the charity's quality. Donors were each approached about one charity from a list; each charity had been given a three or four-star rating (out of four) by the information company Charity Navigator. Half the donors were shown the rating; the other half weren't. The presence of the ratings made no meaningful difference to their responses.
Well, that's interesting, and hardly what the folks at Charity Navigator might want to hear. But then Fiennes mentions a third study:
It was a multi-arm, randomised, controlled test in which a large number of US donors each received appeals from one charity out of a set of charities that had various Charity Navigator ratings. Half of the appeals included the charity's rating; the other half did not.
The overall effect of presenting the information was to reduce donations. Showing the ratings brought no more benefit to the high-rated charities than not showing them. For charities with a rating of less than four stars, showing the rating reduced donations; and the lower the rating, the more it reduced donations.
So it looks like those ratings do matter after all. But they matter more like the letter grades restaurants get from the department of health. An "A" rating is not going to lure you into a restaurant, but a "B" may well keep you away. Fiennes says:
Donors appeared to use evidence of effectiveness as they would a hygiene factor: they seemed to expect all charities to have four-star ratings, and reduced donations when they were disappointed – but never increased them because they were never positively surprised.