How Might AI Impact Nonprofits and Foundations? Here's a Crash Course


Back in August, the Technology Association of Grantmakers (TAG) conducted a quick online poll of its members about artificial intelligence — aka AI. TAG asked its members two things: “What is the No. 1 question you're being asked about AI at your organization?” and “What would be most helpful for you right now in navigating the AI hype cycle?”

Fifty-four members replied, and their responses give us a solid window into where the philanthrosphere is with regard to AI right now. In a nutshell, respondents said they’re curious about how to incorporate AI into their processes and how to conduct due diligence when evaluating which AI tools to use, and expressed concerns about AI’s ethical risks, including data security.

Their No. 1 response, though, probably nails the real question for many of us when it comes to artificial intelligence: “What is AI, and should we care about it?” My colleague Paul Karon has reported on the many philanthropic funders that have contributed to the technology’s development, as well as on the controversy around AI and efforts to mitigate the dangers an unchecked AI landscape may present. But as the TAG survey appears to indicate, the sector also needs to understand the basics: First, just what this technology is, and second, how AI could both help and hinder nonprofits in the day-to-day pursuit of their missions.

In order to get some answers, I sat down with TAG’s outgoing Executive Director Chantal Forster. Drawing on that conversation and some additional research, here’s a primer on AI and nonprofits: what AI is, nonprofit and funder uses for it, and who is funding nonprofit AI tools right now.

A non-tech geek’s guide to AI

The first thing to know is that despite the current hype, artificial intelligence isn’t all that new: It’s been around for about 50 years, with increasingly widespread use since the 1980s. For instance, Forster told me that her father earned his master’s degree in artificial intelligence in the 1990s and used his expertise to create motorized robots that could detect and move around obstacles in steel mills. 

The second thing to know is that whether or not you’ve checked out ChatGPT, you’re already constantly using AI. When you see a page of “if you liked” recommendations on Amazon, or your word processing or email application fills in suggested words for you as you type, you’re dealing with AI. These versions of artificial intelligence are “predictive” — they’re using past actions or word choices to predict what a user will want next. The version of AI that’s causing equal parts excitement and panic today, though, is called “generative AI.” Forster said that this next generation of AI was developed in 2017, and it represented a big leap forward for the technology.

In the past, Forster said, AI programs could only understand a single word at a time. The new software, though, has what she called “a both/and awareness” of both single words within sentences and the context around those words. Take computers’ new ability to learn from context, add in today’s processing power, which allows software to harvest, store and manipulate vast amounts of data, and top that off with a big marketing campaign and the fact that generative AI’s simplest applications are easy enough for anyone to use. That’s how we’ve arrived at the situation we’re in today: a climate where everyone seems to be talking about AI while very few of us have a solid grasp on what it’ll mean for nonprofit work and society at large.

What can nonprofits and funders use AI for today?

Forster said that larger nonprofits are probably already using the simpler, predictive version of AI to do things like predicting the number of shelter beds they’ll need over a given weekend, factoring in variables like that weekend’s weather forecast. Generative AI’s uses, on the other hand, can include helping with tasks like composing fundraising form letters and writing and evaluating grant applications. 

When sending out fundraising letters, for example, generative AI can take mail merge fields to an entirely new level by personalizing every letter in a mass mailing. A single mass mailing to supporters could congratulate one donor on a child’s birthday, celebrate a married couple’s anniversary, and include a quick mention to yet another donor about the specific item or action their gift paid for. “All of that stuff can be automated with generative AI, and that is a huge time-saver for fundraisers,” Forster said.

Beyond fundraising letters, a quick online search revealed that companies are already marketing AI software to write and read grant applications. In other words, we’re already at the point where a nonprofit could use an AI program to generate a basic grant application, which might then be evaluated, at least initially, by the funder’s own AI program.

All of this raises a big question. As Forster put it: “If nonprofits are using [generative] AI to create grant applications, and foundation staff are using AI to understand grant applications, what's the point of the grant application?” Hence, Forster believes generative AI will significantly change grants administration. Nonprofits could also conceivably use AI to customize post-grant reporting, another way to save possibly hundreds of hours’ worth of time a year.

Ethical and practical concerns for nonprofit AI use

Beyond fundraising administration, though, generative AI also has the potential to help nonprofits pursue their missions. For example, imagine an AI-based tutoring system that’s able to fully customize online lessons for each student. Think Duolingo’s Max application for language learning, but for uses including everything from reading comprehension to financial literacy. Right now, nonprofits’ reach is frequently constrained by the costs of hiring, training and retaining people to carry out that kind of work. But “if you're in an organization that provides these sorts of services, and you could suddenly scale from reaching 5,000 to 50,000 people through AI, why would you not do that?” Forster said.

Of course, this leads to the question of the degree to which it’s ethical to replace front-line workers with software — if at all. That’s a particularly relevant question right now, as substandard wages continue to contribute to a long-term nonprofit hiring crisis amid record levels of staff burnout at many organizations. What’s to keep funders from pressuring nonprofits to subscribe to an AI company rather than either hiring more workers or offering living wages and benefits to the employees already on staff? 

On the other hand, while I’ve sought to bring attention to philanthropy’s responsibilities to the (human) nonprofit workforce, the counterpoint is worth mentioning: Is the goal of any given nonprofit to pursue its actual mission, or to perpetuate itself and its employees? “If I don't need as many people to realize as much outcome, what is the point of me having all those people?” Forster said. “Do I deserve to have that money, or should I be realizing more outcomes in the world? As the CEO of a nonprofit I have to ask myself these questions ethically.”

Fair treatment of nonprofit workers is far from the only moral issue for nonprofits and funders considering adopting AI tools. There are also concerns surrounding allegations that AI companies have violated creators’ copyrights and people’s privacy, not to mention several reports of low pay and other exploitative practices at AI companies. Funders and nonprofits alike, particularly those with a focus on any kind of racial or economic justice work, would do well to deeply research any AI tools they’re considering purchasing or leasing to ensure they aren’t enabling abuses with one hand that they’re fighting with the other.

Other ethical and practical considerations around nonprofit AI involve factors like cost and security. Companies that sell AI grantwriting and grant evaluation services typically run on a subscription model, which raises the question of who will pay those ongoing costs — which, for one grantwriting AI application I found, can top out at nearly $900 a year. These fees don’t include the cost of buying computers that can make use of those subscriptions, nor do they reflect monthly internet fees or the cost of hiring, training and retaining workers with sufficient education and experience to manage the AI program and catch errors in the software’s final product. 

Without funder support, will small nonprofits that are desperate to keep up be forced to rely on “free” AI applications that offer little to no protection for the privacy of their data? And if nonprofits across the board are increasingly pressured to use AI to provide services, where does that leave recipients who aren’t able to access or use those services because they can’t afford or use a computer or smartphone, or even because they’re in need of the kind of trauma-informed emotional support that only another human can offer?

Who’s funding nonprofit AI? 

Right now, there are still many more questions than answers when it comes to AI in philanthropy and the nonprofit world. But one question we can answer is how many funders are currently making grants to help nonprofits explore and possibly use AI tools: basically, not that many. On the private foundation side, the Patrick J. McGovern Foundation has been making grants through its Emergent AI+Society focus since 2018 and seems to be the granddaddy of nonprofit AI funding. The science and tech funder Schmidt Futures has funded the use of AI in postdoctoral research, and currently funds AI2050 fellowships “to solve hard problems in artificial intelligence through multidisciplinary research.” 

The Gates Foundation also recently embraced AI with the announcement of $5 million in “Grand Challenges” grants to 50 organizations that are “developing global health and development solutions for their communities using AI-enabled large language models (LLMs).” Unsurprisingly, Google.org is another player in this funding space, with its latest climate-focused Impact Challenge awarding grants to a number of efforts using AI to aid environmental research and resilience.

On the for-profit side, in September, Salesforce announced its “Salesforce Accelerator—AI For Impact” effort to “provide mentorship, technology and $2 million in funding to help six nonprofits accelerate AI solutions to supercharge their social impact.” According to Salesforce, “The grantees will also gain access to a 24-month contract for donated Salesforce products to support the development of their proposed AI-driven initiatives.” Hopefully, the grantees involved will be able to find additional funding if they want to keep using those Salesforce products. And last but not least, the U.S. government is all over AI. A quick search found grants being offered by the agencies you’d expect, like the National Institutes of Health and the National Science Foundation, but the National Endowment for the Humanities is also offering AI money. However, none of those grants are geared toward supporting nonprofits’ day-to-day operations.

When it comes to AI funding, virtually all of the available grant money involves activities like research. The list of funders backing AI to support the day-to-day operations of nonprofits is growing and will no doubt continue to do so, but right now, there’s simply not that much money out there for these uses.

Avoiding irrational hope and fear

So will generative AI destroy the world (and nonprofits) as we know them, or is it the silver bullet that will solve everything from climate change to saving democracy? Rather than panic or breathless celebration, Forster suggests keeping in mind the fact that we’re currently in a hype cycle, much like the hype cycles that surrounded the introduction of personal computers, smartphones and social media. “The way to navigate a hype cycle is to (a) understand [the new technology] and (b) attempt to think strategically about the risks and the benefits,” she said, rather than immediately forming fervent, opposing camps.

“There is a reality here that things are going to dramatically change, so let's start developing expertise, fluency and understanding and owning our responsibility,” Forster said. “Philanthropy has a responsibility here to shape the responsible usage of AI in the social sector. If we don’t step up and take responsibility for this, who is going to?” 

Given the history in the U.S. of allowing tech companies to define the limits of responsible use (or, often, the lack of limits) for the gadgets they’ve invented, we can only hope that the philanthrosphere takes that challenge seriously in the years ahead.