Funders Have a New Tool to Help Navigate the Fast-Moving AI Landscape


Like it or not, generative AI is going to be one of the biggest issues in philanthropy tech for years to come. And unlike, say, the landline telephone, which took 50 years to become ubiquitous enough to be considered essential, ChatGPT is less than two years old and has already become the subject of a White House executive order and policymaking by the EU. AI development, and the reaction to it, are moving extremely quickly. The philanthrosphere will need to take a proactive approach if it wants meaningful input into how it all turns out.

The speed of AI development, and its scope and potential impact, mean that funders also need to be proactive in creating policies about whether, when and how to adopt AI tools within their organizations. Combined with existing social media and data technology, new developments in AI could cause disruptions that dwarf past tech stumbles in the sector, potentially compromising a funder’s ability to carry out its mission.

Fortunately for those in the sector looking to get AI right, last month the Technology Association of Grantmakers (TAG) introduced Version 1 of its “Responsible AI Adoption in Philanthropy: An Initial Framework for Grantmakers” in partnership with the data use consulting firm Project Evident. As the title suggests, the framework is primarily structured to help funders answer essential questions as they plan whether, when and how to implement AI internally. At least a few of the framework’s questions could also be helpful for funders contemplating involvement in the discussion about larger policy issues surrounding the technology. The framework grew out of a survey about AI concerns that TAG sent to its members last summer, and it was further refined during a workshop at the TAG 2023 conference last November.

The report is designed to be practical and tactical and to “really help with decision-making” around AI, said TAG Executive Director Jean Westrick. With that in mind, the framework is depicted graphically as a circle rather than a linear path, so users can approach the task of creating an overall AI policy in whatever way best suits their needs. For example, one user might start by defining the ethical considerations, including issues like data security and privacy, that will guide decisions about whether, when and which AI tools to adopt, and then apply those considerations to internal policies for vetting potential vendors. Another might start with the framework’s organizational considerations, including the steps needed to train staff and provide change management support. The goal, Westrick said, is to help funders “build things (AI policies and tools) that meet human needs.”

Granted, the framework’s full 15 pages lay out what looks like a lot of work, which, of course, needs to happen on top of all the other tasks staff are carrying out every day. But the key way to make that work easier, and to ensure that the ultimate decisions around AI will work organization-wide, is to bring tech staff to the decision-making table. “The more you break down barriers and create opportunities for cross-functional teams to come in, the stronger your delivery will be, the better your outcomes will be, and the more job satisfaction you’ll have,” she said. “Technology folks should not be in the back room.”

Finally, while Westrick believes that funders that aren’t at least considering adopting AI tools are doing themselves a disservice, she also said that it’s important to know when AI is not the answer. 

“Everybody’s talking about AI,” she said. “This is the shiny object, and everybody wants to do something that’s sexy and fun and new. But there’s always a trade-off.” One example: an organization using AI to replace the human employees who have been the “secret sauce” in serving its mission. “That would be detrimental,” she said, and doing so “would not be aligned to your mission and your values.”

The bottom line is that, yes, it will take a lot of work to get AI adoption right. But that work is going to have to be done, either on the front end, with systematic planning and implementation, or on the back end, with crisis management when things go wrong. And given how new this technology is, the philanthrosphere has a big opportunity to set an example for other sectors.

The AI framework is only on version one, and TAG plans to keep refining it over time with input from users. For now, though, the framework at least makes it possible for funders to ask a lot of the right questions rather than wading into AI blindly. That step alone could make a positive difference.