Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security

The Open Philanthropy Project is one of the biggest practitioners of effective altruism, a rationalist approach to giving that starts with a wide range of issues and then attempts to pick priorities based on the total good possible per dollar granted.

The cause probably most associated with effective altruism is global health, where it’s comparatively easy to measure how funding impacts lives. Open Philanthropy has given millions in recent years to causes such as preventing malaria and controlling schistosomiasis, a disease caused by parasitic worms.

But the logic of effective altruism has also led the organization down some unexpected paths. For example, if there’s even a small chance that giving can prevent a global catastrophe that could impact generations of humanity, wouldn’t that be warranted?

That’s the premise that led Open Philanthropy to explore the potential risks of artificial intelligence, one of two topics within its “global catastrophic risks” initiative (the other being biosecurity). The organization has given over $110 million to date to the issue, including its largest grant on the topic—$55 million to establish a new research and policy analysis center at Georgetown focused on AI.

That recent grant makes the Open Philanthropy Project potentially the largest funder backing oversight of the fast-moving field of AI, an area of giving that’s drawn interest from donors like Elon Musk and Reid Hoffman in recent years. The grantmaker has a particular focus on long-term and geopolitical risks as AI becomes more ubiquitous. The newly established Center for Security and Emerging Technology (CSET) will provide research and analysis to policymakers on AI and other technologies.

“We think that the future of AI will have enormous benefits to humanity, and also pose some significant risks,” says Senior Research Analyst Luke Muehlhauser. “Open Philanthropy is really focused on that long-term consideration in AI development, and making sure that humanity can seize those benefits, and avoid the risks, and make sure the benefits are broadly distributed.”

Of course, that’s a profoundly unpredictable space, and oversight efforts are vastly overshadowed by corporate and government money going toward advancing AI technology. This makes the outcomes of Open Philanthropy’s investments in AI quite uncertain. But Muehlhauser explained the demand the grantmaker is trying to meet and why it’s worth the risk.

Rewards and Risks

Artificial intelligence has become a hot topic for funders, with massive private sector investment pouring into research and philanthropists coming at it from many angles. 

We’ve been documenting this giving, with standouts including the late Paul Allen, auto and tech companies, and a bunch of grants to boost universities in the field. Our friends over at the Chronicle of Philanthropy totaled up some $583 million in donations toward this space since 2015. 

Particularly fascinating are efforts to explore and highlight potential negative consequences. That’s included a philanthropy-backed center anchored at MIT and Harvard, looking at the legal and ethical concerns of AI, and how the technology might impact areas like criminal justice and democratic norms. 

Open Philanthropy Project CEO Holden Karnofsky first took an interest in the issue around 2007, and over time, went from skeptic to convert. “I previously saw this as a strange preoccupation of the [effective altruism] community, and now see it as a major case where the community was early to highlight an important issue,” he wrote in 2016. 

The organization’s decision-makers blog at length about the evolution of their thinking on issues, part of the effective altruism goal of starting out agnostic about causes and reasoning your way toward grantmaking decisions. The outfit grew out of a partnership between Good Ventures, the philanthropy of Dustin Moskovitz and Cari Tuna, and GiveWell, a nonprofit that evaluates causes for donors based largely on effective altruism principles. Those principles are often debated, meaning different things to different people, but Open Philanthropy considers itself an effective altruist organization based on its overall goals.

Nearly everything about the organization comes across as highly rational, and accordingly, Muehlhauser isn’t one to throw around doomsday scenarios about AI. That reticence is partly due to the amount of uncertainty involved.

“Because AI developments are moving so rapidly and AI could be a very general purpose technology, and could transform a lot of different parts of society, there are huge benefits there, and also huge risks,” he says. 

One argument for concern he cites is the paper “Technology Roulette,” by national security consultant Richard Danzig. Danzig makes the case that national security decision-makers tend to pursue technological superiority, but that superiority doesn’t necessarily translate into greater security in the case of AI and other emerging tech, as it expands the risk of accidents, unanticipated effects, misunderstandings and sabotage.

“The multinational reliance on ever-advancing technological capabilities is like loading increasing numbers of bullets into increasing numbers of revolvers held to the head of humanity. Even if no one ever wants a gun to go off, inadvertent discharges may occur,” Danzig writes. 

The Case for Taking on Catastrophe

Open Philanthropy made the potential risks of AI a priority for giving in 2016. In addition to the latest grant establishing CSET at Georgetown, the funder supports research fellowships, as well as universities and institutes working on the topic.

Such a potentially distant and uncertain threat might seem an unlikely priority for a funder that takes such a calculated perspective. And the Open Philanthropy Project does give quite a lot to the effective altruism standby of global health and development; with more than $327 million in grants, it’s the organization’s largest focus area to date. 

Muehlhauser explains that all of the organization’s decisions hinge on three criteria—importance, tractability and neglectedness. When it came to AI, he says, Open Philanthropy judged it to be an issue that was very important, and yet mostly neglected by the funding community. The tractability—whether they could make a difference—was a lot less certain, and still is, he says.

“Because we're focused on those long-term issues and those global issues, we always have some uncertainty about how much impact we can really have,” he says. 

Ultimately, the team decided that taking on AI was worth it, based on another principle it embraces called “hits-based giving.” This is a VC-like approach that assumes much of the organization’s funding won’t have the intended effect, which is acceptable as long as some of it hits in a big way. 

“We are open to things that have even potentially a quite small chance of having a positive impact, so long as if that impact happened, it would be large enough,” Muehlhauser says. The idea is that Open Philanthropy layers giving with uncertain outcomes alongside grants that have more tangible impacts.
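Muehlhauser’s framing is essentially an expected-value argument. As a rough illustration only (the numbers and the `expected_value` helper below are hypothetical, not anything Open Philanthropy has published), a grant that will almost certainly “fail” can still dominate a safe bet in expectation:

```python
# Illustrative sketch of "hits-based giving," with made-up numbers.
# Expected value of a grant = chance it succeeds * size of the win if it does.

def expected_value(probability_of_success: float, impact_if_successful: float) -> float:
    """Return the expected impact of a grant (in arbitrary impact units)."""
    return probability_of_success * impact_if_successful

# A "safe" grant: near-certain to work, modest payoff.
safe_grant = expected_value(0.95, 1_000)

# A "hit": very unlikely to work, but enormous if it does.
long_shot_grant = expected_value(0.01, 1_000_000)

print(f"Safe grant expected value:      {safe_grant:,.0f}")       # 950
print(f"Long-shot grant expected value: {long_shot_grant:,.0f}")  # 10,000
# The long shot wins in expectation, even though most such grants never pay off.
```

The design point is simply that a portfolio of many such long shots only needs a few hits to justify itself, which is why the funder pairs them with grants whose impacts are more tangible.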

And if you’re wondering (as I did) why climate change wasn’t a more obvious global catastrophe to pick as a priority, Open Philanthropy’s spokesman points out that they have actually given millions related to climate change, and deem it a major issue that could become a larger focus in the future. But in choosing initial priorities, other issues stood out as more neglected and tractable.

A Gap Between Tech and Policy

Another factor Open Philanthropy takes into account is how well a topic fits philanthropy specifically. In the case of AI, the profit motive means the private sector is going to go full throttle on advancing the tech. Meanwhile, looking out beyond election cycles, government has little incentive to prioritize concerns about the long-term consequences of AI’s rise. 

Policymakers have a hard time even grasping what the threats posed by AI may look like, which was a big motivator behind the Georgetown grant. “We noticed over the last couple of years that there was a lot of demand for advice about AI policy in D.C.,” Muehlhauser says. 

The concept for CSET came from its founding director, Jason Matheny, who formerly directed IARPA, the intelligence community’s R&D agency. The idea is to study the security impacts of technologies like AI and provide policymakers with a level of analysis that previously did not exist. 

At this stage, part of the Open Philanthropy Project’s goal regarding AI is simply being, well, open about where it might go. Its theory of change is extremely simple, and “hopefully, properly humble,” Muehlhauser says: fund great people in the field, and things will probably go better than they would otherwise. 

There’s been a lot written about the concept of effective altruism, including some fierce criticism. I tend to have mixed feelings about the movement, which often feels too prescriptive and top-down. But I also find that it has a lot more in common with traditional philanthropy in terms of goals—taking risks, finding a niche, having an impact, etc.—than the way it’s sometimes portrayed.

At the same time, it’s that openness to possibilities—from animal welfare to runaway tech—and the ongoing, transparent analysis of its own choices that make the Open Philanthropy Project such a compelling endeavor.