Former PayPal colleagues Reid Hoffman, Peter Thiel and Elon Musk have gotten the band back together to fund a $1 billion effort around artificial intelligence.
The initiative, known as OpenAI, is a nonprofit research company focused on investigating how AI can benefit human life. In the words of OpenAI itself: "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
Musk, along with Sam Altman of Y Combinator, is a co-founder of the project, while Hoffman and Thiel front an effort committing $1 billion in investments for the center. AI research scientist Ilya Sutskever, who spent the last three years on Google's "brain team," is on board as research director.
A relatively unprecedented undertaking, this move comes on the heels of an open letter signed by many in the scientific community expressing concern about the potential pitfalls of artificial intelligence. In fact, just last year Musk donated $10 million to the organization that penned the letter, the Future of Life Institute, for a program specifically tasked with addressing this very issue. In turn, the institute distributed $7 million in grant money last year to researchers exploring different questions related to AI. Paul Allen is another philanthropist who's interested in artificial intelligence; the Paul G. Allen Family Foundation backs research in this area through its Allen Distinguished Investigator awards.
Musk, in particular, has spoken quite a bit recently on the potential implications of AI down the road, and he's keen on getting out ahead of this issue. In the past, Musk has referred to AI as humanity's "biggest existential threat." Regarding his decision to open the center, he said, "We could sit on the sidelines... or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."
While we expect more philanthropic dollars from Musk and others to go toward AI work, it's interesting to see this area attract big impact investment capital. That certainly fits the interests of tech types, many of whom are looking beyond traditional philanthropy in approaching major problems.
The structure of OpenAI is another example of the blurring lines in the social sector. It's a nonprofit, but one run like a private company, a setup that tracks with some of the organizational models we've been seeing in Silicon Valley, like the Chan Zuckerberg Initiative, the Emerson Collective run by Laurene Powell Jobs, and the Omidyar Network.
One explanation for how OpenAI is structured is Musk's wariness about balancing the interests of shareholders with what he views as the public good. Per OpenAI's website: "Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."
And just as long as real AI doesn't turn out like it does in the movies, that should be just fine.