$10 Million Isn't Much to Fight Humanity's "Biggest Existential Threat," But It's a Start

As we have long known from The Terminator movies, computers will one day become sentient and upon waking up will immediately seek to destroy humanity, probably as payback for Windows 8.

Hilarious jokes aside, it may not be an idle worry. Several high-profile smart people, including Internet-automotive-rocket industrialist Elon Musk and physics genius Stephen Hawking, have been sounding the alarm recently about artificial intelligence (AI). Hawking told the BBC last year that the "development of full artificial intelligence could spell the end of the human race."

Musk, founder of SpaceX and Tesla Motors, is the farthest thing from a technophobe. But last year, he called AI humanity's "biggest existential threat" and pledged $10 million to the Future of Life Institute (FLI) to run a global research program to keep AI beneficial. FLI recently announced $7 million in grants to 37 research teams.

Grants ranged from about $36,000 to $340,000, mostly to teams at universities and companies in the U.S. and England. Projects range from the deep mathematics of AI to its ethics to models that predict when AI systems and networks will fail or make mistakes.

You've likely seen some of the news stories about FLI's open letter, released July 28 at the International Joint Conference on Artificial Intelligence in Buenos Aires, which warned of an AI arms race and called for a ban on the use of AI for autonomous weapons. It was signed by more than 1,000 AI and robotics specialists, along with other academics and industry leaders, among them Bill Gates and Apple co-founder Steve Wozniak.

The letter is pretty clear about the scale of the threat as well as its imminence:

Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

If you're old enough to remember the Cold War, when nuclear war was a more palpable threat and grade schools actually ran nuclear attack drills, you can appreciate any effort to keep technology peaceful. Fear of technology is nothing new. Sometimes the anxiety is driven by myth and meme, as in the current anti-vaccine movement. Sometimes it's well documented and near-universally accepted by scientists, as in the link between fossil fuel use and global warming.

It's not possible to know the difference every time, but while critics of Musk and his co-signatories say fears of AI are off-base, it's always a great idea to keep a close watch on the creation of new ways to kill people.