Tue, 07/28/2015 - 6:00pm
Greg Watry, Digital Reporter
Stephen Hawking, Noam Chomsky, Elon Musk and Steve Wozniak are among the many signatories of a letter warning against the integration of artificial intelligence (AI) and weapons, something the Future of Life Institute sees as feasible within years.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow,” the organization said in an open letter.
The Future of Life Institute is a volunteer-run research and outreach organization, which is currently focused on the potential risks of human-level AI.
“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.,” the letter continues.
The letter doesn’t dismiss the idea of AI on the battlefield outright; the organization believes the technology could be used in ways that make combat zones safer, especially for civilians.
The letter has collected over 1,000 signatures and was presented at the International Joint Conference on Artificial Intelligence (IJCAI), held in Buenos Aires, Argentina.
Musk previously called AI “our biggest existential threat” and said there was a need for “regulatory oversight,” according to The Guardian.
In early 2015, Musk donated $10 million to the Future of Life Institute for a global research program aimed at keeping AI beneficial to humanity. The institute distributed the money through a grant application process, selecting 37 teams that will use around $7 million of the donation. Projects and studies include a Carnegie Mellon Univ. project aimed at making AI systems explain their decisions to humans, a Stanford Univ. study on how to keep the economic impacts of AI beneficial, and projects at Univ. of California, Berkeley and Oxford Univ. on developing techniques for AI systems to learn what humans prefer by observing their behavior.
In an interview with BBC News, Hawking warned that full AI could spell the end of the human race. “It would take off on its own, and re-design itself at an ever increasing rate,” he told BBC. “Humans, who are limited by slow biological evolution, couldn’t compete.”
Wozniak, however, has a different view of AI on the whole. At the Freescale Technology Forum 2015, Wozniak said it would be hundreds of years before AI replaces humanity, and it may turn out to be a good thing. “They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature,” he said at the forum, according to TechRepublic.
According to the TechRepublic article, a scenario in which AI takes over the world would be contingent on everything being controlled by computers, a possibility now being explored through the “Internet of Things.”
Already, Carnegie Mellon Univ. is turning its campus into a living laboratory for Google-funded Internet of Things research.
Anind K. Dey, the project’s lead investigator and director of the university’s Human-Computer Interaction Institute, said in an interview with R&D Magazine that the goal is to have two or three spaces on campus outfitted with sensors by the end of the month.
According to the university, the sensors will help create smart environments. One example is Snap2It, a system that connects users to a printer or projector by taking a picture of it on a smartphone.