
Artificial intelligence research call

Prominent scientists and researchers from industry and academia have signed an open letter calling for artificial intelligence (AI) and smart machine research to focus on developing systems that are “robust and beneficial” to humanity. The letter links to a document outlining “research directions that can help maximize the societal benefit of AI” that includes a list of legal, ethical, and other questions relating to ‘lethal autonomous weapons systems,’ also known as fully autonomous weapons or killer robots.

On 15 January, one of the letter’s principal signatories, Tesla Motors and SpaceX founder Elon Musk, announced a donation of US$10 million for research grants aimed at implementing the call of the open letter, which he said he signed because “AI safety is important” and he wants to “support research aimed at keeping AI beneficial for humanity.”

The donation will be administered by the Future of Life Institute, a volunteer-run non-profit organization that works “to mitigate existential risks to humanity.” The open letter was issued after the Institute convened a conference on the “Future of Artificial Intelligence” in Puerto Rico on 2-4 January 2015, attended by Musk and dozens of other signatories to the letter such as Skype co-founder Jaan Tallinn.

Dr. Heather Roff of the University of Denver, a technical expert and member of the International Committee for Robot Arms Control (a co-founder of the Campaign to Stop Killer Robots), was invited to the Puerto Rico conference to give a presentation on autonomous weapons. The Campaign to Stop Killer Robots welcomes the open letter’s call for AI research that is beneficial to humanity and appreciates the inclusion of autonomous weapons concerns as an interdisciplinary research priority.

As Rebecca Merrett has reported, in one of the regrettably few serious media reviews to date, the 12-page “research priorities” document attached to the open letter asks legal and ethical questions about rapidly advancing intelligence and autonomy in machines. It looks at liability and law for autonomous weapons and vehicles, machine ethics, and the privacy implications of AI systems. The document was drafted by several co-authors, including Professor Stuart Russell of the University of California, Berkeley, co-author of the standard AI textbook “Artificial Intelligence: A Modern Approach,” and Professor Max Tegmark of MIT, co-founder of the Future of Life Institute.

In the “research priorities” document section on “Computer Science Research for Robust AI” (page 3), the authors note that “as autonomous systems become more prevalent in society, it becomes increasingly important that they robustly behave as intended,” and state that the development of autonomous weapons and other systems has “therefore stoked interest in high-assurance systems where strong robustness guarantees can be made.”

The document outlines four ways in which an AI system “may fail to perform as desired,” each corresponding to a different area of robustness research: verification, validity, security, and control. The key question posed under control is how “to enable meaningful human control over an AI system after it begins to operate,” or “OK, I built the system wrong, can I fix it?”

Under a section elaborating on “control” (page 5), the authors note that “for certain types of safety-critical AI systems – especially vehicles and weapons platforms – it may be desirable to retain some form of meaningful human control, whether this means a human in the loop, on the loop, or some other protocol.” The document also observes that “there will be technical work needed in order to ensure that meaningful human control is maintained.” The document asks “how should ‘meaningful human control’ over weapons be defined?” and “how can transparency and public discourse best be encouraged on these issues?”
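To make the in-the-loop/on-the-loop distinction concrete, here is a minimal illustrative sketch, not drawn from the research priorities document itself; the names `ControlProtocol`, `run_action`, and `human_decision` are hypothetical. The idea it illustrates: a human in the loop must authorize an action before it happens, while a human on the loop supervises an action already underway and can only veto it.

```python
from enum import Enum, auto
from typing import Callable


class ControlProtocol(Enum):
    """Two of the human-control protocols the document mentions (hypothetical labels)."""
    IN_THE_LOOP = auto()   # a human must authorize each action before it starts
    ON_THE_LOOP = auto()   # the system acts autonomously; a human supervisor may veto


def run_action(
    action: str,
    protocol: ControlProtocol,
    human_decision: Callable[[str], bool],
) -> str:
    """Gate a hypothetical autonomous action on human control.

    `human_decision` stands in for a real operator interface and
    returns True when the human allows the action to proceed.
    """
    if protocol is ControlProtocol.IN_THE_LOOP:
        # In the loop: nothing happens without prior human authorization.
        if not human_decision(action):
            return f"{action}: blocked (no human authorization)"
        return f"{action}: executed with human authorization"

    # On the loop: the action begins autonomously, and the human
    # supervisor is consulted only to veto it while it runs.
    if not human_decision(action):
        return f"{action}: started autonomously, then vetoed by human"
    return f"{action}: executed autonomously under human supervision"


if __name__ == "__main__":
    approve = lambda a: True
    deny = lambda a: False
    print(run_action("engage target", ControlProtocol.IN_THE_LOOP, deny))
    print(run_action("engage target", ControlProtocol.ON_THE_LOOP, approve))
```

The sketch shows why the choice of protocol matters for “meaningful” control: under the on-the-loop variant the action proceeds unless a human intervenes in time, which is precisely the kind of gap the document’s technical questions about maintaining human control point to.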

The Future of Life Institute has stated that the open grants competition for the AI safety research project will be administered through an online application to be made available from its website next week at: http://futureoflife.org. According to the Institute, “anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere.” It plans to award the majority of grants to AI researchers and the remainder to AI-related research involving other fields, including economics, law, ethics and policy. The donation will also be used to hold meetings and conduct outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents.

On 29 January, Campaign to Stop Killer Robots spokesperson Stephen Goose of Human Rights Watch will debate Professor Ron Arkin of Georgia Tech on autonomous weapons at the annual conference of the Association for the Advancement of Artificial Intelligence in Austin, Texas. The debate will be moderated by AAAI President Thomas G. Dietterich of Oregon State University.

As of 18 January, the open letter has been signed by more than 4,000 individuals.

Image: (c) Future of Life Institute artwork, January 2015
