
The Power of a Question in Silicon Valley

Marta Kosmyna is the Silicon Valley Lead for the Campaign to Stop Killer Robots. She is based in San Francisco, USA, and in her role she seeks to energize the technology sector, encourage political engagement, and support tech allies in bringing their expertise to the United Nations.

Marta Kosmyna, Silicon Valley Lead for the Campaign to Stop Killer Robots with robot campaigner David Wreckham. Photo: Ari Beser.

Asking a question can spark a movement. All it takes is for one person to stand up, question the status quo, and think critically about the current state of affairs. We saw this last year with Project Maven, and I see it every day in my work as Silicon Valley Lead for the Campaign to Stop Killer Robots. So far, thousands of tech workers have pledged to ensure their work will never contribute to fully autonomous weapons, and over 200 companies support a treaty to ensure these weapons are never used.

When it comes to the use of artificial intelligence in warfare and policing, discussing ethics is not enough. While guidelines, principles and self-regulation are an admirable first step, it’s clear that urgent legislation is needed to curb the threat posed to humanity by fully autonomous weapons — “killer robots.” The Campaign urges governments to negotiate new international law that will retain meaningful human control over the use of force.

Each year, our Campaign documents growing opposition from the tech industry to the development and use of fully autonomous weapons. Our tech allies understand the limitations of this technology, and have been bridging the gap between tech and policy.

Already we have seen the unintended consequences of emerging technologies. Tech workers who are concerned about their work contributing to killer robots should remain vigilant, consider the end-use of the technology they develop, and question the partnerships, customers and investors involved in high-risk projects. Next time an industry leader, company executive or policymaker takes the stage, ask them directly about fully autonomous weapons. It’s important to get them on the record so they can be held accountable, and to ensure their statements align with the values, mission and vision they profess to represent.


One example of chalk stenciling by the Campaign in New York near the offices of Amazon, Microsoft, Palantir and Clarifai. Photo: Clare Conboy.

The Campaign to Stop Killer Robots has taken the stage at many tech conferences and ethics-related events this year to raise the alarm over the threat posed by fully autonomous weapons. Just this year, our campaigners presented at Viva Technology in France, True North in Canada, DEF CON in the United States, and Inteligencia Artificial y Ética: El Desafío de las Armas Autónomas Letales (Artificial Intelligence and Ethics: The Challenge of Lethal Autonomous Weapons) in Argentina, among many other events worldwide.

We always encourage you to ask us your toughest questions, but some of the questions we hear most often are answered below:

Government officials, including treaty negotiators, will need input from technical experts to ensure the treaty banning killer robots does not stifle innovation, and instead ensures that the research and development of artificial intelligence continues unhindered. Biologists have not found that the Biological Weapons Convention has hurt their research and development, nor do chemists complain that the Chemical Weapons Convention has negatively impacted their work. In fact, if the technology to develop fully autonomous weapons is permitted to develop without regulation, many artificial intelligence experts, roboticists, and technology workers fear that positive applications of artificial intelligence will suffer. Any malfunctions, mistakes or war crimes committed by fully autonomous weapons systems would receive negative publicity, resulting in public pushback against the development of artificial intelligence and other emerging technologies.



We are a single-issue coalition focused on securing a treaty to ban fully autonomous weapons and retain meaningful human control over the use of force. Our members often question specific military projects that could pave the way towards fully autonomous weapons, but we do not ask companies to refuse to work with militaries or governments. We advise technologists to consider the partnerships, customers, and investors they work with, and to think critically about the consequences of any high-risk business relationships they enter into.


Our Campaign is not anti-technology. It does not oppose military or policing applications of artificial intelligence and emerging technologies in general. We oppose autonomy in the critical functions of selecting and engaging targets, in all circumstances. As a human-centered initiative, we believe a new international treaty banning killer robots would bring many benefits for humanity. New law would help to clarify the role of human decision-making related to weapons and the use of force in warfare, policing and other circumstances.

The current development of AI and emerging technologies is outpacing policymakers’ ability to regulate, and this is seen vividly in the case of fully autonomous weapons. Technology companies and workers must commit not to contribute to the development of fully autonomous weapons. Many technologies under development are “dual-use”, meaning they can be employed in various scenarios — civilian, military, policing, etc. Therefore, it is crucial that the tech sector remain vigilant and always consider the anticipated end-use.

Weapons are not designed to save lives; they are designed to take lives. Fully autonomous weapons would be unpredictable: by interacting with their environment in unexpected ways, they could commit fratricide, harming friendly troops. Improved precision can be achieved without removing meaningful human control from individual attacks. The Campaign seeks to prohibit the development of a specific application of certain technologies, codify limits on their intended use, and ensure accountability under international law. There are no victims of killer robots yet, and we want to keep it that way.

Since 2014, more than 90 countries have participated in the diplomatic talks on lethal autonomous weapons systems at the United Nations and a majority of those states now see a need for a new treaty to prohibit or restrict such weapons. A handful of military powers, notably Russia and the United States, oppose any effort to regulate such weapons systems. These talks are an exercise in knowledge-building and transparency, but must result in the negotiation of a new treaty. This is the only multilateral response able to address the threat to humanity posed by autonomous weapons. The existing laws of war were written for humans, not machines, and must be strengthened to ensure accountability and future-proof against further technological developments. A new treaty would establish a powerful norm stigmatizing the removal of meaningful human control from the use of force. This would drive compliance even by countries and actors that do not initially sign.


If this article resonates with you as a tech worker, check out the Campaign to Stop Killer Robots’ resources for technology workers at www.stopkillerrobots.org/tech, or join the conversation using #TechAgainstKillerRobots.


Original article posted on Medium.com.

Marta Kosmyna
