In a world where tech companies and governments quietly race to build increasingly autonomous weapons, your personal data is valuable. Email exchanges, location data, photos and video – all of it could contribute to the development, production, and fine-tuning of a fully autonomous weapons system. Fully autonomous weapons, or ‘killer robots’, are weapons that would select and engage targets without meaningful human control. These systems would delegate life-and-death decisions to machines, programs, and algorithms – crossing an ethical red line, contravening laws designed to protect civilians, and destabilizing global security.
How will big data and surveillance feed into the development of autonomous weapons? How will algorithmic bias and machine learning shape the building and training of killer robots? What does this mean for how we choose to give our data away? Who will be held accountable when these systems fail or make mistakes? And what can we really do about it?
This session will ask participants to imagine how their data might be contributing to the development of killer robots, and who the future victims of those systems would be. While focusing on civilians in conflict, the dialogue will also explore the implications for migrants and refugees, protesters, human rights defenders, environmental activists, and policed communities. The discussion will ignite conversation around how our data is connected to the future victims of killer robots and what ethical, legal, and technical issues these systems raise; it will also present the roadmap for a new international treaty to ensure there are #NoFutureVictims.
Register here by Monday 20 July, 11:59pm PDT.