
Real Stories

Laura

Hi! I’m Laura, a software engineer. I live in Dublin, Ireland with my partner and our two utterly spoiled cats (although we are currently trying to buy a 170-year-old house an hour away from the city, because we’re both now working remotely most of the time). Outside of work and mortgage paperwork, I’m currently spending a lot of time reading and studying, because I’m writing up an MA thesis part-time.


Targeted advertising, bank loan approvals, predictive policing, autonomy in weapons… automation is increasingly becoming part of our lives. What types of digital dehumanisation are you most worried about, and why?

I worry about the cumulative effect of all of it! The most obviously concerning systems are those with dramatic impacts on individuals – things like weapons, as well as anything related to policing or justice.

However, we are too often building software that undermines people’s ability to understand their world and to take control of it where necessary.

I think the recent UK Post Office scandal is a good example of this – the Horizon computer system would frequently show shortfalls in money taken at local offices, leading to legal action against local postmasters and employees. The whole affair has been called one of the ‘most serious miscarriages of justice in recent English history’.

There were definitely ethical shortcomings on the part of the Post Office and Fujitsu, but the key to the whole mess is that the Horizon system doesn’t seem to have been designed to give the people using it a way to understand the transactions in the system and to correct problems. It fundamentally undermined the autonomy of the local postmasters and their ability to understand their world on their own terms. That’s bad software design. They must have felt like they were in one of Kafka’s stories.

Every day now, people are trapped in some sort of mini-Kafkaesque technology nightmare like this, whether it’s getting their virtual exam proctoring software to work, getting their job application through the thickets of automated filters, or getting benefits they may be entitled to. We shouldn’t be building software like this.

Do you believe that individuals can make a difference?

Absolutely – history is full of individuals who did make a difference – Martin Luther (of the 95 theses), Rosa Parks, Nelson Mandela. Today we have people like Greta Thunberg, who has put so much energy into the environmental movement; and like Max Schrems, who continues to shake up international data protection law. In Ireland, where I live, we recently changed our very restrictive laws on abortion, and that had a lot to do with Savita Halappanavar, who sadly died in an Irish hospital because Irish laws were an obstacle to the care she needed. Savita’s husband and family were incredibly brave people who spoke out publicly, and that changed the course of Irish history.

All the individuals who donate to a campaign, who go to marches, who organise, who knock on doors, or who write letters to our politicians – we all matter as well. We have power!

When did you first learn about killer robots, and what did you think?

Killer robots are a very personal issue for me. In 2017, while I was working at Google on the team that manages Google’s Cloud infrastructure, I was asked to do some work to support Project Maven, the US DoD’s project to automate analysis of drone surveillance footage. Specifically, I was asked to help obtain higher security certification for Google’s Cloud platform so that Maven could run in-house, rather than on the DoD’s own machines.

Maven was ‘kill chain’ work, no doubt about it: it analysed footage to generate data that would be used in target identification and selection. This was not morally neutral computer vision work. By 2020, the US Army was putting Maven on Grey Eagle drones to detect threats autonomously. Maven is not looking for unambiguously military objects like tanks; it’s looking for people, and for vehicles, including civilian vehicles.

I had an immediate and visceral reaction: I could not work on this, no matter how indirectly. I won’t help to build or run computer systems that select people to die.

Targeting human beings in conflict is not something we should automate: it’s too important, too complex and context-dependent a problem, and far too high-risk.

The 2020s have been a little rough so far. What gives you hope?

The 2020s have certainly been interesting times! But actually, that is a thing that gives me hope. We’re in a time of flux. Maybe we can replace the old ‘normal’ with a better ‘normal’.

Why are you part of the Stop Killer Robots movement?

I’m part of this movement because, as a technologist, I have a professional responsibility to speak out. Software is good at routine work and very poor at work that involves any ambiguity or grey areas. Unfortunately, Big Tech often oversells the capabilities of technology, which leads to harm. Tesla calling its car software ‘Autopilot’ is a great example of this – it’s a very sophisticated cruise control, but it’s named as though it offers fully autonomous driving. Germany and California have both asked Tesla to use a different name for the product.

Software needs to be used with extreme care in any sort of high-stakes work – which certainly includes the decision to use military force. Meaningful human control needs to remain at the forefront all the time.

