The Guardian view on robots as weapons: the human factor
The future is already here, said William Gibson. It's just not evenly distributed. One area where this is obviously true is the field of lethal autonomous weapon systems, as they are known to specialists – killer robots to the rest of us. Such machines could roam a battlefield, on the ground or in the air, picking their own targets and then shredding them with cannon fire, or blowing them up with missiles, without any human intervention. And if they were not deployed on a battlefield, they could turn wherever they were in fact deployed into a battlefield, or a place of slaughter.

A conference in Geneva, under the auspices of the UN, is meeting this week to consider ways in which these machines can be brought under legal and ethical control. Optimists reckon that the technology is 20 to 30 years away from completion, but campaigners want it banned well before it is ready for deployment. The obvious question is whether it is not already too late.

A report by Human Rights Watch in 2012 listed a frightening number of almost autonomous and wholly lethal weapons systems deployed around the world, from a German automated system that defends bases in Afghanistan by detecting and firing back at incoming ordnance, to a robot deployed by South Korea in the demilitarised zone, which uses sensing equipment to detect humans as far as two miles away as it patrols the frontier, and can then kill them from a very safe distance.

All those systems rely on a human approving the computer's actions, but at a speed which excludes the possibility of consideration: often there is as little as half a second in which to press, or not to press, the lethal button. Half a second is – just – inside the norm of human reaction times, but military aircraft are routinely built to be so manoeuvrable that the human nervous system cannot react quickly enough to make the constant corrections necessary to keep them in the air. If the computers go down, so does the plane. The killer cyborg future is already present in such machines.

In some ways, this is an ethical advantage. Machines cannot feel hate, and they cannot lie about the causes of their actions. A programmer might in theory reconstruct the precise sequence of inputs and processes that led a drone to act wrongly and then correct the program. A human war criminal will lie to himself as well as to his interrogators. Humans cannot be programmed out of evil.

Although the slope to killer robots is a slippery one, there is one point we have not reached. No one has yet built weapons systems sufficiently complex that they make their own decisions about when they should be deployed. This may never happen, but it would be unwise to bet that way. In the financial markets we already see the use of autonomous computer programs whose speed and power can overwhelm a whole economy in minutes. The markets, in that sense, are already amoral. Robots may be autonomous, but they cannot be morally responsible as humans must be. The ambition to control them is as profoundly human as it is right.