Killer robots, which can act without human intervention, are considered too dangerous, and the United Nations may try to preemptively ban their use.
The UN Human Rights Council will discuss the ethics of killer robots (also known as “lethal autonomous robots”) at an upcoming council meeting in Geneva. One UN report calls for their use to be halted until some of the ethical questions are resolved.
“The robots are machines programmed in advance to take out people or targets, which - unlike drones - operate autonomously on the battlefield,” the BBC reported.
Killer robots have not been used in warfare, but Israel, the United States and the United Kingdom are developing them, news reports said. Advocates argue their use could reduce human casualties for the side using them.
The use of killer robots in battle raises many ethical questions. "The traditional approach is that there is a warrior, and there is a weapon," Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions, was quoted as saying by the BBC. "But what we now see is that [when] the weapon becomes the warrior, the weapon takes the decision itself."
"Machines lack morality and mortality, and as a result should not have life and death powers over humans," Heyns added in a statement appearing in The Guardian newspaper.
The Huffington Post also reported that killer robots could make the decision to go to war more likely, because reduced human involvement would lower the cost of entering a conflict.
Without a ban in place, these kinds of robots could be in use within one or two decades.
"States are working towards greater and greater autonomy in weapons, and the potential is there for such technologies to be developed in the next 10 or 20 years," Bonnie Docherty of Harvard Law School's International Human Rights Clinic warned in a statement quoted by The Guardian.