Recent years have seen the rapid development and proliferation of robotic weapons, and machines are starting to take the place of humans on the battlefield.
Some military and robotics experts have predicted that “killer robots”—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 years.
Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) believe that such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict. A pre-emptive prohibition on their development and use is needed.
A relatively small community of specialists has been debating the benefits and dangers of fully autonomous weapons; military personnel, scientists, ethicists, philosophers, and lawyers have contributed to the discussion. According to Philip Alston, then UN special rapporteur on extrajudicial, summary or arbitrary executions, “the rapid growth of these technologies, especially those with lethal capacities and those with decreased levels of human control, raise serious concerns that have been almost entirely unexamined by human rights or humanitarian actors.” The time has come for the broader public to consider the potential advantages and threats of fully autonomous weapons.
The primary concern of Human Rights Watch and IHRC is the impact fully autonomous weapons would have on the protection of civilians during times of war. Their report analyses whether the technology would comply with international humanitarian law and preserve other checks on the killing of civilians; it finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians. The research and analysis strongly conclude that fully autonomous weapons should be banned and that governments should urgently pursue that end.
Robotic weapons, which are unmanned, are often divided into three categories based on the degree of human involvement in their actions: human-in-the-loop weapons, robots that can select targets and deliver force only with a human command; human-on-the-loop weapons, robots that can select targets and deliver force under the oversight of a human operator who can override their actions; and human-out-of-the-loop weapons, robots capable of selecting targets and delivering force without any human input or interaction.
Fully autonomous weapons, which are the focus of the Human Rights Watch and IHRC report, do not yet exist, but technology is moving in the direction of their development and precursors are already in use. Many countries employ weapons defence systems that are programmed to respond automatically to threats from incoming munitions. Other precursors to fully autonomous weapons, either deployed or in development, have anti-personnel functions and are in some cases designed to be mobile and offensive weapons.
As the report shows, robots with complete autonomy would be incapable of meeting international humanitarian law standards; given military plans to move toward increasing autonomy for robots, governments should undertake formal assessments of the technology’s impact. The rules of distinction and military necessity are especially important tools for protecting civilians from the effects of war, and fully autonomous weapons would lack the human qualities necessary to meet them. For example, distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s course of action, something a robot could not do.
By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine non-legal protections for civilians. First, robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless robots could therefore serve as tools of repressive dictators seeking to crack down on their own people without fear that their troops would turn on them. Second, although relying on machines to fight wars would reduce military casualties, it would also make it easier for political leaders to resort to force, since their own troops would not face death or injury.