Bionic Hands

Circumventing “Dirty Hands” with Lethally Autonomous Robots.


Explaining Dirty Hands

The concept of dirty hands was brought to the forefront of popular philosophical thought by Michael Walzer in his piece "Political Action: The Problem of Dirty Hands." In it, Walzer explains the dilemma of a political leader confronted with a decision to violate a moral principle in order to prevent a disaster of some sort. Put differently, it is the moral conflict of doing the wrong thing for the right reasons, of committing a moral wrong in order to do "good."

Examples of dirty hands situations involve a political leader caught in a moral quandary, between a "rock and a hard place." For example, a terrorist in the political leader's custody is known to have planted a bomb. Setting aside the legitimate criticisms of torture and its ability to actually secure reliable information, and assuming that in this instance it can be used successfully, should the political leader torture the terrorist to acquire the information?

Should the political leader torture the terrorist's family if it means the terrorist would divulge vital information?

Dirty hands theory, of course, does not insist that the results be the best possible outcome in order to take action. Torturing terrorists may or may not actually yield useful information, and this is part of the dilemma. To understand dirty hands on a less violent level, think of a person with "scruples" who runs for office and resists the temptation to trade government contracts to a particular person in exchange for that person's support. If the candidate does not make the deal, he or she will lose; if the candidate does make the deal, he or she can pursue the "greater" objectives the supporters desire.

In order for either of these examples to carry the weight that Walzer wishes to impress on the reader, both of these leaders have to be, in so many words, good people. They have to struggle with the decision. There has to be some recognition or understanding that in order to accomplish some higher goal, or to avoid some greater disaster, they will have to violate a moral principle. They have to know that what they are doing is wrong and, this is important, that they are directly responsible for the decision.

Walzer expresses that a political leader's ability to feel and show guilt is exactly what is needed for the right type of leader to make these kinds of decisions. "Personal anguish sometimes seems the only acceptable excuse for political crimes and we don't want to be ruled by men that have lost their souls" (Walzer 176). That is to say, we would rather see leaders hate the evil they do than glory in it... but either way they will have to do evil.

Explaining Lethally Autonomous Robots

The term robot comes from the Czech word robota, which means forced labor. A lethally autonomous robot (LAR) is one that can choose its target and make decisions to use deadly force independent of human review or approval (Krishnan). The technology for this type of action is not in the distant future; it is readily available now, although not explicitly used. The Department of Defense spends approximately six billion dollars every year on robotic weapons systems. Fully autonomous weapons are expected by some military and robotics experts to be deployable in the next 20 to 30 years (Docherty).

As it stands now, humans are "in-the-loop," that is, involved in decisions to use force delivered by robotic technology such as unmanned aerial vehicles, otherwise known as drones. The 2012 report Losing Humanity: The Case Against Killer Robots, released by Human Rights Watch and the International Human Rights Clinic (IHRC) at Harvard Law School, defined three levels of human involvement with robotic weaponry (a rough sketch of the three modes follows the list):

1. Human-in-the-Loop Weapons: Robots can choose their targets and deliver force only under the instruction of a human commander.

2. Human-on-the-Loop Weapons: Robots are able to select their targets and strike under the oversight of a human operator who can override the robots' decisions.

3. Human-out-of-the-Loop Weapons: Robots that are fully autonomous and are able to select targets and deliver force without any human involvement.
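Read as a control architecture, the three levels differ only in where human judgment sits relative to the decision to deliver force. The sketch below is purely illustrative; the names and the boolean inputs are hypothetical and do not describe any real weapons system or software.

# Illustrative sketch only: hypothetical names, not any real weapons system.
from enum import Enum, auto

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # acts only on a human command
    HUMAN_ON_THE_LOOP = auto()      # acts on its own, but a human may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # acts with no human input at all

def engagement_authorized(mode: AutonomyMode,
                          human_command: bool,
                          human_veto: bool) -> bool:
    """Whether the machine may deliver force, given each mode's human role."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return human_command        # nothing happens without an explicit order
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not human_veto       # proceeds unless a human overrides it
    return True                     # out of the loop: no human input consulted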

Drones like the Predator and Reaper are only the first generation of more robust robotic weaponry, weapons that will literally be enabled to kill independent of human decision-making (United States. Cong. 10). What is coming, and what should be discussed, are Human-on-the-Loop weapons and, to a greater extent, Human-out-of-the-Loop weapons.

The US Department of Defense anticipates "…unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure."

The US Air Force has said, "increasingly humans will no longer be 'in the loop' but rather 'on the loop'—monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input."

A 2004 US Navy report on underwater vehicles stated, “While admittedly futuristic in vision, one can conceive of scenarios where UUVs sense, track, identify, target, and destroy an enemy—all autonomously” (Docherty).

Enabling a robot to decide, via computer programming, whether or not to use deadly force on a human opens up a whole new discussion of dirty hands. It potentially eliminates the very act of a political or military leader struggling with moral decisions, because those decisions will be abdicated to LARS. In a sense, the robot becomes the ethicist and the strategist. If the robot deems a dirty hands decision to be "right," then why should a human think any differently?



Circumventing Dirty Hands

Even the cheapest of calculators is trusted to provide simple mathematical answers. The algorithms within a calculator are never tested before someone uses it to balance a bank account. It is, for the most part, trusted to accomplish the needed calculations; even when errors do become evident, it is still surprising when a calculator goes wrong. Certainly, as machines become more complicated, more care is placed into ensuring that they operate according to predetermined standards; consumer airplanes go through frequent, rigorous testing to ensure their hardware and software components are operating properly.

What happens when machines are no longer just the embodiment of concrete engineering principles? What happens when machines are programmed to make ethical/moral decisions in the same manner that a principled human would? Will they still, or eventually, be trusted in the same way that one can trust a cheap calculator... or an airplane?

Ronald C. Arkin, a roboticist at the Georgia Institute of Technology, is working toward developing autonomous robots that are discriminating and ethical. He has articulated what Ed Baret has called "the most comprehensive architecture for a compliance mechanism" (United States. Cong. 14). Arkin proposes that LARS be equipped with an "ethical governor" and "strong artificial intelligence."
 
An ethical governor restricts LARS to acting in accordance with the Laws of War and the Rules of Engagement, while strong artificial intelligence would be designed to match and exceed human intelligence. A consequence of this is viewing and developing LARS not as cold, calculating killers but as civilizing forces with the ability to be more humane than humans: "It is my contention that robots can be built that do not exhibit fear, anger, frustration or revenge and ultimately…behave in a more humane way than even human beings" (Arkin 1).
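As a rough illustration of the governor idea only, and not of Arkin's published architecture, one can picture a veto layer sitting between a controller that proposes a lethal action and the hardware that would carry it out; the constraint checks below (discrimination and proportionality) are hypothetical placeholders.

# Rough sketch of a veto-style "ethical governor"; the fields and checks are
# hypothetical placeholders, not Arkin's actual constraint set.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_is_combatant: bool      # discrimination: is the target a lawful one?
    expected_civilian_harm: float  # proportionality: anticipated collateral harm
    military_necessity: float      # weight of the objective being pursued

def ethical_governor(action: ProposedAction) -> bool:
    """A proposed lethal action proceeds only if no constraint vetoes it."""
    if not action.target_is_combatant:
        return False  # discrimination constraint violated
    if action.expected_civilian_harm > action.military_necessity:
        return False  # proportionality constraint violated
    return True       # no veto: the action may pass to the actuators

# The behavioral controller proposes; the governor disposes.
print(ethical_governor(ProposedAction(False, 0.0, 1.0)))  # False: unlawful target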

Programmed with the ethical, moralizing aspects of humanity but without traits like fear and self-preservation, which may cause unjust acts of force, LARS may become the eventual calculator of ethical conundrums in battle and in strategy development while still supporting the overall plan. Arkin proposes equipping robots with lethal autonomy but requiring "ethical autonomy" as part of the robotic makeup. This potentially eliminates the very need to think through moral quandaries and essentially evades dirty hands, as the decisions are left to LARS and other AI systems.

Is it possible that leaders may ask themselves "WWAID?" as in, "What would artificial intelligence do?" If Arkin is successful, then artificial intelligence, and LARS specifically, may eventually embody the ethical ideal of humanhood, a sort of modern-day technological idol that directs mankind toward ethical behavior.

Developing LARS with strong AI and an ethical governor provides an interesting twist on Walzer's acceptance that politicians are not typically held responsible for their actions because they act as officials of the state. He writes, "there is rarely a Czarist executioner waiting in the wings for politicians with dirty hands, even the most deserving among them" (Walzer).

Except that LARS may become the "executioner waiting in the wings."

Responsibility Gap

Even if Arkin is correct that LARS may one day have an effect on war that engenders more humanity than violence, the current technology is not yet able to attain those ethical ideals. It is, however, already able to target and deliver force autonomously. A key aspect of dirty hands is for leadership to acknowledge (and regret) the immoral actions taken to attain some ultimate end.

The utilization of LARS, however, removes that sense of responsibility from leadership, as LARS will be programmed to accomplish specific tasks. If the robots are to be autonomous, human leadership may end up less inclined to assume responsibility for their behavior, because the humans will be out of the "loop." This may be a very purposeful decision, a way to avoid having to engage with these kinds of moral crossroads. Who has dirty hands when immoral decisions are made and the result is unsuccessful (or successful)? Is it the commander, the programmer, the manufacturer, the politician… or is it the robot?

We may do well to ask ourselves whether we are better off with the dirty hands of political leadership, or with leadership that has circumvented even the possibility of having dirty hands by assigning moral decision-making to lethally autonomous robots.

Written by Arash Kamiar
Arash@MetroJacksonville.com
@ArashWaiting

Sources:

Docherty, Bonnie Lynn. Losing Humanity: The Case against Killer Robots. [New York, N.Y.]: Human Rights Watch, 2012. Print.

Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Farnham, UK: Ashgate, 2009. Print.

United States. Cong. House of Rep. Subcommittee on National Security and Foreign Affairs of the Committee on Oversight and Government Reform. Rise of the Drones: Unmanned Systems and the Future of War. Hearings, 111th Cong., 2nd sess. Washington: GPO, 2011. Print.

United States Army Research Office. Ethical Robots in Warfare. By Ronald C. Arkin. 2009. Print.

Walzer, Michael. “Political Action: The Problem of Dirty Hands.” Philosophy and Public Affairs, Vol. 2, No. 2 (Winter, 1973): 160-180. Print.