Specifically, I shall consider the argument that armies morally ought to adopt LAWS, so long as those systems are capable of following the laws of war. One way of arriving at this conclusion is to contend that wars fought by machines whose behavior is compliant with the laws of war would be morally better than the status quo ante machina, in which wars are fought by human soldiers who often kill indiscriminately or otherwise commit battlefield atrocities.3 Whether or not such a state of affairs would in fact be better is ultimately an empirical question, but notice that the argument ascribes great moral importance to following the laws of war. That is to say, the claim that humans would be morally better off with legally compliant LAWS depends upon the belief that the laws of war have a certain moral content and that, in following those laws, soldiers act morally, or at the very least, avoid acting immorally. This suggests that the ability to follow the law is necessary for soldiering well and, by extension, would likewise be necessary for any morally acceptable LAWS. This is an important moral claim, and it presents a number of complicated philosophical questions. For example, it seems obvious that one can follow the law yet still act immorally. If this is the case, then soldiers ought to aspire morally to more than legal compliance, even if they sometimes fail to achieve the legal standard. Rather than address such questions here, I want to proceed on the assumption that the ability to follow the law is a prerequisite for the possibility of ethical LAWS. Granting that assumption, I want to consider the practical difficulties inherent in building such a machine.

An analysis of the demands imposed on moral agents by following the law shows that the laws of armed conflict and other relevant legal guidance are variously abstract and hierarchical, qualities that make legal rule-following especially difficult. As a result, following the laws of war requires an agent to overcome conceptual challenges posed by the heterogeneity of rules, semantic variation, and the demands of global reasoning, to say nothing of the difficulty of grasping the concept of law itself. It turns out that following abstract rules is far more difficult than it is generally given credit for being, most likely because humans have failed to properly appreciate their own impressive abilities to understand and to follow rules. As Marvin Minsky once remarked, "in general, we're least aware of what our minds do best."4 Even if states ought to build law-abiding machine-weapons, realizing that goal may turn out to be quite technically demanding. None of this is to suggest that these problems are insuperable from the perspective of artificial intelligence research; ultimately, that too is an empirical question. Instead, the point is to emphasize the ambitiousness of many of the goals in artificial intelligence research and to stress the importance of remaining clear-eyed in pursuit of those goals, especially when the real-world stakes are as consequential as they are with robotic weapons.

3 Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL: Chapman & Hall/CRC 2009, p. 31. [Henceforth cited as GLB]

4 Marvin Minsky, The Society of Mind, New York, NY: Simon & Schuster 1986, p. 29.

The Status Quo Ante Machina

In his 2009 book, Governing Lethal Behavior in Autonomous Robots, Ronald Arkin makes a case for developing lethal autonomous weapons on the grounds that they might follow the laws of armed conflict more consistently than human soldiers do. Since soldiers frequently violate these laws, Arkin reasons that the standard for robotic weapons need not be perfection but rather mere improvement on the status quo ante machina. He cites an official United States Army mental health survey of soldiers returning from the war in Iraq that captures troubling attitudes toward Iraqi civilians and non-combatants, including permissive attitudes toward abuse and torture (GLB 31-2). Of course, such attitudes are hardly unique to any particular army or war. He concludes that the possibility of building machines capable of satisfying the modest but morally important standard of legal compliance is a sufficient reason to invest in this technology. Frankly, if one were looking for a good moral reason to adopt lethal autonomous weapons, it is hard to imagine a better one than their potential for reducing avoidable harms in war.

Given the imperfect standard set by human soldiers, Arkin reasons that it is at least possible that humans might one day, and maybe even someday soon, be able to build machines that do at least as well as they themselves do, and perhaps a great deal better. In contrast with human soldiers, he explains, robotic weapons would not suffer potentially error-inducing emotions such as anger, fear, or frustration; they could be programmed to exercise more conservative