what military commanders want is machines that act in accord with the law, regardless of whether they are engaged in anything like explicit rule-following. To insist on anything more than compliance, one might argue, is to miss the point of Arkin's argument altogether. This is an incredibly important objection given the recent success of data-driven methods in artificial intelligence, and I must admit to being at least somewhat sympathetic to the position. If it were possible to build machines that comply with the laws of war without explicit reference to those laws, then all parties to a conflict might still be considerably better off than they are with human soldiers.

Without fully resolving the objection, I want to voice a philosophical concern about any approach that seeks to achieve legal compliance without explicit reference to the laws themselves. On his theory of rule-following, Boghossian claims that for an agent to be engaged in rule-following, the agent must accept the rule, must act in ways that conform with the rule, and, in virtue of accepting the rule, the agent's action must be explicable and rational (ER 472). Among other things, Boghossian's theory precludes the possibility of accidental rule-following, a possibility that rises to the level of moral concern where compliance with the law of war is at stake. That is to say that as I sit at my computer typing this paragraph, I am acting in accordance with infinitely many possible rules. I am acting in accordance with a possible rule that requires me to wear shoes while using the computer and another that requires me to listen to music while sitting in my office. I am likewise acting in accordance with a rule preventing me from unjustifiably killing another human being. In practice, though, despite being compliant with each of these rules, imagined or otherwise, I am not in fact following them. I am not following them, for my acceptance or rejection of those rules neither explains nor rationalizes my behavior; the rules are not exerting influence in either permitting or restricting my behavior, and as such, those rules are not guiding my conduct. And it is worth noticing that criminal law seems deeply concerned with considerations of explicability and rationalizability; criminal law is not simply a matter of determining whether a defendant's actions conform with the law, since the law is generally quite concerned with intent.

Now, suppose a machine were trained on data that captured some set of legally acceptable actions in war, but possessed no explicit means of representing the laws themselves. I take it that such a machine would learn to comply with the law in the same way that a model like ChatGPT learns language, coming to recognize patterns over some massive corpus of data. If that were the case, then the machine would not learn that it is always illegal to intentionally target civilians; at best, it would learn something closely resembling that law. I take there to be a vast moral difference between a rule that prohibits targeting civilians categorically and one that prohibits targeting civilians ceteris paribus. The reality is that the opacity of data-driven machine learning—the inability to understand how a trained model is producing the outputs that it does—means that one can never be confident that an architecture that learns rules implicitly has, in fact, learned the proper rules. This should lead one to wonder if an implicit rule-following machine is adequate to the agential demands of choosing who lives and who dies in war.
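To make the contrast concrete, consider a minimal, purely illustrative sketch in Python. Everything in it is invented for the example: the Target class, the toy scoring function, and the threshold stand in for whatever representation a real system might use. An explicitly represented categorical prohibition can be checked directly and admits no exception; a system that has merely learned patterns over examples of lawful conduct can offer nothing stronger than a score that a proposed action resembles those examples.

from dataclasses import dataclass

@dataclass
class Target:
    is_civilian: bool

def explicit_rule_permits(target: Target) -> bool:
    # Categorical rule, explicitly represented: intentionally targeting
    # civilians is prohibited in every case, without exception.
    return not target.is_civilian

def learned_score(features: list[float]) -> float:
    # Stand-in for an opaque trained model; here, a toy weighted sum
    # whose weights would in practice be fixed by training data.
    weights = [0.7, -1.2, 0.4]
    return sum(w * x for w, x in zip(weights, features))

def learned_model_permits(features: list[float], threshold: float = 0.0) -> bool:
    # Ceteris paribus in effect: permission turns on how the case happens
    # to score, not on a prohibition the system can be said to accept.
    return learned_score(features) > threshold

print(explicit_rule_permits(Target(is_civilian=True)))  # always False
print(learned_model_permits([0.2, 0.1, 0.9]))           # depends entirely on the learned weights

The point of the sketch is not that the second design is careless, only that whatever rule it embodies is recoverable, if at all, from its weights alone, and so one cannot simply read off whether the categorical prohibition has in fact been learned.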

Relatedly, the ability to follow rules seems to depend in important ways on an agent's ability to reason prospectively, applying a law to new tasks or contexts. In the previous section, I argued that one difficulty in applying the law is extending one's understanding of the law to new cases, but it is difficult to see how one could satisfactorily do so without some explicit representation of the law itself. The reality of data-driven methods in artificial intelligence is that they are notoriously brittle; they are capable of learning an incredible number of patterns in any given data set, but struggle to extend those patterns to new tasks or to accommodate even small context shifts in their data. Put another way, these networks are effective enough at interpolating between known cases, but so far have struggled to extrapolate beyond their trained tasks and data; the more similar a new case is to trained cases, the more likely the system is to classify that case correctly. If that is the case, then it is entirely possible that data-driven methods would be technically wanting for all but the narrowest of tasks. I suppose, then, that my skepticism about the adequacy of data-driven, implicit rule-following machines is as much technological as it is philosophical, but questions regarding the appropriate use of technology to extend human agency should never be subordinated to questions of what is, in fact, possible.
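A second, equally artificial sketch illustrates the interpolation worry. The one-dimensional "cases," the labels, and the nearest-centroid rule are all invented for the example; no real targeting system looks like this. A classifier fit to a narrow range of cases handles new cases that fall between its training examples, but under a context shift it still forces an unfamiliar case into one of the categories it already knows.

import random

# Toy "training data": two classes of one-dimensional cases drawn from narrow ranges.
random.seed(0)
lawful = [random.uniform(0.0, 1.0) for _ in range(50)]
unlawful = [random.uniform(2.0, 3.0) for _ in range(50)]

centroid_lawful = sum(lawful) / len(lawful)
centroid_unlawful = sum(unlawful) / len(unlawful)

def classify(x: float) -> str:
    # Nearest-centroid rule learned from the training data.
    return "lawful" if abs(x - centroid_lawful) < abs(x - centroid_unlawful) else "unlawful"

# Interpolation: a new case falling among the known lawful cases is handled well.
print(classify(0.5))   # "lawful"

# Extrapolation under a context shift: a case far outside the training ranges is
# still forced into one of the learned categories, however inapt the assignment.
print(classify(10.0))  # "unlawful", though the case resembles nothing seen in training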

Not Even Stupid

I have argued here that following the law is actually quite difficult in practice and that the demands of