Volume 17, No. 1, Spring 2022
Artificial Intelligence in Warfare
and Affective Computing
Index and Editor's Introduction
On Reasoning, Commonsense Knowledge, and Consciousness
Ulrich Furbach
Universität Koblenz, Germany
This essay addresses the application of formal logic to commonsense reasoning. To this end, I depict the integration of an automated reasoner based on predicate calculus with a method for statistical reasoning derived from word embeddings. This combination is motivated by results from a cognitive science experiment on human reasoning. Additionally, I briefly address its links to Global Workspace Theory, currently considered one of the most prominent theories of consciousness.
Keywords: Jaspers, Karl; automated reasoning; human reasoning; Choice of Plausible Alternatives challenge; Wason selection task; knowledge graph.
Artificial Intelligence Against the Backdrop of an Adequately Differentiated Anthropology
Albrecht Kiel
Universität Konstanz, Germany
The achievements and deficits of artificial intelligence systems should be seen against the backdrop of a differentiated anthropology that encompasses not only the rational functions of consciousness, but also those of the subrational-unconscious as well as suprarational functions. These include, first, all forms of fantasies and fictions. They furthermore include complex mental achievements such as identity resulting from individuation, imagination, intuition, creativity, and the power of judgement capable of separating the relevant from the irrelevant. On the level of semiotics, a distinction must be drawn between images and archetypes, and between signs carrying unambiguous meanings and symbols carrying ambiguous or connotatively hidden meanings. The quality of word semantics, sentence semantics, and text semantics also depends on this ability.
Keywords: Jaspers, Karl; Jung, Carl Gustav; Furbach, Ulrich; Global Workspace Theory; psychological functions; consciousness; subrational pre-conscious; complex psychological functions; semiotics and semantics.
Artificial Systems In-Between Humans and Artifacts:
From Autonomous Weapons to Affective Computing
Catrin Misselhorn
Universität Göttingen, Germany
In this essay I advance the thesis that artificial systems form a distinct category positioned in-between humans at one end of the spectrum and artifacts at the other. I argue that these systems are not mere artifacts, but that they need to be considered as agents. They can even be moral agents in a functional sense, although they fall short of full moral agency as it pertains to humans. This view is elaborated with respect to lethal autonomous weapon systems designed and trained to act as moral agents. Particularly regarding such systems, the question arises as to whether decisions about life and death should be left to machines. I discuss three arguments to the effect that such decisions should not, even in war, be delegated to artificial moral agents. From this, the crucial question arises whether and how humans and machines can cooperate effectively. This brings in a second characteristic that accounts for the special status of artificial systems in-between humans and artifacts: they are relational artifacts capable of entering into emotional and social interactions with humans. Artificial systems cannot really be equal participants in social relationships, for they lack the necessary abilities, such as consciousness and intentionality; yet they can simulate these well enough to profoundly challenge established social practices of human relationships.
Keywords: Artificial morality; machine ethics; autonomous weapon systems; jus in bello; responsibility gap; emotional AI; affective computing; relational artifacts.
Robots, Emotions, and Interobjectivity
Jörg Noller
Ludwig-Maximilians-Universität München, Germany
Catrin Misselhorn develops a non-reductive account of artificial intelligence as something that has become an aspect of the human lifeworld; her approach also takes into account the importance of emotions for artificial intelligence systems. Misselhorn thereby argues for a middle ground between a form of weak technological reductionism and a strong transhumanist utopianism. While I am sympathetic to this general approach, I propose an alternative life-worldly interpretation of artificial intelligence systems. To this end I develop an interobjective rationale regarding artificial intelligence that does not merely refer to robots as individual entities, but rather to the way in which artificial intelligence interferes with human activities in everyday life. Hence, I argue that artificial intelligence should be conceived of not in terms of robotics or devices but rather in terms of situated processes and capacities that function from within virtual realities. On this alternative view, artificial intelligence is neither an object nor a subject but a self-reflective process that has the potential to shape human interactions with reality and society, and to enhance or restrict individual and collective freedoms.
Keywords: Misselhorn, Catrin; artificial intelligence; robots; emotions; lifeworld; transhumanism; virtual reality; interobjectivity; freedom.
Artificial Systems—Agents or Processes?
Catrin Misselhorn
Universität Göttingen, Germany
In this brief reply to Jörg Noller's essay "Robots, Emotions, and Interobjectivity," included in this issue of Existenz, I highlight three central concepts in an effort to contrast some differences in addressing human agents' actions versus artificial agents' actions. These concepts are processes, life-worlds, and virtual realities. While I agree with many of Noller's observations regarding the first two books of my trilogy, our positions differ significantly regarding the role of humanism in matters concerning artificial intelligence.
Keywords: Künstliche Intelligenz und Empathie; artificial intelligence; empathy; relational artifacts; life-world; virtual reality; subject-object distinction.
Artificial Intelligence and the Just War Proportionality Principle
Joseph O. Chapa
United States Department of the Air Force
In this essay, I evaluate the relationship between the uncertainty imposed by modern applications of artificial intelligence and the jus in bello just war proportionality principle. I look first at the structure of the proportionality principle and argue that, whenever possible, military institutions have moral obligations to improve a commander's ability to make accurate predictions about the goods and harms that will result from a contemplated military action. I then address the uncertainty and unforeseen consequences that can result from the use of modern, AI-enabled systems. I argue that, in the face of this potential uncertainty, and based on the previous claim, military institutions have an obligation to reduce the uncertainty that AI-enabled systems can introduce in the military context. Finally, I argue that there are two broad means of reducing that uncertainty: the first is to improve the algorithm's performance; the second is to increase commanders' training and education in artificial intelligence technology and operational details so that they can more capably recognize and predict system flaws and failures.
Keywords: Proportionality; just war; jus in bello; artificial intelligence; machine learning; uncertainty; evidence-relative; fact-relative; deep learning.
LAWS and the Law: Rules as Impediment to Lethal Autonomy
Kevin Schieman
United States Military Academy, West Point
The ability to follow the laws of war is arguably a necessary, if not sufficient, condition for the morally acceptable use of lethal autonomous weapons. Yet following laws and other types of abstract rules is far more demanding than one may realize. This type of rule-following requires an agent to overcome three main difficulties: the heterogeneity of rules, semantic variation, and the demands of global reasoning. Taken together, these demands set an ambitious agenda for research focused on building machines capable of satisfying even a modest standard of legal compliance.
Keywords: Military ethics; machine ethics; lethal autonomous weapons; jus in bello; artificial intelligence; law; moral agency; machine agency.
Technology as a Challenge to Peace
Robert H. Latiff
University of Notre Dame
In this essay I describe the issues that led me to write Future Peace: Technology, Aggression, and the Rush to War. My primary concerns are the ubiquity of weapons technologies that too few really understand and the speed with which those technologies are being adopted and deployed by military forces around the world. In the context of an increasingly tense global standoff among superpowers and a growing number of smaller-scale, but equally deadly, conflicts, we have a military that is stretched thin, deployed excessively by technologically illiterate leaders, and enabled by an ambivalent, unengaged populace in thrall to promises of super-weapons and technological superiority. Technology, while mostly positive, is seductive and addictive, and its users often overlook potential downsides. Political leaders often fall prey to several factors that lead them to engage too frequently in violent conflicts.
Keywords: Technology superiority; artificial intelligence; autonomous weapons; poor education; military deployments; nuclear weapons; test and evaluation.
Risks of Weaponry Integrated with Artificial Intelligence
Gregory M. Reichberg
Peace Research Institute Oslo, Norway
Assessing the risks associated with automated weapon platforms is a major theme in Robert Latiff's book Future Peace. While the development of artificial intelligence has been an aspect of weapon design over the last three decades, the newest forms of this technology, which are based on machine learning, have considerably raised the stakes, leading to heated debates regarding lethal autonomous weapon systems and algorithmic warfare. With good reason, Latiff points out that the high velocity of weapon systems integrated with this technology renders their effects increasingly difficult to manage for the service personnel responsible for targeting decisions in combat settings. Safety must be rethought in the age of AI, especially when new weapon systems are rolled out in an international context in which states actively pursue technological superiority over their peers.
Keywords: Latiff, Robert H.; arms control; artificial intelligence; machine learning; risk; safety; weaponry.
Cheap War and What It Will Do: Comments on Latiff's Future Peace
Ryan Jenkins
California Polytechnic State University, San Luis Obispo
Major General Robert H. Latiff's exploration of the automation of judgment in future warfare is urgently relevant. In this review of his book Future Peace, I argue that technological advancements make war appear cheaper and thus more attractive, thereby increasing the likelihood of war, just as previous innovations that allegedly democratized access to increased combat capacity have done. I suggest viewing this as a problem of distribution: how can the burdens of warfare be communicated to the public more clearly and shared more fairly? Hence, I explore the idea that war should be subject to a plebiscite, which is then linked directly to a draft.
Keywords: Latiff, Robert H.; future of warfare; automation of judgment; artificial intelligence ethics; military ethics; technological determinism; political economy of war; ethics of conscription.
Accidental Escalation and the Future of War and Peace
Patrick Bratton
United States Army War College
Robert Latiff's book Future Peace is an excellent survey not only of the many challenges facing the United States, but also of how those challenges interact. Latiff gives a sober warning regarding the dangers of increased automation and how these may lead to unwanted conflict, adding to a logistical understanding of the offense-defense balance and of conflict escalation. This review highlights a lack of engagement with Scott Sagan's normal accidents theory and asks whether Future Peace has too much of an American focus.
Keywords: Latiff, Robert H.; Sagan, Scott; artificial intelligence weapons; Revolution in Military Affairs (RMA); C4I; offense-defense balance theory; normal accidents theory.
From Future War to Future Peace: A Critique of USAF Major General Robert Latiff's Account of the Role of New Technologies in War
George R. Lucas, Jr.
United States Naval Academy, Maryland
In this essay, I consider Gen. Robert Latiff's overall position on increased reliance upon presently emerging military technologies. I argue that understanding the normative positions (largely cautionary in nature) in his most recent book depends upon descriptions of the kinds of military technologies in question, their current and likely future use in combat or combat readiness, and the evidence he adduces for his fear that the globalization and proliferation of these technologies will make wars easier to fight, and therefore more likely to occur. Those descriptions in his first book from 2017, however, do not provide sufficient warrant for the normative conclusions in his second, most recent book from 2022. Nevertheless, both books provide a wealth of insight into the proliferation and complexity of new military technologies, their impact on the violence and destruction caused by armed conflict, and accordingly, their current and projected uses in present and future conflicts.
Keywords: Latiff, Robert H.; Moore's Law; military technology; grey war; tactical nuclear weapons; enhanced human warriors; diplomatic compromise.