Volume 17, No 1, Spring 2022      ISSN 1932-1066

Accidental Escalation and the Future of War and Peace

Patrick Bratton

United States Army War College

patrick.bratton@armywarcollege.edu

Abstract: Robert Latiff's book Future Peace is an excellent survey not only of the many challenges facing the United States, but also of how those challenges interact. Latiff gives a sober warning regarding the dangers of increased automation and how it may lead to unwanted conflict, adding to the understanding of the offense-defense balance and of conflict escalation. This review highlights the book's lack of engagement with Scott Sagan's work on normal accidents theory, and asks whether Future Peace has too much of an American focus.

Keywords: Latiff, Robert H.; Sagan, Scott; artificial intelligence weapons; Revolution in Military Affairs (RMA); C4I; offense-defense balance theory; normal accidents theory.

In professional military education, we read many contemporary works on future trends in security and conflict.1 Educators need to find books that are accessible to a national security professional, and that are not written only for academic specialists. The commonality of these works is that they are all trying to find "the next big thing," so to speak, that is driving global events and the nature of conflict. I have been reading these books since the 1990s, and many simply fall short of this intent. Most have one idea that probably was a good Foreign Affairs or Foreign Policy article, but not quite enough to be an entire book.

Yet Robert Latiff's book Future Peace differs from them.2 It gives a holistic review of the major challenges and trends of the past twenty to thirty years. Latiff covers a range of topics including cyber warfare, artificial intelligence, nationalistic militarism, the increased emphasis on competition across the spectrum, including gray zone conflict and hybrid war, and the internal challenges that many liberal democracies are facing. Latiff focuses on how these factors interact with today's unprecedented technological advances. As he states early in the book, he intends

to draw attention to the fact that we...are not paying enough attention to the growing influence complex technologies are having on warfare and, critically, the role technology plays in the motivations of our leadership to employ military forces. [FP xiii]

Central to Latiff's argument is that military leaders are giving increasingly more responsibility and power to machines. This is coupled with the challenge that those same leaders have to make decisions about the use of force ever faster. He writes:

One of the problems with such speedy decisions, however, is that they may crowd out the opportunity for diplomatic efforts or negotiations and may lead directly to a military solution. [FP 13]

Future Peace also adds to an understanding of what future conflicts will look like. There are two opposing schools of thought about the future of conflict. Proponents of the first view believe that advanced technology, artificial intelligence, sensors, and communication networks will give commanders unprecedented Command, Control, Communications, Computers, and Intelligence (C4I) capabilities compared to previous wars. Given the need for decision makers to act quickly, the problem will not be obtaining information but rather the ability to process and act on it; the worry is that the sheer amount of information could overwhelm commanders. The second view is that stealth technology, electronic warfare, cyber-attacks, drone swarming, and so on will overwhelm, disrupt, and degrade C4I, making it difficult to understand the battlespace and to communicate with the respective units. The battlefield in all domains will be a mess, with commanders not only unaware of what the enemy is doing, but also unable to communicate with their own troops engaged in combat. Latiff indicates that the second view will likely be more dominant in the future.

The arguments presented in Future Peace add to an understanding of the offense-defense balance theory and of accidental escalation. Latiff highlights an issue that is often overlooked today, namely the dangers of escalation when conflict is perceived as being too easy. International relations theorists such as Robert Jervis have long debated the question of the offense-defense balance: if the offense is perceived to be dominant over the defense, then war is more likely, given that it is harder to defend.3 Similarly, Stephen Van Evera writes:

underestimates of the price of war are a common companion—and often a pivotal element—to decisions for war.4

Beyond the academic debate, this issue was raised by critics of the Revolution in Military Affairs (RMA) movement of the late 1990s and early 2000s. The proponents of this movement believed that massive improvements in sensors, long-range precision fires, stealth technology, and other technological advancements would transform the conduct of war. This position was most classically argued by Admiral William Owens.5 The RMA offered a seductive vision of a low-cost war of surgical strikes and minimal casualties. Critics of the RMA worried that it would lead policymakers to choose military force more often, since they would believe it could be used easily and with minimal risk. Unfortunately, this debate faded during the 2000s due to the increased focus on terrorism and irregular warfare.

Raising awareness of this danger, and of the even greater risks it could pose today, is perhaps the most important argument in Future Peace. Latiff argues that the combination of high technology (especially long-range precision fires), an all-volunteer force removed from the rest of society that does the fighting, and the fact that American wars are fought "over there," that is, far away from the homeland, all make conflict and war abstract and, in the words of Colin McInnes, more of a spectator sport.6

Future Peace adds an additional factor to the understanding of the offense-defense dominance theory. The traditional view is that when militaries and policymakers believe war can be conducted easily, more conflicts will occur. While it is true that American leaders, both civilian and military, take immense pride in the quality and capability of the American military, having worked in professional military education for twenty years I can confirm that it is rare to hear military officers claim that future conflicts will be easy. This seems to indicate that United States leadership expects future conflict to be difficult, which, according to the traditional understanding of the offense-defense balance, should make war less likely.

It should also be added that it is not only the United States that gets to decide whether and when a conflict will happen, and it is not only the United States that is working on these systems. As Latiff mentions, China, Russia, and other states are working on similar high-tech weapons systems. It would be relevant to assess whether their views about future conflicts are optimistic or pessimistic. Moreover, Future Peace was published before the Russian invasion of Ukraine, a war that has not gone according to Russian plans and has proved far more difficult and costly for both sides than anticipated. The book would have benefitted from a wider lens that includes the respective positions that Russia, China, and other actors take regarding the future of conflict.

It is important to stress, however, that for Latiff the real danger is not that leaders deliberately choose to go to war because they believe the offense is dominant; the danger lies in accidental escalation. Future Peace adds clarity to the offense-defense theory insofar as it points out that, beyond the perception of offensive dominance, conflict is made more likely by the increased automation of decision-making combined with the need for rapid, or even real-time, decisions about the use of force. This means that conflict becomes more likely even when policymakers and leaders do not want to initiate it.

This understanding leads up to the book's great warning, namely the dangers of unintentional or accidental escalation. In a 2008 RAND study, the authors offer the following definition of "escalation":

Escalation can be defined as an increase in the intensity or scope of conflict that crosses threshold(s) considered significant by one or more of the participants.7

Most thinking about escalation treats it as a conscious choice by one of the participants in a conflict. In the same study, the authors argue:

It is almost axiomatic that weapons do not escalate; rather, people escalate with weapons. [DT xv-xvi]

Latiff warns that as leaders come under greater and greater pressure to make quick decisions about the use of force, automated systems can enable actions that both foreclose options for leaders and automate the escalation of violence. Even if leaders themselves want to control escalation, automation could increasingly remove that choice from them. Autonomous systems, including weapons systems, could employ force without a human decision-maker being involved. Even when the decision still rests with a person, the overwhelming amount of information and the need to decide quickly could lead to the unleashing of a level of violence that no one desires.

It should be noted that Latiff's discussion of how high technology could trigger accidental escalation could have been strengthened by engagement with Scott Sagan's pioneering work on the subject. In 1993 Sagan published The Limits of Safety, drawing on his previous work on nuclear targeting.8 With extensive historical evidence, Sagan's thesis challenged one of the assumptions of deterrence theory and, by extension, contributed to the proliferation debate. In the 1980s and 1990s there was a robust debate between proliferation optimists, who argued that, paradoxically, a controlled spread of nuclear weapons would bring greater stability to the world, and pessimists, who believed that proliferation would result in increased instability and risk. Just as nuclear weapons kept the Cold War cold, so, the optimists argued, the nuclearization of rivalries in other parts of the world would bring increased stability. Sagan's work problematized the conventional understanding of the degree of stability during the Cold War.

Using declassified documents, Sagan examined several cases in which accidents during Cold War crises, among them the Cuban Missile Crisis and Able Archer 83, could have led to nuclear escalation. Given how close the United States and the Soviet Union came to escalation through such normal accidents, he calls into question the assumption that more nuclear weapons will lead to increased stability.

To make his case, Sagan used two characteristics of complex organizations identified in Charles Perrow's book Normal Accidents: Living with High-Risk Technologies: interactive complexity and tight coupling (LS 32-5). Organizations characterized by interactive complexity are

likely to experience unexpected and often baffling interactions among components, which designers did not anticipate and operators cannot recognize. [LS 33]

Tight coupling refers to systems in which processes are time-dependent, interactions occur quickly, the production process keeps everything in continuous motion, and delays cannot be tolerated. Sagan warns in this context:

If a system has many complex interactions, unanticipated and common-mode failures are inevitable; and if the system is also tightly coupled, it will be very difficult to prevent such failures from escalating into a major accident. [LS 36]

To explain the dangers of escalation, Latiff uses the analogy of the United States military as a "giant armed nervous system" (FP 9). Moreover, he argues that, beyond the United States, this is an age of global connectivity. In this world of global interconnectivity, heightened aggressive behavior, and increasingly automated decision-making systems,

a dangerous situation [is being created], a sort of unstable equilibrium in which an unexpected event could precipitate a failure and a resort to conflict. [FP 35]

In many ways Future Peace makes an excellent case for Sagan, Perrow, and other proponents of normal accidents theory, and the book would have benefitted from engaging with this theory.

There are a couple of other questions or criticisms that can be made of the text. Paradoxically, the book's greatest strength also leads to one point of criticism. Latiff offers a tour de force survey of all the factors that may lead to more violence: the American fascination with technology, aggressive artificial intelligence, the arms industry, competition for resources and climate change, nationalism, militarism, the media, low levels of education compounded by a lack of civics, and so on. While Latiff's description of all these variables is quite well executed, he does not address how these factors are linked with each other. Moreover, the reader wonders whether all these factors are equally significant, or whether some are more important than others.

This prompts yet another question, namely whether this is merely a problem for the United States or whether it poses a danger for all states. Latiff's focus is predominantly on the United States, and his examples are frequently American-centric. Yet, by definition, international conflict involves actors other than the United States, ranging from allies and partners to adversaries and non-aligned states. This raises the question: even if the United States took many of the steps Future Peace recommends in order to dampen the accidental rush to war, would that be enough, especially if other actors do not do the same? For example, the United States could improve the quality of its media, its teaching of civics, and the relationship of the military and government with the arms industry, or restrict the types of military artificial intelligence it fields. But what would these measures mean if other powers do not follow suit? As Future Peace indicates, it is not only the United States that is rapidly developing advanced technologies, artificial intelligence, and other systems.

My comments and observations should not be seen as too critical, however. Future Peace is an important work that deserves to be read. It offers perhaps one of the best surveys of all the major debates in American security policy today. In addition to general readers looking for information regarding the impact of technology on conflict, this would be an excellent book for university classes in security studies.

1 The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the United States Government.

2 Robert H. Latiff, Future Peace: Technology, Aggression, and the Rush to War, Notre Dame, IN: University of Notre Dame Press, 2022. [Henceforth cited as FP]

3 Robert Jervis, "Cooperation Under the Security Dilemma," World Politics 30/2 (January 1978), 167-214.

4 Stephen Van Evera, The Causes of War: Power and the Roots of Conflict, Ithaca, NY: Cornell University Press, 1999, p. 31. See also pp. 14-34.

5 Admiral William Owens and Edward Offley, Lifting the Fog of War, New York, NY: Farrar, Straus, and Giroux, 2000.

6 Colin McInnes, Spectator-Sport War: The West and Contemporary Conflict, Boulder, CO: Lynne Rienner Publishers, 2002.

7 Forrest E. Morgan, Karl P. Mueller, Evan S. Medeiros, Kevin L. Pollpeter, and Roger Cliff, Dangerous Thresholds: Managing Escalation in the 21st Century, Santa Monica, CA: RAND Corporation, 2008, p. xi. [Henceforth cited as DT]

8 Scott D. Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons, Princeton, NJ: Princeton University Press, 1993. [Henceforth cited as LS]