
Volume 17, No 1, Spring 2022      ISSN 1932-1066

Technology as a Challenge to Peace

Robert H. Latiff

University of Notre Dame

rlatiff@msn.com

Abstract: In this essay I describe the issues that led me to write Future Peace: Technology, Aggression, and the Rush to War. My primary concerns are the ubiquity of weapons technologies that too few really understand and the speed with which those technologies are being adopted and deployed by military forces around the world. In the context of an increasingly tense global standoff among superpowers and a growing number of smaller-scale, but equally deadly, conflicts, we have a military that is stretched thin, deployed excessively by technologically illiterate leaders, and enabled by an ambivalent and unengaged populace that is in thrall to promises of super-weapons and technological superiority. Technology, while mostly positive, is seductive and addictive, and its users often overlook potential downsides. Political leaders often fall prey to several factors that lead them to engage too frequently in violent conflicts.

Keywords: Technology superiority; artificial intelligence; autonomous weapons; poor education; military deployments; nuclear weapons; testing and evaluation.


Although it was not my original intent, Future Peace ended up being something of a follow-up to my first book, Future War: Preparing for the New Global Battlefield.1 That book was itself a by-product of a highly popular undergraduate philosophy course, entitled "The Ethics of Emerging Weapons Technologies," that I have taught for over a decade at the University of Notre Dame. When the class was first introduced in 2010, it was unique. Drones, autonomous weapons, human enhancement technologies, lasers, neuroscience, and other technologies applied to warfighting were, if not new, then in the relatively early stages of their development and use. While the technologies have matured, the issues that pertain to them remain. The course, and the book, examined these modern warfighting technologies in light of the concepts of Just War Theory and the Laws of Armed Conflict. They discussed the potentially serious downsides of ignoring the ethical and moral issues surrounding war and weaponry. Future War concluded with a discussion of public apathy concerning issues of war and included an appeal for more citizen involvement.

As it happened, Future War was written during 2015-2016 and appeared as a new, more aggressive, belligerent, and potentially dangerous political administration assumed power in the United States. It also coincided with a rapidly growing interest in the technologies of massive datasets and artificial intelligence. Future Peace, written during the period 2018-2020 while I was in residence at the Notre Dame Institute for Advanced Studies, was intended to again sound the alarm on the growing application of new technologies, but it focused heavily on the rapidly growing field of artificial intelligence (AI) and specifically on its application to command-and-control and decision-support systems.2 Future Peace questions the overreliance on technology and examines the pressure-cooker scenario created by the growing animosity between the United States and its adversaries, the globally deployed and thinly stretched United States military, the capacity for advanced technology to catalyze violence, and the American public's lack of familiarity with, or interest in, these topics.

Future Peace describes the many provocations to violence, how technologies are abetting those urges, and how political leaders all too cavalierly react to those urges and deploy the military. It explores what can be done to mitigate not only dangerous human behaviors but also dangerous technical behaviors. The book describes the highly connected nature of modern military forces and the complex command and control systems they employ, characteristics that lend themselves to potential mistakes and rapid escalations of conflict. It highlights the worldwide deployment and frequent use of the military by civilian political leaders. Attempting to understand the proclivity to resort to military force, the book describes some of the factors or urges that cause people and their national leaders to resort to war so frequently. While Future War and its focus on weapons technology issues were more about jus in bello—that is, justice in war—Future Peace is mostly concerned with the justice of decisions to go to war, or jus ad bellum.

The book dwells on the tendency of advanced countries to rely on technology for the solution to any problem, be it in the civil or the military sector. I argue that leaders and decision makers are often seduced by the promise of new technologies and weapons, and they allow their fascination to skew their judgment—and possibly to overlook important arguments against war. New technologies often have unintended and unanticipated consequences. Sometimes what appear to be positive interventions nonetheless create secondary and tertiary problems that could have been anticipated. Dietrich Dörner, in his book The Logic of Failure, describes how things go wrong when one focuses on just one element in a system of complex interrelationships, over-generalizes, applies corrective measures too aggressively, and simultaneously overlooks the potential side-effects of one's actions. Dörner writes:

The primary such mechanism is extrapolation from the moment. In other words, those aspects of the present that anger, worry, or delight us the most will play a key role in our predictions of the future.3

The United States has historically turned to technology to solve problems, especially since the nineteenth century. Electrification, the development of plastics, and drug research have all led to a better quality of life. We view most problems, if not all, as having technological solutions. And where there is no problem to be solved, we look to technology to improve life. Problems that are caused by human behavior—such as climate change—really do not have technological solutions, notwithstanding a belief by some that humans can artificially alter the weather. Militarily, we have won wars with new technologies. From tanks and submarines, to radar and nuclear weapons, to stealth and precision-guided bombs, we have depended on technology and shown that we can create whatever we need to win, which has led to the likely unfounded assumption that whatever is needed to prevail will be developed and manufactured.

In 1954 Jacques Ellul wrote that technology has become so integral to human life that it determines the working of society rather than society determining technology, as it should be.4 He argued that technology would prevail over any attempt to prevent its development. The early days of nuclear weapons serve as a good example of this. The atomic bomb project had gained such momentum that it continued toward its conclusion even following Germany's surrender, even though the original motivation, Germany's pursuit of such a device, had disappeared. In the current day, new technologies, for instance those equipped with artificial intelligence, appear and race through society. Computer and communications technologies come at a rapid pace. Social media systems consume an ever-increasing amount of the public's waking hours. They often control public narratives. Their implementation seems inevitable.

Humans are quite unprepared for the potential consequences of new technologies. The inner workings and effects of some cyberattacks are difficult enough to understand. When immensely complex software systems become widely distributed, even ubiquitous, it will be challenging or impossible to correct errors or to resolve the issues they create. When modified viruses (whether biological or digital) escape or are weaponized, humans have no idea what the consequences will be and therefore no idea how to deal with them. What can be done now is to attempt to think ahead to the consequences before proceeding too far with development and deployment.

The U.S. Department of Defense is accelerating its adoption of data, analytics, and artificial intelligence to generate decision advantage across the spectrum, from the boardroom to the battlefield. Intelligence analysis is the logical starting point for military uses of artificial intelligence, given the sheer amount of data that leaves human analysts overwhelmed. Consider, for example, the enormous amount of moving imagery collected by drones on the battlefield, or cyberspace and the electromagnetic spectrum, where attacks can spread at literally the speed of light and with a complexity that no human brain can follow. Planners are also considering AI for cognitive warfare, an emerging concept that attempts to affect the way in which humans process the information presented to them. A few voices are now being raised regarding the potential dangers of this technology and of machine learning. In the past, scientists have demonstrated a willingness to self-police until safety techniques could be worked out. They should do so again. There needs to be a broad and very public debate about the use of these technologies in the civilian world, but especially in the military world. We can, and should, also initiate and engage honestly and earnestly in arms control-type discussions on using these technologies for military purposes.

Having established the high-stress, high-antagonism environment in which the world's armed forces operate, I describe some of the factors, calling them "urges," that lead leaders and populations to engage in frequent conflicts. Among the contributing factors are large stocks of weapons in arsenals just waiting to be used, a fascination with technology and idealized super-weapons, and a lack of education. Philosophers, religious leaders, military leaders, and rulers throughout history have warned that arms races and weapons buildups inevitably lead to wars. For instance, at the First Vatican Council in 1870, forty fathers of the Council submitted a document, the Postulata, addressing the dangers of the rapidly growing military establishments around the world. The document is reprinted in John Eppstein's book:

The present condition of the world has assuredly become intolerable on account of huge standing and conscript armies. The nations groan under the burden of the expense of maintaining them. The spirit of irreligion and the forgetfulness of law in international affairs open up an altogether readier way for the beginning of illegal and unjust wars.5

As mentioned earlier, leaders who invest in superweapons do so with the implicit intention of using those weapons. Consider the example of J. Robert Oppenheimer. Len Giovannitti and Fred Freed report that when they interviewed Oppenheimer in 1964 concerning the use of the atomic bomb and asked whether there was ever any serious discussion about not actually using it, Oppenheimer is said to have replied that "the decision was implicit in the project."6

One aspect that clearly contributes to war and violence is populations with poor education and a lack of critical thinking skills. Seymour Lipset cites several studies conducted between 1948 and 1958 that show strong, direct correlations between low levels of education and a tendency toward less tolerant political attitudes. Lipset also finds that low education levels correlate highly with racism, extreme nationalism, and xenophobia.7 General George C. Marshall, former Army Chief of Staff, former Secretary of State, and winner of the Nobel Peace Prize, said in his Nobel acceptance speech that education was an important factor affecting peace and security, and he suggested that schools should be more scientific and less nationalistic in teaching about the past circumstances that have led to war. Marshall wanted students to understand the conditions that had led to past tragedies, without the influence of national prejudices.8

The United States military is increasingly an institution apart from the American public, which cares little, and knows even less, about weapons technology beyond the so-called gee-whiz factor, and which pays scant attention to the frequent deployments of U.S. troops. Mostly, the public is ambivalent and tends to ignore military issues. Since the public is so minimally engaged with the military, it has become mainly a tool of politicians. Increasingly, politicians represent only a segment of society, and they use the military for their own purposes, namely to advance their policies and goals. The public tends to ignore what does not affect them—until it affects them, or they think it does. Then they react poorly, with too little thought, calling on their leaders to do something. The 2003 Iraq war is a good example of this dynamic. Iraq represented no credible threat to the United States, yet we allowed politicians to whip up a war fervor and commit soldiers to a war in a foreign land. After thousands of deaths and hundreds of billions of dollars expended, the American public is worse off—or at least no better off—than before. Similarly, there are continuing calls by some for military action against Iran. The global consequences of such a conflict would, without doubt, be enormous and severe. As always, the American people will not go to war; the all-volunteer military will.

In Future Peace I question why it is that the United States, and other advanced countries, are so militaristic. Why do leaders so often, and so freely, resort to violence and war? I also question why it is that military personnel are all considered heroes—even those who do not engage in combat—while those who argue for peace are somehow considered disloyal and subversive. I focus on important historical figures who argued for peace and against war and who were treated poorly—even jailed—in consequence of their efforts. I then go on to describe some ways in which we might reverse these dangerous trends.

In summary, Future Peace depicts a high-technology, increasingly artificial-intelligence-mediated military that is stretched thin, over-used by politicians, and essentially ignored by the populace. It is a military and a public that are entranced by technology and addicted to it, ignoring its potential dangers while being seduced by its promises.

One of the comments made by Gregory Reichberg regarding Future Peace was that it portrayed all artificial intelligence systems as the same, tarring the entire AI endeavor with the same negative brush.9 Given the myriad operational uses of artificial intelligence and the different kinds of AI systems, Reichberg suggests that a more nuanced discussion of the specific types of AI is needed. This is a valid comment. Not all artificial intelligence systems are equally worrisome. Some are so-called expert systems, which are based on large amounts of well-behaved data and collected professional expertise coded into the systems. Others are neural networks and machine learning systems, trained on vast amounts of extant, likely not so well-behaved, potentially biased, and possibly sparse data. Examples of the latter are large language models, which depend on ingesting massive amounts of arbitrary, uncurated data. Artificial intelligence systems that are used in more predictable, less chaotic fields (such as logistics), where the data come in easily understood and curated forms, are less susceptible to spoofing, hallucination, and a host of other problems that plague data-intensive systems.

Another comment concerned the adequacy of the United States Department of Defense (DOD) efforts to control and manage artificial intelligence utilization. While it is correct that the DOD acknowledges the risks of these systems, Reichberg asks whether its efforts are sufficient. DOD actions in this arena are positive as far as they go, yet acknowledging a problem and actually doing something about it are two quite different things. Since 2012 DOD has published policy on autonomy, but it has yet to issue specific guidance in the form of regulations. It publishes principles and guidelines regarding artificial intelligence uses, and talks about AI governance, but stops short of establishing firm requirements.10 European efforts in this area can be criticized on the margins and for some of their elements, but they are specific and enforceable. Most DOD research is accomplished by contractors and, as would be expected, if a requirement is not in the contract, words are meaningless. To date, DOD has not issued regulations or contractual language to be followed by its industrial base in regard to artificial intelligence use. Reichberg also asked, given the criticism regarding AI, whether the international community should not just ban it in weapons. Arguably, such a ban would be neither feasible nor advisable. Such weapons have utility in combat and could provide the forces that field them with an advantage. In the proper battlefield circumstances and under proper command and control, they have a role in warfare. However, in the United States it is advisable to implement strict regulatory, contractual, and operational requirements to be followed in the research, development, testing, and deployment of artificial intelligence systems. As to the question of whether the risks associated with these systems outweigh the benefits or vice versa, I do not think this question can be answered as a generality. It depends on the application.

Patrick Bratton suggests that I use a wider lens, asking how others, particularly Russia and China, feel about contemporary warfare and new weapons equipped with artificial intelligence.11 Regarding the future of conflict, Russia has made its feelings rather obvious by its actions. They seem willing to engage in all forms of conflict, from hybrid warfare to open combat, to terror strikes on civilian targets. They are also seemingly willing to employ any type of weapon, up to and including nuclear, without regard to moral issues. China's policy is to prepare for war, and they are modernizing their military at a blistering rate. While their rhetoric is increasingly strident and their actions in the South China Sea are more aggressive, they nonetheless seem somewhat more nuanced and restrained.

Bratton also suggests that I should have referred to Scott Sagan's work regarding arms proliferation and its effect on the risk of conflict.12 I agree that Sagan's work is very pertinent here. At the time of writing Future Peace, I was aware of some of Sagan's work on nuclear weapons, just not this particular study. Sagan does indeed make points comparable to mine. Specifically, he concludes that the more weapons there are in the hands of countries, the more dangerous the situation becomes. This is a counter-argument to those who argue that a widespread proliferation of nuclear weapons would act as a deterrent to all others. While this does not necessarily equate to increasing the frequency of conflict, it does not deter it either and, as Sagan suggests, the presence of such weapons or technology in the hands of many likely raises the probability of conflict.

Bratton goes on to ask whether, if the United States were to do what I suggest in Future Peace, it would be sufficient. He questions whether it would be successful and how other countries might respond. To suggest that if the United States took all the actions I recommend, it would be enough to make the world a peaceful place, would be unrealistic. However, the United States is and will remain a key, if not the key, player globally for a long time to come. Often, what the United States does provide is leadership and a way forward for other countries. Sadly, however, it is frequently our own aggressiveness and our investments in new weapons technologies that prompt others to do the same.

Ryan Jenkins makes the point that I raise a lot of issues but that it is unclear whether they are equally significant or whether they are related.13 Obviously, I feel they are all important, or else I would not have mentioned them. Some, such as arms control, are solvable provided there is the political will to do so. Others, such as low education levels or conscripted service, would likely take generations to solve.

In response to my discussion of conscription in Future Peace, Jenkins suggests an interesting concept for a national vote on the draft, in which it would be implemented if fifty percent, plus one, of the public agreed. He asks if this is workable and fair. It is not obvious to me that such a scheme would be any more workable than a simple draft. The concept seems fair enough to those who would vote in favor of it. I do not think, however, that being fair is the point. It would be no fairer than a complete conscripted service scheme and would, as he points out, excuse those not voting in favor of it. My point is that we should require everyone, no exceptions, to serve. Such draconian measures, more consistently implemented than was the case in the past, would force the issues of war and peace to the forefront of the public's consciousness. Realistically considered, regardless of whether it is total or fifty percent, any conscription is highly unlikely to be introduced in the United States absent a direct attack on the homeland. A public discussion, however, needs to occur.

Concerning the discussion of cheap war, I continue to believe that the cheaper conflict becomes, the more likely it is. Conversely, the more expensive it is in both financial and personal terms, the more difficult the decisions become. It may take time, as it did for the United States in Vietnam and for the Soviet Union in Afghanistan, but eventually the cost will become more than the aggressor is willing to pay.

I think George Lucas' main criticism is that there is a lack of evidence that technology causes more war, an assertion, he notes, that seems to be merely accepted by the other reviewers and other writers.14 He notes that this conclusion is in fact only unproven speculation and that wars are fought for political reasons. I could not agree more. Technology, he says, affects the degree of violence and the savagery, but not the frequency. I am not sure that I explicitly said that technology indeed causes more wars. If I did so, I have sent the wrong message. What I believe, and I agree that it is hard to prove, is that an unquestioning belief in the abilities of new weapons technology contributes to leaders' decisions to engage in conflict. As Lucas correctly notes, it is hard to prove a counterfactual. In support of my assertion, I cite Russia's Vladimir Putin as an example. In the months preceding Russia's invasion of Ukraine, he gleefully announced a series of new super-missiles, which he claimed would make Russia invulnerable and unstoppable. He also famously cautioned that the country that prevailed in artificial intelligence development would rule the world and that therefore no one should be allowed to gain a monopoly.15 It could be argued that, if he actually believed what he was saying, it may have influenced his decision to invade. As a counter-example, Lucas points out that nuclear weapons are a case where the presence of the technology has not led to war. True enough, if one discounts their initial use by the United States, but as with all other arguments about nuclear weapons, I deem them a special case that cannot be considered alongside other, less existential, technologies.

As another counter-argument of sorts, Lucas cites the case of precision-guided munitions (PGMs), noting that their introduction brought about dramatic decreases in collateral damage and non-combatant casualties. In that sense, PGMs have had a positive ethical outcome, but it is not clear to me that they helped stop a war or that the precision argument has anything to do with preventing one. On the contrary, while they serve to limit collateral damage, their increased usage may in fact offset their benefits. The worldwide investment in PGMs is expected to grow at an annual rate of almost six percent from 2022 to 2030.16 It is an unresolved question whether their presence and their efficiency invite greater usage, whether their increased utilization is a result of there being more wars, or whether they are being used more simply because they are more accurate. Perhaps all of these apply.

I agree with Lucas that neither of the current wars in Ukraine and Gaza happened because of technology, but I also assert that technology clearly played a role. In the Ukraine case, Russia may have thought too highly of its technical capabilities. In the Gaza case, it seems that Israel's belief in its technological capabilities may in fact have lulled it into a false sense of invulnerability.

I wholeheartedly support George Lucas' conclusions about the dangers of untested and unproven artificial intelligence systems in weapons and the need for intense efforts in testing and evaluation. The rapid growth and seeming rush to incorporate the technology argue for even greater caution.

As I said in the introduction, meeting with the critics, all of whom honored me with a careful reading of and commentary on Future Peace, was an extraordinarily gratifying experience. I greatly appreciated the opportunity to participate in the Author Meets Critics session and am especially appreciative of the critics' keen observations. Their comments have in some cases modified my thinking and will inform and improve my future work in this area.

1 Robert H. Latiff, Future War: Preparing for the New Global Battlefield, New York, NY: Alfred A. Knopf, 2017.

2 Robert H. Latiff, Future Peace: Technology, Aggression, and the Rush to War, Notre Dame, IN: University of Notre Dame Press, 2022.

3 Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, transl. Rita and Robert Kimber, New York, NY: Basic Books, 1997, p. 109.

4 Jacques Ellul, The Technological Society, transl. John Wilkinson, New York, NY: Vintage Books, 1964, p. 140.

5 John Eppstein, The Catholic Tradition of the Law of Nations, London, UK: Burns Oates and Washbourne, 1935, p. 132.

6 Len Giovannitti and Fred Freed, The Decision to Drop the Bomb, New York, NY: Coward-McCann, 1965, pp. vi and 328.

7 Seymour Martin Lipset, Political Man: The Social Bases of Politics, Baltimore, MD: Johns Hopkins University Press, 1981, pp. 101-4.

8 George C. Marshall, Nobel Peace Prize Acceptance Speech, December 11, 1953, www.nobelprize.org/prizes/peace/1953/marshall/acceptance-speech/.

9 Gregory M. Reichberg, "Risks of Weaponry Integrated with Artificial Intelligence," Existenz 17/1 (Spring 2022), 53-57.

10 Deputy Secretary of Defense, Memorandum: Implementing Responsible Artificial Intelligence in the Department of Defense, May 26, 2021. See also, Department of Defense, Responsible Artificial Intelligence Strategy and Implementation Pathway, June 2022.

11 Patrick Bratton, "Accidental Escalation and Future of War and Peace," Existenz 17/1 (Spring 2022), 62-65.

12 Scott D. Sagan, The Limits of Safety: Organization, Accidents, and Nuclear Weapons, Princeton, NJ: Princeton University Press, 1993.

13 Ryan Jenkins, "Cheap War and What It Will Do—Comments on Latiff's Future Peace," Existenz 17/1 (Spring 2022), 58-61.

14 George R. Lucas Jr., "From Future War to Future Peace: A Critique of USAF Major General Robert Latiff's Critique of the Role of New Technologies in War," Existenz 17/1 (Spring 2022), 66-70.

15 Associated Press News, "Putin: Leader in Artificial Intelligence will Rule the World," September 1, 2017, https://apnews.com/article/bb5628f2a7424a10b3e38b07f4eb90d4.

16 Precision Guided Munition Market Size, Share & Trends Analysis Report by Product (Tactical Missiles, Guided Rockets, Guided Ammunition), By Technology, By Region, And Segment Forecasts, 2023-2030, Grand View Research, September 2023, https://www.grandviewresearch.com/industry-analysis/precision-guided-munition-market.