The Artificial Intelligence (AI) revolution has brought dramatic transformations across sectors such as security, the economy, healthcare, and education. This essay scrutinises how AI affects the basis of strategy. First, it describes AI using existing overarching interpretations and sheds light on its competencies by subdividing it into its varieties – artificial narrow intelligence (ANI), artificial general intelligence (AGI), and superintelligence. Second, it defines strategy, underscoring the Clausewitzian perspective of strategy as “the use of engagements for the object of the war.” It then examines the fundamental effects of ANI on strategy, followed by a long-view consideration of the possible changes AGI could bring to the basis of strategy. Finally, it concludes that whereas AGI, if it ever exists, could alter both the nature of war and its strategy, ANI will seriously affect the nature of warfare while changing strategy-making very little; that will remain a human preserve.
What is AI?
Human intelligence is considered the ability to achieve one’s goals through “a combination of analytical, creative, and practical abilities.” Although the term AI has many loose interpretations, Bellman’s description of it as a “process of automation of certain tasks, such as making decisions and learning, that is a reflection of human intelligence” can be taken as a starting point to analyse AI’s impact on strategy.
Presently, the AI systems functioning around us are varieties of ANI. They are limited to applying algorithms to singular tasks such as voice recognition, driving a car, or image classification. Through techniques such as reinforcement learning, ANI achieves human-level or better capability in pre-determined fields. Such systems are effective at accomplishing a specific task, but cannot flexibly adapt to execute different tasks or modify their functioning to face a novel challenge. AGI, by contrast, represents a broad intelligence with dynamic, human-like cognitive skills, equipped to apply its learning flexibly across multiple spheres, switch autonomously between tasks, and address unprecedented challenges. Although Google’s DeepMind has introduced a generalist AI, Gato, which can carry out a wide range of activities from stacking blocks to captioning images using the same algorithm, developing AGI with capabilities matching or exceeding human intelligence currently seems a distant prospect. Moreover, it is expected that once AGI is developed, a point of “singularity” in AI will be reached, eventually resulting in a superintelligence that surpasses human-level intelligence.
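The reinforcement learning behind many of these narrow successes can be illustrated with a minimal sketch. The toy corridor environment, reward scheme, and hyperparameters below are illustrative assumptions, not drawn from any system discussed here; the sketch shows only the core idea of tabular Q-learning, in which an agent improves a task-specific value table through trial and error.

```python
import random

# A toy 1-D corridor: the agent starts at cell 0 and must reach cell 4.
# Environment, rewards, and hyperparameters are illustrative assumptions.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)              # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected return for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):            # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0      # reward only at the goal
        # Q-learning update rule
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
                              - q[(s, a)])
        s = s2

# The greedy policy after training: which way to step from each cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
```

The trained table is useless for any other task, which is precisely the point: the competence it encodes is narrow.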
What is Strategy?
Betts focussed on the deliberative process of strategy, claiming that strategy represents the “link between military means and political ends,” the rational scheme for converting the former into the latter. Gray similarly affirmed that strategy is “the attempted achievement of desired political ENDS, through the choice of suitable strategic WAYS, employing largely the military MEANS then available or accessible.” Owing to its complex and multifaceted nature, strategy has many interpretations. At its core, however, the basis of strategy is the systematic setting of objectives and the meticulous pursuit of contextual political goals through considered choices involving, among many things, military resource allocation, dynamic prioritization, and pliable trade-offs.
ANI and Strategy
Technological developments have previously brought “Revolutions in Military Affairs,” and Payne asserts that AI will inevitably affect the nature and conduct of war by changing the factors behind human decision-making. Swarms of autonomous AI drones with networked decision-making could launch a sortie that overwhelms and outwits an adversary’s sophisticated air defence system more efficiently than human-controlled vehicles could. An ANI-controlled virtual fighter jet beating a human F-16 pilot in DARPA’s simulated dogfight suggests that ANI’s reinforcement training can soon be expected to outperform humans in real-life dogfights. The startling insight of Clausewitz’s strategic theory – that defence is a stronger form of war than offence – is being challenged by ANI’s potential. ANI’s speed, its quick cycling of the OODA loop (observe, orient, decide, act), its incredible precision, and its capacity to replace humans on the front line can immensely benefit offensive capabilities, tipping the offence-defence balance with far-reaching repercussions for deterrence in the international order. ANI’s tactical advantages allow a state’s belligerent decision-makers, armed with the necessary ANI weapons, to opt for audacious goals such as the remote assassination of an adversary, even if it constitutes a casus belli. The goal of strategy – to “get more out of a situation than the starting balance of power would suggest” – is surely not changed by AI, but AI can certainly affect the selection of political goals, which remain intertwined with the available AI capabilities.
In 2016, DeepMind’s AlphaGo AI defeated the world champion at Go, the popular board game, employing a Monte Carlo tree search algorithm to determine its moves. It had previously learned the game using an artificial neural network during comprehensive practice against both human and AI opponents. During the matches, experienced observers noted that AlphaGo occasionally made moves radically different from, and uncharacteristic of, human play. Payne notes that AI can draw parallels and inferences from data that are conceivably invisible to humans. Thus, ANI insights that humans could neither anticipate nor perceive might drastically affect the shaping of strategy.
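The core idea behind AlphaGo’s Monte Carlo tree search – estimating a move’s worth by averaging the outcomes of many randomly played-out games – can be sketched for a far simpler game. The sketch below uses flat (treeless) Monte Carlo evaluation on the game of Nim (take one or two stones; whoever takes the last stone wins) purely for illustration; it omits the search tree, the policy network, and the value network that AlphaGo actually combined.

```python
import random

def playout(pile, to_move):
    """Finish a game of Nim (take 1 or 2 stones; taking the last stone
    wins) with uniformly random moves; return the winner (0 or 1)."""
    while pile > 0:
        pile -= random.choice([1, 2] if pile >= 2 else [1])
        if pile == 0:
            return to_move          # this player took the last stone
        to_move = 1 - to_move
    return 1 - to_move              # pile already empty: previous mover won

def monte_carlo_move(pile, player, n_playouts=2000):
    """Score each legal move by the fraction of random playouts won,
    and return the highest-scoring move."""
    best_move, best_rate = None, -1.0
    for take in ([1, 2] if pile >= 2 else [1]):
        wins = sum(playout(pile - take, 1 - player) == player
                   for _ in range(n_playouts))
        rate = wins / n_playouts
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move

random.seed(1)
# With 4 stones left, taking 1 (leaving the opponent a losing position
# of 3 stones) scores best under random playouts.
move = monte_carlo_move(4, player=0)
```

Even this crude version finds the objectively correct move without any hand-coded Go- or Nim-specific knowledge, which is why playout-based evaluation can surface lines of play that human intuition overlooks.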
Strategizing always has physical and psychological aspects: human decision-makers are affected by factors such as cognitive load, time constraints, stress, and fatigue. During the Vietnam War, President Johnson’s colleagues thought he was depressed, while his successor, Nixon, was prone to unrestrained anger. Here, ANI can provide decisive inputs to decision-makers while avoiding the cognitive heuristics and skewed risk judgement to which humans are susceptible under duress.
There can also be a tendency for defence planners to start viewing AI-generated observations as comparable, or even superior, to those made by humans. This automation bias – over-reliance on automation in military decision-making – will probably foster circumstances leading to strategic instability in the absence of human intuition, judgement, and accountability. Army investigators found automation bias to be a contributing factor in the 2003 Patriot fratricides, in which two friendly aircraft were shot down by Army Patriot air and missile defence operators during the initial stages of the Iraq War. In both cases, operators relied on the (inaccurate) signals their automated radar systems were sending them, even though the operators were “in the loop” and had the final say over whether or not to fire.
However, unlike a game with well-defined problem sets, war, as Clausewitz claimed, is “the realm of uncertainty,” where situational vagueness creates a fog. Yarger applied chaos theory to strategy, calling strategic environments “chaotic” systems that are sensitive to variations in initial conditions and subject to the seemingly random and unpredictable behaviour of the adversary. He underscored the human element’s flexible, contextual experience, anticipation, and insight as the success factor behind strategy. For military strategy, ANI can provide tactical advantages, but human genius is needed to guide AI through the perilous and imperfect knowledge available in strategic environments. In intelligence, surveillance, and reconnaissance (ISR), AI can lessen the burden of data processing by constantly monitoring numerous data feeds and identifying patterns of interest, as with the AI-based Project Maven that the Pentagon used against ISIS. But the judgement aspect of strategy, even during Project Maven, inevitably required human intelligence.
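Yarger’s claim about sensitivity to initial conditions can be made concrete with the logistic map, a standard textbook example of chaos; this is an illustrative analogy, not a model of any strategic environment. Two starting states differing by one part in a billion soon diverge completely:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1 - x)

# Two initial conditions that differ by one part in a billion.
a, b = 0.4, 0.4 + 1e-9
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# After a few dozen iterations the two trajectories bear no resemblance
# to each other, even though the system is fully deterministic.
```

A planner with a near-perfect estimate of the starting conditions still loses all predictive power after enough steps – which is Yarger’s point about “chaotic” strategic environments.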
Moreover, Clausewitz deemed strategy to incorporate “human passions, values, and beliefs,” which are almost certainly unquantifiable for an ANI. Yarger likewise contended that strategy is “essentially a human enterprise,” suggesting that human emotion, ideology, and culture influence dynamic policy-led goals and the strategy schemed to achieve them. Bostrom argues that an AI will have “convergent instrumental” values, whereby it targets sub-goals that make the final goal of the strategy easier to achieve. Human concerns about ‘reputation’ and ‘credibility’ may make a decision-maker veer from coercion to diplomacy; an AI’s instrumental values are more likely to be detached and coldly rational than in sync with human values and ethics, underscoring why a human-in-the-loop approach is imperative.
Even so, ANI’s tactical importance will inexorably extend into the strategic sphere, since ANI will dramatically tip the balance of power in its possessor’s favour in many ways. Payne believes this would produce a security dilemma in the neorealist sense and, therefore, a new arms race between great powers, with smaller nations bandwagoning with greater ones.
As nations race to achieve AI hegemony, governments rely on private companies like Google and IBM, as they do for many other dual-use technologies, for the underlying R&D and expertise needed to put AI to military use. In practice, this means many nations will draw on the same international supply chains to support their military AI ambitions, potentially creating competing interests and affecting the acquisition of the military means needed to fulfil political ends. Meanwhile, an authoritarian state like China benefits: unhindered by a separation between public and private, it can ensure that its private sector’s AI progress is committed to advancing military might and the coupled political goals.
AGI and Strategy
As Artificial General Intelligence (AGI) does not yet exist, any analysis of AGI and strategy is unavoidably speculative. AGI is anticipated to be an autonomous agent with unsupervised learning capabilities, able to apply its learning in diverse realms. It could become the first non-biological entity competent in strategic thought. Because AGI’s underlying cognition would be substantially different from that of humans, delegating strategic functions to it could drastically change the basis of strategy by producing hitherto unpredictable actions and schemes.
General John Allen says AGI could bring on “hyperwar,” a war at the speed of light in which the OODA loop would almost entirely lack human decision-making. Payne argues that even if AGI had functional characteristics close to human cognition, it would still lack a human-like assessment of what counts as a good-enough outcome and the capacity to reconcile its goal with alternative possibilities. In a Cuban Missile Crisis-like situation, for example, an AGI might approach the event with its eyes on a full-fledged military attack instead of Kennedy-style caution built on blockade and diplomacy. Similarly, a fast-acting AGI capable of adaptive thinking and informed by a diverse data set might feel confident enough to undercut defensive measures entirely and adopt unstable nuclear postures, such as renouncing no-first-use or even launching a pre-emptive first strike, undermining strategic stability. This is because even an advanced AGI cannot comprehend the meaning behind human political goals. That raises the question of whether a future AGI could set its own goals and, if so, whether they would be compatible with human goals. The issue may be exacerbated when an AGI recursively self-improves, or when a superintelligence endowed with superhuman cognition is created.
Bostrom contends that an AGI able to think for itself would evolve into a super-intelligent AI that cannot be stopped and would adopt goals such as ceaseless resource acquisition and self-preservation, leading to a malignant failure in which humans face an existential crisis. Fearing this, Robert Work, former U.S. Deputy Secretary of Defense, asserts that he cannot even imagine employing AI in weapons or strategy without a “human in the loop.” In Arkin’s opinion, AGI’s intelligence must be programmed, with a control switch, so that its impact on strategy serves the sole aim of maximizing the realization of human objectives.
Gray asserted that the fundamental theory and function of strategy are an abiding and unchangeable human activity; the only dynamic and variable element is the technological context in which strategy is set. Given the current unlikelihood of programming human faculties such as emotion, intuition, and self-aware consciousness into AI, narrow AI will not fundamentally alter the nature of strategy, even as its capabilities inform human intelligence in strategic decision-making. For the foreseeable future, the foundation of strategy will remain an essentially human endeavour, in which humans comprehend the broader context and adapt to novel situations while ANI is used for specific, tailored tasks and its advantages in speed. However, an intelligence explosion leading to self-thinking AGI might augur a human-machine civilization in which strategy-making is shared, and that age might witness AGI drastically changing the way strategy is schemed and implemented. For now, though, that remains a theoretical prospect.
Nitin Menon is an engineer by education and an educator by passion with a keen interest in geopolitics and diplomacy.
Bellman, Richard. An Introduction to Artificial Intelligence: Can Computers Think? San Francisco: Boyd & Fraser Pub. Co, 1978.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Freedman, Lawrence. Strategy: A History. New York: Oxford University Press, 2013.
Gray, Colin S. The Future of Strategy. Cambridge: Polity Press, 2015.
Gray, Colin S. The Strategy Bridge: Theory for Practice. Oxford: Oxford University Press, 2010.
Payne, Kenneth. Strategy, Evolution, and War: From Apes to Artificial Intelligence. Washington, DC: Georgetown University Press, 2018.
Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton & Company, 2019.
Von Clausewitz, Carl. On War. Princeton, New Jersey: Princeton University Press, 1989.
Ayoub, Kareem, and Kenneth Payne. ‘Strategy in the Age of Artificial Intelligence’. Journal of Strategic Studies 39, no. 5–6 (18 September 2016): 793–819. doi:10.1080/01402390.2015.1088838.
Goldfarb, Avi, and Jon R. Lindsay. ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’. International Security 46, no. 3 (25 February 2022): 7–50. doi:10.1162/isec_a_00425.
Horowitz, Michael C. ‘When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability’. Journal of Strategic Studies 42, no. 6 (19 September 2019): 764–88. doi:10.1080/01402390.2019.1621174.
Johnson, James S. ‘Artificial Intelligence: A Threat to Strategic Stability’. Strategic Studies Quarterly 14, no. 1 (2020): 16–39. https://www.jstor.org/stable/26891882.
Sternberg, Robert J. ‘The Theory of Successful Intelligence’. Review of General Psychology 3, no. 4 (1 December 1999): 292–316. doi:10.1037/1089-2680.3.4.292.
Winter-Levy, Sam, and Jacob Trefethen. ‘Safety First: Entering the Age of Artificial Intelligence’. World Policy Journal 33, no. 1 (2016): 105–11. https://www.jstor.org/stable/26781386.
Yarger, Harry R. ‘Strategic Theory for the 21st Century: The Little Book on Big Strategy’. Strategic Studies Institute, US Army War College, 2006. http://www.jstor.org/stable/resrep12087.
‘A Generalist Agent’. Deepmind. Accessed 16 February 2023. https://www.deepmind.com/publications/a-generalist-agent.
Allison, Graham. ‘Is China Beating America to AI Supremacy?’. The National Interest. The Center for the National Interest, 22 December 2019. https://nationalinterest.org/feature/china-beating-america-ai-supremacy-106861.
Allen, John, and Amir Husain. ‘On Hyperwar’. U.S. Naval Institute. Accessed 21 February 2023. https://www.usni.org/magazines/proceedings/2017/july/hyperwar.
Koch, Christof. ‘How the Computer Beat the Go Master’. Scientific American. Accessed 21 February 2023. https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/.
Scharre, Paul. ‘A Million Mistakes a Second’. Foreign Policy. Accessed 25 February 2023. https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/.
‘The Pentagon’s New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS’. Defense One. Accessed 21 February 2023. https://www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.
‘The Rise of A.I. Fighter Pilots’. The New Yorker. Accessed 18 February 2023. https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.
 Carl Von Clausewitz, On War (New Jersey: Princeton University Press,1989), 87.
 Robert Sternberg, ‘The Theory of Successful Intelligence’, Review of General Psychology 3, no.4 (December 1999): 292-316.
 Richard Bellman, An Introduction to Artificial Intelligence: Can Computers Think? (San Francisco: Boyd & Fraser Pub. Co,1978),12.
 Sam Winter-Levy and Jacob Trefethen, ‘Safety First: Entering the Age of Artificial Intelligence’, World Policy Journal, no.1(2016):105-111.
 Winter-Levy and Trefethen, ‘Safety First’, 105-111.
 "A Generalist Agent", Deepmind, accessed 16 February 2023, https://www.deepmind.com/publications/a-generalist-agent.
 Richard Betts, ‘Is Strategy an Illusion?', International Security 25, no.2 (2000), 5.
 Colin S. Gray, The Future of Strategy (Cambridge: Polity Press,2015),10.
 Kenneth Payne, Strategy, Evolution, and War: From Apes to Artificial Intelligence (Washington: Georgetown University Press, 2018),181.
 James Johnson, ‘Artificial Intelligence: A Threat to Strategic Stability’, Strategic Studies Quarterly, no.1 (2020), 20.
 ‘The Rise of A.I. Fighter Pilots', The New Yorker, accessed 18 February 2023, https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.
 Michael C. Horowitz, ‘When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability’, Journal of Strategic Studies 42, no.6 (19 September 2019): 764–88.
 Lawrence Freedman, Strategy: A History (New York: Oxford University Press, 2013), xii.
 Christof Koch, ‘How the Computer Beat the Go Master’, Scientific American, accessed 21 February 2023, https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/.
 Payne, Strategy, Evolution, and War, 175-176.
 Kareem Ayoub and Kenneth Payne, ‘Strategy in the Age of Artificial Intelligence’, Journal of Strategic Studies 39, no.5–6 (18 September 2016): 798.
 Horowitz, ‘When Speed Kills’, 773-774.
 Von Clausewitz, On War, 101.
 Harry R. Yarger, Strategic Theory for the 21st Century (Strategic Studies Institute, US Army War College, 2006), 34-38.
 Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security 46, no.3 (25 February 2022): 36-37.
 ‘The Pentagon’s New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS’, Defense One, accessed 21 February 2023, https://www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.
 Von Clausewitz, On War, 134-135.
 Yarger, Strategic Theory for the 21st Century,40.
 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press,2014).
 Ayoub and Payne, ‘Strategy in the Age of Artificial Intelligence’, 797.
 Payne, Strategy, Evolution, and War,190.
 Graham Allison, ‘Is China Beating America to AI Supremacy?’, The National Interest, accessed 22 February 2023, https://nationalinterest.org/feature/china-beating-america-ai-supremacy-106861.
 Payne, Strategy, Evolution, and War, 208-210.
 John Allen and Amir Husain, ‘On Hyperwar’, U.S. Naval Institute, accessed 21 February 2023, https://www.usni.org/magazines/proceedings/2017/july/hyperwar.
 Payne, Strategy, Evolution, and War, 200.
 Paul Scharre, ‘A Million Mistakes a Second’, Foreign Policy, accessed 25 February 2023, https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war.
 Johnson, ‘Artificial Intelligence: A Threat to Strategic Stability’, 29-31.
 Payne, Strategy, Evolution, and War, 22-23.
 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press,2014),123-124.
 Paul Scharre, Army of None: Autonomous Weapons and the Future of War, (New York: W.W. Norton & Company, 2019), 281-282.
 Scharre, Army of None, 227-228.
 Colin S. Gray, The Strategy Bridge: Theory for Practice (Oxford: Oxford University Press,2010), 20.