Speeding Up the OODA Loop with AI

A Helpful or Limiting Framework?

By Mr Owen Daniels

Institute for Defense Analyses

Published: May 2021

Introduction

Strategists, warfighters, and technologists have heralded Artificial Intelligence (AI) as a potential tactical and strategic tool for outpacing adversary decision-making processes, commonly seen through the frame of the OODA (Observe, Orient, Decide, and Act) loop. Conventional thinking posits that human-machine teams augmented by AI-enhanced technologies will be able to act more quickly than opponents in a conflict, gaining decisive advantages that could enable victory. Amid competition with near-peers who can access similar capabilities, enthusiasm for exploiting AI technology across kinetic and non-kinetic domains is understandable: a consistent ability to act and react more quickly than the adversary could prove decisive.

Yet conceptualizing AI use through the OODA loop’s emphasis on speed ignores the limitations of this heuristic framework and the complexity of human-machine teaming, and may stunt creative thinking about AI’s current military applicability. By framing AI’s advantages solely in terms of speed, stakeholders may fail to fully explore how AI could help militaries. Focusing on speed also de-emphasizes potential risks like inadvertent escalation, unexplainable AI decisions, training and data issues, and legal or ethical concerns.

Discussion around future AI use needs to be grounded in specificity, rather than treating AI as a panacea for warfighting challenges. Distinguishing between types of AI (general versus narrow AI, or traditional machine learning versus deep learning systems) is key to ensuring precise terminology and demystifying conversations about AI’s military applications. For example, structured applications of AI in non-warfighting support functions may present the best near-term application of the technology. While recognizing AI’s great potential for certain military applications, this article highlights some flaws in the discourse around military AI use and offers several key lessons.

Conceptual Challenges with OODA Framing

US Air Force Colonel John Boyd developed the OODA loop framework as an advantageous mental model for fighter pilots trying to win direct air-to-air encounters under symmetrical circumstances. The continuously operating loop segments the decision cycle into the aforementioned subcomponents and accounts for the pilot’s previous experiences, training, and culture. Boyd posited that pilots who could cycle through their OODA loops more quickly, observing situational changes, orienting to understand new information, deciding on a course of action, and acting on it, could dominate opponents.1 In that context, with limited inputs and a relatively constrained environment, the OODA loop offered an appealing heuristic model.
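To make the cycle’s structure concrete, the minimal Python sketch below models the OODA loop as a continuously running control loop. It is purely illustrative: the function names, the toy ‘threat bearing’ heuristic, and the stored experience are invented for this example, not drawn from Boyd’s work.

```python
import random

def observe():
    """Observe: gather raw situational data (a toy sensor reading here)."""
    return {"contact_bearing": random.uniform(0, 360)}

def orient(observation, experience):
    """Orient: interpret observations against prior experience and training."""
    # A stand-in heuristic for the pilot's judgment, shaped by experience.
    threat = observation["contact_bearing"] < 180
    return {"threat_assessed": threat, "experience": experience}

def decide(orientation):
    """Decide: select a course of action from the oriented picture."""
    return "engage" if orientation["threat_assessed"] else "hold"

def act(decision):
    """Act: execute the decision, changing the situation for the next cycle."""
    print(f"Action taken: {decision}")

# The loop runs continuously; in Boyd's telling, whoever completes cycles
# faster forces the opponent to react to an increasingly stale picture.
experience = {"sorties_flown": 120}
for _ in range(3):  # three illustrative cycles
    observation = observe()
    orientation = orient(observation, experience)
    act(decide(orientation))
```

Even this toy version shows why the model appealed beyond aviation: the cycle is simple, closed, and dominated by whoever iterates it fastest, which is precisely the property the following sections question at larger scales.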

The straightforward logic and explainability of Boyd’s model have led militaries, businesses, and technologists to adopt and apply the OODA loop beyond its original context.2 Recently, the OODA loop has emerged as a popular framing device for discussing how AI could help militaries function at greater speed.3 In discussions of great power competition, the OODA loop provides an easy, surface-level comparative framework among near-peers who might use AI technologies in similar ways.4 Emerging military concepts that feature AI, like the US Air Force’s Joint All-Domain Command and Control system, are described as having ‘information over the OODA loop … at the heart of successful execution.’5 One research team even identified AI as the latest advance to replace the human element of the OODA loop with technology: AI might transform decision-makers’ ability to orient by integrating and synthesizing massive, disparate information sources, while new manoeuvre and fires technologies aid acting and digitization technologies improve observing and disseminating information.6 Others theorize that AI may one day be authorized to make lethal battlefield decisions at a pace far exceeding that of humans.7

AI’s appeal as a force multiplier and decision aid is clear given its potential for rapidly executing time-consuming, mundane, or even dangerous tasks. However, discussion of future AI applications can be vague or overly optimistic given limited technological understanding and nonlinear trends in AI advancement.8 Given these misunderstandings, using the OODA loop to frame discussions about military AI applications may stretch the concept beyond its useful limits and over-emphasize speed at the cost of other key metrics, like decision quality and human-machine team performance.

First, from a conceptual standpoint, phrases like ‘hacking’ or ‘outpacing’ the adversary’s OODA loop may inaccurately imply that the adversary’s decision-making calculus mirrors our own. In the context of using AI to outpace the enemy, strategic-level decision-makers could inappropriately assume symmetric thinking, access to information, or understanding of a specific situation.9 While the OODA framing aims to convey the value of superior decision speed, planners should consider how adversaries’ decision-making might differ from their own, both to exploit those differences and to assess their own vulnerabilities.

Second, focusing purely on speed could miss the importance of decision quality and attention to timing. AI-enabled decision-making would ideally not only happen faster than the enemy’s but also lead to effective action at the most advantageous moment relative to the adversary.10 Quicker decisions are not necessarily better, and speeding through one’s own OODA loop so quickly that it becomes dissociated from the adversary’s may be less helpful than acting at the moment of most significant comparative advantage.

Third, it is not clear that the OODA loop scales to the strategic level or across operations, or even beyond its original one-on-one fighter context. When scaled up to include multiple operators within their own, differently paced loops, Boyd’s closed-loop system quickly becomes an open system-of-systems with dependent components. Vulnerable points increase with scale; as sub-systems span the tactical through strategic levels, their complexity dilutes the OODA model’s usefulness.11 Intelligence, Surveillance, and Reconnaissance (ISR) integrity is a risk in any military decision-making process given imperfect information; however, emphasizing rapid action could magnify the negative effects of compromised observation and orientation on strategic decisions and effective outcomes. The OODA loop’s centralized structure may also be unrealistic at the strategic level given command structures and devolved authorities. AI could cut through the fog and friction of war, but AI-enabled strategic thinking should not be limited by the OODA framework.

Technological Challenges

In addition to the conceptual limitations of AI speeding up the OODA loop, existing technological challenges should give the framework’s proponents pause. Potential future applications of AI to military decision-making are manifold: image recognition is broadly applicable across ISR; predictive analytics can help with maintenance and route planning; web trawlers can collect valuable open-source information; and AI-enabled sensing could give warfighters increased situational awareness. But today’s AI capabilities carry common risks that may make emphasizing speed as a key performance metric less desirable.

AI’s ability to recognize images (observe) outside of certain conditions is highly limited, and it does not interpret their function based on form (orient). Difficulty training algorithms on inadequate data also poses risks to correctly observing and orienting, such as model overfitting or underfitting, and curating training data itself introduces the possibility of unintentional bias.12 At present, the black-box characteristics of deep learning systems hamper explaining their choices and testing and evaluating them for potential emergent behaviours.13 These challenges to human understanding lessen the likelihood of quicker decisions and rapid positive effects in human-machine teams.
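To illustrate the overfitting and underfitting risk concretely, the short Python sketch below (a generic toy example, not drawn from the cited sources) fits polynomials of increasing flexibility to noisy data. The most flexible model scores best on its own training data while generalizing worst, much as a model trained on narrow or unrepresentative military data can fail against conditions it has never seen.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(seed=0)

# Toy data: a simple underlying signal plus sensor-like noise.
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.shape)

# Held-out points where the true, noiseless signal is known.
x_test = np.linspace(0, 1, 200)
y_test_true = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 15):  # underfit, reasonable fit, overfit
    model = Polynomial.fit(x_train, y_train, deg=degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test_true) ** 2)
    print(f"degree {degree:>2}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Typically, the degree-15 polynomial drives its training error far below the others by memorizing the noise, while its error against the true signal is the worst of the three; the degree-1 line underfits both. The operational lesson is the same one the paragraph above draws: a model’s performance on its own training conditions says little about how it will behave outside them.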

AI-enabled big data tools, particularly as aids for non-warfighting functions and decisions where representative data exists, as with systems maintenance, may offer the best near-term prospects for military AI application. Yet even then, such tools require massive amounts of specific information to produce analysis that does not over-generalize from the data set. In some cases, these analysis tools could increase access to information that ultimately possesses little value to decision-makers and demands further human judgment to wade through, increasing cognitive load and creating additional human-machine teaming challenges.

Even assuming AI technologies function perfectly in the future, approaching machine decision-cycle speeds may not be a good thing. Technology will not always plug neatly into human processes and may require humans to adapt in order to avoid automation bias, the tendency to default to reliance on machines.14 Contested environments may leave situational awareness incomplete even with superior observation tools, leading decision-makers without sufficient understanding of technological limitations toward poor choices. Furthermore, AI could lead to unintended escalation. A 2020 RAND wargame found that ‘widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability,’ with machine decision-making speeds leading to quicker escalation and weakening deterrence. Machines in that wargame also struggled to respond to de-escalatory signals as humans might.15 Add uncertainty over AI’s potential to affect nuclear deterrence, and the risks of speedy AI begin to mount.16 That is even before weighing unresolved legal, moral, and ethical concerns about using AI and AI-enhanced autonomy for combat. For example, international bodies and individual states have emphasized meaningful human control or appropriate levels of human judgment for AI-enabled capabilities, such that potential liability for system malfunctions like targeting errors remains with a human.

Implications for Militaries

The limitations of the OODA-AI framing underscore how important it is for operators and decision-makers to firmly grasp the strengths and limitations of AI and other emerging technologies. Military leaders need to be sufficiently versed in the particulars of AI to recognize the realistic extent of its tactical, operational, and strategic value beyond simply accelerating decisions. Focusing on speed as a key metric is insufficient. New problem sets posed by adversaries in traditional domains already challenge officers well-schooled in doctrine, strategy, and warfighting. Incorporating algorithmic tools that perform best in constrained contexts does not guarantee near-term success.

Militaries should weigh how to use AI creatively in the context of competition. Effective AI is highly dependent on input quality, and future contested environments where adversaries deny or poison information may not be the best initial settings to deploy AI tools. How can militaries use AI for non-combat functions that exploit comparative advantages over adversaries? How can AI solve problems in constrained contexts, such as logistics, base functions, or personnel policies? What safeguards are necessary to protect against mistakes by non-technical users and to cultivate comparative human judgment advantages?

Even as AI advances, warfare will remain human-centric. Educating operators and decision-makers about the military implications of emerging technologies and establishing a core of common understanding with allies should help adapt tech-enabled decision-making for future warfighting. If AI-enabled technologies create scenarios where human values and input are necessary, operators at all levels need basic fluency in these systems’ capabilities to properly use them and trust their effective functioning. Because humans will remain the most important cogs in the decision cycle for the foreseeable future, effectively integrating human judgment and machine function with AI will become either a source of military competitive advantage or a liability.17

Creative, aspirational thinking about future applications of military AI is important; to borrow a phrase, the new wine of potentially revolutionary technology should not be put in old conceptual bottles.

Endnotes

1. Gross, George M., ‘Nonlinearity and the Arc of Warfighting’, Marine Corps Gazette (2019), pp. WE44-47, https://mca-marines.org/wp-content/uploads/Nonlinearity-and-the-Arc-of-Warfighting.pdf, accessed 26 Mar. 2021.
2. Trautman, Erik, ‘How Artificial Intelligence is Closing the Loop with Better Predictions’, Hackernoon, 26 Jul. 2018, https://medium.com/hackernoon/how-artificial-intelligence-is-closing-the-loop-with-better-predictions-1e8b50df3655, accessed 26 Mar. 2021; Blondeau, Antoine, ‘What Do AI and Fighter Pilots Have to Do with E-Commerce? Sentient’s Antoine Blondeau Explains’, 5 Dec. 2016, https://www.ge.com/news/reports/ai-fighter-pilots-e-commerce-sentients-antoine-blondeau-explains, accessed 26 Mar. 2021.
3. Strickland, Frank, ‘Back to basics: How this mindset shapes AI decision-making’, Defense Systems, 30 Sep. 2019, https://defensesystems.com/articles/2019/09/18/deloitte-ai-ooda-loop-oped.aspx, accessed 26 Mar. 2021.
4. Freedberg Jr., Sydney, ‘JAIC Chief Asks: Can AI Prevent Another 1914?’, Breaking Defense, 11 Nov. 2020, https://breakingdefense.com/2020/11/jaic-chief-asks-can-ai-prevent-another-1914/, accessed 26 Mar. 2021.
5. Hitchens, Theresa, ‘Exclusive: J6 Says JADC2 Is A Strategy; Service Posture Reviews Coming’, Breaking Defense, 4 Jan. 2021, https://breakingdefense.com/2021/01/exclusive-j6-says-jadc2-is-a-strategy-service-posture-reviews-coming/, accessed 26 Mar. 2021.
6. Goldfarb, A. and Lindsay, J., ‘Artificial Intelligence in War: Human Judgment as an Organizational Strength and a Strategic Liability’, Brookings Institution, 2020, https://www.brookings.edu/wp-content/uploads/2020/11/fp_20201130_artificial_intelligence_in_war.pdf, accessed 26 Mar. 2021.
7. Anderson, W., Husain, A., and Rosner, M., ‘The OODA Loop: Why Timing is Everything’, Cognitive Times (Dec. 2017), pp. 28-29, https://www.europarl.europa.eu/cmsdata/155280/WendyRAnderson_CognitiveTimes_OODA%20LoopArticle.pdf, accessed 26 Mar. 2021.
8. Richbourg, Robert, ‘It’s Either A Panda Or A Gibbon: AI Winters And The Limits Of Deep Learning’, War on the Rocks, 10 May 2018, https://warontherocks.com/2018/05/its-either-a-panda-or-a-gibbon-ai-winters-and-the-limits-of-deep-learning/, accessed 26 Mar. 2021.
9. Pietrucha, Mike, ‘Living with Fog and Friction: The Fallacy of Information Superiority’, War on the Rocks, 7 Jan. 2016, https://warontherocks.com/2016/01/living-with-fog-and-friction-the-fallacy-of-information-superiority/, accessed 26 Mar. 2021.
10. Luft, Alastair, ‘The OODA Loop and the Half-Beat’, The Strategy Bridge, 17 Mar. 2020, https://thestrategybridge.org/the-bridge/2020/3/17/the-ooda-loop-and-the-half-beat, accessed 26 Mar. 2021.
11. See note 9.
12. Ramzai, Juhi, ‘Holy Grail for Bias-Variance Tradeoff, Overfitting & Underfitting’, Towards Data Science, 12 Feb. 2019, https://towardsdatascience.com/holy-grail-for-bias-variance-tradeoff-overfitting-underfitting-7fad64ab5d76, accessed 26 Mar. 2021.
13. MathWorks, ‘What is Deep Learning? How it Works, Techniques, and Applications’, https://www.mathworks.com/discovery/deep-learning.html, accessed 26 Mar. 2021.
14. Oakden-Rayner, Luke, ‘Medical AI Safety: Doing it wrong’, 21 Jan. 2019, https://lukeoakdenrayner.wordpress.com/2019/01/21/medical-ai-safety-doing-it-wrong/, accessed 26 Mar. 2021.
15. Wong, Y. H., Yurchak, J., Button, R., Frank, A., Laird, B., Osoba, O., Steeb, R., Harris, B., and Joon Bae, S., ‘Deterrence in the Age of Thinking Machines’, Santa Monica: RAND Corporation, 2020, https://www.rand.org/content/dam/rand/pubs/research_reports/RR2700/RR2797/RAND_RR2797.pdf, accessed 26 Mar. 2021.
16. Loss, R. and Johnson, J., ‘Will Artificial Intelligence Imperil Nuclear Deterrence?’, War on the Rocks, 19 Sep. 2019, https://warontherocks.com/2019/09/will-artificial-intelligence-imperil-nuclear-deterrence/, accessed 26 Mar. 2021.
17. See note 6.
Author

Mr Owen Daniels
Institute for Defense Analyses

Mr Owen Daniels is a research associate in the Joint Advanced Warfighting Division at the Institute for Defense Analyses in Alexandria, Virginia. He previously worked in the Scowcroft Center for Strategy and Security at the Atlantic Council and at Aviation Week magazine, and leads Young Professionals in Foreign Policy’s Fellowship Program.

Information provided is current as of May 2021
