Introduction
In this paper, we address three high-level questions that do not yet have clear answers. Is a Human-On-the-Loop (HOTL) capability, which gives the user control only over autonomy planning, better at delivering Rapid Relevant Responses (R3) than Human-In-the-Loop (HITL), where the user has complete control to start or stop the automation? Can we adapt current HITL Command & Control (C2) architectures using variable autonomy to address compressed cycle times and more demanding time constraints in the hypersonic operational environment? And do we dare risk Human-Out-Of-the-Loop (HOOTL) weapon systems, with their potential for control-induced errors caused by brittle automation that can lead to cascade failures? At issue is how to evolve these high-level questions toward operational answers.
For many years, warfighters from every warfighting domain have demanded more capability and functionality in the weapons and systems they are given. 'We need it to do more' has been the common theme, regardless of Allied Command or branch of military. We have now reached the stage of technology development where engineering teams can build more capability and functionality into weapons and systems than our warfighters can extract, given the current state of user interface design, because the human operator is typically an afterthought for systems design teams. As technology continues to advance rapidly, we must shift the focus from designing more functionality into weapons and systems to developing the next generation of C2 architectures that allow our warfighters to extract 100% of the functionality built into these systems while reducing required training time. The focus of this paper is major weapons and C2 systems and the challenge of hypersonics. In the future, the lessons learned can extend across echelons, from the strategic/operational levels to tactical platforms and individual warfighters.
The purpose of C2 is to enable the effective transfer of information between and among systems and operational users to gain situational awareness, make decisions, and execute appropriate courses of action. Several methods are available to designers, acquisition professionals, weapons developers, and warfighters. HITL methods develop tools to facilitate the effectiveness and ease of knowledge management, information foraging and exchange, collaboration, and decision-making in the networked command environment. It is essential that C2 architectures consider how to effectively integrate operational users with information technologies and networks, particularly as weapon velocities approach hypersonic speeds. The tasks that must be accomplished by command decision-makers are time-critical, with life-or-death outcomes. In this context, human performance must be optimized to deliver R3, but no amount of training can compensate for poor human systems integration and confusing user displays that obfuscate automation status and human control. HITL is mandatory during test and evaluation, training, and early-stage fielding into operational theatres. A progression from HITL may require a precautionary phase of HOTL as a step toward higher levels of autonomy and, ultimately, HOOTL. Our discussion focuses mainly on performance improvement of HITL routines, which may have implications for the human in/on/out-of-the-loop progression.
Technical Approach
Graceful degradation is an automation supervisory control technique that we propose to explore. Automation features are needed that sense, analyze, and react to platform/vehicle/weapon environmental conditions and equipment status, and that can adjust R3 subsystems to maintain normal operations. Problems occur in the human-system supervisory loop if the adaptations suddenly cross a tolerance threshold beyond which the system rapidly fails. Automation that can fail in this way is referred to as 'brittle' because it breaks suddenly and without warning. Graceful degradation is needed so that human supervisors are informed and aware that automated features are compensating for performance deviations. Users must then be trained to view and interpret this information.
For example, automation for an Unmanned Surface Vehicle (USV) may increase boat engine thrust and power to maintain R3 payload launch position despite strong currents or a propeller fouled by seaweed. Plans for task process contingencies can be based upon the availability of information (e.g., can inspect in and around the USV with no blind spots) or known information deficiencies (e.g., can measure ocean current and resistance, but cannot inspect for a fouled propeller). While the automation rules may require that engine RPMs above a certain value trigger instant corrections to maintain speed/schedule, the automation must also immediately inform the user about the rate of change and direction, beyond the reported fault information from the automated correction. The operator must be in the information loop as automation makes adjustments to maintain operations. This example is relevant and critical to modelling how experienced warfighters would respond as cycle times approach zero when launching, or defending against, hypersonic weapons. A design approach to mitigate the effects of brittle automation should consider graceful degradation that clearly warns the operator of deviation while reducing automation, to prevent automated courses of action in degraded modes that could lead to cascade or catastrophic system failures. Employment of R3 HITL contingencies should be based upon the availability of information or known information deficiencies.
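To make the supervisory pattern concrete, the following minimal Python sketch illustrates a graceful-degradation monitor for the USV example. All names, thresholds, and the proportional gain (RPM_WARN, RPM_LIMIT, the correction factor) are hypothetical placeholders for illustration, not actual platform parameters.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration; real values would come from
# platform specifications and test data.
RPM_WARN = 4500      # operator is alerted above this compensation level
RPM_LIMIT = 5200     # hard ceiling; approaching it signals brittle-failure risk

@dataclass
class UsvState:
    rpm: float            # current engine RPM commanded by the automation
    speed_error: float    # knots below the required launch-position speed
    rpm_rate: float       # RPM change per second (trend information)

def supervise(state: UsvState, alert) -> float:
    """Return the next RPM command, degrading gracefully near the limit.

    Instead of silently compensating until the hard limit is hit (brittle
    behaviour), the monitor reports magnitude, direction, and rate of change
    to the operator and caps its own authority as it nears the threshold.
    """
    command = state.rpm + 50.0 * state.speed_error   # simple proportional correction

    if command > RPM_WARN:
        # Inform the human supervisor of the trend, not just a fault code.
        alert(f"Compensation at {command:.0f} RPM, trending {state.rpm_rate:+.0f} RPM/s; "
              f"check for fouled propeller or strong current.")

    if command > RPM_LIMIT:
        # Degraded mode: freeze further automated escalation and hand the
        # decision to the operator rather than risk a cascade failure.
        alert("RPM limit reached: automation holding; operator action required.")
        command = RPM_LIMIT

    return command
```

The key design choice is that the automation reduces its own authority as it approaches the tolerance threshold, rather than compensating silently until it fails.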
Another supervisory control issue is automation-induced complacency. Automation complacency (also known as automation bias) is the condition that occurs when users tend to trust the automation results and disregard other, possibly contradictory, information. Factors that contribute to complacency include long periods of stable operations with few critical decisions, monotony, fatigue, and boredom. Mitigation strategies can include tasks and activities designed to keep operators alert, diligent, and vigilant. Simulated events and recurring practice with critical events can also reduce the negative effects of complacency. Endsley's model1 of autonomy oversight recognizes the 'decision-biasing effect' of operator dependence on automated decision aids. Human operators tend to supervise automated systems using approaches that require the least cognitive effort when seeking and sharing process details, believing that automation has superior analytical ability. R3 architectures should consider decision process designs that grant active human 'management by consent' versus reactive 'management by exception' to mitigate automation bias by requiring the human operator to remain actively engaged, except in delta near-zero situations. The effects of automation bias will be greatly reduced by explicitly displaying decision elements/steps and then compelling the user to engage in the decision process with critiquing, what-if, and contingency planning paradigms.
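As an illustration of the distinction, the Python sketch below contrasts the two supervisory policies: management by consent requires explicit approval before the automation acts, while management by exception acts unless vetoed within a time window. The function names, the polling loop, and the near-zero cutover threshold are illustrative assumptions, not a prescribed R3 implementation.

```python
import time

def execute_by_consent(action, operator_approves) -> bool:
    """Management by consent: the action runs only after explicit approval."""
    if operator_approves(action):
        action()
        return True
    return False

def execute_by_exception(action, operator_vetoes, veto_window_s: float) -> bool:
    """Management by exception: the action runs unless vetoed in time."""
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if operator_vetoes(action):
            return False
        time.sleep(0.05)   # poll for a veto until the window closes
    action()
    return True

def supervise(action, time_budget_s: float, operator_approves, operator_vetoes):
    # Keep the human actively engaged (consent) except when the decision
    # window is near zero, where only veto-on-exception remains feasible.
    NEAR_ZERO_S = 2.0   # hypothetical cutover threshold for illustration
    if time_budget_s > NEAR_ZERO_S:
        return execute_by_consent(action, operator_approves)
    return execute_by_exception(action, operator_vetoes, time_budget_s)
```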
When mitigating the effects of automation bias, the decision process to support ‘management by consent’ or ‘management by exception’ must also be able to manage cognitive biases of human information processing, especially under conditions of high workload.2 These biases have been found to be the underlying cause for most errors in human judgement and have been extensively studied.3 HITL architectures must address potential human judgement errors in the supervision of R3 tasks and workflows, most notably confirmation bias, availability bias, and illusory correlations. As with automation bias, the best way to reduce judgement bias is to explicitly (and in an operationally relevant manner) display layers of information and data that both support the decision process and engage the user.
Task-Centred Design (TCD)
HITL R3 architectures should support warfighter tasks. TCD organizes system information and controls in a human activity-centric manner such that normal workflows are efficient and task products can be easily created. In a task-centred design, information is 'brought to the task' rather than requiring the end-user to collect, gather, and synthesize information from separate sources. TCD for R3 operations will involve the trade-off of function allocations between human and system for doing task steps and accomplishing goals.
Once function allocation design decisions are made, User Interface (UI) constructs can be created to deliver capability as cycle time approaches zero. These can include shared system-user task states, with past, current, and planned tasks explicitly listed. A UI construct to foster task-centred performance can include the explicit display of tasks that are triggered by mission objectives. For example, Osga4 developed a task management display that depicted completed, current, and emerging tasks for a dynamic ship defence combat information team operating environment. The display represented task states in the form of icons associated with ship defence and related battlegroup reports. The reduction of cognitive workload related to analyzing raw data, finding tasks, and creating task products allowed the operators to shift cognitive functions towards higher-level mission supervision and away from continuous information search and filtering sub-tasks.
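A minimal Python sketch of the data model underlying such a task-centred display is given below, loosely patterned on the completed/current/emerging task states described above. The class names and example tasks are illustrative only, not a reconstruction of Osga's display.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):
    EMERGING = auto()    # triggered by mission objectives, not yet started
    CURRENT = auto()     # actively being worked by the team
    COMPLETED = auto()   # finished; retained for shared awareness of the past

@dataclass
class MissionTask:
    name: str
    state: TaskState = TaskState.EMERGING

@dataclass
class TaskBoard:
    """Shared system-user task state: past, current, and planned tasks are
    explicit objects rather than facts buried in raw data streams."""
    tasks: list[MissionTask] = field(default_factory=list)

    def trigger(self, name: str) -> MissionTask:
        # New tasks appear when mission objectives fire a trigger condition,
        # so operators receive work items instead of searching raw tracks.
        task = MissionTask(name)
        self.tasks.append(task)
        return task

    def by_state(self, state: TaskState) -> list[MissionTask]:
        return [t for t in self.tasks if t.state is state]

# Example: a ship-defence picture with current and emerging tasks.
board = TaskBoard()
board.trigger("Evaluate inbound track 4021")
board.trigger("Issue battlegroup air-defence report")
board.tasks[0].state = TaskState.CURRENT
print([t.name for t in board.by_state(TaskState.EMERGING)])
```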
HITL Modelling
R3 requires the exploration and analysis of the degrees of freedom, constraints, and consequences associated with developing automated sensors, platforms, and weapons systems, modelling the advantages and disadvantages of HITL, HOTL, and even HOOTL. This modelling needs to be an essential part of the thoughtful development and testing of sensors and weapons systems. HITL, HOTL, and HOOTL system development must consider the capabilities of expert systems, Artificial Intelligence (AI), and Machine Learning (ML) at all levels of embedded logic and include a careful review of all components, assemblies, subsystems, systems, and systems-of-systems.
We propose an ML R3 testbed to evaluate application ideas and tools to improve response accuracy and scheduling, creating algorithms to mine what strike teams think about when considering options. The testbed will evaluate which decision-making heuristics should be considered for response planning in tactical environments and will provide intelligent strike assistance to any response team. HITL modelling can be performed using Interactive Learning (IL) methods. IL is an ML approach with a human in the machine interactive loop, where observations of user interactions are recorded to provide guidance for the next ML iteration and improve machine accuracy. We propose an ML active learning approach that asks Subject Matter Experts (SMEs) to label only the most important strike planning data via pool-based active learning, identifying cognitive patterns to train a strike planning aid. Using interactive learning methods, we can evaluate computer-generated strike plans given an R3 objective, using multiple fix sources to model and accurately estimate red threat location.
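The following Python sketch shows the pool-based active learning pattern we have in mind, using scikit-learn's logistic regression with uncertainty sampling. The feature pool, the synthetic oracle, and the sme_label function are stand-ins for real strike-planning data and SME judgements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for the strike-planning pool: each row is a feature
# vector for one candidate plan; the synthetic oracle replaces real SME labels.
pool = rng.normal(size=(500, 8))
true_labels = (pool[:, 0] + 0.5 * pool[:, 1] > 0).astype(int)

def sme_label(idx: int) -> int:
    """Placeholder for asking a Subject Matter Expert to label one plan."""
    return int(true_labels[idx])

# Seed with a few labelled plans from each class, then query only the plans
# the current model is least certain about (pool-based uncertainty sampling).
labelled = [int(i) for i in np.where(true_labels == 0)[0][:5]]
labelled += [int(i) for i in np.where(true_labels == 1)[0][:5]]
labels = [sme_label(i) for i in labelled]

for _ in range(20):                       # 20 SME queries instead of 500
    model = LogisticRegression().fit(pool[labelled], labels)
    proba = model.predict_proba(pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)     # 0.0 = model is most uncertain
    uncertainty[labelled] = np.inf        # never re-query a labelled plan
    query = int(np.argmin(uncertainty))
    labelled.append(query)
    labels.append(sme_label(query))

print(f"Strike-plan classifier trained with {len(labelled)} SME labels")
```

The point of the pattern is label efficiency: SME time is scarce, so the model requests judgements only where they are most informative.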
With this methodology, we can discover algorithms that capture what warfighters think about when deploying R3, to better understand which decision-making heuristics should be considered to facilitate tactical planning in evolving battlespace environments. Specifically, we will conduct limited objective experiments using IL methods to cognitively model how SMEs evaluate automated strike plans, discovering core tactical and operational planning heuristics as the basis for smart algorithms that increase the speed and efficiency of the mission planning and execution process. Today, this involves a significant amount of error analysis done in the head of the warfighter. Here, we can generate and test algorithms that capture what the warfighter thinks about when determining the best course of action. The HITL model can then be refined to clarify which decision-making heuristics the warfighter should be considering to achieve better strike options. Via iterative design and testing, we can build a reliable machine learning paradigm by arranging the heuristics and algorithms into an operational view, which could be applied to other strike planning domains to create AI requirements for more effective UIs to support mission strike teams.
Using IL methods, we will evaluate automated kill chains for R3 objectives. Algorithms must integrate across multiple perspectives: risk, probability, uncertainty, complexity, consequences, and accountability. International boundaries, threat assessment, blue platform/weapon status, and human supervisor experience level must all be part of the equation. We will also model the challenges of determining threat location in GPS-denied, Radio Frequency (RF) Emissions Controlled (EMCON), night, and cloud-covered scenarios. The IL modelling will begin with a series of experimental trials in which the SME chooses the better of two displayed automated strike options, followed by the display of more machine-suggested options, and so on. SME comments will be captured, noting the reasons for selecting or not selecting a particular option. The testbed will collect choice data and capture the n-dimensional state of the scenario, including HITL role, blue capabilities, threats, fix location source availability, and display layout.
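The pairwise-choice trials could feed a simple preference model. The sketch below fits a Bradley-Terry-style logistic model on feature differences to recover the weights an SME implicitly places on each decision factor; the three features, the synthetic options, and the placeholder SME rule are illustrative assumptions rather than the testbed's actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sme_choice(option_a, option_b) -> int:
    """Placeholder SME rule: prefers lower risk and higher probability of
    kill. Returns 0 if option A is preferred, 1 if option B is preferred."""
    def score(o):
        return -o[0] + o[1]   # o = (risk, Pk, location_uncertainty)
    return 0 if score(option_a) >= score(option_b) else 1

# Collect trials: show two automated strike options, record which was chosen.
diffs, choices = [], []
for _ in range(200):
    a, b = rng.uniform(size=3), rng.uniform(size=3)
    diffs.append(a - b)                    # Bradley-Terry style: model the difference
    choices.append(1 - sme_choice(a, b))   # label 1 when option A was preferred

# The fitted coefficients estimate how heavily the SME weights each factor,
# which is the raw material for the planning heuristics described above.
model = LogisticRegression().fit(np.array(diffs), np.array(choices))
print("Inferred decision weights:", model.coef_.round(2))
```

In the real testbed, the recorded SME comments and the n-dimensional scenario state would augment these feature vectors rather than the synthetic values used here.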
Conclusion
Increasing levels of automation and AI bring the promise of enhanced weapons effectiveness, but they also bring risks that may lead to lethal consequences. We propose a variable autonomy method to adapt C2 HITL architectures to address compressed cycle times and more demanding workloads in the R3 operational environment. This methodology will offer control solutions to mitigate the risks associated with HOOTL weapon system options, where machine errors and brittle automation can lead to cascading failures. A C2 architecture approach to address the effects of brittle automation should consider design strategies that model R3 automation to make the human operator more aware of deviations from the strike plan, changes to the weapon system state, and pending automated courses of action. This requires in-depth human factors analysis of how weapon system autonomy progresses from HITL to HOTL to HOOTL, and how automation supervisors reassert control and retain decision-making while minimizing response delays. The actions, reactions, and consequences associated with R3 automated systems are, and will be in the final analysis, the responsibility of the human warfighters and Joint Force Commanders who ultimately employ these current and future capabilities. This requires modelling how the most experienced warfighters would react to deviations and high-consequence/low-frequency scenarios, particularly as cycle times approach zero to launch and monitor, or defend against, hypersonic weapons.