Explainable AI for Intelligence Augmentation in Multi-Domain Operations

Alun Preece, Crime and Security Research Institute, Cardiff University, UK. Email: [email protected]
Dave Braines, IBM Research, Hursley, Hampshire, UK. Email: dave [email protected]
Federico Cerutti, Crime and Security Research Institute, Cardiff University, UK. Email: [email protected]
Tien Pham, U.S. Army Research Laboratory, Adelphi, MD, USA. Email: [email protected]

Abstract

Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners. Realising this concept requires advancement in both artificial intelligence (AI) for improved distributed data analytics and intelligence augmentation (IA) for improved human-machine cognition. The contribution of this paper is threefold: (1) we map the coalition situational understanding (CSU) concept to MDO ISR requirements, paying particular attention to the need for assured and explainable AI to allow robust human-machine decision-making where assets are distributed among multiple partners; (2) we present illustrative vignettes for AI and IA in MDO ISR, including human-machine teaming, dense urban terrain analysis, and enhanced asset interoperability; (3) we appraise the state-of-the-art in explainable AI in relation to the vignettes with a focus on human-machine collaboration to achieve more rapid and agile coalition decision-making. The union of these three elements is intended to show the potential value of a CSU approach in the context of MDO ISR, grounded in three distinct use cases, highlighting how the need for explainability in the multi-partner coalition setting is key.
Introduction

Multi-domain operations (MDO) require the capacity, capability, and endurance to operate across multiple domains — from dense urban terrain to space and cyberspace — in contested environments against near-peer adversaries (U.S. Army 2018). A key characteristic of the operational environment in MDO is that adversaries will be contesting all domains, the electromagnetic spectrum, and the information environment, and allied dominance is not assured. Adversaries attempt to achieve stand-off by separating friendly forces in multiple dimensions: temporally, spatially, functionally, and politically. Stand-off is achieved by reducing allies’ speed of recognition, decision and action, as well as by fracturing alliances through multiple means: diplomatic, economic, conventional and unconventional warfare, including information warfare. In this context, rapid and continuous integration of capabilities to collect, process, disseminate and exploit actionable information and intelligence becomes more critical than ever before.

This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Addressing this challenge, the concept of layered ISR in MDO envisions exploitation of ‘an existing intelligence, surveillance, and reconnaissance (ISR) network developed with partners. . . that consists of overlapping systems of remote and autonomous sensors, human intelligence, and friendly special operations forces’ ((U.S. Army 2018), pp.33–34).
Maximising the value of ISR assets in the unprecedentedly-contested environment requires an ability to share resources among partners — in operations conducted as part of joint, interagency, and multinational teams — in a controlled but open coalition environment, with knowable levels of trust and confidence.

Artificial intelligence (AI) and machine learning (ML) techniques are seen as key to realising the layered ISR vision in MDO: ‘rapidly disseminating data to a field army or corps analysis cell employing artificial intelligence or other computer assistive technologies to analyze the high volume of data’ ((U.S. Army 2018), p.39). Indeed, the demands of the MDO environment are seen as requiring an ability to converge capabilities — including ISR — spanning multiple domains at a speed and scale that exceeds human cognitive abilities. Robust and interoperable AI/ML is viewed as key in fusing data from multiple assets and disseminating actionable knowledge across operation partners to inform decision-making and task accomplishment (Spencer, Duncan, and Taliaferro 2019).

In summary, the challenge is to enable humans and machine agents (software and robotic) to operate effectively in joint, interagency, multinational and highly-dispersed teams,

arXiv:1910.07563v1 [cs.AI] 16 Oct 2019




in distributed, dynamic, complex, and cluttered environments. From the humans’ perspective, AI and ML are necessary tools to overcome human cognitive limits due to the speed and scale of operations, with the goal of augmenting — not replacing — human cognition and decision-making. Here we view intelligence augmentation (IA) as a complement to AI, as first envisioned in the earliest period of AI’s history (Engelbart 1962). We focus on rapidly-formed coalition teams comprised of human and AI/ML agents, operating at the edge of a network, with limited connectivity, bandwidth and compute resources, in a decision-making role, for example, by Army Soldiers in a dense urban setting. However, much of the discussion will also apply to a range of other roles in other domains, for example, intelligence analysts conducting cyber-domain decision-making.

We have previously studied this challenge in a related context: that of coalition situational understanding (CSU) (Preece et al. 2017) wherein we identified two particular properties of importance in human-machine collaboration: explainability to underpin confidence and tellability to improve operational agility and performance. This paper focuses mainly on the first of these, but also touches upon the second. We begin by revisiting the CSU concept in the MDO context, before examining how the concept applies in the context of three MDO vignettes: human-machine teaming, dense urban terrain analysis, and enhanced asset interoperability. Finally, we appraise the state-of-the-art in explainable AI in relation to the vignettes, highlighting how the notion of layered explanations (Preece et al. 2018) is well-fitted to the need for AI/ML assurance in MDO layered ISR.

Before proceeding, we step back and note that key features of MDO environments — (i) rapidly changing situations; (ii) limited access to real data to train AI; (iii) noisy, incomplete, uncertain, and erroneous data inputs during operations; and (iv) peer adversaries that employ deceptive techniques to defeat algorithms — are not unique to the military context; they are often found in government and public sector applications more generally, as are the joint, interagency, and multinational aspects of these endeavours. Indeed, in general, the multi-domain breadth of the MDO concept and its consideration of both competition and conflict phases means that MDO impinges on the political and societal spheres that are the province of government and the public sector.

Coalition Situational Understanding for MDO

Situational understanding (SU) is the ‘product of applying analysis and judgment to the unit’s situation awareness to determine the relationships of the factors present and form logical conclusions concerning threats to the force or mission accomplishment, opportunities for mission accomplishment, and gaps in information’ (Dostal 2007). UK military doctrine (U.K. Ministry of Defence 2010) defines understanding in the following terms:

Comprehension (insight) = situational awareness and analysis
Understanding (foresight) = comprehension and judgement

Here, understanding includes foresight, i.e., an ability to infer (predict) potential future states, which is compatible

Figure 1: CSU layered model (from (Preece et al. 2017)), distributed virtually across multiple partners and employing multiple technologies: human-computer collaboration (HCC); knowledge representation and reasoning (KRR); multi-agent systems (MAS); machine learning (ML); natural language processing (NLP); vision and signal processing (VSP).

with the common definition that SU involves being able to draw conclusions concerning threats (Dostal 2007). Foresight necessarily includes an ability to process and reason about information temporally. These views of SU are intrinsically linked to information fusion in that they involve the collection and processing of data from multiple environmental sources as input to deriving SU. In terms of the JDL (Joint Directors of Laboratories) Model of data fusion (Blasch 2006), a CSU problem may address relatively high or relatively low levels of understanding, in terms of the kinds of semantic entities and relationships considered. For example, at the relatively low levels a CSU problem may be concerned with only the detection, identification and localization of objects such as vehicles or buildings (JDL Levels 1 and 2). At higher levels, a CSU problem would be concerned with determining threats, intent, or anomalies (JDL Level 3). Moreover, the sources will commonly span multiple modalities, for example, imagery, acoustic and natural language data (Lahat, Adali, and Jutten 2015).

Our conceptual architecture for SU in a coalition operations context — coalition situational understanding (CSU) — is illustrated in Figure 1. The lowest layer consists of a collection of data sources (physical sensors and human-generated content), accessible across the coalition, collecting multimodal data. The three upper layers roughly correspond to Levels 0–3 of the JDL Model. For each layer, the figure shows the primary technologies — including AI and ML — employed, though others may be exploited also. The information representation layer uses incoming data streams to learn concepts and model entities together with their relationships at multiple levels of semantic granularity. The history of past observations is encoded in these representations, explicitly or implicitly. The information fusion layer employs algorithms and techniques developed to perform fusion over concepts and entities derived from the information representation layer. This layer estimates the current state of the world, providing insight (situational awareness). The prediction and reasoning layer then uses the estimated current state, together with the state space of the models, to predict the future state, providing foresight (situational understanding). The figure depicts a virtual view of the coalition: all four layers are distributed across the coalition.

In accordance with the User Fusion model (Blasch 2006), the upper layers in Figure 1 need to be open to humans to provide expert knowledge for reasoning; these layers also need to be open to the human user in terms of being able to generate explanations of the insight and foresight generated by the system. There is a bi-directional exchange of information occurring between the different layers: in the upward (feedforward) direction, the inferences at the lower layer act as input for the next higher layer; in the downward (feedback) direction, information is used to adjust the model and algorithm parameters and possibly task the sensors differently. Creating better systems to support CSU necessitates the development of mature models and algorithms that can over a period of time reduce the human intervention and attain greater autonomy, but without replacing human involvement and oversight.
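The feedforward/feedback structure of the layered model can be sketched in code as below. This is a minimal illustration only: the class, layer functions, and toy thresholds are our own assumptions, not part of the CSU architecture described in (Preece et al. 2017).

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Layer:
    """One layer of the CSU pipeline: a feedforward inference function and an
    optional feedback hook (e.g., to adjust parameters or re-task sensors)."""
    name: str
    infer: Callable[[Any], Any]
    adjust: Callable[[Any], None] = lambda feedback: None

def run_pipeline(layers, observations, feedback=None):
    """Feedforward pass: each layer's inference feeds the next layer up.
    Feedback pass: corrections propagate back down the stack."""
    state = observations
    for layer in layers:
        state = layer.infer(state)
    if feedback is not None:
        for layer in reversed(layers):
            layer.adjust(feedback)
    return state

# Toy usage: raw detections -> representation -> fusion -> prediction
representation = Layer("representation", infer=lambda obs: {"vehicles": len(obs)})
fusion = Layer("fusion", infer=lambda rep: {"density": rep["vehicles"] / 10})
prediction = Layer("prediction",
                   infer=lambda cur: "threat" if cur["density"] > 0.5 else "benign")

print(run_pipeline([representation, fusion, prediction], ["v"] * 7))  # -> threat
```

The bi-directional exchange is captured by the pair of loops: inference flows upward, while feedback (when supplied) visits the layers in reverse order.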

CSU-MDO Vignettes

In this section, we consider three vignettes in the context of CSU in MDO. The first examines the need for robust and interoperable AI/ML services; the second examines the dynamics of human-machine collaboration; and the third considers the focus on dense urban terrain operations.

Vignette 1: Enhanced Asset Interoperability

Taking the MDO concept of layered ISR as a starting point (‘overlapping systems of remote and autonomous sensors, human intelligence, and friendly special operations forces’ (U.S. Army 2018) p.34), we take the view that humans are one of three kinds of ISR agent in a multi-agent setting as depicted in Figure 2, along with software agents based on (i) subsymbolic AI technologies (e.g., deep neural networks (LeCun, Bengio, and Hinton 2015)) and (ii) symbolic AI technologies (e.g., logic-based approaches). To achieve interoperability between these three kinds of agent (ISR asset), we need to:

1. enable subsymbolic AI agents to share uncertainty-aware representations of insights and knowledge that can then be communicated to symbolic AI agents;

2. equip symbolic AI agents to learn the uncertainty distribution of causal links from data, while being able to share insights to subsymbolic AI agents; and

3. develop symbiotic AI techniques to effectively interact with humans, at first by adapting stereotypical behaviours via continuous learning from human-machine teaming activities.
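As a concrete illustration of the first requirement, a subsymbolic agent can convert raw per-class evidence into a subjective-logic opinion (per-class beliefs plus an explicit uncertainty mass), in the style of evidential deep learning (Sensoy, Kaplan, and Kandemir 2018); a symbolic consumer can then decide whether to commit to a fact. This is a sketch under stated assumptions: the function names, labels, and commitment threshold are illustrative, not prescribed by the cited work.

```python
def evidential_opinion(evidence):
    """Map non-negative per-class evidence from a subsymbolic model to a
    subjective-logic opinion: beliefs b_k = e_k / S and uncertainty mass
    u = K / S, where S = sum_k (e_k + 1) is the Dirichlet strength."""
    k = len(evidence)
    strength = sum(e + 1 for e in evidence)
    beliefs = [e / strength for e in evidence]
    uncertainty = k / strength
    return beliefs, uncertainty  # beliefs + uncertainty always sum to 1

def to_symbolic_fact(labels, evidence, max_uncertainty=0.3):
    """Symbolic-agent side: commit to the best-supported label only when the
    opinion's uncertainty mass is acceptably low."""
    beliefs, u = evidential_opinion(evidence)
    if u > max_uncertainty:
        return ("unknown", u)
    best = max(range(len(beliefs)), key=beliefs.__getitem__)
    return (labels[best], beliefs[best])

# Strong evidence for 'vehicle' -> low uncertainty, fact asserted
print(to_symbolic_fact(["vehicle", "person", "clutter"], [8, 1, 0]))
# Weak evidence overall -> high uncertainty, no commitment
print(to_symbolic_fact(["vehicle", "person", "clutter"], [1, 1, 0]))
```

Because the uncertainty mass travels with the beliefs, the symbolic agent can distinguish 'confidently vehicle' from 'too little evidence to say', which a bare softmax score conflates.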

The first two cases focus on interoperability between machine assets. In the third case we go beyond the traditional hierarchical architecture that sees humans interacting only

Figure 2: A multi-agent non-hierarchical approach to CSU: (top) human agent, (bottom left) subsymbolic AI agent, (bottom right) symbolic AI agent.

with symbolic AI-equipped agents that in turn leverage subsymbolic AI for achieving human-level or superior performance on specific tasks. Such a traditional architecture is limited because: (1) it is not always the case that symbolic AI is needed for interaction with humans (Ribeiro, Singh, and Guestrin 2016); (2) there are tasks for which a symbolic AI can support a subsymbolic AI agent (Xu et al. 2018); and (3) there are tasks for which humans can support symbolic and/or subsymbolic AI agents (Phan et al. 2016), hence AI agents need to be equipped with the capabilities to learn and reason about human hierarchies and structures.

Figure 3 provides a mapping between the MDO layered ISR architecture envisioned in (Spencer, Duncan, and Taliaferro 2019) and the preceding characterisation of assets as symbolic, subsymbolic, or hybrid.

Vignette 2: Human-Machine Teaming

Our work seeks to advance capabilities to contribute to complex coalition tasks in support of MDO, where the need for joint and multinational teams and multi-domains is cardinal (U.S. Army 2018). It is of paramount importance to provide a coherent view and assessment of operational situations as they happen, thus integrating learning and reasoning for CSU in complex, contested environments to inform decision makers at the edge of the network. As previously noted, CSU requires both collective insight — accurate and deep understanding of a situation derived from uncertain and often sparse data — and collective foresight — the ability to predict what will happen in the future (Preece et al. 2017).

The notion of affordances has been central to the human-computer interaction (HCI) field for many years, referring to what an object ‘is for’, i.e., ‘the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used’ (Norman 1988). In the MDO layered ISR context, it is necessary to consider human and machine assets in terms of their affordances to a range of ISR tasks. The purpose of human-machine teaming is to aim for each party to exploit the strengths of, and compensate for the weaknesses of, the

Figure 3: Simplified version of the figure from (Spencer, Duncan, and Taliaferro 2019): rectangles represent symbolic systems; circles represent subsymbolic systems; rounded rectangles represent hybrid elements.

other (Cummings 2014). For example, (Crouser and Chang 2012) characterised machine affordances in the scope of visual analytics as follows:

• large-scale data manipulation;
• collection and storage of large data volumes;
• efficient data movement;
• bias-free analysis.

Based on current machine capabilities, the following constitute human asset affordances (Crouser and Chang 2012):

• visual and audiolinguistic perception;
• sociocultural awareness;
• creativity;
• broad background domain knowledge.

In fulfilling the MDO, it will become common to envisage deployment of both manned and unmanned tactical headquarters (HQ) as illustrated in Figure 4, elaborated from the scenario in (White et al. 2019). Here, concurrently with the deployment of the manned HQ A, a second unmanned HQ B is established further forward in a high threat area, consisting of ‘virtual staff officers’. These are designed to work in cohort with their opposite numbers in the manned HQ and reduce both HQ footprint as well as the workload of — and threat to — human operators. A mix of both autonomous

and manned sensors feed into the unmanned HQ, and human-machine teaming provides the enduring requirement to have a ‘human in the loop’ in order to make key final decisions.

Figure 4: Human-machine teaming in the tactical domain: deployment of manned and unmanned tactical headquarters equipped with subsymbolic and symbolic AI agents; elaborated from (White et al. 2019).

Vignette 3: Dense Urban Terrain Analysis

Accelerating rates of urbanisation globally, and the strategic importance of cities and megacities, ensure that MDO operations will take place within dense urban terrain. Here, density refers to both the physical and demographic nature of this environment, giving rise to specific physical, cognitive, and operational characteristics. Preparation for MDO in dense urban terrain necessitates intelligence activities to understand human, social, and infrastructure details; such areas are characterised by diverse, interconnected human and physical networks, and three-dimensional engagement areas affording varying levels of ready-made cover and concealment.

In such environments, ISR will exploit and augment civilian infrastructure. For example, use of civilian CCTV (closed circuit television cameras) will increasingly be augmented with processing for automatic facial recognition for the detection and tracking of high-value targets, or to support building patterns of life. As targets move into vehicles, civilian automatic number plate recognition technology may be exploited. The diversity of such urban infrastructure — in some cases extending to full-scale ‘smart city’ integration — establishes further requirements for agile interoperability between ISR assets, especially since ISR tasks cannot necessarily plan in advance what collection and processing will be needed. In such cases, analytics composition will be dynamic and context-specific, with continual re-provisioning and resource optimisation (White et al. 2019).

In dense urban terrain, the need for joint, interagency, and often multinational cooperation is further pronounced. As above, CSU in this context depends on human-AI collaboration: machine processes such as AI agents offer powerful affordances in terms of data analytics, but they need to provide levels of assurance (explanation, accountability, transparency) for their outputs, particularly where those outputs are consumed by decision makers without technical training in information science, and who may be exploiting relatively unfamiliar local ISR assets. Current ML approaches are limited in their ability to generate interpretable models (i.e., representations) of the world necessary for CSU (Lake et al. 2017). Moreover, these approaches require large volumes of training data and lack the ability to learn from small numbers of examples as people and knowledge representation-based systems do (Guha 2015). An ability for human experts to tell a machine relevant information — often from their lived experience of the local environment — increases the tempo and granularity of human-AI interactions and the overall responsiveness of the system in meeting mission requirements. It is therefore important to equip coalition machine agents with integrated learning and knowledge representation mechanisms that support CSU while affording assurance (explainability) and an ability to be told key information to mitigate issues with sparse data (tellability). In our recent research we have built significant foundations for the neuro-symbolic hybrid environment, including multi-agent learning with multimodal data (Xing et al. 2018), evidential deep learning (Sensoy, Kaplan, and Kandemir 2018), probabilistic logic programming (Cerutti et al. 2019), and forward inferencing architectures where the output of a neural network is fed into a probabilistic logic engine to detect events with complex spatiotemporal properties (Vilamala et al. 2019).
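A forward-inferencing step of this kind can be sketched as follows: neural detectors supply probabilistic facts, and a symbolic rule combines them into the probability of a higher-level event. This toy stand-in assumes independent facts and a single conjunctive rule; the fact names and probabilities are hypothetical, and a real system would use a probabilistic logic engine rather than this hand-rolled product.

```python
def conjunctive_rule_probability(fact_probs, rule_body):
    """P(head) for a rule 'head :- f1, ..., fn' over independent probabilistic
    facts, each weighted by a neural detector's confidence."""
    p = 1.0
    for fact in rule_body:
        p *= fact_probs[fact]
    return p

# Hypothetical per-modality detector outputs
fact_probs = {"crowd_motion": 0.9, "rhythmic_chanting": 0.8}

# Symbolic rule: ritual_activity :- crowd_motion, rhythmic_chanting
p_ritual = conjunctive_rule_probability(
    fact_probs, ["crowd_motion", "rhythmic_chanting"])
print(round(p_ritual, 2))  # 0.9 * 0.8 = 0.72
```

The design point is the division of labour: the subsymbolic side supplies soft evidence per modality, while the rule body encodes the spatiotemporal event pattern in a form humans can inspect and amend.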

Layered Explanations for Layered ISR in MDO

The emergent goal from the three vignettes in the previous section is to address the challenge of enabling rapid exploitation of adaptive ISR knowledge to inform decision-making across coalitions in MDO, by creating system architectures to enable synergistic collaboration between machine and human agents for actionable insight and foresight in a contested environment.

Throughout our earlier research into CSU we have identified the need for the agile integration of human and machine agents from across coalition partners into dynamic and responsive teams. We have formalised this as human-agent knowledge fusion (HAKF): a capability to support this deep interaction, comprising bi-directional information flows of explainability and tellability, thereby enabling meaningful communication between AI and humans (Braines, Preece, and Harborne 2018) as shown in Figure 5. This HAKF capability supports explainability and tellability naturally as conversational processes between human and machine agents (Tomsett et al. 2018), enabling AI agents to provide explanations of results arising from complex machine/deep learning classifications, and to receive knowledge that modifies their models or knowledge bases.

A key requirement is to add human interaction to the distributed symbolic/subsymbolic integration highlighted in the previous section and establish the minimum set of common language that the various human and AI agents need to master to ensure effective communication for a given task. To

Figure 5: Human-agent knowledge fusion for improved confidence and performance in support of better decision-making.

support intuitive machine processable representations in the context of dynamic context-aware gathering and information processing services, we pay particular attention to the human consumability of machine generated information, especially in the context of conversational interaction and where decision makers may lack deep technical training in information science. This common language must be capable of conveying uncertainty and the appropriate structures to achieve integration with the subsymbolic layers, as well as more traditional semantic features relevant to the domain. We do not limit ourselves to purely linguistic forms; novel visual or diagrammatic notations, or indeed other communication techniques, may be relevant as part of the solution.

Moreover, it is necessary to consider the case of automated negotiations between various autonomous agents, some of which will be humans. At the same time, humans themselves can be the object of a learning task: their own behaviour can potentially be nudged in specific directions if the machine agent learns enough about the individual human agent (or human agents in general) to infer the impact of suggestions or changes. In addition, machine agents might need to identify the best fit among the human agents for a given task, with historical data helping them towards this goal. Such symbiotic AI techniques can be used to more effectively interact with the humans, at first by adapting stereotypical behaviours via continuous learning from human-machine interactions.

Such a complex and dynamic hybrid setting is particularly risky and prone to exploitation in a contested environment, hence the need to integrate the uncertainty-aware and probabilistic capabilities. All of this must be achieved at a tempo that is appropriate to the decision-making task and the involvement of the human users, with machine agents able to support real-time interaction.

Layered Explanations

In recent work, we examined explainability from the perspective of explanation recipients, of six kinds (Tomsett et al. 2018): system creators, system operators, executors making a decision on the basis of system outputs, decision subjects affected by an executor’s decision, data subjects whose personal data is used to train a system, and system examiners, e.g., auditors or ombudsmen. Based on this framework, we proposed a ‘layered’ approach to offer different explanations tailored to the different stakeholders (Preece et al. 2018) by means of a composite explanation object that packs together all the information needed to satisfy multiple stakeholders, and can be unpacked (e.g., by accessor methods) per a recipient’s particular requirements. We view such an object as being layered as follows:

Layer 1 — traceability: transparency-based bindings to internal states of the model, so the explanation isn’t entirely a post-hoc rationalisation and shows that the system ‘did the thing right’;

Layer 2 — justification: post-hoc representations (potentially of multiple modalities) linked to layer 1, offering semantic relationships between input and output features to show that the system ‘did the right thing’;

Layer 3 — assurance: post-hoc representations (again, potentially of multiple modalities) linked to layer 2, with explicit reference to policy/ontology elements required to give recipients confidence that the system ‘does the right thing’.
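A composite explanation object of this kind might be sketched as below; the role-to-layer mapping and the field contents are illustrative assumptions on our part rather than a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    """Packs the three explanation layers; unpack() plays the role of the
    accessor methods, returning only the layers a given recipient needs."""
    traceability: str   # layer 1: bindings to internal model states
    justification: str  # layer 2: input/output feature relationships
    assurance: str      # layer 3: policy/ontology references

    # Illustrative recipient-to-layer mapping (not prescribed by the framework)
    VIEWS = {
        "system creator": ("traceability", "justification"),
        "executor": ("justification", "assurance"),
        "system examiner": ("traceability", "justification", "assurance"),
    }

    def unpack(self, recipient):
        return {layer: getattr(self, layer) for layer in self.VIEWS[recipient]}

# Hypothetical contents for the marketplace example
exp = LayeredExplanation(
    traceability="saliency over the flagged video segment",
    justification="chanting + synchronised motion -> ritual dance",
    assurance="counterfactual (aggressive action) checked against policy",
)
print(sorted(exp.unpack("executor")))
```

Packing all layers into one object lets the same explanation travel across the coalition, with each partner unpacking only the view its stakeholders require.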

Integrated Example

We consider a dense urban terrain setting drawing on (Kaplan et al. 2018) in which civilian sensing infrastructure, including CCTV, is supplemented by coalition ISR assets. Video feeds from a public marketplace are being monitored using activity recognition AI/ML services as elaborated in (Vilamala et al. 2019). An outbreak of anomalous, ‘violent’ physical activity is suddenly detected in the CCTV feed. At this point, via enhanced asset interoperability, the coalition ISR system accesses on demand other sensing modalities to obtain more data on the situation, tapping into recently-collected audio data from the marketplace, obtained via acoustic sensors. Processing the relevant part of the audio stream reveals rhythmic chanting that, fused together with the visual activity, signifies that the activity is a harmless dance ritual specific to the region. Note that the inference that the activity is non-threatening constitutes situational understanding: insight with foresight. Moreover, while it is conceivable that, when enough data is available to classify the activity, the harmless dance could be identified by machine processing, in (Kaplan et al. 2018) we consider the case where recognising this activity requires local cultural knowledge and is handled by human-machine teaming: the machine brings the anomalous visual activity, including the extra context from the audio, to the attention of an experienced human agent.

Our concept of layered explanations supports the ‘packing’ of three levels of explanation to underpin confident human decision-making in this example:

• traceability in terms of salient features in the video and audio, for example, using the technique in (Hiley et al. 2019) to discriminate the significant spatial and temporal features (in the latter case, the ‘violent’ motions);

• justification of the inference as to the meaning of the activity (insight and foresight), assuming that the inference can be made by machine processing; and

• assurance that counterfactuals have been considered (possibilities of harmless vs aggressive action), potentially represented via the uncertainty-awareness methods from (Kaplan et al. 2018).
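For the assurance layer, one way to quantify whether the aggressive counterfactual remains plausible is a subjective-logic-style vacuity measure over class evidence, in the spirit of the evidential deep learning of (Sensoy, Kaplan, and Kandemir 2018). The evidence counts below are made up for illustration.

```python
# Illustrative uncertainty-awareness check: class evidence parameterises a
# Dirichlet (alpha_k = e_k + 1), beliefs are b_k = e_k / S, and vacuity
# u = K / S is the probability mass not committed to any class. High u
# means counterfactuals (here, 'aggressive') cannot be ruled out.
def dirichlet_belief(evidence):
    K = len(evidence)                          # number of classes
    S = sum(e + 1 for e in evidence.values())  # Dirichlet strength
    beliefs = {c: e / S for c, e in evidence.items()}
    uncertainty = K / S                        # vacuity
    return beliefs, uncertainty

beliefs, u = dirichlet_belief({"harmless_dance": 6.0, "aggressive": 2.0})
print(beliefs, round(u, 3))
# With this little evidence, u is non-negligible and the aggressive
# counterfactual is not ruled out, so assurance should flag the case.
```

By construction the beliefs and vacuity sum to one, so the assurance layer can report explicitly how much of the probability mass is uncommitted rather than silently normalising it away.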

Conclusion and Future Work

In this paper, we have applied the coalition situational understanding concept to the problem of achieving layered ISR in multi-domain operations, specifically where artificial intelligence and machine learning services provide for improved distributed data analytics, and intelligence augmentation — particularly the need for assured and explainable AI — supports improved human-machine cognition. We focused on three elements in realising the layered ISR vision: human-machine teaming, dense urban terrain analysis, and enhanced asset interoperability, highlighting how the need for explainable AI in the multi-partner coalition setting is key.

Our current and future work focuses on the general problem illustrated in Figure 2: enabling subsymbolic AI agents to share uncertainty-aware representations of insights and knowledge that can then be communicated to symbolic AI agents, while also equipping symbolic AI agents to share insights with subsymbolic AI agents (i.e., machine-to-machine tellability). Ultimately, we seek to develop techniques by which AI/ML agents can synergistically interact with humans via continuous learning from human-machine teaming activities.

References

Blasch, E. 2006. Level 5 (user refinement) issues supporting information fusion management. In 9th International Conference on Information Fusion.
Braines, D.; Preece, A.; and Harborne, D. 2018. Multimodal explanations for AI-based multisensor fusion. In Proc. NATO SET-262 RSM on Artificial Intelligence for Military Multisensor Fusion Engines.
Cerutti, F.; Kaplan, L.; Kimmig, A.; and Şensoy, M. 2019. Probabilistic logic programming with beta-distributed random variables. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 7769–7776.
Crouser, R., and Chang, R. 2012. An affordance-based framework for human computation and human-computer collaboration. IEEE Transactions on Visualization and Computer Graphics 18(12):2859–2868.
Cummings, M. 2014. Man versus machine or man + machine? IEEE Intelligent Systems 29(5):62–69.
Dostal, B. C. 2007. Enhancing situational understanding through the employment of unmanned aerial vehicles. Army Transformation Taking Shape... Interim Brigade Combat Team Newsletter (01–18).
Engelbart, D. C. 1962. Augmenting human intellect: A conceptual framework. Technical report, Stanford Research Institute.
Guha, R. 2015. Towards a model theory for distributed representations. In Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches: Papers from the 2015 AAAI Spring Symposium, 22–26.
Hiley, L.; Preece, A.; Hicks, Y.; Marshall, D.; and Taylor, H. 2019. Discriminating spatial and temporal relevance in deep Taylor decompositions for explainable activity recognition. In Proc. IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI).
Kaplan, L.; Cerutti, F.; Sensoy, M.; Preece, A.; and Sullivan, P. 2018. Uncertainty aware AI ML: Why and how. In Proc. AAAI FSS-18: Artificial Intelligence in Government and Public Sector.
Lahat, D.; Adali, T.; and Jutten, C. 2015. Multimodal data fusion: An overview of methods, challenges, and prospects. Proceedings of the IEEE 103(9).
Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences.
LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.
Norman, D. 1988. The Psychology of Everyday Things. Basic Books.
Phan, N.; Dou, D.; Piniewski, B.; and Kil, D. 2016. A deep learning approach for human behavior prediction with explanations in health social networks: social restricted Boltzmann machine (SRBM+). Social Network Analysis and Mining 6(79).
Preece, A.; Cerutti, F.; Braines, D.; Chakraborty, S.; and Srivastava, M. 2017. Cognitive computing for coalition situational understanding. In First International Workshop on Distributed Analytics InfraStructure and Algorithms for Multi-Organization Federations (IEEE Smart World Congress).
Preece, A.; Harborne, D.; Braines, D.; Tomsett, R.; and Chakraborty, S. 2018. Stakeholders in explainable AI. In Proc. AAAI FSS-18: Artificial Intelligence in Government and Public Sector.
Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’16), 1135–1144. ACM.
Sensoy, M.; Kaplan, L.; and Kandemir, M. 2018. Evidential deep learning to quantify classification uncertainty. In Advances in Neural Information Processing Systems, 3179–3189.
Spencer, D. K.; Duncan, S.; and Taliaferro, A. 2019. Operationalizing artificial intelligence for multi-domain operations: a first look. In Proc. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications (SPIE Vol 11006). SPIE.
Tomsett, R.; Braines, D.; Harborne, D.; Preece, A.; and Chakraborty, S. 2018. Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. In 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018).
U.K. Ministry of Defence. 2010. Understanding: Joint Doctrine Publication 04 (JDP 04).
U.S. Army. 2018. The U.S. Army in Multi-Domain Operations, 2028 (TRADOC Pamphlet 525-3-1).
Vilamala, M. R.; Hiley, L.; Hicks, Y.; Preece, A.; and Cerutti, F. 2019. A pilot study on detecting violence in videos fusing proxy models. In Proc. 22nd International Conference on Information Fusion.
White, G.; Pierson, S.; Rivera, B.; Toumad, M.; Sullivan, P.; and Braines, D. 2019. DAIS-ITA scenario. In Proc. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications (SPIE Vol 11006). SPIE.
Xing, T.; Sandha, S. S.; Balaji, B.; Chakraborty, S.; and Srivastava, M. 2018. Enabling edge devices that learn from each other: Cross modal training for activity recognition. In Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking, 37–42. ACM.
Xu, J.; Zhang, Z.; Friedman, T.; Liang, Y.; and Van den Broeck, G. 2018. A semantic loss function for deep learning with symbolic knowledge. In Proceedings of the 35th International Conference on Machine Learning (ICML).