Artificial Intelligence (AI) algorithms, which lie at the heart of emerging autonomy technologies revolutionizing industries including aviation, defense, and manufacturing, are often perceived as black boxes whose decisions result from complex rules learned on the fly. Unless these decisions are explained in a human-understandable form, end users are less likely to accept them, and certification personnel are less likely to clear these systems for wide use. Explainable AI (XAI) refers to AI algorithms whose actions can be readily understood by humans. Phase I of this SBIR developed EXplained Process and Logic of Artificial INtelligence Decisions (EXPLAIND), a prototype tool for verification and validation, as well as in-operation explanation, of AI-based aviation systems. We successfully used EXPLAIND to generate reliable, human-understandable explanations for decisions made by a NASA-developed AI algorithm that detects aircraft trajectory anomalies. Controllers participated in cognitive walkthroughs of EXPLAIND's explanation interface, which successfully conveyed the rationale behind one frequently detected anomaly type. EXPLAIND thus represents an important step toward user acceptance and certification of AI-based decision support tools (DSTs). In Phase II, we propose to build on the successful Phase I technology to create a commercial, licensable, universally applicable, cloud-based AI explainability software platform. We will pursue three thrusts in Phase II: (1) operationalize EXPLAIND for aircraft trajectory anomaly detection applications; (2) expand EXPLAIND into a universal explainability approach and apply it to benefit other NASA XAI research programs; and (3) apply EXPLAIND to non-aviation applications with significant commercialization potential: computer vision systems in self-driving cars, credit rating algorithms in the financial industry, and insurance claims processing algorithms in the health insurance industry.
EXPLAIND can benefit NASA AI algorithms used for: (1) aviation anomaly detection (for the NASA System-Wide Safety project); (2) image perception and drone-team pattern formation in support of autonomous search-and-rescue missions (for the NASA ATTRACTOR project); (3) image recognition applied to NASA Earth science datasets; (4) UAM/UTM path planning, de-confliction, and scheduling (for the NASA ATM-X project); (5) Increasing Diverse Operations (IDO) traffic management; and (6) the Science Mission Directorate's distant-planet discovery algorithms.
EXPLAIND can benefit commercial AI algorithms used for: (1) practical air and ground aviation anomaly detection (for the FAA Office of Safety); (2) computer-vision-based navigation of self-driving cars; (3) credit and claims processing in the financial and health insurance industries; and (4) business decision-making in areas governed by new explainability- and ethics-related regulations.