NASA SBIR 2018-II Solicitation

Proposal Summary


PROPOSAL NUMBER:
 18-2-A3.02-8802
PHASE 1 CONTRACT NUMBER:
 80NSSC18P1928
SUBTOPIC TITLE:
 Increasing Autonomy in the National Airspace System (NAS) (not vehicles)
PROPOSAL TITLE:
 Explainable Artificial Intelligence based Verification & Validation for Increasingly Autonomous Aviation Systems
SMALL BUSINESS CONCERN (Firm Name, Mail Address, City/State/Zip, Phone)
ATAC
2770 De La Cruz Boulevard
Santa Clara, CA 95050
(408) 736-2822

PRINCIPAL INVESTIGATOR (Name, E-mail, Mail Address, City/State/Zip, Phone)
Aditya Saraf
aps@atac.com
2770 De La Cruz Boulevard
Santa Clara, CA 95050-2624
(408) 736-2822

BUSINESS OFFICIAL (Name, E-mail, Mail Address, City/State/Zip, Phone)
Alan Sharp
acs@atac.com
2770 De La Cruz Boulevard
Santa Clara, CA 95050-2624
(408) 736-2822

Estimated Technology Readiness Level (TRL):
Begin: 3
End: 6
Technical Abstract (Limit 2000 characters, approximately 200 words)

Artificial Intelligence (AI) algorithms, which are at the heart of emerging autonomy technologies revolutionizing multiple industries, including aviation, defense, and manufacturing, are often perceived as black boxes whose decisions result from complex rules learned on the fly. Unless these decisions are explained in a human-understandable form, end users are less likely to accept them and certification personnel are less likely to clear these systems for wide use. Explainable AI (XAI) refers to AI algorithms whose actions can be readily understood by humans. Phase I of this SBIR developed EXplained Process and Logic of Artificial INtelligence Decisions (EXPLAIND), a prototype tool for verification and validation, as well as in-operation explanation, of AI-based aviation systems. We successfully used EXPLAIND to generate reliable, human-understandable explanations for decisions made by a NASA-developed AI algorithm that detects aircraft trajectory anomalies. Controllers participated in cognitive walkthroughs of EXPLAIND's explanation interface, which successfully explained the rationale behind one frequently detected anomaly type. EXPLAIND thus represents an important step toward user acceptance and certification of AI-based decision support tools (DSTs). In Phase II, we propose to build on the successful Phase I technology to create a commercial, licensable, universally applicable, cloud-based AI explainability software platform. We will pursue three thrusts in Phase II: (1) operationalize EXPLAIND for aircraft trajectory anomaly detection applications; (2) expand EXPLAIND into a universal explainability approach and apply it to benefit other NASA XAI research programs; and (3) apply EXPLAIND to non-aviation applications with significant commercialization potential: computer vision systems in self-driving cars, credit rating algorithms in the financial industry, and insurance claims processing algorithms in the health insurance industry.

Potential NASA Applications (Limit 1500 characters, approximately 150 words)

EXPLAIND can benefit NASA AI algorithms used for: (1) aviation anomaly detection (for the NASA System-Wide Safety project); (2) image perception and drone-team pattern formation in support of autonomous search-and-rescue missions (for the NASA ATTRACTOR project); (3) image recognition applied to NASA Earth science datasets; (4) UAM/UTM path planning, de-confliction, and scheduling (for the NASA ATM-X project); (5) Increasing Diverse Operations (IDO) traffic management; and (6) the Science Mission Directorate's distant-planet discovery algorithms.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words)

EXPLAIND can benefit commercial AI algorithms used for: (1) practical air and ground aviation anomaly detection (for the FAA Office of Safety); (2) computer-vision-based navigation of self-driving cars; (3) credit and claims processing in the financial and health insurance industries; and (4) business decision-making in areas governed by new explainability- and ethics-related regulations.

Duration: 24 months

Form Generated on 05/13/2019 13:31:36