Future deep space missions will require artificial cognitive agents that are able to interface with onboard systems and take over time-consuming routine tasks, thus reducing the crew's cognitive load and workload. In addition, these agents should continuously monitor critical onboard systems and alert the crew to any off-nominal operation, responding quickly according to predetermined procedures in cases where the crew is overloaded or would be endangered. We propose our fully implemented Thinking Robots Autonomous Cognitive System (TRACS) architecture at TRL 5 as the basis for a cognitive agent for future NASA deep space missions. TRACS is open and modular, makes decisions under uncertainty, and learns in a manner that assures system performance and improves it over time. It deeply integrates natural language capabilities and one-shot learning from instructions, observations, and demonstrations. TRACS allows for easy integration with commercial off-the-shelf components, third-party modules, and software libraries, and has extensive integrated fault detection, fault exploration, and recovery methods. TRACS has also been successfully used in several projects with NASA collaborators at NASA Langley and NASA Ames. This project will deliver (1) the fully implemented and operational interactive cognitive TRACS architecture, extended with episodic memory for long-term interactions and additional annotation mechanisms that facilitate assurance; (2) results from a feasibility study in a NASA-funded simulation environment demonstrating the full operation of the architecture in interactive human-subject experiments; and (3) a detailed plan for the Phase II application domains, system integration, and evaluations, based on NASA objectives and developed in collaboration with NASA.
The TRACS cognitive architecture will have broad application in NASA contexts, from cognitive advisors in cockpits to control architectures for autonomous robots working remotely on a Mars habitat. Because TRACS can be easily integrated with existing systems, it can also serve simply as an intelligent user interface on top of existing software, enabling natural task-based interactions with humans.
The TRACS cognitive architecture will also be widely applicable in social and assistive robotics (e.g., office assistants that are given new tasks on the fly), as well as in collaborative manufacturing and any other area where humans need to interact with systems in natural language and to configure, adapt, and task those systems online.