Mosaic ATM proposes an innovative approach to conveying the trustworthiness of an automated system in a given context, supporting humans in appropriately calibrating their trust in that system. Appropriately calibrated trust will, in turn, inform the scope of autonomy humans grant the system for independent decision making and task execution. We communicate system trustworthiness through a combination of explainable machine learning (ML) and a representation of confidence in model results, derived from quantifying the uncertainty in those results given the available input data. We demonstrate our approach in the context of automated support for monitoring and managing crew wellbeing and performance on deep space exploration missions, where astronauts will be subject to the physical and psychological stress of performing in an isolated, confined, and extreme (ICE) environment. In Phase I, we will demonstrate our approach to supporting human assessment of automated system trustworthiness through a generalized method for explainable ML and for representing uncertainty in ML model results, and we will situate these methods in a prototype system that supports evaluation of their effect on human trust calibration. This prototype system will be based on our concept for automated support for monitoring and managing crew wellbeing and performance, which we will document in Phase I.
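The abstract does not specify which ML or uncertainty-quantification techniques will be used. As one illustration of the general idea, the minimal sketch below pairs an ensemble-spread uncertainty estimate with a model-agnostic feature-importance explanation, so each prediction is reported alongside a confidence indicator and an account of which inputs drive it. The feature names, synthetic data, and choice of a random-forest ensemble with permutation importance are assumptions made for illustration, not the proposal's method.

```python
# Minimal sketch (illustrative only, not the proposal's implementation):
# surface per-prediction uncertainty plus a feature-level explanation,
# the two ingredients the abstract names for supporting trust calibration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical crew-state inputs (placeholder names): heart-rate
# variability, sleep hours, task load. Target: a notional wellbeing score.
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)
features = ["hrv", "sleep_hours", "task_load"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Uncertainty proxy: spread of predictions across ensemble members.
# A wide spread flags inputs where the model's output warrants less trust.
per_tree = np.stack([t.predict(X_te) for t in model.estimators_])
mean_pred = per_tree.mean(axis=0)
std_pred = per_tree.std(axis=0)

# Explanation: permutation importance as a simple, model-agnostic
# account of which inputs the model's predictions depend on.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: importance={score:.3f}")
print(f"example prediction: {mean_pred[0]:.2f} +/- {std_pred[0]:.2f}")
```

In a deployed system, the standard deviation would map to a displayed confidence level and the importance ranking to a human-readable explanation; any comparably grounded uncertainty estimate (e.g., Bayesian or conformal methods) and attribution technique could fill the same roles.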