Despite continued advances in robotics, it remains difficult to deploy robots fully autonomously for non-trivial tasks in unstructured environments. As a result, such tasks typically keep a human in the loop to provide supervised autonomy. We propose to develop a flexible, software-based solution that enables operators to easily command robots by making greater use of computer vision. The key innovation of this proposal is to connect a perception pipeline that recognizes instances of known object classes with affordances: descriptions of how an object should be inspected or manipulated. For example, given a depth image of a scene, the system will recognize a hatch door handle and its hinge and annotate the scene to automatically inform the user how a robot could open the hatch. The user can then confirm the desired action and have the robot open the hatch door autonomously. The proposed work will result in a software tool that enables training perception pipelines using domain randomization. The perception capabilities will be demonstrated in inspection and manipulation tasks relevant to IVR scenarios, such as inspection of hatch seals and manipulation of buttons, switches, and handrails. Experiments will be performed in simulation and on hardware using a UR5e and the Astrobee platform.
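To illustrate the intended coupling between perception output and affordances, the following is a minimal sketch, not part of the proposed deliverables; all class names, fields, and registry entries (e.g. `Affordance`, `AFFORDANCE_REGISTRY`, `revolute_about_hinge`) are hypothetical placeholders assumed here for clarity.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Affordance:
    """How a recognized object class can be inspected or manipulated."""
    name: str         # e.g. "open_hatch", "press", "inspect_seal"
    grasp_frame: str  # frame on the object where the end effector attaches
    motion: str       # parametrized motion primitive, e.g. "revolute_about_hinge"

@dataclass
class Detection:
    """One object instance reported by the perception pipeline."""
    object_class: str    # e.g. "hatch_handle"
    pose: List[float]    # 6-DoF pose estimate in the camera frame

# Hypothetical registry linking known object classes to their affordances.
AFFORDANCE_REGISTRY: Dict[str, List[Affordance]] = {
    "hatch_handle": [Affordance("open_hatch", "handle_grasp", "revolute_about_hinge")],
    "button":       [Affordance("press", "button_face", "linear_push")],
    "handrail":     [Affordance("grasp", "rail_center", "fixed_grasp")],
}

def annotate_scene(detections: List[Detection]) -> List[Tuple[Detection, Affordance]]:
    """Pair each detection with its candidate affordances so the operator
    can confirm an action before the robot executes it autonomously."""
    candidates = []
    for det in detections:
        for aff in AFFORDANCE_REGISTRY.get(det.object_class, []):
            candidates.append((det, aff))
    return candidates

if __name__ == "__main__":
    scene = [Detection("hatch_handle", [0.4, 0.1, 0.9, 0.0, 0.0, 1.57])]
    for det, aff in annotate_scene(scene):
        print(f"{det.object_class}: propose '{aff.name}' via {aff.motion}")
```

In the envisioned workflow, the operator would see these candidate actions overlaid on the scene and confirm one for autonomous execution.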
The initial focus is on enabling higher levels of autonomy for IVA tasks such as inspection, cargo unloading, and science experiment tending. The technology may also be applicable to EVA and ISAM-related robotic activities. In the long run, we also envision applications such as construction and assembly on the Moon and other planets.
The same applications that are of interest to NASA are also of interest to the rapidly expanding commercial space industry. Terrestrial applications enabled by the proposed work include inspection and maintenance of industrial sites and offshore platforms.