PROPOSAL NUMBER: 04-II X6.03-7770
PHASE-I CONTRACT NUMBER: NNL05AA77P
SUBTOPIC TITLE: Atmospheric Maneuver and Precision Landing
PROPOSAL TITLE: Novel Color Depth Mapping Imaging Sensor System
SMALL BUSINESS CONCERN
(Firm Name, Mail Address, City/State/Zip, Phone)
Nanohmics
6201 East Oltorf, Suite 400
Austin, TX 78741-7511
(512) 389-9990
PRINCIPAL INVESTIGATOR/PROJECT MANAGER
(Name, E-mail, Mail Address, City/State/Zip, Phone)
6201 East Oltorf St, Suite 100
Austin, TX 78741-7511
(512) 389-9990
TECHNICAL ABSTRACT (Limit 2000 characters, approximately 200 words)
Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video-camera machine vision system can supply position information about external objects, but no range information. Ideally, a single package is needed that provides three-dimensional relative position information about external objects. To this end, Nanohmics will develop a lightweight, compact, low-power, low-cost, modular sensor system that produces a depth map of the surroundings. By combining a color optical camera, a multi-element range-finding system, and digital processing electronics, a single low-cost sensor system can be designed that provides relative position and anti-collision information, i.e., a 3-Dimensional Vehicle Imaging Sensor for Incident Obstacle Navigation (3D VISION Mapper™). The proposed system could, for example, be mounted on the long-neck mast near the PANCAMs and NAVCAMs of Martian robotic rovers.
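The fusion step described above, attaching range information to a dense color image, can be sketched as follows. This is an illustrative sketch only: the function name, the nearest-sample densification, and the assumption that the range samples are already registered to image coordinates are not details from the proposal.

```python
import numpy as np

def fuse_color_and_range(rgb, range_points):
    """Attach a dense depth channel to an RGB frame from sparse range samples.

    rgb:          (H, W, 3) uint8 color image.
    range_points: list of ((row, col), range_m) pairs from a multi-element
                  range finder, assumed registered to image coordinates.
    Nearest-sample fill is a crude stand-in for a calibrated projection
    and proper interpolation.
    """
    h, w, _ = rgb.shape
    coords = np.array([p for p, _ in range_points], dtype=float)  # (N, 2)
    ranges = np.array([r for _, r in range_points], dtype=float)  # (N,)

    # Pixel coordinates for every location in the image, flattened to (H*W, 2)
    rows, cols = np.indices((h, w))
    pix = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)

    # Squared distance from every pixel to every range sample, (H*W, N)
    d2 = ((pix[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)

    # Each pixel takes the range of its nearest sample (crude densification)
    depth = ranges[np.argmin(d2, axis=1)].reshape(h, w)

    # Stack into an (H, W, 4) color-plus-depth frame
    return np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])
```

In practice the brute-force distance matrix would be replaced by a calibrated camera/range-finder model, but the output shape, RGB plus a per-pixel depth channel, is the essence of a color depth map.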
POTENTIAL NASA COMMERCIAL APPLICATIONS (Limit 1500 characters, approximately 150 words)
Robots deployed on planetary exploration missions, such as the Mars rovers, must be able to autonomously avoid rocks, crevices, holes, and cliffs. Technology developed during the Phase I SBIR program could be used to extend the mission life of robots deployed to planets where remote operation is impractical because of communication time delay and power limitations. Additionally, robot systems used on Mars must be able to detect and recognize shadows so that, while avoiding obstacles during terrain navigation, they do not lose the sunlight that powers their solar panels. Autonomous exploratory vehicles require depth information to calculate the best course around obstacles. The 3D VISION Mapper™ is designed to produce this information in pre-processed form, thereby offloading some of the heavy processing work from the main control processor.
Because fully autonomous vehicles are difficult to control, semi-autonomous control systems are currently more widely used. In this approach, a human operator gives the robotic vehicle general directions: a destination, a path to travel, or waypoints. The semi-autonomous vehicle then avoids local obstacles and moves to the desired location. The 3D VISION Mapper™ is ideally suited to such applications. The depth map data can be easily converted to a "top-down", map-like view and presented to the operator graphically.
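The "top-down" conversion mentioned above is essentially a polar-to-Cartesian projection of the depth map into an obstacle grid. A minimal sketch, assuming a single horizontal slice of range samples; the function name, field of view, cell size, and grid dimensions are illustrative values, not taken from the proposal:

```python
import numpy as np

def depth_map_to_top_down(depth, h_fov_deg=60.0, cell_size=0.1,
                          grid_width=40, grid_depth=40, max_range=4.0):
    """Project a horizontal slice of a depth map into a top-down grid.

    depth: 1-D array of ranges (meters) across the sensor's horizontal
           field of view; non-finite values mean "no return".
    Returns a (grid_depth, grid_width) boolean grid, True = obstacle,
    with the sensor at the bottom-center cell.
    """
    n = depth.shape[0]
    # Bearing of each depth sample across the field of view
    angles = np.deg2rad(np.linspace(-h_fov_deg / 2, h_fov_deg / 2, n))
    grid = np.zeros((grid_depth, grid_width), dtype=bool)
    for r, a in zip(depth, angles):
        if not np.isfinite(r) or r <= 0 or r > max_range:
            continue  # no return, or outside the mapped area
        x = r * np.sin(a)   # lateral offset (m), positive to the right
        y = r * np.cos(a)   # forward distance (m)
        col = int(x / cell_size) + grid_width // 2
        row = int(y / cell_size)
        if 0 <= row < grid_depth and 0 <= col < grid_width:
            grid[row, col] = True
    return grid
```

A grid like this is what an operator display or a local path planner would consume; a full implementation would also integrate successive frames and the vehicle's own motion.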
POTENTIAL NON-NASA COMMERCIAL APPLICATIONS (Limit 1500 characters, approximately 150 words)
Applications outside of NASA include the following:
Consumer Electronics (Gaming, role-playing environments, PDAs)
Security & Surveillance (Image processing and identification)
Automotive (Airbag deployment, occupant sensing, obstacle avoidance, autonomous navigation)
Robotics & Machine Vision (Assembly robots, Pick & Place, Part Inspection, Measurement & Gauging)
Hazardous Area Mapping
Robots on urban search and rescue missions could also utilize the technology developed in this program. These robots must be mobile and robust in harsh environments, and at times must operate autonomously because the operator is distracted or under stress. Search and rescue robots must be able to detect and recognize obstacles and plan a route around them to continue the mission.