NASA SBIR 2019-I Solicitation

Proposal Summary


PROPOSAL NUMBER:
 19-1-H6.22-3899
SUBTOPIC TITLE:
 Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
PROPOSAL TITLE:
 Energy Efficient High-Throughput Neuromorphic Processor for Deep Reinforcement Learning
SMALL BUSINESS CONCERN (Firm Name, Mail Address, City/State/Zip, Phone)
Prixarc, LLC
2673 Commons Boulevard, Suite 55
Beavercreek, OH 45431-3833
(937) 782-8206

Principal Investigator (Name, E-mail, Mail Address, City/State/Zip, Phone)

Name:
Dr. Sanjeevi Sirisha Karri
E-mail:
sanjeevi@prixarc.com
Address:
2673 Commons Blvd., Suite 55, Beavercreek, OH 45431-3833
Phone:
(802) 829-8375

Business Official (Name, E-mail, Mail Address, City/State/Zip, Phone)

Name:
Dr. Vamsy Chodavarapu
E-mail:
vamsy@prixarc.com
Address:
2673 Commons Blvd., Suite 55, Beavercreek, OH 45431-3833
Phone:
(937) 782-8206
Estimated Technology Readiness Level (TRL):
Begin: 2
End: 4
Technical Abstract (Limit 2000 characters, approximately 200 words)

In space environments, particularly for low-power satellites such as CubeSats, onboard neural processing has many applications. One of the most useful neural algorithms is reinforcement learning for cognitive communications in software-defined radios. In reinforcement learning, a policy function determines the action to take and must be updated based on environmental inputs. This function can become extremely complex and is often approximated with deep neural networks, leading to deep reinforcement learning algorithms. Updating these deep networks can itself consume substantial energy. At present, no low-SWaP, high-throughput deep reinforcement learning processors are commercially available. The objective of this work is therefore to develop highly efficient neuromorphic processors for deep reinforcement learning, particularly for cognitive communications. We will examine the properties of deep reinforcement learning algorithms in terms of their hardware requirements and then design efficient hardware for them. A key component of this work will be building highly energy-efficient processors for training deep networks, which is the most computationally demanding task in the algorithm. We will investigate multiple design options and evaluate their performance on several deep reinforcement learning based cognitive communications datasets. The best architecture option will be selected for prototyping as a scaled-down version on an FPGA for verification. This work builds on our extensive prior work on efficient processors for training deep networks. In Phase II, we will design a full chip based on the best option from Phase I and characterize its performance. We will collaborate with researchers at NASA Glenn and AFRL. The chip will have many commercial applications, including CubeSats, UAVs, commercial and personal radio communications, and big data applications.
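
For illustration only, the training step that dominates the compute and energy budget can be sketched as a minimal policy-gradient (REINFORCE) update in Python/PyTorch. The toy state size, action count, batch, and learning rate below are hypothetical placeholders, not the cognitive-communications datasets or the hardware architecture proposed here.

# Minimal policy-gradient (REINFORCE) sketch: the backward pass that updates
# the policy network is the energy-dominant operation a training-capable
# neuromorphic processor would target. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(                  # small policy network approximating pi(a|s)
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 4),                    # 4 discrete actions (e.g., radio parameter choices)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def update(states, actions, returns):
    """One training update: forward pass, policy-gradient loss, backward pass."""
    logits = policy(states)                              # forward pass
    logp = torch.log_softmax(logits, dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()                    # REINFORCE objective
    opt.zero_grad()
    loss.backward()                                      # backprop: dominant compute/energy cost
    opt.step()

# Example call with random placeholder data (batch of 32 transitions).
update(torch.randn(32, 16),
       torch.randint(0, 4, (32,)),
       torch.randn(32))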

Potential NASA Applications (Limit 1500 characters, approximately 150 words)

Cognitive communications: The low-power architecture for deep reinforcement learning will be ideally suited to cognitive communications tasks. These require online training of a deep network, which would be handled by the proposed chip at very low power.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words)

Commercial terrestrial and personal radio applications where smarter communications may be needed; Robotics; Big data analytics; Bioinformatics; Data mining.

Duration: 6

Form Generated on 06/16/2019 23:18:04