Neurala and the CELEST Neuromorphics Lab have been awarded a 2011 National Aeronautics and Space Administration (NASA) Small Business Technology Transfer (STTR) Program contract. The joint work will focus on developing neuromorphic brains for autonomous robots operating in unknown, potentially hostile environments.
The announcement can be found here.
TECHNICAL ABSTRACT: Surface exploration of planetary environments with current robotic technologies relies heavily on human control and power-hungry active sensors to perform even the most elementary low-level functions. Ideally, a robot should be capable of autonomously exploring and interacting with an unknown environment without relying on human input or suboptimal sensors. Behaviors such as exploring unknown environments, memorizing the locations of obstacles or objects, building and updating a representation of the environment, and returning to a safe location are all tasks that animals perform efficiently on a daily basis. Phase I of this proposal will focus on the design of an adaptive, multi-component robotic neural system that captures the behavior of several brain areas responsible for perceptual, cognitive, emotional, and motor behaviors. This system makes use of passive, potentially unreliable sensors (analogous to animal visual and vestibular systems) to learn while navigating unknown environments and to build usable, correctable representations of these environments without requiring a Global Navigation Satellite System (GNSS). In Phase I, Neurala and the Boston University Neuromorphics Lab will construct a virtual robot, or animat, to be developed and tested in an extraterrestrial virtual environment. The animat will use passive sensors to perform a spatial exploration task: starting from a recharging base, it will autonomously plan where to go based on past exploration and its current motivation, develop and correct an internal map of the environment with the locations of obstacles, select the shortest path of return to its recharging base before its battery is depleted, and finally export the resulting map in a human-readable format. In Phase II, Neurala will enhance the model, translate it to low-power neuromorphic hardware, and collaborate with iRobot to test it on a robotics platform.
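To make the animat's task concrete, the exploration loop described in the abstract can be sketched in a deliberately simplified form: a grid-world agent that senses adjacent cells, maintains an internal occupancy map, and heads home along the shortest known path before its battery runs out. This is an illustrative toy only, assuming a perfect local sensor and a grid environment; none of the names or parameters below come from Neurala's actual model.

```python
from collections import deque

# Cell states in the agent's internal map.
FREE, OBSTACLE, UNKNOWN = 0, 1, 2

def neighbors(cell, rows, cols):
    """Yield the 4-connected neighbors of a grid cell."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield (nr, nc)

def shortest_path(grid, start, goal):
    """BFS over cells currently believed free: the 'return home' step."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for nxt in neighbors(cell, rows, cols):
            if nxt not in prev and grid[nxt[0]][nxt[1]] == FREE:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable in the current map

def explore(true_world, base, battery):
    """Explore outward from `base`, updating an internal map from local
    sensing, until the battery only just covers the trip home."""
    rows, cols = len(true_world), len(true_world[0])
    internal = [[UNKNOWN] * cols for _ in range(rows)]
    pos = base
    internal[pos[0]][pos[1]] = FREE
    while battery > 0:
        # "Sense" adjacent cells (a perfect passive sensor, for simplicity).
        for n in neighbors(pos, rows, cols):
            internal[n[0]][n[1]] = true_world[n[0]][n[1]]
        # Motivation: keep exploring only while battery comfortably
        # exceeds the cost of the shortest known path back to base.
        home = shortest_path(internal, pos, base)
        frontier = [n for n in neighbors(pos, rows, cols)
                    if internal[n[0]][n[1]] == FREE]
        if home is None or battery <= len(home) + 1 or not frontier:
            return internal, home  # return map and the path home
        pos = frontier[0]
        battery -= 1
    return internal, [pos]
```

A real system would replace the naive `frontier[0]` step with motivation-driven frontier selection and a learned, noise-tolerant map, but the structure (sense, update map, weigh exploration against the return trip) mirrors the task described above.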
POTENTIAL NASA COMMERCIAL APPLICATIONS: Technology developed in this project will have a transformative impact on space exploration and directly supports the key attributes of autonomy for NASA missions, stated in the OCT roadmap for Robotics, Tele-Robotics and Autonomous Systems (TA04) as “the ability for complex decision making, including autonomous mission execution and planning, the ability to self-adapt as the environment in which the system is operating changes, and the ability to understand system state and react accordingly.” This work addresses two space technology grand challenges that aim to enable transformational space exploration and scientific discovery: all-access mobility and surviving extreme space environments. Development of a biologically inspired, robust, low-power, multi-component brain system able to perform self-localization and mapping will enable robots to autonomously navigate novel terrains without the need for GNSS. By learning about an environment as it explores, a robotic agent will be able to autonomously negotiate novel terrain and send relevant, intelligently preprocessed information back to a human controller. Lastly, incorporating high-level decision making and conflict resolution will allow the robot to decide between exploring its environment and returning to home base for a battery recharge.
POTENTIAL NON-NASA COMMERCIAL APPLICATIONS: One of the fundamental challenges of modern robotics is to build autonomous systems that can explore their environment and act upon choices in an intelligent way. The simultaneous localization and mapping (SLAM) problem exemplifies one such challenge. Current industry and academic solutions to the SLAM problem rely on the accuracy of expensive, precise, and power-hungry sensors that are highly sensitive to noise and to the complexity of real-world environments. The technology proposed herein mimics an animal’s ability to solve the SLAM problem with noisy sensors and without the need for GNSS. Applications of this new technology include guidance systems for:
– Robots navigating in GNSS-denied environments, such as collapsed buildings in disaster areas (e.g., earthquakes, nuclear power plants);
– Robots for surveillance and scouting of indoor environments, such as urban war zones;
– Microrobots for medical diagnosis; and
– Robots for deep-ocean exploration.
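The intuition behind animal-style SLAM, noisy path integration corrected by familiar landmarks rather than GNSS, can be illustrated with a tiny one-dimensional sketch. Everything here (function name, noise model, blending rule) is a hypothetical illustration, not the proposed neural system:

```python
import random

def estimate_position(moves, landmarks, noise=0.1, blend=0.8, seed=0):
    """Track position from noisy odometry; when the agent passes a
    landmark at a known location, blend the estimate toward it.
    This mimics an animal resetting path integration at familiar cues."""
    rng = random.Random(seed)
    true_pos, est = 0.0, 0.0
    for step in moves:
        true_pos += step
        est += step + rng.uniform(-noise, noise)  # noisy odometry
        for lm in landmarks:
            if abs(true_pos - lm) < 0.05:         # landmark re-observed
                est = blend * lm + (1 - blend) * est
    return true_pos, est
```

Each landmark fix pulls the accumulated odometry error back toward a known location, which is why the position estimate stays bounded even though per-step sensor noise never stops accumulating.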