How should a robot see?
Robots do not see like humans. Humans do not process every detail of an entire image the way today's computers do: they rapidly move their eyes ("saccade") to fixate on items of interest, while most computer-based image processing systems treat the entire image at equal resolution, as if every pixel counted the same.
Thanks to the Neurala Intelligence Engine, today's robots can achieve human-grade vision in an inexpensive, software-only package that leverages the power of Graphics Processing Units (GPUs).
The Neurala Intelligence Engine includes a brain-inspired neural model controlling an active visual system that saccades, or moves, a robot's camera eyes to create a new type of efficient object detection and vision system. The goal is to make image processing more efficient and the identification of critical objects faster. The work is partially funded by NASA for planetary exploration, where processing and battery efficiency are critical, and by the Air Force Research Lab, where efficient sensemaking is key to interpreting large volumes of real-time data. But it has general application to all types of vision systems.
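The saccade-and-attend idea above can be illustrated with a minimal sketch. Neurala's actual neural model is proprietary; here a simple gradient-magnitude saliency map stands in for it, and the `next_saccade` and `foveate` helpers are hypothetical names used only for this example. The point is the efficiency argument: the system fixates the most salient location and processes a small window there, rather than every pixel at full resolution.

```python
import numpy as np

def saliency_map(image):
    # Toy saliency: local gradient magnitude. A stand-in for the
    # brain-inspired neural model, which this sketch does not reproduce.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def next_saccade(image):
    # The fixation target for the next camera movement: the most
    # salient pixel in the current frame.
    s = saliency_map(image)
    return np.unravel_index(np.argmax(s), s.shape)

def foveate(image, center, radius=8):
    # Process only a small high-resolution window around the fixation
    # point, instead of the whole frame at equal resolution.
    r, c = center
    r0, r1 = max(r - radius, 0), min(r + radius + 1, image.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, image.shape[1])
    return image[r0:r1, c0:c1]

# Synthetic frame: flat background with one bright "object".
frame = np.zeros((64, 64))
frame[40:44, 20:24] = 1.0

fixation = next_saccade(frame)   # lands on the object's boundary
patch = foveate(frame, fixation) # small window containing the object
```

The window around the fixation point is a tiny fraction of the frame, which is why an attentive system can save both computation and battery compared with uniform full-image processing.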
The following videos demonstrate how the system works on a pan-tilt camera mounted on a robot controlled by an artificial brain, and show how objects in the image are attended to and separated in real time on a mobile robotic platform.