Neurala Wins 2017 TechConnect DEFENSE Innovation Award

Neurala was recently honored to receive the 2017 TechConnect Defense Innovation Award at the Defense Innovation Summit (DITAC) in Tampa, Florida this October for its advancement of the Neurala Brain. The award highlights the potential positive impact Neurala technology can have on national security, and recognizes the pace at which enhanced artificial intelligence is reshaping the defense landscape today. The Neurala Brain enables enhanced automation for unmanned systems and data analysis. Equipped with it, UASs and UGSs can sense and avoid obstacles, do not require full data pre-programming, can learn on the fly, and can operate in GPS-denied environments while still knowing their location. The Neurala Brain can also play a role in the counter-UAS mission or provide a priori situational awareness for the warfighter before entering a new environment (tunnels, booby-trapped houses, etc.). Enhanced data analysis can leverage the Neurala Brain's ability to learn on the fly (commonly called "one-shot" learning) to support analysts in producing actionable intelligence.

Image: 2017 TechConnect Defense Innovation Award

Much of the technology behind the Neurala Brain was developed through collaborations between industry and academia (the founders of Neurala were Boston University faculty during Neurala's nascent stages), working with government agencies including NASA, the USAF and DARPA. These collaborations all required new levels of intelligence on limited hardware. NASA, for instance, required an advanced navigation system that could perform accurate simultaneous localization and mapping (SLAM) on a Mars rover with only passive camera input, no human feedback, limited compute resources and tight power constraints. State-of-the-art SLAM algorithms of that time could not handle such low-compute, low-power conditions. Moreover, one of the critical mechanisms for creating accurate maps -- detecting key landmarks -- was especially challenging in the Martian environment, as much of the Martian surface is remarkably homogeneous.

The solution was to mimic the human brain, which operates on a measly 20 watts and is able to learn new objects almost instantaneously using high-level features learned on the fly. At the time, neural networks and deep learning were not vogue terms -- indeed, neural networks had been a largely dormant field for close to 20 years, and deep networks were effectively untrainable (a situation remedied around 2012 by the advent of large datasets containing millions of manually labeled images). Yet while classical neural networks (a field rooted mainly in statistics and artificial intelligence) had stagnated, computational neuroscience (a field at the intersection of neuroscience and computer science) was advancing rapidly. Over the past two decades, great strides have been made in understanding how the brain processes visual information and creates rapid, short-term memories. The solution delivered to NASA codified multiple advances in computational neuroscience into a single integrated framework that could handle such a complex task.
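To make the idea of learning a new object from a single example concrete, the sketch below shows a minimal, illustrative version of one-shot learning: each class is stored as a single prototype vector of high-level features, and recognition is nearest-prototype matching. This is not Neurala's implementation; the class name and the extract_features helper are hypothetical stand-ins for a fixed feature extractor.

```python
import numpy as np

class OneShotPrototypeClassifier:
    """Toy one-shot learner: one stored prototype per class,
    classification by nearest prototype (cosine similarity)."""

    def __init__(self):
        self.prototypes = {}  # label -> unit-normalized feature vector

    def learn(self, label, features):
        # "On-the-fly" learning: a single example is enough to add a class.
        v = np.asarray(features, dtype=float)
        self.prototypes[label] = v / (np.linalg.norm(v) + 1e-12)

    def predict(self, features):
        if not self.prototypes:
            raise ValueError("no classes learned yet")
        v = np.asarray(features, dtype=float)
        v = v / (np.linalg.norm(v) + 1e-12)
        # Highest cosine similarity to a stored prototype wins.
        return max(self.prototypes, key=lambda lbl: float(v @ self.prototypes[lbl]))


# Hypothetical usage, where extract_features stands in for a fixed
# high-level feature extractor (e.g. an intermediate network layer):
#   clf = OneShotPrototypeClassifier()
#   clf.learn("rock_outcrop", extract_features(image_a))
#   clf.learn("crater_rim", extract_features(image_b))
#   print(clf.predict(extract_features(new_image)))
```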

Of note was the integration of a new model of human hippocampal memory consolidation, giving the artificial brain the ability to implant novel objects into memory instantaneously and then transfer knowledge of those objects into deep storage slowly over time. Details of this process were recently presented at GTC San Jose. This instantaneous learning used novel high-level features to solve the problem of learning and discriminating Martian landmarks within a low-power neural framework that met NASA's needs.
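As a rough illustration of that fast-then-slow idea (only an analogy, not the hippocampal model itself; the class and method names below are hypothetical), one can picture a two-store memory: new items land in a fast exemplar store immediately, and a periodic consolidation step replays them into a slowly updated long-term store.

```python
import numpy as np

class FastSlowMemory:
    """Toy two-store memory, loosely inspired by hippocampal consolidation:
    new items are usable immediately from a fast store, then replayed into
    a slowly updated long-term store."""

    def __init__(self, consolidation_rate=0.1):
        self.fast = []    # recent (label, feature-vector) exemplars
        self.slow = {}    # label -> slowly updated prototype
        self.rate = consolidation_rate

    def learn_instantly(self, label, features):
        # One-shot: the new object can be recalled right away.
        self.fast.append((label, np.asarray(features, dtype=float)))

    def consolidate(self, steps=10):
        # Gradually "replay" fast memories into long-term storage.
        for _ in range(steps):
            for label, v in self.fast:
                proto = self.slow.get(label, np.zeros_like(v))
                self.slow[label] = (1 - self.rate) * proto + self.rate * v
        self.fast.clear()

    def recall(self, features):
        v = np.asarray(features, dtype=float)
        candidates = list(self.slow.items()) + self.fast
        if not candidates:
            return None
        # Nearest stored representation wins, fast or consolidated.
        return min(candidates, key=lambda item: np.linalg.norm(item[1] - v))[0]
```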

It turns out that this same technology fits squarely within the deep learning paradigm that has exploded over the last five years (indeed, some of the deep learning methods so popular today were part of the NASA solution years earlier). The resulting on-edge technology -- edge learning inside a lightweight yet powerful deep learning framework -- is well suited to automating unmanned vehicles (both in the air and on the ground), enabling collision avoidance and mapping, and to enhancing user interaction, allowing machines to learn specific users on the fly and carry out user-specific commands.

Deep learning and, more pertinently, edge learning are ripe to disrupt numerous industries over the next five years. Neurala is at the forefront of this technological trend, and is honored that this capability has been recognized as one that will be a critical component of national security in the coming years.

About Jeremy Wurbs

Dr. Jeremy Wurbs is a lead research scientist at Neurala, specializing in neural principles and architectures for enhanced processing on embedded devices. Before joining Neurala, he worked with NASA to pioneer autonomous drone flight, using novel deep learning methods to enable continued onboard learning during flight. Since joining Neurala he has worked to integrate and develop new technologies that enable tracking, detecting, classifying and segmenting objects on embedded platforms, as well as to enhance Neurala's proprietary on-edge learning technology suite. He has spoken about AI at many academic and industry events, including those hosted by NASA, the Vision Sciences Society, the International Conference on Cognitive and Neural Systems, AUVSI, the Defense Innovation Summit and MassTLC. Jeremy holds a Ph.D. in Cognitive and Neural Systems from Boston University.