Living at the Edge: edge-learning-enabled AI

In the past, the only way to train an AI (a deep neural network, or DNN) was to load a large number of images, train the network on a server, and then deploy it on a compute edge. The resulting DNN was ‘fixed’. If the network misclassified an orange as an apple, for instance, there was nothing to be done other than retrain the DNN on the server with an augmented dataset that distinguished the two objects. Neurala has changed all of that.

Enter the era of edge learning

DNNs are very powerful, but prone to failure when they encounter real-world scenarios that expose them to data they have not been trained on. Collecting larger and larger datasets helps make DNNs more accurate, but there are always cases where data is either not collected or simply does not exist at the moment the DNN is created.


This problem is compounded by the fact that DNNs are not trained and deployed on the same hardware. Often a DNN is trained on a server and then deployed on a smaller, less powerful compute edge that, until recent innovations, was incapable of learning new information.


To understand how this works in practice, take an industrial IoT example: a quality control DNN devoted to classifying ‘good’ vs. ‘bad’ products in a manufacturing setting. It is very hard to pre-train the DNN to recognize every possible product coming off a production line, especially as new items are engineered and pushed into production. The DNN should be able to learn to recognize a new product in seconds, at the compute edge, directly on the production line, without having to go back to a server, which may be unavailable.
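To make this concrete, here is a minimal, generic sketch of on-device incremental learning: a pretrained, frozen feature extractor paired with a lightweight nearest-prototype head that can absorb a new product class from a handful of examples. This illustrates the general technique only, not Neurala’s proprietary algorithm, and the names in it (EdgeClassifier, feature_extractor, and so on) are hypothetical.

```python
# Generic sketch of incremental learning at the edge (not Neurala's algorithm):
# a frozen feature extractor plus a nearest-class-prototype head.
import numpy as np

class EdgeClassifier:
    def __init__(self, feature_extractor):
        self.extract = feature_extractor   # pretrained, frozen backbone (assumed callable)
        self.prototypes = {}               # class label -> mean feature vector

    def learn(self, images, label):
        """Add or update a class from a few examples, directly on the device."""
        feats = np.stack([self.extract(img) for img in images])
        self.prototypes[label] = feats.mean(axis=0)

    def predict(self, image):
        """Classify by cosine similarity to the nearest class prototype."""
        f = self.extract(image)
        f = f / np.linalg.norm(f)
        best_label, best_sim = None, -1.0
        for label, proto in self.prototypes.items():
            p = proto / np.linalg.norm(proto)
            sim = float(f @ p)
            if sim > best_sim:
                best_label, best_sim = label, sim
        return best_label, best_sim

# On the production line: a few snapshots of the new product are enough to
# register it, with no round trip to a training server.
# classifier.learn(new_product_images, label="widget_v2")
```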


This is not just true in manufacturing. In retail and logistics applications, for instance, new products are constantly being introduced in supermarkets, shipping facilities, and warehouses. Because this happens continuously, the underlying AI needs the same continuous tweaking and adaptation. While many approaches tackle inference at the edge, until Neurala there was no practical solution to this problem. In real-world scenarios, users need AI that also learns at the edge.


Meet Brain Builder: the first edge-learning-enabled platform for custom vision AI


Brain Builder is the first commercially available AI software that enables the creation of custom vision applications that learn directly on the compute edge. How does it work? Enterprises that need to build custom visual AI solutions can upload data to the Brain Builder SaaS platform, train a model in minutes, and then deploy it onto the edge using the Brain Builder SDK. This means you can have a locally customized AI algorithm on almost any kind of device. The learning can be either supervised or unsupervised; here at Neurala, we’re advocates of “human-in-the-loop” training to ensure you get the full benefit of customizing your model. The resulting model can detect object types, or it can detect anomalies. The custom vision model can either start out empty (relying only on edge learning) or be downloaded from Brain Builder and supplemented with local, edge learning.
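As an illustration of the unsupervised, anomaly-detection mode, the sketch below builds a profile of ‘normal’ product images from a frozen feature extractor and flags frames that deviate from it. It is a generic example of the idea, with hypothetical names and an assumed z-score threshold, not the actual Brain Builder implementation.

```python
# Generic sketch of unsupervised anomaly detection at the edge
# (illustrative only, not the Brain Builder implementation).
import numpy as np

class EdgeAnomalyDetector:
    def __init__(self, feature_extractor, threshold=3.0):
        self.extract = feature_extractor   # pretrained, frozen backbone (assumed callable)
        self.threshold = threshold         # z-score cut-off (assumed value)
        self.mean = None
        self.std = None

    def fit(self, normal_images):
        """Unsupervised set-up: learn what 'normal' looks like from good parts only."""
        feats = np.stack([self.extract(img) for img in normal_images])
        self.mean = feats.mean(axis=0)
        self.std = feats.std(axis=0) + 1e-8

    def is_anomaly(self, image):
        """Flag frames whose features deviate strongly from the normal profile."""
        z = np.abs((self.extract(image) - self.mean) / self.std)
        return float(z.mean()) > self.threshold
```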


The Brain Builder SDK makes the custom vision models developed in Brain Builder portable from one OS, CPU, or GPU type to another, and its efficiency makes it blisteringly fast on any edge. Use cases for this technology range from smart cameras in consumer and retail applications to manufacturing environments where internet connectivity is unavailable or not allowed, and where users can customize the custom vision model directly on the edge to meet tough operating requirements.
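Brain Builder’s portability layer lives inside the SDK, but as a rough analogy, a generic cross-platform runtime such as ONNX Runtime shows how a single exported model can run on either GPU or CPU simply by switching execution providers; the model file name and dummy input below are placeholders, not Brain Builder artifacts.

```python
# Analogy only: running one exported model on GPU or CPU via ONNX Runtime.
# "model.onnx" and the dummy input are placeholders.
import numpy as np
import onnxruntime as ort

# Prefer the GPU provider when available; otherwise fall back to CPU.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy_frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # assumed input shape
outputs = session.run(None, {input_name: dummy_frame})
```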

Curious? Try it for yourself. Download Neurala’s app on iOS or Android today.