Speeding Up Neurala's Brain Builder With AI Containers From NVIDIA NGC

By Lucas Neves, Senior Software Engineer at Neurala

We’re in the midst of the AI Revolution, but AI that relies on visual data is still out of reach for many businesses. The process of developing a custom AI solution, tailored to the needs of an individual business, is too costly and time-consuming for all but the largest of enterprises.

And while experienced AI data scientists train deep neural networks (DNNs) on many platforms, most of those platforms suffer from the same challenges: the expertise required to build and train a DNN, time-consuming neural network training, and a long, iterative process to improve the network's prediction performance.

Introducing Brain Builder

To help businesses develop custom vision AI solutions quickly, without breaking the bank, we developed Brain Builder, a cloud platform that gives data scientists and developers who are new to deep learning the ability to quickly and easily train neural networks.

For example, unlike consumer AI applications, industrial and manufacturing defect-detection use cases aren't built on large, readily available datasets in the public domain. Instead, they require custom vision AI that can be built on smaller, unique datasets that skew heavily toward examples of good products, not the defects that quality control teams are trying to identify.

A Different Kind of DNN

Unlike other AI platforms, Brain Builder is built on Lifelong-DNN™, a different type of deep neural network that can learn incrementally, one image at a time, requiring less training data than traditional AI.

With Brain Builder, enterprises tackling AI implementation don't need to be experts or know the ins and outs of DNN architectures, frameworks, and models. They benefit from real-time feedback during incremental learning, which gives them immediate results as they train the network. Additionally, training can be done quickly and incrementally, one image at a time, as needed.
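
To make the idea of incremental learning concrete, here is a minimal sketch of the general pattern of updating a model one labeled example at a time, using scikit-learn's partial_fit as an analogy. This is not Neurala's Lifelong-DNN or Brain Builder's API; the feature stream is a synthetic stand-in.

    # Illustrative analogy only: incremental, one-example-at-a-time learning.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()        # any estimator that supports partial_fit
    classes = [0, 1]               # e.g. "good product" vs. "defect"

    def labeled_image_stream(n=100):
        # Stand-in for images arriving one at a time from an annotation tool.
        rng = np.random.default_rng(0)
        for _ in range(n):
            features = rng.random(64)            # fake feature vector for one image
            label = int(features.mean() > 0.5)   # fake label
            yield features, label

    for features, label in labeled_image_stream():
        model.partial_fit([features], [label], classes=classes)
        # The model is usable immediately after every update, which is what
        # makes real-time feedback during training possible.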

With these technological advantages, Brain Builder provides a quick, easy user experience for anyone with a business challenge that can be tackled with an AI vision system.

I Feel the Need, The Need for Speed

Although Brain Builder is optimized to be fast and efficient, we were able to achieve even more speed by leveraging NVIDIA GPUs, using highly optimized deep learning framework containers from NGC.

One of Brain Builder’s many features is AI-assisted video annotation. With this tool, a user can annotate a single video frame, and the AI Annotator can then learn from that frame and apply the annotation to the rest of the video.
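
The workflow can be summarized in a short sketch: learn from the one frame the user annotated, then propagate predicted annotations to every remaining frame. The AnnotationModel interface below (learn, predict) is a hypothetical stand-in, not Brain Builder's actual API; OpenCV is used only to read frames.

    # Rough sketch of the AI-assisted annotation workflow described above.
    import cv2

    def annotate_video(video_path, first_frame_mask, model):
        """Learn from one hand-annotated frame, then propagate to the rest."""
        capture = cv2.VideoCapture(video_path)
        ok, first_frame = capture.read()
        if not ok:
            raise IOError("could not read video: " + video_path)

        # Step 1: learn from the single frame the user annotated by hand.
        model.learn(first_frame, first_frame_mask)

        # Step 2: infer annotations for every remaining frame.
        predicted_masks = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            predicted_masks.append(model.predict(frame))

        capture.release()
        return predicted_masks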

AI Annotation saves time and money, but it requires hardware that can handle complex computational needs, reliably and at speed. As a result, we evaluated a number of processing options using a variety of DNN frameworks for comparison and testing.


Figure 1. Time (ms) for the AI Annotator to learn (Training) or analyze (Inference) a single frame from a video, before use of NGC containers, on 1x V100 on an AWS p3.2xlarge EC2 instance.

Before NGC, Caffe-based models ran at 200ms per video frame (or about 5 frames per second). TensorFlow models were significantly slower, requiring three seconds to learn one frame and one second for inference on each frame. These training and inference times were simply too slow to provide value.
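
For context, per-frame timings like those in Figure 1 can be gathered with a simple loop around one training step and one inference pass. The learn and predict calls below are hypothetical stand-ins for a framework-specific training step and forward pass, not our actual benchmark harness.

    # Minimal sketch of per-frame timing measurement.
    import time

    def time_per_frame(model, frames, masks):
        train_ms, infer_ms = [], []
        for frame, mask in zip(frames, masks):
            start = time.perf_counter()
            model.learn(frame, mask)                        # one training step
            train_ms.append((time.perf_counter() - start) * 1000)

            start = time.perf_counter()
            model.predict(frame)                            # one inference pass
            infer_ms.append((time.perf_counter() - start) * 1000)

        return sum(train_ms) / len(train_ms), sum(infer_ms) / len(infer_ms)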

Enter NGC

NGC came out of the box with a number of features that made it an ideal tool for us to work with, including:

  • Highly optimized NVCaffe and TensorFlow containers for Volta architectures
  • TensorRT integration, multi-GPU training support, and native FP16 support on compatible GPUs
  • Quick and easy setup with a simple docker pull command (see the example commands below)
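
For reference, pulling and running an NGC framework container looks roughly like the following. The version tags are examples only; the exact tags should come from the NGC catalog, and logging in requires a free NGC account and API key.

    # Log in to the NGC registry, then pull the framework containers.
    docker login nvcr.io
    docker pull nvcr.io/nvidia/tensorflow:19.05-py3
    docker pull nvcr.io/nvidia/caffe:19.05-py3

    # Run a container with GPU access (nvidia-docker2 / --runtime=nvidia shown;
    # newer Docker releases use --gpus all instead).
    docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:19.05-py3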

We deployed the Brain Builder AI Annotator, using NGC containers, and measured the results.


Figure 2. Time (ms) for the AI Annotator to learn (Training) or analyze (Inference) a single frame from a video, before and after use of NGC containers, on 1x V100 on an AWS p3.2xlarge EC2 instance.

Working in NGC containers, both training and inference sped up significantly. Caffe models ran in a quarter of the time previously measured, while TensorFlow models completed training in a quarter of the time and inference in an eighth of the previous time.

With Brain Builder running in NGC containers on NVIDIA GPUs, our customers can complete their work and build custom AI solutions significantly faster than would otherwise be possible.

Benefits Beyond Performance

NGC containers have also helped us speed up our development time. To increase reliability, we now develop and test our networks on multiple frameworks. In the past, that would have required a complicated and time-consuming installation, and all of that plumbing work would have put Brain Builder's delivery dates at risk. Thanks to NGC, we were able to focus on building and delivering the solution on time.

As a result of our experiments with NGC for the Brain Builder project, and considering the benefits it has delivered, we’ve switched to NGC containers for most of our projects.

View a video of Neurala’s work with NVIDIA here and sign up for Brain Builder here.