Humans excel at cognitive tasks such as recognizing faces, tracking vehicle lanes, or separating speech from background noise. This is possible because the brain’s neural networks learn to analyze and interpret the relevant visual and audio cues.
Creating artificially intelligent machines with the same abilities is challenging but important in applications such as automotive safety, surveillance, and security. Meeting this challenge requires accelerating the deployment of machine learning in designs based on convolutional neural networks.
One solution is to pair a dedicated low-power AI processor for deep learning at the edge with a deep neural network (DNN) graph compiler that:
- Automatically quantizes and converts networks for use in real-time embedded devices, significantly reducing time-to-market
- Ensures operation with minimal power and memory-bandwidth overhead in embedded systems
- Delivers superior performance while retaining the flexibility to stay up-to-date with the latest technology in the constantly evolving domain of machine learning
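The quantization mentioned in the first bullet can be illustrated with a minimal sketch. The example below shows symmetric per-tensor int8 post-training quantization, one common scheme; the function names and the specific mapping are illustrative assumptions, not the compiler's actual algorithm:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 using a single symmetric scale.

    Illustrative only: real graph compilers typically use per-channel
    scales and calibration data to pick quantization ranges.
    """
    scale = np.max(np.abs(weights)) / 127.0  # assumes weights are not all zero
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float32 weights
    return q.astype(np.float32) * scale

# Quantize a small random weight tensor and check the reconstruction error
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_error = np.max(np.abs(w - w_hat))  # bounded by half the scale step
```

Storing weights as int8 instead of float32 cuts memory footprint and bandwidth by roughly 4x, which is why quantization is central to low-power embedded inference.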