Humans excel at cognitive processing, for example, recognizing faces, tracking vehicle lanes, or separating human speech from background noise. This is possible because the brain's neural networks learn how to analyze and interpret important visual and audio cues.
Creating artificially intelligent machines with the same abilities is challenging but important in applications such as automotive safety, surveillance, and security. Accelerating the deployment of machine learning in convolutional neural network (CNN)-based designs is critical to meeting this challenge.
One solution lies in developing a dedicated low-power AI processor family for deep learning at the edge, together with a deep neural network (DNN) software compiler that:
- Automatically converts networks for deployment on real-time embedded devices, significantly reducing time-to-market
- Ensures operation with minimal power and memory-bandwidth overhead in embedded systems
- Delivers high performance while retaining the flexibility to keep pace with the constantly evolving field of machine learning
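One common step such a compiler performs when converting a trained network for an embedded target is post-training quantization: replacing 32-bit floating-point weights with 8-bit integers plus a scale factor, which cuts weight storage and memory bandwidth by 4x. The sketch below is a minimal NumPy illustration of symmetric per-tensor int8 quantization, not any particular vendor's toolchain; the function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights to int8
    values plus a single float scale factor (illustrative sketch)."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32
print(w.nbytes // q.nbytes)  # 4
# rounding error is bounded by half a quantization step
print(np.max(np.abs(w - dequantize(q, scale))) <= scale / 2)  # True
```

In a real edge compiler this conversion is combined with calibration data, per-channel scales, and operator fusion, but the basic trade of precision for bandwidth and power is the same.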