NeuPro-M™: an NPU IP family for generative and classic AI, combining high power efficiency with a scalable, future-proof architecture. NeuPro-M™ redefines high-performance AI (Artificial Intelligence) processing for...
Harnessing neural networks to accelerate Edge AI
Humans excel at cognitive tasks such as recognizing faces, tracking road lanes, or separating speech from background noise. This is possible because the brain’s neural networks learn to analyze and interpret the important visual and audio cues.
Creating artificially intelligent machines with the same abilities is challenging but important in applications such as automotive safety, surveillance, and security. Accelerating edge AI deployment in neural network-based designs is critical to addressing this challenge.
One solution is a dedicated low-power AI processor for deep learning at the edge, paired with a deep neural network (DNN) graph compiler that:
- Automatically quantizes and converts models for use in real-time Edge AI devices, significantly reducing time-to-market
- Ensures operation with minimal power and memory-bandwidth overhead in embedded systems
- Delivers superior performance while retaining the flexibility to keep pace with the constantly evolving domain of embedded machine learning
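To give a sense of what the automatic quantization step involves, the sketch below shows generic symmetric int8 post-training quantization of a weight tensor. This is an illustrative example only, not the NeuPro-M graph compiler's actual algorithm; all function names are hypothetical.

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float32 weights to int8 using one per-tensor scale (illustrative)."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight tensor and measure reconstruction error
w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by scale / 2 (rounding)
```

Storing int8 weights instead of float32 cuts model size and memory bandwidth by roughly 4x, which is why quantization is central to fitting DNNs into power-constrained edge devices.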
Security and Surveillance