Overview

The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully-optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management, and fully optimized CNN and RNN compute libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.

The CDNN compiler enables an extremely simple and streamlined transition of existing deep neural networks to an embedded environment, while the NeuPro AI processor ensures superior performance with minimal power consumption. Separately, each component of the CDNN compiler is a powerful enabler of embedded imaging and vision applications. Combined, these pieces deliver an ultimate toolkit to support new network structures and changing layer types of deep neural networks.

CEVA supplies a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network.

Benefits

The CDNN compiler streamlines implementations of deep learning in embedded systems by automatically quantizing and optimizing offline pre-trained neural networks into real-time, embedded-ready networks for CEVA-XM cores, CEVA-NeuPro AI processors, and customer neural network engines. This enables real-time, high-quality image classification, object recognition, and segmentation, significantly reducing time-to-market for running low-power machine learning in embedded systems.

  • Automatic quantization and conversion to embedded-ready networks (see the quantization sketch after this list)
  • Greatly reduces memory bandwidth for any network via various mechanisms, including layer fusion and compression
  • Enables heterogeneous computing architectures, and optimizes for and enables seamless utilization of custom AI engines
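To ground the quantization bullet above, here is a minimal C sketch of the simplest form of post-training quantization (symmetric, per-tensor int8). It illustrates the general principle only; CDNN's advanced quantization algorithms are not published here, and this code is illustrative rather than CEVA's.

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Symmetric, per-tensor int8 quantization: map a float tensor onto
     * int8 using a single scale derived from the observed value range.
     * This is the textbook baseline, not CEVA's production quantizer. */
    static void quantize_int8(const float *src, int8_t *dst, size_t n,
                              float *scale_out)
    {
        float max_abs = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            float a = fabsf(src[i]);
            if (a > max_abs)
                max_abs = a;
        }
        float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
        for (size_t i = 0; i < n; ++i) {
            long r = lrintf(src[i] / scale);   /* round to nearest integer */
            if (r >  127) r =  127;            /* clamp to the int8 range  */
            if (r < -128) r = -128;
            dst[i] = (int8_t)r;
        }
        *scale_out = scale;  /* kept so outputs can be dequantized later */
    }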

Main Features

  • CDNN Compiler converts pre-trained neural network models and weights from offline training frameworks (such as Caffe or TensorFlow) to real-time network models
  • CDNN Run-Time software accelerates deployment of machine learning in low-power embedded processors
  • CDNN-Invite API enables seamless incorporation and usage of custom AI engines within the CDNN framework (a registration sketch follows this list)
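As a rough illustration of the plug-in pattern behind CDNN-Invite, the sketch below registers a custom engine through an invented callback interface. Every identifier here is hypothetical; it does not reproduce CEVA's actual CDNN-Invite API, only the general shape of such an integration.

    #include <stddef.h>

    /* Hypothetical sketch: all names below are invented for illustration
     * and are not the real CDNN-Invite API. The pattern -- registering
     * callbacks so the framework can dispatch layers to a custom engine --
     * is the general idea behind this kind of plug-in interface. */
    typedef struct {
        int (*query_supported_layer)(int layer_type);          /* claim layer types */
        int (*execute_layer)(const void *params,
                             const void *input, void *output); /* run one layer     */
    } custom_engine_ops_t;

    static int register_engine(const char *name, const custom_engine_ops_t *ops)
    {
        (void)name; (void)ops;  /* stub: a real framework would store the hooks */
        return 0;
    }

    static int my_npu_query(int layer_type)
    {
        return layer_type == 3;  /* e.g. claim only convolution layers */
    }

    static int my_npu_exec(const void *p, const void *in, void *out)
    {
        (void)p; (void)in; (void)out;  /* stub: drive the proprietary NPU here */
        return 0;
    }

    int main(void)
    {
        static const custom_engine_ops_t my_npu_ops = {
            .query_supported_layer = my_npu_query,
            .execute_layer         = my_npu_exec,
        };
        /* After registration, supported layers run on the custom engine
         * while the rest stay on the CEVA-XM DSP or NeuPro core. */
        return register_engine("my-npu", &my_npu_ops);
    }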

CEVA CDNN live demonstration

The CEVA Deep Neural Network (CDNN) is a comprehensive inferencing graph compiler that simplifies the development and deployment of deep learning systems for mass-market embedded devices.

CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management, and fully optimized CNN and RNN compute libraries into a holistic solution that enables cloud-trained AI models to be deployed for inference processing on low-power edge devices.

CDNN enables an extremely simple and streamlined transition of existing deep neural networks to power-optimized and memory-constrained embedded environments.

CDNN enables heterogeneous computing and can flexibly split a network across multiple compute engines, such as one or more CEVA-XM DSPs, NeuPro AI processors, and custom AI engines, to ensure superior performance with minimal power consumption.
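As a toy illustration of such a split, the snippet below assigns layers to engines through an invented data structure. The enum, struct, and layer names are hypothetical; CDNN's real partitioning representation is internal and not shown here.

    /* Hypothetical illustration: per-layer engine assignment for a
     * heterogeneous split. None of these names come from CDNN itself. */
    typedef enum { ENGINE_XM_DSP, ENGINE_NEUPRO, ENGINE_CUSTOM } engine_t;

    typedef struct {
        const char *layer;   /* layer name from the trained model    */
        engine_t    target;  /* compute engine chosen for this layer */
    } layer_assignment_t;

    static const layer_assignment_t plan[] = {
        { "conv1", ENGINE_NEUPRO },  /* heavy convolution on the AI core  */
        { "pool1", ENGINE_XM_DSP },  /* lighter op on the vision DSP      */
        { "rnn1",  ENGINE_CUSTOM },  /* proprietary layer on a custom NPU */
    };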

CDNN is composed of an offline compiler and run-time components. While the CDNN compiler quantizes and optimizes the network, the CDNN run-time executes it in real time on the various processors. Combined, these pieces deliver an ultimate toolkit that supports a wide variety of network structures today and is robust enough to support the changing layer types and structures of deep neural networks going forward.
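To make the offline/run-time split concrete, here is a minimal sketch of the kind of flow the run-time side handles. The identifiers (network_load, network_run, and so on) are invented stand-ins; the real CDNN run-time API is not reproduced here.

    #include <stdlib.h>

    /* Hypothetical sketch: these names are invented and mirror only the
     * flow the text describes -- load the compiler's output once, then
     * run real-time inference per frame on the target processor. */
    typedef struct {
        const char *path;  /* compiled, quantized model from the offline step */
    } network_t;

    static network_t *network_load(const char *compiled_model_path)
    {
        network_t *net = malloc(sizeof(*net));
        if (net)
            net->path = compiled_model_path;  /* stub: real code maps buffers */
        return net;
    }

    static int network_run(network_t *net, const void *frame, void *out)
    {
        (void)net; (void)frame; (void)out;    /* stub: real code runs layers  */
        return 0;
    }

    static void network_free(network_t *net) { free(net); }

    int run_pipeline(const void *frames[], void *results[], int n_frames)
    {
        /* Load once: the heavy optimization work already happened offline. */
        network_t *net = network_load("model_quantized.bin");
        if (!net)
            return -1;
        for (int i = 0; i < n_frames; ++i)
            network_run(net, frames[i], results[i]);  /* real-time inference */
        network_free(net);
        return 0;
    }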

CEVA supplies a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures as well as custom neural network engines via the CDNN-Invite API. These enable the development of deep learning applications using the CDNN, targeting any advanced network.