Today is an important milestone for CEVA’s Imaging & Vision product line, as we are announcing a software framework for Deep Learning called CDNN (CEVA Deep Neural Network). The main idea behind this software framework is to enable easy migration of pre-trained Deep Learning networks into real-time embedded devices, so that they run efficiently and at low power on the CEVA-XM4 Vision DSP. These technologies enable a variety of object recognition and scene recognition algorithms, which could be used in the future for applications such as automotive advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR) and virtual reality (VR).
Availability of Universal Object Detector (UOD) Algorithm
As a complementary technology to CDNN, we are also announcing availability of a Universal Object Detector (UOD) algorithm from our new CEVAnet partner, Phi Algorithm Solutions. They have used CDNN to implement a CNN-based Universal Object Detector algorithm for the CEVA-XM4 DSP. It is now available for application developers and OEMs to run a variety of applications, including pedestrian detection and face detection for security, ADAS and other embedded devices built around low-power, camera-enabled systems.
Real-Time Object Recognition and Vision Analytics
The CDNN software framework in conjunction with the CEVA-XM4 imaging and vision DSP enables:
- Real-time object recognition and vision analytics
- Lowest-power deep learning solution for embedded systems: 30x lower power and 3x faster processing compared to leading GPU-based systems, benchmarked on AlexNet, one of the most widely used deep neural networks
- 15x average memory bandwidth reduction compared to typical neural network implementations
- Automatic conversion from offline pre-trained networks to real-time embedded-ready networks
- Flexibility to support various neural network structures, including any number and type of layers
In a representative application using this technology, a Deep Neural Network-based pedestrian detection algorithm running on the CEVA-XM4 DSP at a 28nm process node requires less than 30mW for a 1080p, 30 frames-per-second video stream.
Faster Network Model with CEVA Network Generator
Key to the performance, low power and low memory bandwidth capabilities of CDNN is the CEVA Network Generator, a proprietary automated technology that converts a pre-trained network structure and weights to a slim, customized network model used in real-time. This enables a faster network model which consumes significantly lower power and memory bandwidth, with less than 1% degradation in accuracy compared to the original network. Once the customized embedded-ready network is generated, it runs on the CEVA-XM4 imaging and vision DSP using fully optimized convolutional neural network (CNN) layers, software libraries and APIs.
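CEVA's Network Generator itself is proprietary, but the general idea behind shrinking a float-trained network into an embedded-ready model, mapping floating-point weights onto narrow fixed-point values with minimal accuracy loss, can be illustrated with a simple sketch. The function names and the single-scale symmetric scheme below are illustrative assumptions, not CEVA's actual method:

```python
# Hypothetical sketch of post-training weight quantization -- the generic
# technique behind converting a float-trained network into a slim
# fixed-point model for an embedded DSP. Not CEVA's actual algorithm.

def quantize_weights(weights, num_bits=8):
    """Map float weights onto signed fixed-point integers with one shared scale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit signed
    max_abs = max(abs(w) for w in weights)  # symmetric range around zero
    scale = max_abs / qmax if max_abs else 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. to check accuracy degradation."""
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.005, 0.88, -0.42]
q, scale = quantize_weights(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Per-weight error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

In practice a converter like this would be applied per layer, and the resulting integer weights are what let the DSP trade expensive floating-point arithmetic and memory traffic for narrow fixed-point operations, which is consistent with the power and bandwidth savings described above.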
CDNN Supplied as Source Code
The CDNN software framework is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK). It is flexible and modular, capable of supporting either the complete CNN implementation or specific layers. It works with various networks and structures, such as networks developed with Caffe, Torch or Theano training frameworks, or proprietary networks. CDNN also includes real-time example models for image classification, localization and object recognition.
For more details on this product:
- Visit CEVA website
- Join us at the CEVA 2015 TECHNOLOGY SYMPOSIUM – ASIA: On Oct 26, 28 and 30, CEVA will be hosting three one-day events in Shenzhen, Shanghai and Hsinchu, respectively. During these events we will share more information on the CDNN framework. To register click here
- Register for a live webinar: “Deep Learning – Bringing the Benefits into Low-Power Embedded Systems”, Nov 12th: join us to hear about implementing machine learning in embedded systems, including a deep dive into CDNN. To register click here