The CEVA Deep Neural Network (CDNN) is a comprehensive graph compiler that simplifies the development and deployment of deep learning systems on mass-market embedded devices. CDNN combines a broad range of network optimizations, advanced quantization algorithms, data-flow management, and fully optimized support for CNNs, RNNs, and other network types into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing with low compute and memory resources. CDNN enables heterogeneous computing and can flexibly split a network across multiple compute engines, such as SensPro or NeuPro processors and custom AI engines, to ensure superior performance with minimal power consumption.

In this video, we show how the CDNN graph compiler and GUI enable users to quickly configure the CDNN tool and easily analyze their neural networks' performance on any of CEVA's AI processors. The example runs inference of the ssd_mobilenet network on the SP500 DSP, both natively (DSP only) and with a hardware accelerator connected via the CDNN-Invite API for higher performance.