As Artificial Intelligence (AI) marches into almost every aspect of our lives, one of the major challenges is bringing this intelligence to small, low-power devices. This requires embedded platforms that deliver extremely high neural network performance at very low power consumption. However, that alone is not enough.
Machine learning developers need a quick, automated way to convert their pre-trained networks and execute them on such embedded platforms. In this session, we will discuss and demonstrate tools that complete this task within a few minutes, instead of months of hand porting and optimization.
Click here to watch the webinar on demand and to hear about:
- An overview of the leading deep learning frameworks, including Caffe and TensorFlow
- Various neural network topologies, including MIMO, FCN, and MLP
- An overview of the most common neural networks, such as AlexNet, VGG, GoogLeNet, ResNet, and SegNet
- Challenges in porting neural networks to embedded platforms
- CEVA's "push button" approach for converting pre-trained networks into real-time optimized embedded implementations
- The programmer flow for CNN acceleration
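CEVA's conversion tools themselves are proprietary, but the "push button" idea described above can be sketched generically: take a pre-trained floating-point network and automatically produce an embedded-friendly representation, for example by quantizing each layer's weights to 8-bit fixed point. The sketch below is purely illustrative; the `convert` and `quantize_to_int8` names are hypothetical and do not correspond to CDNN's real API:

```python
# Hypothetical sketch of an automated "push button" conversion step.
# None of these names are CEVA's real tools; they only illustrate one
# optimization such converters typically automate: float -> int8 weights.

def quantize_to_int8(weights):
    """Map float weights to 8-bit values plus a per-layer scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def convert(pretrained_weights):
    """One-shot conversion: quantize every layer of a pre-trained network."""
    converted = {}
    for layer, weights in pretrained_weights.items():
        q, scale = quantize_to_int8(weights)
        converted[layer] = {"int8": q, "scale": scale}
    return converted

# Toy "pre-trained" network with two layers of float weights.
model = {"conv1": [0.5, -1.27, 0.01], "fc1": [2.54, -0.3]}
embedded_model = convert(model)
```

The point of the demonstration is that this whole pipeline runs automatically on a real framework model, rather than requiring months of manual porting per network.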
Want to learn more?
- Click here to find out about CDNN2, CEVA Deep Neural Networks, the advanced low-power embedded solution for machine learning
- Watch the CEVA CDNN2 live demonstration
- Read the previous blog post: Open Source Deep Learning Frameworks: Now on Embedded Platforms
- Check out the CEVA-XM4 intelligent vision processor