2018 embedded processor report: Rise of the neural network accelerator

A major focus for IP vendors targeting neural network workloads is flexibility, since requirements are changing rapidly in the evolving AI market. CEVA's recently released NeuPro AI processor architecture is one example: it pairs a fully programmable vector processing unit (VPU) with specialized engines for matrix multiplication and for computing activation, pooling, convolutional, and fully connected neural network layers (Figure 1).
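
To make the split between fixed-function engines and a programmable VPU concrete, here is a minimal, hypothetical sketch of how a layer graph might be dispatched across such a heterogeneous design: common layer types go to dedicated engines, and anything outside that set falls back to the programmable vector unit. All names and interfaces below are illustrative assumptions, not CEVA's actual NeuPro SDK or hardware API.

```python
# Illustrative only: a toy dispatcher showing the general idea of routing
# neural-network layers to fixed-function engines when possible and falling
# back to a programmable vector unit (VPU) otherwise. All identifiers are
# hypothetical and do not reflect any vendor's real toolchain.

from dataclasses import dataclass

# Layer types the fixed-function engines are assumed to accelerate.
ENGINE_SUPPORTED = {"matmul", "conv", "fully_connected", "pooling", "activation"}

@dataclass
class Layer:
    name: str
    kind: str      # e.g. "conv", "pooling", or a custom operator
    params: dict

def dispatch(layers):
    """Assign each layer to the fixed-function 'engine' path or the 'vpu' fallback."""
    plan = []
    for layer in layers:
        target = "engine" if layer.kind in ENGINE_SUPPORTED else "vpu"
        plan.append((layer.name, target))
    return plan

if __name__ == "__main__":
    net = [
        Layer("conv1", "conv", {"kernel": 3, "stride": 1}),
        Layer("pool1", "pooling", {"window": 2}),
        Layer("custom_norm", "my_new_norm", {}),  # unsupported op -> runs on the VPU
        Layer("fc1", "fully_connected", {"units": 128}),
    ]
    for name, target in dispatch(net):
        print(f"{name:12s} -> {target}")
```

The point of the sketch is the design trade-off the article describes: fixed-function engines give efficiency for the layer types that dominate today's networks, while the programmable VPU absorbs new operators that did not exist when the silicon was designed.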

http://www.embedded-computing.com/processing/2018-embedded-processor-report-rise-of-the-neural-network-accelerator