Such neural networks -- convolutional, deep, or otherwise -- have the ability to learn, or be trained, and are useful for a broad range of applications, from recognizing patterns in data to image recognition, to face and gesture recognition, and on to natural language processing. Many of these techniques could become key to the efficient implementation of the Internet of Things, drone deployment, and automotive driver assistance systems, to mention but a few application areas.
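To make the "ability to learn, or be trained" concrete, here is a minimal illustrative sketch: a tiny two-layer network trained with plain gradient descent to recognize the XOR pattern. This is a toy written in NumPy for clarity; it is not representative of any vendor's inference engine or SDK discussed here.

```python
# A minimal trainable neural network: two layers, sigmoid activations,
# learning the XOR pattern by gradient descent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input patterns and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Learned predictions for the four XOR inputs.
print(np.round(out.ravel(), 2))
```

The same forward pass, once trained, is exactly the kind of workload that the GPU and DSP cores discussed below are being pressed into service to accelerate.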
Movidius, a friend of Google, and Qualcomm have both recently made announcements (see Movidius shows neural network stick and Qualcomm offers neural network SDK for Snapdragon processor). Cadence has also made an announcement (see Embedded neural networks: Cadence's latest DSP target) and Ceva has been working on this front for a while (see CEVA invests in gesture recognition software firm). There are also numerous startups getting involved.
It seems that we are on the brink of a neural networking hardware revolution. And such a revolution could yet upset the pecking order in processor architectures. Intel and ARM should be, and I am sure are, keeping a close watch on the developing situation.
However, I contend we are still at an interim stage. In these latest developments the vendors are running neural networks as software on processors that are primarily, or at least partly, optimized for other functions (such as GPUs that render graphics, or general-purpose DSPs). But this interim stage may not last long. Quite soon we may see dedicated neural processing units (NPUs) added to SoCs.
As an example, Qualcomm has not yet included a dedicated neural processing unit (NPU) in its Snapdragon range of processors, but it does offer its Zeroth neural network processing software platform running heterogeneously on the Kryo CPU, Adreno GPU, and Hexagon DSP cores within the Snapdragon 820. If deep learning applications come forward piggybacking on the existing computing resources within