The scaling of CMOS over the past decade has made it feasible to create neuromorphic processors that can perform tasks such as detection, recognition, classification and inference, and recent months have seen a spate of such cores and processors from startup companies. The step up in efficiency set to be demonstrated at next year's ISSCC brings battery operation within the realm of possibility.
Session 14 at ISSCC 2017, comprising eight papers, is devoted to deep learning. According to the preliminary program, ST will open the session with a paper on a 2.9 tera-operations-per-second-per-watt (TOPS/W) deep convolutional neural network SoC in 28nm FDSOI. The circuit operates from a supply voltage of 0.575V and achieves a peak performance of 676GOPS at 200MHz. The SoC integrates a host CPU, an array of 16 DSPs, and a convolutional deep neural network accelerator fed by an on-chip reconfigurable network that reduces on-chip and off-chip memory traffic.
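As a back-of-the-envelope check on those figures, the quoted throughput and efficiency imply a power budget. The sketch below assumes, purely for illustration, that the 676GOPS peak and the 2.9TOPS/W efficiency are reached simultaneously (the paper's peak-efficiency and peak-throughput operating points may in fact differ):

```python
# Illustrative arithmetic only; operating-point assumption is ours,
# not the paper's.
peak_gops = 676          # quoted peak throughput, GOPS
efficiency_topsw = 2.9   # quoted energy efficiency, TOPS/W

# power (W) = throughput (TOPS) / efficiency (TOPS/W)
implied_power_w = (peak_gops / 1000) / efficiency_topsw
print(f"Implied power: {implied_power_w * 1000:.0f} mW")  # ~233 mW
```

A figure in the low hundreds of milliwatts is what puts such an SoC in reach of battery-powered operation.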
The other papers are all from academic institutions, some more specialised and others more general. Korea's KAIST presents a reconfigurable convolutional neural network (CNN) and recurrent neural network (RNN) SoC in a more modest 65nm CMOS process, yet it achieves an energy efficiency of 8.1TOPS/W operating at 50MHz with a supply voltage of 0.77V.
The energy efficiency record goes to paper 14.5 from KU Leuven. The paper presents an energy-scalable CNN processor, again implemented in ST's 28nm FDSOI, that achieves efficiencies of up to 10TOPS/W by modulating computational resolution, voltage and frequency while maintaining recognition rate and throughput.
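The leverage behind that resolution-voltage-frequency modulation can be sketched with the standard first-order model of dynamic CMOS energy, which scales with the square of the supply voltage. The constants and function names below are our own illustrative assumptions, not values from the paper:

```python
# First-order dynamic-energy model (illustrative; all numbers assumed):
# energy per operation E = C_eff * Vdd^2, so running low-precision
# layers at a reduced supply voltage cuts energy quadratically.
def energy_per_op(c_eff_f: float, vdd_v: float) -> float:
    """Dynamic switching energy per operation, in joules."""
    return c_eff_f * vdd_v ** 2

C_EFF = 1e-12  # assumed effective switched capacitance per op (1 pF)

e_full = energy_per_op(C_EFF, 1.0)   # full-precision mode at an assumed 1.0 V
e_low = energy_per_op(C_EFF, 0.65)   # reduced-precision mode at an assumed 0.65 V
print(f"Energy saving: {1 - e_low / e_full:.0%}")  # ~58%
```

The point of the sketch is the quadratic dependence: even a modest voltage reduction, enabled by dropping computational precision where the network tolerates it, yields a disproportionate energy saving.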
Another highlight of the session comes from authors at the Massachusetts Institute of Technology, Cambridge, MA, with a standalone speech recognizer implemented in 65nm CMOS. The device uses voice activity detection for wake-up and a tailored feed-forward deep neural network accelerator for speech recognition. It achieves one quarter the word errors, one third the power consumption and one twelfth the memory bandwidth of another speech recognizer.
Although the MIT contribution