However, at least one company – Nauto Inc. (Palo Alto, Calif.), a startup working on autonomous driving and automotive collision recording – has already had access to the SDK.
There is also, as yet, no word on whether or when Qualcomm will include cores dedicated to deep learning in its Snapdragon range of processors. For now, the neural network software piggybacks on the existing Kryo CPU, Adreno GPU and Hexagon DSP cores inside the Snapdragon 820 processor.
The Snapdragon Neural Processing Engine SDK supports established deep learning frameworks, including Caffe and CudaConvNet, Qualcomm said.
The intention is to allow companies in a broad range of industries, including healthcare, automotive, security and imaging, to run their own proprietary trained neural network models on portable devices. Common deep learning tasks that can be set up with the SDK include scene detection, text recognition, object tracking and avoidance, gesture recognition, face recognition and natural language processing, Qualcomm said.
"Snapdragon Scene Detect software runs on the Snapdragon CPU and GPU today," explained Gary Brotman, director of product management, Qualcomm Technologies Inc. via email correspondence with EE Times Europe , earlier this year. "GPU is the optimal core for running DNNs [deep neural networks] at present. No hardware changes were required all of the optimization is done in software. For Scene Detect, Qualcomm provides an SDK containing models/neural nets for 60 image categories (e.g. dog, food, car) along with a runtime and the OEM integrates this into their Snapdragon handsets and apps."
"The Neural Processing Engine SDK means we can quickly deploy our proprietary deep learning algorithms to our Snapdragon-based connected camera devices in the field, which can detect driver distraction and help prevent auto accidents," said Frederick Soo, chief technology officer of Nauto, in a statement issued by Qualcomm.