
How will deep learning change SoCs?

Technology News |
By eeNews Europe



However, the bigger, and perhaps more pertinent, questions for the semiconductor industry are: Will “deep learning” ever migrate into smartphones, wearable devices, or the tiny computer vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?

“There is no question that deep learning is a game-changer,” said Jeff Bier, a founder of the Embedded Vision Alliance. In computer vision, for example, deep learning is very powerful. “The caveat is that it’s still an empirical field. People are trying different things,” he said.

There’s ample evidence to support chip vendors’ growing enthusiasm for deep learning and, more specifically, convolutional neural networks (CNNs). CNNs are widely used models for image and video recognition.

Earlier this month, Qualcomm introduced its “Zeroth platform,” a cognitive-capable platform that’s said to “mimic the brain.” It will be used for future mobile chips, including its forthcoming Snapdragon 820, according to Qualcomm.

Cognivue is another company vocal about deep learning. The company claims that its new embedded vision SoC architecture, called Opus, will take advantage of deep learning advancements to increase detection rates dramatically. Cognivue is collaborating with the University of Ottawa.

If presentations at Nvidia’s recent GPU Technology Conference (GTC) were any indication, Nvidia is banking on every aspect of deep learning in which the GPU holds the key.

China’s Baidu, a giant in search technology, has been training deep neural network models to recognize general classes of objects at data centers. It plans to move such models into embedded systems.

Zeroing in on this topic during a recent interview with EE Times, Ren Wu, a distinguished scientist at Baidu Research, said, “Consider the dramatic increase of smartphones’ processing power. Super intelligent models, extracted from deep learning at data centers, can be running inside our handsets.” A handset so equipped can run models in place without having to send and retrieve data from the cloud. Wu, however, added, “The biggest challenge is whether we can do it at very low power.”

From AI to deep learning

Search results of ‘cats that look like dogs’ (Source: Yahoo)

One thing is clear. Gone are the frustration and disillusion over artificial intelligence (AI) that marked the late 1980s and early 1990s. In the new “big data” era, massive data sets and powerful computing have combined to train neural networks to distinguish objects. Deep learning is now considered a new field moving toward AI.

Some claim machines are gaining the ability to recognize objects as accurately as humans. According to a paper recently published by a team of Microsoft researchers in Beijing, their computer vision system, based on deep CNNs, had for the first time eclipsed the ability of people to classify objects defined in the ImageNet 1000 challenge. Only five days after Microsoft announced it had beaten the human benchmark of 5.1% errors with a neural network that achieved a 4.94% error rate, Google announced it had one-upped Microsoft by 0.04%.

Different players in the electronics industry are tackling deep learning in different ways, however.


 

Different approaches

Nvidia CEO at GTC

Nvidia, for example, is going after deep learning via three products. CEO Jen-Hsun Huang trotted out Titan X, Nvidia’s new GeForce gaming GPU, during his keynote speech at GTC, describing it as “uniquely suited for deep learning.” He presented Nvidia’s Digits Deep Learning GPU training system, a software application designed to accelerate the development of high-quality deep neural networks by data scientists and researchers. He also unveiled Digits DevBox, a deskside deep learning appliance built specifically for the task, powered by four Titan X GPUs and loaded with the Digits training system software.

Asked about Nvidia’s plans for its GPU in embedded vision SoCs for Advanced Driver Assistance Systems (ADAS), Danny Shapiro, senior director of automotive, said Nvidia isn’t pushing the GPU as a chip company. “We are offering car OEMs a complete system, both ‘cloud’ and a vehicle computer that can take advantage of neural networks.”

A case in point is Nvidia’s DRIVE PX platform, based on the Tegra X1 processor, unveiled at the International Consumer Electronics Show earlier this year. The company describes Drive PX as a vehicle computer capable of using machine learning, saying that it will help cars not just sense but “interpret” the world around them.

How deep learning helps a car ‘interpret’ objects on the road. (Source: Nvidia)

Conventional ADAS technology today can detect some objects, do basic classification, alert the driver, and in some cases, stop the vehicle. Drive PX goes to the “next level,” Nvidia likes to say. Shapiro noted that Drive PX now has the ability to differentiate “an ambulance from a delivery truck.”

By leveraging deep learning, a car equipped with Drive PX, for example, can “get smarter and smarter, every hour and every mile it drives,” claimed Shapiro. Learning that takes place on the road feeds back into the data center and the car adds knowledge via periodic software updates, Shapiro said.  

Audi is the first company to announce plans to use Drive PX in developing its future automotive self-piloting capabilities. Shapiro said Nvidia will be shipping Drive PX to its customers in May this year.

Qualcomm has teased its cognitive-capable platform, which will be part of the new Snapdragon application processor for mobile devices, but said very little about its building blocks. The company explained that the Zeroth platform is capable of “computer vision, on-device deep learning and smart cameras that can recognize scenes, objects, and read text and handwriting.”

Qualcomm pitches its first cognitive computing platform. (Source: Qualcomm)

Meanwhile, Cognivue (Quebec, Canada) sees the emergence of CNN creating a level playing field for embedded-vision SoCs.

Cognivue is a designer of its own Image Cognition Processor core, tools and software, used by companies such as Freescale. By leveraging Cognivue’s programmable technology, Freescale provides intelligent imaging and video solutions for automotive vision systems.

Tom Wilson, vice president of product management at Cognivue, said, “We are finding our massively parallel image processing architecture and datapath management ideally suited for deep learning.” In contrast, competitors have often hand-designed their embedded vision SoCs to keep pace with each new vision algorithm that has emerged over time, optimizing the SoC design each time. They might find themselves stuck with an old architecture ill-suited to CNNs, he explained.

Cognivue’s new Image Cognition Processing technology, called Opus, will leverage APEX architecture (shown above), and enable parallel processing of sophisticated Deep Learning (CNN) classifiers. (Source: Cognivue)

Robert Laganière, professor at the School of Electrical Engineering and Computer Science at the University of Ottawa, told EE Times, “Before the emergence of CNNs in computer vision, algorithm designers had to make many design decisions” involving a number of layers and steps with vision algorithms.

Such decisions include the type of classifier used for object detection and methods to build an aggregation of features (e.g., by using a rigid, histogram-based detector). More decisions include how to deal with deformable parts of an object and whether to use a cascade method (a sequence of small decisions to determine an object) or a support vector machine.
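The cascade method mentioned above can be sketched in a few lines. This is a toy illustration, not any vendor’s implementation: the stage functions, feature names, and thresholds below are all hypothetical. The point is the structure, a sequence of small decisions in which cheap tests reject most candidate windows before the expensive final classifier ever runs.

```python
# A cascade is a sequence of small decisions: a candidate window is
# rejected as soon as any stage fails, so the costly later stages run
# only on the few windows that survive the cheap early ones.
def cascade_detect(window, stages):
    for stage in stages:
        if not stage(window):
            return False  # early rejection: later stages never run
    return True           # survived every stage: candidate object

# Hypothetical stages of increasing cost and selectivity, operating
# on a dict of precomputed features.
stages = [
    lambda w: w["mean_intensity"] > 0.2,   # very cheap brightness test
    lambda w: w["edge_energy"] > 0.5,      # mid-cost gradient test
    lambda w: w["template_score"] > 0.9,   # expensive final classifier
]

print(cascade_detect(
    {"mean_intensity": 0.4, "edge_energy": 0.7, "template_score": 0.95},
    stages))  # passes every stage
print(cascade_detect(
    {"mean_intensity": 0.1, "edge_energy": 0.9, "template_score": 0.99},
    stages))  # rejected at the first, cheapest stage
```

As Laganière notes, each such design choice (thresholds, stage ordering, classifier type) can swing detection accuracy significantly, which is exactly the burden deep learning removes.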

“One small specific design decision you make at each step of the way could have a huge impact in object detection accuracy,” said Laganière.


In the deep architecture, however, you can integrate all the steps into one, he explained. “You need to make no decision, because deep learning will make decisions for you.”

In other words, as Bier summed up: “Traditional computer vision took a very procedural approach in detecting objects.” Deep learning is, however, a radical departure, he said,  because “you don’t have to tell computers where to look.”

Bier described the process as a two-phase approach. Learning and training are done at dedicated facilities, such as data centers, using supercomputers. Then the large data sets from the first phase are translated into “settings” and “coefficients” for embedded systems to use, said Bier.
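Bier’s two-phase split can be sketched with a deliberately tiny stand-in model (the data, model, and function names here are invented for illustration): phase one runs iterative training, and all the embedded device ever receives is the resulting coefficients plus a fixed, cheap inference routine.

```python
import numpy as np

# Phase 1 (data center): train a tiny toy classifier with gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy labels

w = np.zeros(4)
b = 0.0
for _ in range(300):
    z = np.clip(X @ w + b, -30, 30)          # clip to keep exp() stable
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Phase 2 (embedded): ship only the learned coefficients; inference is a
# fixed dot product, with no training machinery on the device.
def embedded_predict(x, w, b):
    return (x @ w + b) > 0

acc = np.mean(embedded_predict(X, w, b) == (y > 0.5))
print(f"on-device accuracy with exported coefficients: {acc:.2f}")
```

The asymmetry is the whole point: training needs data-center-scale compute, while the deployed model needs only multiply-accumulate arithmetic, which is why low power is the binding constraint Wu describes.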

SoCs optimized for neural networks?
No consensus appears to have emerged on the best architecture for CNNs in embedded vision SoCs.

Cognivue and the University of Ottawa’s Laganière believe that a massively parallel architecture is the way to process a convolutional neural network efficiently. In parallel processing, an image to which certain parameters are applied produces another image, and as another filter is applied to that image, it produces yet another image. “So you may need more internal local memory to store intermediate results in SoCs,” said Laganière.
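The image-in, image-out pipeline Laganière describes can be shown in a minimal sketch (the filters and sizes are arbitrary toy choices, not anything Cognivue disclosed): each filter pass turns one image into another, and every intermediate image has to be buffered somewhere before the next filter can consume it.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D filtering: each pass maps an image to a new, smaller image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])   # horizontal-gradient filter
blur = np.full((2, 2), 0.25)     # 2x2 averaging filter

# Each stage produces an intermediate image that must be stored before the
# next filter runs: this is the "internal local memory" demand on the SoC.
stage1 = conv2d(image, edge)     # first intermediate image (6x5)
stage2 = conv2d(stage1, blur)    # image produced from the intermediate (5x4)
print(stage1.shape, stage2.shape)
```

Note that every inner-window sum is independent of the others, which is what makes the workload a natural fit for a massively parallel datapath.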

The bad news is that in a big CNN, you could end up with billions of parameters. “But the good news is that there are tricks that we can use to simplify the process and remove some connections that are not needed,” he explained. The challenge, however, remains in handling the many different nodes in a CNN, since you can’t predetermine which node needs to be connected to another. “That’s why you need a programmable architecture. You can’t hardwire the connections,” said Laganière.
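One of the simplification “tricks” Laganière alludes to is magnitude pruning: drop connections whose weights are near zero. A toy sketch (the matrix, sizes, and threshold are invented for illustration) shows why it works: a layer dominated by a few large weights loses almost nothing when the tiny ones are removed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weight matrix: mostly near-zero weights plus a few dozen large ones,
# mimicking a trained layer where most connections matter little.
W = rng.normal(scale=0.02, size=(64, 64))
rows = rng.integers(0, 64, 40)
cols = rng.integers(0, 64, 40)
W[rows, cols] = rng.normal(scale=1.0, size=40)

# Prune: zero out every connection whose weight magnitude is tiny.
threshold = 0.1
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

x = rng.normal(size=64)
dense_out = W @ x
sparse_out = W_pruned @ x

kept = np.count_nonzero(W_pruned) / W.size
err = np.linalg.norm(dense_out - sparse_out) / np.linalg.norm(dense_out)
print(f"connections kept: {kept:.1%}, relative output error: {err:.3f}")
```

The catch, as Laganière says, is that which connections survive is only known after training, so the hardware routing them has to stay programmable rather than hardwired.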

Meanwhile, Bier said that in designing a processor for CNNs, “You could use a simple, uniform architecture.” Rather than designing a different SoC architecture or optimizing it every time new algorithms pop up, a CNN processor only needs a “fairly simple algorithm that comes with fewer variables,” he explained. In other words, “One could even argue that you can reduce programmability for a neural network processor” if the right settings and coefficients are known. “But many companies aren’t ready to make that bet yet, because things are still developing,” added Bier.

Chip vendors are using everything from CPU and GPU to FPGA and DSP to enable CNN on vision SoCs. So the debate over CNN architecture has only begun, in Bier’s opinion.

While there is no question that deep learning is altering the future of embedded-vision SoC designs, Bier said that a leading vision chip company like Mobileye has accumulated substantial vision-based automotive safety expertise. “I know many rivals want to eat their lunch, but I think an incumbent like Mobileye still has the first mover advantage.”  

Baidu’s Wu, asked about the challenges of deep learning in smartphones and wearable devices, pointed out three. First, “We are still looking for a killer app,” he said. When the industry developed an MP3 player, for example, people knew exactly what it was for. This made it easy to develop a necessary SoC. While on-device deep learning sounds cool, what is its best application? No one knows yet, according to Wu.

Second, “Deep learning needs an ecosystem,” he said. Collaboration among research institutes and companies is critical and “very useful,” he said.

Third, “We want to make smaller devices capable of deep learning,” said Wu. “Making it high performance at lower power will be the key.”

The topic of bringing deep learning to embedded systems is close to Wu’s heart. He will be a keynote speaker at the Embedded Vision Summit on May 12 in Santa Clara. He’ll speak about “Enabling Ubiquitous Visual Intelligence Through Deep Learning.”

 

About the author:

— Junko Yoshida, Chief International Correspondent, EE Times

 
