Designing for IoT—Part IV—the Cloud

June 03, 2014 // By Christian Legare, Micrium
In this last segment of the series, we will look at back-end services. Some IoT systems may not need back-end services – for example, a smartphone controlling a TV – but the majority of the IoT systems currently envisioned rely on the collection, processing, and use of data produced by IoT devices. For this, a form of Information Technology different from embedded systems is required.

This article is the conclusion to a four-part series:

- In Part 1, we reviewed the choices facing an embedded developer who needs to build wireless networking into an IoT device (that is, the Thing in the Internet of Things).

- In Part 2, we discussed the different types of IoT devices, and the design choices that you face when defining the hardware and software architectures.

- In Part 3, we looked at the Internet itself and how IoT devices make use of it.

The Cloud

The Cloud: this is another interesting buzzword. When I was Director of Engineering at Teleglobe Canada, every time we sat in a meeting room to design a network, we drew it as a cloud. A network is a cloud. The Internet is a cloud. And cloud computing is nothing more than an array of networked computers that allow you to offload processing tasks from your embedded system. The same is true for data storage: why store data locally when you can store it in a secure data centre, with guaranteed power and back-ups?
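
To make the offload concrete: on the device side, pushing a sensor reading into the cloud can be as small as a single HTTP POST. The sketch below is a minimal illustration, not a production design – the host data.example.com and the /readings path are hypothetical, BSD-style sockets stand in for whatever API your TCP/IP stack exposes, and a real deployment would add TLS and retry logic.

```c
/*
 * Minimal sketch: offloading one sensor reading to a cloud endpoint.
 * data.example.com and /readings are hypothetical; BSD-style sockets
 * are used for illustration, and error handling is kept to a minimum.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int post_reading(double temperature_c)
{
    struct addrinfo hints = {0}, *res;
    char body[64], request[256];
    int sock, len;

    /* Encode the reading as a small JSON document. */
    len = snprintf(body, sizeof(body), "{\"temperature_c\": %.2f}",
                   temperature_c);

    /* Build a plain HTTP POST; production code would use TLS. */
    snprintf(request, sizeof(request),
             "POST /readings HTTP/1.1\r\n"
             "Host: data.example.com\r\n"
             "Content-Type: application/json\r\n"
             "Content-Length: %d\r\n"
             "Connection: close\r\n"
             "\r\n%s", len, body);

    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("data.example.com", "80", &hints, &res) != 0)
        return -1;

    sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0) {
        freeaddrinfo(res);
        return -1;
    }
    if (connect(sock, res->ai_addr, res->ai_addrlen) < 0) {
        close(sock);
        freeaddrinfo(res);
        return -1;
    }
    send(sock, request, strlen(request), 0);

    close(sock);
    freeaddrinfo(res);
    return 0;
}
```

A real device would also authenticate itself and secure the connection; the point here is only how little of the work stays on the embedded side.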

There is, of course, one fundamental assumption in cloud computing: The network is always available!

So “cloud computing” is a term coined to put a simple name on something that has become very complex. Many companies have launched services that try their best to hide this complexity; these include Apple’s iCloud, Google Cloud Platform, Microsoft SkyDrive, and others. These cloud computing and storage systems are intended for use with desktop and mobile personal computers. Embedded developers need something similar for IoT devices.

Industry analysts are forecasting the creation of billions of IoT devices by 2020, and these devices will produce huge amounts of data. There are a few approaches for managing and processing all this data.

I see two trends developing among companies that are moving into IoT:

- Some companies are developing and selling their own proprietary solutions because they feel they have a lead and want to leverage it to the fullest.

- Other companies don’t have the capability to deploy a complete infrastructure, and so prefer to rely on emerging public or commercial solutions.

You can define your own IoT system from end to end. Many large companies are trying to do just that, hoping to capture a good portion of this emerging market. Others are specialising in certain parts, such as GE with its Predix platform for industrial-internet data analytics. Does this mean you absolutely need to buy a provider’s cloud services? No: you can build your own, or outsource the expertise you don’t want to develop in-house.

Running a back-end service must become a core competency of any company that attempts it; there will be no room for dilettantes. But not all organisations have the DNA to put in place a server farm, guarantee the fail-safe operation of their network, and provide the data back-ups, system redundancy, and all the other crucial elements that come with it. Can your system cope with a network failure? If so, for how long?
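
One concrete way to answer that question is to size a local buffer: the device queues samples while the link is down and drains them once it comes back, so the buffer depth fixes how long an outage it can absorb without losing data. The sketch below assumes a hypothetical send_to_cloud() transport function and one double-precision reading per sample.

```c
/*
 * Sketch: riding out a network outage by buffering readings locally.
 * The buffer depth answers "for how long?": at one reading per second,
 * 512 slots cover roughly 8.5 minutes of downtime before the oldest
 * samples start being overwritten. send_to_cloud() is a hypothetical
 * placeholder for whatever transport the system actually uses.
 */
#include <stdbool.h>

#define BUF_SLOTS 512u

typedef struct {
    double   sample[BUF_SLOTS];
    unsigned head, tail, count;
} reading_buf_t;

static reading_buf_t buf;

extern bool send_to_cloud(double sample);  /* hypothetical transport */

/* Queue a reading; overwrite the oldest one when the buffer is full. */
void buf_push(double sample)
{
    buf.sample[buf.head] = sample;
    buf.head = (buf.head + 1u) % BUF_SLOTS;
    if (buf.count < BUF_SLOTS) {
        buf.count++;
    } else {
        buf.tail = (buf.tail + 1u) % BUF_SLOTS;  /* oldest sample lost */
    }
}

/* Call periodically: drain the buffer while the network cooperates. */
void buf_flush(void)
{
    while (buf.count > 0u) {
        if (!send_to_cloud(buf.sample[buf.tail])) {
            return;  /* still offline; try again on the next pass */
        }
        buf.tail = (buf.tail + 1u) % BUF_SLOTS;
        buf.count--;
    }
}
```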

Early IoT deployments have revolved around sensors and actuators connected to the public Internet via a gateway or hub, delivering data to an Internet-based server (cloud computing). This is often a vertical integration built around one primary vendor, such as your utility provider or telecommunications carrier. These providers have little or no interest in working together, resulting in an unmanageable melting pot of services.

The army of devices that make up the Internet of Things will generate more data than any individual Web application. IBM’s chief executive, Virginia Rometty, writing in The Economist’s “The World in 2014” [Ref 2], gives IBM’s estimate of the quantity of data to be processed in the years to come: “By one estimate, there will be 5,200 Gigabytes of data for every human on the planet by 2020.” Multiplied across a projected population of more than seven billion, that works out to roughly 40 zettabytes.

In his TED talk, “The Internet of Things is Just Getting Started” [Ref 1], Arlen Nipper calculates that supporting the 30 billion connected devices expected by 2020 would require deploying about 340 application servers per day (roughly 120,000 servers per year), assuming each of these systems is deployed as a segregated application. Mr. Nipper suggests that one way to make IoT possible in the coming years will be to adopt cloud computing.

Around the year 2000, all the telecom carriers claimed that they could each provide the Internet all by themselves, and they invested billions of dollars in equipment purchases. Everybody was looking for the “killer application,” the application that would create the “gazillions” of bytes of traffic needed to fill these networks. At the time, the application generating most of the network traffic was e-mail; today, social media and video sharing are replacing it. When the predicted traffic did not materialise on this huge IP network, it helped trigger the bursting of the dot-com bubble.

With IoT, we are finally seeing a new “killer application.” When billions of devices exchange information over the Internet, they will require significant network bandwidth and, above all, enormous data storage and processing capabilities. A new term has been coined to describe this trend: Big Data.

next: Backend Services...