IoT wireless sensors and the problem of short battery life

July 31, 2015 // By Carlo Canziani
Wireless sensors provide great insight in applications like monitoring environmental conditions or industrial plants and machinery. Because they are simple to install, they can be deployed in a multitude of situations. But one of the factors that most limits the use of wireless sensors is short battery life: their limited ability to keep doing the job for a reasonable amount of time.

When a wireless sensor's operation is fully dependent on a battery, and the battery is depleted, it becomes just a piece of junk.

If you are designing battery-operated wireless sensors, you face numerous challenges in ensuring your devices operate for a reasonable amount of time. The typical approach is to use energy only for the required activity, then put the device into a low-power mode. The operation of a wireless sensor can be segmented into a series of activities, each one requiring a certain level of power for a certain amount of time. The most common activities:

  • Waking up, taking a measurement, and processing data into a message
  • Powering up the RF power amplifier, transmitting the message, and powering the RF PA down again
  • In bidirectional sensors (transmit and receive): waking up, powering up the receiver, receiving, processing data, acting on a message, and powering back down

It is easy to see that multiple actions play a role in discharging the battery.
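The energy cost of each activity is its current draw times the supply voltage times its duration. As a rough sketch, the per-event cost of the activities above can be tallied like this (all current and duration figures are illustrative placeholders, not measurements from a real sensor):

```python
# Hypothetical per-activity figures for illustration only;
# real values must be measured on your own device.
activities = {
    # name: (current in mA, duration in ms)
    "wake_measure_process": (5.0, 20.0),
    "tx_message":           (30.0, 5.0),
    "rx_window":            (15.0, 10.0),
}

supply_v = 3.0  # assumed coin-cell supply voltage

# mA * V * ms = microjoules, so the arithmetic stays unit-clean
for name, (i_ma, t_ms) in activities.items():
    energy_uj = i_ma * supply_v * t_ms
    print(f"{name}: {energy_uj:.0f} uJ per event")

total_uj = sum(i * supply_v * t for i, t in activities.values())
print(f"total per cycle: {total_uj:.0f} uJ")
```

Summing these per-event costs over a full wake/transmit/receive cycle is the starting point for any battery-life estimate.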

The simplest way to increase the battery life is to use a bigger battery, a battery with higher capacity. Nevertheless, your customers are likely to expect their sensors to be small and to offer high performance (so they can send lots of data and have local intelligence/data crunching capability). Clearly, your customer expectations are diametrically opposed to the easiest way to solve the issue of short battery life.

How do engineers estimate battery life?

As a design engineer, you need to make compromises, balancing battery size against the wireless sensor’s functionality, to get the best performance from a small battery while keeping a sufficiently long interval between battery replacements.

The optimization process starts by understanding the energy requirements. Gathering data about energy usage is the first step to characterizing device performance.

A battery has a defined amount of energy, specified in watt-hours (Wh), and a capacity, specified in amp-hours (Ah). If you know how much power is required to operate your device, you can calculate the battery life.

Battery life (hours) = Battery energy (Wh) / Average power drain (W)

The battery’s energy is also the product of its voltage rating (V) and capacity (Ah). The voltage rating is a midpoint value on the battery’s discharge curve empirically determined to correctly relate the battery’s energy and capacity. Based on this, battery life can also be determined by the formula:

Battery life (hours) = Battery capacity (Ah) / Average current drain (A)
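The two formulas give the same answer when the same nominal voltage is used throughout. A minimal sketch, using figures loosely based on a 3 V, 225 mAh coin cell and an assumed 50 µA average drain (both illustrative, not vendor specifications):

```python
# Illustrative battery figures (roughly a 3 V, 225 mAh coin cell)
capacity_ah = 0.225                 # 225 mAh
voltage_v = 3.0                     # nominal (midpoint) voltage
energy_wh = capacity_ah * voltage_v # 0.675 Wh

avg_current_a = 50e-6               # assumed 50 uA average drain
avg_power_w = avg_current_a * voltage_v

# Energy-based and capacity-based battery-life formulas from the text
life_from_energy = energy_wh / avg_power_w        # hours
life_from_capacity = capacity_ah / avg_current_a  # hours

print(life_from_energy, life_from_capacity)  # 4500.0 hours each, about 6 months
```

Because the nominal voltage cancels out, the capacity-based formula is usually the more convenient of the two: you only need the average current drain.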

However, when the device is in real operation, the battery life is typically shorter than the number you calculated. The most common comment is: “the battery quality is bad.” Representatives of major battery brands will offer detailed specifications and explain that among batteries of the same type, capacity variations of 5 to 10 percent are common.

But even using conservative battery capacity estimates, battery life typically falls short. The device dies before it is expected to. Why does this happen? Did we correctly estimate energy usage? Probably not. Let’s explore the problem.

The complexity of measuring dynamic current drain

In battery-powered devices like wireless sensors, sub-circuits are active only when required, in order to save energy. Engineers design the device to spend most of its time in a sleep mode with minimum current drain. During sleep mode, only the real-time clock operates. The unit then wakes up periodically to perform measurements. The acquired data is then transmitted to a receiving node.

Figure 1: Current levels during the three main states of a wireless sensor

The different operating modes result in a current drain that spans a wide dynamic range from sub-µA to 100 mA, which is a ratio on the order of 1:1,000,000.
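The average drain over one cycle is the time-weighted mean of the per-state currents. A sketch with illustrative numbers (not from any specific device) shows why the brief high-current states matter so much:

```python
# Duty-cycle weighted average current.
# Sleep dominates the time; transmit dominates the instantaneous drain.
# All figures are illustrative placeholders.
states = [
    # (name, current in A, time in s per cycle)
    ("sleep",    0.5e-6, 9.975),  # sub-uA sleep for most of a 10 s cycle
    ("measure",  5e-3,   0.020),
    ("transmit", 100e-3, 0.005),
]

cycle_s = sum(t for _, _, t in states)
avg_a = sum(i * t for _, i, t in states) / cycle_s
print(f"average drain: {avg_a * 1e6:.1f} uA over a {cycle_s:.0f} s cycle")
```

With these numbers, the active states occupy only 0.25 percent of the cycle yet account for almost all of the average current, so mis-measuring them wrecks the battery-life estimate.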

Traditional measurement techniques and their limitations

A well-known method for measuring current is to use the ammeter function of a DMM. The accuracy of current measurements made with modern DMMs looks good, but the specifications are defined for fixed ranges and relatively static signal levels, which isn’t the situation on a wireless sensor with its dynamic current drain.

The DMM is connected in series between the battery and the device to measure the current. From time to time, the readings become unstable when the sensor enters its active cycle or its transmit mode.

We know that DMMs have multiple ranges, and with auto-ranging the meter should be able to select the most appropriate range and deliver the best accuracy. However, DMMs aren’t ideal: auto-ranging takes time to change range and settle the measurement, often 10 to 100 ms, which is longer than the duration of the transmit or active modes. For this reason, the auto-range function needs to be disabled and the user needs to manually choose the most appropriate range.