Navigant Research Blog

The ZigBee “IP-ification” Wars

— May 30, 2011

Sometimes it seems that ZigBee is the technology everyone loves to hate.

Born circa 2002 out of the desire for a standard for the elusive “Internet of Things”, ZigBee is a set of specifications using the IEEE 802.15.4 radio standard for connecting “things” ranging from home devices (light switches, thermostats, appliances, etc.) to industrial and building controls. Though the ZigBee standards process included the usual technical arguments, politics, and bickering, it ultimately resulted in a low-cost, self-healing, low-power (i.e., battery-operated), and scalable (i.e., thousands of nodes) technology. ZigBee includes both a mesh “network stack” and “application profiles” that specify the messages used in specific applications (home automation, building automation, etc.).

During ZigBee’s development, a fateful decision was made to NOT use the Internet Protocol suite. This seemed a rational decision at the time: maximum packet sizes are far smaller (127 bytes vs. 1,500+ for Ethernet), there was no IP mesh specification, no concept of battery-conserving “sleeping nodes”, and the 15.4 radio silicon had extremely constrained memory. These constraints led ZigBee engineers to eschew the overhead of IP’s layered architecture, and they set out to “build something that actually worked”. While most “IP engineers”, noting the power of the internet, saw this choice as just plain stupid, few worked to address the Internet-of-Things challenge. No small amount of professional (and sometimes personal) enmity emerged between these groups. (Full disclosure: I was a marketing executive with the leading ZigBee firm in 2007 and 2008.)

When the smart grid community began looking for a Home Area Network (HAN) solution, they latched onto ZigBee as the only viable, mature, multi-vendor solution available.  They worked quickly to develop the “ZigBee Smart Energy Profile (SEP)” for HAN applications.  Today, tens of millions of smart meters are deployed in Texas and California based on this initial specification.

However, ZigBee’s failure to leverage IP emerged as a critical flaw. ZigBee became a vertically integrated set of solutions that was difficult to connect to the IP-based outside world without resorting to application-specific gateways, and whose application profiles were difficult to adapt to other transports such as HomePlug and Wi-Fi. In contrast, at least theoretically, IP’s layering allows translation between the lower layers of the protocol stack while keeping the application layers transparent.

When the NIST standards efforts got turbocharged in 2009 by ARRA stimulus funding, the obvious benefits of IP’s layering, combined with good politicking, led NIST to essentially mandate the use of IP-based protocols. Additionally, the 6LoWPAN specification emerged from the IETF, describing how IP packets could be squeezed into small 15.4 frames. Many claimed a more powerful IP-based alternative to ZigBee could be developed in a smaller memory footprint. The ZigBee Alliance had no choice but to agree to an IP-based ZigBee standards rebuild. Smart engineers from both groups began earnestly working together to develop a new standards suite, nominally called “Smart Energy Profile (SEP) 2.0”. The reconciled groups made fast progress against impossibly aggressive deadlines.
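
To see why squeezing IP into 15.4 frames is non-trivial, a rough back-of-the-envelope sketch helps. The byte counts below for MAC overhead and compressed headers are illustrative assumptions (actual values depend on addressing and security modes), not figures from any of the specifications discussed here:

```python
# Rough frame budget for IPv6 over IEEE 802.15.4.
# MAC overhead and compressed-header sizes are illustrative assumptions.

PHY_PAYLOAD = 127          # maximum 802.15.4 PHY payload, in bytes
MAC_OVERHEAD = 25          # assumed MAC header + FCS (varies with addressing mode)
IPV6_HEADER = 40           # uncompressed IPv6 header
UDP_HEADER = 8             # uncompressed UDP header
COMPRESSED_HEADERS = 10    # assumed 6LoWPAN-compressed IPv6 + UDP headers

frame_budget = PHY_PAYLOAD - MAC_OVERHEAD                      # ~102 bytes
without_compression = frame_budget - IPV6_HEADER - UDP_HEADER  # ~54 bytes left for data
with_compression = frame_budget - COMPRESSED_HEADERS           # ~92 bytes left for data

print(f"Payload without compression: {without_compression} bytes")
print(f"Payload with 6LoWPAN compression: {with_compression} bytes")
```

Uncompressed IPv6 and UDP headers alone would eat roughly half of a 15.4 frame; 6LoWPAN’s header compression is what leaves a usable payload.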

However, this April a draft SEP 2.0 ballot failed, causing old animosities to resurface. At issue is the choice of transport and application protocols: TCP and HTTP, as is typical on today’s internet, or the UDP and CoAP (Constrained Application Protocol) combination. TCP/HTTP is notoriously inefficient in terms of bandwidth and end-node processing (witness generations of “TCP offload engines” in server network adapters), while UDP/CoAP is simpler but new, unproven, and hence not yet in widespread use. While nuanced technical pros and cons exist, the heart of the matter is broader and has potentially serious industry implications.
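
The efficiency gap is easy to see on the wire. The sketch below (a made-up meter resource path and host name, with the CoAP bytes laid out as a four-byte fixed header plus Uri-Path options; illustrative only) compares a minimal HTTP GET with an equivalent CoAP GET, before even counting TCP versus UDP and IP header overhead:

```python
# Illustrative comparison of minimal request sizes.
# The resource path and host name are invented for this example.

http_request = (
    "GET /meter/demand HTTP/1.1\r\n"
    "Host: han-gateway.example\r\n"
    "Accept: application/json\r\n"
    "\r\n"
).encode()

# CoAP: 4-byte fixed header followed by two Uri-Path options.
coap_request = bytes([
    0x40,        # version 1, confirmable message, no token
    0x01,        # code 0.01 = GET
    0x12, 0x34,  # message ID
    0xB5,        # option: Uri-Path (delta 11), length 5
]) + b"meter" + bytes([0x06]) + b"demand"   # second Uri-Path segment, length 6

print(len(http_request), "bytes for the HTTP request")   # roughly 80
print(len(coap_request), "bytes for the CoAP request")   # roughly 17
```

A CoAP exchange over UDP also avoids TCP’s three-way handshake and per-connection state, which is much of what strains small end nodes.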

ZigBee is most often implemented in “systems-on-chip” (SoCs) that combine a processor, radio, memory, and other functions into a single low-cost chip. Fitting the ZigBee software into these constrained devices was a concern even before the move toward IP. Despite optimism that IP-based code would be smaller, current draft implementations are significantly larger, and TCP/HTTP in particular stresses the RAM capacity of these devices. This potentially threatens the upgradeability of millions of ZigBee-enabled meters and devices already deployed. For ZigBee SoC vendors and their customers, this is a serious concern. For others, forcing the industry onto a new protocol, however efficient, is too much to ask when ubiquitous protocols already exist, even if those ubiquitous protocols fundamentally challenge the existing hardware. And here enters the politics….

The TCP/HTTP advocates (roughly the original IP proponents) charge that the UDP/CoAP advocates (roughly the original ZigBee proponents) are deliberately stalling SEP 2.0 in order to force the industry to lock in their original ZigBee solutions (SEP 1.x) for upcoming HAN rollouts. The UDP/CoAP camp counters that it simply wants a more scalable solution and protection for existing investments. Besides, they say, they already have SEP 2.0 solutions available, so a delay gains them nothing. They claim the installed base is not being taken seriously, and that technology vendors that lost the initial HAN selections, such as Wi-Fi, might benefit if existing ZigBee installations were rendered obsolete. So there are many possible political motivations surrounding this ostensibly technical disagreement.

In the meantime, utilities and their suppliers are largely caught in the middle.  If they have not been paying close attention, they should start.  Even if UDP/CoAP is a technical kludge, it has happened before in support of existing installed bases – just look at PPP-over-Ethernet, a spec that allows use of dial-up modem protocols over Ethernet and ATM-based DSL links.  There is nothing particularly elegant about this, yet it allowed an easier carrier infrastructure transition from dial-up internet access to today’s ubiquitous broadband.

The worst possible outcome would be a stalemate that adds to HAN technology deployment delays. Unfortunately, this appears to be the most likely outcome, and it contributes to our relatively pessimistic view of near-term HAN adoption.

 

Energy Storage Can Make the Market More Efficient

— May 27, 2011

There is a fundamental disconnect between electricity generation and consumption, and it makes the market inefficient. Moreover, electricity is a perishable good, which magnifies this inefficiency.

Currently, generation is kept aligned with consumption in real time. It also happens that consumption is not particularly smooth: depending on the time of day, the weather, and the location, one area may have a moderate consumption profile while another has spikes that are difficult to predict. This means that generation is ramped up and down in line with consumption, which is itself volatile. It’s no wonder electricity prices are rising and some utilities and grid operators are facing increasing challenges in delivering electrons where they are needed.

In practice, base load generation assets, peak generation assets, intermittent assets, and energy storage assets (where available) are orchestrated to deliver the right amount of electricity to the grid at each moment. However, at every turn there are opportunities to maximize the efficiency of generation. Base load assets such as geothermal, coal- and gas-fired power plants, nuclear power plants, and the like can be optimized for efficiency instead of being cycled up and down to accommodate consumption. Peak generation assets such as natural gas peakers are well known to grid operators but are expensive to operate; their benefit is that they are a proven technology that can provide energy when the grid needs it most. Intermittent assets such as wind and solar contribute to generation, but their benefit is limited by their inherent intermittency. Finally, energy storage assets – still limited in deployment and consisting mostly of pumped storage, compressed air energy storage, and batteries of various types – are the key to maximizing all generation assets.

Energy storage provides a buffer between the time energy is generated and the time it is consumed. This makes the market more efficient in several ways:

  • Generation assets can be optimized and maximized
  • Generation can be distributed
  • Generation can be intermittent (as with some renewables)
  • Consumption can be managed (as in grid congestion)
  • Consumption can be smoothed (as with load-side storage)

Of course, only a few technologies have had widespread success serving the energy storage market, and widespread success is a relative term. Although there are approximately 100 GW of pumped storage, for instance, countries such as India and China are increasing energy consumption at a voracious clip. And who can blame them? Energy is a substitute for work and is the key to engineering economic growth for export-driven economies. Hopefully, middle-income and even high-income countries will understand the benefit of storage in making the energy market more efficient.

 

Do Machines Always Tell the Truth?

— May 26, 2011

Recently, a colleague posted an ironic throw-away comment on his Twitter. The Internet Crime Complaint Center had warned its readers that people on internet dating sites may not always tell the truth. My friend wrote, “BREAKING: People lie on the Internet.” I piled on with my own disposable comment, “So when we finally get the Internet of Things, we can expect Things to lie as well?”

We had a laugh about it, then I got to thinking – this could become a serious problem. Sure, a machine does not have a conscience, so hopefully we don’t have to worry about a sensor going over to the dark side. But those who build machines can start them out on the dark side.

One subtle attack against an infrastructure is to insert process measurement devices that give slightly erroneous readings. Not large enough errors to stand out, but sufficient to cause operators or control software to draw invalid conclusions. (Analogously, many of the most successful financial frauds operate at low amounts, under the radar of built-in countermeasures.) Even more insidious, such a device could be set to read accurately for a given period, perhaps the first two years of its service life, and then begin reporting incorrectly.
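
The mechanics are trivial. As a purely hypothetical illustration (made-up timings and bias values, not drawn from any real device), a compromised sensor could behave like this:

```python
import random
import time

class CompromisedSensor:
    """Hypothetical sensor that reports faithfully for an initial dwell
    period, then adds a small bias intended to stay below alarm thresholds."""

    def __init__(self, true_reading_fn, dwell_seconds, bias_fraction=0.02):
        self.true_reading_fn = true_reading_fn        # the real physical measurement
        self.activation_time = time.time() + dwell_seconds
        self.bias_fraction = bias_fraction            # e.g., 2% -- small enough to hide

    def read(self):
        value = self.true_reading_fn()
        if time.time() >= self.activation_time:
            # After the dwell period, skew readings slightly and add a little
            # noise so the bias does not show up as a clean step change.
            value *= 1.0 + self.bias_fraction
            value += random.gauss(0.0, abs(value) * 0.001)
        return value
```

A two-year dwell period and a two-percent bias are exactly the kind of parameters that would slide past spot checks yet compound into real operational error.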

Such modifications could have obvious applications in planning an infrastructure attack and hiding its existence for as long as possible. Think of it as the inverse of the Stuxnet attack. Stuxnet sends malicious control codes to devices so that they will fail. What if malicious data were intentionally sent from the measurement devices? Some disturbing scenarios are possible – trains proceeding when they should stop or electric vehicles showing full charge when their batteries are nearly discharged. Incorrect temperature data in manufacturing processes can seriously undermine safety.

Process management systems assume that input data from measurement devices are accurate. So do operators, in the absence of any evidence to the contrary. With inaccurate data, operators or systems could unwittingly initiate the attacks themselves, with no outside involvement needed. Forensics for such an attack would be extremely challenging.

Recently this blog suggested that while many people know how to abuse your bank account, not many know how to abuse a phase angle measurement. Unfortunately, those few who do are very smart and may understand how to cause considerable harm.

Hopefully these problems are still out in the future, but no one can guarantee that. Regardless, there are some steps that can be taken when developing or deploying process control devices:

  • Where possible, deploy devices that communicate in a secure manner to protect data integrity. Encryption is not the answer to all problems but it helps preserve data integrity and prevent man-in-the-middle attacks.
  • Develop new processors using secure manufacturing techniques, which can validate that the finished product exactly matches the initial design – this is helpful in preventing firmware backdoors.
  • Operation of control networks should include regular statistical analysis of readings to identify abnormalities or trends slowly creeping away from historical averages (a minimal sketch of this follows the list). These trends may not necessarily be due to erroneous readings, but they should still be investigated.
  • Pre-deployment testing of devices should simulate operation over the service life of the device, not simply verify that the device will function correctly on the first day. This can produce a tremendous amount of data, which can be used to test the statistical analyses described in the previous bullet.
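
As promised above, here is a minimal sketch of that kind of statistical check. The window size and threshold are illustrative assumptions, not recommendations, and a real deployment would use more robust statistics:

```python
from collections import deque
from statistics import mean, stdev

def make_drift_monitor(baseline_readings, window=500, z_threshold=3.0):
    """Return a function that flags readings whose rolling mean has drifted
    away from a historical baseline. Parameters are illustrative only."""
    baseline_mean = mean(baseline_readings)
    baseline_std = stdev(baseline_readings)
    recent = deque(maxlen=window)

    def check(reading):
        recent.append(reading)
        if len(recent) < window:
            return False                      # not enough data to judge yet
        # z-score of the rolling mean under the baseline distribution
        z = (mean(recent) - baseline_mean) / (baseline_std / window ** 0.5)
        return abs(z) > z_threshold

    return check
```

The same check can be fed the simulated lifetime data described in the last bullet, to test whether a slow drift of a percent or two would actually be caught.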

Perhaps the most important element in securing any technology is effective training of the people who will deploy and operate it. False alarms can be just as costly and dangerous as missed alarms. Support personnel should understand clearly how the technology works and their responsibilities in managing it.

 

Trane High Performance Buildings Day (part 1)

— May 24, 2011

Last week, I had the pleasure of attending Ingersoll Rand’s High Performance Buildings Day in New York. The event took the pulse of high performance buildings within the real estate market. It also dove into greater detail about the physical underpinnings that enable high performance buildings by going deep into the recesses of three high-profile buildings in Midtown – the AXA Equitable Building, the TIAA CREF building, and Rockefeller Center. Here, I’ll discuss the main conclusions of the event and, in a few days, follow up with additional perspectives on some of the interesting ideas that came up.

Ingersoll Rand and its brand of building equipment and services, Trane, define a high performance building as:

"Buildings that encompass major building attributes including energy efficiency, sustainable, lifecycle performance and occupant productivity, taking a whole-building approach to performance while creating space that are comfortable, safe, healthy and efficient."

This perspective focuses not only on the energy components of high-performance buildings but also on additional operational benefits to a building owner or tenant that come in the form of increased productivity, reduced absenteeism, reduced building system downtime, and others. While these benefits are known and recognized anecdotally within the industry, there is still work to be done before the lending community has sufficient data to support financing instruments that reflect the added value of high performance buildings. Still, many building owners and managers are convinced and are already making meaningful moves into high performance building.

The event started with a number of presentations. After an introductory presentation from Larry Wash, President, Global Services for the Climate Solutions Sector at Ingersoll Rand, Brian Gardner, Editor of Business Research for The Economist, discussed the findings of a recent study his group conducted on the drivers for energy efficiency in buildings worldwide.

Some of the findings of the study were surprising. For example, only 28% of the respondents worldwide (many of whom were C-suite executives) identified regulation on energy efficiency as a burden to their business. While the nature of such regulation can vary considerably, from prescriptive measures on energy efficiency (such as codes) to market-based instruments (such as commercial benchmarking laws), and support for these different types of regulation varies between geographies, the finding suggests that policies, if designed with involvement and buy-in from key stakeholders, can win support from private-sector organizations.

A panel discussion featuring a number of other important names in the energy efficiency world, including Vatsal Bhatt (Brookhaven National Lab), Deane Evans (New Jersey Institute of Technology), Greg Hale (Natural Resources Defense Council), Jeff Meaney (TIAA CREF), Karen Penafiel (BOMA International), and Louis Ronsivalli, Jr. (Trane), expanded on these issues. The conversation touched on other salient trends such as commercial PACE financing, commercial benchmarking laws, the shift from low-hanging fruit to bundling of energy efficiency measures to create an overall attractive efficiency investment portfolio, developments in China and India, and the new technology solutions that are enabling a more data-driven approach to efficiency. I’ll get deeper into these perspectives in a few days.


