Sometimes it seems that ZigBee is the technology everyone loves to hate.
Born circa 2002 out of the desire for a standard for the elusive “Internet of Things”, ZigBee is a set of specifications built on the IEEE 802.15.4 radio standard for connecting “things” ranging from home devices (light switches, thermostats, appliances, etc.) to industrial and building controls. Though the ZigBee standards process included the usual technical arguments, politics, and bickering, it ultimately produced a low-cost, self-healing, low-power (i.e., battery-operated), and scalable (i.e., thousands of nodes) technology. ZigBee includes both a mesh “network stack” and “application profiles” that specify the messages used in specific applications (home automation, building automation, etc.).
During ZigBee’s development, a fateful decision was made to NOT use the Internet Protocol suite. This seemed a rational decision at the time: maximum packet sizes are far smaller (127 bytes vs. 1,500+ for Ethernet), there was no IP mesh specification, no concept of battery-conserving “sleeping nodes”, and the 802.15.4 radio silicon had extremely constrained memory. These constraints led ZigBee engineers to eschew the overhead of IP’s layered architecture, and they set out to “build something that actually worked”. While most “IP engineers”, noting the power of the internet, saw this choice as just plain stupid, few worked to address the Internet-of-Things challenge. No small amount of professional (and sometimes personal) enmity emerged between these groups. (Full disclosure: I was a marketing executive with the leading ZigBee firm in 2007 and 2008.)
When the smart grid community began looking for a Home Area Network (HAN) solution, they latched onto ZigBee as the only viable, mature, multi-vendor solution available. They worked quickly to develop the “ZigBee Smart Energy Profile (SEP)” for HAN applications. Today, tens of millions of smart meters are deployed in Texas and California based on this initial specification.
However, the failure of ZigBee to leverage IP emerged as a critical flaw. ZigBee emerged as a vertically integrated set of solutions that was difficult to connect to the IP-based outside world without resorting to application-specific gateways, and whose application profiles were difficult to adapt to other protocols such as HomePlug and Wi-Fi. In contrast, at least theoretically, IP’s layering allows translation between lower layers of the protocol stack while keeping the application layers transparent.
When the NIST standards efforts got turbocharged in 2009 by ARRA stimulus funding, the obvious benefits of IP’s layering, combined with good politicking, led NIST to essentially mandate the use of IP-based protocols. Additionally, the 6LoWPAN specification emerged from the IETF describing how IP packets could be squeezed into small 802.15.4 frames. Many claimed a more powerful IP-based alternative to ZigBee could be developed in a smaller memory footprint. The ZigBee Alliance had no choice but to agree to an IP-based ZigBee standards rebuild. Smart engineers from both groups began earnestly working together to develop a new standards suite, nominally called “Smart Energy Profile (SEP) 2.0”. The reconciled groups made fast progress against impossibly aggressive deadlines.
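To see why squeezing IP into 802.15.4 frames is hard, and what 6LoWPAN buys, some back-of-the-envelope arithmetic helps. The figures below are the worst-case overheads cited in RFC 4944 (the original 6LoWPAN transmission spec); the 7-byte compressed-header figure is an illustrative best case, not a guaranteed one:

```python
# Rough 802.15.4 frame-budget arithmetic, using the worst-case
# overheads from RFC 4944. Shows the payload left for an application
# with and without 6LoWPAN header compression.

FRAME_MAX = 127        # maximum 802.15.4 PHY frame size (bytes)
MAC_OVERHEAD = 25      # maximum MAC header + frame check sequence
LINK_SECURITY = 21     # AES-CCM-128 link-layer security overhead

IPV6_HEADER = 40       # uncompressed IPv6 header
UDP_HEADER = 8         # uncompressed UDP header
IPHC_COMPRESSED = 7    # illustrative best-case compressed IPv6+UDP headers

budget = FRAME_MAX - MAC_OVERHEAD - LINK_SECURITY        # 81 bytes
uncompressed_payload = budget - IPV6_HEADER - UDP_HEADER  # 33 bytes
compressed_payload = budget - IPHC_COMPRESSED             # 74 bytes

print(uncompressed_payload, compressed_payload)  # 33 74
```

In other words, uncompressed IPv6/UDP headers would eat more than half of the usable frame; 6LoWPAN compression is what makes IP on these radios plausible at all.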
However, this April a draft SEP 2.0 ballot failed, causing old animosities to resurface. At issue is the choice of transport layer protocols: TCP and HTTP as is typical in today’s internet, or the UDP and CoAP (Constrained Application Protocol) combination. TCP/HTTP is notoriously inefficient in terms of bandwidth and end-node processing (witness generations of “TCP offload engines” in server network adapters), while UDP/CoAP is simpler, but new, unproven, and not yet in widespread use. While nuanced technical pros and cons exist, the heart of the matter is broader and has potentially serious industry implications.
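The bandwidth argument is easy to illustrate. The sketch below hand-assembles a minimal CoAP GET request per the binary layout in the CoAP drafts (later RFC 7252) and compares it with an equivalent minimal HTTP request; the resource name “temp” and host “meter.example” are made up for illustration:

```python
import struct

# Minimal CoAP GET for a hypothetical resource "/temp".
# Fixed 4-byte header: Ver=1, Type=0 (Confirmable), TKL=0 -> 0x40;
# Code 0.01 (GET) -> 0x01; then a 16-bit Message ID.
header = struct.pack("!BBH", 0x40, 0x01, 0x1234)
# One Uri-Path option: option number 11 (delta=11), value length 4
# -> option byte 0xB4, followed by the path segment itself.
options = bytes([0xB4]) + b"temp"
coap_get = header + options

# The equivalent bare-minimum HTTP/1.1 request (text headers).
http_get = b"GET /temp HTTP/1.1\r\nHost: meter.example\r\n\r\n"

print(len(coap_get), len(http_get))  # 9 43
```

Nine bytes versus forty-three for the most trivial possible request, before TCP's connection setup and acknowledgment traffic are even counted; on a 127-byte radio frame, that difference is material.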
ZigBee is most often implemented in “systems-on-chip (SoC)” that combine a processor, radio, memory, and other functions into a single low-cost chip. Fitting the ZigBee software into these constrained devices was a concern even before the move towards IP. Despite optimism that IP-based code would be smaller, current draft implementations are significantly larger, and TCP/HTTP in particular stresses the RAM capacity in these devices. This potentially threatens the upgradeability of millions of ZigBee-enabled meters and devices already deployed. For ZigBee SoC vendors and their customers, this is a serious concern. For others, mandating a new, albeit more efficient, protocol is too much to ask when ubiquitous protocols already exist, even if those ubiquitous protocols fundamentally challenge the existing hardware. And here enters the politics….
The TCP/HTTP advocates (roughly the original IP proponents) charge that the UDP/CoAP advocates (roughly the original ZigBee proponents) are deliberately stalling SEP 2.0 in order to force the industry to lock in their original ZigBee solutions (SEP 1.x) for upcoming HAN rollouts. The UDP/CoAP folks counter that they just want a more scalable solution and to protect existing investments. Besides, they say, they already have SEP 2.0 solutions available, so there is no advantage to a delay. They claim the installed base is not being taken seriously, and that some technology vendors that lost the initial HAN selections, such as Wi-Fi, might benefit if existing ZigBee installations were rendered obsolete. So there are many possible political motivations surrounding this ostensibly technical disagreement.
In the meantime, utilities and their suppliers are largely caught in the middle. If they have not been paying close attention, they should start. Even if UDP/CoAP is a technical kludge, kludges have succeeded before in support of existing installed bases. Just look at PPP-over-Ethernet, a spec that allows use of dial-up modem protocols over Ethernet and ATM-based DSL links. There is nothing particularly elegant about it, yet it allowed an easier carrier infrastructure transition from dial-up internet access to today’s ubiquitous broadband.
The worst possible outcome would be a stalemate adding to HAN technology deployment delays. Unfortunately, this appears to be the most likely outcome, and it contributes to our relatively pessimistic view of near-term HAN adoption.