Navigant Research Blog

The UK Green Investment Bank – A Hidden Subsidy for the Nuclear Industry?

— May 31, 2011

When the UK coalition government came to power in May 2010, it vowed to be the greenest UK government to date. As part of the promised drive toward the (somewhat mythical in the UK) green economy came the announcement of a Green Investment Bank. Details on the Bank – its powers, location, size, and so on – were very sketchy until May 23 of this year.

At the start of this week, Deputy Prime Minister and Liberal Democrat leader Nick Clegg started to flesh out the details alongside Business Secretary Vince Cable.

This is what we know – and still don’t know – so far:

  • The UK government has stated that the Bank will be established under primary legislation, meaning that it will have full operational independence from the government, under the leadership of a board.
  • The Bank will start lending money in April 2012 but will not be able to raise money itself until 2015.
  • The UK government has set aside £3 billion ($5 billion) of initial seed capital, which it hopes will catalyze a further £15 billion ($25 billion) in green investment – though there is no clear indication of where this £15 billion will come from.
  • The advisory group, which is in essence setting up the Bank, is to be chaired by Sir Adrian Montague, currently chairman of private equity group 3i but formerly chairman of nuclear operator British Energy (poacher turned gamekeeper?).
  • The initial priorities of the Bank will be investments in major infrastructure projects such as offshore wind, waste, and non-domestic energy efficiency.
  • What we still don’t know is the size or location of the Bank, though it is expected to employ about 50 to 100 people. Its size could shift depending on where it ends up.

The two elephants in the room are its relationship to the nuclear industry and its location.

Nuclear – green or not?  The UK’s relationship with nuclear energy is contentious, at best. Our current government said that it was not shutting the door on nuclear but that it had to operate on a “level playing field” and would not be given access to specific government subsidies.

Since then, both sides of the debate have been drawing up documents, reports, and retorts showing the argument as they see it, and decrying the stupidity of the other side. Such is the UK energy market!

Now, getting back to the Green Investment Bank: remember the phrase “major infrastructure projects”? While investment in nuclear has been ruled out in the first round of funding, the government has said it is not ruled out for the second round. Given that the Bank will be independent from the government and that the person positioned to run it is an ex-nuclear chief, it is not a huge leap to think that, after 2015, some of the billions the Bank has to invest will go into new nuclear. But…

The second elephant is location, and to me at least, this could be the swing factor. The three locations in the running for the Bank are Bristol, London, and Edinburgh. I understand that a lot of non-Brits will shrug their shoulders and say “so what?”

Quick geography reminder – to give the UK its full title, it is the United Kingdom of Great Britain and Northern Ireland, with Great Britain made up of England, Scotland, and Wales – three countries under one flag. Through the process of devolution, Scotland once again has an increasing number of powers of its own, independent of Westminster. Among these devolved powers are various aspects of energy regulation (e.g. electricity; coal, oil, and gas; nuclear energy), so Scotland effectively sets its own energy policy. Holyrood, home of the Scottish Parliament, was the setting in 2010 for the First Minister’s announcement that Scotland’s renewable electricity target was being raised from 50% to 80% by 2020, with 100% as an aspirational target. It bears repeating: renewable electricity, not nuclear electricity. Scotland’s First Minister even went so far as to rule out new nuclear plants in Scotland.

Now, hopefully, you can see the importance of location. If the Bank went to Edinburgh, it could, and should, be used to strengthen Scotland’s position as a leading light in renewable energy, with Edinburgh at its heart. The investments would be UK-wide, but the soul would be Scottish. Would nuclear still be on the table? Probably, but interest in other options would be much higher than if the Bank were in London, home of the old way of doing things. Bristol is the outsider, but it too is in England and therefore misses that call to arms for renewables.

The UK’s energy picture is murky at best with the Feed-in Tariffs (FITs) being messed around with, new policies coming out piecemeal and building codes being relaxed. The Green Investment Bank has the opportunity to be a beacon, a call to arms, for my country. I just hope it hasn’t already been bought and sold to the nuclear industry.

NOTE: For the sake of full disclosure, I am against the building of a new generation of nuclear power plants in the UK. Not only do I believe that nuclear is not sustainable, but also that it is not needed. With new distributed generation technologies on the way, increased energy efficiency, and more efficient use of the resources we do have – natural gas and renewables – massive, long-lead-time energy projects such as these hark back to old times.

 

Trane High Performance Buildings Day (part 2)

— May 31, 2011

In continuation of my previous post, the discussions at Trane’s High Performance Buildings Day in New York on May 19 yielded some interesting perspectives on the state of energy efficiency in the U.S.

One panel discussion centered on the operational phase of a building. According to Trane, operating costs typically account for 60-85% of a building’s lifecycle costs, while the design and construction phase – where concerns about cost are most readily voiced – represents just 5-10% of the total. Although energy costs may represent only a few percentage points of those lifecycle costs, Larry Wash and Louis Ronsivalli of Trane described a shift toward thinking more holistically about efficiency (not just of energy but of operational processes in general).

Even an efficiently designed building will drift from its original parameters and standards over time, and, in the long term, a building with a disciplined recommissioning program could outperform an efficiently designed building that lacks such processes. Wash and Ronsivalli indicated that Trane’s customers are starting to come to them looking to go beyond efficient equipment installation and into the possibility of hiring Trane as a longer-term service provider for facility operations.

The building tours drove this point home. Several of the buildings not only had an energy-efficient chiller plant but also a thermal storage system that freezes water into ice using electricity purchased at night, then uses that ice for cooling during the day. The chillers had more than enough capacity to meet the cooling demand of the buildings, so why add the thermal storage system into the mix? Demand charges. Facilities managers in ConEdison’s territory pay not only for the energy they consume but also a fee for their maximum demand in a given month, measured in $/kW.

For example, in one of the buildings visited, the facilities manager pays $36/kW per month for the demand portion of the utility bill. For a facility that peaks at 500 kW, that’s $18,000 a month for demand alone. Shifting a portion of the building’s demand to the nighttime reduces the maximum power draw and can significantly reduce the demand charge. The financial viability of this, however, depends not only on installing the right systems up front but also on maintaining them over the long term to ensure that demand stays low. In states like New York, where electricity tariffs are high, maintaining operational efficiency is a high priority.
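To put rough numbers on that trade-off, here is a minimal sketch of the demand-charge arithmetic, using the $36/kW rate from the example. The load profile and the resulting savings are hypothetical, and energy charges, chiller efficiency, and ice-making losses are ignored.

```python
# Hedged sketch: demand charges and the effect of shifting cooling load to
# off-peak thermal storage. The $36/kW rate comes from the post; the load
# figures below are illustrative, not actual ConEdison tariff or meter data.

DEMAND_RATE = 36.0  # $/kW per month

def monthly_demand_charge(hourly_load_kw):
    """Demand charge is billed on the single highest hourly draw in the month."""
    return max(hourly_load_kw) * DEMAND_RATE

# Hypothetical 24-hour profile: 150 kW overnight, 300 kW daytime base,
# plus a 200 kW chiller running through the afternoon peak.
baseline = [150] * 6 + [300] * 6 + [500] * 6 + [300] * 6      # peak = 500 kW

# With ice storage, the chiller work moves to the overnight hours,
# where the base load is lower, so the monthly peak falls.
with_storage = [350] * 6 + [300] * 6 + [300] * 6 + [300] * 6  # peak = 350 kW

print(f"Baseline demand charge:     ${monthly_demand_charge(baseline):,.0f}/month")      # $18,000
print(f"With storage demand charge: ${monthly_demand_charge(with_storage):,.0f}/month")  # $12,600
```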

On a different note, the panelists spoke about the implications of the EIA cutting funding for the Commercial Buildings Energy Consumption Survey (CBECS). That database drives the EPA’s ENERGY STAR commercial building benchmarking system (which is being adopted by cities and states for commercial benchmarking programs) as well as LEED for Existing Buildings. Without up-to-date figures (the last edition, detailing the U.S. building stock in 2003, came out in 2008), the accuracy of these energy efficiency certification programs will come into question, potentially creating roadblocks to further adoption. For the building industry, the suspension of CBECS will likely slow the uptake of energy efficiency in the near term.

 

The ZigBee “IP-ification” Wars

— May 30, 2011

Sometimes it seems that ZigBee is the technology everyone loves to hate.

Born circa 2002 out of the desire for a standard for the elusive “Internet of Things”, ZigBee is a set of specifications using the IEEE 802.15.4 radio standard for connecting “things” ranging from home devices (light switches, thermostats, appliances, etc.) to industrial and building controls.  Though the ZigBee standards process included the usual technical arguments, politics, and bickering, it ultimately resulted in a low cost, self-healing, low power (i.e., battery operated) and scalable (i.e. thousands of nodes) technology. ZigBee includes both a mesh “network stack” and “application profiles” that specify the messages used in specific applications (home automation, building automation, etc.).

During ZigBee’s development, a fateful decision was made to NOT use the Internet Protocol suite. This seemed a rational decision at the time: maximum packet sizes are far smaller (127 bytes vs. 1,500+), there was no IP mesh specification, no concept of battery-conserving “sleeping nodes”, and the 15.4 radio silicon had extremely constrained memory. These constraints led ZigBee engineers to eschew the overhead of IP’s layered architecture, and they set out to “build something that actually worked”. While most “IP engineers”, noting the power of the internet, saw this choice as just plain stupid, few worked to address the Internet-of-Things challenge. No small amount of professional (and sometimes personal) enmity emerged between these groups. (Full disclosure: I was a marketing executive with the leading ZigBee firm in 2007 and 2008.)

When the smart grid community began looking for a Home Area Network (HAN) solution, they latched onto ZigBee as the only viable, mature, multi-vendor solution available.  They worked quickly to develop the “ZigBee Smart Energy Profile (SEP)” for HAN applications.  Today, tens of millions of smart meters are deployed in Texas and California based on this initial specification.

However, ZigBee’s failure to leverage IP emerged as a critical flaw. ZigBee became a vertically integrated set of solutions that was difficult to connect to the IP-based outside world without resorting to application-specific gateways, and whose application profiles were difficult to adapt to other protocols such as HomePlug and Wi-Fi. In contrast, at least theoretically, IP’s layering allows translation between lower layers of the protocol stack while keeping the application layers transparent.

When the NIST standards efforts got turbocharged in 2009 by ARRA stimulus funding, the obvious benefits of IP’s layering, combined with good politicking, led NIST to essentially mandate the use of IP-based protocols. Additionally, the 6LoWPAN specification emerged from the IETF, describing how IP packets could be squeezed into small 15.4 frames. Many claimed a more powerful IP-based alternative to ZigBee could be developed in a smaller memory footprint. The ZigBee Alliance had no choice but to agree to an IP-based rebuild of the ZigBee standards. Smart engineers from both groups began earnestly working together to develop a new standards suite, nominally called “Smart Energy Profile (SEP) 2.0”. The reconciled groups made fast progress against impossibly aggressive deadlines.
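To see why squeezing IP packets into 15.4 frames is hard, here is a rough, back-of-the-envelope sketch of the frame budget. The byte counts are approximations (they vary with addressing modes and security settings), not figures taken from the specifications.

```python
# Rough frame-budget arithmetic for IPv6 over IEEE 802.15.4.
# All overhead values are approximate, worst-case illustrations.

MAX_FRAME     = 127   # 802.15.4 PHY payload limit, in bytes
MAC_OVERHEAD  = 25    # MAC header + FCS, roughly worst case
SECURITY      = 21    # link-layer AES-CCM overhead, roughly worst case

IPV6_HEADER   = 40    # uncompressed IPv6 header
UDP_HEADER    = 8     # uncompressed UDP header
SIXLOWPAN_MIN = 6     # best-case compressed IPv6+UDP headers via 6LoWPAN (approx.)

link_budget          = MAX_FRAME - MAC_OVERHEAD - SECURITY        # ~81 bytes
uncompressed_payload = link_budget - IPV6_HEADER - UDP_HEADER     # ~33 bytes
compressed_payload   = link_budget - SIXLOWPAN_MIN                # ~75 bytes

print(f"Application bytes per frame, uncompressed IPv6/UDP: {uncompressed_payload}")
print(f"Application bytes per frame with 6LoWPAN compression: {compressed_payload}")
```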

However, this April a draft SEP 2.0 ballot failed, causing old animosities to resurface. At issue is the choice of transport and application protocols: TCP and HTTP, as is typical on today’s internet, or UDP and CoAP (the Constrained Application Protocol). TCP/HTTP is notoriously inefficient in terms of bandwidth and end-node processing (witness generations of “TCP offload engines” in server network adapters), while UDP/CoAP is simpler but new, unproven, and hence obviously not in widespread use. While nuanced technical pros and cons exist, the heart of the matter is broader and has potentially serious industry implications.
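As a rough illustration of the efficiency argument, the sketch below compares the on-the-wire size of a minimal HTTP request with a roughly equivalent CoAP request. The URI, host address, and headers are made up for the example, and the CoAP byte layout shown is a simplified illustration of the 4-byte-header format rather than a complete encoder.

```python
# Illustrative size comparison only: application-layer bytes of one request.
# This ignores TCP connection setup, acknowledgements, and retransmission
# state, all of which add further cost to the HTTP/TCP path on constrained links.

http_request = (
    b"GET /meter/demand HTTP/1.1\r\n"
    b"Host: 192.168.1.20\r\n"           # hypothetical HAN device address
    b"Accept: application/json\r\n"
    b"\r\n"
)

# A CoAP GET: 4-byte fixed header (version/type, code, message ID)
# followed by two short Uri-Path options ("meter", "demand").
coap_request = (
    bytes([0x40, 0x01, 0x12, 0x34])   # CON message, code GET, message ID 0x1234
    + bytes([0xB5]) + b"meter"        # Uri-Path option, 5 bytes
    + bytes([0x06]) + b"demand"       # Uri-Path option, 6 bytes
)

print(f"Minimal HTTP request: {len(http_request)} bytes")   # ~76 bytes
print(f"Minimal CoAP request: {len(coap_request)} bytes")   # ~17 bytes
```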

ZigBee is most often implemented in “systems-on-chip” (SoCs) that combine a processor, radio, memory, and other functions into a single low-cost chip. Fitting the ZigBee software into these constrained devices was a concern even before the move toward IP. Despite optimism that IP-based code would be smaller, current draft implementations are significantly larger, and TCP/HTTP in particular stresses the RAM capacity of these devices. This potentially threatens the upgradeability of millions of ZigBee-enabled meters and devices already deployed. For ZigBee SoC vendors and their customers, this is a serious concern. For others, being forced onto a new, albeit more efficient, protocol is too much to ask when ubiquitous protocols already exist – even if those ubiquitous protocols fundamentally challenge the existing hardware. And here enter the politics….

The TCP/HTTP advocates (roughly the original IP proponents) charge that the UDP/CoAP advocates (roughly the original ZigBee proponents) are deliberately stalling SEP 2.0 in order to force the industry to lock in their original ZigBee solutions (SEP 1.x) for upcoming HAN rollouts. The UDP/CoAP folks counter that they just want a more scalable solution and to protect existing investments. Besides, they say, they already have SEP 2.0 solutions available, so there is no advantage to a delay. They claim the installed base is not being taken seriously, and that some technology vendors that lost the initial HAN selections, such as Wi-Fi, might benefit if existing ZigBee installations were rendered obsolete. So there are many possible political motivations surrounding this ostensibly technical disagreement.

In the meantime, utilities and their suppliers are largely caught in the middle.  If they have not been paying close attention, they should start.  Even if UDP/CoAP is a technical kludge, it has happened before in support of existing installed bases – just look at PPP-over-Ethernet, a spec that allows use of dial-up modem protocols over Ethernet and ATM-based DSL links.  There is nothing particularly elegant about this, yet it allowed an easier carrier infrastructure transition from dial-up internet access to today’s ubiquitous broadband.

The worst possible outcome would be a stalemate that adds to HAN technology deployment delays. Unfortunately, this appears to be the most likely outcome, and it contributes to our relatively pessimistic view of near-term HAN adoption.

 

Energy Storage Can Make the Market More Efficient

— May 27, 2011

There is a fundamental disconnect between electricity generation and consumption. This makes the market inefficient. Moreover, electricity is a perishable good, which magnifies this inefficiency.

Currently, generation must be aligned with consumption in real time. It also happens that consumption is not particularly smooth – depending on the time of day, the weather, and the location, one area may have a moderate consumption profile while another has spikes that are difficult to predict. This means generation is ramped up and down in line with consumption, which is itself volatile. It’s no wonder electricity prices are rising and some utilities and grid operators face increasing challenges in delivering electrons where they are needed.

In practice, base load generation assets, peak generation assets, intermittent assets, and energy storage assets (where available) are orchestrated to deliver the right amount of electricity to the grid at each moment. However, at every turn there are opportunities to maximize the efficiency of generation. Base load assets – geothermal, coal- and gas-fired power plants, nuclear power plants, and the like – can be optimized for efficiency instead of being cycled up and down to accommodate consumption. Peak generation assets such as natural gas peakers are well known to grid operators but are expensive to operate; their benefit is that they are a proven technology that can provide energy when the grid needs it most. Intermittent assets such as wind and solar contribute to generation, but their benefit is limited by their inherent intermittency. Finally, energy storage assets – still limited in deployment, and consisting mostly of pumped storage, compressed air energy storage, and batteries of various types – are the key to maximizing all generation assets.

Energy storage provides a storehouse for energy between the time it is generated and the time it is consumed. This makes the market more efficient in several ways (the short sketch after the list illustrates the smoothing effect):

  • Generation assets can be optimized and maximized
  • Generation can be distributed
  • Generation can be intermittent (as with some renewables)
  • Consumption can be managed (as in grid congestion)
  • Consumption can be smoothed (as with load-side storage)
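As a toy illustration of the last two bullets, the sketch below smooths a hypothetical daily load curve with load-side storage: charge when demand is below the daily average, discharge when it is above. All numbers are invented, and round-trip losses and state-of-charge limits are ignored.

```python
# Minimal, hypothetical sketch of load smoothing with storage.
# demand_mw is an invented 24-hour load curve; no real system data is used.

demand_mw = [60, 55, 52, 50, 55, 65, 80, 95, 100, 98, 96, 97,
             99, 100, 102, 105, 110, 115, 112, 100, 90, 80, 70, 62]

POWER_LIMIT_MW = 15                          # max charge/discharge rate
target = sum(demand_mw) / len(demand_mw)     # aim generation at the daily average

net_load = []
for d in demand_mw:
    # Positive dispatch = storage discharging to serve load;
    # negative dispatch = storage charging from the grid.
    dispatch = max(-POWER_LIMIT_MW, min(POWER_LIMIT_MW, d - target))
    net_load.append(d - dispatch)

print(f"Peak demand without storage: {max(demand_mw)} MW")
print(f"Peak seen by generators:     {max(net_load):.1f} MW")
```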

Of course, only a few technologies have had widespread success serving the energy storage market – and widespread success is a relative term. Although there are approximately 100 GW of pumped storage, for instance, countries such as India and China are increasing energy consumption at a voracious clip. And who can blame them? Energy is a substitute for work and is the key to engineering economic growth in export-driven economies. Hopefully, middle-income and even high-income countries will come to understand the benefit of storage in making the energy market more efficient.

 
