Navigant Research Blog

Energy Storage Can Make the Market More Efficient

— May 27, 2011

There is a fundamental disconnect between when electricity is generated and when it is consumed, and this makes the market inefficient. Moreover, electricity is a perishable good, which magnifies the inefficiency.

Today, generation must track consumption in real time. Consumption, in turn, is not particularly smooth: depending on the time of day, the weather, and the location, one area may have a moderate consumption profile while another sees spikes that are difficult to predict. Generation is therefore ramped up and down in step with consumption, which is itself volatile. It’s no wonder electricity prices are rising and some utilities and grid operators face growing challenges delivering electrons where they are needed.

In practice, baseload generation assets, peaking assets, intermittent assets, and energy storage assets (where available) are orchestrated to deliver the right amount of electricity to the grid at each moment. At every turn, though, there are opportunities to improve efficiency. Baseload assets such as geothermal, coal- and gas-fired power plants, and nuclear power plants can be optimized for efficiency instead of being cycled up and down to accommodate consumption. Peaking assets such as natural gas peakers are well known to grid operators but are expensive to operate; their benefit is that they are a proven technology that can provide energy when the grid needs it most. Intermittent assets such as wind and solar contribute to generation, but their value is limited by their inherent variability. Finally, energy storage assets – still limited in deployment, and consisting mostly of pumped storage, compressed air energy storage, and batteries of various types – are the key to maximizing all of the other generation assets.

Energy storage provides a buffer between the time energy is generated and the time it is consumed. This makes the market more efficient in several ways:

  • Generation assets can be optimized and maximized
  • Generation can be distributed
  • Generation can be intermittent (as with some renewables)
  • Consumption can be managed (as in grid congestion)
  • Consumption can be smoothed (as with load-side storage)
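One way to see the market-efficiency argument concretely is energy arbitrage: a storage asset buys (charges) when electricity is cheap and sells (discharges) when it is expensive, which both earns a return and flattens the price spread. The sketch below is purely illustrative and not from the article; the function name, the hypothetical day-ahead prices, and the ~75% round-trip efficiency (a commonly cited ballpark for pumped storage) are all assumptions for the example.

```python
def arbitrage_value(prices, capacity_mwh, efficiency=0.75):
    """Toy estimate of daily arbitrage value for a storage asset:
    charge fully at the cheapest hour, discharge at the dearest.
    Real dispatch is an optimization over many more constraints;
    efficiency models round-trip losses (e.g. ~75% for pumped hydro)."""
    buy = min(prices)    # $/MWh at the cheapest hour
    sell = max(prices)   # $/MWh at the most expensive hour
    return capacity_mwh * (sell * efficiency - buy)

# Hypothetical day-ahead prices ($/MWh): cheap overnight, evening peak spike
hourly_prices = [22, 20, 19, 21, 25, 30, 38, 45, 50, 48, 46, 44,
                 43, 45, 52, 60, 75, 90, 82, 65, 50, 40, 30, 25]
value = arbitrage_value(hourly_prices, capacity_mwh=100)
```

Even this crude single-cycle model shows why storage narrows peak-to-trough price spreads: every megawatt-hour shifted from the $19 hour to the $90 hour removes demand from the peak and adds it to the trough.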

Of course, only a few technologies have had widespread success serving the energy storage market – and widespread success is a relative term. Although there is approximately 100 GW of pumped storage worldwide, for instance, countries such as India and China are increasing energy consumption at a voracious clip. And who can blame them? Energy is a substitute for work and is the key to engineering economic growth in export-driven economies. Hopefully, middle-income and even high-income countries will come to understand the benefit of storage in making the energy market more efficient.

 

Do Machines Always Tell the Truth?

— May 26, 2011

Recently, a colleague posted an ironic throwaway comment on Twitter. The Internet Crime Complaint Center had warned its readers that people on Internet dating sites may not always tell the truth. My friend wrote, “BREAKING: People lie on the Internet.” I piled on with my own disposable comment: “So when we finally get the Internet of Things, we can expect Things to lie as well?”

We had a laugh about it, but then I got to thinking: this could become a serious problem. Sure, a machine has no conscience, so hopefully we don’t have to worry about a sensor going over to the dark side on its own. But those who build machines can start them out on the dark side.

One subtle attack against an infrastructure is to insert process measurement devices that give slightly erroneous readings – errors not large enough to stand out, but sufficient to cause operators or control software to draw invalid conclusions. (Analogously, many of the most successful financial frauds operate at low amounts, under the radar of built-in countermeasures.) Even more insidious, such a device could be set to read accurately for a given period, perhaps the first two years of its service life, and then begin reporting incorrectly.
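The time-delayed variant of this attack is easy to model. The sketch below is purely hypothetical (the class name, the 2% bias, and the honest-period threshold are all invented for illustration); it shows how a tampered device can pass any short acceptance test and only later begin injecting a small, plausible-looking bias.

```python
import random

class TamperedSensor:
    """Hypothetical model of a sensor that reports faithfully for an
    initial period, then silently adds a small multiplicative bias --
    small enough to look like ordinary calibration drift."""

    def __init__(self, true_reading_fn, honest_samples, bias=0.02):
        self.true_reading_fn = true_reading_fn  # ground-truth process value
        self.honest_samples = honest_samples    # readings before tampering activates
        self.bias = bias                        # fractional error (2% here)
        self.count = 0

    def read(self):
        value = self.true_reading_fn()
        self.count += 1
        if self.count > self.honest_samples:
            value *= (1 + self.bias)  # reads ~2% high after the honest period
        return value

# Example: a process temperature that truly sits near 350.0 degrees.
# The first 3 readings are honest; the rest are biased high.
sensor = TamperedSensor(lambda: 350.0 + random.gauss(0, 0.5), honest_samples=3)
readings = [sensor.read() for _ in range(6)]
```

Note that every individual biased reading is well within the plausible range for the process, which is exactly what makes pre-deployment spot checks insufficient.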

Such modifications could have obvious applications in planning an infrastructure attack and hiding its existence for as long as possible. Think of it as the inverse of the Stuxnet attack. Stuxnet sends malicious control codes to devices so that they will fail. What if malicious data were intentionally sent from the measurement devices? Some disturbing scenarios are possible – trains proceeding when they should stop or electric vehicles showing full charge when their batteries are nearly discharged. Incorrect temperature data in manufacturing processes can seriously undermine safety.

Process management systems assume that input data from measurement devices are accurate. So do operators, in the absence of any evidence to the contrary. Fed inaccurate data, operators or systems could unwittingly initiate the attacks themselves, with no outside involvement needed. Forensics for such an attack would be extremely challenging.

Recently this blog suggested that while many people know how to abuse your bank account, not many know how to abuse a phase angle measurement. Unfortunately, those few who do are very smart and may understand how to cause considerable harm.

Hopefully these problems are still in the future, but no one can guarantee that. Regardless, there are steps that can be taken when developing or deploying process control devices:

  • Where possible, deploy devices that communicate in a secure manner to protect data integrity. Encryption is not the answer to all problems but it helps preserve data integrity and prevent man-in-the-middle attacks.
  • Develop new processors using secure manufacturing techniques, which can validate that the finished product exactly matches the initial design – this is helpful in preventing firmware backdoors.
  • Operation of control networks should include regular statistical analysis of readings, to identify abnormalities or trends slowly creeping away from historical averages. These trends may not necessarily be due to erroneous readings, but still should be investigated.
  • Pre-deployment testing of devices should simulate operation over the service life of the device, not simply verify that the device will function correctly on the first day. This can produce a tremendous amount of data, which can be used to test the statistical analyses described in the previous bullet.
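The statistical screening in the third bullet can be surprisingly simple in its basic form. The sketch below is an illustrative assumption, not a described Navigant method: it flags a window of recent readings whose mean has drifted too many standard errors from the historical baseline. A real plant would use proper control charts (CUSUM, EWMA) tuned to the process, but even this crude z-test catches the 2% bias that is invisible reading-by-reading.

```python
from statistics import mean, stdev

def drift_alarm(baseline, recent, z_threshold=3.0):
    """Flag when the mean of recent readings sits more than z_threshold
    standard errors away from the historical baseline mean.
    Deliberately minimal; real deployments would use CUSUM/EWMA charts."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    stderr = sigma / (len(recent) ** 0.5)  # standard error of the window mean
    z = abs(mean(recent) - mu) / stderr
    return z > z_threshold

# Stand-in history for a process that truly sits near 350.0 degrees
baseline = [350.0 + 0.5 * ((i % 7) - 3) / 3 for i in range(500)]

honest   = [350.1, 349.8, 350.3, 349.9, 350.0]  # normal noise
tampered = [357.2, 356.8, 357.1, 356.9, 357.0]  # ~2% high, each value plausible
```

The tampered window trips the alarm while the honest one does not; as the bullet above notes, a trip is a trigger for investigation, since real process changes can drift readings just as readily as sabotage.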

Perhaps the most important element in securing any technology is effective training of the people who will deploy and operate it. False alarms can be just as costly and dangerous as missed alarms. Support personnel should understand clearly how the technology works and their responsibilities in managing it.

 

Trane High Performance Buildings Day (part 1)

— May 24, 2011

Last week, I had the pleasure of attending Ingersoll Rand’s High Performance Buildings Day in New York. The event took the pulse of high performance buildings within the real estate market and dove into the physical underpinnings that enable them, going deep into the recesses of three high-profile buildings in Midtown – the AXA Equitable Building, the TIAA CREF building, and Rockefeller Center. Here, I’ll discuss some of the main conclusions of the event; in a few days, I’ll follow up with additional perspectives on some of the interesting ideas that came up.

Ingersoll Rand and its brand of building equipment and services, Trane, define a high performance building as:

"Buildings that encompass major building attributes including energy efficiency, sustainability, lifecycle performance, and occupant productivity, taking a whole-building approach to performance while creating spaces that are comfortable, safe, healthy, and efficient."

This perspective focuses not only on the energy components of high-performance buildings but also on additional operational benefits to a building owner or tenant that come in the form of increased productivity, reduced absenteeism, reduced building system downtime, and others. While these benefits are known and recognized anecdotally within the industry, there is still work to be done before the lending community has sufficient data to support financing instruments that reflect the added value of high performance buildings. Still, many building owners and managers are convinced and are already making meaningful moves into high performance building.

The event started with a number of presentations. After an introductory presentation from Larry Wash, President, Global Services for the Climate Solutions Sector at Ingersoll Rand, Brian Gardner, Editor of Business Research for The Economist, discussed the findings of a recent study his group conducted on the drivers for energy efficiency in buildings worldwide.

Some of the findings of the study were surprising. For example, only 28% of respondents worldwide (many of them C-suite executives) identified energy efficiency regulation as a burden to their business. The nature of such regulation varies considerably, from prescriptive measures (such as building codes) to market-based instruments (such as commercial benchmarking laws), and support for these different types of regulation varies by geography. Still, the finding suggests that policies designed with involvement and buy-in from key stakeholders can win support from private-sector organizations.

A panel discussion from a number of other important names in the energy efficiency world including Vatsal Bhatt (Brookhaven National Lab), Deane Evans (New Jersey Institute of Technology), Greg Hale (Natural Resources Defense Council), Jeff Meaney (TIAA CREF), Karen Penafiel (BOMA International) and Louis Ronsivalli, Jr. (Trane) expanded on these issues. The conversation touched on some of the other salient trends such as commercial PACE financing, commercial benchmarking laws, the shift from low-hanging fruit to bundling of energy efficiency measures to create an overall attractive efficiency investment portfolio, developments in China and India, and the new technology solutions that are enabling a more data-driven approach to efficiency. I’ll get deeper into these perspectives in a few days.



 

Fuel Cell Vehicles and EVs: More Alike Than Not

— May 23, 2011

For the last few decades, interest in (and DOE funding for) fuel cell vehicles (FCVs) and electric vehicles (EVs) has been somewhat inversely correlated, with one rising while the other falls. Most conversations and development efforts have focused on one vehicle architecture or the other, not both. But taking a view from above, these two platforms are more alike in form and function than they are different.

Fifteen years ago, during the Clinton administration, EV development and interest were progressing rapidly. Then, after California backed off on requiring zero-emission vehicles and the George W. Bush administration took over, fuel cells were prioritized. EVs and the rest of the plug-in vehicle family are getting more attention today, though FCVs continue to fight for respect. But the two worlds are converging, and the automotive industry would likely benefit if the research and advocacy groups dropped their swords and worked in parallel.

Mercedes-Benz, with its E-Cell and F-Cell programs, is among the OEMs positioning FCVs and plug-in electric vehicles (PEVs) as complementary platforms that can extensively share technology. The architectures are alike in that both battery electric vehicles (BEVs) and FCVs use electricity alone for propulsion. While a BEV draws propulsion power solely from batteries, an FCV uses a fuel cell in conjunction with batteries (or ultracapacitors) for propulsion power and energy storage. Neither vehicle directly generates emissions, and many of the secondary vehicle systems use the same or similar electronic components.

A look at a BEV and an FCV in development by Mercedes-Benz shows that these vehicles are really apples from the same family tree:


The fuel cell’s biggest advantage today is driving range: an FCV offers nearly double the range of a comparable BEV, and the fuel cell/battery combination delivers more power and greater acceleration than batteries alone. BEVs, for their part, are being sold commercially in greater numbers and benefit from a growing public charging infrastructure that far surpasses the hydrogen refueling infrastructure (at least outside Southern California).

As my erudite colleague has pointed out, combining the technologies into a range-extended FCV makes sense on many levels, and our conversations with OEMs indicate that they are kicking the tires on the idea. Mercedes-Benz (with its BlueZero platform), GM, and Ford all see benefits in creating flexible platforms where drivetrain architectures can be mixed and matched like Garanimals. When only electricity is used for propulsion, components such as electric motors can be leveraged most effectively across variants.

 
