Navigant Research Blog

Steady Improvements Crucial to Building Better Batteries

— June 28, 2018

As the quest to create a better battery continues, advanced battery research firms and manufacturers are looking for the best ways to optimize their technologies for the energy storage applications of the future. Navigant Research believes that focusing on incremental improvements to advanced batteries will be the best path forward in the near term. Doing so presents a logical roadmap that allows companies and research agencies to achieve realistic results.

Pack Configuration Doesn’t Get Enough Attention

While much research has gone into the active components of the battery (i.e., the electrode and electrolyte materials), an important feature of the cell that is often overlooked is the pack configuration. An optimized cell design based on the battery's intended use case can increase the cell's efficiency by up to 23% while using the same chemistry. Because lithium cells tend to swell during charge/discharge cycles, the enclosure must be robust enough to endure mechanical strain and maintain structural integrity. There are currently three main cell configurations:

  • Cylindrical: Components are encased tightly in a can. Often used in smaller electronic applications, and can be manufactured more quickly than the other formats.
  • Prismatic: Electrodes are flat and require slightly thicker walls than cylindrical cells to compensate for decreased mechanical stability. These cells are well suited to maximizing space utilization.
  • Pouch: Has the most efficient packing structure of all cell designs and can achieve up to 95% space utilization. The pouch’s tendency to swell discourages use in certain applications and environments.

All three battery cell formats have strengths and weaknesses. The choice between pouch and cylindrical cells remains unsettled; Navigant Research expects cylindrical and pouch cells to be the most economically feasible with respect to energy density. This consideration is particularly important in motive applications.

Stability Is Crucial for Continuing Battery Innovation

Advanced battery OEMs are pursuing competing strategies to tackle EV battery pack-related issues. While Tesla has reported high power and energy density for its cylindrical NCM batteries, Korean battery giant SK Innovation is working on enhanced NCM 811 battery cells in prismatic form. The industry standard for NCM batteries has been 622 (i.e., 60% nickel, 20% cobalt, and 20% manganese); NCM 811 cells are achieving higher densities, translating to up to 700 km on a full charge (~75% increase over NCM 622 cell stacks), but they fall short in terms of thermal stability and safety. SK Innovation plans to integrate its NCM 811 battery in the Kia Niro EV by 2020; it is advertised to have approximately 500 km of range and a battery of over 70 kWh.
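
As a quick sanity check on that comparison, the implied NCM 622 baseline can be backed out of the quoted figures; the short calculation below uses only the numbers cited above.

    # Implied NCM 622 baseline behind the "up to 700 km, ~75% increase" claim.
    ncm811_range_km = 700
    relative_increase = 0.75
    ncm622_range_km = ncm811_range_km / (1 + relative_increase)
    print(f"implied NCM 622 range: ~{ncm622_range_km:.0f} km")  # ~400 km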

Other Battery Development Goals

LG Chem is working to deliver a cylindrical NCM 811 battery before SK Innovation, saying that it will have these cells deployed in electric buses in 2018. The company increased the Renault Zoe’s battery capacity by 76% while maintaining the same volume and only slightly increasing the weight of the pack.

Panasonic has also made improvements to its prismatic cells through construction changes. The 2019 Ford Fusion Energi PHEV received an 18% boost in energy capacity, from 7.6 to 9.0 kWh. The pack is the same size with the same number of cells and unchanged chemistry, but the separator thickness inside the cell has been reduced. This allows more electrode layers inside each can, and consequently more capacity.
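
The quoted 18% figure is simple arithmetic on the two pack ratings:

    # Capacity gain from the separator change, computed from the pack ratings.
    old_kwh, new_kwh = 7.6, 9.0
    print(f"{(new_kwh / old_kwh - 1) * 100:.0f}% more capacity in the same pack volume")  # ~18%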

Remember the Big Picture

As companies continue to execute on their technology roadmaps, Navigant Research urges them to look not only at the battery chemistry, but also at the implications of cell and pack design for the performance of the system. Doing so will reduce costs, help achieve higher power and energy, and allow for faster innovation across all technology product offerings.

 

Rising Power Outage Cost and Frequency Is Driving Grid Modernization Investment

— June 28, 2018

Power outages are happening more frequently than ever. They are also becoming more expensive for customers and utilities. These realities have contributed to the growth of technologies and utility programs designed to increase grid uptime and reduce outages. Specifically, utilities are homing in on ways to improve grid performance by focusing on two major grid characteristics:

  • Grid Reliability: A reliable grid is one that experiences a small number of outages
  • Grid Resiliency: A resilient grid is one that can quickly restore power when an outage does occur

Reliability and resiliency are critical grid characteristics, and the Institute of Electrical and Electronics Engineers (IEEE) has established reliability standards for measuring performance in both. The most commonly measured metrics for grid performance are defined by the California Public Utilities Commission (CPUC) as follows; a brief calculation sketch follows the list:

  • System Average Interruption Duration Index (SAIDI): The systemwide total number of minutes per year of sustained outage (5+ minutes) per customer served
  • System Average Interruption Frequency Index (SAIFI): How often the systemwide average customer was interrupted in the reported year
  • Momentary Average Interruption Frequency Index (MAIFI): The number of momentary outages per customer systemwide per year
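
To make the definitions concrete, here is a minimal sketch of how the three indices can be computed from a set of outage records. The record format and figures are hypothetical; real utility reporting follows the IEEE 1366 methodology in far more detail.

    # Hypothetical outage records: (customers_interrupted, duration_minutes)
    outages = [
        (1_200, 45),   # sustained outage
        (300, 2),      # momentary outage
        (5_000, 120),  # sustained outage
    ]
    customers_served = 50_000
    SUSTAINED_MINUTES = 5  # outages of 5+ minutes count as sustained

    sustained = [(c, d) for c, d in outages if d >= SUSTAINED_MINUTES]
    momentary = [(c, d) for c, d in outages if d < SUSTAINED_MINUTES]

    # SAIDI: customer-minutes of sustained interruption per customer served
    saidi = sum(c * d for c, d in sustained) / customers_served
    # SAIFI: sustained interruptions experienced by the average customer
    saifi = sum(c for c, _ in sustained) / customers_served
    # MAIFI: momentary interruptions experienced by the average customer
    maifi = sum(c for c, _ in momentary) / customers_served

    print(f"SAIDI = {saidi:.2f} min, SAIFI = {saifi:.3f}, MAIFI = {maifi:.3f}")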

Performance-Based Ratemaking Helps to Incentivize Grid Performance

As customers increase their demand for reduced outages, a shift to performance-based ratemaking (PBR) may motivate utilities to further increase spending on reliability and resiliency. PBR rewards utilities for strong performance in the CPUC metrics (among others) and penalizes them for higher outage statistics. While PBR is not yet a frequently used ratemaking strategy, more and more public utility commissions (PUCs) are considering or accepting utility proposals to make the transition.

With PBR gaining some traction around the US, utilities are more motivated than ever to modernize their grids and improve performance. Global and regional markets for products like smart sensors, distribution automation equipment, and grid analytics are experiencing significant growth. Navigant Research’s T&D Sensing and Measurement Market Overview and Market Data: Smart Grid IT Systems reports further analyze the market, discussing drivers, barriers, trends, and market sizes for such investments.

Outages by the Numbers

Despite utility investments in extending enhanced grid visibility onto the distribution network and automating transmission and distribution equipment, power outages are increasing in both number and cost. Eaton’s 2017 Blackout Tracker reports that in 2009 there were an estimated 2,840 outage events, affecting more than 13 million people. Fast-forward to 2017, and Eaton estimates that nearly 37 million people were affected by 3,526 outage events. This problem is compounded by the increasing costs of power outages, particularly in the commercial and industrial customer class. For example, a 2016 Ponemon Institute study estimated that the cost per minute of a power outage at a US data center increased from $5,617 in 2010 to $8,851. Additionally, S&C’s 2018 State of Commercial & Industrial Power Reliability report found that 18% of companies experienced a loss of more than $100,000 as a result of their worst outage. The same report found that 50% of customers experienced outages that lasted more than 1 hour in the past year, and that 25% of companies experienced power outages at least once per month.

Fast Facts

(Source: Lawrence Berkeley National Laboratory)

Outage statistics like these are becoming less and less acceptable to utility customers, and are driving utilities and their regulators to allocate significant resources toward improving grid performance. Many grid modernization plans of the past were focused on grid protection and control, but utilities today are focused on improving reliability and resiliency.

 

The Dynamics of Bitcoin Mining and Energy Consumption, Part II: Mining Incentives and Economics

— June 28, 2018

This is Part II of a series—Part I is here.

Blockchain systems aren’t really single technologies. They are complex architectures with many interacting parts, and hundreds of different architectures exist. For this discussion, the relevant piece of the architecture is the consensus algorithm: the set of rules that determines which nodes in a blockchain network are responsible for validating transactions, packaging them into blocks, and storing them in a distributed data structure. One consensus algorithm in particular, Proof of Work (PoW), is the cause of all the hand-waving about blockchain energy consumption.

What Is Proof of Work?

PoW functions by pitting special nodes against one another in a race to solve a mathematical puzzle that requires a huge amount of computational power (energy) to solve but is trivial for others in the network to verify. Miners are willing to invest resources into the race because the winner gets a prize (12.5 Bitcoin, or ~$90,000 USD). Like race cars, more powerful systems are more likely to win the race and the prize. Miners are incentivized to keep investing in compute equipment as long as they win often enough to net a profit—resulting in massive IT facilities like the one in Mongolia.

In the Bitcoin network, a new block is up for grabs every 10 minutes. To keep this interval constant, the difficulty of the PoW puzzle dynamically adjusts based on the total mining power of the network. In simple terms, as cars get faster, the racetrack gets longer—and drivers have to burn more gas to finish.
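
A toy sketch helps make the mechanism concrete. It is not Bitcoin’s actual implementation (which double-hashes block headers with SHA-256 and retargets difficulty every 2,016 blocks), but it captures the essentials: miners search for a nonce whose hash falls below a target, verification takes a single hash, and each extra bit of difficulty roughly doubles the expected work.

    import hashlib

    def mine(block_data: str, difficulty_bits: int) -> int:
        """Find a nonce whose SHA-256 hash falls below the target (toy proof of work)."""
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # winning proof; anyone can verify it with one hash
            nonce += 1

    # Each additional difficulty bit doubles the expected number of hashes (energy).
    winning_nonce = mine("block 1: alice pays bob 1 BTC", difficulty_bits=20)
    print("winning nonce:", winning_nonce)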

Proof of Work Difficulty and Mining Power Trends: 2016-2018

(Sources: Navigant Research, Blockchain.Info)

Mining Activity Is Ruled by Economics

We can think about mining economics in terms of a simple profit equation, with the implicit assumption that miners will operate as long as profit is greater than zero:

Profit = Revenue – Costs

Costs (hardware, facilities, electricity, personnel) are the most stable component of the miner profit equation. Revenue is comparatively volatile: miners are rewarded in Bitcoin, and successful miners receive 12.5 Bitcoin, but the corresponding value in “real money” changes on a minute-to-minute basis. Miners must eventually convert their rewards to real money to pay their bills, since almost no one accepts payment in Bitcoin.
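
A stripped-down version of the profit equation, with entirely hypothetical numbers, shows why the fiat price of Bitcoin dominates the calculation:

    # Toy miner profit model; every figure below is a hypothetical assumption.
    block_reward_btc = 12.5        # protocol reward per block (halves over time)
    btc_price_usd = 7_200.0        # the volatile term: changes minute to minute
    blocks_won_per_month = 1       # depends on the miner's share of network hash power

    power_kw = 500.0               # facility draw
    usd_per_kwh = 0.05
    hours_per_month = 24 * 30
    fixed_costs_usd = 20_000.0     # hardware amortization, facilities, personnel

    revenue = block_reward_btc * btc_price_usd * blocks_won_per_month
    costs = power_kw * hours_per_month * usd_per_kwh + fixed_costs_usd
    print(f"profit = ${revenue - costs:,.0f}")  # a price drop hits revenue directly; costs barely move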

Economic Complications

A key point is often overlooked: in the Bitcoin network, miner rewards halve every 4 years. For a given mining facility to remain profitable—all else held equal—the value of Bitcoin must double over the same period. If it doesn’t, the revenue opportunity for miners shrinks and they have to reduce costs to remain profitable.
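
A quick back-of-envelope illustration of that point, assuming a hypothetical starting price:

    # Block revenue in fiat across reward halvings (starting price is hypothetical).
    reward_btc, price_usd = 12.5, 7_200.0
    for years in (0, 4, 8):
        print(f"year {years}: {reward_btc} BTC * ${price_usd:,.0f} = ${reward_btc * price_usd:,.0f}")
        reward_btc /= 2   # the protocol halves the reward
        price_usd *= 2    # the price must double just to keep revenue flat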

As the value of Bitcoin increases, more miners are incentivized to join the network or invest in new, power-hungry equipment. If the value drops, the incentive flips, and miners will leave the network or shrink their operations. As miner compute power leaves the network, PoW adjusts its difficulty downward, reducing energy consumption.

What Does This Mean for Future Energy Consumption?

Unfortunately, it doesn’t mean that energy consumption isn’t a problem. It just means that energy consumption is unlikely to continue growing at the prodigious rates seen over the last few years. There is only one doomsday scenario: the value of Bitcoin continues to double every 4 years, pulling more miners into increasingly difficult PoW competitions.

As we’ll see in Part III, that scenario is highly unlikely (unless you’re this guy). Now that we have a basic framework in place, we can discuss the various factors that will affect PoW mining and energy consumption in the future, and what this means for utilities that have to make plans today.

 

What Is Open Data and Why Is It Important: Part 1

— June 26, 2018

Open data is a big deal among cities. At the Connected Cities USA conference earlier this month, I had a chance to learn about open data initiatives being taken up by local governments across the US. One example is Franklin, Tennessee, which has teamed up with Socrata—a company that provides cloud-based solutions for online data—to create an open data portal. Instead of a static webpage, users can export data to create their own visualizations or analyses. Chapel Hill, North Carolina, partnered with OpenDataSoft to provide citizens access to data related to spatial planning, crime, transportation, and more. Chapel Hill hosted a workshop, Open2OpenData, to demonstrate how to use the open data website and discuss ways data can address citizens’ needs.

Growing Demand for Open Data

Cities across the US, both big and small, are thinking about open data. Why? In government, data transparency is increasingly an issue, as citizens want more information on everything from how tax dollars are spent to progress toward smart city goals. In addition to the demand for government transparency, the evolution of technology has made an explosion of open data accessible through digital devices. This creates opportunities for individuals and organizations to take advantage of that data to create new services and products for financial gain.

Traction Has Been Increasing for over a Decade

The term open data has gained traction at all levels of government since the OPEN Government Act of 2007 was signed. Currently, 48 states and 48 cities and counties provide data to data.gov, the federal government’s online open data repository. Non-governmental organizations promoting open data include:

  • The Sunlight Foundation: Created a set of open data guidelines to address what data should be public, how to make data public, and how to implement policy.
  • Open Data Institute: Works with companies and governments to build an open, trustworthy data ecosystem, and to identify how open data can be used effectively in different sectors.
  • The Data Coalition: Based in Washington, DC, the nonprofit advocacy group promotes the publication of government information as standardized open data.
  • Open Knowledge International: Focused on realizing open data’s value to society, the global nonprofit organization helps people access and use data to act on social problems.

So, What Is Open Data?

Open data is digital information that is licensed in a way that makes it available to anyone. The data is typically public, open, or attributed. According to Open Knowledge International, data must be both technically and legally open. The definitions are as follows:

  • Legally open means the data is available under an open data license that permits anyone to freely access, reuse, and redistribute it.
  • Technically open means that the data is available for no more than the cost of reproduction and in machine-readable and bulk form.

Other Requirements

In addition to being legally and technically open, open data requires a specific approach based on the kind of data being released and its target audience. For example, if the intended users are developers and programmers, the data should be exposed through an application programming interface (API), as in the sketch below. If it is intended for researchers, the data can be provided in bulk form. Alternatively, if it is aimed at the average citizen, the data should be available without requiring software purchases.
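
As a sketch of what API access looks like in practice, the snippet below pulls a few records from a hypothetical JSON endpoint of the kind that Socrata- or OpenDataSoft-backed portals expose; the URL and field names are placeholders, not a real city dataset.

    import json
    from urllib.request import urlopen

    # Hypothetical open data endpoint returning rows as JSON.
    URL = "https://data.example.gov/resource/traffic-counts.json?limit=5"

    with urlopen(URL) as response:
        records = json.load(response)  # a list of row dictionaries

    for row in records:
        # Field names depend entirely on the published dataset.
        print(row.get("street"), row.get("daily_count"))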

The debate about open data in government is an evolving one. So too are the benefits of utilizing open data, driven by increasingly sophisticated data analytics that allow us to analyze big data and gain actionable insights that create new value. In my next blog, I will dig deeper into why open data matters and how I see the open data discussion evolving.

 
