Navigant Research Blog

The View from Vancouver, British Columbia

— September 7, 2011

Standing on a dock in North Vancouver, I was blown away by the views of Port Metro Vancouver and the city's gorgeous skyline. But I was distracted by the traffic jam in the harbor – an enormous Hyundai shipping vessel dominated the view. In Colorado, transport relies heavily on rail and trucks, but in the Pacific Northwest, the ocean is the transport medium. Idling ships, freight equipment, and amenities aboard cruise ships represent a significant power drain, one that is largely met by burning bunker oil. Marveling at the size and complexity of shipping vessels and port infrastructure left me curious about the logistics of maintaining such operations.

Port Metro Vancouver trades roughly $75 billion in goods with more than 160 economies annually. That's an impressive amount of cargo and people in motion. From tugboats and cruise terminals to freight trucks and railways, the Vancouver harbor was lively with vessels and equipment, nearly all of it powered by burning bunker oil, one of the dirtier varieties of fuel oil. But as it turns out, a handful of ports around the world are pursuing cleaner options.

In 2009, Port Metro Vancouver became one of three ports across the globe to install shore power infrastructure. New shoreline transformers, cables, and switches allow ships to draw power from BC Hydro's grid and take advantage of its primarily hydropower-based electricity. For a passenger cruise ship, demand can be as high as 14 megawatts. While connected, these ships power down their diesel generators and stop emitting pollution over the city of Vancouver.
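To put that demand in perspective, here is a rough back-of-the-envelope sketch of what a single shore-power connection displaces. The berth duration and genset fuel consumption below are illustrative assumptions, not port or BC Hydro figures.

    # Rough, illustrative estimate of what a 14 MW shore-power connection displaces.
    # Berth duration and fuel-consumption inputs are assumptions, not port data.

    hotel_load_mw = 14.0     # peak hotel load of a large cruise ship at berth
    berth_hours = 10.0       # assumed length of a port call
    sfc_g_per_kwh = 220.0    # assumed specific fuel consumption of a marine genset

    energy_mwh = hotel_load_mw * berth_hours               # 140 MWh from the grid
    fuel_tonnes = energy_mwh * 1000 * sfc_g_per_kwh / 1e6  # fuel the gensets would burn

    print(f"Energy per call: {energy_mwh:.0f} MWh")
    print(f"Diesel displaced: {fuel_tonnes:.1f} tonnes per call")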

Moving cargo and passengers by sea is one of the most cost-effective means of transport in the world and enables billions of dollars in global trade. It's also an industry that hasn't experienced much innovation; one of the most recent major developments was the containerization of cargo, and many shipping companies were slow to adopt even that practice. Ports, however, hold significant bargaining power to drive innovation and have plenty of incentive to do so. The city of Amsterdam is addressing shipping in its smart city project with ship-to-grid technology much like the shoreline power installed in Vancouver. Amsterdam is installing nearly 200 power stations that ships can plug into, displacing on-board diesel generators. The goal is to supply these stations largely with renewable energy.

Shoreline power is an expensive technology option that requires new shoreline infrastructure and ship retrofits. Princess Cruises, a partner in the Port Metro Vancouver project, estimates that it costs roughly $14 million to outfit its vessels with equipment that enables them to draw power from the grid. Other shipping companies are trying less capital-intensive strategies for reducing fuel consumption and vessel emissions. Maersk – one of the largest shipping companies in the world, with revenue of $28 billion annually – began implementing reduced-speed operations for its 500-vessel fleet. The company reduced speeds from the average 25 knots to 20 knots, and has even adopted super-slow speeds of 12 knots, or about 14 miles per hour. Maersk has shown that a 20% reduction in speed yields roughly a 40% decrease in fuel consumption per nautical mile.
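For the arithmetically inclined, Maersk's figure follows from the common rule of thumb that a ship's propulsion power scales roughly with the cube of its speed, so fuel burned per nautical mile scales with the square. A minimal sketch, assuming that cube law holds:

    # Illustrative slow-steaming math, assuming the cube-law approximation:
    # fuel burn per hour ~ speed cubed, so fuel per nautical mile ~ speed squared.

    def fuel_per_nm(speed_knots, baseline_knots=25.0):
        """Fuel burned per nautical mile, relative to the baseline speed."""
        return (speed_knots / baseline_knots) ** 2

    for v in (25, 20, 12):
        saving = 1 - fuel_per_nm(v)
        print(f"{v:2d} knots: {saving:.0%} less fuel per nautical mile than 25 knots")

At 20 knots the sketch gives a 36% saving per nautical mile, in line with the roughly 40% Maersk reports once other slow-speed efficiencies are counted.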

The United States military is also pursuing the clean-up of its port infrastructure, supporting clean energy goals and an energy security agenda. As we all continue to tap the power of the ocean, look for innovative new strategies like shoreline power to spark new moments of curiosity.

 

Smart Cities and Smart Transport: To Charge or Not To Charge

— August 8, 2011

Mobility has been a common theme of recent Pike Research blogs. Dave Hurst and Brittany Gibson offered interesting perspectives on how transportation patterns – particularly those related to private vehicle use – are evolving. Around 20% of a city's greenhouse gas emissions may be attributable to private and public transport. It is not surprising, therefore, that the transportation system is one of the pillars of many smart city projects. Road and rail links are the economic arteries of the city, and traffic congestion is bad for the environment, the economy, and citizen health. A recent study for the San Francisco Municipal Transportation Agency (SFMTA) into the possible benefits of introducing a congestion charging scheme estimated that the annual economic loss to the San Francisco transport region due to congestion exceeded $2 billion in 2005 and would exceed $3 billion a year by 2030.

Tough as the transport challenges are in North America and Europe, they are dwarfed by those facing cities in the developing world as they cope with the growth in car use while trying to meet sustainability targets and prevent congestion from becoming a barrier to further economic development. For this reason, smart transport is one of the four key elements Pike Research has identified for any smart city strategy trying to balance the goals of sustainability, citizen well-being, and economic development.

Targeting this opportunity, IBM recently launched its IBM Intelligent Transportation Solution, integrated with the Intelligent Operations Center for Smarter Cities and building on a number of key traffic management projects that IBM has been involved with. Three significant projects – in Singapore, Finland, and Brazil – were highlighted in a briefing last week.

IBM has been working with the Singapore Land Transport Authority (LTA) on traffic management since 2006 and has recently helped it to develop a traffic prediction system for the congested Central Business District. According to IBM, the IBM Traffic Prediction Tool is able to predict future traffic speed and volume with 85% accuracy. This information gives traffic controllers the insight to better manage traffic flows during congested periods using the city's traffic control and Electronic Road Pricing systems. The second project is a transport system monitoring solution developed with the Finnish Transport Agency. IBM has helped the agency integrate data on 78,000 km of roadway from a diverse set of systems and applications to provide a holistic view of the road network. The data can then be used to optimize traffic flows, forecast traffic conditions, and develop "what if" planning scenarios. The third example is Rio de Janeiro, where traffic management is tied into a broader incident management system based in the city's Operations Center. The aim is to provide a view of traffic conditions including traffic flows, congestion, roadworks, and accidents.
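IBM has not published the internals of its prediction tool, but the basic idea of short-term forecasting from recent sensor history can be sketched in a few lines. The model and readings below are purely hypothetical and are not the IBM Traffic Prediction Tool:

    # Hypothetical short-term traffic-speed forecast: fit a trend to recent
    # sensor readings and extrapolate one interval ahead. Illustration only.
    import numpy as np

    # Invented average link speeds (km/h) at 5-minute intervals
    speeds = np.array([52.0, 50.5, 47.0, 44.5, 41.0, 38.5])

    t = np.arange(len(speeds))
    slope, intercept = np.polyfit(t, speeds, 1)   # linear trend over the window
    next_speed = slope * len(speeds) + intercept  # one interval ahead

    print(f"Predicted speed in 5 minutes: {next_speed:.1f} km/h")

Production systems blend many such signals and far richer models, but the principle of projecting measured conditions a short interval forward is the same.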

All these systems are using advanced technology to help optimize traffic flows. A different, complementary approach is to try to reduce or manage traffic volumes through charging. Although congestion charging is not part of the Intelligent Transport Solution, IBM has been involved with some of the most important schemes around the world including Singapore, Stockholm, London, and Milan. However, congestion charging schemes continue to be a highly contentious solution to city transport problems.

Singapore was the first city to introduce a road charging scheme to address congestion, with its Area Licensing Scheme in 1975; this was followed by the Road Pricing Scheme, which extended coverage to major expressways. In 1998, both schemes were replaced by the Electronic Road Pricing (ERP) system. Under ERP, motorists pay each time they enter a congestion zone, and prices can vary according to traffic conditions. Singapore was followed by congestion schemes in Stockholm and London, both of which are generally seen as successful. Though the impact relative to other changes in traffic and transport patterns remains difficult to assess, studies have estimated that the Stockholm and London schemes have reduced greenhouse gas emissions by 14% and 16%, respectively, and cut congestion by 22% and 30%.
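The pricing mechanics are easy to sketch: the charge per entry rises as measured conditions in the zone worsen. The speed bands and prices below are invented for illustration and are not Singapore's actual ERP rates:

    # Invented congestion-charge lookup, loosely modeled on the idea behind
    # Singapore's ERP: the toll rises as measured speeds in the zone fall.

    def congestion_charge(avg_speed_kmh):
        """Return a per-entry charge (dollars) based on current zone speed."""
        if avg_speed_kmh >= 45:    # free-flowing traffic
            return 0.50
        elif avg_speed_kmh >= 30:  # moderate congestion
            return 2.00
        else:                      # heavy congestion
            return 4.00

    print(congestion_charge(25))   # heavy congestion -> 4.0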

Despite the success of the existing schemes, other cities have been slow to follow in the face of public resistance, strong lobbying, and political uncertainty. A planned scheme for New York never made it to the legislative stage and U.K. congestion charging schemes were rejected by referendum in Edinburgh and Manchester. In San Francisco, the authorities are now carrying out further consultation and research; any decision about implementing a congestion charging scheme is at least two to three years away and any trial is unlikely to begin before 2015.

We expect more cities around the world to look at congestion charging and time-of-use (TOU) pricing as a means of modifying driving behavior. It is a debate that follows a similar pattern to the one around TOU pricing for energy consumption, but is even more contentious. In both cases, consumers and citizens face considerable uncertainty over the impact of significant changes to their lifestyle, which tests their trust in politicians and service providers. To implement such changes, the benefits to the individual and to society need to be made clear, as do the costs. This has to be the basis for a public debate going forward, one that must also include honesty about the impact of doing nothing, be that gridlock, energy costs, or environmental damage. Strong political leadership is also vital.

The points that Brittany made in her post regarding the expectations people have about choice in transport are also relevant. Creating a sense that we are active participants in the transport systems of our cities, not passive victims, will be vital if we are to have the necessary conversation about how we want those systems to evolve. This can take many forms. The most basic is to make sure that our smart transport systems also engage with the users of the system. Rio's Operations Center, for example, already has a Twitter feed informing motorists of current traffic conditions, and the city's Digital Director is keen to find other ways to share information and generate new applications for citizen engagement.

In the future, intelligent traffic systems will provide real-time monitoring of traffic flows, environmental conditions (temperature, rainfall), driver behavior, infrastructure failures, and accidents. In general, sensor technologies will allow more insight into how a complex city transport network is operating and how it can be managed. Using that information for a flexible and integrated use-based charging system will be a tempting option for many city authorities, but they will have to do a lot of convincing to win over a skeptical public.

 

Defense Security and Utilities

— August 1, 2011

Last week, Australia’s Department of Defence (DOD) released Strategies to Mitigate Targeted Cyber Intrusions, a list of its top 35 strategies to mitigate targeted attack risks. Strategies include old standbys such as multi-factor authentication, removable media control, and web content filtering.

The Australian DOD grouped its 35 strategies into two categories. First, there are four strategies that must be implemented before any others. The report claims that these top four strategies could have prevented 85% of the targeted intrusions it responded to during 2010. After the top four, there are an additional 31 actions that may be taken "once organisations have implemented the top four mitigation strategies."

The list identifies some interesting attributes for each strategy: how effective it is, what its relative cost is, whether it is preventive or detective, and how likely employees are to resist it. That last attribute is perhaps a surprising one for a defense agency to consider.

Here are the top four mitigation strategies:

  • Patch applications such as PDF readers, Microsoft Office, Java, Flash Player and web browsers
  • Patch operating system vulnerabilities
  • Minimize the number of users with administrative privileges
  • Use application whitelisting to help prevent malicious software and other unapproved programs from running
It may surprise some to see whitelisting on this list, although this blog has been a proponent of whitelisting. The first three measures, however, are the basics of security. Placed where they are, this is essentially a public admission that the Australian DOD does not have consistently effective patch management or ID management programs. Those are the blocking and tackling of cyber security. If a defense agency is not doing them well, who is?
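For readers unfamiliar with the mechanics, application whitelisting boils down to checking a program against an approved list before it is allowed to run. A minimal sketch of the idea follows; real products enforce this in the operating system's loader, and the hash below is a placeholder:

    # Minimal sketch of application whitelisting: only binaries whose
    # cryptographic hash appears on an approved list may run.
    import hashlib

    APPROVED_SHA256 = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
    }

    def is_approved(path):
        """Hash the binary and check it against the allowlist."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED_SHA256

    # An execution wrapper would refuse to launch anything failing this check.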

At about the same time, Invincea Labs reported a sophisticated attack against the U.S. Defense and Intelligence community, starting with a spear-phish and taking nearly 20 steps involving infected applications, beacons, Trojans, droppers, and a possibly compromised root certificate. This attack could certainly be aimed at other defense agencies besides the United States.

So, the attackers are running multiple-technology, 20-step attacks while their targets may not even have currently patched software. This is far from a level playing field.

Utilities, as part of a critical national infrastructure, should consider themselves only slightly less likely targets than defense agencies. Considering defense agencies' obvious dependency on energy, it may be reasonable to assign utilities an equal likelihood of being attacked. Fortunately, those top four mitigation strategies work just as well for utilities as they do for defense agencies.

There is one additional step for utilities: have a good change management process in place, especially to manage patch updates in industrial control systems. When a control system cannot be patched, or cannot be patched quickly, offsetting countermeasures are preferable to the reliability risk of applying the patch. Those offsetting measures could include stronger physical security or stronger network perimeter security. It is critical that grid operations teams participate in these decisions.
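That judgment call can be framed as simple decision logic. The criteria below are one illustrative way to structure it, not a standard or any utility's actual process:

    # Illustrative patch-or-offset decision for an industrial control system.
    # Criteria are one way to frame the trade-off, not a standard.

    def patch_decision(vendor_validated, outage_window_available):
        if vendor_validated and outage_window_available:
            return "Apply the patch during the scheduled maintenance window"
        # Otherwise rely on offsetting countermeasures until patching is safe
        return ("Defer the patch; tighten network perimeter and physical "
                "security, and revisit with the grid operations team")

    print(patch_decision(vendor_validated=False, outage_window_available=True))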

The Australian DOD has taken a good approach. Its strategy cannot be applied directly to a utility without adaptation, but it is a broad list of strategies that can prompt plenty of thought, and risk mitigation, for a utility, too.

     

New Technologies (and FERC Policies) Increase the Value Proposition for Microgrids

— June 22, 2011

The increasing complexity of the microgrids now being installed is stimulating a rush to expand the versatility and function of a technology platform originally conceived around the notion of hyper-reliability. This is why the Department of Defense (DOD) is so enamored of the prospects of microgrids: they can protect mission-critical functions during emergencies, including war, by creating islands of energy self-sufficiency.

In many ways, the ultimate application for DOD is forward-operating mobile microgrids that can be deployed during combat missions, especially those powered by modular solar photovoltaic (PV) units that can be carried in backpacks. Even for mobile microgrids burning fossil fuels, fuel consumption could be cut in half simply by networking diesel gen-sets together instead of relying on each generator to operate as a stand-alone system. Prototypes of such microgrids, being tested in actual combat missions in Afghanistan, are currently validating these applications, which epitomize the simplicity of microgrid technology while providing a vital service for troops in combat.

A recent Pike Research forecast makes clear that while renewable energy will be a major emphasis at DOD over the next two years, federal investments in microgrids will outpace investments in both smart meters and conservation.

On the other end of the spectrum are highly complex, revenue-maximizing microgrids such as the one at the University of California-San Diego, a 42 MW state-of-the-art facility that is up and running today. This microgrid features two of the most sophisticated microgrid offerings on the market. The first comes from Power Analytics and is a model-based management system continually updated according to external factors (such as levels of sunlight) and internal factors (shifts in demand). Layered on top of this sophisticated scheduling platform is Viridity Energy's software, designed to extract the greatest value for the microgrid owner according to real-time market conditions. At present, the Viridity Energy wholesale market optimization features have yet to go live, but they will shortly. (The California Independent System Operator, CAISO, does not yet offer a "plug and play" transmission market.)

Just within the last year or two, an important insight has emerged among microgrid advocates. Lessons learned from both military and campus-based microgrids have underscored the importance of integrating load shedding systems – such as demand response – with critical control of the generation assets. By coordinating dynamic, interrelated supply-side generation with dynamic load shedding schemes, a more stable, robust, and efficient balance can be maintained, optimizing energy surety and the stability of both the microgrid and the macro-grid.
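Stripped to its essentials, that integration is a control loop: each cycle, compare available generation to connected load and shed the lowest-priority loads when supply falls short. The toy sketch below uses invented loads and is not any vendor's actual algorithm:

    # Toy illustration of coordinated generation dispatch and load shedding
    # in a microgrid controller. Loads and numbers are invented.

    def balance(available_gen_kw, loads):
        """Shed lowest-priority loads until demand fits available generation.

        loads: list of (name, kw, priority); lower priority is shed first.
        """
        served = sorted(loads, key=lambda l: -l[2])  # highest priority first
        while served and sum(kw for _, kw, _ in served) > available_gen_kw:
            name, kw, _ = served.pop()               # shed lowest priority
            print(f"Shedding {name} ({kw} kW)")
        return served

    loads = [("HVAC", 400, 1), ("lighting", 150, 2), ("data center", 600, 3)]
    balance(available_gen_kw=800, loads=loads)       # sheds HVAC (400 kW)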

The Federal Energy Regulatory Commission's (FERC) recent ruling mandating a demand response (DR) market by authorizing Independent System Operators (ISOs) to compensate these distributed resources on par with generators is a game changer and will only accelerate the growing marriage of supply and demand resources within and outside of microgrids. This ruling could transform microgrids from threats to local distribution utilities into valuable resources for the larger grid. The ruling's primary impact is on energy service provision and less so on capacity and ancillary service offerings. Each ISO/RTO must file its demand response compensation tariffs later this month, but for all practical purposes, this new revenue stream will not be available to demand response providers until next summer. Just how significant is this new FERC initiative? According to Viridity Energy, payments would double in the PJM demand response market, already the most advanced market for demand response aggregation services.
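The arithmetic behind "on par with generators" is simple: a curtailed megawatt-hour now earns the full wholesale energy price rather than a discounted one. The numbers below are invented, and the discounted baseline is a simplified reading of prior practice:

    # Invented numbers illustrating why full-price compensation for demand
    # response can double payments versus a discounted scheme that nets out
    # the retail generation component. Not actual PJM figures.

    lmp = 80.0                    # assumed locational marginal price, $/MWh
    retail_gen_component = 40.0   # assumed generation portion of retail rate
    curtailed_mwh = 10.0

    old_payment = (lmp - retail_gen_component) * curtailed_mwh  # $400
    new_payment = lmp * curtailed_mwh                           # $800

    print(f"Discounted: ${old_payment:,.0f}; full price: ${new_payment:,.0f}")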

Yet another twist to the microgrid vision revolves around forecasting. When microgrid system-level controls are coupled with more externally focused information sources (weather patterns, commodity/energy prices, et cetera) available from enterprise-level supervisory control systems (such as those provided by Power Analytics and Viridity Energy), the purported and well-hyped future functionality of microgrid systems is, in fact, already here today.

The functionality, economics, and modularity of such systems are possible because companies with decades of experience in similar competencies have re-purposed their field-proven tools for the unique needs of microgrid technologies and ownership models. A prime example is another firm that has been flying under the radar: Encorp LLC, which released its own "Microgrid System Controller" this past April.

The company claims its new technology is the first microgrid system controller to connect onsite synchronous generators (typically diesel generators) with inverter-based solar PV, small wind, and advanced energy storage systems, and then monitor and control the resulting microgrid. Word has it that Encorp may not always win the initial contract but is frequently called in after the fact to rescue projects that are not performing up to expectations. In essence, the Encorp system controller handles the nuts and bolts of the technology integration, interconnecting the microgrid's combined generation portfolio with the larger utility grid or operating those devices in island mode. Few other companies seem able to network legacy diesel gen-sets with more modern inverter-based generation and storage options as seamlessly as Encorp.
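At the point of interconnection, that job can be sketched as a small state machine: monitor the utility tie, island on a disturbance, and resynchronize when the grid returns. This is a conceptual illustration with made-up thresholds, not Encorp's implementation:

    # Conceptual island-mode transition logic at a microgrid's utility tie.
    # Thresholds are illustrative; not Encorp's implementation.

    GRID_OK_PU = (0.88, 1.10)   # assumed healthy per-unit voltage band

    def control_step(grid_voltage_pu, state):
        low, high = GRID_OK_PU
        healthy = low <= grid_voltage_pu <= high
        if state == "grid-connected" and not healthy:
            # Disturbance: open the tie breaker; on-site sources take over
            print("Opening tie breaker; islanding the microgrid")
            return "islanded"
        if state == "islanded" and healthy:
            # Grid restored: resynchronize before reclosing the breaker
            print("Resynchronizing and reclosing tie breaker")
            return "grid-connected"
        return state

    state = control_step(0.60, "grid-connected")  # fault: now "islanded"
    state = control_step(1.00, state)             # restored: "grid-connected"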

The new Encorp offering is based on the company's well-regarded Gold Box™ and related software technology. With over 1,000 MW of generation capacity under its control at 400 projects around the world, the company is betting big on the microgrid market. The new controller has already been installed at a major international defense contractor's site to ensure power reliability and reduce greenhouse gas emissions. Among the projects Encorp is involved with is a small microgrid at Fort Sill, Oklahoma, where the firm's technology is creating the building blocks to help meet cyber-security goals. The company hopes to help the DOD realize new revenue streams by helping to secure the power supply for critical processes at Fort Belvoir, Virginia, from a new combined heat and power (CHP) installation. And at an undisclosed East Coast military site, Encorp is keeping its fingers crossed that it can work with Power Analytics to help a military base operate indefinitely in the case of a grid outage by integrating 1 MW of solar PV with advanced battery storage.

The growing sophistication of the microgrid market is truly impressive. We've come a long way since 2009. Yet without the basic on-the-ground know-how and technology provided by firms such as Encorp, all of the functionality and optimization promised by the microgrid value proposition will go up in smoke.

     
