EnergyIoT Article 2 – Architectural Challenges to the Energy Transformation

Image by Stuart McCafferty - No copyright

By Stuart McCafferty, David Forfia, and Eamonn McCormick

Disclaimer:  The viewpoints in this article and others in the series are the personal views of the authors and in no way are meant to imply or represent those of the companies they work for.

In today's electric power ecosystem, adding new capabilities is tedious, time-consuming, and expensive.  A great example is the move toward Advanced Distribution Management Systems (ADMS).  An ADMS deployment takes years and tens of millions of dollars of investment.  And when it is finally integrated, adding new IT and grid assets remains a constant challenge and expense, just to maintain marginal situational awareness of the system.

Consider this example: when a homeowner purchases a photovoltaic (PV) system, he must first have the system installed by a certified electrician following the local building code process, then register it with the local power company and wait in line for an interconnect agreement, which can take anywhere from several days to several weeks or even more.

If the same homeowner purchases a new television set, he does not need to do anything more than plug it in and connect it to his cable or satellite box.  The cable/satellite company doesn’t care.  And, the system is up and running in minutes with no hassle and no delay.

Why is the grid so different from every other connected service?  Is it that much more complex?  Is it because it is a monopolistic ecosystem?  Or, perhaps, is it because we have not evolved the architecture, policy, and processes to enable a much more dynamic, democratic, and consumer choice driven marketplace?

No one will argue that operating the grid is easy.  We have to maintain synchronization and appropriate voltage, amperage, and real and reactive power levels across the entire system for power quality and reliability.  But we have also been doing this for over 100 years and have learned a lot about how to operate the grid.  It is time to leverage the knowledge we have gained moving electrons from generation to load, apply new technologies to dramatically simplify adding and removing grid components, and create a new ecosystem in which PV, demand response, battery, and electric vehicle (EV) assets can be added as easily as plugging in a new television.  Imagine instead an ecosystem that allows a consumer to purchase a PV system at Home Depot, either pay someone to install it or install it themselves, and when the consumer plugs it in:

  1. It ANNOUNCES itself
  2. It DESCRIBES itself
  3. It PROVISIONS itself
  4. It COMMISSIONS itself

And minutes after it is installed, the utility knows what it is, where it is, and what capabilities it has.  The power flow model is immediately updated to include the new “node”, and the consumer can immediately leverage its ability to convert solar radiation into electrons and perhaps even bid it into a local distribution market.
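The four-step flow above can be sketched, for illustration only, as a simple self-registration handshake. Every field name, the device identity scheme, and the `provision` function below are hypothetical, not part of any existing utility API or standard implementation.

```python
import uuid

def build_announcement():
    """Build a hypothetical ANNOUNCE/DESCRIBE payload a DER might
    publish the moment it is energized (all field names illustrative)."""
    return {
        "message_type": "ANNOUNCE",
        "device_id": str(uuid.uuid4()),        # self-assigned identity
        "device_class": "PV_INVERTER",
        "capabilities": {                       # the DESCRIBE step
            "max_output_kw": 7.6,
            "supports_volt_var": True,
            "information_model": "IEEE 2030.5",
        },
        "location_hint": {"latitude": 32.7157, "longitude": -117.1611},
    }

def provision(announcement, grid_model):
    """PROVISION/COMMISSION: the receiving system adds the new node
    to its power-flow model and acknowledges (sketch only)."""
    node_id = announcement["device_id"]
    grid_model[node_id] = announcement["capabilities"]
    return {"message_type": "COMMISSIONED", "device_id": node_id}

grid_model = {}
ack = provision(build_announcement(), grid_model)
print(ack["message_type"])  # COMMISSIONED
```

The key design point is that the device, not the utility's back office, initiates the exchange, which is what makes the television-style plug-and-play experience possible.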

This is not science fiction.  In theory, we could do this today.  However, the existing architectural constructs and the legacy siloed utility systems make this extremely unlikely, perhaps impossible.  We have a proposed solution that leverages the DevOps capabilities that companies like Amazon, Google, Microsoft, Red Hat, and Alibaba have been using for many years to allow their systems and their customers' systems to elastically scale to enormous dimensions simply, elegantly, and practically.  More to come on that in Articles 3-6.

The roots of our architectural challenges derive from how the grid has evolved over the past 100+ years. The grid represents a multi-generational investment in a "top down" architecture that delivers energy from central generation stations at the "top" to loads "down" the line, from the transmission grid to the distribution grid all the way to the energy consumer. The result is an electrically synchronized "machine" with power flowing downhill from high voltage to low voltage across a vast network of wires, transformers, and switches: a hub-and-spoke architecture in which transmission grids radiate "branch" distribution networks that direct the energy "downhill", eventually serving the loads.

The problem is that this is not how the transformed grid will work. In fact, many would argue it is just the opposite.  The transformed grid will have large numbers of distributed assets, including distributed generation, demand response, energy storage, electric vehicles, and devices not yet developed.  These new distributed assets are growing at a rapid pace and will continue that trend as Distributed Energy Resource (DER) prices drop, electricity prices increase, policy changes turn adoption into law, and a societal "call to action" to address climate change becomes more compelling.  It is time to think differently.

Here’s a Crazy Idea - Think of Everything as a Microgrid

If we can agree that the future grid is a bottom-up model, then take the next step and consider each building block that makes up the grid as a microgrid: distribution networks, feeders, neighborhoods, buildings, cars, etc. These microgrid building blocks make up the overall grid, can manage themselves, may have markets associated with them, and can run independently of a grid connection as an electrical "island" for some period of time.  The grid as a whole becomes a "collective of microgrids" that can operate independently and in cooperation with one another.  This type of thinking is already occurring at progressive utilities like San Diego Gas and Electric, where there are plans to pilot 10 MW batteries at some feeder locations to support microgrid capabilities, including islanding.  An electric vehicle is a great example of a "mobile microgrid": an architectural component and actor in a larger microgrid when it is plugged in and, when it is not, an islanded microgrid capable of operating independently for some period of time.  This "distributed intelligence" begins at the grid edge and propagates outwards and upwards to support larger and larger grid structures.
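The "everything is a microgrid" idea maps naturally onto a recursive composite: each microgrid manages its own resources, reports its net power, and can island, while larger microgrids aggregate smaller ones. A minimal sketch, with class and method names that are ours and purely illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Microgrid:
    """A microgrid that may contain other microgrids (composite pattern).
    Positive net_kw means a generating surplus; negative means drawing
    power from the parent grid structure."""
    name: str
    local_net_kw: float = 0.0          # own generation minus own load
    islanded: bool = False
    children: List["Microgrid"] = field(default_factory=list)

    def net_kw(self) -> float:
        # An islanded child exchanges no power with its parent.
        return self.local_net_kw + sum(
            c.net_kw() for c in self.children if not c.islanded
        )

    def island(self) -> None:
        self.islanded = True           # disconnect; operate independently

# A feeder aggregating a home (with PV surplus) and a charging EV.
home = Microgrid("home", local_net_kw=3.0)
ev = Microgrid("ev", local_net_kw=-7.0)        # charging load
feeder = Microgrid("feeder", children=[home, ev])
print(feeder.net_kw())   # -4.0: the feeder draws power overall
ev.island()              # EV unplugs: a mobile, islanded microgrid
print(feeder.net_kw())   # 3.0: the feeder now exports
```

The same recursion works at any level: the feeder could itself be a child of a distribution network, which aggregates many feeders, and so on up to the transmission grid.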

What’s Wrong with the Architecture We Have Now?

Even without taking the mental leap to "everything is a microgrid", the change from a one-way power flow model to bi-directional flow is today's reality.  Today's grid is different from the way it was designed.  And policy mandates requiring rooftop solar on all new construction in California, along with renewable/clean energy targets in numerous states and even the US military, are accelerating the change.  Despite this, most utilities do not have a clear picture of what assets exist Behind The Meter (BTM) and are experiencing extremely challenging operational issues such as the "Duck Curve".  Two-way power flow can result in overgeneration that exceeds thermal limits, or in unachievable generator ramp rates when demand increases just as solar photovoltaic assets stop producing electricity at sunset.  These types of issues put the grid's safety and reliability in danger and can result in circuits tripping off to protect grid equipment, leaving customers in the dark.
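The "Duck Curve" ramp problem is easy to see numerically: net load (demand minus solar output) sags midday and then climbs steeply at sunset, and it is that climb that generators must chase. A toy illustration with invented hourly numbers, not real utility data:

```python
# Toy "Duck Curve": hourly system demand and solar output in MW
# for 12:00 through 19:00 (all numbers invented for illustration).
demand = [900, 950, 1000, 1050, 1100, 1200, 1400, 1500]
solar  = [600, 650,  640,  560,  400,  180,   20,    0]

# Net load is what conventional generation must actually serve.
net_load = [d - s for d, s in zip(demand, solar)]

# Hour-over-hour ramp the remaining fleet must deliver (MW/hour).
ramps = [b - a for a, b in zip(net_load, net_load[1:])]

print(net_load)      # the midday "belly" and steep evening "neck"
print(max(ramps))    # 360: worst one-hour ramp in this toy example
```

Even in this tiny example, solar falling off at sunset while demand rises produces a one-hour ramp larger than the entire midday net load, which is exactly the operational stress the Duck Curve describes.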

Current State Issues

Consider how the current grid (and its architecture) meets the needs of the next generation of distributed energy:

Table 1:  Current Grid Obstacles




Provide reliable and affordable electricity

On the surface the current "top down" grid provides relatively cheap power reliably. However, this is only true if we do not assign economic costs to emissions. The Paris Agreement, signed by 175 countries in 2016, has escalated awareness and agreement to address climate change head on through greenhouse gas (GHG) reduction.  It may be only a matter of time before policies are adopted that assign a value or "tax" to atmospheric carbon contributors for their GHG impact.  A modest "neutral" carbon tax of $40/ton of CO2, as suggested in the Four Pillars of a Carbon Dividends Plan by some Republicans led by ex-Secretary of State James Baker ("The Conservative Case for Carbon Dividends"), could nearly double the cost of wholesale energy.

Fair and equitable for large and small alike

Even for the largest utility players, the current architecture is not working well. Utilities are struggling to integrate more than 20% renewables, meet policy objectives for clean power, and maintain grid reliability. Large utilities are also struggling with high costs and with integrating distribution automation systems, DER, third-party aggregators, microgrids, electric vehicles, and changes yet to come. The situation is even worse for municipal utilities, co-ops, and other smaller utilities, whose smaller customer bases must fund the sophisticated distribution grid automation and situational awareness required for safe and reliable operations.

Small producers and consumers have fewer choices due to policies, investment, and physical constraints.  The lack of market participation choices, combined with electricity costs, actually incentivizes some organizations to build their own microgrids and opt out of the grid altogether.

Wholesale energy market rules include requirements on generator or load sizes, capitalization, and operational sophistication, making large "utility scale" options the only choice for companies that want to participate in wholesale markets.  This creates unachievable barriers to entry for smaller players.  In the current transmission (ISO) markets, participants must "qualify" and meet minimum generation size restrictions (e.g. 1 MW+).  Currently, owners of DER that do not meet those criteria (the vast majority) must work through an aggregator that does, or not participate at all.

Democratic, secure, trusted, reliable, resilient and safe

Following the disasters in New Orleans and Puerto Rico (to name a couple), it is clear that the grid is vulnerable to climate-related catastrophes. In addition, there is growing concern that centralized grids are prone to cascading-event cyber-attacks from criminals or "foreign entities", both of which can include quite sophisticated actors and pose significant threats to electric safety and reliability. A bottom-up hierarchy is still prone to attacks and catastrophic events, but damage can be isolated and kept as local as possible.

Provide solutions for the critical “deep electrification” challenges facing society

Reliable solar and DER integration - The industry is experiencing increasing reliability issues (the Duck Curve).  Solar and DER assets make complete situational awareness and the management of behind-the-meter assets difficult.

Transportation electrification - Small numbers of EVs are currently manageable, but higher numbers will require different strategies and technologies to manage loads and protect the grid from overload.  EV mobility means that vehicles can enter and exit both circuits and utility territories, complicating circuit loading constraints as well as billing.

Effective energy storage integration - Integration is costly and a "one off" for every system.  Command and control of energy storage requires "flipping register bits", which takes time, is prone to error, and makes troubleshooting complex.

Intelligent air conditioning and heating - Thermostat programs can be "gamed" by the consumer by ramping the temperature up or down prior to an event. When distribution markets arrive, a 2-degree bid into the market has no meaning, and thermostat programs will evaporate.


Business Model Innovation

One of the biggest challenges to utilities is business model innovation. The hierarchical structure and regulated business model make it challenging for existing players to "think out of the box" and innovate with new services.  Policy and regulatory reform is required that safeguards the interests of stakeholders but also enables utilities to extend services beyond the meter and leverage DERs to provide higher reliability, power quality, and economy for their customers.

Enables a pathway to a sustainable energy future

According to the Intergovernmental Panel on Climate Change (IPCC), we have a single decade to transform our grid to a sustainable posture that can take us "net negative" on emissions by 2050 at the latest. The current top-down grid, which relies predominantly on bulk generation, precludes a realistic pathway to a sustainable future. Within that model, the only way forward would amount to a global shift to nuclear power, which has significant barriers, takes years and billions of dollars to build, does not "ramp" well with fluctuations in load and demand, and simply takes too long. A much more viable growth plan is to unleash exponential innovation via a more distributed, intelligent, and flexible grid.


DOE’s Concerns on “Siloed” Systems

Dr. Jeff Taft of PNNL and Paul De Martini of Newport Consulting discussed the "system-centric" architecture issue head on in their groundbreaking "Sensing and Measurement Architecture for Grid Modernization". According to Taft & De Martini, the core problem is that grid systems, sensors, and data are hierarchical and rigidly bound to a specific utility system, often forming disjoint data sets that cannot be leveraged by other applications. The electric power architecture is hierarchical with a "system-centric" rather than "data-centric" design, forcing a siloed approach and leading to difficult and expensive systems integration challenges and orphaned data. The authors illustrate this in the figure below, which not only shows the application silo "stacks", but also implies the complexity around data reuse and integration.

Figure 1:  Traditional Grid Sensor System Structure (Source Taft & De Martini - Sensing and Measurement Architecture for Grid Modernization)

Taft and De Martini highlight that the essential structures are vertical, leading naturally to silos. It is these silos that are the source of the fundamental limitations a new architecture must address. Volt Var Control applications, Circuit Fault Indicator applications, Transformer Oil Analysis, and other such applications are all configured as silos.  Each has its own dedicated application system, communications system, and data sets. The results manifest themselves in high costs, complex maintenance requirements, and little or no flexibility.

Taft and De Martini further identify that communications networks for grid sensors are generally hub-and-spoke or, in the case of AMI, a local mesh to a hub-and-spoke backhaul (via cellular or substation), which is still effectively hub-and-spoke.  Such communication systems are often not scalable, have low bandwidth, and are siloed along with the data collection head ends and application systems.  In addition, the grid communications systems (SCADA, AMI) are normally configured as "polling systems" in which the central system requests status from the different grid assets.  This round-trip communication paradigm unnecessarily burdens utilities with expensive high-bandwidth communications systems to support the heavy traffic.  A better solution would be an "event-driven" grid whose assets automatically report their status, based on rules, only when something changes or on a timed interval (which is still an event).  Event-driven systems could cut communications traffic in half or more, save data storage and analytics costs, and improve interoperability by removing siloed head-end solutions and making data easily available to authorized systems.
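The report-by-exception idea behind an event-driven grid can be shown in a few lines. In this sketch the sensor publishes onto a message bus only when its reading moves past a deadband threshold; the class name, callback shape, and the 0.5 V deadband are all our own illustrative choices, not any standard's API:

```python
class EventDrivenSensor:
    """Report-by-exception: publish only when the value changes enough
    to matter, instead of answering a central poll every cycle."""
    def __init__(self, publish, deadband=0.5):
        self.publish = publish          # callback onto a message bus
        self.deadband = deadband
        self.last_reported = None

    def sample(self, value):
        # Publish the first reading, then only meaningful changes.
        if (self.last_reported is None
                or abs(value - self.last_reported) >= self.deadband):
            self.last_reported = value
            self.publish({"voltage": value})   # an event, not a poll reply

messages = []                           # stand-in for a message bus
sensor = EventDrivenSensor(messages.append)
for v in [120.0, 120.1, 120.2, 121.0, 121.1]:   # five raw samples
    sensor.sample(v)
print(len(messages))   # 2 events instead of 5 poll round-trips
```

A polling system would have carried all five samples over the network; the event-driven sensor sent two, which is the traffic reduction the paragraph above describes.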

The industry is essentially stuck. The old paradigm of a top-down, hierarchical, system-centric grid and vendor-driven solutions is simply not capable of moving the industry forward. Despite what some key industry vendors would have us believe, most solutions are not scalable, are expensive, and create high-latency vertical silos. The cost of expanding and integrating these silos is extremely high. The resulting solutions generally underperform, are brittle, and are expensive to maintain, while delivering suboptimal business outcomes.  The industry must find a new way forward to break out of this pattern.  We must "un-silo" systems for better interoperability, move to event-driven communication paradigms, and operate from a data-centric perspective that allows authorized systems to leverage information for greater efficiency and economy.

This was the second in a series of EnergyIoT articles addressing the challenges we are experiencing and proposing a fundamentally different architecture to solve the problems of today and tomorrow.  Our third article begins describing an alternative architecture.  The article, "EnergyIoT Article 3 – EnergyIoT Domain Building Blocks", will be published on Monday, April 29 on Energy Central and LinkedIn.  It will be followed by deep dives in Articles 4-6 into the individual domains of the ecosystem.

Previous Articles in the Series:

David Forfia, Gridwise Architecture Council Chair

David has been Chair of the GridWise Architecture Council since 2015 and a council member since 2013.

The GridWise Architecture Council (GWAC) is a team of industry leaders who are shaping the guiding principles of a highly intelligent and interactive electric system. The Council is neither a design team, nor a standards making body. Its role is to help identify areas for standardization that allow significant levels of interoperation between system components. More about the Council can be found at

David is the current chair of the Technical Advisory Committee and a former member of the Board of Directors of the Smart Electric Power Alliance.  He was also Chair of the SGIP Board of Directors from 2015 until 2017, and served as a board member beginning in 2011.
In his current role, he is the Director of Technology Architecture and IT Transformation at the Electric Reliability Council of Texas (ERCOT).  He began his career as Director of Information Technology Services for Austin Energy and was later Deputy Director and Chief Information Officer for an $18B pension fund. He holds a BBA from the University of Texas at Austin and an MBA from St. Edward's University.

Eamonn McCormick, Chief Technology Officer, Utilicast

Eamonn McCormick is the CTO at Utilicast, a leading energy industry consultancy. Eamonn is a passionate believer in the bright future of the energy industry and the importance of collaboration as the foundation for solving our current industry challenges. He is a results-driven technology leader with a track record of success. He has implemented strategic technology change at several large energy companies over the last twenty years, primarily in the areas of wholesale markets, transmission, and energy distribution. In addition, Eamonn is currently chief architect of the Energy Block Chain consortium.

Stuart McCafferty, IoT Architect, Black & Veatch Management Consulting

Stuart McCafferty is an accomplished Smart Grid technical executive with an innovative history, strong relationships in the utility and vendor communities, and experience in business and partner development, platform and solution design, go-to-market planning and execution, and practical application of existing and emerging/disruptive technologies. Prior to B&V, he was VP of EnergyIoT for Hitachi America, where he led the architectural design of a distribution system platform supporting microgrid and Distributed Energy Resources (DER) related businesses.  At B&V, Stuart supports the utility, technology, and vendor communities in strategy and pragmatic application of DER that combines IoT best practices and technologies with energy standards and protocols.

He is a thought leader in the Internet of Things (IoT), Big Data, Cloud Computing, Artificial Intelligence (AI), Machine Learning, and the connected home, with practical application within the Smart Grid ecosystem, and an expert in utility IT/OT and the application of DER and microgrids for resilience, economics, and reliability.

Stuart is a US military veteran, Air Force Academy graduate, an Energy Fellow for community resilience at the National Institute of Standards and Technology (NIST), an Energy “Expert” for Energy Central, and Vice Chair of the Open Field Message Bus (OpenFMB) user group.



Richard Brooks on April 25, 2019

Another insightful article, Gentlemen. Well done.

One key insight from Hawaii, which I may have glossed over in your article, is the need to ensure that autonomous devices (i.e. inverters) are configured to ensure that all devices are following the same "objective function". This is really important, as it affects how certain devices prioritize their operations, i.e. whether to produce real power or reactive power, based on a fixed or floating power factor algorithm. It's important that all autonomous grid resources are properly configured to follow system operations instructions. I believe this is another example of why we need to think differently, supporting your points.

I'm thoroughly enjoying reading and learning from your articles. Do you have to stop at 7? Cheers, Dick.

Stuart McCafferty on April 25, 2019

Dick, I should have expected as much.  You are SPOT ON again! This is a fundamental issue on why we struggle to scale, why it is so hard to maintain systems, and why it is so difficult to troubleshoot problems. Throughout the series we discuss the idea of using rich semantic information models (CIM, 61850, OpenFMB, 2030.5, others) and robust message buses rather than using legacy SCADA communications and obtuse register-based models like DNP and OPC.  We also discuss the Digital Twin concept in detail in Article 6.  Stay tuned.

Thanks again for your input.

Jake Brooks on April 25, 2019

Thank you Stuart, David and Eamonn, for another very useful and thought-provoking article. I think you are starting at the right place when you talk about “system-centric” architecture and invite conversation on how to move to "data centric" designs. Although power systems have several layers, each of which could require its own approach to modernization, your comments about business model innovation raise special concerns. ("Policy and regulatory reform is required that safeguards the interests of stakeholders, but also enables the ability for utilities to extend services beyond the meter ...") It seems to me that system developers and visionaries both have to design data-centric architecture that works for the corporate entities as they are structured today, recognizing that the drivers will likely change as business models and market participants evolve. Providing for cybersecurity, privacy, and full data sets to authorized users whose business models are morphing - that will be a challenge. However, it's not unlike challenges that have been successfully addressed in the IT sector. I'd like to see more discussion about appropriate policy and regulatory reform to facilitate evolution. Some related questions were addressed in my 2018 article on the development of open source power system hubs:  I'm looking forward to your next post!

Stuart McCafferty on April 26, 2019

Thanks Jake!

We have been talking about finishing up with a Policy-focused article in Article 7.  We dive a little deeper into the data-centricity discussion in Article 6.  IT'S ALL ABOUT THE DATA!!  But, I agree, getting that right will take some effort.  Luckily, we have some rich semantic information models to start with (IEC CIM, 61850, OpenFMB, and IEEE 2030.5 are pretty well harmonized already).  We have the basic building blocks for a common data infrastructure.

Thanks for sharing the article.  I read that the other day.  Well done!

Jim Horstman on April 29, 2019

Good article and brings up some points I have given thought to over time especially on the continued reliance on traditional SCADA. Minor quibble with your example of cable tv. The new tv has no impact or potential impact on the 'cable grid' so plug and play is easier. What if I don't have the cable box? I still have to contact the cable company, schedule a tech visit, maybe get some recabling done if I want the box in a different location, etc.

You also mention AMI as being a polling system. While this may be true for obtaining meter reads, most AMI systems have the ability to report 'events' such as power off, voltage problems, etc.

There also are requirements for new PV installs beyond the grid. In addition to updating the grid model, etc., the customer information system also needs to be aware of the new install, which typically will necessitate a rate change. How will the new PV know the account number? How will that be communicated?

I will be interested to see how you will address the communication requirements. Off the top that would seem to have to go through the meter to utilize the AMI network but perhaps you have other ideas.

Stuart McCafferty on April 30, 2019

Hi Jim, good stuff!  I sincerely appreciate the thoughtful feedback.  Don't disagree with your quibble on the tv example - maybe not the best analogy.

Your PV install/CIS point is the way things are done today, and it's a good point.  But, that is not necessarily the way it will work in the future.  One of my personal beliefs is that we will have democratized distribution market systems that are part of this overall ecosystem and the business models of utilities will change and grow alongside the technological changes.  It will be a good thing for utilities.

I'm glad you said something about the meter.  There are so many nuances about all this and it is hard to remember all the things you talked about and make sure they show up in written format.  Our article around the OT system currently does not address this - and we are publishing it on Thursday.  So, I will go back and offer some ideas on this before we push it out later this week.  We definitely have an opinion here.  :)

Anyway, you can imagine how all this started.  A few guys asking, "Why do we do it this way?".  Followed by blank stares, followed by, "What if . . .". 

I have been having these conversations for many years with my OpenFMB and utility friends.  It was simply time for someone to stick their necks out and propose a solution.  It is kind of scary, frankly.  And, we are doing this for free while working around our normal jobs.  So, not making excuses, but there will be omissions and possibly some mistakes.  So, call us out on it.  It's ok.  This is why we are doing it.  Let's have some conversations and we can refine.  We are not saying this is perfect, but we truly believe it is a reasonable starting point and it is simple enough that most people can quickly grasp the concepts.

Please stay engaged and continue to provide us with your thoughtful feedback.  We sincerely appreciate it.

Thanks, Stu

Rahul Krishna Pandey on April 30, 2019

Excellent, Stuart, to touch on the architectural challenges where the energy sector is struggling with transformation. What a practical example ADMS is to consider: yes, it takes years and tens of millions of dollars of investment. And when it is finally integrated, it is a constant challenge and expense to add new IT and grid assets, just to maintain marginal situational awareness of the system.

When we talk about IT systems transformation, one basic need is scalability and integration with legacy business systems to harness the ROI. Systems design and architecture planning plays a very important role, not only in production but also during system maintenance and downtime.

Stuart McCafferty on May 1, 2019

Hi Rahul, thanks again for your kind words and for taking the time to read the article.

I am glad you brought up the IT systems and the maintenance and downtime advantages of this architectural approach.  One of the really cool things about combining the cloud, data-centricity, publish/subscribe message buses, and a DevOps environment is that redundancy is built in.  You can create multiple instances of applications running in virtual machines (VMs) that could represent different versions, or development and test environments.  These redundant VMs can subscribe to the message bus or the data stores without impacting the performance of the operational system.  If one VM fails due to a hardware issue or whatever, another can instantly replace it.  This really simplifies maintenance and development, as well as reducing or completely eliminating downtime.

Thanks again for your comments.  Stu

Lee Krevat on May 10, 2019

There are numerous paths to a federated network of microgrids. The paths that involve multi-stakeholder planning are going to go a lot smoother than the chaotic, greed-motivated ones; but I put solar in at a sub-five-year payback.
