
EnergyIoT Article 6 – Energy Services (DevOps) Domain - The Heart of the Ecosystem

Image by Stuart McCafferty - No copyright


By Stuart McCafferty, Eamonn McCormick, and David Forfia

Disclaimer: The viewpoints in this article and others in the series are the personal views of the authors and do not represent those of the companies they work for.

Figure 1- EnergyIoT Conceptual Model

(developed for this series with the Gridwise Architecture Council)

This is the sixth in a series of articles introducing an event-driven, data-centric EnergyIoT Conceptual Model.  This article describes the virtual Energy Services (DevOps) Cloud Domain, highlighted in the image above by the thick, red-outlined cloud in the upper right.  The Energy Services Cloud Domain we envision:

  • “Abstracts” the complexity and brittleness of communications between systems and OT assets using digital twin agents
  • Provides modern “microservices” to support virtualization, containerization, and orchestration
  • Leverages messaging bus technologies such as pub/sub
  • Uses rich semantic information model standards to drive interoperability
  • Enables intelligent grid-edge devices to self-report “by exception” or on a timed basis and support an event-driven architecture
  • Simplifies data usage with secure data-retrieval services and publish/subscribe (pub/sub) message bus standards, enabling easier, less expensive system integration and faster command-and-control response
  • Enables a variety of modern data storage systems to support structured, unstructured, and smart contract/digital ledger formats
  • Exposes energy-specific services for DevOps teams, including source code management, software development, and IoT orchestration
  • Provides the most advanced, modern cyber security technologies and tools across the entire ecosystem, with the added capability of adapting universally to new cyber security challenges as new and evolved threat-management tools become available
  • Reduces the time and cost of systems integration by providing common services and standardized data definitions
  • Offers all stakeholders opportunities for innovation and new services and capabilities
  • Includes analytic tools and capabilities to continuously improve and optimize operational and market activities

Figure 2 - Energy Services Cloud

This article describes the architectural components within the above Energy Services Cloud pattern, which also includes a “DevOps” environment designed to support the energy software development community.  The image below was briefly introduced in Article 3 - EnergyIoT Domain Building Blocks.  The middle layer, surrounded by the red oval, is another view of the subject of this article.
 

Figure 3 - EnergyIoT architecture "stack" view

In current electric power business applications, vendors connect business apps with non-scalable, proprietary pipes.  The green cloud layer in the conceptual architecture is the energy services “bridge” that abstracts the hardware away from the software.  The abstractions are represented by the purple boxes in the image above.

The upper communications layer provides “business semantic” SOA services.  The lower layer provides “hardware-oriented” communications services.  The digital twins connect to “drivers” in the OT communications layer to talk to particular types of devices, much as an operating system uses drivers for printers and other peripherals.

Conceptually separating the Energy Services Cloud from the Energy Systems Cloud Domain directly addresses the limitations of siloed systems.  This is the big “aha” within the EnergyIoT architecture.  The “green cloud” abstraction layer exists to enable seamless communication between physical systems, Energy business systems, and data.  Conceptually, this approach allows assets to self-report status data directly when an event occurs rather than being polled by a remote system.  Remember, the devices in the field are intelligent, and their capabilities are expanding every day.  The architecture should leverage that intelligence rather than forcing status reporting through some remote system.  This approach enables “grid intelligence”, taking advantage of smart “edge” devices connected to the grid.  There are obvious positive implications to this type of approach:

  1. Data is reported only when threshold value changes occur that trigger an event report, ensuring that only the most valuable information is:
    • Communicated
    • Stored
    • Analyzed
  2. Reduced communication traffic.
  3. Reduced storage costs.
  4. Reduced data to analyze.
  5. Distributed publishing ensures that data is made available directly to authorized “subscribers” and common data stores. 

As a simple example, consider electric power meters.  In today’s architecture, centralized systems poll each smart meter, requesting status information every hour or at some other interval.  What if, instead, the intelligent meter knew what interval to report at automatically?  What if a homeowner could bid 2 kWh of demand response from 5 PM to 6 PM into a newly created distribution market?  The meter could recognize or be notified of the event, then automatically send a meter read at the moment the bid obligation began and another when it concluded.  Historical data would show the readings before the event, and the reads at the beginning and end would determine whether the contractual obligation was met, making settlement fairly easy.  And what if that data resided in a data store that other authorized systems could easily access and use for whatever analysis makes sense?  Imagine the economic and operational efficiency to be achieved.  These are the fundamentals of an IoT, event-driven, data-centric architecture.
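As a hedged sketch of what such report-by-exception metering could look like, the snippet below uses the open-source paho-mqtt client (1.x-style API).  The broker address, topic name, threshold, and read_meter_kwh() function are hypothetical illustrations, not part of any standard or vendor product:

```python
# Hedged sketch (paho-mqtt 1.x-style API): a meter agent that reports by
# exception instead of being polled. Broker, topic, threshold, and
# read_meter_kwh() are hypothetical illustrations.
import json
import random
import time

import paho.mqtt.client as mqtt

THRESHOLD_KWH = 0.5  # report only when the register moves this much


def read_meter_kwh() -> float:
    """Placeholder for the real meter-register read."""
    return 100.0 + random.random()


client = mqtt.Client()
client.connect("broker.example.local", 1883)
client.loop_start()

last_reported = read_meter_kwh()
while True:
    reading = read_meter_kwh()
    if abs(reading - last_reported) >= THRESHOLD_KWH:  # an event, not a poll
        client.publish(
            "meters/M-1234/reads",
            json.dumps({"meter_id": "M-1234", "kwh": reading, "ts": time.time()}),
        )
        last_reported = reading
    time.sleep(1)
```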

The separation of the Energy Services DevOps Cloud unlocks the ability to create a common set of secure services to access data and use in other systems and analysis tools.  It also enables a paradigm of “distributed intelligence” that leverages smart devices on the grid, in businesses, and in homes.  This in turn empowers a democratized ecosystem that embraces innovation and creates an “Uber-like” environment that provides economic opportunities for new and numerous stakeholders with the net effect of more efficient, clean, resilient, and personalized delivery of power.

Data Services

Fundamentally, the EnergyIoT architecture is “data-centric”.  The importance of data cannot be overstated.  In fact, “it’s all about the data”!  Being event-driven is also extremely important, but the events generate data, and data is the most fundamental element of the entire IoT ecosystem.  The EnergyIoT Services Cloud includes a rich set of secure data communication (message bus) and storage/retrieval services.

Designing the correct data structures and services is arguably the first and most important step in designing the Services Cloud.  It will also be difficult and will likely take time to design intelligently.  Luckily, the electric power industry has rich semantic information models developed by the International Electrotechnical Commission (IEC), such as the 61968/70 Common Information Model (CIM) and 61850 (originally a substation automation standard that modeled traditional assets and now includes DER), as well as IEEE 2030.5 (Home Area Network, harmonized with IEC 61850), OpenFMB (also harmonized with 61850), and other rich semantic information models that are mostly harmonized with one or the other of the IEC standards mentioned.  This will be hard work and will require experts from these standards organizations as well as technology experts with experience in designing DevOps systems, data repositories, and messaging payloads.

Smart Contracts, Digital Ledger Technology (DLT)

The benefits of distributed digital ledgers are currently untapped in the energy industry.  DLT often gets mischaracterized as “BlockChain” or cryptocurrency technology.  Although BlockChain and cryptocurrency are forms of DLT, there are other ways of implementing DLT besides BlockChain, and applications beyond cryptocurrency where DLT is a legitimate and compelling technology for energy.

The green cloud energy services layer must be founded on data that can be trusted.  DLT technologies were built to be trusted.  They include the following foundational principles, which translate well into an EnergyIoT data-centric architecture (a toy code illustration follows the list):

  • Transparent smart contract/business rules
  • Verifiable processing
  • Trusted orchestrated processing of events
  • Distributed control of processes
  • Processes based upon open, scientifically agreed standards
  • Encrypted and quantum safe
  • Immutability
  • Consensus based
  • Distributable
  • Auditable
  • Permissioned
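As a toy illustration of the immutability and auditability principles above (an illustration of the idea only, not a production DLT, and using invented record fields), here is a minimal hash-chained ledger in Python:

```python
# Toy hash-chained ledger illustrating immutability and auditability.
# Record contents are invented; this is not a production DLT.
import hashlib
import json
import time


def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


ledger = []


def append(record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"record": record, "prev": prev, "ts": time.time()}
    entry["hash"] = entry_hash(entry)  # seal the entry against its predecessor
    ledger.append(entry)


def verify() -> bool:
    """Any tampering with an earlier record breaks every later hash."""
    prev = "0" * 64
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True


append({"meter": "M-1234", "kwh_delivered": 2.0, "event": "DR-2019-05-17"})
append({"meter": "M-1234", "settlement_usd": 0.42})
assert verify()
```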

Structured Data

Structured data is highly organized information stored in fixed fields within relational databases.  Structured Query Language (SQL) or another standardized method is used to perform searches and to add, modify, or remove fields.  A common IoT best practice for today’s relational databases is to create methods or “microservices” that encapsulate standard SQL searches and data-manipulation commands.  This abstraction provides a highly controlled additional level of access to relational data and simplifies data processing for authorized systems.  It also prevents an old “worst” practice: sharing database passwords with developers and jeopardizing the integrity of the table structures and the information stored.
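A minimal sketch of that encapsulation pattern, using Python’s standard-library sqlite3; the table, columns, and function name are hypothetical:

```python
# Minimal sketch of a data-access microservice that encapsulates SQL,
# so callers never touch the database or its credentials directly.
# Table and column names are hypothetical.
import sqlite3


def get_meter_reads(db_path: str, meter_id: str, since_ts: float) -> list:
    """Read-only service: a parameterized query; no raw SQL exposed to callers."""
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "SELECT ts, kwh FROM meter_reads WHERE meter_id = ? AND ts >= ? ORDER BY ts",
            (meter_id, since_ts),
        )
        return cur.fetchall()
```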

Unstructured Data

Not all information is easily decomposed into relational fields and tables.  This type of information may be human-generated or machine-generated, and it may be text or non-text data.  This includes video, documents, contractual information, pictures, logs, and any other type of data that doesn’t fit neatly into a highly structured format. 

Cloud providers have mastered how unstructured data is managed and stored.  Solutions like Hadoop, Apache Hive, MongoDB, and Cassandra have excellent performance and offer existing tools and services to support highly scalable capabilities.
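For illustration, a hedged sketch of storing and retrieving an unstructured event document with MongoDB’s pymongo driver; the database, collection, and field names are invented:

```python
# Hedged sketch: storing an unstructured event document in MongoDB.
# Database, collection, and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["energyiot"]["asset_events"]

# Free-form document: no fixed schema required.
events.insert_one({
    "asset": "recloser-17",
    "type": "fault-log",
    "raw_text": "breaker tripped; reclose attempt 2 succeeded",
    "attachments": ["waveform.png"],
})

recent = events.find({"asset": "recloser-17"}).limit(10)
```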

Digital Twin Agent

The architecture includes an abstraction layer with Digital Twin Agents, which simplify communication between Energy Systems and grid assets (adapters) or can be used in simulation environments to emulate the behavior of assets.  This special form of microservice is a critical element when scaling grid networks and Energy Systems to support thousands or even millions of DER.  Consider Digital Twin Agent adapters as containers that spin up when you talk to a physical asset and go away when you are done - meters and switches are examples of Digital Twin Agent microservices that could live in the cloud and come and go only while you are talking to the asset.  Some Digital Twin Agents may remain resident in memory all the time if they are real-time, mission-critical pieces of the overall system; these are likely to be physically located within or near the asset they virtualize.  One major advantage of this approach is the ability to manage Digital Twin Agent container upgrades that provide additional functionality or bug fixes: an upgrade can be performed once and immediately propagate to all assets of that make and model.

The architecture envisions Agents that announce the asset, describe the asset’s capabilities, provision the asset, and commission the asset.  The Digital Twin Agent ensures redundancy/fail-over capabilities are built in, and distributed intelligence guarantees that when communications are lost, Agents continue to operate independently on the last command set provided by a parent or authority Agent.  This is currently a technology gap, since there is no energy standard for digital twins.

There are Probably at Least Two Types of Digital Twins

The figure below is the authors’ current thinking on Digital Twin Agents, but it may not be exactly right.  More work is required to establish agreement and common interfaces, features, and containerization/orchestration methodologies.  Docker and Kubernetes have many of the capabilities that support the functionality envisioned for Digital Twin Agents in the architecture.

Figure 4 - Authors' Digital Twin Agent concept

One important point of distinction is that many people jump to the conclusion that a Digital Twin is a virtual “mirror” or an emulator of the physical hardware or entity being represented that accurately mimics the operational behavior of the asset.  After much thought and discussion, some of the authors believe that is not necessarily true - and it is why the authors use the term “agent” to make the distinction that this concept may not align with some implementations of Digital Twins.  

In some IoT implementations, Digital Twin emulators are used to support simulation - this type of Digital Twin may be used in power flow modeling or planning types of exercises.  In this case, the Digital Twin is quite sophisticated and requires an operating environment that has significant horsepower, such as a server. It is a “mirror” of the physical asset that includes an emulation engine, mirroring the full behavior of the asset.

The second type of Digital Twin is a communication abstraction, providing services and systems a “bridge” to an asset.  It standardizes communication to assets of the same class through “adapters” that translate semantic message payloads into whatever protocol the hardware speaks (e.g., Modbus, OPC, DNP).  In fact, some IoT developers would simply call these adapters rather than a form of Digital Twin.  No matter what you call it, the adapter form of a Digital Twin Agent within the EnergyIoT architecture not only performs the adaptive communication, but also includes event handlers, an archivist, properties, and methods - so it is more than just an adapter.
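Below is a hedged sketch of what the adapter form of a Digital Twin Agent might look like: an event handler dispatches semantic payloads, a method translates a set-point into a protocol write, and an archivist records every handled event.  Since no energy standard for digital twins exists yet, the class interface, verbs, and register map are entirely hypothetical:

```python
# Hypothetical adapter-form Digital Twin Agent: translates a semantic
# command into a device-protocol write and archives the event. No
# standard for this interface exists yet; all names are invented.
class MeterTwinAgent:
    def __init__(self, asset_id: str, transport):
        self.asset_id = asset_id
        self.transport = transport      # e.g., a Modbus or DNP client object
        self.properties = {"kw_limit": None}
        self.event_log = []             # the "archivist"

    def handle_event(self, event: dict) -> None:
        """Event handler: dispatch semantic payloads to protocol actions."""
        if event["verb"] == "set_limit":
            self.set_limit(float(event["kw"]))
        self.event_log.append(event)    # archive every handled event

    def set_limit(self, kw: float) -> None:
        """Method: translate the semantic set-point to a register write."""
        self.properties["kw_limit"] = kw
        self.transport.write_register(0x10, int(kw * 10))  # hypothetical map
```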

Digital Twin Agent adapters are compact, have relatively low processing power requirements, and can be physically located in the cloud or on the OT physical system near the asset or as part of the asset.  For the purposes of the EnergyIoT architecture, adapter functionality is a minimum requirement of a Digital Twin Agent, while having the additional emulator capabilities would be helpful when performing simulation and optimization processes.

Security and Identity Management

In a data-centric ecosystem, every bit of information can be inspected as it travels through the system.  The data can be filtered to check that it is within appropriate operational limits, searching for potential “spoofing” of an authenticated data publisher.  Intelligent analytics can be trained to search for “bad actors”, tampering, theft, and intrusions.  Any anomalous system behavior can be detected, flagged, quarantined, and/or have a human dispatched to inspect in person.  The EnergyIoT ecosystem described in these articles has security “designed in”, allowing us to apply the most sophisticated analytics the industry has to offer, which will continuously improve, “learn”, and adapt from larger and larger data sets.

Figure 5 - Example of how built-in security services could support the EnergyIoT architecture

The image above provides an example of how security is “built in” to an event-driven, data-centric architecture.  The orange-colored objects provide opportunities for security microservices to do the following (sketched in code after the list):

  1. Ensure that data entering the system is from an authorized and authenticated source
  2. Ensure data is within normal operating limits
  3. Ensure that business systems and other actors are authorized and authenticated to view the data
  4. Provide mitigation services when any of the above three conditions is not met, escalating from flagging erroneous data and its source, to quarantining data publishing sources, to dispatching a human to perform a visual inspection
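A minimal sketch of the first two checks and the mitigation hook, assuming a hypothetical publisher-key registry and operating limits:

```python
# Hedged sketch of a data-ingress security microservice: authenticate the
# publisher, range-check the value, and escalate on failure.
# Registry contents and limits are illustrative.
AUTHORIZED_PUBLISHERS = {"M-1234": "key-abc"}   # hypothetical key registry
LIMITS = {"voltage": (114.0, 126.0)}            # normal operating band


def ingest(message: dict) -> bool:
    # 1. Authorized and authenticated source?
    if AUTHORIZED_PUBLISHERS.get(message["source"]) != message["api_key"]:
        quarantine(message, reason="unauthenticated source")
        return False
    # 2. Within normal operating limits?
    lo, hi = LIMITS[message["quantity"]]
    if not (lo <= message["value"] <= hi):
        quarantine(message, reason="out-of-range value")
        return False
    return True


def quarantine(message: dict, reason: str) -> None:
    """Mitigation hook: flag, quarantine, or dispatch a human (step 4)."""
    print(f"flagged {message['source']}: {reason}")
```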

Software Development and Source Code Management

The Energy Services Cloud supports the development community through its DevOps environment.  Source code management is a key component of any development environment.  All of the major cloud vendors provide built-in services for software developers.  The development environments support multiple languages, operating systems, and target deployments (cloud, on-premises servers, devices).  The software development services include:

  1. Development
  2. Collaboration and reuse
  3. Version management
  4. Test
  5. Deployment

Orchestration

For the purposes of this orchestration discussion, the authors are not describing the orchestration of containers (e.g., Kubernetes), but rather a simple “What You See Is What You Get” (WYSIWYG) development toolset.  This may or may not be part of the Software Development toolkit described in the previous section.  One example is “Node-Red”, a visual, open-source development environment (other such toolsets may yet be developed).  The “Node-Red” IoT software development tool has a large existing user community and hundreds of pre-built software objects that support rapid development by “wiring” different objects together and “orchestrating” the processes, similar to National Instruments’ LabVIEW.  This tool was designed with IoT in mind.

Figure 6 - Node-Red IoT Development Tools (source: https://node-red.org)

Node-Red has an event-driven architecture built on top of node.js and can run natively on many low-cost single-board hardware platforms, including Raspberry Pi, BeagleBone, and Android.  As an orchestrator, it supports publish/subscribe message buses like MQTT simply and seamlessly.  Flows are edited in a browser environment, and the runtime works on all major operating systems.  At the time of this writing, there were 3,228 open source contributions, a number that grows all the time.

The Energy Services Cloud would ideally have its own library of software objects and orchestration products that could be contributed by vendors to support connecting to their Digital Twin Agents and by software development companies to create new capabilities and expose innovative services to other developers.  Some independent validation and rules will be necessary, but the key point is that tools like the Node-Red environment allow for rapid development and reuse of node objects that will evolve and grow in number over time.  Applications developed in these tools can be deployed using containers and associated orchestration services, which will be discussed in the following section.

Cloud Microservices and Container Technologies

Microservices are Application Programming Interfaces (APIs) for Service-Oriented Architectures (SOA) that decompose software into the smallest practical units of functionality, with the intended purpose of maximizing reusability by other services and applications.  Microservices and container technologies are modern software techniques with compelling implications for the energy industry, offering opportunities to “virtualize” Operational Technology (OT) physical assets and create a much more interoperable electric power grid.  Together they form the “fabric” of the Energy Services Cloud’s abstraction capabilities.

The Open Group defines a service as supporting four foundational principles:

  1. It logically represents a business activity with a specified outcome.
  2. It is self-contained.
  3. It is a black box for its consumers.
  4. It may consist of other underlying services.

There are good and bad things associated with microservices.  The good is that microservices can dramatically reduce development and testing requirements, resulting in more rapid software development.  The bad is that using an individual vendor’s microservices can “lock in” software development teams, who become dependent on proprietary microservices that will not easily port to another vendor’s solution.

Containers package the minimum set of Operating System (OS) components and software dependencies an application needs to run in isolation, much like a lightweight Virtual Machine (VM).  In other words, virtualization is performed through the use of containers.  Docker is the preeminent tool for creating containers, which can be deployed to a variety of hardware devices including servers, personal computers, industrial computers, and single-board computers like the Raspberry Pi.
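As a hedged sketch of the spin-up-on-demand pattern described earlier for Digital Twin Agents, the snippet below starts and stops a short-lived container with the Docker SDK for Python; the image name and environment variable are hypothetical:

```python
# Hedged sketch: programmatically spin up a short-lived container
# (the on-demand pattern envisioned for Digital Twin Agent adapters).
# The image name "meter-twin:1.0" and ASSET_ID variable are hypothetical.
import docker

client = docker.from_env()
twin = client.containers.run(
    "meter-twin:1.0",
    environment={"ASSET_ID": "M-1234"},
    detach=True,       # run in the background while we talk to the asset
    auto_remove=True,  # container is discarded when it exits
)
# ... exchange messages with the asset via the twin ...
twin.stop()            # the twin "goes away when you are done"
```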

The real “magic” of containers is in the orchestration services.  The most popular orchestration system, Kubernetes (pronounced Koo-ber-net-eez), provides automated deployment, scaling, redundancy, and management of containers.  Kubernetes monitors and manages container deployments and can quickly spin up new or redundant containers if something goes wrong, managing the overall health and operation of the deployment.

The authors’ Digital Twin Agent concept presented previously is a form of a microservice that may use container technology for virtualization.  It would be deployed using an orchestration product like Kubernetes.

Artificial Intelligence and Optimization

Perhaps the most exciting opportunity of the electric power industry’s transition to an EnergyIoT architecture is the ability to apply today’s (and tomorrow’s) most advanced analytic systems.  One analytic opportunity was showcased earlier in the Security and Identity Management section: identifying bad actors, spoofing, theft, tampering, and intrusions.  The same types of Artificial Intelligence (AI) and deep learning techniques can be applied to all types of operational, market, planning, forecasting, settlement, and billing efforts.  Analytics are especially effective in an event-driven, data-centric ecosystem where the data is meaningful and available to authorized systems through reusable microservices.  System vendors will rapidly adopt these analytic tools to enhance their capabilities, predict and solve problems before they occur, and optimize the customer business functions their systems support.  The number and breadth of opportunities are boundless, enabling a Renaissance era for the electric power industry as these tools learn and the operation of businesses, systems, and assets becomes “faster, better, and cheaper”.

Service Oriented Architectures (SOA), Message Buses, and Message Payloads

SOA, message buses, and message payloads have been around for decades.  The basic premise of SOA is the idea of “loosely coupled” services that provide some business value and can be reused by other services and applications.  It is a rather simple concept with roots dating back to the early 1990s and the advent of Visual Basic eXecutables (VBX) and ActiveX reusable components.  These early software components allowed developers to embed highly complex capabilities into their own applications, using the components’ Events, Properties, and Methods to abstract the inner logic as a “black box”.  Conceptually, this is the same loosely coupled foundation that SOAs provide.

Message buses, sometimes referred to as message-oriented middleware, orchestrate communications between different services and applications.  Some message buses include a message queue and “broker” that perform routing logic in a structured way.  Some brokers may even provide “quality of service” and guaranteed message delivery services for some routing situations.

Some forms of message buses route messages in “broadcast” or “multicast” modes, allowing any approved actor on the bus to see the message.  This is not the optimal paradigm for security, and it creates additional processing overhead for each actor on the bus, which must determine whether each message was intended for it.  Instead, the publish/subscribe (pub/sub) message bus technique adds an extra layer of protection through a concept called “topics”.  Authorized Publishers can publish topics to authorized Subscribers, providing highly granular and structured management of data, the actors that can send it, and the actors that can receive it.  Data is routed in a peer-to-peer fashion and can be encrypted for an additional level of security.
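A hedged sketch of a topic-scoped subscriber using paho-mqtt (1.x-style API); the broker, credentials, and topic filter are hypothetical, and per-topic authorization is assumed to be enforced by the broker:

```python
# Hedged sketch of topic-scoped pub/sub with paho-mqtt (1.x-style API).
# Broker, credentials, and topic names are hypothetical; the broker is
# assumed to enforce which actors may publish or subscribe to each topic.
import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")


sub = mqtt.Client()
sub.username_pw_set("settlement-svc", "secret")  # authorized subscriber
sub.on_message = on_message
sub.connect("broker.example.local", 1883)
sub.subscribe("meters/+/reads")  # only the topics this actor is granted
sub.loop_forever()
```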

Message payloads can be schedules, variables, or any type of structured or unstructured data.  Using today’s compression and binary encoding capabilities, payloads can be extremely small, allowing low-latency communication.  Agreement on common messaging standards is critical to ensure interoperability.  Use case and information modeling methodologies can be used to define common payloads for different grid operation, market, and business functions.  Much of this work has already been accomplished through groups like the IEEE and IEC (two of the prominent information model authors) and can be quickly leveraged to support standardized messaging payloads.
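As a toy illustration of how small a compressed payload can be, using only the Python standard library; the field names are invented, not drawn from CIM or IEC 61850:

```python
# Toy illustration: a small semantic payload compresses to a few dozen
# bytes, supporting low-latency messaging. Field names are invented,
# not drawn from CIM or IEC 61850.
import json
import zlib

payload = {"meter_id": "M-1234", "ts": 1558104000, "kwh": 2.0}
wire = zlib.compress(json.dumps(payload, separators=(",", ":")).encode())
print(len(wire), "bytes on the wire")

original = json.loads(zlib.decompress(wire))
assert original == payload  # lossless round trip
```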

SOA, message buses, and standardized message payloads are a foundational piece of the EnergyIoT architecture that promotes rapid integration and interoperability of new grid assets and systems.

Conclusion

The Energy Services (DevOps) Cloud domain (Green Cloud) is the heart of the overall EnergyIoT ecosystem.  This domain is the abstraction layer that dramatically simplifies integration of new assets and systems.  Its microservices provide common “black boxes” to access data, apply analytics and deep learning techniques, interoperate with other services and applications, connect to and aggregate grid assets through Digital Twin Agents, and perform virtualization, containerization, and orchestration.  The most advanced identity management and security tools are available as part of the native environment.  The domain is scalable, flexible, adaptable, and extensible.  With the inclusion of a DevOps environment, the time and cost of systems integration are substantially reduced through common services and standardized data definitions.  The architecture is data-centric.  The architecture is event-driven, so grid assets can report state information only when something changes or on a timed basis, reducing communication traffic, non-actionable message payload processing, and the amount of data to store and analyze.

The company that develops the “green cloud” portion of the EnergyIoT architecture will not only benefit financially, but will also earn “social innovation” kudos from the rest of the world for providing the mechanism to modernize the electric industry’s technology, directly address the reduction of greenhouse gases (GHG), and enable the rapid integration of more economical generation sources such as grid-scale PV.  This is a big deal - a moonshot opportunity for mankind.  But it will take great vision, dedication, and commitment from one or more technology companies with both the financial and human resources and a strong understanding of the specific needs of the electric power industry.  If you are one of those companies, feel free to contact any of the authors at Energy Central or on LinkedIn.

This is the sixth in a series of EnergyIoT articles proposing a fundamentally different architecture to solve the problems of today, propelling the electric power industry into the 21st century and beyond.  The seventh and final article, “EnergyIoT Article 7 – The Roadmap and Next Steps”, will be published next week on Energy Central and LinkedIn.

 The rest of the article series can be found here:  

About the Authors

Stuart McCafferty, IoT Architect, Black & Veatch

Stuart McCafferty is an accomplished Smart Grid technical executive with an innovative history; strong relationships in the utility and vendor communities; and experience in business and partner development, platform and solution design, go-to-market planning and execution, and the practical application of existing and emerging/disruptive technologies. Prior to B&V, he was VP of EnergyIoT for Hitachi America, where he led the architectural design of a distribution system platform supporting microgrid and Distributed Energy Resource (DER) related businesses.  At B&V, Stuart supports the utility, technology, and vendor communities in strategy and the pragmatic application of DER, combining IoT best practices and technologies with energy standards and protocols.

Thought leader in the Internet of Things (IoT), Big Data, Cloud Computing, Artificial Intelligence (AI), Machine Learning, and connected home with practical application within the Smart Grid ecosystem. Expert in utility IT/OT and the application of DER and microgrids for resilience, economics, and reliability.

Stuart is a US military veteran, Air Force Academy graduate, an Energy Fellow for community resilience at the National Institute of Standards and Technology (NIST), an Energy “Expert” for Energy Central, and Vice Chair of the Open Field Message Bus (OpenFMB) user group.

David Forfia, Gridwise Architecture Council Chair

David has been the Chair of the GridWise Architecture Council since 2015 and a council member since 2013.

The GridWise Architecture Council (GWAC) is a team of industry leaders who are shaping the guiding principles of a highly intelligent and interactive electric system. The Council is neither a design team nor a standards-making body. Its role is to help identify areas for standardization that allow significant levels of interoperation between system components. More about the Council can be found at www.gridwiseac.org.

Eamonn McCormick, Chief Technology Officer, Utilicast

Eamonn McCormick is the CTO at Utilicast, a leading energy industry consultancy. Eamonn is a passionate believer in the bright future of the energy industry and in collaboration as the foundation for solving our current industry challenges. He is a results-driven technology leader with a track record of success, having implemented strategic technology change at several large energy companies over the last twenty years, primarily in the areas of wholesale markets, transmission, and energy distribution. In addition, Eamonn is currently chief architect of the Energy Block Chain consortium.

 


Discussions

Jim Horstman on May 13, 2019 10:01 pm GMT

I've been waiting for this one to see if it addressed any of the questions/comments I had previously posted. First was the inclusion of software development within the Energy Services Cloud. DevOps is a software development methodology that is independent of the use of the software, in this case for the utility world, so why include it here? Being a CIM bigot, I certainly appreciate the use of standards for message exchanges and the concept of reusable services, etc. but including DevOps in the architecture seems a stretch.

I'm also not a big fan of 'buzz' terms like Digital Twins since they are primarily marketing hype and pretty soon come to mean whatever people want them to mean. Throwing in Digital Twin Agent only adds to the confusion since it seems to be, as I think you sort of acknowledge, somewhat of a stretch to associate it with the Digital Twin concept. The Agent concept is good but maybe needs a different name that won't cause confusion as to its purpose.

While on the DT subject I would also like to see the concept of the network model and the management and services related to the model incorporated in the architecture. In a current EPRI project on Network Model Management (NMM) we had some discussion relative to the digital twin concept and whether or not we should be using that term in our work. We pretty quickly decided against that as we felt that a digital twin would be just one use of a network model.

In the area of services, an environmental data service (which includes weather but also other things like fire and earthquake) would be beneficial. Environmental information has been incorporated into the CIM. While weather information is used for load forecasting as noted in your architecture, it plays a far broader role. A current 'hot' topic is the use of weather to predict outages. Of course in California, in particular, fire information is also, I don't want to use the term hot here, a big topic.

Great work on the architecture!

 

Stuart McCafferty on May 21, 2019 2:04 pm GMT

Hi Jim, sorry for the delay in responding.  Things have become crazy.

I really appreciate all your thoughtful comments.  Let me try to address them with the same thoughtfulness.

The software development piece is necessary.  It could be a cloud of its own, frankly, but I think it fits in the services cloud pretty well.  In my own personal thinking, anything portrayed in the clouds could be anywhere, and it is likely we will see many different competitive options from many different companies.  In no way are we implying that this is one company or one solution, but there is a real possibility that one or more cloud companies will provide "one stop shopping" for a majority of the services we describe in the article.

I agree with your comments on the Digital Twin.  The term is overloaded and vague.  It has different meanings to different people.  Eamonn, David, and I argued about DTs more than anything.  I eventually just pushed my own thinking into the article.  I was as specific as possible and believe there are different levels of DTs - from a simple translator to a very sophisticated emulator.  The DT model we provided in the article is something I have personally been chewing on for several years.  I started drawing it a couple years ago and have continued to refine it as my thinking has changed.  The idea of using Kubernetes and container technology makes sense in my opinion.  Throwing all those thoughts out there was the most risky part of this entire series.  We could be wrong, and I admit that freely.

I would love to see someone put out some articles on network models.  We don't have an expert on our team in that area.  So, if anyone is reading this and has that expertise and thoughts that can help others zip all of this up into a practical solution, please contact me here on EC or LinkedIn. 

As for the environmental data, I couldn't agree more.  And, that is the beauty of a data-centric architecture and the use of well-designed standards like CIM. This is just common sense and nothing we talk about in this series is rocket science or anything that people aren't already doing in other industries.

Thanks again, Jim.  I really appreciate your opinions and willingness to share them.  This is the whole reason we wrote the series.

Jim Horstman on May 21, 2019 7:37 pm GMT

Stuart, here's a link to an article I wrote on EPRI's grid model data management project. It contains links with further information. https://www.energycentral.com/c/iu/stepping-manage-distribution-grid-model-data

Michael Ash on Sep 2, 2019 4:07 pm GMT

Hey Stuart, this is a meaty tome compared to the first article we co-authored back in Aug 2005: Advanced Metering Infrastructure: An Open Systems Abstraction Layer Strategy.  I appreciate the considerable effort and time investment it took to generate an article of this magnitude. Your insights have grown exponentially since that first article. I think your contribution to an open-source approach that, in the future, will drive energy efficiencies by reducing complexity will be an incremental solution to the complex problem of global warming. I am sure your work here will inspire others to make future contributions as well. It is only through efforts like these to produce new insight that we can tackle a problem of this magnitude.
