Predicting the future of utility analytics

By Dan Brancaccio 


The term smart grid has been around for some time now, although its meaning has shifted over the years. In its earliest usage, it was applied to systems that used data to perform power quality studies and post-event analysis. More recently, the term has come to mean using data to implement a self-healing grid. Self-healing is itself a term with multiple definitions, ranging from standard protection relays and breakers to distribution-level fault location, isolation and service restoration (FLISR). There are multiple definitions of what constitutes a smart grid, but the common denominator is that they all rely on data.

Alongside modern controls, the latest term of art in the utility industry is big data. Some new data sources, like AMI and synchrophasors, deliver significantly more data than older SCADA systems. New applications have been developed and deployed to take advantage of these sources, including wide-area situational awareness, revenue protection, forecasting, modal analysis and others. 

All of these new applications are designed to bring additional levels of actionable information to the utility industry. Without accurate and available data, the quality of information is called into question and the promise of efficiency obscured. 

Accuracy, availability and latency

Data accuracy is controlled by the measurement devices and their connection to the assets they measure. Once installed and calibrated, such a device will remain accurate if properly maintained. Even the simplest communication protocols perform error checking, so any corruption of a measured value in transit will be detected. As such, if a measured value is accurate at the source, it will be accurate when delivered to its destination. 
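That error checking can be illustrated with a CRC round trip; a minimal sketch, assuming a sender that appends a CRC-32 to each payload (the framing here is illustrative, not any particular SCADA protocol):

```python
import zlib

def frame(payload: bytes) -> bytes:
    # Sender appends a CRC-32 of the payload (4 bytes, big-endian).
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(msg: bytes):
    # Receiver recomputes the CRC; a mismatch means the value was
    # corrupted in transit and must be discarded or re-requested.
    payload, crc = msg[:-4], int.from_bytes(msg[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

good = frame(b"MW=112.5")
assert check(good) == b"MW=112.5"          # intact value passes
corrupted = b"MW=999.9" + good[-4:]        # payload altered in transit
assert check(corrupted) is None            # corruption is detected
```

The point is that a checksum detects corruption; it does not, by itself, address whether the value was accurate at the source in the first place.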

Availability is a much more complex issue. Even when working perfectly, most data delivery infrastructures (i.e. communications networks) were designed to minimize bandwidth utilization. To deliver data effectively over systems built on 1200 baud modems and equivalent infrastructure, every attempt had to be made to minimize how frequently measured values were transmitted. Measurement systems like synchrophasors and AMI use newer, more capable communications infrastructures. However, if they are not designed properly, even these systems can suffer data availability issues. 

Another key to availability is latency. An extreme example of high latency is the data collected by a protection relay during a breaker operation. This data, considered non-operational (i.e. not required to operate the system), can supply valuable information to the smart grid. However, acquiring it often involves rolling a truck to a substation and manually collecting the data. 

Latency plays an important role when data is used for system automation. Data archiving also matters to availability. Beyond near real-time uses, many applications require access to historical data, from older systems like SCADA to newer synchrophasor and AMI systems. Without proper architecture, these archives can quickly become overwhelmed by the quantity of data, or, due to poor organization, the data can become difficult to extract.

These issues with accuracy, availability and latency can apply to a number of situations and jobs across the industry. Below are a few examples:

Planning engineers: Planning engineers are responsible for determining which systems may need upgrades to meet future needs. Their engineering studies require a good baseline of data, and archived SCADA data is a good candidate for system baselining. However, there can be issues with this approach because SCADA data collection is configured for system operation, which in most cases is not optimal for planning. 

To minimize bandwidth utilization, SCADA systems are set up with wide dead-bands, resulting in significant gaps in the archived data. When planning engineers compare measured values from SCADA against usage measurements taken from meters, inconsistencies appear as a result of those gaps. Poorly implemented naming conventions can also be a subtle form of data availability issue: if an end-user cannot find the archived data, it is as if the data were unavailable. This is a perfect example of a data availability issue affecting the performance of a task.
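The effect of wide dead-bands can be illustrated with a small simulation; a hypothetical exception-reporting archiver that stores a new sample only when the value moves more than the dead-band from the last stored value:

```python
def archive(samples, deadband):
    """Store a sample only when it differs from the last stored
    value by more than the dead-band (exception reporting)."""
    stored = [samples[0]]
    for t, v in samples[1:]:
        if abs(v - stored[-1][1]) > deadband:
            stored.append((t, v))
    return stored

# One reading per minute from the field: a slow 0.3 MW/min ramp...
field = [(t, 100.0 + 0.3 * t) for t in range(10)]
# ...but a 2 MW dead-band keeps only a fraction of the samples.
kept = archive(field, deadband=2.0)
print(len(field), "field samples,", len(kept), "archived")
```

A planning engineer reconstructing the load trend from the archive sees a flat line with an occasional step, while the meter at the customer end records the actual ramp, which is exactly the kind of inconsistency described above.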

FLISR: After a fault, the distribution utility has to locate and isolate the faulted section and restore the unfaulted areas. Quicker restoration following a fault improves reliability indices and customer satisfaction. Although FLISR can improve restoration times, some FLISR systems are implemented without first ensuring SCADA data availability, and the result is a FLISR deployment that is less than optimal. Utilities that decide to implement FLISR should first ensure the data it requires is accurate and available. Needless to say, when data is available, it can expedite restoration.
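The locate-isolate-restore sequence can be sketched on a simple radial feeder model; a hypothetical four-section feeder with a normally-open tie switch at its far end (all names illustrative):

```python
# Sections of a radial feeder, fed from the left; a normally-open
# tie switch at the far end connects to a neighboring feeder.
sections = ["S1", "S2", "S3", "S4"]

def flisr(faulted):
    """Return a switching plan for a fault on the given section."""
    i = sections.index(faulted)
    return {
        "isolate": faulted,                      # open switches around the fault
        "restore_upstream": sections[:i],        # still fed from the source
        "restore_downstream": sections[i + 1:],  # back-fed via the tie switch
    }

plan = flisr("S3")
print(plan)  # isolate S3, keep S1/S2 on the source, pick up S4 via the tie
```

Note that every step of this plan depends on data: the fault location comes from measurements, and each switch command assumes the SCADA points for those switches are accurate and reachable.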

Revenue protection: Some estimates put revenue loss due to theft at one percent for large utilities. Recently deployed smart meters can feed analytics systems that perform theft detection. Most revenue protection analytics require, at a minimum, customer usage data from smart meters and archived SCADA data. The initial design of AMI and MDM systems may have focused exclusively on customer billing; in that case, data availability is compromised because the system architecture makes the data difficult to integrate. 
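A minimal theft-screening check along these lines compares feeder energy from the SCADA archive with the sum of AMI reads; the figures, loss percentage and threshold below are all illustrative assumptions:

```python
# Energy delivered to a feeder (from the SCADA archive) versus the
# sum of AMI interval reads for the meters on that feeder (from MDM).
feeder_kwh_scada = 10_450.0
ami_reads_kwh = {"mtr01": 3_200.0, "mtr02": 2_950.0, "mtr03": 3_100.0}

technical_loss_pct = 3.0  # assumed line/transformer losses

billed = sum(ami_reads_kwh.values())
expected = feeder_kwh_scada * (1 - technical_loss_pct / 100)
unaccounted = expected - billed

# Threshold is an assumption; real programs tune this per feeder class.
if unaccounted / expected > 0.02:
    print(f"Flag feeder for investigation: {unaccounted:.0f} kWh unaccounted")
```

The comparison only works if both sides are available for the same feeder and time window, which is precisely where billing-centric AMI/MDM architectures make integration hard.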

Wide-area situational awareness: Recent deployments of synchrophasor technologies have introduced new capabilities to the utility industry. Multiple analytic applications promise improved wide-area situational awareness, giving neighboring utilities better tools to coordinate activities and mitigate inter-area system effects. While synchrophasor data enables these new tools, the high report rate of 30, 60 or 120 measured values per second adds a new level of complication to existing utility communication infrastructure. Although legacy serial communication systems can carry synchrophasor data, it is best to design networks around newer technologies that support TCP/IP or UDP. Even these can, if not designed properly, result in data availability issues because of the high transmission rates and the use of protocols that are not ideal for packet-based networks.
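The bandwidth implications of those report rates are easy to estimate; a back-of-the-envelope sketch, where the frame size is an assumption (actual IEEE C37.118 frame sizes vary with phasor count and data format):

```python
# Rough bandwidth estimate for one synchrophasor stream.
frame_bytes = 128     # assumed data-frame size for roughly 10 phasors
overhead_bytes = 28   # IPv4 (20) + UDP (8) headers per frame

for rate in (30, 60, 120):                    # frames per second
    bps = (frame_bytes + overhead_bytes) * 8 * rate
    print(f"{rate:>3} fps: {bps / 1000:.1f} kbit/s per PMU")
```

Multiply by dozens or hundreds of PMUs and the load dwarfs a traditional 2- to 4-second SCADA scan, which is why serial-era links and poorly planned networks become availability bottlenecks.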

Asset health: Asset health, system health and computer-based maintenance are all terms used to describe automated techniques for determining the likelihood of failure for a utility asset. These systems range from a fairly simple spreadsheet (where utility engineers manually rate certain key asset parameters, resulting in a health score for the asset) to complex, rules-based analytic applications. 

These complex systems become increasingly dependent on accurate and available data. For example, a complex asset health environment for a transformer might retrieve loading information from the SCADA archive; nameplate information and date of purchase from an asset management system like SAP; maintenance data from workforce automation systems; dissolved gas measurements from automated or manual analysis; breaker operation counts from SCADA; and non-operational data, such as ambient temperature or oscillographic waveform data, captured by protection relays.

There are also current attempts to bring synchrophasors into the mix for determining asset health. It's important to note that asset health relies on data from multiple sources, and all of them must be accurate and available. In this case, availability depends not just on the measurement equipment and the communication network but also on the data archiving systems and, most importantly, the data integration architecture.
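A rules-based health score of the kind described can be sketched by normalizing each input and combining them with weights; everything below (inputs, limits, weights) is an illustrative assumption, not an industry standard:

```python
# Hypothetical inputs for one transformer, drawn from the source
# systems described above.
inputs = {
    "loading_pct_of_rating": 92.0,   # from the SCADA archive
    "age_years": 34,                 # from asset management (nameplate)
    "dga_h2_ppm": 180.0,             # dissolved gas analysis
    "breaker_ops_last_year": 47,     # from SCADA event counts
}

def score(x, low, high):
    """Map a measurement to 0 (bad) .. 1 (good) between two limits."""
    return max(0.0, min(1.0, (high - x) / (high - low)))

weights = {"loading": 0.3, "age": 0.2, "dga": 0.35, "ops": 0.15}
health = (
    weights["loading"] * score(inputs["loading_pct_of_rating"], 60, 120)
    + weights["age"] * score(inputs["age_years"], 10, 50)
    + weights["dga"] * score(inputs["dga_h2_ppm"], 50, 700)
    + weights["ops"] * score(inputs["breaker_ops_last_year"], 10, 100)
)
print(f"Composite health score: {health:.2f}")  # 0 = at risk, 1 = healthy
```

The fragility is visible in the structure: if any one source is stale or unreachable, the composite score silently degrades, which is why the integration architecture matters as much as the individual measurements.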

Improving data accuracy and availability

Accuracy can be improved through vendor contracts and measurement equipment maintenance. Most utilities institute asset maintenance plans, but few have dedicated policies and procedures for maintaining measurement devices and associated equipment. The same can be said for communications equipment: while utilities have become increasingly aware of the need to maintain communications systems, issues are typically resolved only as they arise. As noted earlier, availability is a more complex issue than accuracy, so utilities can take the following actions to improve it:

* Institute measurement and communication equipment maintenance policies and procedures

* Develop dashboards to display availability KPIs

* Determine application and end-user data availability requirements

* Build cross-functional teams to address availability issues

* Investigate new high availability, low latency communication architectures

* Develop and enforce data naming conventions

* Take time to architect data archive and data warehouse before loading data 

* Address data integration requirements

* Explore new communication protocols better suited for streaming data, like synchrophasors
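The availability KPI suggested above can be computed by comparing expected scan intervals to received samples; a minimal sketch, with the scan interval and all data assumed for illustration:

```python
from datetime import datetime, timedelta

def availability_pct(received, start, end, interval):
    """Percentage of expected scan intervals that produced a sample."""
    expected = int((end - start) / interval)
    # Count at most one sample per interval slot.
    slots = {int((t - start) / interval) for t in received if start <= t < end}
    return 100.0 * len(slots) / expected

# Simulate a 4-second scan over one hour with every tenth sample lost.
start = datetime(2024, 1, 1)
received = [start + timedelta(seconds=4 * i) for i in range(900) if i % 10]
pct = availability_pct(received, start, start + timedelta(hours=1),
                       timedelta(seconds=4))
print(f"Availability: {pct:.1f}%")
```

A figure like this, trended per point or per feeder on a dashboard, turns availability from an anecdote into a KPI that cross-functional teams can act on.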


Left within individual silos, data is nothing exciting. Properly collected and integrated, however, it can give up the secrets needed to keep the grid fully optimized. Ensuring that data is accurate and available is the key to unlocking the promise of advanced analytics solutions for the future. When deciding to leverage existing or new data sources to improve the visibility and operation of a utility system, it is a good idea to look at how other utilities have addressed data accuracy and availability issues, and at the efficiency and communications gains they achieved in the process. 

Dan Brancaccio is principal consultant at BRIDGE Energy Group. 


This article is from the fall #bigdatainvasion print issue. 

