Managing Severe Weather: How utilities can move from reactive outages to proactive event management

On a cold December evening, an executive at a large northwestern electric utility sent Matt Schnugg a short, blunt text: bad weather could hit over Christmas Eve.

The message wasn’t dramatic — it was a reminder. Utilities, Schnugg told Energy Central while preparing for a recent episode of Power Perspectives, routinely ask emergency operators to give up their holidays to keep the lights on. That quiet devotion is part of the reason the stakes of severe-weather planning have never been higher.

Add in the hard fact that roughly 80% of U.S. power outages are tied to weather, and it’s clear why the industry is rethinking how it prepares for, responds to, and recovers from both routine and major weather events.

 

The old playbook

The problem is familiar: too many utilities still operate on “last storm” playbooks, stitched together across departmental silos, legacy systems and manual workarounds. Those practices slow decision-making, degrade data trust, and magnify cognitive load for operators in the worst moments. As outlined in more detail in a recent white paper, static data, paper-based field processes, and stand-alone outage tools make utilities inherently reactive rather than proactive — and that latency matters when a storm or fire is closing in.

 

Shifting the mindset

“Holistic event management” is the phrase that Schneider Electric uses with its partners to describe an end-to-end approach that spans planning, real-time response, and after-action learning. But what that really means is straightforward: managing and mitigating the impact of severe weather, and doing so with urgency.

At its core, the approach has three parts: build a trusted, asset-centric digital twin of the network; stitch together external risk data (weather, vegetation, traffic, critical facilities); and give operators filtered, actionable choices rather than raw noise.

Here’s what that lifecycle looks like in practice, and why each phase changes the game:

Predict: see the risk before it shows up
Prediction starts with a high-fidelity network model and layered risk data. By combining GIS-based digital twins with third-party risk models and weather feeds, utilities can run scenario simulations, identify vulnerable corridors and pre-position resources where they’ll do the most good.

Schneider Electric’s ArcFM suite and EcoStruxure Asset Management System, AiDash’s Climate Risk and Vegetation Management systems, and Technosylva’s Firesight are examples of the mapping and risk content that make those “what-if” exercises possible. The result: plans that anticipate the next event, not the last one.
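To make the “layered risk data” idea concrete, here is a minimal sketch of how forecast weather, vegetation, and asset-condition layers might be blended into a single corridor score for pre-positioning crews. The field names, weights, and threshold values are illustrative assumptions, not any vendor’s published model:

```python
from dataclasses import dataclass

# Hypothetical, simplified risk layers for one feeder corridor; real inputs
# would come from the GIS digital twin, weather feeds, and vegetation models.
@dataclass
class Corridor:
    feeder_id: str
    wind_gust_mph: float    # forecast peak gust
    veg_fuel_score: float   # 0-1, from a vegetation model
    asset_condition: float  # 0-1, 1 = worst condition

def corridor_risk(c: Corridor) -> float:
    """Blend weather, vegetation, and asset layers into one score.

    Weights are illustrative placeholders only.
    """
    wind_factor = min(c.wind_gust_mph / 60.0, 1.0)  # saturate at 60 mph
    return 0.5 * wind_factor + 0.3 * c.veg_fuel_score + 0.2 * c.asset_condition

corridors = [
    Corridor("FDR-101", wind_gust_mph=55, veg_fuel_score=0.8, asset_condition=0.6),
    Corridor("FDR-202", wind_gust_mph=30, veg_fuel_score=0.2, asset_condition=0.3),
]

# Rank corridors so crews can be pre-positioned where risk is highest.
ranked = sorted(corridors, key=corridor_risk, reverse=True)
print([c.feeder_id for c in ranked])  # ['FDR-101', 'FDR-202']
```

In a real deployment the inputs would be geospatial layers and the scoring would come from calibrated third-party models, but the shape of the exercise (layer, score, rank, stage resources) is the same.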

Prepare / Mitigate: harden selectively, automate reliably
Preparation is about targeted actions: adjust relay settings, disable certain auto-reclose functions, stage crews, charge batteries, or, at the extreme end, initiate public safety power shutoffs (PSPS) to prevent ignition.

The difference between ad hoc changes and safe, auditable mitigation is trust in the data and the workflow. Operators need to see risk polygons (asset ignition risk, vegetation fuel scores, fire-spread projections, and critical-asset locations) overlaid with real-time telemetry in the same pane of glass so they can act confidently. That integration is precisely what a platform approach delivers: ADMS orchestration paired with GIS situational awareness.
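The phrase “safe, auditable mitigation” can be sketched in a few lines: a preparation step is applied only when the risk justifies it, and every action is recorded with who, what, when, and why. All names, the threshold, and the in-memory log are illustrative assumptions:

```python
import datetime

AUDIT_LOG = []  # in practice this would be a persistent, tamper-evident store

def apply_mitigation(feeder_id, action, risk_score, operator, threshold=0.7):
    """Apply a preparation step (e.g. disabling auto-reclose) only when the
    risk score clears a threshold, and record the decision for audit."""
    if risk_score < threshold:
        return False  # risk too low; leave normal settings in place
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "feeder": feeder_id,
        "action": action,
        "risk_score": risk_score,
        "operator": operator,
    })
    return True

applied = apply_mitigation("FDR-101", "disable_auto_reclose", 0.82, "op-17")
skipped = apply_mitigation("FDR-202", "disable_auto_reclose", 0.37, "op-17")
print(applied, skipped, len(AUDIT_LOG))  # True False 1
```

The point is not the code but the workflow it encodes: mitigation actions become repeatable, threshold-driven, and reviewable after the event rather than ad hoc.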

Communicate & Monitor: one view, many users
When weather turns severe, everyone needs to be aligned: the control room, field crews, call centers, regulators, and ultimately the public. Interactive outage maps, filtered dashboards that surface critical alarms (e.g., faults near hospitals or evacuation routes), and mobile apps for crews create a single source of truth.

This approach dramatically reduces the “who knows what” delays that cost hours in traditional workflows. ArcFM Web and mobile tools plus ADMS dashboards provide those shared lenses for situational awareness and field execution.
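The “filtered dashboards” idea above, surfacing faults near hospitals or evacuation routes first, reduces to a proximity check against a list of critical sites. The coordinates, site names, and radius below are illustrative assumptions:

```python
import math

CRITICAL_SITES = {  # illustrative (lat, lon) pairs, not real facilities
    "county-hospital": (45.52, -122.68),
    "evac-route-12": (45.55, -122.60),
}

def km_between(a, b):
    """Haversine great-circle distance in km; adequate for filtering."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def prioritize(alarms, radius_km=2.0):
    """Surface fault alarms within radius_km of a critical site first."""
    def near_critical(alarm):
        return any(km_between(alarm["location"], site) <= radius_km
                   for site in CRITICAL_SITES.values())
    return sorted(alarms, key=near_critical, reverse=True)

alarms = [
    {"id": "A1", "location": (45.40, -122.80)},    # far from both sites
    {"id": "A2", "location": (45.521, -122.681)},  # next to the hospital
]
print([a["id"] for a in prioritize(alarms)])  # ['A2', 'A1']
```

A production dashboard would do this with GIS spatial queries rather than a hand-rolled distance function, but the filtering logic it expresses is the same.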

Assess, Restore & Reflect: close the loop faster
Fast restoration requires fast, accurate damage assessment. When crews capture geo-tagged photos and update assets in the field, that information flows back into the GIS and ADMS so operators can prioritize and automate restoration steps like fault isolation and service restoration.

Equally important is the after-action pass: correcting GIS discrepancies, automating change-detection feeds back to operations, and replaying events in operator training simulators so lessons stick. Automating those downstream updates shrinks the time from “lights on” to “lessons learned.”
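The field-to-operations feedback loop described above can be sketched as a single function: a crew’s geo-tagged finding updates the shared asset record and, when damage is confirmed, queues the asset for restoration. The asset IDs, record shape, and in-memory stores are illustrative stand-ins for the vendors’ actual APIs:

```python
# Hypothetical in-memory stand-ins for GIS asset records and an ADMS
# work queue; a real integration would use the platforms' own interfaces.
gis_assets = {
    "POLE-8841": {"status": "in_service", "photos": []},
}
restoration_queue = []

def submit_damage_report(asset_id, condition, photo_ref):
    """Push a crew's geo-tagged finding straight into the shared model so
    operators prioritize restoration from current data, not stale maps."""
    asset = gis_assets[asset_id]
    asset["status"] = condition
    asset["photos"].append(photo_ref)
    if condition == "damaged":
        restoration_queue.append(asset_id)  # feeds ADMS prioritization

submit_damage_report("POLE-8841", "damaged", "img://crew7/8841-1.jpg")
print(gis_assets["POLE-8841"]["status"], restoration_queue)
```

What matters is that the update happens once, at the point of capture, and flows to every downstream consumer, rather than being re-keyed from paper after the storm.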

 

Concrete Examples from Schneider Electric

On the podcast, Schnugg emphasized a human-first insight: severe events overload the people running the grid. He framed the goal as moving from cognitive load to cognitive simplicity: platforms should reduce context switching and deliver the “right information to the right people at the right time.” Doing so requires not just integration but also smart filtering and prioritization (often via analytics or AI) so operators focus on high-impact decisions rather than low-value noise.

Schneider’s experience in this space drives home the opportunity. By integrating critical applications such as GIS (ArcFM), ADMS, and DERMS into an ecosystem that other critical vendors can plug into, utilities can ingest best-of-breed risk models and third-party insights, deliver them inside operator workflows, and move from reaction to orchestration. Schneider Electric’s recently launched One Digital Grid Platform provides the architecture, data services, and security to realize this platform-centric approach. Experts caution, though, that the endgame is platform interoperability, not a single-vendor monopoly, which is why a platform built around a shared ecosystem is so critical.

 

Schneider’s Practical Takeaways

In these journeys, learning from those who have made the leap first is key. From Schneider’s experience, some key areas to highlight include:

  • Treat severe-weather management as a lifecycle investment (predict → prepare → respond → recover → learn), not a one-off project.

  • Prioritize a reliable network model (digital twin) — it’s the backbone for automation and shared situational awareness.

  • Reduce cognitive load with filtered alerts and workflows that let operators execute mitigation steps inside the same interface.

  • Make post-event updates automatic: feeding field findings back into the GIS and operations systems prevents stale data from undermining future decisions.

The bigger point is cultural as much as technical: weather-driven outages are no longer rare anomalies but frequent, systemic threats. Solving them requires vendors, utilities, regulators, and communities to adopt a shared language and shared data. As Schnugg highlighted in wrapping up the podcast, the future of the grid is a “connected ecosystem,” which isn’t marketing jargon so much as a practical roadmap for resilience and for a utility’s ability to keep evolving with escalating threats.

If, as we know, 80% of outages are weather-related, the choice is stark: a) keep patching yesterday’s tools and keep paying the operational price, or b) invest now in integrated, map-driven platforms that let operators act early, communicate clearly, and learn quickly. For utilities that want to move from reactive heroics to repeatable resilience, the path is increasingly clear — and increasingly within reach.

Want to learn more? Visit Digital Grid at Esri's IMGIS in Booth 105. Register for Schneider Electric's social and onsite preparations for the event here.
