
Beating the Air-Gap: How Attackers Can Gain Access to Supposedly Isolated Systems

The US Department of Homeland Security recently reported that Russian hackers have been able to exploit insecure vendors within the power industry in order to gain access to privileged, air-gapped systems inside America’s electric utilities.

While this case is alarming, it should not be thought of as anomalous. Nor should utilities make the mistake of assuming that only a large nation-state like Russia has the capability of conducting such an attack.

For too many years, the industry has falsely assumed that current air-gap measures are a sufficient defense against malicious intrusions into critical systems. This recent case shows that real-world hackers are able to “breach the gap” in US utility systems. It also draws attention to the fact that the methods for doing so are becoming better known within the hacker community, increasing the chances of similar attacks by smaller actors such as cybercriminals.

Stuxnet was the first widely publicized attack that breached an air-gap. Since that time, however, there has been extensive research by security professionals, universities and ‘ethical hackers’ into how an attacker could pass through the air-gap to gain access to ICS components and establish covert channels for remote code execution, malware delivery and data theft.

This should be deeply alarming to utilities, because it is occurring at the same time that non-state actors are becoming more capable. Cybercriminals, “hacktivists,” terrorist groups and others have all gained a high level of sophistication and capability in the past few years. To make matters worse, nation-states (ex: Russia, China, North Korea, Iran) appear to be actively supporting some of these smaller hacker groups as potential proxy hacking forces serving their own geopolitical interests. It is important to realize that state-developed tools leak into the private/criminal sector: once such tools are discovered, they are analyzed, and they are either repurposed directly or the lessons learned from studying them are distributed and incorporated into criminal toolsets.

While state-sponsored cyber operations targeting US utilities are largely limited to espionage (except in times of war), these smaller groups do not share those constraints. This is the real immediate danger and it is why utilities need to take the threat of air-gap bypassing seriously, and to start assuming now that more attackers will begin to use it.

To better understand this threat, let’s first take a closer look at what happened in the Russian intrusions.

Based on the DHS announcement, here is what we know:

·       Hundreds of US utilities were compromised between 2016 and 2017

·       The attackers were Russian state-sponsored hackers

·       The hackers got so far into the affected utilities they could have “thrown switches” and “disrupted power flows” if they had wanted to

·       DHS warns that many utilities do not realize they remain compromised by this hacking group

·       DHS also warns that the Russian attacks on utilities, power plants and other players in the energy space are continuing, as they have for years

How they did it:

·       The hackers began by identifying the vendors of US utilities which have special access to these sensitive networks — for instance, the ability to update software, run diagnostics on equipment, and other services

·       They then targeted these vendors with “conventional tools” used by hackers, including spear-phishing and watering-hole attacks, in order to get inside the vendors’ computer networks

·       Once a vendor was breached, the hackers would then steal credentials which enabled the vendor to access utility networks and operations

·       These credentials gave the hackers direct access to the utility networks

·       Within the utilities, the hackers “vacuumed up information” about how the networks were configured, the type of software and hardware that was used, the administrative accounts which controlled it and more

But how can a hacker go from a compromised vendor to an infiltration of an air-gapped utility network?

Here are four primary methods attackers could use:

Air-Gap?

To start with, it’s important to ask, is there really an air-gap?

As eWeek explains, “Probably the most common danger is assuming air-gapped networks aren’t connected to the internet. In reality, a surprising number actually do have an internet connection that the IT staff misses or doesn’t realize poses a threat.”

They point out that common oversights with air-gapping include: dual-homed computers and servers, rogue WiFi routers, legacy connections and backup connections.

Similarly, a report by SecurityNow cites the erosion of air-gapping due to contractors: “in the past, ICS were well isolated from the outside world. However, third-party contractors and consultants, as well as different system administrators, have gained different levels of access over the years, making these systems more vulnerable.”

Israeli security firm Waterfall notes that remote access for vendors and contractors is a significant and growing problem: “… it was not the Russians who breached the air gaps, it was the utilities themselves. The air gaps were breached by the utilities who installed the firewalls to enable remote access for their vendors … The remote access problem is widespread. Large vendors routinely set up Remote Desktop VPN access into their customers, so the vendors can monitor and adjust equipment at electric utilities remotely.”

Any utility that enables remote access for contractors and vendors is creating a backchannel for hackers to exploit.

Physical Access

However, even with a more robust air-gap in place, there are multiple ways for attackers to piggyback off of physical access methods.

The most common method is to infect a USB drive (or other removable media) which a contractor or employee is likely to connect to the secure computer. This is how the Stuxnet malware was able to disrupt Iran’s nuclear program. Other real-world examples of malware jumping the air-gap by first infecting USBs include “Agent.BTZ,” “SymonLoader,” “Rain Maker,” “Brutal Kangaroo” and “Cottonmouth.”

Researchers have also unveiled their own proof-of-concept USB malware attacks, like USBee.

Compromised Users and Malicious Insiders

Attackers could also utilize insiders within the utility, from engineers to executives, in order to conduct malicious operations. These may be insiders who are actually compromised, but more commonly, this involves insiders whose personal devices and/or corporate accounts have been compromised, allowing the attacker to insert malware targeted at devices and systems within the protected environment.

Blackmail and bribery are both risks which ought to be considered, particularly given the growing importance of the cyber domain to nation-states — who want to have a “plan in place” to strike at the US if they should ever be attacked themselves.

However, it is not unrealistic for cybercriminals to use this method as well, particularly when one considers the increased focus criminals are now giving to cyber extortion campaigns. Note that devices outside of the protected environment typically have far fewer network, operational, or physical security protections, and may be surreptitiously tampered with to turn the device into a remote trojan.

Supply Chain

Computer equipment and software that is vulnerable or infected prior to being delivered to the utility is another threat the industry needs to prepare for.

This type of effort would almost always be organized by a nation-state, but it could also be exploited later on by a number of downstream actors, once the vulnerability becomes more generally known.

So-called supply chain attacks are real. A few recent examples include Juniper’s firewall backdoors and vulnerabilities in Cisco ASA devices and Fortinet FortiGate firewalls. Researchers at the University of Michigan also demonstrated how backdoors could be hidden in computer chips during the manufacturing process.

The recent attack against the Ukrainian power grid involved compromising a third-party software package used by the utility: the attackers first breached the software’s developer, so that when the updated package was distributed to the utility, the backdoor was installed for later use.

Effective Security Controls

The techniques for controlling sensitive networks are known – but operators and technicians do not always implement them. This is largely because convenience and efficiency demands are prioritized over the need for stringent security controls.

To begin with, utilities should start from an assumption of distrust. Unless a system is explicitly hardened and tightly managed, utility operators should assume it has been or will be compromised. This includes ordinary corporate desktops, personal devices, the ICS and all vendor/contractor systems.

Next, it is important to have a layered system of checks and controls on employee behavior. For instance, it is common for organizations to perform an initial background check on an employee and then to trust that employee afterward without any follow-up. Utilities must re-check employees on a regular basis to determine whether any new issues make them less trustworthy. These same background checks and verifications should also be conducted for contractors and vendors. This is particularly critical if vendors or contractors have physical access to protected environments or sensitive equipment. Additionally, this physical access should be monitored (ex: cameras, accompanying staff presence, etc.).

Does the organization have strong auditing of all actions by employees, vendors and contractors? Auditing does not prevent erroneous or malicious activity, but it adds critical accountability to operations.

If questions of staff trust are present and/or extremely high-impact operations are to be conducted, then it is necessary to implement two-person controls, where two individuals must explicitly cooperate to conduct the operation. The individuals can be required to use biometrics and/or hardware tokens to enable operations.
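
The two-person control described above can be sketched in a few lines. This is a minimal illustration, not a production design: the operator names, the operation string, and the use of HMAC tags as stand-ins for hardware-token or biometric assertions are all assumptions made for the example.

```python
import hashlib
import hmac

# Hypothetical per-operator secrets; in practice these would be hardware
# tokens or biometric assertions, never in-memory strings.
OPERATOR_KEYS = {
    "alice": b"alice-hardware-token-secret",
    "bob": b"bob-hardware-token-secret",
}

def approval_tag(operator: str, operation: str) -> bytes:
    """Each operator independently signs the exact operation description."""
    return hmac.new(OPERATOR_KEYS[operator], operation.encode(), hashlib.sha256).digest()

def two_person_authorize(operation: str, approvals: dict) -> bool:
    """Allow the operation only if two *distinct* operators produced valid tags."""
    valid = {
        op for op, tag in approvals.items()
        if op in OPERATOR_KEYS
        and hmac.compare_digest(tag, approval_tag(op, operation))
    }
    return len(valid) >= 2

op = "open breaker 14 at substation 7"
print(two_person_authorize(op, {"alice": approval_tag("alice", op),
                                "bob": approval_tag("bob", op)}))   # two valid approvers
print(two_person_authorize(op, {"alice": approval_tag("alice", op)}))  # one is not enough
```

Because each tag is bound to the specific operation text, an approver cannot be tricked into authorizing a different action than the one displayed; in a real deployment the gate itself would run inside the protected environment.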

General corporate desktops run general purpose applications and are exposed to the corporate e-mail system. They typically have access to the internet as well. Such systems are not to be trusted for sensitive functionality. Utilities should trust the systems of contractors and vendors even less. Direct access to sensitive systems, networks and devices must be blocked from devices and systems that are used by contractors, vendors and general corporate desktops. (If access has to be allowed, such access must only be from systems that meet the same security controls – and monitoring – as the security critical enterprise systems.)

It is best if the security critical environment has its own hardwired, wholly isolated network. But such deployments have gone the way of the dodo over the past few decades, killed off by the search for cost savings and convenience. Given that organizations have chosen to rely upon the common communications infrastructure (and indeed, the POTS – plain old telephone service – itself now relies upon the common communications infrastructure), we are now forced into creating logical or virtual isolation, and as noted above, almost all installations have ports that allow access from the larger (and untrustworthy) environment.

Security critical systems must be on their own protected networks – with their own isolation firewall. It is not acceptable to simply use the enterprise firewall to segment off the security critical environment unless the organization is prepared to lock down the enterprise firewall and its associated security and process controls to at least the extent required for the security critical environment; otherwise, the security critical environment is one configuration error away from being exposed on the internet. The security critical firewall, its configuration, and traffic to the outside world must be monitored from within the security critical environment itself, as a monitoring system in the lower trust environment may itself have been compromised. If the host network is not itself secure, it is necessary to build and manage secure sub-environments using techniques such as VPNs or IPsec.

Networks and environments must be built (or rebuilt) clean. They must then be protected from (potentially) compromised networks. For normal traffic, this requires filtering all traffic into and out of the network through firewalls and proxies, validating that all data is of expected structure, format and flow characteristics. For example, event and logging information will flow from trusted environments into less trustworthy environments. The egress filter that allows this data flow must also block unexpected and inappropriate data flows. Note that organizations that are facing the highest levels of attack cannot rely upon anti-malware software to detect compromise – the attacker will know what software they are using and will validate that the attack code they use is not picked up. Anti-malware software runs at the highest level of privilege and parses presumably untrusted code. As such, anti-malware engines are high value targets – and have a long history of vulnerabilities.
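
As a concrete illustration of the egress filtering described above, the following sketch validates outbound log records against a fixed structure and bounded length before letting them leave the trusted environment, so the one permitted data flow cannot quietly be reused for bulk exfiltration. The record format here is an assumption for illustration; a real filter would match whatever event format the site actually emits.

```python
import re

# Expected shape of an outbound event record: syslog-style priority,
# ISO-8601 timestamp, short hostname, bounded printable-ASCII message.
LOG_PATTERN = re.compile(
    r"^<(?P<pri>\d{1,3})>"                              # priority field
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z) "    # UTC timestamp
    r"(?P<host>[a-z0-9\-]{1,32}) "                      # short hostname
    r"(?P<msg>[\x20-\x7e]{1,200})$"                     # printable, length-bounded
)

def egress_allowed(record: str) -> bool:
    """Drop anything that does not match the expected structure exactly."""
    return LOG_PATTERN.match(record) is not None

print(egress_allowed("<34>2024-05-01T12:00:00Z rtu-07 breaker state change"))  # True
print(egress_allowed("<34>2024-05-01T12:00:00Z rtu-07 " + "A" * 5000))         # False: oversized
```

Rate and volume limits on the same flow (not shown) would further constrain what a compromised host inside the trusted zone could push out through this channel.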

The secure environment must be kept clean. In particular, defenders do not want attackers modifying code, running their own code, or running their own scripts (PowerShell in Microsoft environments; Bash, Bourne, C and other shells in *nix environments).

·       Disable general scripting if possible. If scripting is mandatory:

o   Require script signing for operation

o   If signing is not possible, limit who can run scripts, and log and audit all use of scripting

·       Where possible, code security is most strongly enforced by implementing ‘Trusted Boot’ for general purpose computing devices and enabling full code signing, so that only code (including scripts) signed by the organization’s security team or by explicitly listed trusted roots will run. This requires that the security team be able to sign all executables beyond those signed by the trusted signers (say Microsoft, Red Hat, Siemens, etc.), including any updates and/or patches that have not been signed by trusted organizations. It is important to note that when an organization trusts a third party, compromise of that third party will compromise the organization.

Implementing code signing is most straightforward in new, and relatively homogeneous environments. It is far harder, if not impossible, to implement code signing in environments with large legacy contributions, particularly if critical components do not have support for the signature validation loaders required for code signing implementation. In many cases, organizations find implementing code signing to be beyond their operational capabilities.

·       Where code signing is not feasible, other security controls are necessary.

o   Executables must have their ACLs set to restrict writing, and all writes to executables or to folders containing executables must generate alerts and be monitored at the operating system level.

o   A file integrity monitoring program such as Tripwire or equivalent must be used to look for unauthorized changes to protected files.

o   If Trusted Boot is not implemented, a regular sampling program of off-line examination of boot images must be instituted to look for and detect root-kit attacks.
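
The file integrity monitoring control above (Tripwire or equivalent) can be illustrated with a minimal hash-baseline sketch: record a cryptographic hash of every protected file, then re-scan and report anything added, removed, or modified. This shows the idea only; a hardened FIM product also protects its own baseline and runs from a trusted context.

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under root to the SHA-256 hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def diff(baseline: dict, current: dict) -> dict:
    """Report files added, removed, or changed since the baseline was taken."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(f for f in baseline.keys() & current.keys()
                           if baseline[f] != current[f]),
    }
```

In practice the baseline would be stored offline or signed, and the scan scheduled from the security critical environment, so an attacker who gains write access to the executables cannot also rewrite the record of what they should look like.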

·       Note that code security applies to third parties as well. Last year’s attack against the Ukrainian utilities came via a compromised third-party application used by the utility. One major credit card breach occurred via a compromise of an HVAC vendor for the organization, allowing access to internal networks. If organizations are concerned about well-resourced directed attacks, they must anticipate that their partners will also be attacked, and require that these partners also maintain appropriate security controls.

The secure environments must be managed. The organization has several choices.

·       The simplest is to have staff leave their personal devices behind and enter secured, highly controlled rooms to do their work – a standard approach for facilities at higher levels of security. Such an approach does not allow remote access, a significant limitation under the current operational model.

·       Have staff log into hardened, managed devices located in the secure environment to do their work in that environment. This is simpler to say than it is to securely implement:

o   The staff person could authenticate to a Secure Administrative Workstation (SAW) or thin client, and then use two-factor authentication over a cryptographically secured connection (one that validates the user, the second factor, and the identity of the SAW/client) to a jumpbox that provides access to a secured work environment. The user would then work within the context of the work environment (dedicated management system, strongly managed virtual desktop, or equivalent). All tools, scripting environments, and the like would be installed, managed, and monitored within the work environment, as the client environment is not trusted (despite its hardening).

o   The same model could be implemented with somewhat less assurance by using a Privileged Account Workstation (PAW), or by having the administrator use an alternate administrative account (with much less assurance than using a PAW) rather than their domain account.

o   Commercial Privileged Account Management (PAM) tools exist that perform much of this functionality, and they are widely used in security sensitive organizations. PAM tools such as CyberArk can enable multi-party access approval, logging of administrative actions, dynamic use of one-time passwords, and numerous related controls. In pure modern Microsoft environments, the ‘Enhanced Security Administrative Environment’, also known as the ‘Red Forest’ model, can provide substantially similar defenses. Unfortunately, Microsoft’s ‘Red Forest’ approach is not well suited to heterogeneous environments with substantial legacy devices and systems.
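
The second-factor check in the jumpbox flow above can be illustrated with a minimal time-based one-time-password (TOTP, RFC 6238) verifier. The shared secret shown is the RFC's published test secret, used here purely for illustration; in practice it would live in a hardware token, and the verifier would run inside the protected environment.

```python
import base64
import hashlib
import hmac
import struct
import time

# RFC 6238's published test secret ("12345678901234567890"), base32-encoded.
# Purely illustrative - never embed real secrets in code.
DEMO_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute the RFC 6238 TOTP code (HMAC-SHA1) for a given Unix time."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def second_factor_ok(secret_b32: str, submitted: str, now=None) -> bool:
    """Accept the current and previous time step to tolerate clock skew."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret_b32, now - d), submitted)
               for d in (0, 30))

print(totp(DEMO_SECRET, 59))  # "287082", matching the RFC 4226/6238 test vectors
```

The one-time code only proves possession of the token; as described above, it must ride on a mutually authenticated, encrypted channel so the jumpbox also knows which SAW or thin client it is talking to.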

 

It is important to note that even where utilities have managed to successfully protect their central control systems, attackers can significantly disrupt power grid stability if they gain dynamic remote control over user load, particularly if they also have some limited control over lesser parts of the grid.

By Dr. John Michener
