
Clouds and Lightning - Data Centers and Energy

1.        Introduction

Anyone reading this paper has surely heard the phrase "…in the cloud". This raises the question: where is the cloud? It is in data centers, and the applications and data it refers to can be anywhere in the world. The cloud is (mostly) the Internet.

This document is not about the Internet or any other cloud architecture, network, data-structure or application, but rather about data centers as facilities, utility clients, and (eventually) their energy use.

Although data centers support some of the most important businesses in the U.S. economy, massive investments in efficiency have kept their energy use relatively constant over the last ten years, even as their capabilities have grown at a compound annual growth rate (CAGR) of approximately 20%.

From a utility perspective, many commercial and industrial loads are extremely important, but few, if any, have the financial impact of data centers.

Data centers in the U.S. consumed about 70 billion kWh in 2014 (roughly 1.8% of total U.S. consumption). Historically, data center electricity consumption increased by:

·         Nearly 90% from 2000 to 2005

·         24% from 2005 to 2010

·         About 4% from 2010 to 2014

And consumption is expected to increase by another 4% from 2014 to 2020, with U.S. data centers projected to consume approximately 73 billion kWh in 2020. [1]
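As a quick sanity check, the projection above is easy to reproduce. The sketch below (Python, using only the figures cited above) confirms that a 4% total increase on 70 billion kWh lands at roughly 73 billion kWh, and that 4% spread over six years is well under 1% per year:

```python
# Figures from the LBNL report cited above.
use_2014_bkwh = 70.0        # billion kWh consumed in 2014
total_growth = 0.04         # ~4% total expected growth, 2014-2020

use_2020_bkwh = use_2014_bkwh * (1 + total_growth)
print(f"Projected 2020 consumption: {use_2020_bkwh:.1f} billion kWh")  # ~72.8

# Annualized over six years, the growth rate is under 1% per year:
annual_rate = (1 + total_growth) ** (1 / 6) - 1
print(f"Implied annual growth rate: {annual_rate:.2%}")  # ~0.66%
```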

The engines of data center work are the servers, and server shipments experienced a similar leveling. Shipments increased by:

·         15% per year from 2000 to 2005

·         5% per year from 2005 to 2010

·         3% per year from 2010 to 2014

And that last rate is expected to continue through 2020.

The one thing that has become (much) larger is maximum data center size, and the industry has coined a new term in the process: “hyperscale” data centers. In researching this I could not find a clear definition of a hyperscale data center, but there seems to be a general consensus that these (1) use the cloud, virtualization, and other techniques to support unprecedented computing loads and rapid reallocation of resources to different tasks, and (2) in the commercial realm are generally operated by "hyperscale" operators. The latter are defined by one or more of the following metrics:[2]

·         More than $1 billion in annual revenue from infrastructure as a service (IaaS), platform as a service (PaaS), or infrastructure hosting services (for example, Amazon/AWS, Rackspace, Google)

·         More than $2 billion in annual revenue from software as a service (SaaS) (for example, Salesforce, ADP, Google)

·         More than $4 billion in annual revenue from Internet, search, and social networking (for example, Facebook, Yahoo, Apple)

·         More than $8 billion in annual revenue from e-commerce/payment processing (for example, Amazon, Alibaba, eBay)

See subsection 2.1 for the service definitions under "Cloud services …".

Hyperscale is the near future. However, before we delve into hyperscale, we need to explore the immediate past and present of data centers.

2.        Current Data Centers

The technology in data centers, including energy-technology, is evolving rapidly. Private data center owners will be comfortable with innovative energy solutions, especially if they make their data centers operate more efficiently and reliably.

For the above reasons we are taking a deep dive into the data center business, with the understanding that the information in this section has a short shelf-life.

2.1.              Business Models

There are three basic data center business models plus one major variant, as described below. A given data center owner/operator may embrace multiple business models and may provide other related services. Note that I’ve gone into some detail on the last type (cloud services): currently, about 85% of data center workloads are cloud based, and Cisco (ref 2) estimates that by 2020, 92% of workloads will be processed by cloud data centers.

Colocation (commonly, colo): Locating customer equipment in a data center. Colocation often refers to Internet service providers (ISPs) or cloud computing providers that furnish the floor space, electrical power, and high-speed links to the Internet for customers' Web servers. Colocation eliminates the need to build secure facilities that provide power and air conditioning for client-owned servers. In addition, colocation centers are often near major Internet connection points and can provide access to multiple Tier 1 Internet backbones. Although most equipment monitoring is performed remotely by the customer, a colocation data center may offer equipment maintenance and troubleshooting arrangements.

Colos may own or lease the building (the mixture appears to be roughly even).

Wholesale Data Center Space: This is a variant of colocation, but a completely different business model. Whereas each colo-provider’s customer might have their equipment in a single cabinet (or even smaller) space, a wholesale provider might supply a whole floor to a single client. This allows for a high degree of customization and low prices. A major player in this space is Digital Realty.[3] Wholesale suppliers have historically gone after deals of 1 MW and above, but some are starting to go after deals of 500 kW and even smaller. Typical per-cabinet power ranges from 10 to 35 kW.

Managed Services: This is an umbrella term for computer, network and software surveillance and maintenance. The actual equipment may be in-house or at a third-party's facilities, but the "managed" implies an ongoing effort; for example, to make sure the equipment is running at a certain quality level and to keep the software up-to-date. Managed services are frequently offered by colo data centers, are also offered by third parties and can be provided directly by the customer’s IT staff.

Cloud services or cloud computing: This is the latest way to manage and deliver applications to the end user. It may take the form of "software as a service" (SaaS), which provides the entire application. Or it may be "infrastructure as a service" (IaaS), where only the cloud provider's servers and operating systems are used, and the IT department deploys its own applications on this platform. Both SaaS and IaaS are described in more detail below.

Cloud computing's distinguishing features are self-service, scalability, and speed. Self-service means that end users can sign up online, then activate and run whatever applications they are offered from start to finish. The main limitation is that this compute model mainly supports Wintel (Windows operating system on an Intel-compatible platform) and Web-centric applications.

The cloud is scalable, meaning that it can be quickly set up to handle extra workloads, such as increased holiday Web traffic or when new products are launched. In addition, Internet cloud providers may be connected to multiple tier 1 backbones for fast response times and high-availability.

SaaS providers deliver the entire application to the end user, relieving the organization of all hardware and software maintenance. Myriad applications running from a Web browser use this model, including Web-based e-mail, Google Apps and Salesforce's CRM. Running Windows applications is also supported, for instance, via Windows Azure. For IT, this has been a paradigm shift, because private company data is stored externally. Even if the data is duplicated in-house, company data "in the cloud" raises security and privacy issues.

Also called "cloud hosting" and "utility computing", IaaS outsources the platform used to support operations, including storage, hardware, servers, operating systems, and network components. Using the cloud only for computing power can be more economical than building new data centers or renovating old ones. In-house data centers must forever deal with security, environmental, and backup issues, as well as hardware and software maintenance.

For both small and large Web publishers, cloud providers such as Amazon and Google are invaluable. Their hardware can be configured to handle tiny amounts of traffic or huge amounts of traffic. In either case, IaaS providers charge for actual usage, and there is no wasted expense with underutilized in-house servers. See Amazon Web Services and Google App Engine below.

Virtualization: The cloud employs server virtualization, which, among other benefits, allows application workloads to be easily added and removed as self-contained modules. In fact, virtualization has been a major enabler of the cloud computing model. However, the amount of work required by the customer differs greatly. Configuring virtual implementations on servers can range from being almost entirely automatic to requiring that the IT administrator be thoroughly familiar with the software.

In the platform as a service (PaaS) model, the service-user creates the software using tools from the service-provider. The user controls software deployment and configuration settings. Providers supply the networks, servers, storage and other services.

The current market leader for virtualization software appears to be VMware.

Private and Hybrid Clouds: Enterprises create private clouds in their data centers that employ the same technology as the public cloud services. A private cloud provides the same flexibility and self-service capabilities, but with more control over privacy.

A hybrid cloud is both private and public. If the private cloud is overloaded, applications are pushed out to public cloud services. Extending software and databases from internal servers to a provider's cloud and managing both venues from a central console are major issues in cloud computing administration.

The major competitors in cloud services are:

·         Amazon Web Services[4]

·         Google Cloud Platform[5]

Each of these offers a wide range of products for computing, storage, database management, networking, migration, and other development and maintenance activities.

As this paper is being finalized, Amazon has just reported stellar quarterly results, mostly fueled by AWS. "AWS revenues have grown at a compound annual growth rate (CAGR) of over 60% in this decade, with the trend continuing through the first half of the year."[6]

2.2.              Location

Due to energy costs (see section 2.3), there are two variants in location:

·         Data centers that are location-independent tend to be sited in the lowest energy-cost areas and the best areas for using ambient air for cooling.

·         However, many data centers are location-dependent. These include:

o   Office complexes that need an in-house data center

o   Server Huggers (see below)

o   R&D operations

o   Government restrictions (State/Local bodies want to keep data center in their jurisdiction and Federal facilities tend to be located at existing campuses)

o   High-security applications

A server-hugger is someone who wants to be able to personally manage their equipment in a data center. These are typically IT staff rather than management. Logically, this attitude is dominant among colo-users and much less prevalent among cloud-users.

Because location-dependent data centers still comprise a major portion of the market (see below), they tend to be well-represented in the high energy-cost regions (California and the Northeast). Therefore, advanced energy solutions that can reduce energy charges will have high value.

Currently there are 1,685 colocation data centers in the U.S. The following are the top 10 states by colo data center population, with the number of data centers in parentheses.[7]

California (224)

Texas (176)

New York (104)

Florida (101)

Illinois (88)

Ohio (73)

Virginia (70)

Washington (60)

New Jersey (57)

North Carolina (47)

2.3.              Energy Use and Efficiency

The data center industry has a metric to represent the energy efficiency of electric energy use: Power Usage Effectiveness (PUE). PUE is the total data center power input divided by the IT power input. However, this is not a single metric but rather a family of metrics. The bullets below describe them; the percentages are the shares of data center owners using each metric in a recent survey:[8]

·         PUE Category 0: IT load measured at UPS output(s); total data center power measured at the utility meters; measures peak utilization/demand in a single snapshot; used by 30% of respondents.

·         PUE Category 1: IT load measured at UPS output(s); total data center power measured at the utility meters; uses 12-month cumulative readings; used by 25% of respondents.

·         PUE Category 2: IT load measured at Power Distribution Units (PDUs) supporting IT loads; total data center power measured at the utility meters; measures peak utilization/demand in a single snapshot; used by 19% of respondents.

·         PUE Category 3: IT load measured at the point of connection of IT devices to the electrical system; total data center power measured at the utility meters; uses 12-month cumulative readings; used by 11% of respondents.

In the above referenced survey of large data centers, data centers reported an average PUE of 1.7.
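Since PUE is simply the ratio defined above, a minimal illustration makes the survey-average value concrete (the 1,000 kW IT load below is a hypothetical figure, not from the survey):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total data center input power / IT input power."""
    return total_facility_kw / it_load_kw

# At the survey-average PUE of 1.7, a hypothetical 1,000 kW IT load implies
# 1,700 kW drawn at the utility meter; the extra 700 kW covers cooling,
# power-distribution losses, lighting, and other overhead.
print(pue(total_facility_kw=1700.0, it_load_kw=1000.0))  # 1.7
```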

The diagram below shows the major subsystems involved in power delivery.[9]

The average colo data center size is around 75,000 square feet, which is enough for about 300 standard racks. Overall data center power density (room envelope, currently installed design) is about 85 watts per square foot.[10] Assuming a server utilization of 75%, 85% of the data center dedicated to production floor space, and a PUE of 1.7, this average data center consumes about 7 MW.
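The source does not spell out the arithmetic behind the ~7 MW figure; one reading of the stated assumptions that does reproduce it, treating the 85 W/sq ft as the installed IT design density, is sketched below:

```python
# Assumptions as stated in the text; the order of operations is an inference.
floor_sqft = 75_000         # average colo data center size
production_share = 0.85     # fraction of floor space dedicated to production
design_w_per_sqft = 85      # installed IT design density, watts per sq ft
utilization = 0.75          # server utilization
pue = 1.7                   # survey-average PUE

total_w = floor_sqft * production_share * design_w_per_sqft * utilization * pue
print(f"Estimated facility demand: {total_w / 1e6:.1f} MW")  # ~6.9, i.e. "about 7 MW"
```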

2.4.              Power Source

There are two considerations for the primary power delivered to a data center: reliability and sustainability. Although most data centers have back-up power composed of uninterruptible power supplies (UPSs) backed by diesel emergency generation, these are only designed to run intermittently. The typical UPS run time is 20 to 30 minutes, and diesel generator sets only carry fuel for a few days, so frequent and/or extended outages could pose a major problem. The main advantage offered by large data center energy parks is that they provide an additional layer of generation to buffer data centers from extended outages and/or high demand charges during peak demand periods. This generation can incorporate renewables to provide sustainable energy.
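To put the UPS ride-through figures in perspective, a rough sizing sketch follows. Both inputs are assumptions for illustration: a 7 MW facility (consistent with the estimate in section 2.3) and a 25-minute runtime as the midpoint of the 20 to 30 minutes cited above:

```python
# Hypothetical mid-size facility; both figures are illustrative assumptions.
load_mw = 7.0            # facility demand that must be carried through an outage
ups_runtime_h = 25 / 60  # midpoint of the typical 20-30 minute UPS run time

ups_energy_mwh = load_mw * ups_runtime_h
print(f"UPS energy buffer: {ups_energy_mwh:.1f} MWh")  # ~2.9 MWh of stored energy
```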

Renewable electric and heat energy are being embraced by some of the largest data center owners in the technology industry, including Apple, Facebook, and Google. Many of these firms simply buy renewable offsets or contract for green electric energy from remote projects. Apple is an example of a company that also looks for nearby renewable resources or power generated at the data center. Two examples of this are below:

·         One of Apple’s largest data centers is in Maiden, North Carolina. This data center uses 40 MW (peak), 100% of which is renewable. Apple uses nearby solar PV and on-site biogas-fed fuel cells to power this site.

·         Two very large (each more than 338,000 square feet) Apple data centers in Prineville, Oregon are also 100% renewably powered, using nearby wind, solar, and small hydro. The hydro is a small (5 MW) plant several miles away.

100% of Apple facilities in the U.S. are powered by renewable electricity.

2.5.              Resilience

One method of measuring overall data center robustness is Uptime Institute’s tier system. This measures all elements of a data center’s operation (not just power). There are four tiers as defined below.

Tier 1 = Non-redundant capacity components (single uplink and servers), 99.671% availability

Tier 2 = Tier 1 + Redundant capacity components, 99.741% availability

Tier 3 = Tier 1 + Tier 2 + Dual-powered equipment and multiple uplinks, 99.982% availability.

Tier 4 = Tier 1 + Tier 2 + Tier 3 + all components are fully fault-tolerant including uplinks, storage, chillers, HVAC systems, servers; everything is dual-powered; Tier 4 guarantees 99.995% availability.
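The availability percentages are easier to compare when converted into annual downtime; the short sketch below does the conversion for all four tiers:

```python
HOURS_PER_YEAR = 8760

tiers = {
    "Tier 1": 0.99671,
    "Tier 2": 0.99741,
    "Tier 3": 0.99982,
    "Tier 4": 0.99995,
}

for tier, availability in tiers.items():
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_h:5.1f} hours of downtime per year")
# Tier 1 allows roughly 28.8 hours per year; Tier 4 roughly 0.4 hours (26 minutes).
```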

2.6.              Data Center Growth

Global data center construction is expected to grow from $14.6 billion in 2014 to $22.7 billion in 2019, a compound annual growth rate (CAGR) of 9.3%.[11] There are approximately 3 million data centers in the U.S. Although data center construction in the U.S. continues to grow moderately (CAGR of around 4%), the number of data centers remains fairly constant. Overall estimated workload performance of data centers in each region can be seen in the chart below.
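The stated CAGR can be verified directly from the two endpoints (the small discrepancy with the quoted 9.3% is rounding):

```python
# Global data center construction market, per the Datacenter Dynamics figures above.
start_billions, end_billions = 14.6, 22.7   # 2014 and 2019 values, $B
years = 2019 - 2014

cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~9.2%, consistent with the cited 9.3% after rounding
```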

3.        Factors Accelerating Performance

There are many developments that are accelerating performance while power consumption remains relatively static. Most of these trace their origin back to the granddaddy-rule of increasing returns from microelectronics: Moore's Law:

In 1965, Gordon Moore made a prediction that would set the pace for our modern digital revolution. From careful observation of an emerging trend, Moore extrapolated that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. [12]

To get some idea how revolutionary this law is: compared with Intel's first microprocessor (the 4004), a current 14 nm microprocessor (such as an Intel® Xeon® processor used in many servers) delivers 3,500 times the performance, is more than 90,000 times more energy efficient, and costs more than 60,000 times less per transistor.

Moore's Law pervades every type of electronics, so its impact is seen throughout a data center, even extending (to a lesser extent) to power electronics as is seen in uninterruptable power supplies.

Below we will look at each major component in a data center and how energy usage is decreasing.

3.1.              Server Trends

The server is the primary engine that produces work in a data center, and it has also shown the greatest efficiency improvement. By moving workloads into the cloud and virtualizing processing, servers can accommodate more work. The average workload calculation in the table below is based on the virtualization rate of the servers and the average number of virtual machines per virtualized server.[13] The following is the average workload density for traditional and cloud data centers:








Average Workload Density: Traditional Data Center vs. Cloud Data Center
Several other technologies originally developed for other devices allowed the design of servers to be improved, and also allowed workloads to be pushed out to the servers from the original devices in a "virtuous circle":

·         Low-power devices were initially developed for laptop computers, and then smart phones and other portable computing devices.

·         These low-power devices were much more efficient than traditional electronics and thus wasted much less (battery) energy and produced much less heat.

·         When these technologies were pushed out to server microprocessors and memory, this allowed:

o   More cores (processors) per microprocessor chips

o   More chips per processor board/blade

o   A much higher percentage of servers to be cooled with ambient air rather than refrigerated air

·         These improvements (and others) resulted in much less expensive workloads, especially in the cloud.

·         This allowed the original portable computing devices to cost-effectively push a higher percentage of their applications out to the cloud, thus improving their performance (again).

·         This resulted in many more workloads being executed in the cloud, accelerating the growth of the cloud.

3.2.              Storage Trends

Data center storage is important because a large percentage of workloads rely on primary and/or backup storage in the cloud, even when the primary processing occurs in a corporate or colo data center. Virtually all workloads using cloud service-models also use cloud-storage. The ability to back-up storage in physically separate data centers provides greatly improved data-resiliency.

From 2015 to 2020, data center storage installed capacity is estimated to grow from 382 exabytes (EB) to 1.8 zettabytes (ZB), an increase of approximately five times (1 EB = 10^18 bytes, 1 ZB = 10^21 bytes).
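In common units, the growth factor works out as follows (1 ZB = 1,000 EB):

```python
eb_2015 = 382              # installed capacity in 2015, exabytes
zb_2020 = 1.8              # projected capacity in 2020, zettabytes

eb_2020 = zb_2020 * 1000   # 1 ZB = 10**21 bytes = 1,000 EB
growth_factor = eb_2020 / eb_2015
print(f"Growth factor: {growth_factor:.1f}x")  # ~4.7x, i.e. approximately five times
```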

By 2020, the total global installed data storage in cloud data centers will account for 88% of overall capacity (in 2015 this was 65%).[14]

Technologically, the amazing statistic is that hard disk drives still account for the majority of drives used in data center storage, even though they are primarily electro-mechanical devices. Solid-state drives are taking an increasing share of this business, as can be seen in the chart below:

U.S. Data Center Installed Base Drive Counts[15]

The overall capacity of storage is increasing exponentially as can be seen by the chart below.

U.S. Data Center Storage Installed Base in Total Capacity

However, total storage energy use is actually starting to decline (see chart below).

U.S. Data Center Drives Total Electricity Consumption

3.3.              Network Trends

Over the last few years network traffic has been driven up by many factors, but the move to the cloud is among the most important. Starting in 2008, a majority of Internet traffic has originated or terminated in data centers, overtaking the peer-to-peer traffic that was dominant before then. It is projected that by 2020 more than 90 percent of data center traffic will be cloud traffic.[16]

As with other data center components, especially those driven directly by Moore's Law, performance is increasing dramatically while electricity consumption has been decreasing. The main performance metric here is network bandwidth, with older, slower network ports being displaced by newer, faster ports. See the two figures below (from the LBNL report referenced above):

U.S. Data Center Total Installed Base of Network Ports

U.S. Data Center Network Equipment Total Electricity Consumption

3.4.              Other Loads and Power Use Efficiency

The above subsections covered the three primary load types in data centers. The other loads are included in the numerator of the Power Usage Effectiveness (PUE) calculation (see section 2.3), while the IT load in the denominator is primarily composed of the demand of the servers, storage, and network components. The table below defines the elements of the PUE by space type.[17]

Elements of the PUE by Space Type (Space Type, Total PUE)

[1] "United States Data Center Energy Usage Report", LBNL-1005775, Arman Shehabi, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Environmental and Energy Impact Division, Lawrence Berkeley National Laboratory, 2016.

[6] "Amazon Posts Solid Quarter As AWS Drives Top-Line Growth, Profits", Forbes, October 27, 2017.

[8] Matt Stansberry & Julian Kudritzki, Uptime Institute 2014 Data Center Industry Survey.

[9] Jumie Yuventi & Roshan Mehdizadeh, A Critical Analysis of Power Usage Effectiveness and Its Use as Data Center Energy Sustainability Metrics, CIFE Working Paper #WP131, February 2013, Stanford University.

[10] Team, "Knowledge Base, Data Center Power Series #4 – Watts per square foot…what?!?".

[11] "Global Forecast bright for the Data Center Construction Market", Datacenter Dynamics, January 2015.

[13] Cisco Communities thread "Workloads per Server".

[15] "United States Data Center Energy Usage Report", LBNL-1005775, Arman Shehabi, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Environmental and Energy Impact Division, Lawrence Berkeley National Laboratory, 2016.

[17] "United States Data Center Energy Usage Report", LBNL-1005775, Arman Shehabi, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Environmental and Energy Impact Division, Lawrence Berkeley National Laboratory, 2016.

