Cooling the Colocation Data Center: Why Should You Care?

A closer look at controlling the costs of data center cooling.

Data centers use a lot of energy: more than 91 billion kilowatt-hours by the most recent estimates, equivalent to the output of about 34 large-scale power plants. At the end of the day, someone is paying for that energy.

Enterprises won’t exactly retreat from information technology and cozy up to the abacus again; businesses need digital, and digital isn’t possible without data centers. And as far as cost-effectiveness goes, you can’t beat colocation data centers. There are no property purchases, renovations, construction projects, network maintenance obligations, or ongoing expenses for security staff and operations teams. You provide the IT equipment; the facility supplies everything else.

But that doesn’t change the fact that total cost of ownership (TCO) varies substantially based on how much energy the facility consumes to support the IT load. Nor is it any secret that, in the data center, cooling is the largest source of non-IT energy consumption. By some estimates, cooling may account for as much as 40 percent of all the electricity used by data centers.

Cooling is expensive; so what?

Keeping servers operating at safe temperatures is critical to ensuring their performance and longevity, but doing so directly contributes to a colocation data center tenant’s TCO.

More efficient cooling systems consume less power, ultimately translating to less money paid by the tenant. It’s really that simple.

If the power itself is also affordable, that further lowers the TCO. In other words, there are two core factors at play in the cost-effectiveness of a cooling system (the sketch after this list shows how they combine):

  • The actual cost of the energy, which varies from region to region.
  • The efficiency of the cooling system (the focus of this post), defined by its ability to maintain an optimal temperature using as little energy as possible.
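
As a rough illustration of how those two factors multiply together, here is a minimal sketch; every input is a hypothetical example, not a measured value:

```python
# Rough annual cooling-cost estimate for a colocation deployment.
# All input figures are hypothetical examples, not measured values.

IT_LOAD_KW = 200          # tenant's average IT load, in kilowatts (assumed)
COOLING_OVERHEAD = 0.40   # cooling energy as a fraction of IT energy (assumed)
PRICE_PER_KWH = 0.08      # regional energy price, dollars per kWh (assumed)
HOURS_PER_YEAR = 8760

cooling_kwh = IT_LOAD_KW * COOLING_OVERHEAD * HOURS_PER_YEAR
annual_cooling_cost = cooling_kwh * PRICE_PER_KWH

print(f"Cooling energy: {cooling_kwh:,.0f} kWh/year")      # 700,800 kWh/year
print(f"Cooling cost:   ${annual_cooling_cost:,.0f}/year")  # $56,064/year
```

Lower either input, the overhead or the rate, and the bill drops in direct proportion.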

What does highly efficient cooling look like?

By the numbers, it looks like a low average annualized power usage effectiveness (PUE) rating. This metric is the ratio of the total power entering the facility to the power used by the IT equipment, measured over the course of a year. The closer to 1, the better. An energy-efficient data center will have a low average annualized PUE in the 1.1 to 1.2 range – even lower in some cases.
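
In code, the annualized calculation is trivial once you have the meter totals. Here is a minimal sketch; the readings are hypothetical examples, not figures from any real facility:

```python
# Annualized PUE: total facility energy divided by IT equipment energy.
# Example meter totals for one year (hypothetical):
total_facility_kwh = 10_500_000  # everything entering the facility
it_equipment_kwh = 9_000_000     # servers, storage, and network gear only

pue = total_facility_kwh / it_equipment_kwh
print(f"Annualized PUE: {pue:.2f}")  # -> 1.17, within the 1.1-1.2 range
```

Note that PUE can never dip below 1: the IT equipment itself always consumes its share of the incoming power.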

In terms of room design, hot-aisle containment is especially effective for the demands of modern IT equipment because it lets a facility support a greater variation in load densities within a room. Cabinets are positioned back to back, so exhaust is expelled into a single aisle. That aisle is isolated so that the heat generated by the servers rises into a return plenum, where it can be directed to an air handler.

And once that exhaust goes up, it can be treated in one of several ways. In moderate-to-dry climates with access to plenty of water, indirect evaporative cooling is the way to go. Evaporative cooling is not unlike the human body’s own cooling mechanism; only, instead of adding water to sweltering skin, you’re adding it to hot air that has been expelled from servers. As the added water evaporates, the ensuing vapor carries away heat. Treated air is then directed back into the cold aisle, where it can be drawn in by servers, and so the cycle continues.
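
To get a feel for why evaporation is such an effective heat sink, a back-of-the-envelope sketch helps. It relies only on the latent heat of vaporization of water (roughly 2,450 kJ per kilogram at typical ambient temperatures); the heat load is a hypothetical example, and real systems reject some heat through the air-to-air exchanger as well:

```python
# Back-of-the-envelope: water evaporated to carry away a given heat load.
LATENT_HEAT_KJ_PER_KG = 2450  # approx. latent heat of vaporization of water

heat_load_kw = 500  # hypothetical IT heat load to reject (1 kW = 1 kJ/s)

water_kg_per_hour = heat_load_kw * 3600 / LATENT_HEAT_KJ_PER_KG
print(f"~{water_kg_per_hour:,.0f} kg of water evaporated per hour")  # ~735
```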

This combination of hot-aisle containment and indirect evaporative cooling has worked wonders at Sabey’s Quincy, WA data center campus, where the most energy-efficient facility boasts an impressive average annualized PUE of 1.13.
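
To put that figure in perspective, consider what the gap between a 1.13 PUE and a middling one means on an annual energy bill. In the sketch below, only the 1.13 comes from the facility above; the 1 MW IT load, the comparison PUE of 1.5, and the energy price are hypothetical:

```python
# Energy-cost comparison: efficient vs. middling PUE for the same IT load.
IT_LOAD_KW = 1000      # hypothetical 1 MW IT load
PRICE_PER_KWH = 0.06   # hypothetical regional rate, dollars per kWh
HOURS_PER_YEAR = 8760

def annual_cost(pue: float) -> float:
    """Total facility energy cost for one year at a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

savings = annual_cost(1.5) - annual_cost(1.13)
print(f"Annual savings at PUE 1.13 vs. 1.5: ${savings:,.0f}")  # ~$194,000
```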

Remember, the cost of electricity is the colocation tenant’s burden. Make sure you work with a provider that will help you keep cool and carry on cost-effectively.
