Sabey Data Centers Blog

It’s no secret that rack kW is steadily increasing in the data center, nor is it any wonder why. Processing power is greater than ever and there’s only one direction for it to go: up.

However, the massive, sustained computational power required by machine learning workloads is anything but business as usual. Most data center operators can grapple with gradual increases in IT footprint, but high-density GPU clusters for machine learning raise the stakes, particularly where cooling is concerned.

Some newer data centers, especially those using containment strategies, have the infrastructure to adequately cool what, in some cases, amounts to 30 kW per rack or more. Most older data centers, though, aren’t ready to sustain these requirements. This could prove problematic as artificial intelligence, machine learning and deep learning workloads become more commonplace.

Indeed, some colocation providers that operate older raised floor data centers without hot aisle containment already serve customers that want to load up their cabinets but lack the ability to cool their desired densities. But the next wave of customers will have an even bigger ask: cooling infrastructure that can support machine learning workloads.

How can this be done efficiently, and cost-effectively?

Fighting fire with fire

If there’s one thing we’ve learned from Google in the past year or so, it’s that the solution to cooling high-density machine learning workloads may be more machine learning. The Mountain View giant spent several years testing an algorithm that can learn how to best adjust cooling infrastructure. The result was a 40 percent reduction in the amount of energy used for cooling. Phase two of that deployment puts the algorithm on autopilot rather than having it make recommendations to human operators.

Clearly, machine learning can be, and has been, used to achieve greater data center cooling efficiency. While most data centers are not yet equipped to do the same, the theory behind how machine learning can optimize cooling efficiency is fairly well understood.

It starts with a PID (proportional-integral-derivative) loop. This tried-and-true method helps an industrial system (cooling infrastructure, in this case) make real-time adjustments by comparing the actual temperature of the data center to the desired temperature to calculate an error value. It then uses that error to make a course correction that will yield the desired temperature with the lowest electricity consumption.
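To make the mechanics concrete, here is a minimal PID loop in Python. The gains, setpoint and temperatures are made-up values for illustration, not settings from any real facility.

```python
# Minimal PID controller for a cooling loop (illustrative sketch; all
# numbers below are hypothetical, not taken from a real plant).

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # desired temperature
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured_temp, dt=1.0):
        # Error = desired temperature minus actual temperature.
        error = self.setpoint - measured_temp
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # The weighted sum of the error terms is the correction sent
        # to the cooling plant (e.g., a fan speed or valve adjustment).
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=24.0)  # target 24 °C
correction = pid.update(measured_temp=26.5)
```

With the room 2.5 °C above target, the loop returns a negative correction, which the plant would interpret as a call for more cooling.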

PID loops work well; however, they optimize based on a finite set of conditions, and when it comes to data center cooling, there are many conditions that are constantly in flux. This is where machine learning comes into play. Rather than tasking a person with optimizing and re-optimizing based on shifting conditions, an algorithm can monitor PID loops and constantly adjust as needed.

In other words, the PIDs are perpetually configured based on changing factors that influence cooling infrastructure efficiency. Everything from internal humidity, to external weather, to utilization fluctuations within the facility, to interactions between different elements within the cooling infrastructure can influence the desired temperature stability in a high-density data center, and also how efficiently that desired temperature is achieved. It is impractical and costly for a human to constantly optimize PID loops to ensure the most efficient configuration is always in place.

But a machine learning algorithm can. It can theoretically learn the optimal settings for each individual circumstance and apply these adjustments automatically, without human intervention, based on the real-time external and internal conditions. Think of it as auto pilot for data center cooling.
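As a toy illustration of that outer “autopilot” layer, the sketch below searches a set of candidate setpoints for the one that minimizes cooling power under current conditions. The power model, the 27 °C safety limit and the candidate list are all hypothetical; a production system would learn the model from facility telemetry rather than hard-code it.

```python
# Toy outer loop that picks the most energy-efficient PID setpoint.
# The power model is invented for illustration only.

def cooling_power_kw(setpoint_c, outside_temp_c):
    # Hypothetical plant model: colder setpoints and hotter weather
    # both increase cooling power draw.
    return max(0.0, (30.0 - setpoint_c) * 8.0 + (outside_temp_c - 10.0) * 2.0)

def best_setpoint(outside_temp_c, candidates):
    # Evaluate each candidate and keep the cheapest one that still
    # satisfies an (assumed) maximum safe inlet temperature of 27 °C.
    safe = [s for s in candidates if s <= 27.0]
    return min(safe, key=lambda s: cooling_power_kw(s, outside_temp_c))

setpoints = [22.0, 23.0, 24.0, 25.0, 26.0, 27.0]
choice = best_setpoint(outside_temp_c=18.0, candidates=setpoints)
print(choice)
```

Because the made-up model charges more for colder setpoints, the search lands on the warmest setpoint that is still safe, mirroring the real-world practice of raising supply temperatures to save energy.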

Turning concept into reality

Google building an application like this is one thing, but what about other data center operators?

Developing and implementing the type of application we’re describing could come with a colossal upfront cost, one that’s hard to justify for most data center operators – even those with many data centers.

However, developing and training software to act in this way could be a competitive advantage for forward-thinking controls companies. Arguably, in the future, it will be table stakes. Of course, even with such advanced cooling controls, the data center’s physical infrastructure is still important. Legacy data centers using raised floor and inefficient cooling infrastructure have lower limits to capacity and efficiency – regardless of how smart the controls program is.

The sense of urgency for this type of system is nascent. But we know with certainty that the majority of data center operators (67 percent, according to AFCOM) are seeing increasing densities. We also know that machine learning’s power requirements have potential to spur this growth on at a blistering pace in the years ahead.

What we don’t know yet is how we’ll handle this transformation, but I suspect that the solution is already right under our noses.

Remember the scene in the latest Game of Thrones season where (spoiler alert) the undead dragon melts “The Wall,” releasing hordes of White Walkers? That’s kind of what downtime feels like to the modern enterprise—utterly disastrous.

There’s no shortage of data center downtime horror stories—airlines leaving passengers stranded, e-commerce sites parting with millions of dollars worth of missed opportunities, financial services being rendered useless. We’re living in an age of always-on technology. When IT equipment fails, productivity plummets and revenues recede.

It’s no wonder Grand View Research projects the disaster recovery market will grow at a compound annual rate of more than 36 percent through 2025. Disaster recovery is the last line of defense between business as usual and a dragon-like hit to your bottom line. It’s the wall behind the wall, so to speak.

As such, choosing a DR site is a big decision—one that should be based on the core factors we examine in this post. 

But first: Should you outsource completely?

Disaster recovery as a service (DRaaS) can look appealing since it puts everything in the hands of a third-party provider. However, it’s not always ideal for organizations that want greater control over their information systems and the ability to manage backup workflows internally. And depending on the company’s scale, DRaaS isn’t necessarily the best option.

Granted, the opposite extreme—building a new data center from scratch—takes a Herculean effort. Why buy land, construct a campus over it, connect it to the grid, etc. for a facility that you hopefully won’t have to rely on too often? 

Many enterprises have embraced the popular middle ground of leasing space from a colocation data center with existing infrastructure (electricity, cooling, security, etc.). This lets them deploy their own hardware and software and commission an in-house team of IT specialists to manage it. Think of it as a pre-built wall of ice stocked with rangers you trust and weaponry of your choice. 

Tips for choosing a DR site

With that in mind, let’s review a checklist to help you strategically select a backup site:

1. Connectivity

The DR site you select must support adequate connectivity to allow you to service your client base in the event of an outage. That is, after all, why you’re paying for the site—to support normal operations by keeping mission-critical servers online.

2. Safe distance from headquarters

Your DR site should be far enough away from HQ to be unaffected by any geographic disasters such as storms, flooding, etc. What’s more, remote management largely eliminates the need for proximity. The exception is active-active architecture, which should be situated within 30 miles or so to support ultra-low latency – e.g., Sabey’s Quincy and East Wenatchee facilities in Washington are close to business hubs in Seattle but still at a relatively safe distance from HQ.

3. Climate stability

Consider factors such as proximity to flood zones, geological fault lines, hurricane paths and other potential climate hazards. It’s also worth noting that cooling costs will likely be higher in lower-latitude sites.  

4. Uptime and availability

Backup servers aren’t “mission-critical” until, of course, they are. If you work with a colocation data center provider, make sure that they have an uninterruptible power supply (UPS) and redundant cooling. Your backup facilities need to be as resilient as your primary facilities. Also, make sure you ask about concurrent maintainability. This refers to a provider’s ability to perform maintenance on any facility equipment (electrical, mechanical, lighting, etc.) without disrupting power delivery to servers.

5. Security and monitoring

This almost goes without saying, but on-site operations and security teams should be staffed around the clock to handle any issues that arise and to safeguard the premises. Colocation providers are also responsible for implementing fire detection and suppression systems, temperature monitoring, video surveillance and other monitoring systems that ensure the safety and performance of facility infrastructure.

6. Total cost of ownership

Let’s face it: money matters, even in DR. In a colocation data center setup, tenants should primarily pay for the power that’s used to keep their IT equipment online, and as little as possible for the energy that supports it (cooling and other sources of overhead).

To this end, inquire about the provider’s average annualized power usage effectiveness (PUE). This metric is a ratio of power entering the facility to the amount used for the IT load. An average annualized PUE of 1 means that, over the course of a year, all energy used by the facility goes to servers and switches. The best recorded PUEs range between 1.1 and 1.2—but make sure the provider you choose gives you a number that reflects actual operations, rather than a calculation or theoretical value. 

Floods, hurricanes, cyberattacks, meltdowns, dragons—it’s an unpredictable world. Reclaim some peace of mind with a carefully selected DR site.

With each passing year, more organizations seek out green service providers both for the sake of reducing overhead and as a reflection of corporate sustainability values. A recent Green House Data survey revealed that 36 percent of respondents chose what they perceived to be green service providers in 2017, an 8 percent increase over 2015.

Data centers, though indispensable to organizations in every conceivable industry, are voracious energy consumers. This fact has generated interest in sustainable facilities – which raises an important question: What is a green data center?

In broad strokes, it’s a data center that maximizes energy efficiency. But to give you a more exact sense of what that means, we’ve identified some of the factors that contribute to data center sustainability.

Going Green, Saving Green

The source of the energy

First, it’s worth asking where the energy comes from. Hydroelectricity, for example, is widely accepted as a renewable source of power. It also has the benefit of being affordable. A data center with hydroelectricity simultaneously minimizes its carbon footprint and its cost of services provided. In other words, the facility would be sustainable while also being cost-effective for the tenants just by virtue of running on clean energy.

Case in point: Sabey’s data center campuses in Washington State are powered by some of the cheapest hydroelectricity in the country, if not the world. This low cost of energy carries over to tenants in the form of a low total cost of ownership. Just as crucially, the clean source of power boosts the facility’s sustainability before efficiency even enters the equation.

Power Usage Effectiveness rating

Data centers are projected to consume an astonishing one-fifth of the world’s energy by 2025, according to Data Economy. A green data center endeavors to conserve energy by using as little as possible on non-IT functions such as cooling. The Green Grid’s Power Usage Effectiveness (PUE) metric is central to this effort.

To calculate PUE, divide the total amount of electricity entering a facility by the amount specifically powering the IT load. A perfect PUE rating of 1 would mean all a facility’s energy goes to the IT load. This is more wishful thinking than a reality at this point; however, some of the most efficient data centers in the world boast an average annual PUE in the 1.1 range. One of Sabey’s most efficient Washington facilities, for example, has an average annual PUE of 1.13. Make sure that the PUE you examine is based on actual operating conditions, and is not a theoretical calculation.
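The arithmetic itself is simple. A quick sketch, with illustrative numbers:

```python
# PUE = total facility power / power delivered to the IT load.

def pue(total_facility_kw, it_load_kw):
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example: a facility drawing 1,130 kW in total to run a 1,000 kW IT load
# (hypothetical figures) has a PUE of 1.13.
ratio = pue(total_facility_kw=1130.0, it_load_kw=1000.0)
print(round(ratio, 2))  # 1.13
```

In practice, the ratio should be averaged over a full year of metered data, since both weather and IT load move it around from month to month.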

Cooling methods

Cooling-related energy-saving methods will be captured in the average annual PUE, but they’re worth examining because they illuminate the types of efforts that go into managing a green data center.

Servers, network switches and other IT equipment inevitably generate heat. Heat must be rejected by a fluid (typically air), with that fluid treated and circulated in the most efficient manner possible. Enter hot-aisle containment. Rows of cabinets are positioned back-to-back so that they’re separated by a hot aisle (hence the name) and isolated from the cool aisles with blanking panels. Exhaust from the rear of servers is expelled into this aisle, where it naturally rises into overhead return plenums and is passively directed into air handlers. The air handlers reject heat and discharge the conditioned air back into the cool aisle, where it can be pulled through the IT equipment, by the IT equipment’s fans, and blown into the hot aisle again.

That brings us to the actual cooling units, which should be selected based on the climate. Indirect evaporative cooling, for instance, is a highly efficient process in moderate-to-dry climates that have abundant access to water (e.g., Central Washington). It involves adding water to the air to lower its temperature. Efficient data centers also use economizers that, on chilly days, deactivate the compressor and instead use the cold outside air to provide cool air to the servers – either directly or through a heat exchanger of some kind. Think of it as turning the air conditioner off and putting a fan in the window.
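The economizer decision can be sketched as a simple threshold check. The 15 °C changeover temperature below is a made-up example; real changeover logic also weighs humidity and the cost of mixing or exchanging air.

```python
# Illustrative economizer logic: when outside air is cold enough, shut off
# the compressor and cool with outside air (directly or via a heat exchanger).
# The 15 °C threshold is a hypothetical example, not a vendor specification.

def cooling_mode(outside_temp_c, economizer_threshold_c=15.0):
    if outside_temp_c <= economizer_threshold_c:
        return "economizer"   # free cooling: compressor off, outside air in
    return "mechanical"       # compressor-based cooling

print(cooling_mode(8.0))   # economizer
print(cooling_mode(30.0))  # mechanical
```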

Other key considerations

Other indicators of a green data center include Energy Star certification, as well as a general commitment to reduction of waste and recycling. It’s also worth noting that IT equipment efficiencies can contribute to a green data center, but it is the responsibility of the tenant in a colocation data center to select efficient technology, and to avoid running idle or “zombie” servers whenever possible. While comatose or inefficient IT loads won’t degrade PUE, they still contribute to waste and are therefore worth addressing for sustainability and cost-saving purposes.

For additional insights into green data centers, or for information about Sabey’s most energy-efficient facilities, contact us today.


How Adopting a Hybrid Cloud Strategy Can Enhance Enterprise IT Operations

Have you ever tried to balance a cloud?

That’s exactly what enterprises striving for digital transformation have realized that they need to do.

Digital transformation is a trend that is sweeping the world as organizations seek to enhance their performance and better serve their customers through technology. Moving mission-critical applications and electronic records from on-premises data centers to the cloud, for example, can increase business agility, ensure future scalability, and optimize daily operations. As these organizations migrate workloads to meet the storage and connectivity demands of bandwidth-intensive applications, IoT devices, and user interfaces, they are developing multi-cloud strategies that balance the scale and simplicity of the public cloud with the security and control of private cloud hosting for a cost-efficient result. In fact, many industry experts agree that most businesses will eventually employ multiple cloud platforms to host their data and applications, so let’s take a deeper look at what it actually means to have such a hybrid cloud strategy.

“What is hybrid cloud computing?” is a question likely to generate debate among members of the technology community, as the concept is relatively new and evolving. The National Institute of Standards and Technology (NIST) states that “hybrid cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.” In layman’s terms, a hybrid strategy allows you to utilize different types of cloud infrastructure to create a single solution that provides the best functions of each platform. An effective hybrid cloud strategy enables companies to assign distinct workloads to appropriate cloud platforms to ensure they can support each load’s individual requirements for security, speed, agility and flexibility without wasting unnecessary resources.

However, implementing and managing a hybrid cloud strategy is an extremely complex endeavor, and a high level of knowledge and experience is critical to ensure its functionality and reliability. This high bar means that going it alone simply isn’t an option for most companies.

Partnering with a progressive colocation services provider can help these companies clear this hurdle and establish a hybrid cloud environment that will enable the seamless and concurrent deployment of on-premises, colocation, and public cloud data and application hosting. The right provider can build a custom multi-cloud solution designed to suit a company’s unique requirements and take on the responsibility of cloud management and maintenance, ensuring its continued security, reliability, and compliance with various regulatory standards.

Whether you’re considering cloud solutions for the first time or seeking new ways to accelerate your company’s digital transformation through a hybrid cloud strategy, just remember that when it comes to developing an effective and customized solution: two clouds are better than one.

Advanced Colocation Solutions Bridge the Gap Between On-Premises and Hyperscale Data Center Environments

Traditional data centers are destined for extinction like the dinosaurs they are.

Our digital age, characterized by enterprise virtualization and cutting-edge trends such as IoT and Big Data analytics, has far outgrown the traditional data center’s characteristic patchwork of management tools for servers, storage, routers, network, and power. Yesterday’s solutions are simply unable to support this sudden and significant increase in data storage and virtualization needs.

A growing number of enterprises are looking ahead, eager to turn the page on their private, traditional data centers to an exciting new chapter of flexibility, scalability, and modularity. Increasingly they are looking to the cloud’s ease of deployment and maintenance to meet their evolving needs.

Many forward-looking companies who transition to the cloud, however, discover that the scalability and flexibility come hand-in-hand with an insidious downside: cloud creep. They quickly learn that a cloud footprint can be all but impossible to contain, leading to a lack of control, shocking expense, and unacceptable levels of security risk.

Modern enterprises must choose between outdated and unworkable legacy data centers and expensive, high-risk hyperscale providers.

Or do they?

Enterprises with modern requirements seeking the convenience of the cloud with the control and security of an in-house data center have found their ideal solution in colocation data centers.

By providing a new and innovative data center model, colocation providers are introducing a unique approach to the way facilities are designed, managed, and maintained, with the goal of supporting increasingly complex workloads in a cost-effective manner. For example, data center architects construct colocation facilities with optimization in mind, utilizing free cooling and other cost-effective cooling methods as well as centralized UPS solutions.

Partnering with a progressive colocation services provider can even ease the transition, enabling an enterprise to establish a hybrid environment for seamless integration of existing on-premises and public cloud applications.

If you’re interested in learning more about this trend, be sure to join us for a weekly exploration of the key solutions and technologies that are impacting the data center industry. In our upcoming three-part series, we’ll explore the various practices and technical deployments that define an advanced colocation environment, including hybrid cloud enablement, high-density infrastructure, and top efficiency techniques that have the greatest impact on operational costs.

Energy efficiency is the holy grail for data center owners and operators seeking low operational expense and minimal environmental impact. However, according to the Natural Resources Defense Council, data centers in the U.S. alone are projected to consume 139 billion kilowatt-hours by 2020, placing potential strain on the environment as well as facilities’ bottom lines.

We recently had the opportunity to sit down with John Ford, Sabey Data Centers’ Vice President and General Manager of Intergate.Seattle, to learn more about the various strategies employed at the facility to conserve energy and reduce cost. During the interview, John shared information about the history of Sabey Data Centers and its position on energy conservation and environmental responsibility, as well as how today’s trends affect data center energy consumption.

Insider Perspective

Though many are quick to blame data centers for consuming excessive amounts of power, John strongly believes that they are the most energy-efficient option for data storage:

It’s an interesting paradox because there is so much data being generated and there’s an awful lot of concern about how power is being utilized to store this information. Data centers function as central repositories for the copious amounts of data produced by burgeoning industry trends such as the Internet of Things and Big Data.

While it is undisputed that these colocation facilities use a great deal of power, this approach is much more energy-efficient than dispersing data across a wide range of smaller buildings. Simply put, it is the best and most energy efficient way to store information.

Renewable Energy

When exploring the possibility of straining available energy resources, John says it’s important to turn to renewable sources and use them in a wise and effective manner:

There are so many renewable resources available that there’s no shortage of energy, and there won’t be for the foreseeable future. However, using that energy efficiently is incredibly important. It’s paramount that we take care to ensure the energy we have is being used in smart and effective ways.

Luckily for Intergate.Seattle, its proximity to multiple renewable energy resources in the Pacific Northwest places it in a prime position to take advantage of “free” cooling techniques:

In the Northwest, we are blessed with having a lot of hydro power, which is a low cost, renewable, energy alternative. However, just having the power available isn’t a positive thing unless you’re using it conservatively. To achieve this, we have implemented multiple energy conservation techniques and use more efficient cooling and UPS systems that run on the least amount of fuel possible.


Sabey has always approached data center power consumption from a place of innovation and environmental responsibility.

Our construction company has been building unique facilities since the 1970s, and Dave and John Sabey have always looked for new ways to build infrastructure that is more efficient and has less of an impact on the environment. Sabey Data Centers also pioneered cooling research in the early 2000s as the industry discovered that many facilities were in fact being overcooled using inefficient methods. By developing and implementing new cooling techniques that use thermodynamics to their advantage, we maintained the design philosophy of keeping it simple.

Intergate.Seattle is the largest privately-owned multi-tenant data center complex on the West Coast and Sabey Data Centers’ flagship data center property. The facility comprises two campuses, eight buildings and more than 1.3 million square feet of data center space.

As the industry continues to develop and utilize more innovative technologies and techniques that improve the energy efficiency of data center facilities, the future is looking bright.

This really is an exciting time to be involved in the data center industry. I look forward to seeing what the future will hold as new trends and technologies emerge.

A small farming town on Washington state’s high prairie is buzzing with life: restaurants are busy, businesses are growing, infrastructure is modernizing, and ground was broken last month for a new high school. The regional airport will soon feature a daily flight to and from San Francisco.

Welcome to Quincy

Founded as a railway station in the late 19th century, Quincy (and Grant County at large) was transformed into an agricultural center upon the completion of the Grand Coulee Dam in 1942. Decades later, Quincy had little reason to take note of technology as the internet was born, a dot-com bubble grew and burst, and smartphones took up residence in every pocket.

With the rise of data centers in recent years, however, technology took note of Quincy.

Today, it’s difficult to throw a rock in Quincy without hitting one of the dozens of impressive data centers that have been built in the last decade. Enterprises and colocation providers alike have made strategic decisions to build data centers in central Washington – but why?

In 2011, Sabey Data Centers opened the door to its own Intergate.Quincy data center campus, and it has remained one of our most successful properties to date. After seven years in the region, we’d like to share what we’ve learned, and what makes central Washington such a special place for data centers.

Central Washington Energy

Clean and inexpensive hydroelectric power is a hallmark of central Washington, with prices as low as $0.027 per kilowatt-hour driving the total cost of ownership (TCO) for data centers to among the lowest in the country.
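To see how a rate like that flows through to TCO, here is a back-of-the-envelope annual energy cost calculation. The 1 MW IT load and 1.2 PUE are hypothetical inputs for illustration only.

```python
# Back-of-the-envelope annual energy cost at a hydro rate of $0.027/kWh.
# The 1 MW IT load and 1.2 PUE below are assumed values, not real figures.

def annual_energy_cost(it_load_kw, pue, rate_per_kwh):
    hours_per_year = 8760
    # Total facility draw = IT load * PUE; multiply by hours and rate.
    return it_load_kw * pue * hours_per_year * rate_per_kwh

cost = annual_energy_cost(it_load_kw=1000, pue=1.2, rate_per_kwh=0.027)
print(f"${cost:,.0f} per year")  # $283,824 per year
```

Run the same numbers at a typical retail rate of a dime per kilowatt-hour and the bill more than triples, which is why the energy rate dominates the TCO conversation.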

It’s been said that with great power comes great responsibility, so sourcing natural, renewable, and reliable power is only half of our commitment to sustainability; using that power as responsibly as possible is the other half. As a testament to our commitment, Sabey Data Centers was recently cited by the U.S. Department of Energy as having achieved the highest level of energy savings among data centers in its 2017 Better Buildings Progress Report, due in part to the unprecedented 41% energy savings at Intergate.Quincy.

Saving 6 million kWh of energy annually, the campus was also awarded the highest-ever Power Players Award from the Smart Electric Power Alliance (SEPA).

Central Washington Geography

In addition to inexpensive, renewable energy and peerless efficiency, our tenants at Intergate.Quincy (and the nearby Intergate.Columbia) also enjoy benefits of the region, such as seismic stability and a moderate climate that allows for 90% free cooling.

Available rural real estate also lends itself well to the large footprint of data centers, and central Washington has plenty. Intergate.Quincy provides 420,000 square feet of purpose-built colocation space across three strategically-designed buildings, offering modular efficiency for users of virtually any size. Built-to-suit powered shell, hybrid configurations, and wholesale turnkey colocation module options exist, all managed by an award-winning critical environment management team.

Quincy’s proximity to nearby, bursting-at-the-seams Seattle has also been a boon for its data centers, as skilled workers are all too eager to leave the rain and traffic jams behind for a smaller community and a lower cost of living. A short drive over the Cascades (or an even shorter flight from SeaTac airport) also makes central Washington data centers easily accessible to companies across the country.

As it turns out, what’s good for data centers is also good for people! Our employees, and those of our tenants, enjoy a small-town, mountain lifestyle in central Washington. Summer brings fishing, camping, and water recreation, while cold-weather sports such as skiing and snowmobiling round out the year. Data centers have breathed new life into the countryside, helping give rise to wineries, restaurants, cultural events, and conveniences that have in turn reinvigorated tourism in the region.

Central Washington Connectivity

Intergate.Quincy offers access to multiple network carriers, including Frontier Com­munications, CenturyLink, NoaNet, Noel Communications, and StarTouch. The facility also provides access to dark fiber providers such as Zayo, in addition to the SDN Next-Generation Network Platform, PacketFabric. To support the diverse connectivity requirements of tenants, Intergate.Quincy delivers multiple telecom services, diverse Points of Entry, dark fiber, Dedicated Internet Access (DIA), point-to-point, and on-ramps to the cloud.

Our remote hand services are available 24×7, providing customers with various services including cable and loopback testing, device reset, standby support, rack audits, and short- and long-term material storage.

About Sabey Data Centers

Leveraging 45 years of innovation, Sabey Data Centers has developed a wide footprint of multi-tenant data centers across North America, each designed to deliver performance, flexibility and scalability. A family-owned organization headquartered in Seattle, Sabey Data Centers boasts a portfolio of data centers from coast to coast.

If you’re interested in learning more about colocation at Sabey’s Intergate.Quincy facility, please click here or contact us.

How Hydroelectric Power Provides The Best Of Both Worlds

If you’ve ever stood along the shore listening to the persistent crashing of waves during high tide or marveled at the beauty of nature at the foot of a cascading waterfall, you’ve experienced the raw power that is water in motion.

In one of the most notable innovations in history, humans have been harnessing the power of water to perform work for thousands of years, dating back to ancient Greece. This form of energy, known as hydropower, has grown and evolved over time, resulting in modern-day hydroelectric power plants that harness the energy of flowing water to create electricity.

Why California-Based Companies Are Moving Their Data Center Operations To Washington State

Silicon Valley. It’s a name that conjures up tech giants such as Google, Apple and Facebook residing side-by-side in a magical, if somewhat vacuum-sealed, locale. It’s also a region where hyperscale businesses and innovative start-ups alike base their operations in the hope that its tech-friendly environment will help drive major success. Despite its allure among technology-driven industry sectors, many California-based enterprises are beginning to realize that perhaps there’s a better place than Silicon Valley to host their data.