Data Centers from Edge to Cloud
Our Focus
We design, build, and operate data centers that are great for companies and their people... but better for the planet
Mission-critical installations face a number of cooling system challenges in the modern data center. The requirements of today’s IT systems, combined with the way those systems are deployed, have created new cooling-related problems, problems that could not have been foreseen when data center cooling principles were developed more than 30 years ago.
Core challenges in the data center cooling process can be grouped in the following categories:
- Adaptability/Scalability
- Availability
- Lifecycle Costs
- Maintenance/Serviceability
- Manageability
For many companies, meeting adaptability requirements remains the biggest challenge for data center cooling systems. Specifically, this involves cooling high-density rack systems and coping with uncertainty about the quantity, timing, and location of high-density racks. Data center cooling is further complicated by IT refreshes that typically occur every 1.5 to 2.5 years.
The cooling system within a data center should be flexible and scalable, with redundant cooling features to guarantee steady performance. Lifecycle cost challenges share many features with adaptability challenges: pre-engineered, standardized, and modular solutions are typically needed.
PTS’ expertise is a valuable asset in this area, as companies are often unable to predict whether their data center cooling system will support a future load, even when the characteristics of that load are known in advance. If your company is looking to establish a data center cooling system that will withstand system failures and load increases, contact PTS as the next step in your process.
PTS approaches data center cooling designs using our proven project process:

Design goals are established across the following categories:
- Adaptability
- Availability
- Maintainability
- Manageability
- Cost
Once appropriate design goals are established, a number of additional steps are recommended as data center cooling best practices.
- Determine the Critical Load and Heat Load. Determining the critical heat load starts with identifying the equipment to be deployed within the space. However, this is only part of the environment’s total heat load: lighting, people, and heat conducted from the surrounding spaces also contribute. As a very general rule of thumb, plan for no less than 1 ton (12,000 BTU/hr, or 3,516 watts) of cooling per 400 square feet of IT equipment floor space (a worked heat load sketch follows this list).
- Establish Power Requirements on a per-RLU Basis. Power density is best defined in terms of rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size. One definite Rack Location Unit (RLU) trend is that average RLU power densities increase every year. In reality, a computer room usually deploys a mix of RLU power densities throughout its area. The trick is to provide predictable cooling for these varying RLU densities by using the average RLU density as the basis of the design, while also providing adequate room cooling for the peak RLU and non-RLU loads (a density-mix sketch follows this list).
- Determine the CFM Requirements for Each RLU. Effective cooling requires delivering both the proper temperature and an adequate quantity of air to the load. For temperature, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) guideline is to deliver air between 68 °F and 75 °F to the inlet of the IT infrastructure. Although electronics perform better at colder temperatures, it is not wise to deliver lower air temperatures because of the risk of condensation forming on equipment surfaces. Regarding air volume, a load component requires roughly 160 cubic feet per minute (CFM) per 1 kW of electrical load; a 5,000-watt cabinet of 1U servers therefore requires about 800 CFM (a short airflow calculation follows this list).
- Perform Computational Fluid Dynamics (CFD) Modeling. CFD modeling can be performed for the underfloor air plenum as well as the space above the floor. Modeling the airflow in a computer room provides the information needed to make informed decisions about where to place CRAC equipment, IT equipment, perforated tiles, high-density RLUs, etc. Much of the software available today also allows mapping of both underfloor and overhead airflow obstructions to more accurately represent the environment.
- Determine the Room Power Distribution Strategy. The two main decisions in developing a room power distribution strategy are: (1) where to place the power distribution units (PDUs), and (2) whether to run power cables overhead or under the floor.
- Determine the Cabinet Power Distribution Strategy. In deciding how power will be distributed within the cabinet, whether dual power supplies will be used, and how cabling will be routed, it is important to understand the impact of power distribution on cooling, particularly as it relates to airflow within the cabinet.
- Determine the Room & Cabinet Data Cabling Distribution Impact. Typically, there are three choices for delivering network connectivity to an RLU: (1) home-run every data port from a network core switch; (2) provide matching port-density patch panels at both the RLU and the core switch, with pre-cabled cross-connections between them, so that server connections can be made with only patch cables at both ends; or (3) provide an edge switch at every rack, row, or pod depending on bandwidth requirements. This last approach is referred to as zone switching.
- Establish a Cooling Zone Strategy. Recall that effective computer room cooling is as much about removing heat as it is about adding cold. Generally speaking, the three equipment cooling methods, along with their typical cooling potential, are:
  - Room cooling: ~2 kW per RLU
  - Row cooling: ~8 kW per RLU
  - Cabinet cooling: ~20 kW per RLU
  It is also critical to consider high-density cooling and zone cooling requirements (a zone selection sketch follows this list).
- Determine the Cooling Methodology. Once the cooling zones are determined, the types of air conditioners needed must be chosen. There are four air conditioner types: (1) air cooled, (2) glycol cooled, (3) condenser water cooled, and (4) chilled water. In addition, it is important to determine how heat will be rejected from the system and what type of cooling redundancy is required and available for a particular methodology.
- Determine the Cooling Delivery Methodology. Different architectural attributes affect cooling performance in different ways. For instance, designs should consider the location of the computer room within the facility (i.e., exterior versus interior rooms), the height of the raised floor, the height of the suspended ceiling, etc.
- Determine the Floor Plan. The ‘hot aisle / cold aisle’ approach is the accepted layout standard for RLUs for good reason: it works. It was developed by Dr. Robert Sullivan while working for IBM, and it should be adopted for both new and retrofit projects. After determining the hot and cold aisles, it is critical to place the CRAC units for peak performance. This may include room-, row-, or rack-based cooling approaches; each works well depending upon the IT infrastructure, power densities, CFM requirements, and other attributes previously discussed.
- Establish Cooling Performance Monitoring. It is vital to develop and deploy an environmental monitoring system capable of monitoring each room, row, and cabinet cooling zone. It is a given that once effective cooling performance is established for a particular load profile, that profile will change rapidly. It is important to compile trending data for all of the site’s environmental parameters so that moves, adds, and changes can be executed quickly.
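
To make the heat load step above concrete, here is a minimal Python sketch that totals the IT, lighting, people, and conduction contributions and checks the result against the 1-ton-per-400-square-feet rule of thumb. The per-person heat figure and the example loads are illustrative assumptions, not PTS design values.

```python
# Rough computer-room heat load estimate (illustrative sketch, not a design tool).

WATTS_PER_TON = 3_516  # 1 ton of cooling = 12,000 BTU/hr, roughly 3,516 watts

def heat_load_estimate(it_load_w, floor_area_sqft, people=0, lighting_w=0.0, envelope_w=0.0):
    """Return (total_watts, tons_required) for a computer room.

    it_load_w   -- IT equipment load in watts
    people      -- occupants; ~100 W of sensible heat each (assumed figure)
    lighting_w  -- installed lighting load in watts
    envelope_w  -- heat conducted in from surrounding spaces, in watts
    """
    total_w = it_load_w + people * 100 + lighting_w + envelope_w
    tons = total_w / WATTS_PER_TON

    # Rule of thumb from the text: plan no less than 1 ton per 400 sq ft
    # of IT equipment floor space.
    floor_minimum_tons = floor_area_sqft / 400
    return total_w, max(tons, floor_minimum_tons)

# Example: 40 kW of IT load in a 1,200 sq ft room with 2 people and 1.5 kW of lighting.
watts, tons = heat_load_estimate(40_000, 1_200, people=2, lighting_w=1_500)
print(f"Estimated load: {watts:,.0f} W, requiring about {tons:.1f} tons of cooling")
```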
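The per-RLU power step can be illustrated the same way: given an assumed mix of rack loads, the sketch below computes the average density that anchors the room design and the peak density that will need targeted cooling. The rack loads are made-up example values.

```python
# Per-RLU power density sketch: design room cooling around the average density
# while flagging peak RLUs that will need supplemental (row/cabinet) cooling.

rack_loads_kw = [2, 2, 3, 3, 4, 4, 5, 8, 12, 20]  # one entry per RLU (example mix)

average_kw = sum(rack_loads_kw) / len(rack_loads_kw)
peak_kw = max(rack_loads_kw)

print(f"Average RLU density: {average_kw:.1f} kW (basis for the room cooling design)")
print(f"Peak RLU density:    {peak_kw:.1f} kW (needs targeted cooling)")
```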
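The airflow step is simple arithmetic. Applying the roughly 160 CFM per kW guideline reproduces the 5 kW, 800 CFM cabinet example from the text:

```python
# Airflow sizing sketch: roughly 160 CFM of supply air per kW of electrical load.

CFM_PER_KW = 160

def rlu_airflow_cfm(rack_load_kw: float) -> float:
    """Required supply airflow (CFM) for a rack location unit."""
    return rack_load_kw * CFM_PER_KW

# A 5 kW cabinet of 1U servers needs about 800 CFM; other densities shown for comparison.
for kw in (2, 5, 8, 20):
    print(f"{kw:>4} kW RLU -> {rlu_airflow_cfm(kw):>5.0f} CFM")
```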
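Finally, a sketch of the cooling zone decision: given an RLU’s power density, it suggests the coarsest method from the capacities listed above that can carry the load. The thresholds are the approximate figures quoted above, not hard limits.

```python
# Cooling-zone selection sketch based on the approximate per-RLU capacities above
# (~2 kW room, ~8 kW row, ~20 kW cabinet).

def cooling_method_for(rlu_kw: float) -> str:
    """Suggest the coarsest cooling approach that can handle an RLU's load."""
    if rlu_kw <= 2:
        return "room cooling"
    if rlu_kw <= 8:
        return "row cooling"
    if rlu_kw <= 20:
        return "cabinet (rack) cooling"
    return "supplemental high-density solution required"

for kw in (1.5, 4, 12, 25):
    print(f"{kw:>5} kW per RLU -> {cooling_method_for(kw)}")
```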
Contact PTS for your Next Data Center Project
You shouldn’t have to compromise when it comes to your data center. Our expert team assesses your unique needs, and then employs a proven data center plan that reduces the amount of support infrastructure you will need.
The result is higher efficiency, reduced complexity and better resiliency at lower cost… all at the same time.
Contact us for a quick chat to discuss how a smarter design for your data center can save you money.
Contact PTS today:
(201) 337-3833
