Very large companies have been consolidating servers for years. Amazon.com, eBay and hundreds of other companies that rely on computing power run hundreds or thousands of servers. Their goal is to consolidate as many of those servers as possible to save space, reduce energy consumption and cut costs, among other benefits.
What about your small business? Even if you're running only three or five servers, consolidation is for you, too. Here is an overview of why, plus a checklist for evaluating your IT infrastructure for consolidation.
Q&A with Jeff Sturgeon, vice president of marketing, Liebert products, Emerson Network Power
Why does server room equipment have to be cool?
Electronics are extremely sensitive to variations in temperature, humidity and air quality, so typical comfort-cooling environments can create problems. If IT equipment is not cooled properly, it faces an increased risk of premature failure and higher-than-necessary operating costs. Unfortunately, business growth often causes the data center to outgrow its support systems. Couple this with a limited understanding of the cooling requirements of sensitive electronics, and the result is small businesses trying to use comfort cooling for precision electronics.
When initial costs and operating costs are both taken into account, precision cooling systems actually represent the most cost-effective solution to cooling sensitive electronics. A properly cooled environment will:
– Improve the resilience of the IT system,
– Reduce downtime,
– Shorten recovery times, and
– Eliminate disruptions when new technologies are added.
I thought virtualization and consolidation were supposed to take care of server expansion – and, by extension, cooling?
Consolidation and virtualization change the power and cooling profile of the data center. The benefits of consolidation and virtualization may not be realized if power and cooling strategies are not adjusted.
Simply put, denser, more powerful servers generate more heat than the systems they are replacing. Blade servers in particular concentrate heat within a small space. Room air conditioning is unlikely to be sufficient for cooling consolidated environments, so precision cooling needs to be added. Hot spots that arise from higher density servers may need specialized high-density cooling.
What’s the smallest “data center” that would be affected by cooling? Two servers? Three servers?
IT equipment often requires 24×7 dedicated cooling and may require more precise temperature, humidity and air filtration control at levels provided only by precision cooling. While many of the smallest data centers rely upon building air conditioning for IT spaces, there’s a good chance they’ll need precision cooling to ensure reliable operation of any new equipment.
Today, a typical 1U server uses between 250 W and 500 W. A rack holding even two to four such servers needs dedicated cooling. At 5 kW and above, high-density cooling is often required to adequately protect equipment. So one blade server alone, commonly drawing at least 6 kW, should have a high-density cooling solution.
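The arithmetic behind those thresholds can be sketched in a few lines. This is only an illustration of the figures quoted above; the function name and the exact cutoff for "dedicated" cooling are assumptions, not part of the article.

```python
# Rough rack heat-load estimate using the figures quoted above:
# 1U servers draw roughly 250-500 W each, and 5 kW per rack is the
# point at which high-density cooling is often required.
# The 0.5 kW "dedicated cooling" cutoff below is an illustrative
# reading of "two to four servers per rack", not a published figure.

def rack_cooling_tier(server_watts, server_count):
    """Return (total load in kW, suggested cooling tier) for one rack."""
    total_kw = server_watts * server_count / 1000.0
    if total_kw >= 5.0:
        tier = "high-density cooling"
    elif total_kw >= 0.5:  # roughly two 250 W servers
        tier = "dedicated precision cooling"
    else:
        tier = "room air conditioning may suffice"
    return total_kw, tier

# Four 1U servers at 500 W each -> 2.0 kW, dedicated precision cooling
print(rack_cooling_tier(500, 4))
# A blade server drawing ~6 kW crosses the high-density threshold
print(rack_cooling_tier(6000, 1))
```

Plugging in your own per-server wattage and counts gives a first-pass answer before any formal assessment.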
ASHRAE – the American Society of Heating, Refrigerating and Air-Conditioning Engineers – recommends a stable operating range of 68 to 77 degrees F for data centers, with a maximum allowable range of 59 to 90 degrees F. As heat densities rise, the cooling system becomes far more critical to availability – and this is compounded if you are cooling in a closed environment.
What are some suggestions for keeping the server room cool without purchasing technology solutions? Better planning?
Before anyone runs out to buy precision cooling technologies, they should work with their IT reseller to conduct an assessment of their data center. This should help to calculate power and cooling availability, capacity and redundancy for the consolidated data center to identify risks and vulnerabilities that could adversely affect the outcome of the project.
As a specific example, a Computational Fluid Dynamics (CFD) assessment can show exactly how airflow will occur in the consolidated environment and where hot spots and other cooling challenges will exist. The room and rack power loads can help determine levels of cooling capacity.
And once the consolidation or virtualization project is complete, don't just leave it alone – monitor it! Data centers should be monitored to verify equipment utilization and to track environmental conditions such as temperature, humidity and water leaks.
Finally, lack of scheduled maintenance and service can cause unplanned downtime in the increasingly critical environment created by consolidation and virtualization. Extending the useful service life of power and cooling equipment through proper maintenance, predictive monitoring and keeping systems current reduces the likelihood of both downtime and unnecessary investment in new technologies.
Where on the Emerson Network Power site can readers find the cooling solutions?
This will take you directly to the precision cooling section of the site: http://www.liebert.com/product_pages/MainCategory.aspx?id=4&hz=60.
Checklist for Reviewing IT Infrastructure for Consolidation / Virtualization
Below is a checklist to help you evaluate your IT infrastructure for consolidation and virtualization. Be sure to work with your local Liebert representative to review your needs and determine exact equipment requirements.
Have you calculated your cost of downtime for IT equipment to help determine your desired availability levels for your power and cooling infrastructure?
Is your IT equipment secure from unauthorized access?
Do you have monitoring in place so you can be alerted when cabinets are opened or equipment is added or changed?
Have you sized UPSs for a combination of actual power usage and planned expansion?
Are you using full loads and not nominal loads to size UPSs?
Are your one-line electrical drawings up to date so you can identify single points of failure?
For dual-corded redundancy, is equipment connected to two PDUs, UPSs and circuits?
Are you using online UPSs to provide the highest levels of reliability for your critical consolidated environment?
If you are using a generator, are your UPSs compatible with generators?
Have you calculated the amount of time your IT equipment can operate without cooling in the event of an outage?
Are you using dedicated or precision cooling for critical IT systems?
Are your racks arranged in hot aisle / cold aisle configuration to facilitate heat removal?
Do you have adequate cooling redundancy with loads distributed between multiple cooling systems?
Are you adding high density servers that increase rack loads beyond 5kW, which then may require dedicated or supplemental high density cooling systems?
Do you inspect your racks routinely for hot spots and document temperature measurements for trending?
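The first checklist item – calculating your cost of downtime – can be approximated with a simple model. Everything here is an assumption for illustration: the linear lost-revenue-plus-recovery formula and all sample figures are placeholders, not numbers from this article; substitute your own business data.

```python
# Illustrative cost-of-downtime estimate for the first checklist item.
# Assumes a simple linear model: lost revenue per hour of outage plus
# a fixed recovery cost (labor, data restoration, etc.).

def downtime_cost(outage_hours, revenue_per_hour, recovery_cost=0):
    """Estimated cost of one outage: lost revenue plus recovery cost."""
    return outage_hours * revenue_per_hour + recovery_cost

# e.g. a 4-hour outage at $2,500/hour plus $1,000 of recovery labor
print(downtime_cost(4, 2500, 1000))  # 11000
```

Comparing this figure against the price of redundant power and cooling helps set the availability level the checklist asks about.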
Monitoring & Management
Do you use network communications software?
Do you want to monitor power and cooling equipment via your network?
Do you want to be able to send alerts, initiate graceful shutdowns of equipment and control power usage within the rack?
Do you monitor for heat, smoke, humidity, and water leakage in your IT spaces?
Do you have UPS battery monitoring systems in place and a preventive maintenance program?
Do you routinely review your monitoring logs or do you need a remote monitoring service to do this for you?
Preventive Maintenance & Rapid Response Service
Have you evaluated your outside service level agreements in light of your consolidated environment?
Are your service providers factory certified?
Do you have immediate phone support for service – 24 x 7 x 365?
Do you conduct UPS and battery checks or other types of UPS preventive maintenance?
Do you know the MTBF and expected life span of your UPS and cooling equipment, so you can perform adequate preventive maintenance?
Do you need a long-term warranty and service package to provide preventive maintenance and repair?