When Harold Sirls surveys his data center, he doesn’t just look at indicator lights and wiring; he also makes sure the air is crisp but not frosty. Like every other data center manager, Sirls has seen the challenge of cooling grow over the past few years. At St. Luke’s Health System in Lee’s Summit, Mo., he introduced blade servers two years ago and has since had to be more diligent about checking his cooling systems. "It would be great if we had the ability to create a room just for blades," he says. "But since we can’t build that yet, I have to tweak the temperature every day."
Although data center managers are familiar with the difficulty of keeping their facilities at a nice cool temperature, the rise of blade servers and other high-powered equipment is compounding the problem, and experts say it’s only going to get worse. "This is one area where there is no free lunch," says Kenneth Brill, consultant at The Uptime Institute, a firm specializing in uptime management. "IT people will have to become experts in cooling whether they want to or not. The physical layer is going to be a basic driver in the choices IT makes in the years ahead."
In addition to the demands of blade servers, IT managers are facing a number of other obstacles when it comes to cooling, experts note. "The challenge is increased by the fact that most facilities are a multivendor environment, making it even harder to discover unbiased, vendor-neutral solutions," says Don Beaty, chair of the ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) Technical Committee.
Beaty believes that discrepancies exist between the power requirements printed on the nameplates on the back of equipment and the configuration specifications that vendors such as IBM and Sun prepare when they manufacture and test the equipment. "The discrepancy means that managers should not rely on equipment nameplates as a good source for characterizing the power and cooling load," he says. Gauging power requirements this loosely leads to over- or underprovisioned cooling, which at worst can damage equipment and put a company at risk.
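To see why the gap matters, remember that cooling is sized from electrical load: every watt a machine draws becomes roughly 3.412 BTU per hour of heat that must be removed. The sketch below runs that arithmetic on invented figures (the equipment names and wattages are hypothetical, not vendor data) to contrast nameplate totals with measured draw.

```python
# Hypothetical sketch: sizing the cooling load from measured draw vs. nameplate
# ratings. All wattage figures are invented examples, not vendor data.

WATTS_TO_BTU_PER_HR = 3.412  # 1 watt of electrical load ~ 3.412 BTU/hr of heat

equipment = [
    # (label, nameplate watts, measured watts for the installed configuration)
    ("blade chassis A", 4000, 2600),
    ("blade chassis B", 4000, 2450),
    ("storage array",   1200,  700),
]

nameplate_w = sum(nameplate for _, nameplate, _ in equipment)
measured_w = sum(measured for _, _, measured in equipment)

print(f"Nameplate: {nameplate_w} W = {nameplate_w * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr")
print(f"Measured:  {measured_w} W = {measured_w * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr")
print(f"Nameplates overstate the load by {nameplate_w / measured_w - 1:.0%}")
```

Sized from nameplates alone, this hypothetical room would provision roughly 60% more cooling than its measured load calls for, which is why Beaty points managers toward vendors’ measured configuration data instead.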
Another problem is that what was once a good, solid data center space is often no longer up to the task. Data centers built even five years ago often can’t cool effectively, according to Bill Hunter, chief engineer at Cingular Wireless. "Traditional data centers aren’t up to the challenge," he notes. "You can’t push enough cool air through perforated tile when you have something like a blade center." Because of this, Hunter has observed, IT managers have begun to investigate alternatives such as cabinets with additional cooling built in, water-cooled systems, and the "room within a room" approach.
For small and midsized enterprises in particular, revamping a data center can be difficult, if not impossible. In large data centers, equipment can be spread out, with plenty of empty space to encourage cool airflow. But in smaller enterprises, floor space is at a premium, Hunter notes.
Perhaps the most frustrating cooling challenge for data center managers has nothing to do with the room itself, or even the equipment, but with colleagues and executives who don’t quite understand why the room needs such a nip in the air or why there’s so much "unused" space. "Build it, and it will get filled," says Hunter. "End users, CEOs, and even system administrators see that empty space and ask why there can’t be more equipment or why it shouldn’t be used for storage."
The data center is often employed to store paper or even used as a printer room, which Hunter says is one of the most detrimental situations for cooling. "Printers are incredibly dirty machines, considering the toner and paper that get into the airstream," he says. "Those paper fragments get into computer boards and make a heck of an insulator. All the heat you’re trying to push out will stay in and cook your machines."
With all the emphasis on making the data center air crisp and cool, it might be tempting to simply push the thermostat needle down by a few degrees, but that’s not always the best strategy, says Rick Sawyer, director of data center technology at AFCOM, an association for data center professionals. "What you need is a stable environment," he notes. "If you can have a data center that’s stable at 74 degrees vs. one that’s unstable at 65 degrees, go with the higher temperature."
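One hedged way to put a number on Sawyer’s advice is to judge a room by the spread of its temperature readings over time, not just the setpoint. The short sketch below compares two rooms by mean and standard deviation; the readings are invented for illustration.

```python
# Hypothetical sketch: comparing rooms by temperature stability, not setpoint.
# All readings are invented examples, in degrees Fahrenheit.
from statistics import mean, stdev

stable_at_74 = [74.0, 74.2, 73.9, 74.1, 74.0, 73.8, 74.1]
unstable_at_65 = [65.0, 61.5, 69.0, 63.2, 70.4, 62.1, 68.8]

for name, readings in (("stable at 74", stable_at_74),
                       ("unstable at 65", unstable_at_65)):
    print(f"{name}: mean {mean(readings):.1f} F, "
          f"swing (stdev) {stdev(readings):.2f} F")
```

By Sawyer’s reasoning, the first room is the better environment despite its higher average temperature.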
In general, one of the first big steps toward proper cooling is simply acknowledging the depth and importance of the issue, says Sawyer. "The layout of data centers has changed, and equipment has changed," he says. "That means companies of every size need to be more realistic about what’s happening and to look at their cooling as critical, not just one more task on the list."
Strategies For Keeping Your Cool
- Factor in cooling costs when buying equipment.
- Don’t buy more machines than you need or run machines that are not necessary.
- Explain to executives and staff why there can’t be any office supplies or other materials stored in the data center.
- Implement temperature controls that issue alerts by email or pager so you can keep on top of temperatures over weekends and holidays (a minimal alerting sketch follows this list).
- Do not put air-dirtying machines, especially printers, in the data center.
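As a minimal sketch of that kind of alerting, the Python below polls a sensor and sends an email when a threshold is crossed. Everything here is an assumption for illustration: read_temp_f() is a stand-in for whatever sensor API is actually in use, the addresses are placeholders, and a local mail relay is assumed.

```python
# Minimal sketch of threshold alerting. read_temp_f() is a stand-in for a real
# sensor call; the addresses, threshold, and mail relay are all placeholders.
import smtplib
import time
from email.message import EmailMessage

ALERT_THRESHOLD_F = 78.0          # hypothetical ceiling for this room
CHECK_INTERVAL_S = 300            # poll every five minutes
ALERT_TO = "oncall@example.com"   # placeholder address

def read_temp_f() -> float:
    """Replace with the actual sensor hardware's API."""
    raise NotImplementedError

def send_alert(temp_f: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Data center temp alert: {temp_f:.1f} F"
    msg["From"] = "dc-monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(f"Room temperature reached {temp_f:.1f} F "
                    f"(threshold {ALERT_THRESHOLD_F:.1f} F).")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

while True:
    temp = read_temp_f()
    if temp > ALERT_THRESHOLD_F:
        send_alert(temp)
    time.sleep(CHECK_INTERVAL_S)
```

A pager or SMS gateway could stand in for the email step; the poll-and-threshold structure stays the same either way.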
Tips For Evaluating Cooling Needs
- Don’t rely on nameplate info: Get the thermal reports that companies such as EMC and Cisco compile for their products.
- Check out ASHRAE’s (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) books on data center cooling, especially "Thermal Guidelines for Data Processing Environments" and "Datacom Power Trends and Cooling Applications."
- Raise the temperature a few degrees and see whether it makes a difference; sometimes cooling systems become more efficient when the overall room temp is slightly higher rather than lower.