Your guide to avoiding server rack setup issues

Racks are meant to be standard, but IT equipment dimensions vary, and the volume of cabling needed to power and network modern servers, switches and storage causes unexpected deployment problems.

Most data center equipment is installed in racks, which come in an array of standard size increments measured in rack units, or Us. Racks also accommodate a range of air handling, cable management and accessibility features, such as removable doors or sliding rails.

When you rack servers, check the dimensions and physical interoperability of the rack, rail assembly, servers and other IT equipment that you plan to deploy. Perform due diligence and confirm the fit before you start any server rack setup.

Hidden problems in server racking

A rack typically accommodates servers that are 19 inches wide. Height is some multiple of 1.75 inches, the standard rack unit. A 42U rack therefore provides an opening 19 inches wide and 73.5 inches high.

A 19-inch rack will fit any 19-inch server, rack switch, power distribution unit (PDU) or uninterruptible power supply (UPS). Add up the vertical height of the planned gear and make sure the rack opening is high enough to accommodate all of it.

If you must deploy 14 new 2U servers, a 4U PDU and an 8U UPS, you need a 19-inch rack at least 40U high. Taller racks are fine, especially if there's a chance you'll reconfigure the rack later.
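
To make the arithmetic concrete, here is a minimal sketch that totals the rack units a planned equipment list needs and compares the total with the rack opening. The equipment list matches the scenario above, and the 42U rack height is an example value, not a recommendation.

```python
# Minimal sketch: total the rack units (U) of planned gear and compare the
# result against the rack opening. The equipment list and rack height below
# are example values, not recommendations.

RACK_UNIT_IN = 1.75  # standard rack unit height in inches

planned_gear = [
    # (description, height in U, quantity)
    ("2U server", 2, 14),
    ("4U PDU", 4, 1),
    ("8U UPS", 8, 1),
]

rack_height_u = 42  # opening of the rack you plan to buy or reuse

total_u = sum(height * qty for _, height, qty in planned_gear)
print(f"Planned gear: {total_u}U ({total_u * RACK_UNIT_IN:.1f} inches)")
print(f"Rack opening: {rack_height_u}U")
print("Fits" if total_u <= rack_height_u else "Does not fit")
```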

The biggest problem with server racking is depth. Server vendors offer models that range from 28.5 inches to 29.125 inches deep, and newer servers may not slide all the way into older racks. You may have to replace older racks with deeper models to safely accommodate newer gear.

Some servers fit but leave no room for power and network cabling after server rack setup. The problem multiplies when you consider that a 42U rack has room for 42 1U components, each with its own power and network cables. Without enough clearance, the mass of wiring can obstruct airflow or the rack's rear door.

Most server vendors offer a variety of power cord options for rack server models, or you can buy low-profile power cords from third-party vendors to free up space.
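
Before you order racks, it can help to run the same kind of check for depth. The sketch below is a minimal illustration that compares a server's chassis depth plus an assumed rear cabling allowance against the rack's usable depth; the specific depths and the 6-inch clearance figure are placeholder assumptions, not vendor specifications.

```python
# Minimal sketch: check whether server depth plus a rear cabling allowance
# fits within a rack's usable depth. Every figure here is a placeholder
# assumption; use the depths published by your rack and server vendors.

rack_usable_depth_in = 36.0   # usable mounting depth of the rack
server_depth_in = 29.125      # deepest planned server chassis
rear_clearance_in = 6.0       # assumed allowance for power and network cabling

required_depth_in = server_depth_in + rear_clearance_in
shortfall_in = required_depth_in - rack_usable_depth_in

if shortfall_in <= 0:
    print(f"OK: {required_depth_in:.2f} in needed, "
          f"{rack_usable_depth_in:.2f} in available")
else:
    print(f"Too tight by {shortfall_in:.2f} in: consider a deeper rack "
          f"or low-profile power cords")
```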

Two plus two

A two-post rack is a pair of vertical rails; each server or piece of equipment attaches to the rails, typically at its midpoint. Two-post racks suit small deployments that need easy access and don't require airflow containment.

A four-post rack is a four-corner box or cabinet. Each server or device installs along horizontal rails, secured with screws or quick-disconnect latches through holes in the front panel. Four-post racks enclose IT gear to add equipment security, protect power and network cabling and shroud airflow for hot/cold containment.

Cable management arms keep power and network cables in mechanical trays behind each rack server. Arms neaten up the rack, but they can make cable location, troubleshooting or replacement more difficult. Look for smaller or low-profile arms from the rack vendor, reduce the server count in the rack to free up space, or replace the entire rack with a deeper model for adequate rear clearance.

Cable bundles pose serious problems for technicians. It can be almost impossible to accurately follow a given cable within a cable trunk, which leads to cabling mistakes, accidental cable disconnections and wasted IT staff time. Proper, well-planned cable markings and the prudent use of cross-connect panels simplify cable troubleshooting and changes without unnecessary work within the rack.
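
One simple way to keep markings consistent is to generate both ends of every cable label from a single patch list. The sketch below assumes a hypothetical rack/U/port naming convention; adapt it to whatever documentation standard your team already follows.

```python
# Minimal sketch: generate matching labels for both ends of each cable run
# from a patch list. The rack/U/port naming convention is a hypothetical
# example; adapt it to your own documentation standard.

patches = [
    # (from_rack, from_u, from_port, to_rack, to_u, to_port)
    ("R01", 36, "eth0", "R01", 42, "sw1/12"),
    ("R01", 34, "eth0", "R01", 42, "sw1/13"),
]

def label(rack, u, port):
    """Build a label such as R01-U36-eth0."""
    return f"{rack}-U{u:02d}-{port}"

for from_rack, from_u, from_port, to_rack, to_u, to_port in patches:
    near = label(from_rack, from_u, from_port)
    far = label(to_rack, to_u, to_port)
    # Print the same pairing for both ends so either end of the cable
    # identifies the far side of the run.
    print(f"{near} <-> {far}")
```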

You should also plan and execute the cable trunk with care to avoid obstructing ventilation and cooling airflow through the rack. Even a large, well-organized, properly marked cable trunk can pose a serious problem when it's run along a rack of servers directly behind all the servers' exhaust fans.

Rack width, height and depth vary between vendors, so confirm how those dimensions fit within your existing floor plan. Oversized racks can disrupt aisle containment enclosures or shift other racks and airflow ductwork. Subtle size differences may not seem significant, but any increase in rack size can wreak havoc during server rack setup in a tightly configured data center layout.

Address server mounts for server rack setup

Racks have a series of holes to mount servers, switches and other equipment. However, rack manufacturers do not follow a single standard mounting hole type, and the mix of threaded, unthreaded and square mounting holes can cause problems.

Racks with threaded mounting holes use relatively thick metal posts tapped for common screw sizes, including 10-32, 12-24 or metric M6 threads. Deep, heavy boxes without rails are difficult to mount on threaded holes, though relatively small boxes are easier to handle.

Because threaded holes can strip and require thicker metal for threading, many rack manufacturers use unthreaded holes instead. This works for equipment that relies primarily on rails or other mechanical elements for support: the box slides in on a rail that holds it up, and the server can easily slide out again for inspection or service. If you need to secure equipment with screws in unthreaded holes, use clip nuts.

Screwing each server to the rack is cumbersome and adds time to service tasks. Most modern racks use an array of square holes with locking slides so technicians can quickly add or move rails. If the system owner wants to fix the servers into place with screws, square cage nuts can be snapped into the square holes.

Alternatives to screws

Use caution when you select and install server rail kits. Some accommodate multiple hole types in the same assembly and require rotating portions of the rail. Rails and racks are not universally interchangeable, and such multipurpose rail assemblies sometimes encounter obstructions. Check the rail length; a rail assembly can be too long for the rack, especially an older, shallow one.

You may also find a variety of tool-less rack options designed to mount gear to a rack structure without screws. Tool-less racks and servers provide faster mounting and setup and let you relocate gear within racks far more quickly than traditional permanent mounting schemes.

One option for server rack setup is the button mount, which allows gear to be mounted on the sides of a rack. Button mounting points are usually found along the rack's vertical supports. Button mounts can support servers, but this scheme is most often used as a tool-less method for mounting lighter devices, such as PDUs and vertical cable management troughs.

Other emerging rack options include the Open Rack developed through Facebook's Open Compute Project. Open Rack is based on the traditional 19-inch standard rack footprint, but it accommodates wider equipment and centralized power distribution.

Ultimately, never assume that racks and IT gear are automatically compatible. Check dimensions and review the data center layout and test-fit new gear before you buy.

Vibration: A silent killer

Some racking problems are related to the physical location of the racks within the building or server room. Two rack factors that are often overlooked are floor load and vibration. Racks and their installed gear can produce considerable floor load, and data centers are specifically designed with reinforced floors to support that weight.
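
A quick back-of-the-envelope estimate can flag floor load problems early. The sketch below divides an assumed loaded rack weight by the rack footprint and compares the result with an assumed floor rating; all of the figures are hypothetical, so substitute real vendor weights and the rating from your facilities or structural engineering team.

```python
# Minimal sketch: estimate the static floor loading of a fully loaded rack
# and compare it with the floor's rated capacity. All weights and ratings
# are hypothetical; get real figures from vendors and facilities staff.

rack_empty_lb = 300.0              # empty rack weight
installed_gear_lb = 1500.0         # total weight of installed equipment
footprint_sqft = (24 / 12) * (42 / 12)   # 24 in x 42 in rack footprint

floor_rating_lb_per_sqft = 250.0   # assumed floor load rating

loading = (rack_empty_lb + installed_gear_lb) / footprint_sqft
print(f"Estimated floor loading: {loading:.0f} lb per sq ft")
print("Within rating" if loading <= floor_rating_lb_per_sqft
      else "Exceeds rating: reinforce the floor or spread out the load")
```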

Vibration can cause premature faults in magnetic disk drives, which leads to bit errors and even outright drive failures. Simple oversights, such as inadequate floor design or placing racks too close to high-vibration areas like loading docks, machinery rooms and hallways, can lead to server problems.

Account for cooling systems

Finally, racks face a new generation of power and cooling challenges. Racks provide for an even distribution of physical hardware and are designed to support a relatively even flow of cooled air to computing devices.

The power that hardware consumes, and the cooling required to handle the resulting heat, varies widely. A common 1U white box server with a single CPU can require far less power and cooling than a super-high-density compute system.

Packing gear with a diverse assortment of power and cooling demands into the same rack may cause hot spots that are impossible to adequately cool, which leads to premature system faults and costly troubleshooting.

As you deploy a wider array of server and storage subsystems, it's important to consider the power density and cooling implications that arise within the rack and during server rack setup. The racks themselves cannot solve power and cooling problems; you may need to adjust the equipment distribution within racks to mitigate hot spots or install supplemental cooling gear.
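
A rough power and heat budget per rack helps spot likely hot spots before deployment. The sketch below totals assumed equipment wattages, converts the draw to a heat load in BTU per hour and compares it with an assumed per-rack cooling budget; every figure is a placeholder, not a measurement.

```python
# Minimal sketch: total a rack's estimated power draw, convert it to a heat
# load and compare with the per-rack cooling budget. Wattages and the cooling
# figure are hypothetical examples, not measurements.

WATTS_TO_BTU_PER_HR = 3.412   # 1 W of IT load produces roughly 3.412 BTU/hr of heat

rack_contents = [
    # (description, watts per unit, quantity)
    ("1U web server", 350, 10),
    ("2U GPU node", 1200, 4),
    ("Top-of-rack switch", 150, 2),
]

cooling_budget_btu_hr = 24_000   # assumed per-rack cooling capacity

total_w = sum(watts * qty for _, watts, qty in rack_contents)
heat_btu_hr = total_w * WATTS_TO_BTU_PER_HR

print(f"Estimated draw: {total_w} W (~{heat_btu_hr:,.0f} BTU/hr)")
print("Within cooling budget" if heat_btu_hr <= cooling_budget_btu_hr
      else "Likely hot spot: rebalance gear across racks or add supplemental cooling")
```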