This question is not asked often enough by data center designers, owners, or managers as they build out new whitespace. Cabinets are the foundation of the data center’s physical infrastructure, used throughout the life cycle of the facility. The IT equipment that runs applications is contained within them, the cabling that connects that equipment to users and to the LAN/SAN is terminated and managed in them, power is distributed within them, and cooling is channeled through them. They are also the most visible infrastructure element, and how they look and fit together is often an indicator of how well a data center is run and managed.
Why then are they frequently taken for granted, simply considered “big metal boxes”? Why isn’t there more emphasis on cabinets being considered an asset that helps reduce operational costs?
In Part 1 of “Adding New Physical Infrastructure” I reviewed three typical approaches taken by managers of small and mid-sized data centers to add new physical infrastructure: (1) build-it-yourself using in-house resources to design and integrate all elements of the infrastructure, (2) rely on a single supplier for design and integration, or (3) entrust multiple best-of-breed vendors to get it done.
We have a different take. As discussed in Part 1, you are likely to face significant risk and expense as you attempt to manage a wide range of technical details, complex project management issues, and multiple vendor relationships. Leveraging physical infrastructure expertise and partnerships with best-of-breed power and cooling suppliers, Panduit offers an Integrated Infrastructure approach that combines the benefits of both the single-source and best-of-breed approaches with the ease of managing a single supplier.
How do you build out a new data center physical infrastructure?
Under the best of circumstances, building out new data center capacity is complex, expensive, time-consuming, and fraught with risk. Experts, engineers, and consultants are needed for everything from designing the building shell and planning power and cooling systems to commissioning. These are just the major categories. Think about the expertise needed to manage all the details that cascade from them!
If you are responsible for a small to mid-sized data center, you may be faced with doing more of this yourself, given limited resources. Increased complexity makes it difficult to find and retain people who possess all the essential skills needed to design and integrate the power, cooling, racks, cabling, and other components necessary to complete the build correctly and on time. Taking on coordination of the build-out in addition to normal responsibilities can be overwhelming.
I recently had the opportunity to discuss with a salesperson an application for a retrofit containment system installed in an existing data center. It is not an uncommon story, given how effectively separating cold and hot air streams in the data center reduces cooling energy consumption. The part of the story that stood out for me was the salesperson enthusiastically relating how the end user realized an instant payback on the containment system and had money left over. It sounded too good to be true. My first thought was: just how badly is this data center being operated if retrofitting a containment system yields an instant payback and still leaves money over?
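To put that claim in perspective, here is a minimal sketch of a simple payback calculation. The installed cost, energy savings, and electricity price below are hypothetical figures chosen only to show why an “instant” payback implies enormous pre-existing cooling waste.

```python
# Simple payback sketch for a containment retrofit.
# All figures below are hypothetical, for illustration only.

installed_cost = 25_000.00     # containment hardware + labor (USD)
cooling_kw_saved = 15.0        # reduction in cooling power draw (kW)
energy_price = 0.12            # electricity price (USD per kWh)
hours_per_year = 8_760         # continuous operation

annual_savings = cooling_kw_saved * energy_price * hours_per_year
payback_years = installed_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
# An "instant" payback would require annual_savings to dwarf
# installed_cost, implying cooling waste far beyond these figures.
```

Even with generous assumptions, a healthy facility should see a payback measured in years, not days, which is what made the story so striking.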
When it comes to running an efficient operation, small data centers have many of the same concerns and challenges as their larger counterparts. One of the greatest challenges that managers of small data centers have is that they typically have limited resources in terms of technology, staffing, and financial support.
This can leave a small data center more vulnerable to inefficiencies, inflexible to growth, and prone to system failures. One example we run into on a regular basis occurs when the manager of a legacy data center needs to obtain power consumption and environmental data, prompted by a cost reduction initiative or by difficulty finding capacity for new applications. This typically occurs in data centers that are older, may have between 20 and 30 racks, and have grown, despite best intentions, in unintended ways.
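Where the installed rack PDUs support SNMP, even a small script can begin surfacing power data without a full DCIM deployment. The sketch below uses the pysnmp library’s synchronous API; the IP address, community string, and OID are hypothetical placeholders, since power-reading OIDs vary by PDU vendor.

```python
# Minimal SNMP poll of a rack PDU for its power draw.
# Requires: pip install pysnmp (synchronous hlapi, pysnmp 4.x)
# The address, community string, and OID below are placeholders;
# consult your PDU vendor's MIB for the actual power OID.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

PDU_ADDRESS = "10.0.0.50"                    # hypothetical PDU IP
POWER_OID = "1.3.6.1.4.1.99999.1.1.0"        # hypothetical vendor OID

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),  # SNMP v2c
        UdpTransportTarget((PDU_ADDRESS, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(POWER_OID)),
    )
)

if error_indication:
    print(f"Poll failed: {error_indication}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")            # e.g., watts for this PDU
```

Run on a schedule against every metered PDU, even this simple approach starts building the consumption baseline that a cost reduction initiative needs.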
This year’s Cisco Live, being held at the Moscone Center in San Francisco, promises to be another exciting event. As a Platinum Sponsor, Panduit will be exhibiting in booth #1521 and will be featuring our Intelligent Data Center Solutions, Enterprise Solutions and Industrial Automation Solutions.
We are particularly excited about Cisco’s Application Centric Infrastructure (ACI) architecture, which promises to deliver fast application provisioning and simplified operations. ACI networks will be built upon a flatter two-tier network architecture that requires some new ways of thinking about how an optimal physical infrastructure should be built. Panduit has been working with Cisco to understand the differences between traditional three-tier physical architectures and the ACI architecture, and will be presenting “ACI Impact on Physical Infrastructure Design and Deployment” in the general session on Tuesday, May 20th at 2:00 p.m. PDT. Examples of cabinets configured with Spine/Leaf network topologies, including Top of Rack (ToR), End of Row (EoR), and Middle of Row (MoR) switch placements, will also be on display.
Whether it is power, space, or cooling, stranded capacity can strangle your data center’s efficiency, blow up your budget, and put the brakes on deploying new applications. We have encountered many approaches to freeing stranded capacity, ranging from the expensive (redeploying or reconfiguring devices, or adding power or cooling capacity in an operational data center) to those requiring lower investment (additional perforated floor tiles, fans, or “meat locker” curtains that help improve cooling capacity utilization).
Frequently, we are asked to help reclaim stranded data center capacity. One approach that is relatively low risk and economical is to improve the utilization of existing cooling capacity. Installing blanking panels and sealing gaps in the raised floor is typically our first recommendation. While fast, simple, and inexpensive to implement, this first step alone may not provide the level of separation needed to concentrate cooling air for higher densities. The next step is hot or cold aisle containment.
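As a rough way to reason about how much cooling capacity is stranded, consider what fraction of supplied airflow actually reaches IT intakes rather than bypassing them through unsealed openings. The airflow figures in this sketch are hypothetical; real values would come from airflow measurement or CFD analysis.

```python
# Rough cooling-capacity utilization sketch.
# All airflow figures are hypothetical, for illustration only.

crac_supply_cfm = 40_000.0   # total airflow delivered by CRAC units
bypass_cfm = 12_000.0        # air lost via cable cutouts, floor gaps,
                             # and missing blanking panels
it_demand_cfm = 26_000.0     # airflow the IT equipment actually draws

useful_cfm = crac_supply_cfm - bypass_cfm
utilization = useful_cfm / crac_supply_cfm
headroom_cfm = useful_cfm - it_demand_cfm

print(f"Cooling airflow utilization: {utilization:.0%}")
print(f"Headroom for new equipment: {headroom_cfm:,.0f} CFM")
# Sealing openings shrinks bypass_cfm, reclaiming stranded cooling
# capacity before any containment investment is made.
```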
When developing a new networking standard, several attributes need to be balanced to optimize its implementation. For 40GBASE-T, the task force developing the standard (IEEE P802.3bq) appears to have settled on a reach of 30 meters. This is a tradeoff between the power dissipation of the silicon physical layer (PHY) IC driving the cable, the complexity of the PHY (which affects cost), the implementation of the channel, and the reach of the link.
The question is: Is 30 meters long enough? Let’s take a look.
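One way to frame the question is to estimate the worst-case channel length for a given row layout. The geometry below is a hypothetical example (cabinet dimensions, pathway routing, and slack allowances vary widely from site to site); it simply shows the kind of arithmetic involved in deciding whether 30 meters covers an End of Row design.

```python
# Worst-case copper channel length to an End of Row (EoR) switch.
# All dimensions are hypothetical, for illustration only.

racks_in_row = 12
rack_width_m = 0.6         # typical cabinet width
vertical_rise_m = 2.7 * 2  # up to overhead pathway and back down
patching_slack_m = 3.0     # service loops and patch cords at both ends

# The farthest cabinet sits at the opposite end of the row
# from the EoR switch cabinet.
horizontal_m = racks_in_row * rack_width_m
worst_case_m = horizontal_m + vertical_rise_m + patching_slack_m

print(f"Worst-case channel length: {worst_case_m:.1f} m")
print("Within 30 m reach" if worst_case_m <= 30.0
      else "Exceeds 30 m reach")
```

Under these assumptions the worst-case run comes in well under 30 meters, which suggests the proposed reach can cover common in-row and End of Row layouts; longer cross-room runs are another matter.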
The devil is in the details. This is true for many endeavors, particularly when building out a data center’s physical infrastructure. Given the scope and investment of the entire data center project, the physical infrastructure can seem relatively minor. Missing some important details, however, can have a significant impact on installation schedules, and on your job: who wants to explain why a new service or application is delayed because a minor component doesn’t fit right or didn’t arrive on schedule? Missing details can also impact network performance when work-arounds, done for the sake of expediency, lead to operational problems, or worse, after the data center has been commissioned.
Multiple analyst studies and research reports indicate that data centers are often overprovisioned with power and cooling capacity to maintain service levels regardless of actual IT equipment utilization. As you are well aware, this approach has proven to be expensive and inefficient. As data center energy consumption grows, it is drawing the attention of CFOs and corporate responsibility managers who are concerned with the impact of the data center’s operation on the environment and, of course, on the bottom line. So how can you improve your data center’s efficiency?
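A natural starting point is measuring Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. The meter readings in this sketch are hypothetical; in practice they would come from the utility meter and from metered UPS or PDU output.

```python
# Power Usage Effectiveness (PUE) sketch.
# PUE = total facility energy / IT equipment energy (ideal = 1.0).
# The readings below are hypothetical, for illustration only.

total_facility_kwh = 185_000.0  # monthly utility meter reading
it_equipment_kwh = 100_000.0    # sum of metered UPS/PDU output

pue = total_facility_kwh / it_equipment_kwh
overhead_kwh = total_facility_kwh - it_equipment_kwh

print(f"PUE: {pue:.2f}")
print(f"Overhead (cooling, power losses, lighting): "
      f"{overhead_kwh:,.0f} kWh")
# Tracking PUE over time shows whether efficiency measures such as
# containment or raised set points are actually paying off.
```

Tracked consistently, this single number gives the CFO and the facilities team a shared yardstick for every efficiency investment that follows.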