Whether it is power, space, or cooling, stranded capacity can strangle your data center’s efficiency, blow up your budget, and put the brakes on deploying new applications. We have encountered many approaches to freeing stranded capacity, ranging from expensive ones, such as redeploying or reconfiguring devices or adding power or cooling capacity to an operational data center, to lower-investment measures, such as additional perforated floor tiles, fans, or “meat locker” curtains that help improve cooling capacity utilization.
Frequently, we are asked to help reclaim stranded data center capacity. One approach that is relatively low risk and economical is to improve the utilization of existing cooling capacity. Installing blanking panels and sealing gaps in the raised floor is typically our first recommendation. It is fast, simple, and inexpensive to implement, but it may not provide the level of separation needed to concentrate cooling air for higher densities. The next step is hot or cold aisle containment.
When developing a new networking standard, several attributes need to be balanced to optimize its implementation. To optimize the implementation of 40GBASE-T, the task force developing the standard (IEEE P802.3bq) appears to have settled on a reach of 30 meters. This is a tradeoff among the power dissipation of the silicon physical layer (PHY) IC driving the cable, the complexity (and therefore cost) of the PHY, the implementation of the channel, and the reach of the link.
The question is: Is 30 meters long enough? Let’s take a look.
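One way to frame the question is to add up the pieces of a typical channel (horizontal run, patch cords, vertical drops) for common switch placements and compare the total against 30 meters. The distances and allowances below are illustrative assumptions, not values from the standard or from any particular facility:

```python
# Rough check of whether a 30 m reach covers common cabling topologies.
# All distances below are illustrative assumptions, not standard values.

def channel_length(horizontal_m: float, patch_cords_m: float = 5.0,
                   vertical_drops_m: float = 4.0) -> float:
    """Total channel length: horizontal run plus assumed allowances
    for patch cords and vertical drops at each end."""
    return horizontal_m + patch_cords_m + vertical_drops_m

# Assumed horizontal runs for three switch placements
topologies = {
    "Top-of-Rack": 2.0,     # switch in the same cabinet
    "Middle-of-Row": 12.0,  # switch a few cabinets away, mid-row
    "End-of-Row": 20.0,     # switch at the end of the row
}

REACH_M = 30.0  # reach targeted by IEEE P802.3bq for 40GBASE-T

for name, run in topologies.items():
    total = channel_length(run)
    verdict = "fits" if total <= REACH_M else "exceeds reach"
    print(f"{name}: {total:.0f} m channel -> {verdict}")
```

Under these assumed distances all three placements fit within 30 meters, but longer rows or generous slack loops can erode that margin quickly, which is exactly the tradeoff the task force had to weigh.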
A converged fabric based on Fibre Channel over Ethernet (FCoE) helps data center architects and managers reduce CAPEX and OPEX while simplifying the network infrastructure. Until recently, however, one thing hindered the adoption of FCoE: 10GBASE-T.
Historically, deploying FCoE on the links between servers and aggregation switches meant that one had to use optical fiber or Direct Attach Copper (DAC) cable assemblies. The first generation of aggregation switches that supported 10GBASE-T did not support FCoE, and neither did 10GBASE-T Ethernet server adapters; FCoE was available only with Converged Network Adapters (CNAs) that supported the SFP+ form factor. That meant one could implement ToR architectures with FCoE using DAC cable assemblies, or other architectures using optical fiber for longer distances.
The devil is in the details. This is true for many endeavors, particularly when building out a data center’s physical infrastructure. Given the scope and investment of the entire data center project, the physical infrastructure can seem relatively minor. Missing some important details, however, can have a significant impact on installation schedules, and on your job: who wants to explain why a new service or application is delayed because a minor component doesn’t fit right or didn’t arrive on schedule? Missing details can also impact network performance when work-arounds, done for the sake of expediency, lead to operational problems, or worse, after the data center has been commissioned.
Multiple analyst and research reports indicate that data centers are often overprovisioned with power and cooling capacity to maintain service levels regardless of actual IT equipment utilization. As you are well aware, this approach has proven expensive and inefficient. As data center energy consumption grows, it is drawing the attention of CFOs and corporate responsibility managers concerned with the impact of the data center’s operation on the environment and, of course, on the bottom line. So how can you improve your data center’s efficiency?
New research from Data Center Dynamics indicates that growth in global data center energy consumption slowed to 7% in 2013, compared with 19% between 2011 and 2012. The slowdown is attributed to energy efficiency measures, consolidation projects, and outsourcing, primarily in mature markets.
So, does this mean data center managers and operators can breathe a sigh of relief? Not necessarily. Once energy efficiency improvement goals have been attained, how do you maintain that level of efficiency over the lifecycle of your data center?
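One common way to answer that question is to track a simple efficiency metric continuously rather than measuring it once. Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy, is the usual choice. The sketch below computes PUE from monthly readings; the sample numbers are invented for illustration:

```python
# Minimal sketch of tracking Power Usage Effectiveness (PUE) over time.
# PUE = total facility energy / IT equipment energy (lower is better;
# 1.0 is the theoretical ideal). Sample readings are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Compute PUE for one measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly energy readings in kWh: (total facility, IT)
readings = {
    "Jan": (180_000, 100_000),
    "Feb": (171_000, 100_000),
    "Mar": (160_000, 100_000),
}

for month, (total, it) in readings.items():
    print(f"{month}: PUE = {pue(total, it):.2f}")
```

Trending the value month over month, instead of reporting a single snapshot, is what lets you notice when seasonal loads or creeping overhead start eroding an efficiency gain you already paid for.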
In early November, Cisco launched its Application Centric Infrastructure (ACI). ACI includes a new line of Nexus 9000 series switches, a new version of NX-OS and a policy controller called Application Policy Infrastructure Controller (APIC). We at Panduit were proud to be a part of the launch.
As a part of that launch, Cisco announced a new technology for deploying 40G Ethernet that has so far received little attention. Cisco calls that technology BiDi.
We frequently work with data center managers who need help optimizing data center white space. They are fully aware that data centers are among the costliest facilities to build and operate. Excluding power and cooling, Gartner estimates that the space a single server cabinet occupies costs $4,900 per year. This figure is based on the annualized cost of the building structure itself, racks, building maintenance, 24/7 security, staffing, property taxes, and so on.
Dynamic, virtualized workloads, the need to provision new applications quickly, and a lack of insight into available power and cooling lead to overprovisioning of power and cooling, which results in higher operating costs. Space, by contrast, is often underprovisioned: average cabinet space utilization runs between 50% and 65%, an obvious visual indicator that improvements are needed. Historically, however, deploying additional capacity has been complicated, expensive, and time consuming.
Factor in the time and effort associated with justifying capital for a data center expansion, and optimizing the existing space becomes an attractive alternative. Increasing cabinet density can be like venturing into unknown space. Here are some considerations as you embark on your mission of optimizing data center white space:
I’d like to bend your ear a bit today on the topic of bend-insensitive multimode optical fiber (BIMMF). When BIMMF made its debut a few years ago, we urged end users to use caution if they were going to adopt it in their data centers.
Existing fiber specifications and measurement procedures were designed to accommodate neither the intentional improvements in BIMMF’s performance nor the unintentional side effects of the revised fiber design. As a result, BIMMF providers could differ in how they define BIMMF and how they measure its various parameters. Our concerns centered on whether differences in how the various providers measured numerical aperture, core diameter, and Differential Mode Delay (DMD) would cause compatibility issues if BIMMF were mixed with non-BIMMF.
Hello and welcome to the Intelligent Data Center Solutions blog. We look forward to exchanging ideas and insights that can help you focus on improving the design and performance of your data center’s physical infrastructure.