The devil is in the details. This is true of many endeavors, and particularly of building out a data center’s physical infrastructure. Given the scope and investment of the entire data center project, the physical infrastructure can seem relatively minor. Missing some important details, however, can have a significant impact on installation schedules, and on your job: who wants to explain why a new service or application is delayed because a minor component doesn’t fit right or didn’t arrive on schedule? Missing details can also impact network performance when workarounds, done for the sake of expediency, lead to operational problems, or worse, after the data center has been commissioned.
Statistics from multiple analysts and research reports indicate that data centers are often overprovisioned with power and cooling capacity to maintain service levels regardless of actual IT equipment utilization. As you are well aware, this approach has proven to be expensive and inefficient. As data center energy consumption grows, it is drawing the attention of CFOs and corporate responsibility managers who are concerned with the impact of the data center’s operation on the environment and, of course, on the bottom line. So how can you improve your data center’s efficiency?
New research from Data Center Dynamics indicates that growth in global data center energy consumption slowed to 7% in 2013, compared with 19% between 2011 and 2012. This reduction is attributed to energy efficiency measures, consolidation projects, and outsourcing, primarily in mature markets.
So, does this mean data center managers and operators can breathe a sigh of relief? Not necessarily. Once energy efficiency improvement goals have been attained, how do you maintain that level of efficiency over the lifecycle of your data center?
In early November, Cisco launched its Application Centric Infrastructure (ACI). ACI includes a new line of Nexus 9000 series switches, a new version of NX-OS and a policy controller called Application Policy Infrastructure Controller (APIC). We at Panduit were proud to be a part of the launch.
As part of that launch, Cisco announced a new technology for deploying 40G Ethernet that has, so far, received little attention. Cisco calls that technology BiDi.
We frequently work with data center managers who need help optimizing data center white space. They are fully aware that data centers are among the costliest facilities to build and operate. Excluding power and cooling, Gartner estimates that the space a single server cabinet occupies costs $4,900 per year. This figure is based on the annualized cost of the building structure itself, racks, building maintenance, 24/7 security and staffing, property taxes, and so on.
Dynamic, virtualized workloads, the need to provision new applications quickly, and a lack of insight into available power and cooling capacity lead to overprovisioning of power and cooling, which results in higher operating costs. Space, by contrast, is often underprovisioned, with average cabinet space utilization between 50% and 65%, an obvious visual indicator that improvements are needed. Historically, however, deploying additional capacity has been complicated, expensive, and time consuming.
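To see why under-utilized cabinet space is so costly, consider a rough back-of-the-envelope sketch. The $4,900 annual cost per cabinet is the Gartner estimate cited above; the 60% and 80% utilization figures are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope: effective cost of under-utilized cabinet space.
# The $4,900/year figure is the Gartner estimate cited above (building,
# racks, maintenance, security, taxes); utilization values are illustrative.

ANNUAL_COST_PER_CABINET = 4900  # USD per cabinet footprint per year

def effective_cost_per_utilized_cabinet(utilization: float) -> float:
    """Annual space cost spread over only the cabinet capacity actually in use."""
    return ANNUAL_COST_PER_CABINET / utilization

current = effective_cost_per_utilized_cabinet(0.60)   # typical low-end utilization
improved = effective_cost_per_utilized_cabinet(0.80)  # denser, optimized cabinets

print(f"Effective cost at 60% utilization: ${current:,.0f}")
print(f"Effective cost at 80% utilization: ${improved:,.0f}")
print(f"Difference per cabinet-equivalent: ${current - improved:,.0f}")
```

At 60% utilization, each cabinet-equivalent of actual IT capacity effectively costs over $8,000 per year of space alone; raising density to 80% recovers roughly $2,000 per cabinet-equivalent before any power or cooling savings are counted.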
Factor in the time and effort associated with justifying capital for a data center expansion, and optimizing the existing space becomes an attractive alternative. Increasing cabinet density can be like venturing into unknown space. Here are some considerations as you embark on your mission of optimizing data center white space:
I’d like to bend your ear a bit today on the topic of bend insensitive multimode optical fiber (BIMMF). When bend insensitive multimode fiber made its debut a few years ago, we urged end-users to use caution if they were going to adopt it in their data centers.
Because existing fiber standards and measurement procedures were not designed to accommodate the intentional improvements in BIMMF’s performance, nor the unintentional side effects of the revised fiber design, BIMMF providers could differ in how they define BIMMF and how they measure the fiber’s various parameters. Our concerns centered on whether differences in how the various providers measured numerical aperture, core diameter, and Differential Mode Delay (DMD) would cause compatibility issues if BIMMF were mixed with non-BIMMF fiber.
Hello and welcome to the Intelligent Data Center Solutions blog. We look forward to exchanging ideas and insights that can help you improve the design and performance of your data center’s physical infrastructure.