In Part 1 of “Adding New Physical Infrastructure” I reviewed three typical approaches taken by managers of small and mid-sized data centers to add new physical infrastructure: (1) build-it-yourself using in-house resources to design and integrate all elements of the infrastructure, (2) rely on a single supplier for design and integration, or (3) entrust multiple best-of-breed vendors to get it done.
We have a different take. As discussed in Part 1, you are likely to face significant risk and expense as you attempt to manage a wide range of technical details, complex project management issues, and multiple vendor relationships. By leveraging its physical infrastructure expertise and its partnerships with best-of-breed power and cooling suppliers, Panduit offers an Integrated Infrastructure approach that combines the benefits of both the single-source and best-of-breed approaches with the ease of managing a single supplier.
How do you build out a new data center physical infrastructure?
Under the best of circumstances, building out new data center capacity is complex, expensive, time-consuming, and fraught with risk. Experts, engineers, and consultants are needed for everything from designing the building shell and planning power and cooling systems to commissioning. And these are just the major categories. Think about the expertise needed to manage all the details that cascade from them!
If you are responsible for a small to mid-sized data center, you may be faced with doing more of this work yourself, given the resources available. Increasing complexity makes it difficult to find and retain people who possess all the essential skills needed to design and integrate the power, cooling, racks, cabling, and other components necessary to complete the build correctly and on time. Taking on coordination of the build-out in addition to normal responsibilities can be overwhelming.
The other day I was participating in a conversation with a customer about LAN and SAN speeds greater than 10G. It was a good conversation, and the customer had numerous questions: migrating to 40G Ethernet, what is happening with 100G Ethernet, using multiple fibers for Fibre Channel, and so on.
Toward the end of the conversation I asked them about their plans regarding deploying 40G Ethernet. They replied that they had no immediate plans for deploying 40G and that the reason they wanted to talk about it was to make sure that their LAN infrastructure could support it in the future. They plan on deploying 10G Ethernet in the new data center.
That revelation hit me with the same impact as participating in an ice bucket challenge.
I recently had the opportunity to discuss with a salesperson an application for a retrofit containment system installed in an existing data center. Not an uncommon story, given the effectiveness of separating cold and hot air streams in the data center to reduce cooling energy consumption. The part of the story that stood out for me was that the salesperson enthusiastically related how the end user realized an instant payback on the containment system and had money left over. It sounded too good to be true. My first thought: just how badly is this data center being operated that retrofitting a containment system would yield an instant payback and still leave money over?
There is something lurking about in today’s data centers that is not mentioned in polite company and, quite frankly, is ignored. Although it will not go away, one hopes that it will not rise up and wreak havoc, bringing the enterprise to a halt.
One of the frequent questions we hear from our customers has to do with choosing the right media type for their data center. On the surface, it would seem the answer is obvious: use copper between the servers and first tier of switches and use optical fiber everywhere else. Although you might find yourself nodding in agreement, that answer does not really address the real question.
The real question is: what is the right media type for maximizing what is important to you, or minimizing what is costing you?
Let’s take a look at just one of the factors you might consider when looking at the various media types: latency.
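To put rough numbers on latency, here is a back-of-the-envelope sketch. The link length, frame size, and velocity figures below are illustrative assumptions (typical twisted-pair NVP of ~0.66 and a multimode fiber index of ~1.48), not specifications for any particular cable:

```python
# Back-of-the-envelope link latency: propagation vs. serialization.
# Velocity factors and link length are assumed typical values, not vendor specs.

C = 299_792_458  # speed of light in vacuum, m/s

def propagation_ns(length_m: float, velocity_factor: float) -> float:
    """One-way propagation delay through the medium, in nanoseconds."""
    return length_m / (C * velocity_factor) * 1e9

def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / link_gbps  # bits / (Gbit/s) yields ns

LINK_M = 30                # assumed in-row link length
COPPER_NVP = 0.66          # typical twisted-pair nominal velocity of propagation
FIBER_VF = 1 / 1.48        # ~0.68, from a typical multimode core refractive index

for name, vf in (("copper", COPPER_NVP), ("fiber", FIBER_VF)):
    prop = propagation_ns(LINK_M, vf)
    total = prop + serialization_ns(1500, 10)
    print(f"{name}: propagation {prop:.0f} ns, "
          f"total with a 1500 B frame at 10G {total:.0f} ns")
```

Two things stand out: over a short link, the propagation difference between copper and fiber is only a few nanoseconds, and serialization of a full-size frame (1.2 µs at 10G) dwarfs both. In practice the larger media-dependent contributor tends to be the electronics, such as PHY and transceiver processing, which is one reason the media question deserves more than a surface answer.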
When it comes to running an efficient operation, small data centers have many of the same concerns and challenges as their larger counterparts. One of the greatest challenges that managers of small data centers have is that they typically have limited resources in terms of technology, staffing, and financial support.
This can leave a small data center more vulnerable to inefficiency, inflexibility for growth, and the potential for system failures. One example we run into on a regular basis occurs when the manager of a legacy data center needs to obtain power consumption and environmental data as a result of a cost-reduction initiative or difficulty finding capacity for new applications. This typically occurs in data centers that are older, may have between 20 and 30 racks, and have grown, despite best intentions, in unintended ways.
Historically, MPO connectors had to be ordered with the correct gender and polarity because they could not be changed in the field. The PanMPO connector changes that, allowing installers to change both polarity and gender quickly and easily, simplifying the migration to 40G Ethernet while maintaining standards compliance. Because of this, data center operators only need to purchase one type of MPO patch cord, reducing costs and improving efficiency.
This year’s Cisco Live, being held at the Moscone Center in San Francisco, promises to be another exciting event. As a Platinum Sponsor, Panduit will be exhibiting in booth #1521 and will be featuring our Intelligent Data Center Solutions, Enterprise Solutions and Industrial Automation Solutions.
We are particularly excited about Cisco’s Application Centric Infrastructure (ACI) architecture, which promises to deliver fast application provisioning and simplified operations. ACI networks will be built upon a flatter, two-tier network architecture that requires some new ways of thinking about how an optimal physical infrastructure should be built. Panduit has been working with Cisco to understand the differences between traditional three-tier physical architectures and the ACI architecture, and will be presenting “ACI Impact on Physical Infrastructure Design and Deployment” in the general session on Tuesday, May 20th at 2:00 p.m. PDT. We will also be showing examples of cabinets configured with spine/leaf network topologies, including Top of Rack (ToR), End of Rack (EoR), and Middle of Rack (MoR) designs.
Data center networks are becoming more and more complex, making it increasingly difficult to troubleshoot and balance traffic within LANs and SANs. That is why more network architects and data center managers are deploying Tapped Fiber Optic Cassettes (TFCs).
TFCs give network analyzers, packet brokers, and other pieces of test equipment access to the fiber link.
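A passive tap diverts a fixed fraction of the optical power on the link to the monitor port, and that split ratio determines the insertion loss budget on each leg. The sketch below uses the generic ideal-splitter loss formula and a common 70/30 ratio as an illustration; these are not specifications for the Panduit TFC, and real taps add some excess and connector loss on top:

```python
import math

def split_loss_db(fraction: float) -> float:
    """Insertion loss (dB) from sending `fraction` of the optical power
    down one leg of an ideal passive splitter (excess loss not included)."""
    return -10 * math.log10(fraction)

# Example 70/30 tap: 70% of the light continues on the live link,
# 30% is diverted to the monitor port feeding the analyzer.
live_loss = split_loss_db(0.70)     # loss added to the production path
monitor_loss = split_loss_db(0.30)  # loss on the monitor path
print(f"live path: {live_loss:.2f} dB, monitor path: {monitor_loss:.2f} dB")
```

The practical takeaway is that tapping is not free: a 70/30 split costs roughly 1.5 dB on the production link, which must be accounted for in the channel’s overall loss budget when choosing where to place tap cassettes.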