Last week I posted a blog about what is driving the adoption of high density fiber enclosures. High density fiber enclosures can help reduce the high cost of real estate: if you find yourself with a space-constrained data center, a high density fiber enclosure can help ease those constraints. I also said that high density fiber enclosures are used in data centers that are revenue generators, because they make it possible to include more revenue-generating active equipment.
So a high density fiber enclosure helps add more equipment to a finite amount of space, but, as they say, there is no free lunch.
Real estate is one of the primary reasons that high density fiber enclosures are deployed in the data center. In some parts of the world, real estate is very expensive. One way to save cap ex is to use the smallest data center possible: the smaller the data center, the smaller the footprint required and, therefore, the lower the cap ex. This would certainly be the case if one is using a co-lo facility. Of course, a smaller data center also means lower op ex, e.g., less cooling.
Another reason, also driven by real estate, is that the data center's size is fixed. The data center cannot be enlarged. This might be the situation in dense urban areas where a larger space does not exist. The only way to add more functionality to the data center is to find a way to fit in more equipment. Hence the use of a high density fiber enclosure.
Another less obvious reason for using a high density fiber enclosure is the trend towards data centers becoming profit centers. Historically, data centers were perceived as a cost of doing business. Depending on the business you are in, that may no longer be the case.
You are ready to deploy 10 gigabit Ethernet, but what media type should you use? As you might suspect, that is not a straightforward question to answer. There are several things you need to consider before making the right choice, and some of the choices may be contradictory.
Does your data center require a structured cabling solution? If so, you will most likely stay away from Direct Attach Cable (DAC) assemblies used for 10GBASE-CR, because that is a point-to-point solution, and lean toward 10GBASE-T.
The other day I was participating in a conversation with a customer about LAN and SAN speeds greater than 10G. It was a good conversation, and the customer had numerous questions about migrating to 40G Ethernet, what is happening with 100G Ethernet, using multiple fibers for Fibre Channel, and so on.
Toward the end of the conversation I asked them about their plans regarding deploying 40G Ethernet. They replied that they had no immediate plans for deploying 40G and that the reason they wanted to talk about it was to make sure that their LAN infrastructure could support it in the future. They plan on deploying 10G Ethernet in the new data center.
That revelation hit me with the same impact as participating in an ice bucket challenge.
There is something lurking about in today's data centers that is not mentioned in polite company and, quite frankly, is ignored. Although it will not go away, one hopes that it will not rise up and wreak havoc, bringing the enterprise to a halt.
One of the frequent questions we hear from our customers has to do with choosing the right media type for their data center. On the surface, it would seem the answer is obvious: use copper between the servers and first tier of switches and use optical fiber everywhere else. Although you might find yourself nodding in agreement, that answer does not really address the real question.
The real question is: what is the right media type for maximizing what is important to you or minimizing what is costing you?
Let’s take a look at just one of the factors you might consider when looking at the various media types: latency.
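To make the latency comparison concrete, here is a rough sketch. The per-PHY latency figures below are ballpark values often cited in vendor literature (10GBASE-T's block coding adds on the order of 2–2.5 µs per link, while SFP+ DAC and short-reach optics add only a few hundred nanoseconds), and the 5 ns/m propagation delay is an approximation; none of these numbers come from the post itself.

```python
# Illustrative per-link latency comparison for common 10G media types.
# PHY latencies are ballpark, commonly cited figures (assumptions, not
# measurements); propagation is ~5 ns per meter in copper or glass.

PHY_LATENCY_NS = {
    "10GBASE-T":  2500,  # PAM-16/LDPC block coding adds ~2-2.5 us
    "SFP+ DAC":    300,  # direct attach copper, minimal coding delay
    "10GBASE-SR":  300,  # short-reach optics, similar to DAC
}

PROPAGATION_NS_PER_M = 5  # signal travels roughly 0.2 m per ns

def link_latency_ns(media: str, length_m: float) -> float:
    """One-way latency for a single link: PHY delay plus propagation."""
    return PHY_LATENCY_NS[media] + PROPAGATION_NS_PER_M * length_m

for media in PHY_LATENCY_NS:
    print(f"{media:11s} over 30 m: {link_latency_ns(media, 30):6.0f} ns")
```

The point of the exercise: over a 30 m run, the media choice dominates the latency budget far more than the cable length does, which is why latency-sensitive shops often favor DAC or optics over 10GBASE-T at the server edge.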
Historically, MPO connectors had to be ordered with the correct gender and polarity because they could not be changed in the field. The PanMPO connector changes that, allowing installers to change both polarity and gender quickly and easily, simplifying the migration to 40G Ethernet while maintaining standards compliance. Because of this, data center operators only need to purchase one type of MPO patch cord, reducing costs and improving efficiency.
Data center networks are becoming more and more complex, making it increasingly difficult to troubleshoot and balance traffic within LANs and SANs. That is why more network architects and data center managers are deploying Tapped Fiber Optic Cassettes (TFCs).
TFCs give network analyzers, packet brokers, and other pieces of test equipment access to the fiber link.
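A TFC does this passively, by splitting a fraction of the light on the link out to a monitor port. The split ratio below (70/30) is a commonly used value, not one stated in the post; the insertion loss on each leg follows directly from the fraction of optical power it carries.

```python
import math

def tap_loss_db(fraction: float) -> float:
    """Insertion loss in dB for the leg carrying `fraction` of the light."""
    return -10 * math.log10(fraction)

# A 70/30 split is a common choice (an assumption for illustration):
# 70% of the light stays on the live link, 30% feeds the monitor port.
for name, frac in [("live leg (70%)", 0.70), ("monitor leg (30%)", 0.30)]:
    print(f"{name}: {tap_loss_db(frac):.2f} dB insertion loss")
```

The practical consequence is that the tap's loss must be budgeted into the link's overall loss budget, which is one reason architects plan for TFCs up front rather than retrofitting them later.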
When developing a new networking standard, several attributes need to be balanced to optimize its implementation. To optimize the implementation of 40GBASE-T, the task force developing the standard (IEEE P802.3bq) appears to have settled on a reach of 30 meters. This is a tradeoff between power dissipation of the silicon physical layer (PHY) IC driving the cable, the complexity of the PHY which would impact cost, the implementation of the channel, and the reach of the link.
The question is: Is 30 meters long enough? Let’s take a look.
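One back-of-the-envelope way to approach the question is to estimate the channel length of a typical end-of-row (EoR) run. The rack width, vertical pathway rises, and slack allowances below are illustrative assumptions, not figures from the P802.3bq draft or any cabling standard.

```python
# Back-of-the-envelope check: does a 30 m channel cover an end-of-row run?
# All dimensions are illustrative assumptions, not standardized figures.

RACK_WIDTH_M = 0.6      # typical 19-inch cabinet footprint
VERTICAL_M   = 2 * 2.7  # up and over a 2.7 m overhead pathway, both ends
SLACK_M      = 5.0      # service loops, patching, routing detours

def eor_channel_length(racks_in_row: int) -> float:
    """Horizontal run along the row plus vertical rises and slack."""
    return racks_in_row * RACK_WIDTH_M + VERTICAL_M + SLACK_M

for n in (10, 25, 40):
    length = eor_channel_length(n)
    verdict = "fits" if length <= 30 else "exceeds 30 m"
    print(f"{n} racks: {length:.1f} m -> {verdict}")
```

Under these assumptions, 30 meters comfortably covers top-of-rack and mid-size end-of-row runs, but a long row of cabinets starts to push past it, which is exactly the tradeoff the task force had to weigh.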
A converged fabric based on Fibre Channel over Ethernet (FCoE) helps data center architects and managers reduce CAPEX and OPEX while simplifying the network infrastructure. Until recently, there was something hindering the adoption of FCoE: 10GBASE-T.
Historically, deploying FCoE on the links between servers and aggregation switches meant that one had to use optical fiber or Direct Attach Copper (DAC) cable assemblies. The first generation of aggregation switches that supported 10GBASE-T did not support FCoE. Additionally, 10GBASE-T Ethernet server adapters did not support FCoE either; FCoE was only available with Converged Network Adapters (CNAs) that supported the SFP+ form factor. That meant one could implement ToR architectures with FCoE using DAC cable assemblies, or other architectures using optical fiber for longer distances.