One of the frequent questions we hear from our customers has to do with choosing the right media type for their data center. On the surface, it would seem the answer is obvious: use copper between the servers and first tier of switches and use optical fiber everywhere else. Although you might find yourself nodding in agreement, that answer does not really address the real question.
The real question is: what is the right media type for maximizing what is important to you, or minimizing what is costing you?
Let’s take a look at just one of the factors you might consider when looking at the various media types: latency.
For this discussion, I am going to use a very simple definition of latency: the amount of time it takes a signal to traverse a particular link, including the physical layer devices. By physical layer devices, I mean the optical modules, the 10GBASE-T silicon, and so on.
Some of you may be deploying data centers that support critical, low-latency applications. An often-used example is high-frequency trading of stocks, commodities, futures, options, etc. The push toward the lowest possible latency is driven by competitive advantage: if one trading organization can execute a trade faster than the others, it has the potential to profit from that latency advantage. This is one of the reasons high-frequency traders try to locate their data centers as close as possible to the trading exchange's data center.
For 10G Ethernet copper links, there are two choices: 10GBASE-T deployed over CAT6A cabling with RJ45 connectivity, or Direct Attach Copper (DAC) cable assemblies using the SFP+ form factor.
If one is interested in the absolute lowest latency, DAC assemblies are the clear winner. The latency through a DAC cable assembly is on the order of 0.300μs, while it is about 2μs for 10GBASE-T: more than a 6X difference.
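To put those per-hop figures in perspective, here is a back-of-the-envelope sketch in Python. The PHY latencies come from the numbers above; the ~5ns/m propagation delay is an assumed typical value for copper media, not a figure from this article:

```python
# Rough one-way latency comparison for 10G copper links.
# PHY latencies are the figures discussed above; the propagation
# delay per meter is an assumed typical value for copper media.

PROP_DELAY_NS_PER_M = 5.0  # assumption: ~5 ns/m in copper

def link_latency_us(phy_latency_us, length_m):
    """Total one-way link latency: PHY processing + cable propagation."""
    return phy_latency_us + (length_m * PROP_DELAY_NS_PER_M) / 1000.0

dac = link_latency_us(0.300, 5)   # passive DAC, 5 m run
base_t = link_latency_us(2.0, 5)  # 10GBASE-T, same 5 m run

print(f"DAC:       {dac:.3f} us per hop")
print(f"10GBASE-T: {base_t:.3f} us per hop")

# Over a path that crosses several switches, the gap compounds:
hops = 3
print(f"Over {hops} hops, 10GBASE-T adds {(base_t - dac) * hops:.3f} us")
```

Note that at these lengths the cable propagation itself is a rounding error; nearly the entire difference is the PHY processing time.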
Why the difference?
DAC cable assemblies do very little to the electrical signal. A PCB inside the SFP+ shell takes the signal from the Ethernet switch port, launches it into a controlled-impedance transmission line (in this case, twinaxial cable), and sends it to the other end, where the reverse happens. The latency is very low because not much is going on. The exception is active DAC cable assemblies, where amplifiers and equalizers are used to extend reach; these add a bit of latency.
With 10GBASE-T, it is not that straightforward. The data to be transmitted is sent to an Ethernet PHY IC, where it is scrambled, encoded, mapped, converted to an analog signal, and then passed through analog interface circuitry that puts the signal on the CAT6A cable. At the receiving end, the signal goes through another analog interface and is digitized, equalized, demapped, descrambled, and then sent on its merry digital way. All of that math, or digital signal processing, takes time: about 2μs worth.
So, you might say that DAC cable assemblies are the clear winner. Not so fast. DAC cable assemblies are far more expensive than a 10GBASE-T link, and passive assemblies reach only about 7m, while 10GBASE-T over CAT6A reaches 100m. DAC cable assemblies are not field-terminable and are not part of a structured cabling solution, whereas 10GBASE-T takes advantage of many years of experience terminating RJ45 connectivity and fits into a structured cabling solution. The choice is not that straightforward after all.
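Those tradeoffs can be boiled down to a deliberately simplified selection rule. The reach and latency figures below come from this article; the function itself and its thresholds are illustrative assumptions, not a complete design methodology:

```python
# Illustrative helper sketching the tradeoff above. Reach (7 m passive
# DAC, 100 m CAT6A) and latency (~0.3 us vs ~2 us) figures are from
# the article; the decision logic itself is a simplifying assumption.

def pick_10g_copper_media(length_m, latency_budget_us):
    """Suggest a 10G copper media type for one link (sketch only)."""
    if length_m <= 7 and latency_budget_us < 2.0:
        # Only passive DAC (~0.3 us per hop) meets a sub-2 us budget.
        return "SFP+ passive DAC"
    if length_m <= 100:
        # 10GBASE-T (~2 us per hop) reaches 100 m over CAT6A and
        # fits into a structured, field-terminated cabling plant.
        return "10GBASE-T over CAT6A"
    return "consider optical fiber"

print(pick_10g_copper_media(3, 0.5))    # short, latency-critical link
print(pick_10g_copper_media(30, 5.0))   # typical in-row run
print(pick_10g_copper_media(150, 5.0))  # beyond copper reach
```

A real selection would also weigh cost per link, power, and how the cabling plant will be managed over its lifetime, which is exactly why the answer is rarely one media type everywhere.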
By working with Panduit, you can sort through the various architectures and cabling options for your application and data center. Please visit the network architectures page on our website for white papers, technology briefs, and design guides to help you make the right media choice. You can also visit our advisory services page for more information on how Panduit can help you further optimize your data center's physical infrastructure.