Historically, there has been little convergence between the manufacturing and enterprise networks in the plant. Instead, there are multiple, separate networks: one may run a fieldbus protocol at the device level, another may run the ControlNet protocol for machine-to-machine
communications, while a third, such as Ethernet or a proprietary network, links the machines to data acquisition and storage units for reporting or archiving. Meanwhile, a separate network, often an extension of the office Ethernet network, runs on the plant floor, enabling workstation access to work orders and task instructions.
Keeping your communications and power cables properly and safely installed in harsher environments, such as shipbuilding, oil and gas, and chemical processing plants, can be a challenge. It must be taken seriously nonetheless: poorly secured cabling places the facility and personnel at elevated risk of injury and other adverse effects.
To ensure proper cable fastening in these harsh environments, stainless steel cable ties are often specified. However, not all stainless steel cable ties are created equal, and it is critical to evaluate the available options thoroughly in order to choose the one that is safest and most reliable.
The Wyr-Grid tray is “upside down” because its design is based on a strong wire mesh platform reinforced with 1-1/2” high wire mesh walls oriented downward, giving the appearance of an upside-down wire mesh tray. While unconventional in appearance, this design combines the best attributes of cable runway with the flexibility and utility of wire mesh pathways.
The NFPA 70E Standard provides guidelines for electrical safety in the workplace. Recently this standard has been updated to provide consistency of terms with other standards that address hazards and risk.
Some of these changes introduced new terms, such as “arc flash risk assessment” (replacing “arc flash analysis”) and “shock risk assessment” (replacing “shock hazard analysis”).
Performing arc flash and shock risk assessments for electrical devices yields important information that warns of the specific risks associated with an energized piece of equipment. This information is communicated to workers through equipment labels.
Section 130.5(D) of the 2015 NFPA 70E Standard explains new requirements for Arc Flash Warning Labels.
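To make the label requirements concrete, here is a minimal sketch of the kinds of fields an arc flash warning label typically carries. The class name, field names, units, and sample values are illustrative assumptions, not taken from the standard; consult NFPA 70E 130.5(D) itself for the authoritative list of required label content.

```python
from dataclasses import dataclass

@dataclass
class ArcFlashLabel:
    """Illustrative arc flash warning label fields (hypothetical schema).

    Field names and units are assumptions for illustration only; see
    NFPA 70E 130.5(D) for the actual labeling requirements.
    """
    equipment_id: str               # which piece of equipment the label is on
    nominal_voltage_v: int          # nominal system voltage
    arc_flash_boundary_in: float    # arc flash boundary distance
    incident_energy_cal_cm2: float  # available incident energy...
    working_distance_in: float      # ...at this working distance

    def summary(self) -> str:
        """Render the label data as a one-line warning string."""
        return (f"WARNING: Arc Flash Hazard - {self.equipment_id}: "
                f"{self.nominal_voltage_v} V, "
                f"{self.incident_energy_cal_cm2} cal/cm^2 at "
                f"{self.working_distance_in} in working distance, "
                f"arc flash boundary {self.arc_flash_boundary_in} in")

# Hypothetical panel data for illustration.
label = ArcFlashLabel("Panel MDP-1", 480, 48.0, 8.2, 18.0)
print(label.summary())
```

A structured record like this is one way a facility might keep the assessment results that ultimately get printed onto the physical labels.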
Last week I posted a blog about what is driving the adoption of high density fiber enclosures. High density fiber enclosures can help reduce the high cost of real estate: in a data center where space is constrained, they help ease those constraints. I also noted that high density fiber enclosures are used in data centers that are revenue generators, because they make room for more revenue-generating active equipment.
So a high density fiber enclosure helps add more equipment to a finite amount of space, but, as they say, there is no free lunch.
Real estate is one of the primary reasons that high density fiber enclosures are deployed in the data center. In some parts of the world, real estate is very expensive, so one way to save cap ex is to use the smallest data center possible. The smaller the data center, the less floor space required and, therefore, the lower the cap ex. This is certainly the case when using a co-lo facility. Of course, a smaller data center also means lower op ex, e.g., less cooling.
Another reason, also driven by real estate, is that the data center’s size is fixed and cannot be enlarged. This might be the situation in dense urban areas where a larger space does not exist. The only way to add functionality to the data center is to find a way to fit in more equipment, hence the use of a high density fiber enclosure.
Another less obvious reason for using a high density fiber enclosure is the trend towards data centers becoming profit centers. Historically, data centers were perceived as a cost of doing business. Depending on the business you are in, that may no longer be the case.
This question is not asked enough by data center designers, owners, or managers as they build out new whitespace. Cabinets are the foundation of the data center’s physical infrastructure, used throughout the life cycle of the facility. The IT equipment that runs the applications is contained within them, the cabling that connects the equipment to the users and the LAN/SANs is terminated and managed in them, power is distributed within them, and cooling is channeled through them. They are also the most visible infrastructure element, and how they look and fit together is often an indicator of how a data center is run and managed.
Why then are they frequently taken for granted, simply considered “big metal boxes”? Why isn’t there more emphasis on cabinets being considered an asset that helps reduce operational costs?
You are ready to deploy 10 gigabit Ethernet, but what media type should you use? As you might suspect, that is not a straightforward question to answer. There are several things you need to consider before making the right choice, and some of the choices may be contradictory.
Does your data center require a structured cabling solution? If so, then you will most likely stay away from Direct Attach Cable (DAC) assemblies used for 10GBASE-CR, because that is a point-to-point solution, and lean toward 10GBASE-T.
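The structured-cabling question and link reach together narrow the media choice quickly. The toy decision helper below paraphrases that reasoning; the reach thresholds (roughly 100 m for 10GBASE-T over twisted pair and about 7 m for passive DAC) reflect common limits, but the rules as a whole are an illustrative sketch of one selection heuristic, not a design standard.

```python
def pick_10g_media(structured_cabling: bool, reach_m: float) -> str:
    """Toy 10 Gigabit Ethernet media selector (illustrative rules only)."""
    if structured_cabling:
        # DAC (10GBASE-CR) is point-to-point, so a structured cabling
        # plant points toward 10GBASE-T over twisted pair (up to ~100 m).
        return "10GBASE-T" if reach_m <= 100 else "10GBASE-SR (fiber)"
    # Without a structured-cabling requirement, short in-rack or
    # adjacent-rack links can use low-cost passive DAC assemblies.
    return "10GBASE-CR (DAC)" if reach_m <= 7 else "10GBASE-SR (fiber)"

print(pick_10g_media(structured_cabling=True, reach_m=55))   # → 10GBASE-T
print(pick_10g_media(structured_cabling=False, reach_m=3))   # → 10GBASE-CR (DAC)
print(pick_10g_media(structured_cabling=False, reach_m=50))  # → 10GBASE-SR (fiber)
```

A real selection would also weigh power per port, latency, and optics cost, which is why the choice is rarely as clean as a two-input function.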
In Part 1 of “Adding New Physical Infrastructure” I reviewed three typical approaches taken by managers of small and mid-sized data centers to add new physical infrastructure: (1) build-it-yourself using in-house resources to design and integrate all elements of the infrastructure, (2) rely on a single supplier for design and integration, or (3) entrust multiple best-of-breed vendors to get it done.
We have a different take. As discussed in Part 1, you are likely to face significant risks and expense as you attempt to manage a wide range of technical details, complex project management issues, and multiple vendor relationships. Leveraging physical infrastructure expertise and partnerships with best-of-breed power and cooling suppliers, Panduit offers an Integrated Infrastructure approach that combines the benefits of both the single-source and best-of-breed approaches with the ease of managing a single supplier.
How do you build out a new data center physical infrastructure?
Under the best of circumstances, building out new data center capacity is complex, expensive, time-consuming, and fraught with risk. Experts, engineers, and consultants are needed for everything from designing the building shell and planning power and cooling systems to commissioning. These are just the major categories. Think about the expertise needed to manage all the details that cascade from them!
If you are responsible for a small to mid-sized data center, you may be faced with doing more of this yourself, given the available resources. Increased complexity makes it difficult to find and retain people who possess all the essential skills needed to design and integrate the power, cooling, racks, cabling, and other components necessary to complete the build correctly and on time. Taking on coordination of the build-out in addition to normal responsibilities can be overwhelming.