3 Ways Edge Computing Stimulates IoT Technology Capabilities

3 Ways Edge Computing Enriches IoT Technology

There are three ways edge computing enhances IoT deployments. These areas are key to increasing data gathering capabilities in a real-time world.

For IoT deployments, going to the edge may be the best choice when it comes to helping businesses deploy IoT technology across their network infrastructures.

Panduit’s white paper, “Edge Computing: Behind the Scenes of IoT,” explains the difference between the cloud and edge computing and three ways the edge can help IoT technology deployments.

It also discusses the following key areas for consideration when deploying edge computing: real-time requirements, environmental conditions, space limitations, and security.

Edge Computing

Edge computing is the opposite of cloud computing. With edge computing, the compute, storage, and application resources are located close to the user of the data, or the source of the data.

This is in contrast to a cloud deployment where those resources are in some distant data center owned by the cloud provider.

Although edge computing may appear to be a new concept, it is just the computing pendulum swinging to one side of the computing continuum.

Computing started with the advent of mainframes in the late 1950s. Mainframes are an example of centralized computing; they were too large and expensive for one to be on every user’s desk.

In the late 1960s, minicomputers appeared, which moved compute power away from centralized control and into research labs where they controlled experiments, the factory floor for process control, and many other use cases.

The pendulum moved all the way to the distributed side with the arrival of the PC in the mid-1980s. With the PC, individuals had computing power at their fingertips.

The computing pendulum swings back and forth, and today, it is swinging towards edge computing, which puts the processing and storage resources closer to where they are used and needed.

Why Edge Computing for IoT?

IoT deployments can benefit from edge computing in three ways:

  1. Reduced Network Latency

The latency in an IoT deployment is the amount of time between when an IoT sensor starts sending data and when an action is taken on the data.

Several factors impact network latency: the propagation delay through the physical media of the network; the time it takes to route data through the networking equipment (switches, routers, servers, etc.); and the time it takes to process the data. Implementing edge computing for IoT reduces network latency and improves real-time response.
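To make those three contributors concrete, here is a minimal sketch in Python; the distances, hop counts, and processing times are illustrative assumptions, not measured values, and the point is simply that moving compute closer to the data source shrinks the propagation and routing terms:

```python
# Minimal sketch: summing the three latency contributors named above for a
# notional cloud path versus a notional edge path. All numbers are assumed
# for illustration, not measured values.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km per ms in fiber

def path_latency_ms(distance_km, hops, per_hop_ms, processing_ms):
    """Propagation delay + switching/routing delay + processing delay."""
    propagation_ms = distance_km / SPEED_IN_FIBER_KM_PER_MS
    switching_ms = hops * per_hop_ms
    return propagation_ms + switching_ms + processing_ms

cloud_ms = path_latency_ms(distance_km=1500, hops=12, per_hop_ms=0.5, processing_ms=5)
edge_ms = path_latency_ms(distance_km=2, hops=3, per_hop_ms=0.5, processing_ms=5)
print(f"Cloud path: ~{cloud_ms:.1f} ms, edge path: ~{edge_ms:.1f} ms")
```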

  2. Reduced Network Jitter

The jitter in a network is the variation of latency over time. Some real-time IoT applications may not tolerate network jitter if that jitter lengthens the latency enough to prevent the system from acting within the required time frame.
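As a rough illustration of jitter as latency variation, the sketch below takes a handful of invented latency samples and reports their spread and any deadline misses; the samples and the 12 ms deadline are assumptions for illustration only:

```python
# Minimal sketch: jitter as the variation of latency over time. The latency
# samples and the deadline are invented values for illustration.
from statistics import mean

latency_samples_ms = [10.2, 9.8, 10.5, 14.9, 10.1, 9.9]  # assumed measurements
deadline_ms = 12.0                                        # assumed real-time deadline

avg_ms = mean(latency_samples_ms)
peak_to_peak_jitter_ms = max(latency_samples_ms) - min(latency_samples_ms)
mean_deviation_ms = mean(abs(s - avg_ms) for s in latency_samples_ms)
missed = [s for s in latency_samples_ms if s > deadline_ms]

print(f"Peak-to-peak jitter: {peak_to_peak_jitter_ms:.1f} ms, "
      f"mean deviation: {mean_deviation_ms:.2f} ms, deadline misses: {len(missed)}")
```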

  3. Enhanced Security

Edge computing offers the opportunity to provide a more secure environment regardless of the deployment model: a co-location facility or equipment the company owns directly.

Co-location facilities are physically secure locations. If one owns the edge computing equipment, it can be in the factory where the IoT sensors are located or in another company-owned facility.

To learn more about edge computing and why it is important for IoT, download Panduit’s “Edge Computing: Behind the Scenes of IoT”  white paper – or subscribe to our blog to access all the papers in our IoT “101” white paper series.

4 Factors Impacting IIoT Technology Right Now

Bandwidth has a major impact on IIoT technology and your IoT network – it’s one of four requirements that have enabled IIoT applications to flourish.

4 Factors Impacting IIoT Technology

There are four factors that are currently contributing to the growth of IIoT technology. Bandwidth is an underlying component that affects this growth.

Panduit’s white paper, “The Ubiquity of Bandwidth” discusses four reasons IIoT is trending now and how bandwidth plays an integral role in IT/OT data gathering and analytics.

Why is IIoT Happening Now?

What has occurred to propel the IIoT into one of the most popular concepts in IT/OT?

1. Smartphone/Tablet — The widespread adoption of smartphones and tablets has made us comfortable with small devices that provide information and interact with us.

2. The Internet — The Internet, or more specifically, the World Wide Web, is an integral part of our lives; it is no longer a novelty. We have become accustomed to having our devices access vast amounts of data or upload our personal data to the cloud.

3. Cost — The cost of computing and communications has dropped to a level that makes IoT affordable.

4. Bandwidth — We are used to the increasing speeds of our communication networks, but there is another aspect of communications: bandwidth is everywhere.

The Ubiquity of Bandwidth

At the dawn of the computer era, there was only one way to connect devices: wires. Times have changed.

Today, network connections can take many forms: DSL, cable TV plant (FTTx, cable modem), wired Ethernet, Fibre Channel, or Industrial Ethernet for the factory floor.

More impressive is the number of ways to connect wirelessly including Bluetooth, LTE, 5G, satellite, ZigBee, and Wi-Fi.

We now take these connections for granted. Today’s smartphone seamlessly switches between the cellular data network and Wi-Fi.

A decade ago, it would have been unthinkable to see passengers on a commuter train passing the time by streaming their favorite TV program to their hand-held device.

Another aspect of today’s communications links is that they are always on— ever-present. Having to wait for the dial-up modems to train themselves and synchronize is ancient history.

Bandwidth is everywhere. It is this ubiquity of bandwidth that is a necessary component for making the IoT possible.

To learn more about how bandwidth affects your IIoT network, download Panduit’s “The Ubiquity of Bandwidth” white paper – or subscribe to our blog to access all the papers in our IoT “101” white paper series.


Which Optical Fiber Should You Use: OM4+ or OM5 Fiber?

Since the TIA ratified the specification for OM5, a wideband multimode optical fiber (WB-MMF), customers thinking about upgrading their existing infrastructure, or building new infrastructure, have been asking a question: Should they deploy OM5 fiber?

I’ll get to the answer in a bit.  First, let’s talk about what OM5 is.

OM5 is essentially an OM4 fiber that has an additional bandwidth specification at 953nm.  Both OM4 and OM5 have bandwidths specified as 4,700MHz•km at 850nm, and OM5 has a bandwidth specification of 2,450MHz•km at 953nm.  OM4 does not have a bandwidth specified at 953nm.

OM5 was designed to be used with optical modules that employ Shortwave Wavelength Division Multiplexing (SWDM).  These new SWDM modules use four wavelengths that span from 850nm through 953nm, to implement 100Gbps links.

Each wavelength is modulated at 25Gbps, and by multiplexing them together, one attains 100Gbps. See Figure 1. Given the wavelengths used in SWDM optical modules, it is easy to see why the OM5 standard was developed.

Figure 1 – Implementing SWDM
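As a trivial sketch of the lane arithmetic behind SWDM, the snippet below multiplexes four 25 Gbps lanes into a 100 Gbps link; the lane wavelengths shown are the commonly cited SWDM4 values and should be treated as illustrative assumptions here:

```python
# Trivial sketch of the SWDM lane arithmetic: four wavelength lanes, each at
# 25 Gbps, multiplexed onto one duplex multimode fiber pair. The lane
# wavelengths are the commonly cited SWDM4 values, included as assumptions.

swdm4_lane_wavelengths_nm = [850, 880, 910, 940]  # assumed lane plan
rate_per_lane_gbps = 25

aggregate_gbps = rate_per_lane_gbps * len(swdm4_lane_wavelengths_nm)
print(f"{len(swdm4_lane_wavelengths_nm)} lanes x {rate_per_lane_gbps} Gbps = "
      f"{aggregate_gbps} Gbps over a duplex fiber pair")
```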

Back to the question.

You only need to consider using OM5 if you plan on deploying 100Gbps links using SWDM optical modules AND need to reach out past 100m.

The appeal of SWDM optical modules is that they allow a 100Gbps link to be deployed over duplex MMF, rather than the eight parallel fibers required by 100GBASE-SR4. SWDM allows reuse of the existing duplex fiber infrastructure.

However, there are other alternatives for deploying 100Gbps over duplex fibers, such as 100G BiDi, or using PAM4 modulation to achieve the higher data rate.

The other alternatives do not suffer from SWDM's shortcomings, such as higher cost, higher operating temperatures, and the inability to support breakout applications. If you are still considering 100G SWDM optical modules and the reach is under 100m, you would be better off using standard OM3 or OM4, as it is less expensive than OM5.

If extended reach is needed, say for 40G BiDi, the better alternative to OM5 fiber would be our OM4 Signature Core MMF.  Our OM4 Signature Core MMF can reach out to 200m using 40G BiDi, while OM5 will only reach out to 150m, the same as OM4.

That is because at the wavelengths used by BiDi modules, OM5 fiber is no better than OM4.  In fact, OM4 Signature Core has outperformed standard OM5 fiber in several head-to-head competitions conducted at end-user sites.

If the decision is to use 100G SWDM modules AND you need to reach longer than 150m, the better fiber to use would be our OM5 Signature Core MMF. Our OM5 Signature Core MMF uses the same reach-enhancing technology as our OM4 Signature Core, so you can take advantage of reaches 20% greater than the standard.

For an in-depth explanation on how our OM4 Signature Core and OM5 Signature Core MMFs are able to achieve extended distances, please visit our Signature Core landing page, where you will find everything you need to know about Signature Core MMFs.

Better yet, view the recorded webinar, Where Do We Go From Here? A Fork in the Road for Multimode Fiber, presented by Robert Reid, our senior technical manager with our Data Center business unit.  In the webinar, not only does Robert talk about our Signature Core MMF, but also OM5, SWDM, and other topics surrounding multimode optical fiber and modules.

Finally, you can download our ebook for a comparison of the various fiber types.

3 Technology Advances Drive IIoT — and its Demand for Real-Time Data


Real-Time Data White Paper

What is the impact on the enterprise data center when it tries to process real-time data from IIoT devices?

Deploying IIoT generates data that needs to be collected, analyzed, and acted on in real time.

What exactly is real time and how does it affect your network’s infrastructure?

Panduit’s latest white paper, “What is the Impact of Real-Time Data?”  explains the relationship between process control and real-time data.

What is Real Time?

The definition varies, but generally, a real-time system is one that provides a smooth, seamless user experience.

This is certainly the case when watching HDTV or listening to streaming music. The video frames and audio samples arrive quickly enough and at the right time.

This allows the viewer or listener to integrate them into a smooth experience rather than discrete samples.

This definition also applies to digital control systems implemented on the factory floor or a flight control system. In those applications, if the digital control system does not respond fast enough, bad things can happen.
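To show what "fast enough" means in code, here is a minimal sketch of a control cycle that treats a missed deadline as a fault rather than a mere slowdown; the 10 ms deadline and the placeholder sensor and actuator functions are assumptions for illustration:

```python
# Minimal sketch: a control loop that treats a late response as a failure,
# not just a slow one. The 10 ms deadline and the placeholder sensor/actuator
# functions are assumptions for illustration.
import time

DEADLINE_S = 0.010  # assumed 10 ms real-time deadline

def read_sensor():
    return 42.0  # placeholder measurement

def actuate(value):
    pass  # placeholder control action

def control_cycle():
    start = time.monotonic()
    measurement = read_sensor()
    actuate(measurement * 0.5)  # placeholder control law
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        raise RuntimeError(f"Missed real-time deadline: {elapsed * 1000:.2f} ms")

control_cycle()
```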

Process Control is Generating Real-Time Data

End users and manufacturers of IIoT technology are using three concurrent technological advances to deploy IIoT: sensors, Moore’s Law, and the ubiquity of bandwidth.

Without them, the IIoT and the linkage of the factory floor to the enterprise data center would not be possible.

  1. Sensors—Sensors such as microelectromechanical systems (MEMS) accelerometers, gyroscopes, and inertial measurement units (IMUs) have become small and inexpensive enough to make wide deployment practical.
  2. Moore's Law—Doubling the number of transistors in an integrated circuit every two years has resulted in small, cheap CPUs and memories; the Raspberry Pi single-board computer is an example (see the sketch after this list).
  3. The Ubiquity of Bandwidth—IIoT devices that gather data need to send that data upstream for analysis. The ability to connect to a network is available everywhere; IIoT devices can connect via copper or fiber optic cabling, Wi-Fi, ZigBee, or cellular, to name a few.
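For item 2 above, the sketch below works through the doubling arithmetic behind Moore's Law; the 2,300-transistor starting point is the often-quoted count for the Intel 4004 in 1971, and the projection is illustrative rather than exact:

```python
# Rough illustration of the Moore's Law doubling described in item 2 above:
# transistor count doubling roughly every two years. The 2,300-transistor
# starting point is the often-quoted figure for the Intel 4004 (1971);
# treat the numbers as illustrative, not authoritative.

def transistors(year, base_year=1971, base_count=2_300, doubling_period_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_period_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```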

Deploying IIoT devices generates large amounts of data that must be analyzed and acted upon in real time.

To learn more about the impact of real-time requirements on your network’s infrastructure, download Panduit’s “What is the Impact of Real-Time Data?” white paper – or subscribe to our blog to receive our complete 4-part series of IoT 101 white papers.


Good Packets Gone Bad: How Packet Loss Occurs In Network Infrastructure

Causes of Packet Loss

Packet loss reduces network throughput and adds to latency.


Packet loss impacts a network in two ways: it reduces throughput and adds to latency.
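For a feel of how strongly loss limits throughput, here is a rough sketch using the well-known Mathis approximation for steady-state TCP throughput, throughput ≈ C × MSS / (RTT × sqrt(p)); the segment size, round-trip time, and loss rates below are assumptions for illustration:

```python
# Rough sketch: the Mathis approximation for steady-state TCP throughput,
# throughput ~ C * MSS / (RTT * sqrt(p)). The MSS, RTT, and loss rates are
# assumptions for illustration.
from math import sqrt

MSS_BYTES = 1460   # assumed maximum segment size
RTT_S = 0.020      # assumed 20 ms round-trip time
C = 1.22           # constant from the Mathis model (~sqrt(3/2))

def tcp_throughput_bps(loss_rate):
    return C * (MSS_BYTES * 8) / (RTT_S * sqrt(loss_rate))

for p in (0.0001, 0.001, 0.01):
    print(f"loss rate {p:.2%}: ~{tcp_throughput_bps(p) / 1e6:.1f} Mbps")
```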

But why does packet loss occur in the first place?

The following excerpt from Panduit’s “What is the Impact of Packet Loss?” white paper focuses on the root causes of packet corruption and its prevention.

A packet can become corrupted when it encounters a bit error as it moves from one end of the network to the other. Bit errors almost always occur in the lowest layer of a protocol stack, the physical layer. The job of the physical layer is to move information from one end of the network to the other.

Typically this information is represented by a stream of 0s and 1s. The physical layer does not assign any meaning to the stream of 0s and 1s because the upper layers handle that task.
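A back-of-the-envelope way to relate a raw bit error rate to the chance that a whole packet is corrupted, assuming independent bit errors (a simplification), is sketched below; the packet size and BER values are illustrative assumptions:

```python
# Back-of-the-envelope sketch: probability that a packet is corrupted given a
# raw bit error rate (BER), assuming independent bit errors (a simplification).
# Packet size and BER values are assumptions for illustration.

PACKET_BITS = 1500 * 8  # assumed 1500-byte Ethernet frame

def p_packet_corrupted(ber, bits=PACKET_BITS):
    return 1 - (1 - ber) ** bits

for ber in (1e-12, 1e-9, 1e-6):
    print(f"BER {ber:.0e}: packet corruption probability ~{p_packet_corrupted(ber):.2e}")
```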

Causes of Bit Errors

Copper Cabling/Wireless Connections: Outside interference such as lightning or other electrical noise can cause a bit error if the physical layer uses copper cabling or a wireless connection.

Optical Networks: In optical networks, a bit error could occur if the optical module is failing, causing it to have difficulty determining the stream of 0s and 1s. Other causes could be improperly terminated cabling, dirty fiber optic connectors, or water penetrating the cable.

Preventing Packet Loss

Proper Installation and Maintenance of the Network:
When installing RJ45 jacks, you may untwist the copper pairs more than needed. This could unbalance the pair, allowing electromagnetic interference (EMI) to impact link performance. Cleaning the end-face of fiber optic connectors is always important, but even more so at higher network speeds.

Proper grounding and bonding eliminate differing ground potentials between pieces of networking equipment. These are all examples of installation issues that can impair the receiver’s ability to distinguish the transmitted bit sequence, leading to corrupted packets.

Media Type: Media type, for example, copper or fiber, should also be considered. CAT6A unshielded twisted pair copper cabling is ideal for new installations, as it provides the best performance for most applications without the added expense of shielded cable. For harsh environments where EMI is present, you may need to install shielded copper cable, or fiber cabling, which is immune to EMI.

To learn more about how you can prevent good packets from going bad, download Panduit’s “What is the Impact of Packet Loss?” white paper – or subscribe to our blog to receive our complete 4-part series of IoT 101 white papers.