Consolidation – The pros and cons of putting your eggs in one basket

Consolidating key facilities like data centers has obvious advantages. Chief amongst these are reduced licensing, energy, and maintenance costs.

But consolidating facilities also consolidates risk. If your single, major global data center goes down, then the company goes down with it.

So, how do you decide how to approach a consolidation strategy, and put in place the correct procedures, policies, and technologies to mitigate risk?

Here are some quick pointers.

Designing a consolidated hub

Firstly, consolidating physical resources can cut your overhead, operational, and energy costs, so consider doing more with less hardware.

Software-based approaches – cloud computing, ‘server-less’ models, virtualization, and software-defined networking – can help you consolidate onto fewer machines.
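
To make the consolidation arithmetic concrete, here is a minimal sketch (not Panduit code; the vCPU figures are hypothetical) that packs virtualized workloads onto as few hosts as capacity allows, using a simple first-fit-decreasing heuristic.

```python
def hosts_needed(vm_cpu_demands, host_cpu_capacity):
    """First-fit-decreasing estimate of how many physical hosts a set of
    virtualized workloads could be packed onto (illustrative only)."""
    hosts = []  # remaining capacity per host
    for demand in sorted(vm_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                break
        else:
            hosts.append(host_cpu_capacity - demand)
    return len(hosts)

# Hypothetical numbers: 12 workloads that once had a server each may fit on 3 hosts.
demands = [10, 8, 8, 6, 6, 5, 4, 4, 3, 2, 2, 2]  # vCPU demand per workload
print(hosts_needed(demands, host_cpu_capacity=24))
```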

Consolidating software onto fewer hardware platforms can also reduce software licensing fees.

However, to pull off a move like this you will need to acquire the necessary expertise, or find a service provider who can design and manage your next-generation data center.

Mitigating risk through infrastructure design

Modern networking infrastructure design can help mitigate risk through techniques such as application orchestration and policy-based actions.

This model takes a bird’s eye view of your business software and services, and ensures everything runs smoothly, shifting resources where they’re needed in a timely way.

Underpinned by a modern network, this can be very effective in maintaining uptime, with cloud technologies such as ‘containerized micro-services’ offering self-managing, self-healing applications that run automatically, and scale up and down via the cloud.
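
As a rough illustration of the policy-based scaling these self-managing platforms perform, the sketch below (hypothetical numbers, not any specific product’s algorithm) adjusts a replica count in proportion to observed load versus a target.

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct):
    """Scale the replica count in proportion to observed load versus the target,
    the same shape of rule used by container orchestrators' autoscalers."""
    return max(1, math.ceil(current_replicas * current_cpu_pct / target_cpu_pct))

# Hypothetical readings: 4 replicas at 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))
# Load falls to 20% -> scale back in to 2.
print(desired_replicas(4, 20, 60))
```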

All of this helps mitigate the risk of consolidation.

The downside is that, again, these cutting-edge technologies require technical know-how, investment, and a different approach to managing and monitoring your IT system.

Future-proofed technology

You want to make sure you have the necessary technology to ensure your new crown jewel data center won’t be rendered obsolete before the construction crew even breaks ground.

So, try to make sure you choose open-standards technologies with resources that can be re-purposed and extended without excessive development effort and cost.

Always-on infrastructure

Finally, how do you spec a data center that can meet the always-on needs of a modern financial institution and guarantee near-constant uptime, without breaking the bank?

When consolidating resources, you need to build in flexibility, and avoid technology silos by ensuring that your resources are transparent, networked, and shared.

Virtualization is a key element of a successful data center consolidation strategy, and it can help you achieve these things.

A converged network architecture that simplifies management, accelerates traffic, and makes better use of resources is also a must-have.

This could include fast Fibre Channel or iSCSI networks to connect servers and storage, plus network and storage virtualization, which pools and optimizes your network and storage resources in a cost-effective and efficient way.

There are two main approaches to implementing change: rip and replace the old platform, particularly if it is failing, or develop the two side by side, transitioning applications step by step.

The latter may be more prudent for you, carrying less risk. Either way, as an experienced networking architecture partner, Panduit can help you plan and implement a next-generation network infrastructure.

Find out more at: https://pages.panduit.com/finance-all.html

Panduit Solutions Can Help Prepare Your Network Infrastructure for IIoT Technology

How Can Your Network’s Performance Impact Deploying IIoT Technology?

Use Panduit’s Insights to Build a Robust Network Foundation for Your IIoT Deployment

How can you prepare your network infrastructure to successfully accommodate IIoT technology? Get answers to questions about potential IIoT technology deployment issues that may impact your network infrastructure.

For example, here are some questions you may ask when deploying IIoT technology:

What is the impact of real-time data?

Most networks were not designed to react to and process data in real time. From self-driving cars to digital control systems on factory floors, real-time data is a big part of IIoT deployments. Not being able to act on data in real time can have catastrophic consequences.
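
To illustrate why stale data is dangerous, here is a minimal sketch; the 10 ms budget and the helper names are assumptions for illustration, since real control-loop deadlines are application-specific.

```python
DEADLINE_MS = 10  # hypothetical control-loop budget; real budgets are application-specific

def handle_reading(value, produced_at_ms, received_at_ms):
    """Act on a sensor reading only if it is still fresh enough to be safe."""
    age_ms = received_at_ms - produced_at_ms
    if age_ms > DEADLINE_MS:
        # Stale data: a control system should fail safe rather than act on old state.
        return {"action": "fail_safe", "age_ms": age_ms}
    return {"action": "actuate", "value": value, "age_ms": age_ms}

print(handle_reading(42.0, produced_at_ms=1_000, received_at_ms=1_005))  # fresh -> actuate
print(handle_reading(42.0, produced_at_ms=1_000, received_at_ms=1_020))  # stale -> fail safe
```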

How does edge computing affect network performance?

Edge computing allows the compute, storage, and application resources to be located close to the user or the source of the data. With cloud deployment, these resources are in a distant data center owned by the cloud provider. Deploying IIoT solutions using the cloud makes it difficult to manage latency. Today, IIoT deployments can benefit more from edge computing than cloud computing.
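
A back-of-envelope calculation shows why distance matters: propagation delay alone separates an edge site from a distant cloud region. The distances below are illustrative assumptions, not measurements.

```python
# Back-of-envelope only: light in fibre travels at roughly 200,000 km/s,
# i.e. about 5 microseconds per kilometre, each way.
US_PER_KM = 5.0

def round_trip_propagation_ms(distance_km):
    """Round-trip fibre propagation delay alone (no queuing, switching, or processing)."""
    return 2 * distance_km * US_PER_KM / 1000.0

# Hypothetical distances, not measurements:
print(f"Edge site 2 km away:   {round_trip_propagation_ms(2):.3f} ms")
print(f"Cloud region 800 km:   {round_trip_propagation_ms(800):.1f} ms")
```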

How important is the data gathered from sensors?

Predictive analytics built on IIoT sensor data can improve operational efficiency, reduce downtime, and save your business money. The many types and characteristics of sensors are important to consider when deploying IIoT technology.

How important is bandwidth for helping IIoT technology extract information from data?

Bandwidth is now available almost everywhere, and it is this ubiquity that allows devices to switch seamlessly between networks. As a result, connected devices no longer require endless cables and wires. Bandwidth allows us to communicate quickly and effectively, which is what makes IIoT possible.

What is the impact of packet loss?

IT network managers dislike packet loss because it steals valuable bandwidth, reducing the link’s available throughput. For OT network managers trying to deploy IIoT, a network’s latency is more important than bandwidth or throughput. Despite their differences, minimizing lost and corrupted packets requires IT and OT to work together as they transform their networks to leverage IIoT technology.
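
To see how packet loss caps throughput, consider the widely cited Mathis approximation for TCP; the sketch below uses illustrative numbers (1460-byte segments, a 2 ms round trip), not measured values.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate TCP throughput ceiling under random loss
    (Mathis et al.: rate <= (MSS / RTT) * C / sqrt(p), with C ~= 1.22)."""
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# Illustrative numbers only: 1460-byte segments, 2 ms round trip on a plant network.
for loss in (1e-5, 1e-4, 1e-3):
    gbps = mathis_throughput_bps(1460, 0.002, loss) / 1e9
    print(f"loss={loss:.0e}  ->  ~{gbps:.2f} Gbps achievable per flow")
```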

Panduit has developed a series of white papers describing the challenges surrounding the IIoT’s impact on the typical data center, why IT and OT managers may look at the same problems differently, how they can successfully resolve those problems, and the importance of IT/OT convergence to your network’s performance. In addition, you will learn the following:

  • The importance of IT and OT network infrastructures
  • Why IIoT process controls demand real-time data
  • The relationship between IIoT technology and bandwidth
  • The ways IIoT deployments can benefit from edge computing
  • How to determine the importance of sensor specifications

Access all the papers in our IIoT white paper series.

5 Mega Trends Driving the Future Direction of Data Centers

2018 was a spectacular year for change in the data centre environment. While researching my new paper – ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’ – I watched several bubbling-under technologies break through and provide the impetus for some radical cloud environments.

  1. Edge Computing – less edgy, more mainstream – We are seeing leading businesses and organisations invest heavily in technology that will demand both growth of centralised cloud data centre services and a whole new breed of Edge data centres placing compute capability where it is needed. Placing analysis and response processing close to the source allows data users to optimise response times. The Edge drives efficient bandwidth utilisation and minimises the connections and physical reach (distance) that introduce latency into the infrastructure. Together with other data growth areas, Edge Computing applications will generate petabytes of data daily by 2020. Systems that intelligently process data to create business advantage will be essential to our customers’ future prosperity.
  2. Hyperscale data centre investment – efficiency gained on the coat tails of giants – Industry titans Google, Amazon, Microsoft, Facebook and Apple, along with Asian public cloud players Alibaba and Tencent, are investing heavily, not only in new facilities, but in the technology platforms that enable ever faster data transport and processing. The global hyperscale data centre market is expected to grow from $25.08 billion in 2017 to $80.65 billion by 2022. Established businesses competing with the web-scale firms cannot afford to be constricted by legacy technologies; to remain competitive, you must build new platforms and invest in next-generation Internet Protocol (IP) infrastructure.
  3. Solid State Storage – no flash in the pan – Flash storage is replacing disk drives across the industry for high-performance compute environments. Flash technology is on trend with the demand for the higher bandwidth and lower latency required by big data workloads. As our customers’ data volumes increase, new access and storage techniques such as Serial Storage Architecture (SSA) help eliminate data bottlenecks in data centre and Edge environments. Flash offers a more efficient cabinet and rack footprint and far greater power efficiency than disk drives. As the requirement for storage space multiplies, this is a significant advantage.
  4. Artificial Intelligence (AI) – disruption driving growth – AI and Machine Learning (ML) require machine-to-machine communications at network speeds, and they generate data volumes that have serious implications for network topologies and connectivity. An example of this is seen in the Ethernet switch market, which has experienced incredible growth in shipments of 25 and 100 Gigabit Ethernet (GE) ports. These and newer, higher-speed Ethernet ports will be essential to the growth of AI and ML applications, as the volumes of data required are at the petabyte scale. We are working with partners on high-speed, high-quality infrastructure and the next-generation topologies needed to support this data volume growth. Read more on this subject in the report – Light into Money.
  5. Converged technology – simplify to clarify – To build more efficient data centres, it is agreed that simplified designs on flexible infrastructure platforms are required to achieve more agile organisations. We are witnessing increased automation, more integrated solutions, and software-defined capabilities that reduce reliance on silo systems. This allows users to take advantage of highly flexible infrastructure to drive more capacity, monitoring, and analysis, and to increase efficiency within the data centre. Converged and hyper-converged infrastructure take advantage of many of the technologies discussed above to build the future cloud.

Understanding how leaders in the market are moving forward provides stepping stones for all of us to develop our platforms and data centres to take advantage of new developments. However, we must not follow blindly; it is essential that our designs and solutions create the most effective and efficient outcome for our needs, and we can only do this when we step out of the silo and view the wider opportunities.

Bandwidth Bottleneck – How to De-stress the Data Center Infrastructure

The IT industry does an excellent job of positioning the next great innovation well in advance. We have been just a step away from the internet of things (IoT) for over 20 years, AI (Artificial Intelligence) has been around for as long as I can remember, and solid-state memory is set to take over from disk drives and tape, speeding access and saving space, energy, and resources. The maturity of a technology can be mapped using the ‘hype cycle’ concept model; in simple terms, as time moves forward the ‘hype’ becomes reality and the ‘quantum leaps’ draw ever closer.
Explosive data growth and the need for ubiquitous storage and processing are undisputed, which leaves the question – is it time to believe the hype?

Preparing for tomorrow is crucial for business survival

In data center network communications, multiple technologies are converging to support the growth of emerging, data-intensive applications, from e-health, media, and content delivery to sensor-connected devices and vehicles.

With volumes of data set to grow exponentially, gathering, storing, processing, and transmitting data across the data center will be seriously hindered without infrastructure that meets latency and bandwidth performance requirements now and for the foreseeable future.

Indeed, when technologies such as AI and Machine Learning (ML) become mainstream, individual data sets will run to hundreds of terabytes. Meanwhile, M2M data is expected to outstrip enterprise and personal data within the next five years. This increase in data traffic is already creating bottlenecks within legacy data centers, with every gateway and connection reducing the overall performance potential of the system.

My latest research white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, investigates the drivers for the current and next-generation infrastructure needed to support the data center industry and facilitate the high-bandwidth, low-latency platforms required in the multi-petabyte traffic era.

With an understanding of the opportunities available and the technologies influencing change we can plan better and prepare our structures to operate at the most appropriate levels. We can learn from the hyperscale designers who are designing systems with equipment manufacturers to optimize requirements for use, to attract these fast-growing applications into the cloud.

Each of these technology advances reflects the rapid growth of the global digital economy which is creating demand for greater network speed and performance from the internet backbone right into the core of the data center.

A key challenge for the network infrastructure is the ever-growing demand for faster speeds – 10GE, 25GE, 40GE, 50GE and 100GE today, with 200GE and 400GE predicted to roll out as early as 2019. Together with new network architectures designed to maximise performance, the physical infrastructure must be designed to enable rapid and seamless deployment of new switching technologies.

Data bottlenecks will continue to be a growing problem if infrastructure and data center businesses focus on short-term fixes. Network infrastructure is as vital as data center power and cooling; without appropriate investment, it can significantly reduce both the facility’s life cycle and its ROI.

My white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, is free to download.

PANDUIT PRESENTS OPTICAL CONNECTIVITY PAPERS AT IWCS

Panduit participated in the 67th International Cable & Connectivity Symposium (IWCS) at the Rhode Island Convention Center, Providence, October 14-17, 2018. During the technical conference, Panduit contributed two presentations that summarized recent advancements in the field of data center communications.

Asher Novick presents at Advances in Optical Connectivity session

Asher Novick, Optical Research Engineer

At the technical conference, Asher Novick, optical research engineer, presented two papers on behalf of our Fiber Optics Research Group at the Advances in Optical Connectivity session and the Multimode Fiber session. Novick’s first presentation, titled “Performance Improvement of Single Mode Signal Transmission in Multimode Fiber using Ultra Low Loss Connector,” described collaborative work with Cailabs (http://www.cailabs.com/) to enable single mode transmission over multimode fiber. Our research shows that utilizing novel optical phase mask technology and Panduit’s Ultra Low Loss connector can enable transmission of single mode lasers over links longer than 300 meters, with multiple connectors, at data rates of 100 Gbps. This investigation enables the use of installed multimode infrastructure with single mode transceivers, which is important for coping with the need for higher data rates in data centers without replacing the fiber infrastructure.

His second presentation, titled “Correlation OTDR for Accurate Measurements of Optical Length of Fiber Optic Cables under Diverse Environmental Temperatures,” presents a novel technique to characterize optical length of cables, which is critical to equalize latency in emerging Financial Technology applications such as algorithmic trading. This paper has already been invited to be presented in the 2018 / 2019 IWCS Webinar Series program.

“Enabled by our world-class fiber optics laboratory, these contributions summarize our most recent research on the current factors limiting the reaches of multimode fiber communication systems,” said Panduit’s Chief Technology Officer Brett Lane. “They demonstrate our longstanding commitment to connecting original research to meeting our customers’ most demanding challenges of tomorrow.”

Investing in the future: collective thinking in facility design

Future-proofing facilities while leveraging previous investments

A new generation of facilities is being designed and constructed around the globe. A key facility design challenge is ensuring the systems and infrastructure involved will not only deliver new advantage but also function seamlessly with (and add value to) the other parts of a company’s ecosystem, including legacy systems and existing capital projects. Old and new primary investments need to work together harmoniously to deliver a more productive and profitable future.

Future-Proofed Facility Design White Paper

READ THE WHITE PAPER: Why state-of-the-art facilities require state-of-the-art infrastructure

In this age of digital transformation, data underpins modern business, connectivity is key, and operational scaling is a fact of life. This is why corporate facilities in banking, finance, and any other sector are being conceived to take advantage of the opportunities offered by this new landscape. Getting the infrastructure right, the strongest underpinning, is crucial. Continuing with the banking example, companies such as HSBC, JP Morgan Chase, Crédit Suisse and CitiBank (or their outsourcing partners) are doing precisely that.

The data center, now evolving into next-gen digital infrastructure architecture, has provided the core of banking operations for generations. Today, such data centers are expected to work smarter and do more to process and store vastly increased volumes of data, globally, quicker than ever. They must be always available, with no delays.

As a result, global heads of facilities and real estate want assurances they are investing in the right technical infrastructure, maximizing the ability of the organization’s IT to, for instance, deploy workload in the right places, and deliver the right services to users and customers at the right time (and at the right price) – integrating with still-valuable legacy systems where necessary. This requires technology that is both reliable and flexible, based on global standards, as well as working with acknowledged leaders in the field.

At a basic level, it can mean tried-and-tested cabling – the strongest physical foundations – and ensuring an overall standards-based approach that is not only optimized for interoperability and performance but also addresses a multitude of other facilities (and cost) requirements, from energy efficiency to cooling optimization, even space considerations. By looking at the bigger picture and applying joined-up thinking when making technology choices that affect facility design, facilities and real estate leaders – in partnership with IT and procurement teams – can ensure both connectivity and investment protection. This, in turn, can have a real impact on the bottom line as infrastructure converges, data volumes increase exponentially, and the pace of business continues to speed up.

To find out more about how you can future-proof your facilities while leveraging previous investments, read our report, “Why State-of-the-art Facilities Require State-of-the-art Infrastructure.”

Building the next-gen data centre: global, connected, ready for business

With modern business defined by data and by connectivity, tomorrow’s data centre will bear little resemblance to today’s models.

What we currently think of as a data centre is being superseded by next-gen digital infrastructure architecture: global in scale and defined by the business services it delivers and the user/consumer requirements that it satisfies. According to a recent Gartner, Inc. report, infrastructure and operations people tasked with data centres will have to focus on “enabling rapid deployment of business services and deploying workloads to the right locations, for the right reasons, at the right price”.

These super-charged requirements, and that unstoppable focus on data, mean the most robust, reliable and flexible infrastructure – physical, electrical and network – will be paramount. Gartner also added that, by 2025, eighty percent of enterprises will have shut down their traditional data centre versus ten percent today. The key word is “traditional”.

With the rise of next-gen digital infrastructure architecture, workload placement becomes a critical driver of successful digital delivery. That, in turn, is underpinned by performance, availability, latency, scalability, and so on. Indeed, Gartner suggests an “ecosystem” is required to enable “scalable, agile infrastructures”.

What’s the best way to engage with this era of digital transformation, interconnect services, cloud, edge services and Internet of Things (IoT) if you’re planning or preparing to replace your data centre? The optimum digital infrastructure architecture (aka modern data centre) to meet requirements for the next five, ten or 15 years will, as ever, depend on each organisation’s priorities. There’s no simple answer. For some, a major step will be to ensure the strongest physical foundations including cabling, pathways and security. Many organisations will need an effective way to “bridge the gap” from old-world data centre and stacks into converged networks and infrastructure. At the same time, data centre infrastructure management tools can help improve energy efficiency and reduce costs. Perhaps a through line in all situations is ensuring the right connectivity solutions: to increase network throughput, reduce latency, improve agility, ensure scalability, and so on. That way, you’re not only ready for opportunities presented by the Internet of Things – you’ll be ready for the Internet of Everything.

To learn more about ensuring you have the right connectivity solutions at your core, read the report: https://pages.panduit.com/finance-all.html

Elmhurst Memorial Healthcare Uses Technology to Provide Superior Patient Care

A Robust Network Infrastructure Allows for Patient-Centered Care

The future is here – but not all hospitals have the infrastructure to embrace it. So, when Elmhurst Memorial Healthcare rebuilt with a commitment to patient-centered care, they turned to Panduit for network infrastructure and connectivity solutions.

Challenge

The hospital needed to design a future-forward backbone for its enterprise to accommodate the 178,000-square-foot, four-story main hospital and to connect:

  • physician offices
  • outpatient healthcare services
  • surgical suites
  • the 80,000-square-foot medical office building

Solution

To accomplish this task, Elmhurst Memorial Healthcare relied on Panduit’s enterprise and data center network infrastructure solutions to create a campus-wide network that places the most advanced equipment and techniques in the hands of top medical talent.

Panduit enabled:

  • On-site telecom rooms and data center
  • Fast and secure data transmission
  • Efficient Power over Ethernet
  • Reliable wireless capabilities

Panduit’s TX6A™ 10Gig copper and Opticom® fiber backbone ensure that the entire care team can securely view medical records and test results simultaneously, regardless of location.

In addition, Panduit’s cabinet and cable management products organize and protect critical equipment and cabling from environmental hazards such as dust, heat, and humidity. Panduit’s FiberRunner® cable management system enables customers to manage, organize, and properly route their cables, saving space and ensuring optimal network operation.

Result

With Panduit’s help, Elmhurst Memorial Healthcare now makes technology decisions based on medical and business needs, not infrastructure limitations.

See the infographic case study.

Digitizing History for Future Preservation with Data Center Solutions

How the Vatican Apostolic Library Preserved its Manuscript Collection

The Vatican Apostolic Library preserves its invaluable documents with the help of a robust, highly available network infrastructure.

Undergoing a massive data transfer process is not easy, but the Vatican Apostolic Library did just that. Panduit’s previous success in enhancing the connectivity and performance for the Vatican Apostolic Library’s main data center earned it the trust to help digitize and protect more than 80,000 priceless historical manuscripts.

Founded in 1451, the Vatican Apostolic Library’s collection includes precious material from figures as far back as Michelangelo and Galileo. To preserve the collection and continue to contribute to the worldwide sharing of knowledge, the 15th-century library decided to digitize its aging and increasingly delicate manuscripts.

To successfully complete this project, the library’s Belvedere Court building needed a more efficient data center infrastructure to support document storage. The library also needed solutions to address power and energy usage challenges, capacity constraints, environmental and connectivity issues, and security and access control requirements.

Adapting to the constraints of the ancient structure, Panduit developed a solution with security, storage, and power management.

The building now uses hot-aisle containment with hot/cold air separation inside the cabinets for improved airflow – delivering a power savings of nearly 30% compared to the previous system.

SmartZone solutions simplified the library’s network infrastructure, managing and monitoring rack power distribution units and environmental sensors through a single IP address. For enhanced data center security, the gateways support access via intelligent handles on cabinets.

The Vatican Apostolic Library now has the capability to support the vast amount of data generated by the digitization project, ensuring high reliability and elevated transmission speed. Because of Panduit’s network, people around the world have online access to these invaluable treasures.

Read the full article here.

Which Optical Fiber Should You Use: OM4+ or OM5 Fiber?

Since the TIA ratified the specification for OM5, a wideband multimode optical fiber (WB-MMF), customers that are thinking about upgrading their existing infrastructure, or building out new, are asking a question: Should they deploy OM5 fiber?

I’ll get to the answer in a bit.  First, let’s talk about what OM5 is.

OM5 is essentially an OM4 fiber that has an additional bandwidth specification at 953nm.  Both OM4 and OM5 have bandwidths specified as 4,700MHz•km at 850nm, and OM5 has a bandwidth specification of 2,450MHz•km at 953nm.  OM4 does not have a bandwidth specified at 953nm.

OM5 was designed to be used with optical modules that employ Shortwave Wavelength Division Multiplexing (SWDM).  These new SWDM modules use four wavelengths that span from 850nm through 953nm, to implement 100Gbps links.

Each wavelength is modulated at 25Gbps and by multiplexing them together, one attains 100Gbps.  See figure 1.  Given what wavelengths are used in SWDM optical modules, it is easy to see why the OM5 standard was developed.

Figure 1 – Implementing SWDM
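
As a quick worked example of the numbers above (an illustration, not a Panduit tool), the sketch below sums the four SWDM lanes to the 100Gbps aggregate and applies the simple EMB-divided-by-length approximation to estimate the modal bandwidth each specification leaves over a 100m link.

```python
SWDM_LANES_GBPS = [25, 25, 25, 25]  # four wavelengths between 850nm and 953nm

# Effective modal bandwidth (EMB) figures quoted above, in MHz*km.
EMB_MHZ_KM = {
    ("OM4", 850): 4700,
    ("OM5", 850): 4700,
    ("OM5", 953): 2450,
}

def aggregate_rate_gbps(lanes):
    """The link rate is simply the sum of the multiplexed lane rates."""
    return sum(lanes)

def effective_bandwidth_mhz(fiber, wavelength_nm, length_m):
    """Rough modal bandwidth left over a given length (EMB divided by length in km)."""
    return EMB_MHZ_KM[(fiber, wavelength_nm)] / (length_m / 1000.0)

print(f"SWDM aggregate: {aggregate_rate_gbps(SWDM_LANES_GBPS)} Gbps")
for fiber, wl in EMB_MHZ_KM:
    print(f"{fiber} @ {wl}nm over 100m: ~{effective_bandwidth_mhz(fiber, wl, 100):,.0f} MHz")
```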

Back to the question.

You only need to consider using OM5 if you plan on deploying 100Gbps links using SWDM optical modules AND need to reach out past 100m.

The interest in using SWDM optical modules is that they allow deploying a 100Gbps link over duplex MMF, rather than taking up the eight parallel fibers required when using 100GBASE-SR4.  SWDM allows reusing the existing duplex fiber infrastructure.

However, there are alternatives better suited to deploying 100Gbps over duplex fibers, such as 100G BiDi, or using PAM4 modulation to achieve the higher data rate.

These alternatives do not suffer from SWDM’s shortcomings, such as higher cost, higher operating temperatures, and the inability to support breakout applications.  If you are still thinking about using SWDM 100G optical modules, and the reach is under 100m, then you would be better off using standard OM3 or OM4, as they are less expensive than OM5.

If extended reach is needed, say for 40G BiDi, the better alternative to OM5 fiber would be our OM4 Signature Core MMF.  Our OM4 Signature Core MMF can reach out to 200m using 40G BiDi, while OM5 will only reach out to 150m, the same as OM4.

That is because at the wavelengths used by BiDi modules, OM5 fiber is no better than OM4.  In fact, OM4 Signature Core has outperformed standard OM5 fiber in several head-to-head competitions conducted at end-user sites.

If the decision is to use 100G SWDM modules AND you need to reach longer than 150m, the better fiber to use would be our OM5 Signature Core MMF.  Our OM5 Signature Core MMF uses the same reach-enhancing technology as our OM4 Signature Core, so you can take advantage of reaches roughly 20% greater than the standard.

For an in-depth explanation on how our OM4 Signature Core and OM5 Signature Core MMFs are able to achieve extended distances, please visit our Signature Core landing page, where you will find everything you need to know about Signature Core MMFs.

Better yet, view the recorded webinar, Where Do We Go From Here? A Fork in the Road for Multimode Fiber, presented by Robert Reid, senior technical manager in our Data Center business unit.  In the webinar, Robert talks not only about our Signature Core MMF, but also about OM5, SWDM, and other topics surrounding multimode optical fiber and modules.

Finally, you can download our ebook for a comparison of the various fiber types.