Can your infrastructure meet the requirements of MiFID II?

With GDPR still a prevalent concern across the financial services industry, financial institutions face another major regulatory challenge in the form of the Markets in Financial Instruments Directive II (MiFID II). In the UK alone, the Financial Conduct Authority received 1,335 notifications of inaccurate transaction reporting under MiFID II during 2018*.

The directive is multi-faceted. Principally, the EU designed it to offer investors more protection by introducing greater transparency across asset classes, whether equities, fixed income, exchange-traded funds or foreign exchange.

But this has consequences for your underlying networking infrastructure, which must support larger volumes of data, delivered more quickly. This is especially pertinent for trading firms in the High Frequency Trading (HFT) sector, where trimming network latency by nanoseconds translates into increased profits and competitive advantage.

With this in mind, MiFID II mandates latency standards across global banking networks. It also requires communication across those networks to be captured and recorded in real-time, and time-stamped accordingly.

Time-stamping is a critical factor and must be handled correctly: uniform latency across a network helps create a consolidated view of network transactions, all carrying accurate time-stamps.

There are certain technical standards for time-stamping that firms must meet under the new directive. Among these are: choosing the clock that you will use as a reference; indicating the types of organization involved in a trade; defining the type of trade; and setting the level of time-stamp granularity, e.g. microseconds or nanoseconds. If you, as a trader, are dealing with a dual-listed, cross-border stock that spans two time zones, your infrastructure needs to be sufficiently uniform that you can document trades thoroughly and time-stamp them accurately. Once again, latency is the key.
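
To make the granularity point concrete, here is a minimal Python sketch of capturing a UTC time-stamp for a reportable event at a chosen granularity. The function and field names are hypothetical, and it assumes the host clock is already disciplined to UTC (for example via PTP or GPS), which in practice is the hard part.

```python
import time
from datetime import datetime, timezone

def timestamp_event(event_id: str, granularity_ns: int = 1_000) -> dict:
    """Record a UTC time-stamp for an event, truncated to a chosen granularity.

    granularity_ns=1_000 gives microsecond granularity; pass 1 for
    nanoseconds. The host clock must itself be synchronised to UTC.
    """
    now_ns = time.time_ns()                       # nanoseconds since the Unix epoch
    truncated = now_ns - (now_ns % granularity_ns)
    return {
        "event_id": event_id,
        "utc_time": datetime.fromtimestamp(truncated / 1e9, tz=timezone.utc).isoformat(),
        "epoch_ns": truncated,                    # full precision kept as an integer
    }

print(timestamp_event("ORDER-42"))                # hypothetical order identifier
```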

The consequences are even fiercer than with GDPR, as non-compliant companies risk fines of up to €5m, or up to 10% of global turnover**. This is a concern for the 65% of capital market firms across Europe who stated in a 2018 survey that they had no adequate or systematic method in place to monitor trades in accordance with best execution criteria***.

Read this blog to find out how else you should be equipping your network infrastructure to ensure efficiency.  

* https://www.ftadviser.com/regulation/2019/04/10/more-than-1-000-mifid-ii-breaches-reported-to-fca/

** https://www.pwc.ch/en/publications/2018/solgari-industry-report.pdf

*** https://www.finextra.com/blogposting/16488/mifid-ii—one-year-on

Latency is only the start of the challenge

There’s a clear need for a latency standard that can be applied globally across financial institutions. But that’s just one step. The real challenge emerges when you ask why this standard is necessary, and what it means for the future success of your business.

Latency is key to your success because if it isn’t perfectly calibrated, it’ll cost you. According to a study by the Tabb Group, if your trading infrastructure runs even 5ms behind the competition, you could lose as much as $4m in revenue per millisecond.*
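
The arithmetic behind that figure is worth spelling out. A back-of-the-envelope sketch in Python, reading the Tabb numbers literally and purely as illustration:

```python
# Illustrative only: a literal reading of the Tabb Group figure.
lag_ms = 5                            # milliseconds behind the competition
revenue_at_risk_per_ms = 4_000_000    # USD per millisecond of lag
print(f"Revenue at risk: ${lag_ms * revenue_at_risk_per_ms:,}")  # $20,000,000
```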

The reality is that the demand on your digital infrastructure has never been higher. We live in a world of high-speed financial trading. Data needs to be processed, analyzed, and transmitted at lightning speeds to meet the global, mobile, and 24/7 demands for instantaneous transactions and transfers.

Moreover, when positions change in an instant, latency isn’t just a matter of efficiency. It’s a matter of profitability. Which means that your infrastructure must be up to the task if your institution is to remain viable over the coming years.

That’s why it’s vital to have a next-gen digital infrastructure architecture that’s robust and reliable. Joe Skorupa, VP Distinguished Analyst at Gartner Data Centre Convergence, recently commented**, “I have known major financial organizations make multi-million dollar investments only to rip-and-replace them the very next day if a technology comes along that improves their competitive edge.

However, the network hasn’t really changed in the last few decades because network folk are conservative. The reasons are quite clear: if a server in a data center fails, your application goes down; but if your network goes down your entire data center goes down.”

Skorupa highlights the latency issue right here. In order to benefit from super-speed transactions, and make the most of your digital transformation, you need to equalize latency across your entire network. This involves taking an in-depth look at your existing physical infrastructure, and determining where change is required.

Upgrading and consolidating your data center infrastructure can also help to mitigate risk, and future-proof the business, as this blog post explains [http://panduitblog.com/2019/04/29/datacenter/consolidation-the-pros-and-cons-of-putting-your-eggs-in-one-basket/].

As a trusted infrastructure partner, Panduit can help you tackle your latency issues, and ensure the right networking technologies are underpinning your financial services.

* Source: Tabb Group, The Value of a Millisecond: Finding the Optimal Speed of a Trading Infrastructure, April 2008

** Source: https://datacentrenews.eu/story/opinion-automating-the-data-center-with-ibn, October 2018

Consolidation – The pros and cons of putting your eggs in one basket

Consolidating key facilities like data centers has obvious advantages. Chief among these are reduced costs across licensing, energy consumption, and maintenance.

But consolidating facilities also consolidates risk. If your single, major global data center goes down, then the company goes down with it.

So, how do you decide how to execute a consolidation strategy, and put in place the correct procedures, policies, and technologies to mitigate risk?

Here are some quick pointers.

Designing a consolidated hub

Firstly, consolidating physical resources can cut your overhead, operational, and energy costs, so consider doing more with less hardware.

Software-based, ‘server-less’ approaches can help you consolidate onto fewer machines, using cloud computing and technologies such as virtualization and software-defined networking.
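
As a rough sketch of the consolidation arithmetic (all numbers hypothetical): dedicated servers often idle at low utilization, so the same work can fit on far fewer virtualized hosts.

```python
# Hypothetical consolidation arithmetic, not a sizing tool.
physical_servers = 100
avg_utilization = 0.15        # typical under-used dedicated server
target_utilization = 0.75     # sensible ceiling for a virtualized host

work = physical_servers * avg_utilization       # total compute actually consumed
hosts_needed = -(-work // target_utilization)   # ceiling division
print(f"{physical_servers} servers -> {int(hosts_needed)} virtualized hosts")
```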

Consolidating software onto fewer hardware platforms can also reduce software licensing fees.

However, to pull off a move like this you will need to acquire the required expertise, or find a service provider who can design and manage your next-generation data center.

Mitigating risk through infrastructure design

Modern networking infrastructure design can help mitigate risk through techniques such as application orchestration and policy-based actions.

This model takes a bird’s eye view of your business software and services, and ensures everything runs smoothly, shifting resources where they’re needed in a timely way.

Underpinned by a modern network, this can be very effective in maintaining uptime, with cloud technologies such as ‘containerized micro-services’ offering self-managing, self-healing applications that run automatically and scale up and down on demand.
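
To make ‘self-healing’ concrete, here is a toy Python supervisor that restarts a worker process whenever it dies. Container orchestrators such as Kubernetes perform this loop at scale, with health probes, back-off, and rescheduling on top; the worker command here is hypothetical.

```python
import subprocess
import time

def supervise(cmd: list[str], check_every_s: float = 2.0) -> None:
    """Restart cmd whenever the process exits: 'self-healing' in miniature."""
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(check_every_s)
        if proc.poll() is not None:          # process has exited
            print("worker died; restarting")
            proc = subprocess.Popen(cmd)

# supervise(["python", "worker.py"])         # hypothetical worker script
```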

All of this helps mitigate the risk of consolidation.

The downside is that, again, these cutting-edge technologies require technical know-how, investment, and a different approach to managing and monitoring your IT system.

Future-proofed technology

You want to make sure you have the necessary technology to ensure your new crown jewel data center won’t be rendered obsolete before the construction crew even breaks ground.

So, try to make sure you choose open-standards technologies with resources that can be re-purposed and extended without excessive development effort and cost.

Always-on infrastructure

Finally, how do you spec a data center that meets the always-on needs of a modern financial institution and guarantees near-constant uptime, without breaking the bank?

When consolidating resources, you need to build in flexibility, and avoid technology silos by ensuring that your resources are transparent, networked, and shared.

Virtualization is a key element of a successful data center consolidation strategy, and it can help you achieve these things.

A converged network architecture that simplifies operations, accelerates traffic, and makes better use of resources is also a must-have.

This could include fast Fibre Channel or iSCSI networks to connect servers and storage, plus network and storage virtualization, which pools and optimizes your network and storage resources in a cost-effective and efficient way.

There are two main approaches to implementing change: rip and replace the old platform, particularly if it is failing; or develop the two side by side, transitioning applications step by step.

The latter may be more prudent for you, carrying less risk. Either way, as an experienced networking architecture partner, Panduit can help you plan and implement a next-generation network infrastructure.

Find out more at: https://pages.panduit.com/finance-all.html

Panduit Solutions Can Help Prepare Your Network Infrastructure for IIoT Technology

How Can Your Network’s Performance Impact Deploying IIoT Technology?

Use Panduit’s Insights to Build a Robust Network Foundation for Your IIoT Deployment

How can you prepare your network infrastructure to successfully accommodate IIoT technology? Get answers to potential IIoT technology deployment issues that may impact your network infrastructure.

For example, here are some questions you may ask when deploying IIoT technology:

What is the impact of real-time data?

Most networks were not designed to react to and process data in real time. From self-driving cars to digital control systems on factory floors, real-time data is a big part of IIoT deployments. Not being able to act on data in real time can have catastrophic consequences.
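
As a toy illustration of what ‘real time’ means here, the sketch below times one control step against a fixed deadline. The 10 ms budget and the trivial controller are hypothetical stand-ins; real digital control systems enforce such deadlines with real-time operating systems or dedicated hardware.

```python
import time

DEADLINE_MS = 10.0              # hypothetical sense-decide-actuate budget

def control_step(sensor_value: float) -> float:
    return -0.5 * sensor_value  # trivial stand-in for a real controller

start = time.perf_counter()
command = control_step(sensor_value=3.2)
elapsed_ms = (time.perf_counter() - start) * 1000
if elapsed_ms > DEADLINE_MS:
    print(f"deadline missed ({elapsed_ms:.2f} ms): unsafe to actuate")
else:
    print(f"command {command:.2f} issued in {elapsed_ms:.3f} ms")
```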

How does edge computing affect network performance?

Edge computing allows compute, storage, and application resources to be located close to the user or the source of the data. With a cloud deployment, these resources sit in a distant data center owned by the cloud provider, which makes latency difficult to manage. Today, IIoT deployments can benefit more from edge computing than cloud computing.
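
Propagation delay alone shows why proximity matters. A rough sketch, assuming light in fiber covers about 200 km per millisecond; the sites and distances are illustrative:

```python
# Rough propagation-delay comparison: edge site vs. distant cloud region.
FIBER_KM_PER_MS = 200.0    # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

for name, km in [("edge site, 5 km away", 5), ("cloud region, 800 km away", 800)]:
    print(f"{name}: ~{round_trip_ms(km):.2f} ms propagation alone")
```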

How important is the data gathered from sensors?

Sensor data feeding predictive analytics can improve operational efficiency, reduce downtime, and save your business money. The many types and characteristics of sensors are important to consider when deploying IIoT technology.

How important is bandwidth for helping IIoT technology extract information from data?

Bandwidth is now near-ubiquitous, and it is this ubiquity that allows devices to switch seamlessly between networks; connected devices no longer require endless cables and wires. Plentiful bandwidth allows devices to communicate quickly and effectively, which is what makes IIoT possible.

What is the impact of packet loss?

IT network managers dislike packet loss because it steals valuable bandwidth, reducing the link’s available throughput. For OT network managers trying to deploy IIoT, a network’s latency matters more than its bandwidth or throughput. Despite their differences, minimizing lost and corrupted packets requires IT and OT to work together as they transform their networks to leverage IIoT technology.
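
To see how loss erodes throughput, here is the well-known Mathis et al. approximation for steady-state TCP throughput as a function of loss rate (a standard rule of thumb, not something from this post):

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis approximation: throughput <= (MSS / RTT) * (C / sqrt(p))."""
    C = math.sqrt(3 / 2)   # constant for periodic loss
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# 1460-byte MSS over a 1 ms RTT link: compare 0.01% and 1% loss.
for p in (1e-4, 1e-2):
    print(f"loss {p:.2%}: ~{tcp_throughput_bps(1460, 0.001, p) / 1e6:,.0f} Mbit/s")
```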

Panduit has developed a series of white papers describing the challenges surrounding the IIoT’s impact on the typical data center, why IT and OT managers may look at the same problems differently, how they can successfully resolve those problems, and the importance of IT/OT convergence to your network’s performance. In addition, you will learn the following:

  • The importance of IT and OT network infrastructures
  • Why IIoT process controls demand real-time data
  • The relationship between IIoT technology and bandwidth
  • The ways IIoT deployments can benefit from edge computing
  • How to determine the importance of sensor specifications

Access all the papers in our IIoT white paper series.

5 Mega Trends Driving the Future Direction of Data Centers

2018 was a spectacular year for change in the data centre environment. While researching my new paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, I watched various ‘bubbling under’ technologies break through, providing the impetus for some radical cloud environments.

  1. Edge Computing – less edgy, more mainstream – We are seeing leading businesses and organisations invest heavily in technology that will demand both growth of centralised cloud data centre services and a whole new breed of Edge data centres, placing compute capability where it’s needed. Placing analysis and response processing close to the source allows data users to optimise response times. The Edge drives efficient bandwidth utilisation and minimises the connections and physical reach (distance) that introduce latency into the infrastructure. Together with other data growth areas, Edge Computing applications will generate petabytes of data, daily, by 2020. Systems that intelligently process data to create business advantage will be essential to our customers’ future prosperity.
  2. Hyperscale data centre investment – Efficiency gained on the coat tails of giants – Industry titans Google, Amazon, Microsoft, Facebook and Apple, and Asian public cloud players Alibaba and Tencent, are investing heavily not only in new facilities, but in the technology platforms that enable ever faster data transport and processing. The global hyperscale data centre market is expected to grow from $25.08 billion in 2017 to $80.65 billion by 2022. Established businesses competing with the web-scale firms cannot afford to be constricted by legacy technologies; to remain competitive, you must build new platforms and invest in next-generation Internet Protocol (IP) infrastructure.
  3. Solid State Storage – no flash in the pan – Flash storage is replacing disk drives across the industry for high-performance compute environments. Flash technology is on trend with the demand for the higher bandwidth and lower latency required by big data workloads. As our customers’ data volumes increase, new access and storage techniques such as Serial Storage Architecture (SSA) help eliminate data bottlenecks in data centre and Edge environments. Flash offers a more efficient cabinet and rack footprint and far greater power efficiency than disk drives. As the requirement for storage space multiplies, this is a significant advantage.
  4. Artificial Intelligence (AI) – disruption driving growth – AI, together with Machine Learning (ML), requires machine-to-machine communication at network speeds, and generates data volumes that have serious implications for network topologies and connectivity. An example of this is the Ethernet switch market, which has seen incredible growth in shipments of 25 and 100 Gigabit Ethernet (GE) ports. These, and new higher-speed Ethernet ports, will be essential to the growth of AI and ML applications, as the data volumes required are at petabyte scale. We are working with partners on high-speed, high-quality infrastructure and the next-generation topologies to support this data volume growth. Read more on this subject in the report – Light into Money.
  5. Converged technology – simplify to clarify – To build more efficient data centres, it is agreed that simplified designs on flexible infrastructure platforms are required to achieve more agile organisations. We are witnessing increased automation, more integrated solutions and software-defined capabilities that reduce the reliance on silo systems. This allows users to take advantage of highly flexible infrastructure to drive more capacity, monitoring and analysis, and to increase efficiency within the data centre. Converged and hyper-converged infrastructure take advantage of many of the technologies discussed above to build the future cloud.

Understanding how leaders in the market are moving forward provides stepping stones for all of us to develop our platforms and data centres to take advantage of new developments. However, we must not follow blindly; it is essential that our designs create the most effective and efficient solutions for our needs, and we can only do this when we step out of the silo and view the wider opportunities.

Bandwidth Bottleneck – How to De-stress the Data Center Infrastructure

The IT industry does an excellent job of positioning the next great innovation well in advance. We have been just a step away from the internet of things (IoT) for over 20 years, AI (Artificial Intelligence) has been around for as long as I can remember, and solid-state memory is set to take over from disk drives and tape, speeding access and saving space, energy and resources. The maturity of a technology can be mapped using the ‘hype cycle’ concept model; in simple terms, as time moves forward the ‘hype’ becomes reality and ‘quantum leaps’ come ever closer.
Explosive data growth and the need for ubiquitous storage and processing are undisputed, which leaves the question: is it time to believe the hype?

Preparing for tomorrow is crucial for business survival

In data center network communications, multiple technologies are converging to support the growth of emerging, data-intensive applications, from e-health, media and content delivery to sensor-connected devices and autonomous vehicles.

With volumes of data set to grow exponentially, the ability to gather, store, process and transmit data across the data center will be seriously hindered without infrastructure that meets latency and bandwidth performance requirements now, and for the foreseeable future.

Indeed, when technologies such as AI and Machine Learning (ML) become mainstream, individual data sets will run to hundreds of terabytes. Meanwhile, machine-to-machine (M2M) data is expected to outstrip enterprise and personal data within the next five years. This increase in data traffic is already creating bottlenecks within legacy data centers, with every gateway and connection reducing the overall performance potential of the system.

My latest research white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, investigates the drivers for the current and next generation of infrastructure needed to support the data center industry and facilitate the high-bandwidth, low-latency platforms required in the multi-petabyte traffic era.

With an understanding of the opportunities available and the technologies influencing change, we can plan better and prepare our structures to operate at the most appropriate levels. We can learn from the hyperscale designers, who work with equipment manufacturers to optimize systems for their intended use and attract these fast-growing applications into the cloud.

Each of these technology advances reflects the rapid growth of the global digital economy which is creating demand for greater network speed and performance from the internet backbone right into the core of the data center.

A key challenge for the network infrastructure is the ever-growing demand for speed – 10GE, 25GE, 40GE, 50GE and 100GE today, with 200GE and 400GE predicted to roll out as early as 2019. Together with new network architectures designed to maximise performance, the physical infrastructure must be designed to enable rapid and seamless deployment of new switching technologies.
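
A quick, idealised calculation shows what those port speeds mean at petabyte scale, assuming a single fully saturated link and ignoring protocol overhead:

```python
# Idealised transfer time for one petabyte at various Ethernet speeds.
PETABYTE_BITS = 1e15 * 8

for gbps in (10, 25, 40, 50, 100, 400):
    hours = PETABYTE_BITS / (gbps * 1e9) / 3600
    print(f"{gbps:>3} GE: ~{hours:5.1f} hours per petabyte")
```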

Data bottlenecks will continue to be a growing problem if infrastructure and data center businesses focus on short-term fixes. Network infrastructure is as vital as data center power and cooling; without appropriate investment, it can significantly reduce both life cycle and ROI.

My white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, is free to download.

Investing in the future: collective thinking in facility design

Future-proofing facilities while leveraging previous investments

A new generation of facilities is being designed and constructed around the globe. A key facility design challenge is ensuring the systems and infrastructure involved will not only deliver new advantage but also function seamlessly with (and add value to) the other parts of a company’s ecosystem, including legacy systems and existing capital projects. Old and new primary investments need to work together harmoniously to deliver a more productive and profitable future.

Future-Proofed Facility Design White Paper

READ THE WHITE PAPER: Why state-of-the-art facilities require state-of-the-art infrastructure

In this age of digital transformation, data underpins modern business, connectivity is key, and operational scaling is a fact of life. This is why corporate facilities in banking, finance, and every other sector are being conceived to take advantage of the opportunities offered by this new landscape. Getting the infrastructure right, the strongest underpinning, is crucial. Continuing with the banking example, companies such as HSBC, JP Morgan Chase, Credit Suisse and Citibank (or their outsourcing partners) are doing precisely that.

The data center, now evolving into next-gen digital infrastructure architecture, has provided the core of banking operations for generations. Today, such data centers are expected to work smarter and do more to process and store vastly increased volumes of data, globally, quicker than ever. They must be always available, with no delays.

As a result, global heads of facilities and real estate want assurances they are investing in the right technical infrastructure, maximizing the ability of the organization’s IT to, for instance, deploy workload in the right places and deliver the right services to users and customers at the right time (and at the right price), integrating with still-valuable legacy systems where necessary. This requires technology that is both reliable and flexible, based on global standards, and it requires working with acknowledged leaders in the field.

At a basic level, it can mean tried-and-tested cabling – the strongest physical foundations – and ensuring an overall standards-based approach that is not only optimized for interoperability and performance but also addresses a multitude of other facilities (and cost) requirements, from energy efficiency to cooling optimization, even space considerations. By looking at the bigger picture and applying joined-up thinking when making technology choices that affect facility design, facilities and real estate leaders – in partnership with IT and procurement teams – can ensure both connectivity and investment protection. This, in turn, can have a real impact on the bottom line as infrastructure converges, data volumes increase exponentially, and the pace of business continues to speed up.

To find out more about how you can future-proof your facilities while leveraging previous investments, read our report, “Why State-of-the-art Facilities Require State-of-the-art Infrastructure.”

Building the next-gen data centre: global, connected, ready for business

With modern business defined by data and by connectivity, tomorrow’s data centre will bear little resemblance to today’s models.

What we currently think of as a data centre is being superseded by next-gen digital infrastructure architecture: global in scale and defined by the business services it delivers and the user/consumer requirements that it satisfies. According to a recent Gartner, Inc. report, infrastructure and operations people tasked with data centres will have to focus on “enabling rapid deployment of business services and deploying workloads to the right locations, for the right reasons, at the right price”.

These super-charged requirements, and that unstoppable focus on data, mean the most robust, reliable and flexible infrastructure – physical, electrical and network – will be paramount. Gartner also added that, by 2025, eighty percent of enterprises will have shut down their traditional data centre versus ten percent today. The key word is “traditional”.

With the rise of next-gen digital infrastructure architecture, workload placement becomes a critical driver of successful digital delivery. That, in turn, is underpinned by performance, availability, latency, scalability, and so on. Indeed, Gartner suggests an “ecosystem” is required to enable “scalable, agile infrastructures”.

What’s the best way to engage with this era of digital transformation, interconnect services, cloud, edge services and the Internet of Things (IoT) if you’re planning or preparing to replace your data centre? The optimum digital infrastructure architecture (aka the modern data centre) to meet requirements for the next five, ten or 15 years will, as ever, depend on each organisation’s priorities. There’s no simple answer.

For some, a major step will be to ensure the strongest physical foundations, including cabling, pathways and security. Many organisations will need an effective way to “bridge the gap” from old-world data centres and stacks into converged networks and infrastructure. At the same time, data centre infrastructure management tools can help improve energy efficiency and reduce costs.

Perhaps a through line in all situations is ensuring the right connectivity solutions: to increase network throughput, reduce latency, improve agility, ensure scalability, and so on. That way, you’re not only ready for the opportunities presented by the Internet of Things – you’ll be ready for the Internet of Everything.

To learn more about ensuring you have the right connectivity solutions at your core, read the report: https://pages.panduit.com/finance-all.html

Digitizing History for Future Preservation with Data Center Solutions

How the Vatican Apostolic Library Preserved its Manuscript Collection

The Vatican Apostolic Library preserves its invaluable documents with the help of a robust, highly available network infrastructure.

Undergoing a massive data transfer process is not easy, but the Vatican Apostolic Library did just that. Panduit’s previous success in enhancing the connectivity and performance for the Vatican Apostolic Library’s main data center earned it the trust to help digitize and protect more than 80,000 priceless historical manuscripts.

Founded in 1451, the Vatican Apostolic Library holds precious material from figures as renowned as Michelangelo and Galileo. To preserve the collection and continue to contribute to the worldwide sharing of knowledge, the 15th-century library decided to digitize its ancient and increasingly delicate manuscripts.

To successfully complete this project, the library’s Belvedere Court building needed a more efficient data center infrastructure to support document storage. The library also needed solutions to address power and energy usage challenges, capacity constraints, environmental and connectivity issues, and security and access control requirements.

Adapting to the constraints of the ancient structure, Panduit developed a solution with security, storage, and power management.

The building now uses hot-aisle containment with hot/cold air separation inside the cabinets for improved airflow – delivering a power savings of nearly 30% compared to the previous system.

SmartZone solutions simplified the library’s network infrastructure, managing and monitoring rack power distribution units and environmental sensors through a single IP address. For enhanced data center security, the gateways support access via intelligent handles on cabinets.

The Vatican Apostolic Library now has the capability to support the vast amount of data generated by the digitization project, ensuring high reliability and elevated transmission speed. Because of Panduit’s network, people around the world have online access to these invaluable treasures.

Read the full article here.

Innovation 2.0

At Panduit, we take pride in finding new solutions to old problems (and new ones, too!). And when we work with the best customers around to help them find solutions to their problems, that’s even better. Last week, Cabling Installation & Maintenance presented their annual Cabling Innovators Awards. And, for the second year, several Panduit projects were recognized as being the best of the best. Without further ado, I’m proud to present a snapshot of our honorees and their cabling innovations.

Purdue University


CI&M Chief Editor Patrick McLaughlin (left) and Group Publisher Alan Bergstein (right), present a Gold Cabling Innovators Award to (from left) Tom Kelly, Director of Business Development, Enterprise Solutions, Panduit; Daniel Pierce, Telecommunications Design Engineer, Purdue University; and Dennis Renaud, Vice President, Enterprise Solutions, Panduit.

Purdue embarked on a project during the 2014-15 school year to update and expand wireless coverage on campus. For today’s students, wireless access isn’t a luxury, it’s a necessity. Information Technology at Purdue tackled the upgrade in two phases: one to add coverage in residence halls, and a second to add density in academic buildings and common areas. The residence hall project caught the judges’ eyes for innovation, as the university relied on Panduit’s surface raceway and 28 AWG patch cords, along with Cisco 702 access points, to deliver wireless throughout the residence halls. The raceway/patch cord/AP solution provided the wireless performance they needed while keeping the aesthetic already in place for their wired connections.

The Purdue wireless project was named a “Boilermaker” Gold honoree by the CI&M judges.

Global Insurance and Financial Services Firm

Panduit’s small-diameter cabling is at the heart of the solution installed by a global insurance and financial services firm to optimize the space in its telecommunications rooms. Switch harnesses with 28 AWG patch cabling have provided four main benefits:

  1. Time: the quick-connect feature cuts installation time from about an hour per RU to 20 minutes per RU … and we all know that time means money!
  2. Space savings and cable management: Because of the small size, more cabling fits, saving rack space for equipment rather than cable management; it also simplifies cable management, making moves, adds, and changes simple.
  3. Single length: The company uses one length of patch cord everywhere, which eliminates ordering and installation errors.
  4. Standardization: Every telecommunications room at all of their sites is deployed with the same footprint, making installation and management easier for everyone involved.

CI&M’s judges awarded this project a gold award.

CenterPoint Energy

Texas-based CenterPoint Energy presented a Texas-sized issue: they wanted to unify their IT physical infrastructure platforms across their internal business units, and within each facility. Multiple vendors, multiple sites, and multiple cities equals multiple headaches. CenterPoint standardized its data center operations around a Panduit Intelligent Data Center solution, including Data Center Infrastructure Management (DCIM) software, hardware, and infrastructure offerings. This solution was end-to-end Panduit: fiber and copper cabling, dual cable pathways, PDUs, overhead patching, cooling optimization, grounding and bonding, and thermal containment. “We required a solutions provider that could deliver comprehensive technological advancements while helping us ensure business continuity,” said CenterPoint’s Tom Tanous, senior manager of Business Reliance and Data Center Management.


The CenterPoint project was named a silver honoree by CI&M judges.

CyrusOne


Winner of a Silver Cabling Innovators Award was data center provider CyrusOne. CI&M Chief Editor Patrick McLaughlin (left) and Group Publisher Alan Bergstein (right) presented the award to (from left) Dennis Renaud, Vice President of Enterprise Solutions, Panduit; CyrusOne Chief Information Officer Blake Hankins; and Panduit’s Tom Kelly, Director of Business Development, Enterprise Solutions.

With more than 3 million square feet of rentable data center space, CyrusOne is one of the largest data center providers in the U.S., with global customers relying on CyrusOne’s colocation services. Their new Austin Data Center II has been optimized with Panduit’s SynapSense software, delivering energy savings and increased efficiency by continuously aligning cooling capacity with changes in IT load.

“Panduit has enabled our customers to essentially keep tabs on their servers in CyrusOne’s facility with a level of data access and detail comparable to operating a data center of their own,” said Amaya Souarez, vice president of CyrusOne’s Data Center Systems & Security. “Plus, we’ve experienced both operational and power efficiencies. It’s quite incredible!”

CyrusOne was recognized as a silver honoree by the CI&M judges.

The Innovators Awards were judged based on the following criteria:

  • Innovative
  • Value to the User
  • Sustainability
  • Meeting a Defined Need
  • Collaboration
  • Impact

Alan Bergstein, publisher of Cabling Installation & Maintenance (http://www.cablinginstall.com), said, “This prestigious program allows Cabling Installation & Maintenance to celebrate and recognize the most innovative products and services in the structured cabling industry. Our 2016 Honorees are an outstanding example of companies who are making an impact in the industry.”

Congratulations to all of these outstanding customers for their efforts. Panduit is proud to share these awards with all of you.