Infrastructure Talent Needs for Cutting-Edge Data Centers

Part 2: Insights from industry expert Peter Kazella

In the second of our two-part blog series with industry expert Peter Kazella of Pkaza, who has spent 12 years recruiting for Data Center Facilities roles, we discuss what it takes to go live with a newly built data center and what to look out for when building your team in an ultra-tight market.

As more data centers are getting constructed and going online, what staffing needs contribute to going live?

Having the right team on board, including partnering with the right vendors, is crucial: you need a team that constantly stays current as new technology is introduced.

Right now for Pkaza, one of our highest-demand jobs is that of the Commissioning Agent. It is their job to test the many mechanical (HVAC), electrical, and building controls systems of the data center to make sure they operate to spec before the data center goes live. Many data center operators (i.e., end users) will contract third-party commissioning firms with electrical, mechanical, and controls engineering expertise to test and inspect these systems before they flip the “On” switch.

They will test the backup power equipment, such as generators and uninterruptible power supplies (UPSs), as well as the components that make up the massive cooling systems, like computer room air conditioning (CRAC) units, chillers, and cooling towers.

Many of these professionals are degreed mechanical and electrical engineers, but they don’t have to be.

Very bright and experienced power and cooling technicians with expertise in equipment repair and maintenance are very good candidates for these roles. Military veterans from the Navy’s Nuclear Engineering program (EMNs, ETNs, and MMNs are the most sought after) or any other branch that supports power generation are typically solid candidates after active duty.

Their background in a critical environment that revolves around stringent operational procedures is a good match for these roles. Beyond the expertise the job demands, it also requires a large amount of travel, which makes finding the right people a challenge.

Many data centers will also start to hire their facility operations teams during this process. These are the managers and critical facilities technicians that will be monitoring and maintaining the equipment (electrical, mechanical, and controls) once the data center is up and running.

By observing the commissioning process, these technicians gain a deeper understanding of the procedures needed to keep the equipment running and of what to do in the unlikely event of equipment failure. These techs can also suggest equipment changes if they observe issues in the initial startup phase. They create the MOPs (methods of procedure) and SOPs (standard operating procedures) used to maintain and operate the equipment, which is a very important part of being a commissioning agent as well.

What potential challenges and opportunities exist for data centers looking to hire as their infrastructure modernizes?

The data center industry has a shortage of specialized training and education programs that focus on the data center market, though over the last 15 years or so a number of programs have been developed with a focus on data center management. One example is the Marist College Institute for Data Center Professionals (IDCP), founded in 2004, which offers college-level accredited education designed specifically for those who wish to advance their data center careers. It is a 100% online learning program and covers important areas like cybersecurity and data center infrastructure. Another recognized program is Uptime Institute’s Accredited Tier Designer program for licensed professional engineers. We also like what we are seeing from a brand-new data center educational program called CMCO (Certified Mission Critical Operator). This is the core curriculum being used at Northern Virginia Community College and other community colleges and universities, offered as a degree program called Engineering Technology: Data Center Operations Specialization.

Attending data center industry events like DCD, AFCOM’s Data Center World, or the 7×24 Exchange Conference is also a great way to stay current with constantly evolving technology. These events offer the opportunity to discuss changes in the industry with peers and the chance to see firsthand the new technologies developed by the manufacturers that support our industry. Otherwise, the opportunity exists with companies such as Pkaza that specialize in placing these types of data center experts with vendors and colocation providers, in both full-time and consulting roles, across the critical facilities industry.

Conclusion

A big thank you again to Peter Kazella for all his insight on current trends and for keeping us informed on what to look for in the future. At Panduit, we know that redundancy in electrical power components and cooling backups is the core of data center reliability. For more information about improving your operation through wireless monitoring, check out our white paper, Improved Reliability Through Wireless Monitoring and Control.

Thanks for checking out our new expert Q&A series. Follow us on LinkedIn and Facebook or sign up for Panduit’s mailing list to get alerted when our next conversation with an expert goes live.

Data Center Infrastructure Trends & Talent Needs

Part 1: Insights from industry expert Peter Kazella

As you modernize, upgrade, and invest in your data center, your team needs skilled technicians and leaders if you want to take full advantage of the new trends and technology available to you. Hiring and retaining people who have the skills and experience to keep pace with the evolution of technology is crucial.

We spoke with industry expert Peter Kazella of Pkaza, a twelve-year veteran Data Center Facilities recruiter, to learn about the most relevant opportunities and challenges on the horizon in data center solutions. In the first of a two-part blog series, we discuss recent innovations, the shortage of specialized talent, and how to find the best people for your data center.

Pkaza’s recruiting niche is staffing for the mission critical facilities market, with a focus on the facilities side as it pertains to the power and cooling systems within the data center. This includes engineering design, commissioning, construction, field service, and facilities operations of these critical environments, which “allows the IT side to operate ceaselessly without experiencing any type of outage.”

Let’s dive into our recent Q&A:

What kind of infrastructure innovations are you seeing your data center clients moving toward?

First off, thank you for the opportunity to discuss hiring needs in the data center industry.

We have been seeing a steady movement of enterprise users migrating toward the colocation/cloud market as the cost of maintaining their own data centers continues to rise and keeping up with changing technologies gets more challenging and expensive. It’s easier for companies to realize these technical advances through a data center colocation provider.

A colocation data center is typically able to implement these innovations, since its expertise is providing uninterruptible power and cooling and the network infrastructure to send and receive data. In the enterprise market (companies that own a data center, though the data center is not their primary business), new technologies can find it harder to gain traction because the data center exists to support the company’s primary business. We are also seeing a big play on hyperscale, custom modular builds, with BMS (building management system) controls taking a bigger part in optimizing cooling and power efficiency.

Describe why you’re seeing an increase in data centers being constructed and going online, yet you’re seeing a shortage of available talent.

The short answer is a supply-and-demand issue: our data center clients require professionals who have deep experience constructing these critical facilities. Building a data center is unique because of the enormous amount of electrical and cooling equipment that is installed.

This requires someone with expert MEP (mechanical, electrical, and plumbing) experience on projects in the $50–500 million range. Because the market requires such specialized talent and the talent pool is small, a talent vacuum has formed, driving up salary levels for candidates in this market.

Most companies realize this and still have trouble sourcing talent, since actually finding these candidates, regardless of whether you are open to paying more, is still hard. This is obviously a good thing if you are a recruiter in my shoes, as this issue keeps my company extremely busy.

You help hire for a variety of data center positions, ranging from field service and CF (critical facilities) operations to construction and commissioning. What are some of the infrastructure pain points leading to these hires? What new skills do you look for in new hires?

One of the biggest challenges the data center industry has to deal with is controlling and monitoring these critical facilities to ensure continuous reliability at a competitive price point. The equipment needed to run the data centers is expensive, and products tend not to communicate with each other, which is why controls expertise and BMS/BAS systems are increasingly sought after.

There is no “ERP” or enterprise software available that allows companies to monitor and control all their equipment on a single platform, as there is on the IT side with products such as SAP, hence the push for controls and automation hiring.

Our clients are hiring people with BAS (Building Automation System) or EPMS (Electrical Power Management Systems) expertise. When optimized, these systems can significantly bring down the cost of powering the building. Any cost savings found will contribute to a company’s bottom line.

Conclusion

Panduit would like to thank Peter for taking the time to chat with us and our readers, and to help us see beyond the horizon of this evolving industry. To learn more about the use of both on-premises and hosted data centers, check out our white paper, Optimizing Infrastructure for Hybrid Data Center Strategies.

There’s certainly a lot to consider when finding the right talent for your growing business, and we hope that Peter’s insights helped you to better understand what your next move should be. Join us next time with Peter when we discuss what it takes to go live and new opportunities and challenges.

Thanks for checking out our new expert Q&A series. Follow us on LinkedIn and Facebook or sign up for Panduit’s mailing list to get alerted when our next conversation with an expert goes live.

Can your infrastructure meet the requirements of MiFID II?

With GDPR still a prevalent concern across the financial services industry, financial institutions face another major regulatory challenge in the form of the Markets in Financial Instruments Directive II (MiFID II). In the UK alone, the Financial Conduct Authority received 1,335 notifications of inaccurate transaction reporting under MiFID II during 2018.*

The directive is multi-faceted. Ostensibly, the EU designed it to offer more protection to investors by introducing greater transparency to asset classes, whether they’re equities, fixed income, exchange traded funds or foreign exchange.

But this has consequences for your underlying networking infrastructure, which is required to support greater and more timely data transactions. This is especially pertinent for trading firms in the High Frequency Trading (HFT) sector, where trimming network latency by nanoseconds results in increased profits and competitive advantage.

With this in mind, MiFID II mandates latency standards across global banking networks. It also requires communication across those networks to be captured and recorded in real-time, and time-stamped accordingly.

Time-stamping is a critical factor and requires correct handling: uniform latency across a network helps create a consolidated view of network transactions, all carrying accurate time-stamps.

There are certain technical standards for time-stamping that firms must meet under the new directive. Among these are: choosing the clock that you will use as a reference; indicating the type of organizations involved in a trade; defining the type of trade; and the level of time-stamp granularity, e.g. microseconds or nanoseconds. If you, as a trader, are dealing with a dual-listed, cross-border stock spanning two time zones, your infrastructure needs to be sufficiently uniform that you can document trades well and time-stamp them accurately. Once again, latency is the key.
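As a rough illustration of the granularity point, here is a minimal Python sketch (the function names are our own, not from any MiFID II toolkit) that captures an event time in UTC at a chosen granularity. It assumes the host clock is already disciplined to UTC, for example via PTP; a production trading system would need a traceable time source rather than the operating system clock.

```python
# Illustrative sketch only; not a MiFID II-certified implementation.
# Assumes the host clock is already synchronized to UTC (e.g. via PTP).
import time
from datetime import datetime, timezone

def utc_timestamp(granularity="microseconds"):
    """Capture the current UTC time at the requested granularity."""
    ns = time.time_ns()  # integer nanoseconds since the Unix epoch
    if granularity == "nanoseconds":
        return ns
    if granularity == "microseconds":
        return ns // 1_000
    raise ValueError(f"unsupported granularity: {granularity}")

def report_label(ts_us):
    """Render a microsecond timestamp as an ISO 8601 UTC string for reporting."""
    return datetime.fromtimestamp(ts_us / 1_000_000, tz=timezone.utc).isoformat()

event = utc_timestamp("microseconds")
print(report_label(event))  # e.g. 2019-04-10T09:15:32.123456+00:00
```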

The consequences are even fiercer than with GDPR, as non-compliant companies risk fines of up to €5m, or up to 10% of global turnover**. This is a concern for the 65% of capital market firms across Europe who stated in a 2018 survey that they had no adequate or systematic method in place to monitor trades in accordance with best execution criteria***.

Read this blog to find out how else you should be equipping your network infrastructure to ensure efficiency.  

* https://www.ftadviser.com/regulation/2019/04/10/more-than-1-000-mifid-ii-breaches-reported-to-fca/

** https://www.pwc.ch/en/publications/2018/solgari-industry-report.pdf

*** https://www.finextra.com/blogposting/16488/mifid-ii—one-year-on

Latency is only the start of the challenge

There’s a clear need for a latency standard that can be applied globally across financial institutions. But that’s just one step. The real challenge emerges when you ask why this standard is necessary, and what it means for the future success of your business.

Latency is key to your success because if it isn’t perfectly calibrated, it’ll cost you. According to a study by the Tabb Group, if your infrastructure allows even 5 ms of lag, you could lose an astounding $4m per millisecond across transactions* – a potential $20m exposure for those five milliseconds.
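A minimal back-of-envelope sketch of that figure in Python, assuming the cited cost applies linearly per millisecond of lag (an assumption of ours, not the study’s):

```python
# Illustrative arithmetic only, based on the Tabb Group figure cited above.
lag_ms = 5          # infrastructure lag, in milliseconds
cost_per_ms = 4e6   # cited potential loss per millisecond, in dollars

potential_loss = lag_ms * cost_per_ms  # assumes a linear cost model
print(f"${potential_loss:,.0f}")       # -> $20,000,000
```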

The reality is that the demand on your digital infrastructure has never been higher. We live in a world of high-speed financial trading. Data needs to be processed, analyzed, and transmitted at lightning speeds to meet the global, mobile, and 24/7 demands for instantaneous transactions and transfers.

Moreover, when positions change in an instant, latency isn’t just a matter of efficiency. It’s a matter of profitability. That means your infrastructure must be up to the task if your institution is to remain viable over the coming years.

That’s why it’s vital to have a next-gen digital infrastructure architecture that’s robust and reliable. Joe Skorupa, VP Distinguished Analyst at Gartner Data Centre Convergence, recently commented*, “I have known major financial organizations make multi-million dollar investments only to rip-and-replace them the very next day if a technology comes along that improves their competitive edge.

However, the network hasn’t really changed in the last few decades because network folk are conservative. The reasons are quite clear: if a server in a data center fails, your application goes down; but if your network goes down your entire data center goes down.”

Skorupa highlights the latency issue right here. In order to benefit from super-speed transactions, and make the most of your digital transformation, you need to equalize latency across your entire network. This involves taking an in-depth look at your existing physical infrastructure, and determining where change is required.
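As a first pass at spotting uneven paths, a sketch like the following can help; the endpoints are hypothetical placeholders, and a serious audit would use hardware timestamping rather than application-level timing, which only catches gross outliers:

```python
# Hypothetical sketch: time TCP connections to internal endpoints to
# flag paths whose latency diverges from the rest of the network.
import socket
import time

ENDPOINTS = [("10.0.0.10", 443), ("10.0.1.10", 443), ("10.0.2.10", 443)]

def connect_latency_ms(host, port, timeout=1.0):
    """Measure one TCP connection setup time, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

for host, port in ENDPOINTS:
    try:
        print(f"{host}:{port} -> {connect_latency_ms(host, port):.2f} ms")
    except OSError as exc:
        print(f"{host}:{port} unreachable ({exc})")
```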

Upgrading and consolidating your data centre infrastructure can also help to mitigate risk, and future-proof the business, as this blog post explains [http://panduitblog.com/2019/04/29/datacenter/consolidation-the-pros-and-cons-of-putting-your-eggs-in-one-basket/].

As a trusted infrastructure partner, Panduit can help you tackle your latency issues, and ensure the right networking technologies are underpinning your financial services.


* Source: https://datacentrenews.eu/story/opinion-automating-the-data-center-with-ibn, October 2018

* Source: The Value of a Millisecond: Finding the Optimal Speed of a Trading Infrastructure, Tabb Group, April 2008

Consolidation – The pros and cons of putting your eggs in one basket

Consolidating key facilities like data centers has obvious advantages. Chief among these are reduced costs for licensing, energy consumption, and maintenance.

But consolidating facilities also consolidates risk. If your single, major global data center goes down, then the company goes down with it.

So, how do you decide how to play a consolidated strategy, and put in place the correct procedures, policies, and technologies to mitigate risk?

Here are some quick pointers.

Designing a consolidated hub

Firstly, consolidating physical resources can cut your overhead, operational, and energy costs, so consider doing more with less hardware.

Software-based ‘server-less’ computing can help you consolidate onto fewer machines using cloud computing and technology such as virtualization and software-defined networking.

Consolidating software onto fewer hardware platforms can also reduce software licensing fees.

However, to pull off a move like this you will need to acquire the required expertise, or find a service provider who can design and manage your next-generation data center.

Mitigating risk through infrastructure design

Modern networking infrastructure design can help mitigate risk through techniques such as application orchestration and policy-based actions.

This model takes a bird’s eye view of your business software and services, and ensures everything runs smoothly, shifting resources where they’re needed in a timely way.

Underpinned by a modern network, this can be very effective in maintaining uptime, with cloud technologies such as ‘containerized micro-services’ offering self-managing, self-healing applications that run automatically and scale up and down via the cloud.
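To make ‘self-healing’ concrete, here is a toy supervisor loop in Python; the worker script name is hypothetical, and real container platforms (Kubernetes, for example) achieve the same effect with liveness probes and replica controllers rather than anything this simple:

```python
# Toy illustration of self-healing: restart a worker whenever it dies.
import subprocess
import time

CMD = ["python", "worker.py"]  # hypothetical worker process

def supervise():
    while True:
        proc = subprocess.Popen(CMD)   # launch the worker
        proc.wait()                    # returns only when it exits or crashes
        print(f"worker exited with code {proc.returncode}; restarting in 2 s")
        time.sleep(2)

if __name__ == "__main__":
    supervise()
```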

All of this helps mitigate the risk of consolidation.

The downside is that, again, these cutting-edge technologies require technical know-how, investment, and a different approach to managing and monitoring your IT system.

Future-proofed technology

You want to make sure you have the necessary technology to ensure your new crown jewel data center won’t be rendered obsolete before the construction crew even breaks ground.

So, try to make sure you choose open-standards technologies with resources that can be re-purposed and extended without excessive development effort and cost.

Always-on infrastructure

Finally, how do you spec a data center for the always-on needs of a modern financial institution, one that can guarantee near-constant uptime without breaking the bank?

When consolidating resources, you need to build in flexibility, and avoid technology silos by ensuring that your resources are transparent, networked, and shared.

Virtualization is a key element of a successful data center consolidation strategy, and it can help you achieve these things.

A converged network architecture that simplifies the network, accelerates traffic, and makes efficient use of resources is also a must-have.

This could include fast Fibre Channel or iSCSI networks to connect servers and storage, plus network and storage virtualization, which pools and optimizes your network and storage resources in a cost-effective and efficient way.

There are two major procedures for implementing change: either rip and replace an old platform, particularly if it’s failing, or develop the two side by side, transitioning applications step by step.

The latter may be more prudent for you, carrying less risk. Either way, as an experienced networking architecture partner, Panduit can help you plan and implement a next-generation network infrastructure.

Find out more at: https://pages.panduit.com/finance-all.html

Panduit Solutions Can Help Prepare Your Network Infrastructure for IIoT Technology

How Can Your Network’s Performance Impact Deploying IIoT Technology?

Use Panduit’s Insights to Build a Robust Network Foundation for Your IIoT Deployment

How can you prepare your network infrastructure to successfully accommodate IIoT technology? Get answers to potential IIoT technology deployment issues that may impact your network infrastructure.

For example, here are some questions you may ask when deploying IIoT technology:

What is the impact of real-time data?

Most networks were not designed to react to and process data in real time. From self-driving cars to digital control systems on factory floors, real-time data is a big part of IIoT deployments. Not being able to act on data in real time can have catastrophic results.

How does edge computing affect network performance?

Edge computing allows the compute, storage, and application resources to be located close to the user or the source of the data. With cloud deployment, these resources are in a distant data center owned by the cloud provider. Deploying IIoT solutions using the cloud makes it difficult to manage latency. Today, IIoT deployments can benefit more from edge computing than cloud computing.
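To see why proximity matters, here is a back-of-envelope Python sketch of fiber propagation delay alone; the distances are hypothetical, light travels through fiber at roughly two-thirds of its vacuum speed, and we ignore switching, queuing, and processing delays, which often dominate in practice:

```python
# Back-of-envelope round-trip propagation delay over fiber.
SPEED_OF_LIGHT_KM_S = 300_000   # vacuum speed of light, km/s
FIBER_FACTOR = 2 / 3            # light in fiber travels at roughly 2/3 c

def round_trip_ms(distance_km):
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for label, km in [("edge site, 5 km away", 5),
                  ("distant cloud region, 1,500 km away", 1500)]:
    print(f"{label}: ~{round_trip_ms(km):.2f} ms round trip")
```

Even before congestion enters the picture, the distant data center starts roughly 15 ms behind the edge site.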

How important is the data gathered from sensors?

Sensor data feeding predictive analytics can improve operational efficiency, reduce downtime, and save money for your business. The many types and characteristics of sensors are important to consider when deploying IIoT technology.

How important is bandwidth for helping IIoT technology extract information from data?

Bandwidth is nearly everywhere. It is this ubiquity of bandwidth that allows devices to switch seamlessly between networks, so connected devices no longer require endless cables and wires. Bandwidth allows us to communicate quickly and effectively, which makes IIoT possible.

What is the impact of packet loss?

IT network managers dislike packet loss because every lost packet must be retransmitted, stealing valuable bandwidth and reducing the link’s available throughput. For OT network managers trying to deploy IIoT, a network’s latency matters more than bandwidth or throughput. Despite their differences, minimizing corrupted packets requires IT and OT to work together as they transform their network to leverage IIoT technology.
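The bandwidth cost of loss can be quantified with the well-known Mathis et al. approximation for steady-state TCP throughput; the sketch below uses hypothetical parameters and ignores timeouts and window limits, but it shows how quickly even modest loss rates erode a link:

```python
# Mathis et al. approximation: throughput <= (MSS / RTT) * 1 / sqrt(p)
from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on steady-state TCP throughput, in Mbit/s."""
    bytes_per_s = (mss_bytes / rtt_s) / sqrt(loss_rate)
    return bytes_per_s * 8 / 1e6

# 1460-byte segments on a 10 ms round-trip path (hypothetical values)
for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:.2%}: <= {tcp_throughput_mbps(1460, 0.010, p):.1f} Mbit/s")
```

Going from 0.01% to 1% loss cuts the achievable throughput by a factor of ten.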

Panduit has developed a series of white papers describing the challenges surrounding the IIoT’s impact on the typical data center, why IT and OT managers may look at the same problems differently, how they can successfully resolve those problems, and the importance of IT/OT convergence to your network’s performance. In addition, you will learn the following:

  • The importance of IT and OT network infrastructures
  • Why IIoT process controls demand real-time data
  • The relationship between IIoT technology and bandwidth
  • The ways IIoT deployments can benefit from edge computing
  • How to determine the importance of sensor specifications

Access all the papers in our IIoT white paper series.

5 Mega Trends Driving the Future Direction of Data Centers

2018 was a spectacular year for change around the data centre environment. While researching my new paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, I watched various bubbling-under technologies break through and provide the impetus for some radical cloud environments.

  1. Edge Computing – less edgy, more mainstream – We are seeing leading businesses and organisations heavily invest in technology that will demand both growth of centralised cloud data centre services and a whole new breed of Edge data centres placing compute capability where it’s needed. Placing analysis and response processing close to the source allows data users to optimise response times. The Edge is driving efficient bandwidth utilisation and minimising the connections and physical reach (distance) that introduce latency into the infrastructure. Together with other data growth areas, Edge Computing applications will generate petabytes of data, daily, by 2020. Systems that intelligently process data to create business advantage will be essential to our customers’ future prosperity.
  2. Hyperscale data centre investment – efficiency gained on the coat-tails of giants – Industry titans Google, Amazon, Microsoft, Facebook, and Apple, and Asian public cloud players Alibaba and Tencent, are investing heavily not only in new facilities but in the technology platforms that are enabling ever faster data transport and processing. The global hyperscale data centre market size is expected to grow from $25.08 billion in 2017 to $80.65 billion by 2022. Established businesses competing with the web-scale firms cannot afford to be constricted by legacy technologies; to remain competitive you must build new platforms and invest in next-generation Internet Protocol (IP) infrastructure.
  3. Solid State Storage – no flash in the pan – Flash storage is replacing disk drives across the industry for high-performance compute environments. Flash technology is on trend with the demand for the higher bandwidth and low latency required by big data workloads. As our customers’ data volumes increase, new access and storage techniques such as Serial Storage Architecture (SSA) help eliminate data bottlenecks in the data centre and Edge environments. Flash offers a more efficient cabinet and rack footprint and far greater power efficiency than disk drives. As the requirement for storage space multiplies, this is a significant advantage.
  4. Artificial Intelligence (AI) – disruption driving growth – AI together with Machine Learning (ML) requires machine-to-machine communications at network speeds, and the data volumes involved have serious implications for network topologies and connectivity. An example of this is seen in the Ethernet switch market, which has seen incredible growth in shipments of 25 and 100 Gigabit Ethernet (GE) ports. These and new higher-speed Ethernet ports will be essential to the growth of AI and Machine Learning applications, as the volumes of data required are at the petabyte scale. We are working with partners on high-speed, high-quality infrastructure and the next generation of topologies to support this data volume growth. Read more on this subject in the report – Light into Money.
  5. Converged technology – simplify to clarify – To build more efficient data centres, it is agreed that simplified designs on flexible infrastructure platforms are required to achieve more agile organisations. We are witnessing increased automation, more integrated solutions, and software-defined capabilities that are reducing the reliance on silo systems. This allows users to take advantage of highly flexible infrastructure to drive more capacity, monitoring, and analysis, and to increase efficiency within the data centre. Converged and hyper-converged infrastructure take advantage of many of the topics discussed above to build the future cloud.

Understanding how leaders in the market are moving forward provides stepping stones for all of us to develop our platforms and data centres to take advantage of new developments. However, we must not follow blindly; it is essential that our designs and solutions create the most effective and efficient solution for our needs, and we can only do this when we step out of the silo and view the wider opportunities.

Bandwidth Bottleneck – How to De-stress the Data Center Infrastructure

The IT industry does an excellent job of positioning the next great innovation in advance. We have been just a step away from the internet of things (IoT) for over 20 years, AI (Artificial Intelligence) has been around for as long as I can remember, and solid-state memory is set to take over from disk drives and tape, speeding access and saving space, energy, and resources. The maturity of a technology can be mapped using the ‘hype cycle’ concept model; in simple terms, as time moves forward the ‘hype’ becomes reality and ‘quantum leaps’ come ever closer.
Explosive data growth and the need for ubiquitous storage and processing are undisputed, which leaves the question – is it time to believe the hype?

Preparing for tomorrow is crucial for business survival

In data center network communications, multiple technologies are converging to deliver the growth of emerging, data-intensive applications, from e-health, media, and content delivery to sensor-connected devices and vehicles.

With volumes of data set to grow exponentially, the gathering, storing, processing, and transmitting of data across the data center will be seriously hindered without infrastructure that meets latency and bandwidth performance requirements now and for the foreseeable future.

Indeed, when technologies such as AI and Machine Learning (ML) become mainstream, individual data sets will run to hundreds of terabytes. Meanwhile, M2M (machine-to-machine) data is expected to outstrip enterprise and personal data within the next five years. This increase in data traffic is already creating bottlenecks within legacy data centers, with every gateway and connection reducing the overall performance potential of the system.

My latest research white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, investigates the drivers for the current and next generation of infrastructure needed to support the data center industry and facilitate the high-bandwidth, low-latency platforms required in the multi-petabyte traffic era.

With an understanding of the opportunities available and the technologies influencing change, we can plan better and prepare our structures to operate at the most appropriate levels. We can learn from the hyperscale designers, who are designing systems with equipment manufacturers to optimize requirements for use and attract these fast-growing applications into the cloud.

Each of these technology advances reflects the rapid growth of the global digital economy which is creating demand for greater network speed and performance from the internet backbone right into the core of the data center.

Key challenges for the network infrastructure are the ever-growing demand for faster speeds – 10GE, 25GE, 40GE, 50GE, and 100GE today, with 200GE–400GE rollout predicted as early as 2019. Together with new network architectures designed to maximise performance, the physical infrastructure must be designed to enable rapid and seamless deployment of new switching technologies.

Data bottlenecks will continue to be a growing problem if infrastructure and data center businesses focus on short-term fixes. Network infrastructure is as vital as data center power and cooling; without appropriate investment, it could significantly reduce both the life cycle and the ROI of the facility.

My white paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, is free to download.

Investing in the future: collective thinking in facility design

Future-proofing facilities while leveraging previous investments

A new generation of facilities is being designed and constructed around the globe. A key facility design challenge is ensuring the systems and infrastructure involved will not only deliver new advantage but also function seamlessly with (and add value to) the other parts of a company’s ecosystem, including legacy systems and existing capital projects. Old and new primary investments need to work together harmoniously to deliver a more productive and profitable future.


READ THE WHITE PAPER: Why state-of-the-art facilities require state-of-the-art infrastructure

In this age of digital transformation, data underpins modern business, connectivity is key, and operational scaling is a fact of life. This is why corporate facilities in banking, finance, and any other sector are being conceived to take advantage of the opportunities offered by this new landscape. Getting the infrastructure right, the strongest underpinning, is crucial. Continuing with the banking example, companies such as HSBC, JP Morgan Chase, Crédit Suisse and CitiBank (or their outsourcing partners) are doing precisely that.

The data center, now evolving into next-gen digital infrastructure architecture, has provided the core of banking operations for generations. Today, such data centers are expected to work smarter and do more to process and store vastly increased volumes of data, globally, quicker than ever. They must be always available, with no delays.

As a result, global heads of facilities and real estate want assurances they are investing in the right technical infrastructure, maximizing the ability of the organization’s IT to, for instance, deploy workload in the right places, and deliver the right services to users and customers at the right time (and at the right price) – integrating with still-valuable legacy systems where necessary. This requires technology that is both reliable and flexible, based on global standards, as well as working with acknowledged leaders in the field.

At a basic level, it can mean tried-and-tested cabling – the strongest physical foundations – and ensuring an overall standards-based approach that is not only optimized for interoperability and performance but also addresses a multitude of other facilities (and cost) requirements, from energy efficiency to cooling optimization, even space considerations. By looking at the bigger picture and applying joined-up thinking when making technology choices that affect facility design, facilities and real estate leaders – in partnership with IT and procurement teams – can ensure both connectivity and investment protection. This, in turn, can have a real impact on the bottom line as infrastructure converges, data volumes increase exponentially, and the pace of business continues to speed up.

To find out more about how you can future-proof your facilities while leveraging previous investments, read our report, “Why State-of-the-art Facilities Require State-of-the-art Infrastructure.”

Building the next-gen data centre: global, connected, ready for business

With modern business defined by data and by connectivity, tomorrow’s data centre will bear little resemblance to today’s models.

What we currently think of as a data centre is being superseded by next-gen digital infrastructure architecture: global in scale and defined by the business services it delivers and the user/consumer requirements that it satisfies. According to a recent Gartner, Inc. report, infrastructure and operations people tasked with data centres will have to focus on “enabling rapid deployment of business services and deploying workloads to the right locations, for the right reasons, at the right price”.

These super-charged requirements, and that unstoppable focus on data, mean the most robust, reliable and flexible infrastructure – physical, electrical and network – will be paramount. Gartner also added that, by 2025, eighty percent of enterprises will have shut down their traditional data centre versus ten percent today. The key word is “traditional”.

With the rise of next-gen digital infrastructure architecture, workload placement becomes a critical driver of successful digital delivery. That, in turn, is underpinned by performance, availability, latency, scalability, and so on. Indeed, Gartner suggests an “ecosystem” is required to enable “scalable, agile infrastructures”.

What’s the best way to engage with this era of digital transformation, interconnect services, cloud, edge services and Internet of Things (IoT) if you’re planning or preparing to replace your data centre? The optimum digital infrastructure architecture (aka modern data centre) to meet requirements for the next five, ten or 15 years will, as ever, depend on each organisation’s priorities. There’s no simple answer.

For some, a major step will be to ensure the strongest physical foundations including cabling, pathways and security. Many organisations will need an effective way to “bridge the gap” from old-world data centre and stacks into converged networks and infrastructure. At the same time, data centre infrastructure management tools can help improve energy efficiency and reduce costs. Perhaps a through line in all situations is ensuring the right connectivity solutions: to increase network throughput, reduce latency, improve agility, ensure scalability, and so on. That way, you’re not only ready for opportunities presented by the Internet of Things – you’ll be ready for the Internet of Everything.

To learn more about ensuring you have the right connectivity solutions at your core, read the report: https://pages.panduit.com/finance-all.html