Opinion

Keeping the efficient efficient

Mike Heumann, senior vice president of worldwide sales and marketing at virtualised input/output specialists NextIO, considers where the surge in server virtualisation leaves data centre managers (DCMs) and how they can address the challenges created by increased efficiencies.

A virtual uptake
Virtualisation has been firmly positioned as a key business technology, with Gartner placing it in its top ten priorities for 2012 and a recent poll of IT managers for PC World revealing that over a third are already using server virtualisation.

However, as managers try to realise efficiencies by adding more virtual machines and maximising their existing environment, they face a Catch-22: virtualising means fewer physical servers, but each remaining server now carries more traffic, which means more cables and adapters to deal with the increased levels of input/output (I/O).

Any drawbacks?
In spite of the notable advantages that server virtualisation presents, its growth and development have revealed fault lines that will impede the technology's future progression if left unaddressed.

Last year, Gartner noted: "Most organisations will be well-served by focusing attention on the challenges within the server rack." Undoubtedly, a major challenge to the expansion of server virtualisation is still the resultant hardware consumption. VMware, the server virtualisation market leader with around 84% market share, offers two solutions for the increased levels of I/O experienced by servers running at full capacity around the clock.

The first involves traffic segmentation using a series of 1Gb Ethernet links. This allows the server to keep traffic flows physically separate and prioritise them as appropriate. However, it drives up port, cable and switch counts and costs, and limits the number of applications on which VMware can economically be deployed. It also, by its very nature, increases the complexity of server management.
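
To put that cable growth in perspective, here is a rough back-of-the-envelope sketch in Python. The traffic classes, redundancy level, switch port count and rack density are illustrative assumptions rather than figures from any particular deployment, but they show how quickly dedicated 1Gb links multiply:

```python
# Back-of-the-envelope sketch (illustrative assumptions only): how cable and
# port counts grow when each traffic class gets its own dedicated 1GbE links.

# Assumed traffic classes a virtualised host typically keeps physically separate.
traffic_classes = ["management", "vMotion", "VM traffic", "IP storage", "fault tolerance"]

REDUNDANCY = 2          # assumed: two links per class for failover
PORTS_PER_SWITCH = 48   # assumed top-of-rack switch size
SERVERS_PER_RACK = 20   # assumed rack density

cables_per_server = len(traffic_classes) * REDUNDANCY
cables_per_rack = cables_per_server * SERVERS_PER_RACK
switches_needed = -(-cables_per_rack // PORTS_PER_SWITCH)  # ceiling division

print(f"Cables per server: {cables_per_server}")                       # 10
print(f"Cables per rack:   {cables_per_rack}")                         # 200
print(f"1GbE switch ports needed per rack: {cables_per_rack} (~{switches_needed} switches)")
```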

The alternative option offered by VMware is to use heavy-duty 10Gb links alongside its traffic QoS feature. While this method does reduce the cable count (welcome, since by now your server resembles Medusa's head!), it can prove expensive - no DCM wants to knowingly deploy 10Gb connections that will be underutilised. The other issue is that physical separation no longer exists in the server environment: multiple traffic flows are now pushed down the same pipe, adding management complexity to a process that would normally be segmented.
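
To illustrate what that consolidation means, the minimal sketch below shows, in general terms, how a shares-based QoS scheme carves a single 10Gb pipe between competing traffic classes under contention. The share values are assumptions for illustration, and this is a sketch of the general concept rather than VMware's actual implementation:

```python
# Minimal sketch of shares-based bandwidth allocation on one consolidated
# 10Gb link (illustrative of the general QoS concept; share values assumed).

LINK_GBPS = 10.0

# Assumed relative shares per traffic class contending for the same pipe.
shares = {
    "VM traffic": 100,
    "IP storage": 100,
    "vMotion":     50,
    "management":  25,
}

total = sum(shares.values())
for traffic_class, share in shares.items():
    # Each class is guaranteed its proportional slice only when the link is
    # saturated; otherwise classes can borrow unused bandwidth.
    guaranteed = LINK_GBPS * share / total
    print(f"{traffic_class:12s} -> {guaranteed:.2f} Gbps guaranteed under contention")
```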

So while infrastructure virtualisation is a 21st-century necessity - no business is content to waste its data centre resources, not when the difference in server utilisation could be as much as 75% - its very nature causes associated I/O problems. The question becomes: how do you optimise the optimised? The answer lies in how the resulting traffic is handled. VMware offers two solutions, but both involve more cabling and increased capital and operational expenditure. That was never the goal of virtualisation, so it is only right that the same principles VMware started with are applied to the resulting traffic.
 
So what's the solution?
There are two major strands of thinking on I/O consolidation and where it should take place: at the access layer or at the network layer. The access layer sees I/O consolidation take place between the server and the network, while network layer I/O consolidation takes place within the fabric. Various vendors have proposed network layer solutions that rely on Ethernet and FCoE to reduce network switches and adaptors, but it is a methodology that does not support InfiniBand, SAS or other host adapters. That leaves the prudent, forward-thinking DCM with access layer solutions and dedicated top-of-rack devices designed to address the I/O excess.

So let's think about it logically. I/O has always relied on dedicated PCI Express connections residing inside the server. Up to eight cables per rack are connected to them, using multiple switches to pipe data back and forth between the data centre and the network. Server virtualisation increased the traffic, and the cables increased in proportion. But what if all the traffic could be directed to just one PCI Express device, which then segmented and rerouted it virtually? That is a modern-day reality.

Instead of multiple 1Gb lines or several 10Gb Ethernet cables, DCMs can deploy a single PCI Express cable connecting the server to a pool of I/O resources at the top of the rack. In practice it is a little more complicated, and different vendors offer different rack-based solutions, but that is the essence of it: how do you deal with an increase in traffic born of server virtualisation? Virtualise the traffic as well. The beauty of this method is that PCI Express cabling is universal and can carry a number of different protocols and devices (1Gb or 10Gb Ethernet, Fibre Channel, InfiniBand, SAS/SATA, FCoE, iSCSI, GPUs and so on) - and, even better, if you need more bandwidth it can easily be upgraded.
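
As a conceptual sketch only (the class names and fields below are hypothetical and do not represent NextIO's or any other vendor's actual product interface), this is roughly how a top-of-rack appliance might carve virtual adapters for each server out of a shared pool of physical adapters:

```python
# Conceptual sketch (hypothetical names, not any vendor's actual API) of how a
# top-of-rack I/O appliance maps virtual adapters, presented to each server
# over a single PCI Express uplink, onto a shared pool of physical adapters.
from dataclasses import dataclass, field

@dataclass
class PhysicalAdapter:
    kind: str                  # e.g. "10GbE", "8Gb FC", "SAS"
    capacity_gbps: float
    allocated_gbps: float = 0.0

@dataclass
class VirtualAdapter:
    server: str
    kind: str
    bandwidth_gbps: float

@dataclass
class TopOfRackIOPool:
    physical: list[PhysicalAdapter]
    virtual: list[VirtualAdapter] = field(default_factory=list)

    def attach(self, server: str, kind: str, bandwidth_gbps: float) -> VirtualAdapter:
        """Carve a virtual adapter for a server out of the shared physical pool."""
        for phys in self.physical:
            if phys.kind == kind and phys.capacity_gbps - phys.allocated_gbps >= bandwidth_gbps:
                phys.allocated_gbps += bandwidth_gbps
                vadapter = VirtualAdapter(server, kind, bandwidth_gbps)
                self.virtual.append(vadapter)
                return vadapter
        raise RuntimeError(f"No spare {kind} capacity in the rack pool")

# Usage: one PCIe uplink per server, many protocols shared at the top of the rack.
pool = TopOfRackIOPool(physical=[PhysicalAdapter("10GbE", 10.0), PhysicalAdapter("8Gb FC", 8.0)])
pool.attach("server-01", "10GbE", 2.0)   # Ethernet slice for VM traffic
pool.attach("server-01", "8Gb FC", 4.0)  # Fibre Channel slice for SAN access
```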

The market for access-level I/O virtualisation solutions is relatively young, but it is already essential that DCMs consider aspects such as the type of transport used (look out for post-installation traffic speeds and adaptor costs - a nasty sting in the tail of any implementation!) and protocol adaptability (make sure your solution supports storage protocols such as SATA, for example, if you have an incumbent SAN), as these will all affect the time it takes to stand up new servers, and therefore the cost savings and efficiencies your business will see. With more virtualised data centre environments in existence than ever before, virtualised I/O is surely the next piece of the efficiency puzzle.

This was first published in February 2012
