Managing virtualisation to cut costs


Savings made by implementing virtualisation can be wiped out if it is not well managed, but a number of practices and tools can help your customer avoid that

The nascent tools market for managing x86 virtualised server environments promises to be a lucrative additional revenue stream for channel partners already in this space and, in some instances, may even help to drive solution sales.

This reflects the growing maturity of PC server virtualisation. One sign of that maturity is that 18.3 per cent of all new servers shipped in Western Europe last year were virtualised, compared with 14.6 per cent in 2007, according to IDC. By 2010, the market researcher expects this figure to have risen to 21 per cent.

Charles Barratt, business development manager at Dataplex Systems, which resells Vizioncore's management tools, explains: "Even in today's economic climate, we're seeing a continuing drive towards consolidation and x86 virtualisation because doing more with less is at the top of everyone's minds."

Nonetheless, he believes the market has been over-hyped and penetration levels are not as high as might be expected, meaning there remains plenty of mileage in it yet. "Over the last 18 months, when we've done educational seminars and webinars and asked a room of 50 people if they've gone down this route, it's still only between 25 and 35 per cent that have, so there's still a big market opportunity," Barratt says.

Another sign that the market is maturing is that organisations are now starting to implement the software in production rather than simply development and test environments. But the fact that deployments are beginning to become larger-scale and more mission-critical in nature is generating its own set of problems.

Aad Dekkers, chief marketing officer at systems integrator MTI Europe, explains: "People are investing in trying to make their total infrastructure more efficient. But once they've done that and got to a certain level, they find that they need to purchase additional management tools in order to make the system more reliable and to prevent things from going wrong."

Chris Whiteley, head of product strategy at value-added distributor Avnet Technology Solutions, agrees. "Server virtualisation has been used quite extensively for small stateless applications, but people have to date been a bit more reluctant to employ it for bigger, more critical ones. It's not just about delivering uptime. It's about delivering applications to the right levels of availability and performance and that's where we're getting to now," he says.

As a rule of thumb, implementations larger than between 10 and 30 virtual machines tend to require third-party management products because the more basic offerings provided by hypervisor vendors, such as EMC's VMware, often only go so far.

As a result, customers might find themselves needing to undertake various management tasks manually, which could wipe out cost savings garnered from reduced hardware costs, electricity bills and the like.

Mike Vinten, chief executive of value-added reseller Thesaurus Computer Systems, warns: "The sheer cost of managing small clumps of virtualised servers can become unmanageable and completely out of control. Deployments need to be architected and designed properly, but systems management software is also important to help customers take advantage of potential efficiencies."

One of the key management issues, meanwhile, relates to so-called 'server sprawl', or the proliferation of virtual machines (VMs) in an uncontrolled fashion. Because they are easy to create, some organisations simply build too many without thinking about the implications.

But, Roy Illsley, a senior analyst at the Butler Group, says: "Whether VMs are used or not, they still consume memory and CPU resources so you need to keep abreast of which are working, which are dormant and which you can get rid of."

Introducing blanket policies can be dangerous, however, he warns. "You can't just say 'get rid of anything that hasn't been used in 20 days', as one of your VMs might be running a financial application that calculates month-end figures. There's a lot to consider, but management tools are available to support the process."
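The caution above can be made concrete with a minimal sketch of a sprawl-reclamation check. All names and timestamps here are hypothetical illustrations; the point is that an exemption list must sit alongside any idle-days threshold so that legitimately dormant VMs, such as a month-end finance application, are never swept up by the blanket rule.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: VM name -> (last active timestamp, exempt flag).
# The exempt flag protects VMs that are idle by design, such as a
# month-end financial application, from any blanket reclamation policy.
inventory = {
    "web-frontend-01":  (datetime(2009, 5, 28), False),
    "dev-sandbox-07":   (datetime(2009, 3, 1),  False),
    "finance-monthend": (datetime(2009, 4, 30), True),
}

def reclaim_candidates(inventory, now, idle_days=20):
    """Return VMs idle longer than idle_days that are not exempt."""
    cutoff = now - timedelta(days=idle_days)
    return sorted(
        name
        for name, (last_active, exempt) in inventory.items()
        if last_active < cutoff and not exempt
    )

print(reclaim_candidates(inventory, datetime(2009, 6, 1)))
# -> ['dev-sandbox-07']: idle since March and not exempt, while the
# finance VM is equally idle but protected by its exemption.
```

Real management tools layer ownership records and approval workflows on top of a check like this, but the core trade-off between an automatic threshold and explicit exemptions is the same.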

But there are also other areas in which server sprawl can have an impact, particularly in terms of infrastructure such as networking, storage and back-up and recovery. "Virtualised server sprawl tends to be the key challenge, but it has knock-on effects because it bumps into other things like CPU, memory and storage capacity. So you need to take a complete view of the infrastructure as virtualisation will have an impact on all of its components," MTI's Dekkers says.

The network is particularly heavily hit, for example, if the choice is made to use VMware's VMotion tool for resilience and high-availability purposes - a reason many people decide to virtualise their x86 environment in the first place.

Should a problem occur with a primary host or should it need to be taken down for maintenance reasons, VMotion can move any given workload to a secondary machine in real-time without causing downtime. But such activity takes up significant amounts of bandwidth, which means the network may either need to be reconfigured or even upgraded to cope.

On the storage side of the equation, meanwhile, it may be necessary to deploy shared storage for the first time. While organisations might be able to get away with direct attached storage for small implementations used for development and testing, the larger a virtualised production environment is, the less sustainable such an approach becomes.

This is because, on the one hand, it becomes difficult to re-assign capacity if storage is siloed. But on the other, without introducing a storage area network (SAN), it is impossible to take advantage of VMotion's capabilities. A SAN stores each VM as a disk image and each physical server has to be able to see them all to know when and where to allocate spare processing capacity as it is required.

When purchasing a SAN, however, it is crucial that organisations size it accurately in relation to workload, or they may find their applications run more slowly. This is because VMs make frequent requests to the SAN, but the physical disks can only spin and process a certain number of input/output operations per second. As a result, if they cannot keep up, performance will be affected.
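The sizing arithmetic behind that warning is straightforward to sketch. The per-spindle figure and headroom factor below are illustrative assumptions, not vendor data: the idea is simply that aggregate VM IOPS demand, plus headroom, divided by what one disk can sustain, gives a minimum spindle count.

```python
import math

# Illustrative assumption: one 15k rpm disk sustains roughly 180 IOPS.
DISK_IOPS = 180

def disks_needed(vm_iops_demands, headroom=1.3):
    """Minimum spindle count to absorb aggregate VM IOPS with headroom."""
    total = sum(vm_iops_demands) * headroom
    return math.ceil(total / DISK_IOPS)

# 12 VMs averaging 150 IOPS each: 1,800 IOPS, or 2,340 with 30% headroom,
# needing 13 disks at 180 IOPS per spindle.
print(disks_needed([150] * 12))  # -> 13
```

An undersized array here shows up exactly as the article describes: the disks cannot keep up with the request rate and every VM on the SAN slows down together.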

Planning around storage volumes - or areas in the SAN in which data is stored - is another consideration to bear in mind. If multiple VMs with heavy workloads are all attempting to access the same volume at the same time, performance will inevitably be hit. Therefore, it is necessary to think through how VMs with heavy and light workloads should be mixed to balance the situation out.
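One simple way to think about mixing heavy and light workloads is a greedy spread: place each VM, heaviest first, on whichever volume is currently carrying the least load. This is a sketch of the balancing idea only, with made-up VM names and load figures; real tools also weigh I/O patterns, not just totals.

```python
def balance_vms(vm_loads, volume_count):
    """Greedy spread: assign each VM (heaviest first) to the volume
    with the lowest running load, mixing heavy and light workloads."""
    volumes = [{"load": 0, "vms": []} for _ in range(volume_count)]
    for name, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        target = min(volumes, key=lambda v: v["load"])
        target["vms"].append(name)
        target["load"] += load
    return [sorted(v["vms"]) for v in volumes], [v["load"] for v in volumes]

# Hypothetical IOPS figures: two heavy databases, two web servers, two
# lightly used development VMs, spread across two volumes.
vm_loads = {"db1": 900, "db2": 850, "web1": 200,
            "web2": 180, "dev1": 60, "dev2": 40}
placement, loads = balance_vms(vm_loads, 2)
print(placement, loads)  # loads -> [1120, 1110]
```

The result puts one heavy database on each volume and distributes the lighter VMs around them, so neither volume is hammered by both databases at once.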

But this is where workload analysis and capacity planning tools such as Novell's PlateSpin and Neptuny's Caplan come in. They create a usage profile of how existing physical servers use memory, CPUs, disk and network bandwidth and what capacity is likely to be required in a virtualised world.

Other vendors such as Vizioncore and Veeam also offer tools that plug into the management software of other vendors to perform backup and recovery, undertake storage optimisation, process automation and provisioning, performance monitoring and chargeback.

While Vizioncore supports both VMware's and Microsoft's environments, Veeam's purchase of nworks means that it can now interface with VMware's vCenter, Microsoft's System Center and Hewlett-Packard's OpenView systems management software.

But Dataplex's Barratt says: "We're starting to see some of the key vendors such as Vizioncore trying to bring their point products into a single framework, and that will be the next big thing. If organisations have to buy lots of point products, it can mean savings go out of the window because of the management overhead. So it's about developing a single management interface and pulling everything into one console."

Large established enterprise players such as IBM with its Tivoli system and Computer Associates with Unicenter are also getting in on the act, however, and have upgraded their products to support the management of both physical and virtualised environments at the same time.

But the market will not fully open up until management products can look after multiple virtualised environments from a single console, believes Butler's Illsley. The problem today is that hypervisors are neither interoperable nor portable, which means virtualisation software from different vendors does not currently work together without separate systems integration activity.

"It's not such an issue for small-to-medium enterprises as they don't tend to have mixed estates, but large enterprises want visibility across the whole estate," Illsley says.

Despite this, Gartner believes the market for software to manage virtualised environments will grow significantly from a low base of $913 million in 2008 to a more substantial $1.3 billion this year.

MTI's Dekkers is equally upbeat about the sector's prospects. He expects to see double or even triple digit growth over the next two years and views early merger and acquisition activity as positive. "The fact that Veeam bought nworks is a good sign as it shows that it's serious about building the portfolio required. It's also a sign that the market is becoming more mature," he says.

But to realise such potential, it will not be enough to simply sell licences. Avnet's Whiteley explains: "You have to understand what you want to achieve with applications in a virtualised environment and that's important because people don't want to move critical ones until performance and availability can be guaranteed. Management tools can help, but it's also where finding the right partner comes in."

Because moving to a virtualised world is a complex technology transition, one of the key roles of a channel partner is to help remove the risk inherent in such a move. To do this involves consulting with clients to understand their aims and rationale for going down this route in the first place.

But it also entails recommending which systems would benefit from virtualisation, which ones would not and how to integrate the two worlds. It likewise involves working out the dependencies between different systems, both physical and virtualised, and the impact on existing infrastructure of moving to the new approach.

While many customers view the introduction of management tools as the second phase of any implementation, as a means of ensuring that their systems operate more efficiently, there is an argument for including such offerings in a sales pitch from the outset.

"Undertaking server virtualisation can show a demonstrable return on investment, but management tools can help to take the risk out of adoption. So it becomes a sales aid rather than a sale in and of itself. Without such tools, people may just stick with what they know as the risk associated with failing is even higher in a recession than it is normally," says Whiteley.

While the current economic climate has led some customers to put a blanket stop on all new investments, others are attempting to make their IT organisations more efficient as a means of weathering the storm. But even the latter kind of clients "need convincing," adds Whiteley. "This is not a walk-in, walk-out sale."

As a result, a key part of any management tools pitch at the moment relates to user education. "When we put out a quotation for a proposal, management tools are almost a line item initially and people often ask if they really need them," says Dataplex's Barratt. "So part of the process is about education. It all depends on what customers want to do, but our role is to help them develop a coherent strategy and that means becoming a trusted advisor."

Growing interest in desktop virtualisation as a follow-up activity to virtualising servers is also likely to spawn a whole new raft of projects, and one that is again likely to generate future demand for management tools.

"There's a big world behind the friendly face of virtualisation and you really need to be able to see the big picture. Initially it's about installation, implementation and integration with the physical world. But as a continuing service, it's all about optimisation of the datacentre - and that's where the expertise comes in," concludes Barratt.
