Reseller opportunities in virtualisation

Microscope contributor


It seems as though you cannot go to an industry conference or open up a trade publication these days without hearing the V-word - virtualisation.

The stars have definitely aligned for the virtualisation market in recent years: Moore’s Law has given us increasing computing power that makes it possible to run multiple operating systems on the same machine; we have a push towards green computing, driven mainly by datacentre capacity concerns; and we have a compelling push for rationalisation by cash-strapped businesses looking to cut capital expenditure wherever they can (including money spent on IT equipment).

On the face of it, none of this bodes well for resellers. Customers are interested in virtualisation primarily because they want to buy less equipment, and while you might get the margin on the hardware refresh necessary to make virtualisation work, you may find that the volume of tin that these companies are buying declines over time if they consolidate their systems properly.

Consultancy role
This leaves the channel in a dangerous place, warns Lawrence James, enterprise systems marketing manager for Sun Microsystems. He adds that there is nevertheless a silver lining to this cloud.

“There is a danger in just going out and fulfilling a need by selling tin with a hypervisor on it,” he says. “There is a big opportunity for the channel to consult around moving from physical environments to logical, highly virtualised ones.”

One of the biggest problems facing companies trying to virtualise their systems is practical implementation, warns Rick Hayes, principal consultant at US-based technology consultancy Glasshouse Technologies. Trying to squeeze multiple applications onto the same box is fine in theory, but in practice it can present challenges.

For example, virtualising an application might mean migrating it to a new platform if it is legacy software. That could bring major headaches, involving software rewrites and potential reliability problems, especially if the customer has not planned for it.

“Let’s say you have whiz-bang 2000, and you want to install whiz-bang 9000. So now you have the new whiz-bang 9000, but cannot unplug the old one because you cannot run the old application on the whiz-bang 9000,” says Hayes.

This can present problems for customers, which are also opportunities for channel partners. It gives resellers the chance to become more intimately involved with the IT department, and enables them to advise the IT director on the correct course of action. It may even be possible to bring in partners that can handle the application migration as part of the wider project.

Planning potential
What this means is that planning should be a critical part of any reseller contract, according to Clive Longbottom, services director at Quocirca.

Customers may not understand the need to think through these issues at the start, and that is where resellers’ expertise can help to claw back some of the margin they may lose as customers attempt to minimise hardware purchases.

Specifying the hardware can also be more difficult than it looks, warns Longbottom: it is easy to configure a machine so that it performs inadequately when managing multiple virtualised environments.

“You also have problems with I/O,” Longbottom adds. “If you are running three or four I/O-hungry virtual images on a single machine with a virtualised network interface, you will run into all sorts of virtual constraints.”

This is something that the reseller must handle in collaboration with the customer, and it involves a conversation about projected capacity. Many customers, especially in the SME space, may not be mature enough to create and track a baseline of computing capacity.
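By way of illustration, that baselining exercise amounts to adding up each candidate workload’s measured demand and checking it against the target host’s headroom. The sketch below is purely hypothetical: the figures, names and 70% utilisation ceiling are invented for illustration, not taken from any vendor’s sizing tool.

```python
# Hypothetical consolidation check: sum each candidate VM's measured baseline
# and compare it against the target host's capacity, leaving headroom for growth.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_ghz: float      # average CPU demand
    memory_gb: float    # working-set memory
    iops: int           # sustained storage I/O
    net_mbps: float     # network throughput

@dataclass
class Host:
    cpu_ghz: float
    memory_gb: float
    iops: int
    net_mbps: float

def fits_on_host(workloads: list[Workload], host: Host, ceiling: float = 0.7) -> bool:
    """Return True if the combined baseline stays under `ceiling` of host capacity."""
    total_cpu = sum(w.cpu_ghz for w in workloads)
    total_mem = sum(w.memory_gb for w in workloads)
    total_iops = sum(w.iops for w in workloads)
    total_net = sum(w.net_mbps for w in workloads)
    return (total_cpu <= host.cpu_ghz * ceiling
            and total_mem <= host.memory_gb * ceiling
            and total_iops <= host.iops * ceiling
            and total_net <= host.net_mbps * ceiling)

# Example: three I/O-hungry images on one box sharing a virtualised network interface.
candidates = [
    Workload("mail", 2.0, 8, 1200, 200),
    Workload("db", 4.0, 16, 3000, 300),
    Workload("web", 1.5, 4, 800, 400),
]
host = Host(cpu_ghz=16.0, memory_gb=64, iops=4000, net_mbps=1000)
print(fits_on_host(candidates, host))  # False: storage and network I/O exceed the 70% ceiling
```

In this invented example it is the storage and network I/O, rather than CPU or memory, that blow the budget, which is exactly the constraint Longbottom describes.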

Moreover, some customers may be in businesses where the growth of the company – and therefore the load on the systems – is volatile. Ideally, sales contracts between resellers and customers will reflect this, offering some form of flexibility.

Managing virtualised systems
But even when such design practicalities have been tackled, the concept of virtualisation still presents multiple implementation and management challenges.

Virtual machines still need managing. In fact, IT departments that are lulled into a false sense of security by virtualisation and fail to manage their virtual machines risk descending into chaos. Unfortunately, virtualisation management tools are still evolving.

“The operations and management layer is the part that is most immature at the moment,” admits Sun’s James. “That area needs to focus on tasks such as automatically patching machines, and having patching mechanisms that understand the nuances and dependencies of the different systems.”

“It is very easy to create a virtual image, and it is easy to say that you can do what you want with it, but that may include not deleting it,” warns Longbottom. “That image will have an operating system in it, with an associated licence. It will have an application stack, along with the licences associated with those. Unless you manage those virtual images, you will run into licensing problems.”
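The housekeeping Longbottom has in mind is essentially an inventory problem. A minimal sketch of the idea follows; the record structure, field names and 90-day idle threshold are assumptions made for illustration rather than any vendor’s schema.

```python
# Illustrative image inventory: flag virtual images that are idle but still
# holding operating system and application licences (structure is hypothetical).

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class VirtualImage:
    name: str
    owner: str
    os_licence: str
    app_licences: list[str] = field(default_factory=list)
    last_powered_on: date = date.today()

def idle_images(images: list[VirtualImage], max_idle_days: int = 90) -> list[VirtualImage]:
    """Images untouched for longer than max_idle_days still consume licences."""
    cutoff = date.today() - timedelta(days=max_idle_days)
    return [img for img in images if img.last_powered_on < cutoff]

inventory = [
    VirtualImage("test-crm", "dev team", "Windows Server", ["SQL Server"], date(2008, 11, 3)),
    VirtualImage("payroll", "finance", "Windows Server", ["whiz-bang 9000"], date.today()),
]

for img in idle_images(inventory):
    licences = ", ".join([img.os_licence] + img.app_licences)
    print(f"{img.name} (owner: {img.owner}) is idle but still holds: {licences}")
```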

Some companies are beginning to tackle this issue. Veeam, for example, sells products for managing and backing up VMware environments. It has just released version 3.5 of its Reporter product, which is designed to discover the different elements of a virtual infrastructure.

“You can see who created a virtual machine and what the purpose was. You can also track a virtual machine’s creation, deletion and entire lifecycle,” says Veeam CEO Ratmir Timashev.

Sun’s James says that there is also an opportunity to create out-of-the-box virtualisation solutions for small businesses with relatively simple requirements. The company has created offerings around Microsoft’s Hyper-V virtualisation technology, and has also built bundles based on VMware.

“We are combining the whole three – an ESX server, a management server, and shared storage as a key part of that. It drives down the total cost of ownership,” he says. “We have come out with the types of infrastructure that customers are likely to want to use, and some of our partners are then wrapping their services around that.”

Compatibility challenges
This might work for small to medium-sized companies, but large enterprises implementing virtualisation may find themselves facing a different set of challenges. In particular, it seems to be difficult, given the relatively new technology behind modern virtualisation, to create heterogeneous virtual systems.

Let’s say, for example, that politics and differing technical requirements within an organisation have led to one part of the company implementing Hyper-V, and another part implementing VMware or Citrix XenServer. Today, there seems to be little chance of getting them to work together, certainly at the management level, where virtual machines need to be patched, monitored and tracked. Moving a virtual machine between one hypervisor’s environment and another seems to be a non-starter at this point.

“Whether we will have a standard set of interfaces or APIs across the industry is an open question,” says James. Xen comes from an open source background, so that might be more likely to work with other systems, he argues.

But some of these virtualisation technologies work in an entirely different way. For example, Parallels, which virtualises operating systems, works on what CEO Sergei Beloussov calls a container-based system.

“With containers, you cannot create different operating systems. You create many different instances of the same operating system, which is very different from a hypervisor,” he says.

Whereas hypervisors generally sit on the “bare metal” of the machine, underneath the operating system layer, the Parallels container system sits on top of an operating system. Beloussov argues that, rather than detracting from performance, this increases it, because key aspects of the host operating system are shared between guest images. This means you do not have to replicate certain services each time you launch a new virtual machine.
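The efficiency argument is easy to see with back-of-the-envelope arithmetic. The figures in the sketch below are invented purely to illustrate the shape of the trade-off.

```python
# Back-of-the-envelope comparison (figures are invented): containers share one
# host OS footprint, while each hypervisor guest carries its own OS overhead.

GUESTS = 20
OS_OVERHEAD_GB = 0.5      # memory consumed by one operating system instance
APP_FOOTPRINT_GB = 1.0    # memory consumed by the application itself

hypervisor_total = GUESTS * (OS_OVERHEAD_GB + APP_FOOTPRINT_GB)
container_total = OS_OVERHEAD_GB + GUESTS * APP_FOOTPRINT_GB

print(f"hypervisor guests: {hypervisor_total:.1f} GB")  # 30.0 GB
print(f"containers:        {container_total:.1f} GB")   # 20.5 GB
```

The saving grows with the number of guests, because the operating system overhead is paid once rather than once per image.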

Timashev believes that demand for such heterogeneous solutions will pick up in the near future as customers get to grips with these problems. But research still seems relatively blue-sky. IBM has touched on this with its Reservoir research project, in which it has demonstrated how to move virtual machines from one remote server to another. That does not solve the heterogeneity problem, but it at least makes it theoretically possible to manage a set of virtual images across a geographically diverse IT infrastructure.

VMware, which is aggressively pursuing the management space as it tries to flesh out its virtualisation stack, has done relatively well with VMotion, its live migration technology. VMotion enables virtual machines to be moved automatically from one physical machine to another across a local network, helping to solve the load-balancing problems that companies will inevitably face as they build up their collections of virtual images.
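Live migration lends itself to fairly simple rebalancing logic. The following is a naive sketch of that idea only; it is not VMware’s scheduling algorithm or API, and the host names, threshold and CPU shares are invented.

```python
# Naive rebalancing sketch (not VMware's algorithm): move a VM from the busiest
# host to the least-loaded one whenever the busiest host exceeds a threshold.

def rebalance(cluster: dict[str, dict[str, float]], threshold: float = 0.8):
    """cluster maps host -> {vm name: CPU share}. Returns (vm, source, target) or None."""
    load = {host: sum(vms.values()) for host, vms in cluster.items()}
    busiest = max(load, key=load.get)
    quietest = min(load, key=load.get)
    if load[busiest] <= threshold or busiest == quietest:
        return None  # nothing to do
    # Move the smallest VM that brings the busiest host back under the threshold.
    for vm, share in sorted(cluster[busiest].items(), key=lambda kv: kv[1]):
        if load[busiest] - share <= threshold:
            return vm, busiest, quietest
    return None

cluster = {
    "esx01": {"mail": 0.3, "db": 0.5, "web": 0.2},   # total 1.0, overloaded
    "esx02": {"intranet": 0.2},                       # total 0.2
}
print(rebalance(cluster))  # ('web', 'esx01', 'esx02'): moving web drops esx01 to 0.8
```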

But just as hypervisors do not play well with others, these management tools tend to be largely incompatible with one another. “At this stage it is still the wild west. Anything goes. The vast majority of tooling out there is proprietary and there is not a lot of standardisation,” Longbottom says, although he notes that Microsoft and Citrix are working together.

Microsoft came relatively late to the party, having until recently offered only a virtual PC environment designed to run on top of an operating system. The company launched its hypervisor technology, called Hyper-V, last year, 180 days after Windows Server 2008 hit the shelves.

The company has been relatively clever in making Hyper-V compatible with other vendors’ virtualisation systems, says Longbottom. It has been a firm partner of Citrix’s for four years, so making Hyper-V compatible with the latter’s Xen product enables it to retain that partnership and avoid any political strife.

“But VMware is still going in its own direction. It is still going its own way,” says Longbottom. He feels that VMware has fought back considerably in the past year, building up its toolset with products such as VMotion, and is 18 months ahead of Microsoft in terms of functionality. However, he warns that VMware still suffers from a lack of heterogeneous support, and must reach out to other vendors more than it has done.

Unless this happens, he says, others could take its market away in the management space, with systems management vendors such as HP, BMC and IBM poised to do just that.

Longbottom also has concerns that VMware might be overrated in the market, and that if it does not look after this crucial part of its technology culture, its fortunes may begin to fade. An EMC that was struggling to sell storage and failing to recognise its success with VMware could even end up selling the company, he posits.

Acquisition trend
There are also a variety of different companies offering nothing other than virtualisation management services, and leaving the hypervisor technology to the core players. Some of these have already been acquired. Novell bought PlateSpin, which manages virtual machines with its software tools, in February last year. The likelihood is that more of these smaller companies will be acquired by larger players, probably in the systems management space, as the industry gains more understanding of the management challenge surrounding virtualisation.

While vendors do the acquisition dance – an inevitable trend in a nascent growth industry struggling through a broader economic recession – customers will be focused on more immediate realities. They must take advantage of virtualisation’s benefits while avoiding the mistakes commonly made by early adopters: virtual server sprawl, licensing issues, and a lack of accountability when it comes to creating and deleting virtual images.

Perhaps as virtualisation evolves, more of these management functions will make their way into the hypervisor, and we will find ourselves moving away from managing virtual machine images altogether. It would make more sense to be able to dynamically spawn and then kill an image automatically, rather than storing it on a disk somewhere, which represents a potential security risk.
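That spawn-and-kill lifecycle is a simple pattern to express. The sketch below uses invented function names rather than any real hypervisor API; it shows the shape of the idea, with the image existing only for the duration of the work and nothing left behind on disk.

```python
# Illustrative spawn-use-destroy pattern (function names are invented, not a real
# hypervisor API): the image lives only for the duration of the work it performs.

from contextlib import contextmanager

def provision_image(template: str) -> str:
    """Hypothetical call that clones a template and returns an instance id."""
    print(f"spawning instance from template '{template}'")
    return f"{template}-instance"

def destroy_image(instance_id: str) -> None:
    """Hypothetical call that deletes the instance and its disk artefacts."""
    print(f"destroying '{instance_id}' - nothing left on disk")

@contextmanager
def ephemeral_vm(template: str):
    instance = provision_image(template)
    try:
        yield instance
    finally:
        destroy_image(instance)  # always cleaned up, even if the job fails

with ephemeral_vm("build-agent") as vm:
    print(f"running job on {vm}")
```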

However, as this part of the industry matures, confused and disorganised customers will need channel partners that understand the challenges and can guide them through. That is where the opportunity and the consulting margins lie.
