Let’s face it, storage sounds boring – but it really matters to customers today, and it makes resellers money. Faced with marketing descriptions of the data transfer speeds needed to consolidate a massive storage area network, it is difficult to become too animated. Most of the time, the verbiage used around storage technology is positively soporific.
That’s justification enough for the first part of the proposition (i.e. that storage is dull) but, to be fair, these kinds of pronouncements are simply telling it as it is: a disk array transfers data at really high speeds. That means it will be really good in datacentres where scores of virtual servers will be crammed onto a handful of physical devices. The subsequent strain on storage I/O is massive, so you need a disk array that can cope. Often, storage is an afterthought in such installations – the virtualisation is the cool and sexy bit, the one that really lights up the eyes of the CIO, due to the reduction in space, management and power overheads it delivers.
Virtualisation – in its own right or as the foundation for private clouds – is a vitally important driver in the market right now. But without decent storage, server virtualisation won’t be effective. If the transfer rate between the disk and the server box is not high enough, performance won’t be as good as it could or should be. Now, you might well be thinking: OK, in that case, we’ll put Flash technology in. There is no denying that Flash storage is great for virtualised deployments, but only where you need really fast near-line access to very large volumes of data.
But Flash is also really expensive at the moment and, in many cases, it won’t stand up to a thorough cost-benefit analysis. High-performance disk arrays will. If we skip forward a few months (or maybe a couple of years), the cost of Flash will certainly come down. But it won’t come anywhere near disk in terms of cost per gigabyte.
With storage capacity requirements continually growing (this is not an industry myth – it is genuinely happening), most customers will want to use disk for most of the volume data retention they do. Some may use cloud-based archiving at the secondary or tertiary level, and make limited use of Flash as a sort of high-speed cache at the near-line end. But disk will still do most of the hard work.
Big storage for big data
It will be the same for big data projects. Flash lends itself well to the kind of multi-threaded database operations that are performed in big data deployments, but there will still be a massive requirement to store information – and traditional disk drives will continue to play their part.
Storage in all its forms matters to customers, and at the heart of every datacentre you still need good, solid, reliable disk systems that you can depend upon to work – even if they don’t exactly get anyone excited. We had a great example of this recently from a customer, who described “boringly reliable” storage solutions as exactly what they required. In the world of storage, that’s a real compliment.
We should also remember that storage makes money for resellers. Virtualisation, private cloud and big data projects should always bring plenty of opportunity to add value by providing consultancy, installation, migration and monitoring. There will be a decent margin on the software licences, but probably quite a bit more on the actual storage. Most arrays cost upwards of £5,000 at the entry level, and customers are often spending £50,000 or more to equip their datacentres. Resellers can make very good margins on these sales.
Start talking in those terms, and storage suddenly isn’t boring, it’s very exciting indeed. We, as an industry, should give it the credit and the attention it deserves.
Craig Parker is head of product marketing at Fujitsu UK & Ireland
This was first published in October 2013