Changed but not forgotten, the Ethernet story

One of the great things about the IT industry is that everything is permanently in flux. So nobody ever remembers those bold, confident and entirely wrong market projections the analysts made for the future of ATM, Token Ring and the ASP market.

Some of us remember, but that doesn't seem to matter. All the above-mentioned technologies were praised to the skies by loudmouthed self-promoters and tame analysts, and subsequently failed to meet expectations. But it didn't do the faulty soothsayers any harm.

Anyone who mooted the idea that Ethernet (being cheap, in place and a pain to rip out and replace) would win out over the more expensive alternatives was openly mocked and ostracised, as if they'd proposed that the world is round. Ethernet had far too many limitations, they said. It would never go above 10 megabits per second down a cable, or so many packets would collide that the ozone layer would disappear and our molecules would shake so violently that we'd explode. When 10/100Mbps Ethernet was first demonstrated, a man was made to walk down the length of the cable first, waving a red flag. However, many of us suspected that Ethernet, like the car, was a work in progress that would constantly evolve to meet each new challenge.

These days all the fashionable technologies have disappeared, as have all the Yes Men, Nodders and Assistant Yes Men who predicted the demise of Ethernet. So there is nobody around to admit we were right. (Not that this sort of person would ever admit to getting anything wrong.)

When a group of the great and good of the networking industry gathered recently to discuss how they could possibly develop a networking system fast enough to connect data centres together, what did they call themselves? The 25 Gigabit Ethernet Consortium. Its members include luminaries such as Arista Networks, Broadcom, Google, Mellanox Technologies and Microsoft, and it recently announced a specification.

Amanda Jaramilo, Arista's corporate communications guru, was at SynOptics when everyone was writing off Ethernet and is one of the few who have kept the flame alive. Mind you, they've had to transfer the torch a few times, so this Ethernet is quite a different incarnation.

But the technology is on its way, says Jaramilo. “This [consortium] is a real effort with products targeted in the next 18 to 36 months. The first step is finalizing the silicon, and then the building of products - both switches and host adapters.”

The 25Gbps and 50Gbps flavours of Ethernet will give the lowest cost per gigabit for connecting processing power and storage, and have the backing of some of the largest cloud providers. With luck, the standardisation effort will help to guarantee interoperability.

It’s going to need a lot of innovation at every level, warns Kevin Deierling, marketing VP at Mellanox Technologies, one of the pioneering developers of the new spec.

“Achieving cost-effective and power-optimized Ethernet requires advanced chip processes and silicon photonics for optical interconnects,” says Deierling.

But there are a lot of complications to be negotiated first, says Brad McConnell, principal architect at data centre operator Rackspace.

Once the next generation of ASICs (application-specific integrated circuits) arrives, optimised for 100Gbps ports, there will be no clean way to divide that capacity into 40Gbps ports: two 40Gbps ports use only 80Gbps, leaving 20Gbps wasted. Since 100Gbps breaks cleanly into four 25Gbps ports, wouldn’t it be logical to bring today’s 10Gbps servers up to that speed instead?
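A quick back-of-the-envelope sketch makes the port arithmetic plain (the function and figures here are purely illustrative, not drawn from the consortium’s spec):

```python
# Illustrative only: how much of a 100Gbps ASIC port is left stranded
# when it is broken out into lower-speed ports.

def breakout(asic_capacity_gbps: int, port_speed_gbps: int) -> tuple[int, int]:
    """Return (number of ports, wasted Gbps) for a given breakout speed."""
    ports = asic_capacity_gbps // port_speed_gbps
    wasted = asic_capacity_gbps - ports * port_speed_gbps
    return ports, wasted

for speed in (40, 25, 10):
    ports, wasted = breakout(100, speed)
    print(f"100G -> {ports} x {speed}G ports, {wasted}G wasted")

# Output:
# 100G -> 2 x 40G ports, 20G wasted
# 100G -> 4 x 25G ports, 0G wasted
# 100G -> 10 x 10G ports, 0G wasted
```

Only the 40Gbps breakout strands capacity, which is the nub of McConnell’s argument.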

“Essentially, 25G will be a good thing, but it needs to be seen through the lens of an upgrade from 10G, not a downgrade from 40G,” says McConnell.

None of this transition can really happen until options exist to densely aggregate 100G without significant expense, he warns.

Still, it’ll be cheaper than ATM! We all said it would, didn’t we?

This was first published in July 2014
