When tuning the server, don't forget the network

Polish up your performance

It really doesn’t matter how you configure your servers, how many processor cores they have or how much memory: if the network doesn’t have the bandwidth to service their needs, they will seem slow.

Users will complain, dissatisfaction will soar and customers will click off to the competition. Likewise, if the network infrastructure introduces extra latency, any investment in the latest and greatest in server technology will be wasted.

Network optimisation involves addressing two key questions: how servers connect to the network, and how the network is organised.

Ask for more

The first is easy to sort out, the second more difficult and potentially a lot more expensive. So let’s start with item one.

As a rule of thumb, assume that you will need to improve on the network interfaces built into any server that you buy. Few vendors offer 10GbE on server motherboards, the majority sticking with just one or two Gigabit ports.

That's fine if you’re a small business but far from adequate on servers potentially hosting hundreds of virtual machines, each of which will get only a slice of that connectivity.
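
As a rough back-of-the-envelope check, consider what each virtual machine actually gets. The port speeds and VM counts in this sketch are illustrative assumptions, not vendor figures:

```python
# Naive fair-share arithmetic: how thin does each VM's slice of the
# NIC get? Speeds and VM counts below are made up for illustration.

def per_vm_bandwidth_mbps(port_speed_gbps: float, ports: int, vms: int) -> float:
    """Total NIC bandwidth divided evenly across the VM count."""
    return (port_speed_gbps * 1000 * ports) / vms

# Two Gigabit ports shared by 100 VMs: 20 Mbps each, at best
print(per_vm_bandwidth_mbps(port_speed_gbps=1, ports=2, vms=100))   # 20.0

# A single 10GbE port for the same load: 100 Mbps each
print(per_vm_bandwidth_mbps(port_speed_gbps=10, ports=1, vms=100))  # 100.0
```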

Of course, there are exceptions, notably when it comes to high-end rack and blade servers, where 10GbE on the motherboard is becoming a standard option. Even then, you may need more.

Fortunately, throwing extra bandwidth at a server is relatively simple. Just plug in extra adapters or, where spare slots aren’t available, upgrade to faster, more capable interfaces.

Where you have Gigabit, add more of the same or go for 10GbE instead. Where you have 10GbE, look to add extra adapters or consider 40GbE; products to support this are just starting to become available.
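
As a first pass at an upgrade audit, something like the following sketch can flag under-provisioned interfaces. It assumes a Linux host with sysfs available, and the 10GbE target is just an example policy:

```python
# Minimal Linux-only sketch: flag network interfaces running below a
# target link speed, as a quick way of spotting servers due for more
# or faster adapters. The 10_000 Mb/s target is an example threshold.
from pathlib import Path

TARGET_MBPS = 10_000  # i.e. 10GbE

def underprovisioned_nics(target=TARGET_MBPS):
    for iface in Path("/sys/class/net").iterdir():
        if iface.name == "lo":
            continue
        try:
            speed = int((iface / "speed").read_text().strip())
        except (OSError, ValueError):
            continue  # link down, or a virtual device with no speed
        if 0 < speed < target:
            yield iface.name, speed

for name, speed in underprovisioned_nics():
    print(f"{name}: {speed} Mb/s - consider bonding or a faster adapter")
```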

Let's have a heated debate

When it comes to the supporting network, the debate at present centres on whether to stick with a hierarchical architecture or flatten the network to improve performance.

The accepted approach is hierarchical. Plug servers into top-of-rack switches, provide uplinks to a fast backbone switch network and use the Spanning Tree Protocol (STP) to give loop-free connectivity.

Unfortunately, blocking network paths with STP to avoid loops wastes bandwidth, and the paths chosen are not always the most cost-effective. Moreover, it takes time for STP to reconfigure network paths when servers fail over or get moved around, both common occurrences in the virtualised data centre.
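
A toy model makes the bandwidth penalty plain. The link counts and speeds here are invented for illustration:

```python
# Toy model of the STP penalty: with a loop-free spanning tree,
# redundant uplinks sit idle in a blocking state, while multipath
# schemes can forward over all of them simultaneously.

def usable_uplink_gbps(uplinks: int, speed_gbps: float, stp: bool) -> float:
    """With classic STP, only one uplink per switch forwards traffic."""
    active = 1 if stp else uplinks
    return active * speed_gbps

print(usable_uplink_gbps(uplinks=4, speed_gbps=10, stp=True))   # 10.0 - three links blocked
print(usable_uplink_gbps(uplinks=4, speed_gbps=10, stp=False))  # 40.0 - all links forwarding
```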

The workaround is to keep these Layer 2 networks relatively small and join them together via yet another tier, this time of Layer 3 switches. But that adds more hardware hops and yet more latency, not to mention a host of other complications when it comes to allocating resources and migrating virtual machines across subnets.
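
The latency cost of those extra tiers also compounds hop by hop. The per-hop figure in this sketch is a placeholder assumption, not a measured value:

```python
# Rough sketch of how per-switch latency accumulates in a tiered design.
# The 10 microsecond per-hop figure is a placeholder, not a benchmark.

def path_latency_us(hops: int, per_hop_us: float = 10.0) -> float:
    """Total one-way switching latency for a given number of switch transits."""
    return hops * per_hop_us

# Server -> ToR -> aggregation -> core -> aggregation -> ToR -> server:
# five switch transits in total
print(path_latency_us(hops=5))  # 50.0 microseconds of switching delay

# A flat, single-tier fabric might cut that to a couple of transits
print(path_latency_us(hops=2))  # 20.0
```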

According to some, the answer lies in a flatter network of switches, the ideal being a single Layer 2 switch fabric between servers and other network resources, incorporating not just data but iSCSI and Fibre Channel over Ethernet (FCoE) storage traffic.

Trill seekers

Cisco is taking a lead here with its 10GbE Nexus 7000 switches, which support FabricPath, a first-generation implementation of the Transparent Interconnection of Lots of Links (Trill) standard still under development at the IETF.

Others are not far behind. Among them is Juniper with its QFabric architecture, which the company claims will be able to reduce latency by more than 70 per cent compared with a tiered network approach, and boost perceived server performance by up to 100 per cent.

These are impressive claims which, if proved correct, will have major implications as the industry adapts to embrace hugely scalable cloud computing solutions. But it won’t be cheap or easy. With few deliverables as yet, it remains to be seen whether the flat network approach will work and, ultimately, be worth the investment.

In the meantime, there is still plenty you can do with the tiered network, such as upgrading to newer, faster switches and using VLAN and quality-of-service prioritisation technologies to optimise traffic flows.
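
One prioritisation lever applications can pull for themselves is marking their own traffic with a DSCP value, so that QoS-aware switches can classify and queue it accordingly. A minimal sketch, assuming a Linux host and a network configured to honour the marking:

```python
# Mark a flow with a DSCP value so prioritisation-aware switches can
# put it in a priority queue. Assumes Linux and switches configured
# to trust the marking; EF is a common choice for latency-sensitive flows.
import socket

EF = 46  # "Expedited Forwarding" DSCP class

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The TOS byte carries the DSCP value in its top six bits, hence the shift
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
# ...connect and send as usual; QoS policies on the switches can now
# prioritise this flow over bulk traffic
```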

Better use of virtual networking facilities built into virtualisation platforms can also help here, giving you the chance to take full advantage of all those lovely multi-core processors and other expensive server assets. ®
