
Mellanox shoots higher and lower with long-haul switches

Adding MetroDX for campus and a MetroX that spans 100km

InfiniBand and Ethernet switch maker Mellanox Technologies was recently hammered after missing its latest quarterly numbers by a wide margin and giving a weak forecast for the first quarter. But the company is sticking to its switched-fabric knitting and expanding its product portfolio, making good on its promise to flesh out a series of long-haul switches that link data centers scattered around campuses or separated by relatively large distances.

Like the company's current InfiniBand and Ethernet switches, Mellanox's Metro line of long-haul switches is based on the switch-hitting SwitchX ASICs, which debuted back in April 2011 and were enhanced with the SwitchX-2 ASICs back in October 2012.

The SwitchX ASICs can speak either InfiniBand at 56Gb/sec Fourteen Data Rate (FDR) or Ethernet at 40Gb/sec – and if you are nice and pay Mellanox a little extra, it has a patch that will let Ethernet run at 56Gb/sec speeds, as well.

In a normal Mellanox switch, that ASIC drives 48, 58, or 64 ports running at 10GE speeds, or 12, 18, or 36 ports running at 40GE speeds. On InfiniBand devices, the ASIC bandwidth and brains are used to drive anywhere from eight to 36 InfiniBand ports running at either 40Gb/sec or 56Gb/sec.

With the MetroX line of long-haul switches, you sacrifice some of the ports, bandwidth, juice, and brains so that instead of driving a signal out of the switch at a typical data center length (up to maybe 100 meters), you can push it from one kilometer to as much as 100 kilometers – all with acceptable latencies and, presumably, at a cost that will not be too crazy.

Mellanox previewed the MetroX TX6100 long-haul switch at last November's SC12 supercomputing conference, and it has the same basic feeds and speeds, although the ports on the box have been rearranged. The TX6100 has six data center downlink ports that run at either 40Gb/sec or 56Gb/sec speeds and use QSFP connectors, just as Mellanox explained last year, but the six long-haul ports top out at 40Gb/sec speeds, not the 56Gb/sec speeds El Reg was told they would support last November. (This is not a big change, and that was a beta product back then anyway, subject to change.)

The modified MetroX TX6100 long-haul switch spreads the ports out a bit

The spec sheet says these six downlinks have an aggregate throughput of 336Gb/sec, and they are really InfiniBand links that are designed explicitly to support the Remote Direct Memory Access (RDMA) protocol both over these downlinks and over the long haul. If you want the Metro boxes to speak to Ethernet switches, you fire up the RDMA over Converged Ethernet (RoCE) protocol, and the Ethernet network is none the wiser.
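To get a sense of how transparent that is to software, consider a minimal sketch using the standard libibverbs API. This is our illustration, not Mellanox code: an RDMA application opens devices and ports the same way in both cases, and whether the wire underneath is InfiniBand or RoCE shows up only as a link-layer attribute on the port.

```
/* Minimal sketch using the standard libibverbs API (link with -libverbs).
   Illustrative, not Mellanox-specific: the same verbs calls work whether
   the transport underneath is native InfiniBand or RoCE. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* The only visible difference: the port's link layer is
               IBV_LINK_LAYER_INFINIBAND for native InfiniBand and
               IBV_LINK_LAYER_ETHERNET when RDMA rides over RoCE. */
            printf("%s port 1: %s\n",
                   ibv_get_device_name(devs[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE)" : "InfiniBand");
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```

That is the point of RoCE: the verbs-based application stack is identical either way, so the Metro boxes can bridge to Ethernet gear without the application layer noticing.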

The long-haul uplinks in the MetroX TX6100 use special long-range multiplexing LR4 transceivers that also fit in QSFP+ ports, and they have an aggregate throughput of 240Gb/sec across those six ports. The port-to-port hop latency is around 200 nanoseconds inside the switch, and the latency is around five nanoseconds per meter out over dark fiber.
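Those two latency figures make for easy back-of-the-envelope math. Here is a quick sketch using only the 200 nanosecond hop and five-nanosecond-per-meter numbers quoted above; the distances are ours, picked to match the Metro line's rated ranges, so treat the output as arithmetic rather than vendor data.

```
/* Back-of-the-envelope one-way latency over dark fiber, using the
   figures quoted above: ~200ns per port-to-port hop inside a switch
   and ~5ns per meter of fiber. A sketch, not vendor data. */
#include <stdio.h>

int main(void)
{
    const double hop_ns = 200.0;        /* hop through one switch */
    const double fiber_ns_per_m = 5.0;  /* propagation over dark fiber */
    const double km[] = { 1.0, 10.0, 100.0 };

    for (int i = 0; i < 3; i++) {
        /* a switch hop at each end plus the fiber run, one way */
        double ns = 2.0 * hop_ns + fiber_ns_per_m * km[i] * 1000.0;
        printf("%5.0f km: ~%.1f microseconds one way\n", km[i], ns / 1000.0);
    }
    return 0;
}
```

At the full 100 kilometers, in other words, the fiber alone costs roughly half a millisecond each way, which dwarfs the 200 nanosecond hop through the switch itself.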

How the three long-haul switches in the Metro family stack up

The table above says that the TX6100 can link sites separated by as much as 10 kilometers, which is true, but only if you step those ports back to 10Gb/sec speeds. If you push the uplinks over the dark fiber at 40Gb/sec, Mellanox says that the distance is cut back to 2 kilometers. But in the second quarter of this year, the TX6100 will be able to pump 40Gb/sec up to 10 kilometers, as shown below. That boost will come from the MetroX-2 ASICs, not the original MetroX chips (again, both are variants of the SwitchX chips) that debuted with the beta iron last November.

The very long haul switch that Mellanox was promising last year, the MetroX TX6200, looks to be coming to market a little early – which is good news for the switch maker.

This switch has two 10Gb/sec uplinks that can pump data over dark fiber at a maximum distance of 100 kilometers, and uses QSFP+ long-haul transceivers from Mellanox. Technically, those two uplinks are rated at 40Gb/sec, but only 10Gb/sec of aggregate throughput is dedicated to the ports, so it doesn't matter. The two downlinks have 112Gb/sec of aggregate bandwidth and can run at 40Gb/sec or 56Gb/sec speeds. You get the same 200 nanosecond port-to-port hop latency and the same five nanoseconds per meter going down and back the dark fiber.

If your long-haul networking needs are more modest – say, one kilometer or less across a campus – then Mellanox is making good on its promise to put out the TX6000 switch, which has been branded the MetroDX line. This switch is designed to span distances of up to one kilometer and, like the other long-haul switches, can use the LR4 QSFP+ transceivers, which run at 40Gb/sec.

The LR4 modules for the Metro switches

The MetroDX TX6000 campus-level switch has 18 downlinks that run at 56Gb/sec speeds with a combined throughput of 1Tb/sec, and its 18 uplinks top out at 40Gb/sec speeds with a total of 720Gb/sec of aggregate data throughput. The same port-to-port and dark fiber latencies prevail: 200 nanoseconds inside the switch and five nanoseconds per meter.
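Those aggregate figures are just port math: 18 downlinks at 56Gb/sec works out to 1,008Gb/sec, which Mellanox rounds to 1Tb/sec, and 18 uplinks at 40Gb/sec gives the 720Gb/sec. The same arithmetic holds for the TX6100, where six 56Gb/sec downlinks yield the 336Gb/sec figure and six 40Gb/sec uplinks yield the 240Gb/sec.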

All three switches come in 1U rack enclosures and burn around 200 watts. Amit Katz, director of product management at Mellanox, tells El Reg that the two MetroX long-haul switches will ship in March, and the MetroDX switch will ship in April.

Pricing has not yet been set, and will be announced when each product launches. But Katz did say that Mellanox has every intention of beating Obsidian Strategics' Longbow and Bay Microsystems' IBEX long-haul InfiniBand switches on price. ®
