A Path from GbE to 10GbE

Recently folks have asked how they can squeeze more out of their Gigabit Ethernet (GbE) infrastructure while they work to secure funding for a future 10GbE upgrade. I’ve been selling 10GbE NICs for ten years and blogging for the past six. What I’ve learned as a former system architect, IT manager, server salesperson, and now network salesperson is that the least painful way to make the transition is to demonstrate the payback to your management in stages. First, I’d upgrade my existing servers to 10GbE adapters while still running on my existing GbE network, to demonstrate that I was pushing that infrastructure to its full potential. It’s very likely that your existing multi-core servers are sometimes more CPU bound than bandwidth bound, and you may have some spare top-of-rack switch ports you can leverage. There are several interesting tricks worth considering. The first is to move to a current 10GbE controller, one that also supports GbE (1000Base-T is the formal name for GbE over copper cabling terminated with the familiar RJ-45 telephone-style modular connector). If that still doesn’t deliver the performance you’re seeking, you can consider testing an operating system bypass (OS Bypass) network driver.
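Before buying anything, it’s worth confirming where your servers are actually constrained. Here’s a rough sketch of one way to check, assuming Linux, the third-party psutil library, and an interface named eth0 (the interface name and sampling window are my assumptions, not prescriptions). High CPU utilization alongside a mostly idle link suggests the CPU, not the wire, is your bottleneck.

```python
# Rough check: is this server CPU bound or bandwidth bound?
# A sketch only -- assumes Linux, the third-party psutil library,
# and a NIC named "eth0" (adjust for your environment).
import psutil

NIC = "eth0"               # hypothetical interface name
GBE_LINE_RATE_BPS = 1e9    # GbE line rate, in bits per second
INTERVAL = 5               # sampling window, in seconds

before = psutil.net_io_counters(pernic=True)[NIC]
cpu_pct = psutil.cpu_percent(interval=INTERVAL)   # blocks for INTERVAL seconds
after = psutil.net_io_counters(pernic=True)[NIC]

bits = (after.bytes_sent - before.bytes_sent +
        after.bytes_recv - before.bytes_recv) * 8
link_pct = 100 * bits / (GBE_LINE_RATE_BPS * INTERVAL)

print(f"CPU utilization:  {cpu_pct:.1f}%")
print(f"Link utilization: {link_pct:.1f}% of GbE")
# High CPU with a mostly idle link suggests you're CPU bound,
# not bandwidth bound -- a candidate for offloads or OS bypass.
```

Run it during your peak window, not at 2 AM, or the numbers will tell you nothing useful.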

Upgrading from the generic GbE port mounted on your server’s motherboard to a PCI Express option card with dual 10G ports means you’re moving from GbE chip technology designed 15 years ago to what is very possibly a state-of-the-art 10G Ethernet controller designed in the past year or two. As mentioned in other posts like “Why 10G” and “Four Reasons Why 10GbE NIC Design Matters,” some of today’s 10GbE chips internally offer thousands of virtual NIC interfaces, highly intelligent steering of network traffic to CPU cores, and a number of advanced stateless network packet processing offloads (meaning more work is done on the NIC that would otherwise have to be done by your Intel server CPUs). Much of this didn’t exist when your server’s GbE chip was initially designed back in 2000. So what is the best way to make the jump?
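If you’re curious which of these stateless offloads a given NIC and driver actually expose, the standard Linux ethtool utility will list them. Below is a minimal sketch that shells out to ethtool and prints the features currently enabled; the interface name eth0 is again an assumption.

```python
# Quick look at which stateless offloads a NIC advertises.
# A sketch that shells out to the standard Linux ethtool utility;
# the interface name "eth0" is an assumption.
import subprocess

NIC = "eth0"  # adjust for your environment

out = subprocess.run(["ethtool", "-k", NIC],
                     capture_output=True, text=True, check=True).stdout

# ethtool -k prints lines such as "tcp-segmentation-offload: on"
for line in out.splitlines():
    if ": on" in line:
        print(line.strip())
```

Comparing this output between your onboard GbE port and a modern 10GbE adapter makes the generational gap very visible.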

There are two ways to plug your existing RJ-45 terminated network cables into new 10GbE server-class NICs. The first, and easiest, is a native dual-port 10GBase-T card that also supports GbE, like Solarflare’s SFN5161T, which runs roughly $420. The second approach, which provides a much better path to 10GbE, is a dual-port SFP+ card like Solarflare’s SFN7002F fitted with a pair of 1000Base-T modules. In that case the adapter is $395, and each module is roughly $40 (be careful here, because numerous modules are offered as Cisco products that are often just “compatible”). When you eventually migrate to 10GbE, both approaches will require new switches and very likely new network wiring. The 10GBase-T standard, which uses the familiar RJ-45 networking connector, requires the more expensive Cat6 cabling (Cat6a for longer runs), and 10GBase-T switches often cost more and draw more power. If you have to rewire with Cat6 anyway, you should seriously consider passive Direct Attach (DA) cables with bonded SFP+ connectors, which start at $20-$25 for 0.5-2m lengths. By the time your network admin custom-makes the Cat6 cables for your rack, it’ll likely be a break-even expense (especially once you factor in time spent diagnosing bad or failing cables). DA cables should be considerably more trouble-free over time; frankly, 10GBase-T really pushes the limits of both Cat6 cables and RJ-45 connectors.
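To make the trade-off concrete, here’s a back-of-the-envelope comparison. The prices are the rough street prices quoted above; your quotes will differ, and the framing is only a sketch.

```python
# Back-of-the-envelope cost comparison of the two approaches,
# using the rough street prices quoted above (your quotes will differ).
SFN5161T = 420            # dual-port 10GBase-T adapter, supports GbE
SFN7002F = 395            # dual-port SFP+ adapter
MODULE_1000BASET = 40     # per 1000Base-T SFP module
DA_CABLE = 25             # per passive Direct Attach cable (0.5-2m)

PORTS = 2
native_t = SFN5161T
sfp_plus = SFN7002F + PORTS * MODULE_1000BASET

print(f"10GBase-T adapter today:      ${native_t}")
print(f"SFP+ adapter + GbE modules:   ${sfp_plus}")
# When you later move to 10G, the SFP+ card swaps its modules for
# ~$25 DA cables instead of forcing a Cat6/Cat6a rewire.
print(f"SFP+ card at 10G (DA cables): ${SFN7002F + PORTS * DA_CABLE}")
```

The SFP+ route costs slightly more up front, but the savings show up on day two, when the 10G migration becomes a module swap rather than a recabling project.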

Another thing to consider is leveraging an OS Bypass layer like Solarflare’s OpenOnload (OOL) for network-intense applications like Nginx, Memcached, and HAProxy. We saw OOL deliver a 3X performance gain over running packets through the Linux kernel’s network stack, as documented in this whitepaper. In the testing for that whitepaper, we found that Nginx serving content from memory typically took six cores to keep up with a full 10G link; running under OOL, it required only two. Turning this around a bit: with OOL on a dual-port 10G card, you should need only about four cores to serve static in-memory content at wire-rate 10G on both ports. So suppose you have an eight-core server today with a pair of GbE links, and during peak times it typically runs near capacity. By upgrading to a Solarflare adapter with OOL, even while still using both 10G ports as GbE ports, you could easily be buying back significant Intel CPU cycles. All of this comes with a serious “your mileage may vary” caveat, but if you’re interested in giving it a try in your lab, Solarflare will work with you on a Proof of Concept (POC). It should be noted that adding OOL to an SFN7002F adapter will roughly double the price of the adapter, but compare those few hundred dollars of 10G software expense to the cost of replacing your server with a whole new one: installing all new software, perhaps additional software licenses, configuration, testing, etc. Replacing the NIC and adding an OS Bypass layer like OOL is actually quite quick, easy, and painless.
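If you do run such a POC, you’ll want a repeatable before-and-after measurement. Below is a minimal request/response latency probe for a Memcached instance, offered only as a sketch: the host, port, and iteration count are my assumptions. The idea is to run the same workload once over the kernel stack and once accelerated (OOL applications are typically launched under Solarflare’s onload wrapper), then compare the distributions.

```python
# A minimal request/response latency probe for a GbE-to-10GbE POC.
# A sketch only -- host, port, and iteration count are assumptions.
import socket
import time

HOST, PORT = "192.168.1.10", 11211   # hypothetical Memcached instance
N = 10000                            # number of round trips to sample

s = socket.create_connection((HOST, PORT))
samples = []
for _ in range(N):
    t0 = time.perf_counter()
    s.sendall(b"version\r\n")        # cheapest Memcached round trip
    s.recv(4096)                     # e.g. b"VERSION 1.6.x\r\n"
    samples.append(time.perf_counter() - t0)
s.close()

samples.sort()
print(f"median RTT: {samples[N // 2] * 1e6:.1f} us")
print(f"99th pct:   {samples[int(N * 0.99)] * 1e6:.1f} us")
```

Watch the tail (99th percentile) as closely as the median; OS bypass tends to show its value most clearly there, and under load rather than on an idle box.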

If you’re interested in kicking off a GbE to 10GbE POC, please send us a brief email.
