This article was originally published in October 2012 at 10GbE.net.
In Ethernet, five terms are all used to describe coupling multiple network ports together within a system or switch to form a single, larger logical network pipe: link aggregation, port trunking, link bundling, teaming, and channel bonding.
Linux, OS X, and Unix systems typically offer this as a standard OS feature. All versions of Windows before “Server 2012” needed a special driver to handle it, but with Myricom’s standard 10GbE Windows driver the feature is included. In fact, Myricom’s Windows driver will team up to 16 ports of 10GbE together, creating a single logical 160Gbps port.

Today 5U servers exist with up to 10 available PCIe slots; an example is the SuperMicro 5086B-TRF. If one were to install eight dual-port Myricom 10GbE adapters under Windows and “team” them all together, the result would be the network equivalent of a fire hose. Honestly, though, you wouldn’t get all 160Gbps in each direction; 70% of that is a more realistic ceiling. Your actual limit will be set by several factors, from the teaming method used to how your switch is configured and how it views a teamed connection.
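On Linux, the state of a teamed interface is exposed under /proc/net/bonding/, which makes it easy to check which ports are actually carrying the team. Below is a minimal Python sketch that summarizes a bond; the interface name bond0 is an assumption, and the fields parsed follow the format described in the kernel’s bonding documentation.

    #!/usr/bin/env python3
    # Minimal sketch: summarize a Linux bonding (teaming) interface.
    # Assumes the bonding module is loaded and a team named "bond0"
    # exists; field names follow the /proc/net/bonding format from
    # the kernel's bonding documentation.

    from pathlib import Path

    BOND = "bond0"  # assumption: substitute your own team's name

    status = Path(f"/proc/net/bonding/{BOND}")
    if not status.exists():
        raise SystemExit(f"{BOND}: no such bonding interface")

    mode, slaves, current = "unknown", [], None
    for line in status.read_text().splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Bonding Mode":
            mode = value
        elif key == "Slave Interface":         # start of a per-port section
            current = {"name": value, "link": "unknown"}
            slaves.append(current)
        elif key == "MII Status" and current:  # per-port link state
            current["link"] = value

    print(f"{BOND}: mode={mode}, {len(slaves)} member port(s)")
    for s in slaves:
        print(f"  {s['name']}: link {s['link']}")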
Several years ago a customer reported to me that he’d teamed five 10GbE ports together and was seeing 62Gbps one way and 72Gbps the other. This was a single-socket Core i7 board, and we both believed he was very likely hitting the limit of the PCIe chipset he was using.
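For a sense of where that kind of limit comes from, here is a back-of-the-envelope sketch using standard PCIe 2.0 figures; these are generic numbers, not measurements from the system above.

    # Rough PCIe 2.0 arithmetic: why the chipset, not the NICs, can
    # be the bottleneck for a large team. Generic figures only.

    GT_PER_LANE = 5.0          # PCIe 2.0 signaling rate, GT/s per lane
    ENCODING_EFFICIENCY = 0.8  # 8b/10b line coding

    def slot_gbps(lanes: int) -> float:
        """Usable bandwidth of one PCIe 2.0 slot, per direction, in Gbps."""
        return lanes * GT_PER_LANE * ENCODING_EFFICIENCY

    # A dual-port 10GbE adapter in an x8 slot has 32 Gbps of raw PCIe
    # bandwidth -- comfortable for two ports in one direction, but
    # protocol overhead and the chipset's shared lane budget erode
    # the headroom as more adapters join the team.
    print(slot_gbps(8))  # 32.0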
Why would one want to do this? Well, suppose you need to serve up an enormous library of content to a vast array of computers. By “teaming” several adapters you can quickly and dramatically improve your network throughput without your software having to manage multiple network connections. Furthermore, the teaming driver will handle network port outages and mask them from your application. To learn more about teaming (link aggregation), consider reading this wiki page.
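For the curious, creating a team on a modern Linux box takes only a few commands. The sketch below drives iproute2 from Python; the port names eth0/eth1 and the 802.3ad (LACP) mode are assumptions, and your switch must have a matching LACP port group configured.

    # Minimal sketch: build a two-port team on Linux with iproute2.
    # The interface names eth0/eth1 and the 802.3ad (LACP) mode are
    # assumptions; adjust both to your hardware and switch setup.

    import subprocess

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create the logical team interface in LACP (802.3ad) mode.
    run("ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad")

    # Enslave the physical ports (the kernel requires them down first).
    for port in ("eth0", "eth1"):
        run("ip", "link", "set", port, "down")
        run("ip", "link", "set", port, "master", "bond0")

    # Bring the team up: applications now see one port, and the
    # bonding driver masks individual link failures from them.
    run("ip", "link", "set", "bond0", "up")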
Thanks to Mike Fahey from Emulex for suggesting the topic.