This post was originally published in August of 2008 at 10GbE.net.
When embarking on a new IT project one rarely considers the network, unless of course, the network is the project. In many cases data networks get the same level of attention as the AC power: you expect plenty to be available, all the time, without interruption. Rarely is the network considered a performance bottleneck.
One time I assumed responsibility for improving the performance of an MS SQL server that was vital to our business. The primary job this server ran took 75 minutes, and it was scheduled to run, how many of you see this coming, every hour! This server was tracking and reporting on tens of millions of dollars in new business every month.

At first glance, I noticed several back-to-back bottlenecks. The system was memory starved, the drives were in a near-constant state of thrashing, and all SQL I/O from the system went through a $10 NIC. Although the NIC functioned, it was forcing the switch to drop far too many packets. At lunch that day we picked up a newer server-class NIC for $40 and immediately recorded a substantial performance improvement: the job would finish in just under the 60 minutes allowed. We could have spent the next week chasing performance curves; instead, we installed a new server, a dual-processor single-core box, and the job then completed in well under a minute. So a $40 NIC improved performance by 20%, while replacing the whole server for roughly $5,000 improved performance by 98%. Clearly, the NIC delivered the biggest bang for the buck, but it just brought the network performance curve in line with that of the CPU, memory, and disk.
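The percentages above fall straight out of the job times quoted. A quick sketch of that arithmetic (the minute figures are the rounded numbers from the story, not precise measurements):

```python
# Rounded job times from the anecdote above.
baseline_min = 75     # original hourly job: 75 minutes
nic_fix_min = 60      # after the $40 NIC: just under the 60-minute window
new_server_min = 1    # after the ~$5,000 server: well under a minute

# Improvement = fraction of the original runtime eliminated.
nic_gain = 1 - nic_fix_min / baseline_min        # 0.20 -> 20%
server_gain = 1 - new_server_min / baseline_min  # ~0.987 -> ~98-99%

print(f"NIC upgrade: {nic_gain:.0%} faster")
print(f"New server:  {server_gain:.1%} faster")
```

The "well under a minute" figure actually works out slightly better than the 98% quoted, which is why the server was the right fix even at 100X the price of the NIC.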
How many dual-socket quad-core servers were installed today, August 13th, 2008, with GbE? These servers have 4X the horsepower of my $5,000 server from 2002, but they both share the same GbE. Furthermore, today we use VMware and Xen to pack several logical servers into a single physical server in an effort to utilize our hardware resources more efficiently. We don’t hesitate to add more memory or disk, but adding a 10GbE board requires substantially more effort and planning.
When making the jump from GbE to 10GbE, one needs to select not only a NIC but also the media (CX4 copper or fiber) and a new switch infrastructure. High-performance NICs run $700-$2,000 each, depending on the media and vendor. If you go fiber, the optics run $500-$3,000 each, and you need one on each end of the cable. Finally, there’s the switch. Stackable layer-2 switches run in the $400-$1,200/port range, while enterprise layer-3 switches often run several thousand dollars per port.
If your server is I/O bound, a good 10GbE NIC and switch can deliver 5-10X the throughput of the “free” GbE port that comes with your server. Suppose you purchase a new server for $5,000, then add a high-performance 10GbE CX4 copper NIC and use a low-cost layer-2 switch, so the upgrade to 10GbE costs roughly $1,200 for this server. You need only measure a 25% gain in overall performance to realize a positive return on your investment! There is a new breed of hybrid switches offering 24 GbE ports and four 10GbE ports, so one can easily make the shift from GbE for servers to 10GbE. Consider giving 10GbE a try.
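To see why roughly 25% is the break-even point, here is a back-of-the-envelope check using the dollar figures from this post (the cost-per-unit-of-work framing is my own illustration, not a formula from the original):

```python
# Dollar figures from the text above.
server_cost = 5000    # new server with "free" GbE
upgrade_cost = 1200   # 10GbE CX4 NIC plus a low-cost layer-2 switch port

# If throughput rises by `gain`, the upgraded box does (1 + gain) units
# of work for (server_cost + upgrade_cost) dollars. The upgrade pays off
# once its cost per unit of work drops below the baseline's.
def cost_per_unit(total_cost, throughput):
    return total_cost / throughput

baseline = cost_per_unit(server_cost, 1.0)

# Break-even happens exactly when the gain equals the upgrade's share
# of the server price: $1,200 / $5,000 = 24%.
breakeven_gain = upgrade_cost / server_cost
upgraded = cost_per_unit(server_cost + upgrade_cost, 1.0 + breakeven_gain)

print(f"break-even gain: {breakeven_gain:.0%}")
print(f"cost/unit at break-even: ${upgraded:.2f} vs baseline ${baseline:.2f}")
```

Anything past that 24% mark, like the 25% figure above, and the 10GbE port is earning its keep; a 5-10X I/O improvement clears the bar many times over.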