Will Twinax Replace CX4?

This article was originally published in October of 2008 at 10GbE.net.

Last week Cisco threw its weight behind something called Twinax. Why? Four likely reasons:
  • Flexibility – Cisco and Juniper both selected SFP+ as the PHY for their new lines of 10GbE switches. Offering a cable with an SFP+ connector on each end, so that a single SFP+ port can serve all your connection needs, is a stroke of genius. Say you need a short run from a switch to a server: plug in a Twinax cable with SFP+ connectors on both ends and you're good to go, up to 10 meters. Suppose you later need to move that server another 50 meters away; pop SR optics into both ends and run fiber instead. No changes to the servers or switches, just swap in optics.
  • Cost – the price of copper has run up recently, while the cost of Twinax cable has remained fairly stable.
  • Power – an SFP+ port is rated at 1W, while the Twinax solution typically draws about 0.25W. CX4 is similar, but compared to 10GBase-T at 10W per port (current generation), or even 2W for the next generation on runs under 30 meters, this is a huge power saving.
  • Latency – current 10GBase-T uses a DSP at each end of the link to separate the signal from the noise. That DSP adds roughly two microseconds of latency at each end of the connection, compared to under 200ns for the Twinax conversion (see the quick sketch after this list).
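To put those last two points in per-link terms, here is a minimal back-of-the-envelope sketch. The per-end figures are the ones quoted above; the script is just illustrative arithmetic.

```python
# Per-link power and latency overhead, using the per-end figures
# quoted above (a link has two ends: NIC side and switch side).
options = {
    # name: (watts per end, added latency per end in nanoseconds)
    "Twinax (SFP+ direct attach)": (0.25, 200),
    "10GBase-T (current gen)": (10.0, 2000),
}

for name, (watts_per_end, ns_per_end) in options.items():
    print(f"{name}: {2 * watts_per_end:.1f}W per link, "
          f"{2 * ns_per_end / 1000:.1f}us added latency per link")
```

Per link, that works out to 0.5W and 0.4µs for Twinax versus 20W and 4µs for current 10GBase-T.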
We are closely watching how Twinax plays out over the next few months, and we’ll let you know what we learn.
UPDATE JANUARY 2012 – Twinax, otherwise known as Direct Attach or DA, has won. Requests for CX4 NICs have dropped off significantly since their height in 2008.


Shake and Bake: Conduct a Bake-Off

This article was originally published in September 2008 at 10GbE.net.

Have you ever held a bake-off to select a core technology for a project? Not an RFI, but an actual, honest-to-god series of "real world" tests. Few things are as exciting as setting up a technology obstacle course that is somewhat indicative of your business environment, then having various vendors run through it. Several times in my past I've conducted these when emerging technologies like server UPS systems and VOIP telephony were new, in order to shake out the posers from the players, evaluate real-world performance, and determine value.
 
Few vendors post actual price and performance data on the web, let alone the methodology they used to arrive at those performance numbers. If only there were an independent third party that ran Netperf, Iperf, ntttcps, ntttcpr, and other tools on all the available 10GbE NICs using the same test systems, then posted the results for everyone to see. Some companies would never recover. For legal reasons the vendors won't, and in most cases do not want to, do it, because the results would only help one or two companies, and likely not theirs. Today, the cost of the adapter is almost all consumers have to go on. Wouldn't it be great if you knew the cost per Mbps of an adapter before buying it, so you could easily compare adapters (see the sketch below)? Some would argue that features like iWARP and TOE should be factored in, but today they are mostly marketing fluff and rarely deliver significant end-user value.
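Here is a minimal sketch of that cost-per-Mbps metric. The adapter names, prices, and throughput figures below are made-up placeholders; plug in your own adapter prices and your own measured Netperf or Iperf results.

```python
# Cost/Mbps: purchase price divided by measured throughput.
# All figures here are hypothetical placeholders.
adapters = {
    "NIC A": {"price_usd": 800, "measured_mbps": 9400},
    "NIC B": {"price_usd": 1200, "measured_mbps": 9600},
}

for name, a in adapters.items():
    cost_per_mbps = a["price_usd"] / a["measured_mbps"]
    print(f"{name}: ${cost_per_mbps:.3f}/Mbps")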
 
So how do you determine which NIC will perform the best and deliver the most value for your company? Do a bake-off! If you can make the time and the project is big enough, the cost of conducting the bake-off should easily be offset by the savings, education, and performance gains you reap over time. Also, a well-constructed and well-executed bake-off will demonstrate, not only to you but to your management, that you're an effective individual and a good steward of the company's resources.
Finally, share the full set of results with the vendors that participated. Some will moan and groan, while others will kindly thank you for the opportunity to compete and move on. If the race was close, their reactions at this point might be your deciding factor. So pull on your oven mitts and start baking…


Hidden Costs and Benefits of 10GbE

This post was originally published in August of 2008 at 10GbE.net.

When embarking on a new IT project, one rarely considers the network, unless, of course, the network is the project. Data networks in many cases get the same level of attention as the AC power: you expect plenty to be available, all the time, and without interruption. Rarely is the network considered a performance bottleneck.

One time I assumed responsibility for improving the performance of an MS SQL server that was vital to our business. The primary job this server ran took 75 minutes, and it was scheduled to run (how many of you saw this coming?) every hour! This server was tracking and reporting on tens of millions of dollars in new business every month.

At first glance, I noticed several back-to-back-to-back bottlenecks. The system was memory starved, the drives were in a near-constant state of thrashing, and all SQL I/O from the system went through a $10 NIC. Although the NIC functioned, it was forcing the switch to drop far too many packets. At lunch that day we picked up a newer server-class NIC for $40 and immediately recorded a substantial performance improvement: the job would finish in just under the 60 minutes allowed. We could have spent the next week chasing performance curves; instead, we installed a new server, a dual-processor single-core box, and the job then completed in well under a minute. So a $40 NIC improved performance by 20%, while replacing the whole server for roughly $5,000 improved performance by 98%. Clearly, the NIC delivered the biggest bang for the buck, but it just brought the network performance curve in line with that of the CPU, memory, and disk.
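Checking that bang-for-the-buck claim is simple arithmetic; here is a quick sketch using the figures from the story (a 75-minute baseline, a $40 NIC, a roughly $5,000 server).

```python
# Rough "performance gain per dollar" for each fix; run times in minutes,
# figures taken from the story above.
baseline = 75.0    # minutes per run before any fix
nic_fix = 60.0     # just under the hourly window after the $40 NIC
server_fix = 1.0   # well under a minute after the ~$5,000 server

nic_gain = 1 - nic_fix / baseline        # 20%
server_gain = 1 - server_fix / baseline  # ~98.7%

print(f"$40 NIC:       {nic_gain:.1%} faster, {nic_gain / 40:.4%}/dollar")
print(f"$5,000 server: {server_gain:.1%} faster, {server_gain / 5000:.4%}/dollar")
```

The NIC wins on gain per dollar by a factor of roughly 25, even though the server delivered the far larger absolute improvement.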
 
How many dual-socket quad-core servers were installed today, August 13th, 2008, with GbE? These servers have 4X the horsepower of my $5,000 server from 2002, but they both share the same GbE. Furthermore, today we use VMware and Xen to pack several logical servers into a single physical server in an effort to utilize our hardware resources more efficiently. We don't hesitate to add more memory or disk, but adding a 10GbE board requires substantially more effort and planning.
 
When making the jump from GbE to 10GbE, one needs to select not only a NIC but also the media (CX4 or fiber) and a new switch infrastructure. High-performance NICs run $700-$2,000 each, depending on the media and vendor. If you go fiber, the optics run $500-$3,000 each, and you need one on each end of the cable. Finally, there's the switch: stackable layer-2 switches run in the $400-$1,200/port range, while enterprise layer-3 switches often run several thousand dollars per port.
 
If your server is I/O bound, a good 10GbE NIC and switch can enable 5-10X the output of the "free" GbE port that comes with your server. Suppose you purchase a new server for $5,000, then add a high-performance 10GbE CX4 copper NIC and use a low-cost layer-2 switch, so the upgrade to 10GbE costs roughly $1,200 for this server. You only need to measure a 25% gain in overall performance to realize a positive return on your investment (see the sketch below)! There is a new breed of hybrid switches that offer 24 GbE ports and four 10GbE ports, so one can easily make the shift from GbE to 10GbE for servers. Consider giving 10GbE a try.
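The breakeven math behind that 25% figure is worth making explicit; this is a minimal sketch using the assumed costs above.

```python
# Breakeven: the upgrade pays for itself once the performance gain
# matches the fractional increase in total cost.
server_cost = 5000
upgrade_cost = 1200   # 10GbE CX4 NIC plus a low-cost switch port

breakeven_gain = upgrade_cost / server_cost
print(f"Breakeven performance gain: {breakeven_gain:.0%}")  # 24%
```

Anything above roughly 24% more work per dollar spent is a net win, and an I/O-bound server moving from GbE to 10GbE can plausibly gain far more than that.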


TOEs are now last season's Manolo Blahniks, only worse

This was originally published in June of 2008 at 10GbE.net.

For those not into style, Manolo Blahnik is one of the leading designers of women's shoes, and Blahniks often start at $700 a pair, the price of a good 10GbE NIC. As most servers have moved to dual-socket quad-core processors, the value proposition for TCP Offload Engine (TOE) 10GbE NICs has quickly eroded.

In the spring of 2006, a good non-TOE 10GbE NIC consumed 40% of the host CPU in a dual-socket dual-core server while providing >6Gbps of performance, while a similar TOE did the same job using only 10% of the host CPU. With a 30% savings in host CPU, there was some value in using a TOE. After two years of improvements in silicon and stateless offloads, and with servers moving to dual-socket quad-cores, we now have non-TOE 10GbE NICs capable of near-wire rate (>9.5Gbps) that consume only 10% of the host CPU. Similarly, TOE NICs in the same environment consume roughly 5% of the host CPU.
By most estimates, servers typically run at 20% CPU utilization as a result of application load. So will a 5% savings in host CPU be noticed, let alone be worth the added purchase price of a TOE? No (see the sketch below). Add to that the Linux Foundation's 14-point argument against using TOEs, written by the Linux kernel developers themselves, and one wonders why people still consider TOEs in style.
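A quick sanity check on that conclusion, using the utilization figures above; the numbers are the post's, and the arithmetic is only illustrative.

```python
# How much idle CPU headroom does a TOE actually buy back?
app_load = 0.20      # typical server CPU utilization
non_toe_nic = 0.10   # host CPU consumed by a modern non-TOE 10GbE NIC
toe_nic = 0.05       # host CPU consumed by a TOE NIC

print(f"Headroom without TOE: {1 - (app_load + non_toe_nic):.0%}")  # 70%
print(f"Headroom with TOE:    {1 - (app_load + toe_nic):.0%}")      # 75%
```

Five points of extra headroom on a box that is already 70% idle is hard to notice, which is the point.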
 
Here are the 14 reasons cited by the Linux Foundation on their TOE page:
 
  1. Security updates
  2. Point-in-time solution
  3. Different network behavior
  4. Performance
  5. Hardware-specific limits
  6. Resource-based denial-of-service attacks
  7. RFC compliance
  8. Linux features
  9. Requires vendor-specific tools
  10. Poor user support
  11. Short term kernel maintenance
  12. Long term user support
  13. Long term kernel maintenance
  14. Eliminates global system view
 
If you are seriously interested in buying a TOE, you should read their TOE page.


Optics Adoption

This post was originally published in April of 2008 at 10GbE.net.

Today, for short runs under 15 meters, there are two common options: CX4 copper and SR (short range) fiber. The difference between them is essentially the cost of the fiber optic modules. Today the most common module for 10GbE is the XFP; soon it will be the SFP+. There are three sources for SR optics under $700 listed on our optics page. Optics are required on both ends, which makes fiber typically $1,400 more expensive than CX4 copper (see the sketch below). Also, copper adapters require less support logic and as such are often less expensive.
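That $1,400 premium is simply two optics per link; here is the trivial sketch, using the roughly $700-per-optic figure above.

```python
# Fiber needs an SR optic at each end of the link; CX4 does not.
sr_optic_usd = 700    # price ceiling quoted above
optics_per_link = 2   # one at the NIC, one at the switch

fiber_premium = optics_per_link * sr_optic_usd
print(f"Fiber premium over CX4 per link: ${fiber_premium}")  # $1,400
```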

Single-port copper NICs run in the $700-$1,000 range, while similar fiber NICs are $800-$1,200. The expectation is that SFP+ fiber modules will be roughly 25% less expensive than XFPs, which will make fiber more affordable in the second half of 2008 as SFP+ gains traction. The real knee in the 10GbE adoption curve, though, will occur when the next generation of 10GBase-T products hits the market in early 2009. The current generation of 10GBase-T silicon requires far too much power to be practical. The second generation will allow people to use cables and connectors they are familiar with, e.g. Cat6a and RJ45, to attach servers and switches within 100M, without the expensive CX4 cables or the optics required today for fiber. Finally, most of the 10GbE NIC vendors are on their second or third generation of silicon. By early 2009 most will have trimmed and tuned things to the point that they have, or will soon support, LAN on Motherboard solutions. When that happens we will see high-end servers with 10GBase-T support built in, and 10GbE will then truly begin to replace GbE in the enterprise. We expect this to become common as we enter 2009.


Is 10GBase-T in fashion?

This was originally posted in March of 2008 at 10GbE.net.

No. In March and April, several companies began marketing 10GBase-T NICs: Chelsio, Neterion, Tehuti, and even Mellanox (the Infiniband company). Only one switch company, SMC, has dipped its toe into the 10GBase-T market. Why? Power. All of these products are based on first-generation 10GBase-T silicon, which is very thirsty for power.

In the 10GbE world, all the NIC vendors separate their 10GbE chip from the physical-interface (PHY) chip so they can be more responsive and flexible in creating NIC products and can easily support several PHYs with a single 10GbE NIC chip. Today only three companies make 10GBase-T PHY chips: Solarflare, Teranetics, and Aquantia. Teranetics is having the most success, signing Chelsio, Tehuti, and Mellanox, while Solarflare picked up SMC. What most avoid telling you is how much power these 10GBase-T PHY chips require: 8-12W. The vast majority of this power is used for a single purpose: separating the signal from the noise, the needle from the haystack.

What does this mean to you? Here is a simple example with 50 servers, focused only on PHY power, for each of the three currently available media formats: the power budget for the PHY at each end (NIC or switch), the total to support one server (both NIC and switch PHY power), and the total for a 50-server project:

  • 10GBase-CX4 PHY: 0.5W/end, 1W/server, 50W for the project
  • 10GBase-R (XFP) PHY: 3W/end, 6W/server, 300W for the project
  • 10GBase-T PHY: 10W/end, 20W/server, 1,000W for the project

This is the power needed just to support the 10GBase-T cabling, and it is enough energy to power two of the servers in your project! It is a cost you will carry for the life of the project, and we all know conditioned data center power and cooling are not cheap.
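For those who want to rerun the budget with their own server counts, here is a minimal sketch that reproduces the numbers above; the per-end wattages are the post's figures.

```python
# PHY power budget: watts per PHY end, two ends per server link
# (NIC side and switch side), scaled to the project size.
servers = 50
phy_watts_per_end = {
    "10GBase-CX4": 0.5,
    "10GBase-R (XFP)": 3.0,
    "10GBase-T": 10.0,
}

for media, watts in phy_watts_per_end.items():
    per_server = 2 * watts
    print(f"{media}: {watts}W/end, {per_server}W/server, "
          f"{per_server * servers:.0f}W for the project")
```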

When GbE first came out, the initial round of PHY chips was also power hungry; of course, that is no longer the case. With 10GbE the physics are significantly more challenging, and it may take another year or two before 10GBase-T solutions have power consumption similar to CX4. If you are doing a project today that can benefit from 10GbE, use CX4 if possible, or fiber if you need more than 15M. For fiber, consider using SFP+ or XFP modules, as they are the most current optics, are the least expensive, and consume far less power than XENPAK or X2.

Note: on April 14th Solarflare announced their new PHY silicon, the 10Xpress SFT9001, which consumes between 2.2W and 6W depending on cable length. This brings 10GBase-T into parity with fiber. The chip will be available in sample lots to 10GbE OEMs in May. Even more recently, on April 21st, Aquantia announced a 10GBase-T PHY chip, also sampling in May, which claims to bring power down to 5.5W for Cat6A cable up to 100M long. The delay from PHY samples to completed NIC samples for customers is often in the neighborhood of 3-6 months, so in our opinion 10GBase-T NICs with a reasonable power envelope, 10-15W for the entire NIC, should be available for consideration in the fall of 2008, just in time for those with year-end budgets.

For more information and another perspective, consider checking out what the Linley Group has to say on 10GBase-T.
