This article was originally published in May of 2011 on 10GbE.net.
Today, for the umpteenth time, I had to explain to someone that if you go optical to connect your server to your switch with 10GbE, it can easily cost you twice as much as copper. There is a secret at the end of this entry that MIGHT save you some serious cash if you have enough muscle, so read all the way to the end.
This article was originally published in April of 2009 at 10GbE.net.
One would think that after 30 years our industry would have developed a NIC naming convention for “dual-port.” Does a dual-port NIC mean your OS sees one interface or two? Does it mean one port is active and the other is for fail-over? Can a dual-port NIC run traffic through both ports simultaneously? It all depends on whom you talk to, and the product they’re selling.
- Chelsio’s N320E for $790 is an example of this type of card.
- Intel’s AF DA card for $799 appears to be another example of this class of card.
- Myricom’s 10G-PCIE2-8B2-2S+E for $995 appears to be the only example of this approach. Myricom utilizes two unique 10GbE controllers on the same PCI Express Gen2 NIC and a PCI Express bridge chip to break the slot into two unique NIC devices.
- Myricom’s 10G-PCIE-8B-2S+E for $795 is an example of this type of card. The fail-over time is under 10 microseconds.
- Chelsio’s B320E Bypass adapter for $3,483 is similar but it can detect an OS/BIOS/System failure and make a hard switch over to the second port.
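To put the cards above side by side, here is a minimal Python sketch; the list prices come from the entries above, and the price-per-port figure is simply the list price divided by two, for illustration only:

```python
# Rough price-per-port comparison for the dual-port 10GbE cards
# listed above. List prices are from the article; everything else
# is simple arithmetic for illustration.
cards = {
    "Chelsio N320E": 790,
    "Intel AF DA": 799,
    "Myricom 10G-PCIE2-8B2-2S+E": 995,
    "Myricom 10G-PCIE-8B-2S+E": 795,
    "Chelsio B320E Bypass": 3483,
}

for name, price in sorted(cards.items(), key=lambda kv: kv[1]):
    per_port = price / 2  # all of these are dual-port cards
    print(f"{name:30s} ${price:5d} total, ${per_port:7.2f}/port")
```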
This article was originally published in January of 2009 at 10GbE.net.
In 2007, over one million 10GbE network ports were purchased. Many of those were for switch-to-switch interconnects, but some connected servers to networks via 10GbE. Natural selection is now taking effect in the 10GbE NIC market as the big dogs, Intel & Broadcom, start thrashing around in an effort to secure market share as 10GbE matures. Both want to dominate the 10GbE LAN on Motherboard (LoM) market. In the NIC market, four companies likely supply over 80% of the 10GbE NICs purchased: Chelsio, Intel, Myricom, and Neterion. The remaining 20% of NIC sales fall to companies like Broadcom, SMC, NetXen, ServerEngines, Tehuti, AdvancedIO, Endace, Napatech, etc. One might wonder why Broadcom is in the second group: Broadcom’s focus is on selling 10GbE silicon to OEMs like IBM and HP for LoM projects, positioning its silicon on high-end server motherboards rather than retailing NIC cards.
This article was originally published in November of 2008 at 10GbE.net.
This article was originally published in October of 2008 at 10GbE.net.
- Flexibility – Cisco and Juniper both selected SFP+ as the PHY for their new lines of 10GbE switches. Offering an SFP+ cable with a connector on each end, so that a single SFP+ port covers all your connection needs, is a stroke of genius. Say you need a short run from one switch to a server: plug in a Twinax cable with SFP+ connectors on each end and you’re good to go, up to 10 meters. Suppose later you need to move that server another 50 meters away: pop SR optics into both ends and use fiber. No changes to the servers or switches, just swap in optics.
- Cost – there has been a recent run-up in the price of copper, while the cost of Twinax coax cable has remained fairly stable.
- Power – SFP+ is rated at 1W per port, and the Twinax solution typically draws 1/4W. CX4 is similar, but compared to 10GBase-T at 10W (current generation), or even 2W for the next generation (under 30m), this is a huge power saving.
- Latency over 10GBase-T – current 10GBase-T uses a DSP at each end to separate the signal from the noise. This DSP adds roughly two microseconds at each end of the connection, compared to under 200ns for the Twinax conversion.
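The power and latency bullets above can be tallied per link with a quick Python sketch. The per-port and per-end figures are from the article; treating the next-generation 10GBase-T DSP delay as unchanged from the current generation is my assumption, not the article’s:

```python
# Per-link comparison of the PHY options discussed above.
# Power is per port (two ports per link); latency penalties are
# per end (two ends per link). Next-gen 10GBase-T latency is an
# assumption (same DSP delay as the current generation).
phys = {
    # name: (watts_per_port, seconds_added_per_end)
    "SFP+ Twinax":          (0.25, 200e-9),
    "SFP+ (rated max)":     (1.0,  200e-9),
    "10GBase-T (current)":  (10.0, 2e-6),
    "10GBase-T (next gen)": (2.0,  2e-6),
}

for name, (watts, per_end) in phys.items():
    link_power = 2 * watts          # a port at each end of the link
    one_way_penalty = 2 * per_end   # PHY latency paid at each end
    print(f"{name:22s} {link_power:5.1f} W/link, "
          f"{one_way_penalty * 1e9:6.0f} ns added one-way")
```

The interesting ratio is power: current-generation 10GBase-T draws 40x the per-port power of the Twinax solution.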
This article was originally published in September 2008 at 10GbE.net.
This post was originally published in August of 2008 at 10GbE.net.
When embarking on a new IT project, one rarely considers the network, unless of course the network is the project. In many cases, data networks get the same level of attention as AC power: you expect plenty to be available, all the time, without interruption. Rarely is the network considered a performance bottleneck.
This was originally published in June of 2008 at 10GbE.net.
For those not into style, Manolo Blahnik is one of the leading designers of women’s shoes, and Blahniks often start at $700 a pair, the price of a good 10GbE NIC. As most servers have moved to dual-socket, quad-core processors, the value proposition of TCP Offload Engine (TOE) 10GbE NICs has quickly eroded.
- Security updates
- Point-in-time solution
- Different network behavior
- Hardware-specific limits
- Resource-based denial-of-service attacks
- RFC compliance
- Linux features
- Requires vendor-specific tools
- Poor user support
- Short-term kernel maintenance
- Long-term user support
- Long-term kernel maintenance
- Eliminates global system view
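One way to see why multicore erodes TOE’s value is the old rule of thumb that full-rate TCP costs roughly 1 GHz of CPU per 1 Gb/s of throughput. A back-of-the-envelope Python sketch; the rule of thumb and the 2.5 GHz clock are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope: can host CPUs absorb 10GbE TCP processing
# without a TOE? Uses the rough "1 GHz of CPU per 1 Gb/s of TCP
# throughput" rule of thumb; the clock speed is an assumed example.
sockets, cores_per_socket, ghz = 2, 4, 2.5  # dual-socket quad-core
total_ghz = sockets * cores_per_socket * ghz

line_rate_gbps = 10.0
ghz_needed = line_rate_gbps * 1.0   # ~1 GHz per Gb/s rule of thumb
headroom = total_ghz - ghz_needed

print(f"CPU available: {total_ghz:.0f} GHz, "
      f"TCP at 10 Gb/s needs ~{ghz_needed:.0f} GHz, "
      f"headroom ~{headroom:.0f} GHz")
```

By this crude measure, a commodity dual-socket quad-core server has roughly twice the cycles needed to drive 10GbE in software, which is the erosion the post describes.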
This post was originally published in April of 2008 at 10GbE.net.
Today, for short runs under 15 meters, there are two common options: CX4 copper and SR (short-range) fiber. The cost difference between them is essentially the cost of the fiber-optic modules. Today the most common module for 10GbE is the XFP; soon it will be SFP+. Three sources for SR optics under $700 are listed on our optics page. Optics are required on both ends, which makes fiber typically $1,400 more expensive than CX4 copper. Copper adapters also require less support logic and as such are often less expensive.
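The arithmetic behind that $1,400 figure is simple enough to spell out. A quick Python sketch, using the ~$700-per-module price from the paragraph above:

```python
# Short-run (<15 m) cost premium of SR fiber over CX4 copper.
# The article's point: optics are needed at BOTH ends of the link,
# so the premium is roughly twice the per-module price.
sr_optic_price = 700   # per SR module, from the article
ends_per_link = 2      # an optic at each end

fiber_premium = ends_per_link * sr_optic_price
print(f"SR fiber premium over CX4: ~${fiber_premium}")
```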