Recently Solarflare concluded testing that measured how much traffic Nginx could service before it began dropping requests. We then scaled up the number of cores provided to Nginx to see how additional compute resources affected the servicing of web page requests, and this is the resulting graph:
As you can see from the above graph, most NIC implementations require about six cores to achieve 80% of wire-rate. The major difference highlighted in this graph, though, is that with a Solarflare adapter and its OpenOnload OS-bypass driver, Nginx achieves 90% of wire-rate performance utilizing ONLY two cores versus six. Note that this comparison is against Intel's most current 10G NIC, the X710.
What’s interesting here is that OpenOnload can internally bond together up to six 10G links before a configuration-file change is required to support more. This means that a single 12-core server running a single Nginx instance should be able to service 90% of wire-rate across all six 10G links, or theoretically 54 Gbps of web page traffic. Of course, this assumes everything is in memory and the rest of the system is properly tuned. Viewed another way, that is 4.5 Gbps/core of web traffic serviced by Nginx running with OpenOnload on a Solarflare adapter, compared to 1.4 Gbps/core with an Intel 10G NIC. That is a 3X gain in performance for Solarflare over Intel. How is this possible?
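The figures above can be sanity-checked with a little arithmetic. A quick sketch (the 1.4 Gbps/core kernel-stack figure is read from the graph, not computed):

```shell
# Back-of-the-envelope check of the throughput figures quoted above.
awk 'BEGIN {
  onload_total    = 6 * 10 * 0.90       # six 10G links at 90% wire-rate
  onload_per_core = onload_total / 12   # single 12-core server
  kernel_per_core = 1.4                 # per-core figure read from the graph
  printf "%.0f %.1f %.1f\n", onload_total, onload_per_core,
         onload_per_core / kernel_per_core
}'
# prints "54 4.5 3.2"
```

The ratio lands at roughly 3.2, which matches the "3X" claim.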
Simple: OpenOnload is a user-space stack that communicates directly with the network adapter in the most efficient manner possible to service UDP and TCP requests. The latest version of OpenOnload has also been tuned to address the C10K problem. What’s important to note is that by bypassing the Linux kernel to service these communication requests, Solarflare reduces kernel context switches and memory copies on each core, and can more effectively utilize the processor cache. All of this translates to more available cycles for Nginx on each and every core.
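In practice, no application changes are needed to get this: OpenOnload intercepts standard socket calls by preloading its user-space stack underneath an unmodified binary. A minimal sketch, assuming OpenOnload is installed and its launcher is on the PATH:

```shell
# Run an unmodified Nginx under OpenOnload's user-space TCP/UDP stack.
# ('latency' is one of the stock Onload tuning profiles shipped with the install.)
onload --profile=latency nginx

# The launcher is equivalent to preloading the Onload library directly:
LD_PRELOAD=libonload.so nginx
```

Because interception happens at the socket API, Nginx itself is unaware it is running on a bypass stack.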
To further drive this point home, we ran an additional test showing the performance gains OpenOnload delivered to Nginx at 40GbE. Here you can see that the kernel stack limits Nginx on a 10-core system to servicing about 15 Gbps. With the addition of just OpenOnload, that number jumps to 45 Gbps: again, a 3X gain in performance.