Servers Can Protect Themselves From a DDoS Attack

Solarflare is completing SolarSecure Server Defense, a Docker container housing a state-of-the-art threat detection and mitigation system. This system dynamically detects new threats and updates the filters applied to all network packets traversing the kernel network device driver, fending off future attacks in real time without direct human intervention. To do this Solarflare has employed four technologies: OpenOnload, SolarCapture Live, Bro Network Security Monitor, and the SolarSecure Filter Engine.

OpenOnload provides an OS Bypass means of shunting copies of all packets that make it past the current filter set to SolarCapture. SolarCapture provides a libpcap framework for packet capture, which then hands these copied packets to Bro for analysis. Bro applies a series of scripts to each packet, and if a script detects a hit it raises an event. Each class of event then triggers a SolarSecure Filter Engine script, which creates a new network packet filter. This filter is loaded in real time into the packet filter engine of the network adapter's kernel device driver and applied to all future network packets. Finally, Server Defense can alert your admins as new rules are created on each server across your infrastructure.
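
The detect-then-filter loop described above can be sketched in a few lines of Python. Everything here (ThreatEvent, make_drop_filter, the rule format) is purely illustrative, not Solarflare's actual API:

```python
# Hypothetical sketch of a detection event being turned into a live drop filter.
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    kind: str        # e.g. "syn_flood", "port_scan"
    src_ip: str      # offending source address

def make_drop_filter(event: ThreatEvent) -> dict:
    """Translate a detection event into a simple drop rule."""
    return {"action": "drop", "src_ip": event.src_ip, "reason": event.kind}

def apply_filters(packets, filters):
    """Return only the packets whose source is not covered by a drop rule."""
    blocked = {f["src_ip"] for f in filters if f["action"] == "drop"}
    return [p for p in packets if p["src_ip"] not in blocked]

# A Bro-style script raises an event; the engine turns it into a filter
# that is applied to all subsequent traffic.
filters = [make_drop_filter(ThreatEvent("syn_flood", "203.0.113.9"))]
traffic = [{"src_ip": "203.0.113.9"}, {"src_ip": "198.51.100.4"}]
print(apply_filters(traffic, filters))  # only the untainted packet survives
```

The real system does this at the device-driver level, of course; the point is simply that each event class maps mechanically to a new filter rule.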

SolarSecure Server Defense inspects all inbound, outbound, container-to-container, and VM-to-VM packets on the same physical server, and filters are applied to every packet. This uniquely positions SolarSecure Server Defense as the only containerized cyber defense solution designed to protect each individual server, VM, or container within an enterprise from a wide class of threats, ranging from a simple SYN flood to a sophisticated DDoS attack. Even more compelling, it can defend against attacks originating from inside the same physical network, behind your existing perimeter defenses. It can even defend one VM from an attack launched by another VM on the same physical server!

To learn more please contact Scott Schweitzer at Solarflare.

3X Better Performance with Nginx

Recently Solarflare concluded some testing with Nginx that measured the amount of traffic Nginx could respond to before it started dropping requests. We then scaled up the number of cores provided to Nginx to see how additional compute resources impacted the servicing of web page requests, and this is the resulting graph:


As you can see from the above graph, most NIC implementations require about six cores to achieve 80% wire rate. The major difference highlighted in this graph, though, is that with a Solarflare adapter and its OpenOnload OS Bypass driver, 90% wire-rate performance can be achieved utilizing ONLY two cores versus six. Note that the comparison is against Intel's most current 10G NIC, the X710.

What's interesting here, though, is that OpenOnload can internally bond together up to six 10G links before a configuration file change is required to support more. This could mean that a single 12-core server running a single Nginx instance should be able to service 90% wire rate across all six 10G links, or theoretically 54Gbps of web page traffic. Now, of course, this assumes everything is in memory and the rest of the system is properly tuned. Viewed another way, this is 4.5Gbps/core of web traffic serviced by Nginx running with OpenOnload on a Solarflare adapter, compared to 1.4Gbps/core of web traffic with an Intel 10G NIC. That is a 3X gain in performance for Solarflare over Intel. How is this possible?
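
A quick back-of-the-envelope check of those numbers:

```python
# Sanity-check the throughput figures quoted above.
links = 6                    # OpenOnload can bond up to six 10G links
link_rate_gbps = 10
wire_rate_fraction = 0.90    # 90% wire rate with OpenOnload

total_gbps = links * link_rate_gbps * wire_rate_fraction
print(total_gbps)            # about 54 Gbps across all six links

onload_per_core = total_gbps / 12     # 12-core server: 4.5 Gbps/core
kernel_per_core = (10 * 0.80) / 6     # six cores to hit 80% of one 10G link
print(round(onload_per_core / kernel_per_core, 1))  # roughly a 3X gain
```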

Simple: OpenOnload is a user-space stack that communicates directly with the network adapter in the most efficient manner possible to service UDP & TCP requests. The latest version of OpenOnload has also been tuned to address the C10K problem. What's important to note is that by bypassing the Linux OS to service these communication requests, Solarflare reduces kernel context switches per core and memory copies, and can more effectively utilize the processor cache. All of this translates to more available cycles for Nginx on each and every core.

To further drive this point home, we ran an additional test showing just the performance gains OpenOnload (OOL) delivered to Nginx on 40GbE. Here you can see that the OS limits Nginx on a 10-core system to servicing about 15Gbps. With the addition of just OpenOnload to Nginx, that number jumps to 45Gbps: again, a 3X gain in performance.

If you have web servers today running Nginx, and you want to give them a gargantuan boost in performance, please consider Solarflare and their OpenOnload technology. Imagine taking an existing web server that has been running on a single Intel X520 dual-port 10G card, replacing that with a Solarflare SFN7122F card, installing their OpenOnload drivers, and seeing a 3X boost in performance. This is a fantastic way to breathe new life into installed web servers. Please consider contacting Solarflare today to do a 10G OpenOnload proof of concept so you can see these performance gains for yourself firsthand.

Beyond Gigabit Ethernet

Where wired connections can be made, they will always provide superior performance to wireless techniques. Since the commercialization of the telegraph over 175 years ago, mankind has been looking for ever faster ways to encode & transfer information. The wired standard we're all most familiar with today is Gigabit Ethernet (GbE). It runs throughout your office to your desktop, phone, printers, copiers, and wireless access points. It is the most pervasive method in the enterprise for reliably linking devices. So what's next?

Two weeks ago, if you'd asked most technology professionals, they would have answered 10 Gigabit Ethernet (10GbE). That was the commonly accepted plan. Then Cisco, Aquantia, Freescale & Xilinx announced an alliance to further develop & promote a proposed next-generation (NBase-T) wired standard supporting 2.5GbE & 5GbE speeds over existing installed Category 5e & 6 cables. We all know Cisco, and that's enough to get pretty much everyone's attention, but who are the other three? Aquantia is one of the leaders in producing the physical interface (PHY) chips that sit at both ends of the wire. Switch companies like Cisco use Aquantia, as do network interface card companies like Solarflare, Intel, and Chelsio. Aquantia has figured out how to take digital information and encode it into electrical signals designed to travel at very high speeds through very noisy wires. Then, on the other end, their chips have the smarts to find the signal within the vast amount of noise created by the wires themselves. Freescale & Xilinx are a bit further up the food chain; they make more programmable chips that can be positioned between Aquantia's PHYs and Cisco's switch logic, or the Intel processor in your computer.

So why did Cisco push to form the NBase-T Alliance, and what do they gain from this investment? It turns out that improvements in wireless networking are behind this, and Cisco has a large wireless business. In commercial environments, wireless access points now use a wider range of frequencies in parallel so they can service more of our wireless devices. These access points are pushing the limits of what GbE is capable of on the back end. Since most enterprises are already wired with Cat5e or Cat6, rewiring to support 10GbE would be very expensive. Hence the drive toward NBase-T.

The question, though, is what about performance desktop users? Folks doing video editing, simulation, or anything data intensive could easily push well beyond GbE. We're now starting to see Apple & others ship 4K-resolution desktop computers and displays. These devices can be huge data consumers. What's the plan for supporting them beyond GbE? The answer still appears to be 10GbE, but time will tell.

Your Server as the Last Line of Cyber Defense

Here is an excerpt from an article I wrote for Cyber Defense Magazine that was published earlier today:

Since the days of medieval castle design, architects have cleverly engineered concentric defensive layers, along with traps, to thwart attackers and protect the stronghold. Today many people still believe the moat was a water obstacle designed to protect the outer wall, when in fact it was often inside the outer wall and structured as a reservoir to flood any attempt at tunneling in. Much like these kingdoms of old, companies today are leveraging similar design strategies to protect themselves from Internet attackers.

The last line of defense is always the structure of the wall and the guards of the castle keep itself. Today the keep is your network server, which provides customers with web content, partners with business data, and employees with remote access. All traffic that enters your server comes in through a network interface card (NIC). The NIC represents both the wall and the guards of the castle keep. Your NIC should support a stateless packet-filtering firewall application that is authorized to drop all unacceptable packets. By operating within both the NIC and the kernel driver, this software can drop packets from known Internet marauders, rate limit all inbound traffic, filter off SYN floods, and only pass traffic on acceptable ports. By applying all of these techniques, your server can be far more available for your customers, partners, and employees.
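
The stateless checks listed above amount to a simple admit/drop decision per packet. Here is a minimal sketch, with made-up field names and thresholds rather than any real NIC firewall interface:

```python
# Toy stateless packet filter; all names and limits are illustrative.
BLOCKED_IPS = {"203.0.113.7"}      # known Internet marauders
ALLOWED_PORTS = {80, 443}          # only pass traffic on acceptable ports
SYN_RATE_LIMIT = 1000              # max SYNs/sec before shedding load

def admit(packet: dict, syn_rate: int) -> bool:
    """Return True if this packet should be passed up the stack."""
    if packet["src_ip"] in BLOCKED_IPS:
        return False               # drop known-bad sources
    if packet["dst_port"] not in ALLOWED_PORTS:
        return False               # drop unexpected ports
    if packet.get("syn") and syn_rate > SYN_RATE_LIMIT:
        return False               # filter off a SYN flood in progress
    return True

print(admit({"src_ip": "198.51.100.1", "dst_port": 443}, syn_rate=10))
```

Because every check is stateless, each packet can be judged in isolation, which is what makes this kind of filtering cheap enough to run in the NIC and kernel driver.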

For the rest of the article, with several cool sections of code that explain how to protect your server, please visit Cyber Defense Magazine.

Building an Inexpensive Performance Packet Generator

You know that nice feeling you get when someone surprises you with a feature you weren't expecting, but that totally changes the way you use something? Like when your son told you about that free HBO app, so you can now watch HBO on your iPad. Well, recently Solarflare released an update to SolarCapture Pro (SCP V1.3) with just such a feature: replay.

On the surface, replay sounds rather humdrum: you can replay libpcap files out to an Ethernet interface. So for the über-nerds out there, yes, you can plug this into Ostinato and make a poor man's high-performance Ixia for under $2,250 (you'll need an SFN7122F & the SFS-SCP software) plus the cost of your server.

Considering that Solarflare provides the highest-performance network adapters currently available for both 10GbE & 40GbE, this replay feature could be extremely powerful. For example, someone could load a server up with memory, spin up one or more very large libpcap files, then use the following command to blast them into their network at wire rate.

solar_replay pps=1.5e6 prebuffer repeat=512 eth2=play1.pcap eth3=play1.pcap

In this example replay will sustain 1.5 million packets per second (Mpps); note this rate can be as high as 14.8Mpps if your pcap file is all small packets. Before the replay actually starts, play1.pcap will be "prebuffered," meaning it is loaded into memory so that disk performance won't be a factor in the playback. Next, the replay will loop 512 times. Finally, it will replay the same buffer out both ports on the adapter, eth2 & eth3.
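
To get a feel for the numbers, here is a rough estimate of how long such a replay runs, assuming (purely for illustration) a capture file of one million packets:

```python
# Rough replay-duration estimate for the solar_replay invocation above.
pps = 1.5e6                 # pps=1.5e6 from the command line
packets_in_file = 1_000_000  # illustrative value, not from the article
repeats = 512                # repeat=512 from the command line

seconds = packets_in_file * repeats / pps
print(round(seconds))        # about 341 seconds of sustained traffic per port
```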

So what will this look like? Simple: a storm of packets on two interfaces that are hopefully attached to different switches in your infrastructure. Note that the packet rate is actually limited by the size of the packets.

Additionally, you can pin the replay to specific cores, increase the number of buffers, adjust port & time ranges of what you want to replay from the pcap files, and throttle the rate to a multiple of the initial capture speed.

This is by far the most advanced replay capability available today on an ASIC-based network adapter. SolarCapture is extremely powerful, and this sample just scratches the surface of what it is capable of.

If you’re interested in taking SolarCapture out for a test drive, or just want to learn more feel free to contact me, or reach out directly to Solarflare.

Towards a More Secure Network Interface (SNI)

Many of the objects in our lives are Internet connected. Everything from watches to home thermostats, refrigerators & even septic systems is now wired to the Internet. All of these devices carry a certain expectation of trust when they connect to the Internet. Unfortunately, therein lies the fundamental flaw. This "trust everything" model is inherent in nearly all network-connected hardware that individuals & corporations deploy, with the specific exception, of course, of security appliances.

Why do our networks work this way? Because it's easier for hardware engineers to assume trust than to require authentication. Take, for example, your car: it has hundreds of systems & sensors that are all interconnected. There is an assumed level of trust by every device that makes up your vehicle, because the automaker believed they controlled everything. Now suppose you're driving along at, say, 60MPH, and I were to reach in through your OnStar link & activate the ABS system on the right side of the vehicle. How's that trust working for you now? Don't laugh, I'm serious. Automobile manufacturers are all facing this issue today thanks to several well-publicized hacks last summer.

Can you board a major airline in the US by simply walking into the airport, traversing the terminal, then boarding the plane? No. At a minimum, you have to go through a Transportation Security Administration (TSA) checkpoint, then a second, very simplistic validation of your ticket at the gate. The TSA, in essence, is a packet filter, where you are the packet. They look at you, your ID, run you through a millimeter scanner & your stuff through an X-ray, and if all this passes muster you're permitted to proceed.

Suppose there were a very bright tiny TSA agent who lived just inside your computer and supervised your connection to the Internet, checking every bit of data coming into your computer. This tiny TSA agent, seeing everything, applies some basic sanity checks to your inbound data; let's call this capability a Secure Network Interface (SNI). Here are some examples of the types of tests that this SNI might execute before allowing information to be handed off to your applications or operating system:

  • Is the data coming from somewhere or someone I trust?
  • Is it coming in specifically to the application I know & trust?
  • Is this a request that I find acceptable?
  • Is there anything in the request I might find objectionable?
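
As a toy illustration, those four checks could be composed into a single admit function; every name and value below is hypothetical, not a real product interface:

```python
# Hypothetical SNI admission check combining the four tests above.
TRUSTED_SOURCES = {"198.51.100.10"}     # 1: somewhere or someone I trust
TRUSTED_APPS = {443: "https"}           # 2: applications I know & trust
ACCEPTABLE_METHODS = {"GET", "POST"}    # 3: requests I find acceptable
OBJECTIONABLE = ("../", "<script")      # 4: content I find objectionable

def sni_admit(src_ip: str, dst_port: int, method: str, payload: str) -> bool:
    """Hand data to the OS/application only if all four checks pass."""
    return (src_ip in TRUSTED_SOURCES
            and dst_port in TRUSTED_APPS
            and method in ACCEPTABLE_METHODS
            and not any(bad in payload for bad in OBJECTIONABLE))

print(sni_admit("198.51.100.10", 443, "GET", "hello"))   # trusted traffic passes
print(sni_admit("203.0.113.1", 443, "GET", "hello"))     # unknown source is shunned
```

Note the default posture: anything not explicitly on a trusted list is denied, which is exactly the deny-by-default stance argued for below.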

Today corporate networks rely on firewalls and other advanced filtering & security hardware to set up a demilitarized zone (DMZ) for all their Internet servers. They then set up a second set of hardware firewalls with more restrictive rules to further protect internal systems & servers. Finally, we have the laptops, desktops & production servers; many of these also run software firewalls that do some basic network traffic filtering. Think of them as each having that gate agent checking your data just before you need it. This software firewall approach is flawed by design, because the offending network traffic has already entered your system and has had access to your device drivers and low-level OS stack functions. Imagine if the TSA only existed at the gate to your plane. Think of all the other doors & passages that would remain unprotected.

Imagine if every server had an SNI: actual hardware at the edge of your server or high-end workstation. Your network administrators could then explicitly & logically connect systems to each other & the appropriate users to one another through each of these SNI-protected systems. The default would be that all outsiders are ignored, so if your network perimeter were breached like Target's was last fall, it wouldn't make any difference. No logical connections would exist between, say, the unsecured HVAC system (yes, the thieves broke in through the server that controlled the AC) and any of the corporate servers. This HVAC system would only be known to the VPN server; all other servers would shun its existence, because the default action in their SNI would be deny. If you weren't on the approved IP list to connect with a given server, you'd be out of luck.

So does a Secure Network Interface (SNI) exist today? Yes. Solarflare has a brand new software product called SolarSecure that installs a high-performance packet filter in the silicon of the server network adapter. For now, you can click on this link to learn more. In the near future, another blog entry will explain the amazing capabilities of this exciting new technology.

Crash and Boom: Inside the 10GbE Adapter Market

It may be hard to believe, but we’re coming up on ten years with 10GbE as an adapter option for servers and workstations.
In 2003 the first 10GbE network adapters based on a new breed of chips hit the market, and by 2006 the list had grown to include nearly twenty vendors (AdvancedIO, Broadcom, Chelsio, Intel, Emulex, Endace, Mellanox, Myricom, Napatech, NetEffect, Neterion, NetXen, QLogic, ServerEngines, SMC, Solarflare, Teak Technologies, and Tehuti Networks).
Designing & building a 10GbE ASIC is not a cheap undertaking; even on a shoestring budget, it could easily run $7-10M for that first working chip. Some of these companies never made it past that initial functional 10GbE controller chip. The combined efforts above represent nearly one-quarter of a billion dollars spent to launch the 10GbE adapter market. To remain in this market long term… the full article is published over on HPCWire.

VMA – Voltaire Messaging Abandoned

This morning Mellanox announced that they are releasing the Voltaire Messaging Accelerator (VMA) as open source. Tom Thirer, director of product management at Mellanox, said: "By opening VMA source code we enable our customers with the freedom to implement the acceleration product and more easily tailor it to their specific application needs." He then followed this up with "We encourage our customers to use the free and open VMA source package and to contribute back to the community." Now, to be fair, I work for a company that has been selling 10GbE NICs, along with delivering & supporting a competing open source kernel bypass stack, to customers for over five years.

So what does moving VMA into open source mean to Mellanox's customers who run their business on systems that use VMA in production? Well, any problems or issues you have now, or will ever have, with VMA are now your problems, and you get the privilege of fixing them.

Open source is a great method for rapidly advancing a code base with broad appeal. We all know and love Linux, the perceived shining star of the open source community; it runs on everything from a $60 Raspberry Pi to IBM's System z mainframes. Open source works very well when there is significant interest in, and demand for, what the code offers. Mellanox's VMA isn't Linux; it's a very specific network driver that runs on only one company's network chip in a very niche set of markets. One of the main reasons Mellanox acquired Voltaire in 2011 for $208M was to gain control of VMA; it was one of the few unique features of Voltaire's product line. Ever since then, Mellanox has been trying to stabilize the code base, reduce the jitter (unpredictable delays that can paralyze low-latency systems), and exterminate some very pesky bugs. Those bugs, and the support issues attached to them, are the driving reason why Mellanox is now giving the source code away to the open source community.

Some might argue that they're doing the financial services, HPC, and Web 2.0 markets a huge favor by "donating" this code to the community. But Mellanox is a business: they spent many millions to acquire VMA in 2011, and likely much more over the past two years to further develop & maintain it. You don't just jettison an expensive piece of code because you want to give your customers "the freedom to implement the acceleration product and more easily tailor it to their specific application needs."

It's been known in the industry for at least six weeks that Mellanox was going in this direction; in fact, the source code has been on Google Code since August 12. So who's contributed changes? Well, Mellanox has, over 30 times in fact, in order to get ready for this announcement. This is big news, so how many people are following the code? Three, and two of them are the Mellanox employees who submitted the code fixes, all but one of which came from the same employee. How about the discussion list? Perhaps users are commenting there? Nope, it's empty.

Finally, if Mellanox were serious about VMA moving forward, there would be one or more courses on this product in the Mellanox Academy; today there are zero! Check out the course catalog for yourself. If the catalog isn't enough to convince you that Mellanox's focus is on Infiniband, then let's follow the numbers and look at their most recent financials. Toward the end of their last quarterly SEC 10-Q filing, you'll see that Ethernet made up only 14% of their revenue, while FDR, QDR & DDR Infiniband combined make up over 80%. Mellanox is Infiniband, and more importantly, Infiniband is Mellanox.

Now Mellanox has said that they will still provide a binary version of VMA that they will support, but they’ve not publicly stated what that support contract will cost.

Building a Better Security Appliance

In the past, this blog has discussed how one might set up a rule-based cyber security application like Snort or Suricata on 10Gb Ethernet using Myricom's FastStack Sniffer10G packet capture solution. I recently learned of another, unique approach to managing cyber security. This technique leverages detailed traffic logs and an advanced scripting engine tuned for managing Internet-domain-sourced content. The application is called Bro, and it's fast becoming the hot new tool for managing cyber security. A partner of ours, Reservoir Labs, recently released a 1U cyber security appliance that at its core uses Bro with FastStack Sniffer10G to provide a stand-alone or managed-cluster solution.

While Snort and Suricata rely on rules to analyze traffic, Bro uses a scripting language designed to manipulate Internet-domain-sourced packet flows. Here is how packets actually flow through this solution. Raw traffic is captured via a network tap wired into a card running FastStack Sniffer10G. Sniffer10G then utilizes flow hashing via a four-tuple (source/destination address and port) to spread inbound traffic between ring buffers attached to each core on the server. Bro then connects to these libpcap-structured ring buffers and combs through the data utilizing a sophisticated schema designed to identify and log real-time traffic into flows. With Bro running on each core, it can leverage the full system to search for threats. The scripting language is similar to Python, but it was designed to analyze traffic flows looking for dynamic cyber-attacks.
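
The four-tuple flow hashing described here can be sketched as follows; the hash function choice (CRC32) is an assumption for illustration, not necessarily what Sniffer10G uses:

```python
# Sketch of four-tuple flow hashing: packets from the same flow always
# land in the same per-core ring buffer, so each Bro worker sees whole flows.
import zlib

def ring_for(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
             num_rings: int) -> int:
    """Map a packet's four-tuple to one of num_rings ring buffers."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_rings

# Every packet of this flow is steered to the same ring (and thus core).
print(ring_for("10.0.0.1", "10.0.0.2", 1234, 80, num_rings=8))
```

The payoff is that flow state never has to be shared between cores, which is what lets Bro scale across all of them.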

Furthermore, Bro can run standalone or via a unified cluster-based management framework. While all this sounds new, it isn't: Bro has a long history coming out of Lawrence Berkeley National Laboratory, where it's been running in production since 1996. So if you're building a state-of-the-art cyber security infrastructure for your enterprise, you should seriously consider utilizing Bro, or tap into the folks at Reservoir Labs.

Extreme Packet Capture, Star Trek Style (Part 2)

Our approach to technology defines who we are, as individuals and as groups. The groups could be companies, countries, or a species; regardless, the technology we employ demonstrates our origins and our roots. The Klingons are a fictional race of warriors and hunters who pride themselves on warships with camouflage cloaking, strong defensive shields, and superior maneuverability. In contrast, the fictional Federation is a collection of races whose focus is on exploration. Their star charts, scientific scanners, and fast-charging photon-based phasers offer a unique contrast to the Klingons' much slower & less efficient particle-beam disrupters.
The same holds true for packet capture solutions. Our company, Myricom, designed our product in collaboration with a government agency interested in network security. One of the key design criteria was the replacement of an existing method with one that bypassed the operating system, so lossless packet capture at wire rate could be achieved. The process of capturing network packets is transparent to the end-user application; packets are stored in memory via a user-space or kernel-space driver. This technique enables our product to support over a dozen existing applications right out of the box. Another vendor designed their capture product for the financial market, where saving the market data to disk for later analysis is critically important. Both of these approaches fit the problems they solve perfectly, but one is more versatile.
In part one I promised to wrap up this series by talking about injection and sample code, so let's begin. Injection is simply taking packets in memory and putting them onto the Ethernet. Where FastStack Sniffer10G differentiates itself from other approaches is that it has total control over the network interface, with nothing between it and the wire. Therefore, when you capture packets, you can modify the contents if you like, and then inject them back onto the wire without anyone being the wiser. Most security appliances do just this: they act as a man in the middle, a guard who looks at everything and only lets in (or out, if you're really careful) things that are acceptable. Since Sniffer10G is an in-memory solution, this can be sustained at wire rate, provided your man-in-the-middle code is pretty tight and you leverage multiple queues for processing traffic in parallel. There is no transparent way to offer injection, so you need to use the Application Programming Interface (API), but several useful sample programs are included, with source.
The sample programs provided that do injection are snf_pktgen, snf_replay, and snf_bridge. snf_pktgen is just what its name implies: a simple packet generator. You can tell snf_pktgen what packet size to use, how many packets to send (or infinite), and the number of concurrent parallel threads to use to send them; it will make a best effort to pack the wire full of packets of the size you provided. Similar to that we have snf_replay, which plays back a sequence of already constructed packets to the Ethernet. Here you pass snf_replay:

  • the file name that contains the packets;
  • an optional packet rate (undocumented for various reasons);
  • an option to read the whole packet file into memory prior to writing it to the Ethernet;
  • optional insertion of a VLAN tag for packets without tags;
  • the number of times to replay the file; and
  • the number of threads you'd like transmitting concurrently.

Finally, we have snf_bridge; I've not personally used this one. With snf_bridge you define:

  • the ports to use for capture and injection (they can be different);
  • the number of in-memory rings to use and the CPU binding mask;
  • the number of packets to forward before exiting;
  • the number of times to retry forwarding a packet before dropping it;
  • the amount of time to wait between capture & injection, in milliseconds; and
  • the option to reflect non-UDP and non-TCP packets back to the network device.

All of these sample programs, and several more, are available in the /opt/snf/bin/tests directory in binary form and the /opt/snf/share/examples directory as source code.
So Federation, Klingon, or Myricom: a clear problem statement and the tool box we bring to the table define the products we build and the solutions we offer. Unless, of course, you're Captain Kirk, who prefers when possible to redefine the problem into something he can solve with the resources at hand. If you ever want to chat packet capture, please don't hesitate to flip open your communicator and ring me up: 919-389-5064.