
Idea + technology + {…} = product

If you’ve been in this industry a while you may remember the IBM Simon or the Apple Newton, both great ideas for products, but unfortunately, the technology just wasn’t capable of fulfilling the promise these product designers had in mind. The holidays provide a unique opportunity to reflect. They also create an environment for an impulse buy, followed by a pause every year to play with my kids (now 21 and 24). 2017 was no different, and so this year, for the first time ever, I picked up not one but three quadcopter drones. After dinner on Christmas day, all three were simultaneously buzzing around our empty two-car garage, attempting to take down several foam rubber cubes balanced on the garage door opener’s return beam. Perhaps I should bound this a bit more: a week earlier I’d spent $25 on each of the kids’ drones, not knowing if they would even be interested, and $50 on my own. We had a blast, and if you’ve not flown one you should really splurge and spend $50 for something like the Kidcia unit; it’s practically indestructible. On the downside, the rechargeable lithium batteries only last about eight minutes, so I strongly suggest purchasing several extra batteries and the optional controller.

During the past week since these purchases, but before flying, I’ve wondered several times why we haven’t seen life-sized quadcopter drones deployed in practical real-world applications. It turns out this problem has waited 110 years for the technology. Yes, the quadcopter, or rotary wing aircraft, was first conceived, designed, and demonstrated in tethered flight mode back in 1907. The moment you fly one of today’s quadcopters you quickly realize why they flew tethered back in 1907: crashing is often synonymous with landing. These small drones, mine has folding arms and dual hinged propellers, take an enormous beating and still continue to fly as if nothing happened. We put at least a dozen flights on each of the three drones on Christmas day, and we’ve yet to break a single propeller. Some of the newer, more costly units now include collision avoidance, which may actually take some of the fun away. So back to the problem at hand: why has it taken the quadcopter over 110 years to gain any traction beyond concept? Five reasons stand out, all technological, that have made this invention finally possible:

  • Considerable computing power & sophisticated programming in a single ultra-low power chip
  • Six-axis solid-state motion sensors (3-axis gyroscope, 3-axis accelerometer) also on a single ultra-low power chip
  • Very high precision, efficient, compact lightweight electric motors
  • Compact highly efficient energy storage in the form of lithium batteries
  • Extremely low mass, highly durable, yet flexible propellers

That first tethered quadcopter back in 1907 achieved only two feet of altitude while flown by a single pilot and powered by a single motor driving four pairs of propellers. Two of the pairs were counter-rotating to eliminate the effects of torque, and four men, aside from the pilot, were required to keep the craft steady. Clearly there were far too many dynamically changing variables for a single person to process. Today’s quadcopter drones have an onboard computer that continuously adjusts all four motors independently while measuring the motion of the craft in six axes and detecting changes in altitude (via another sensor). The result is that when a drone is properly set up it can be flown indoors and raised to any arbitrary altitude, where it will remain hovering in place until the battery is exhausted. When the pilot requests the drone move left to right, all four rotor speeds are independently adjusted by the onboard computer to keep the drone from rotating or losing altitude. Controlled flight of a rotary wing craft, whether a drone or a flying car, requires considerable sensor input and enormous computational power.
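
To make that control problem concrete, here is a rough sketch, not any particular flight controller’s actual code, of the kind of loop these onboard computers run: a few PID controllers turning six-axis sensor readings and an altitude reading into four independent motor commands. The sensor and motor functions are stand-in stubs invented for illustration; real controllers add sensor fusion, yaw control, and far more careful tuning.

```python
# Minimal illustration of a quadcopter hover loop: three PID controllers
# (roll, pitch, altitude) turn six-axis IMU readings plus an altimeter
# reading into four independent motor commands. The sensor and motor
# functions below are stand-in stubs, not a real drone API.

def read_imu():
    """Stub: return (roll, pitch, yaw) angles in degrees from the 6-axis IMU."""
    return 0.0, 0.0, 0.0

def read_altimeter():
    """Stub: return altitude in meters from the altitude sensor."""
    return 0.0

def set_motor_speeds(front_left, front_right, rear_left, rear_right):
    """Stub: send throttle commands to the four electronic speed controllers."""
    pass

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def hover_step(pid_roll, pid_pitch, pid_alt, target_alt, base_throttle, dt=0.005):
    roll, pitch, _yaw = read_imu()
    altitude = read_altimeter()

    r = pid_roll.update(-roll, dt)                 # hold level roll
    p = pid_pitch.update(-pitch, dt)               # hold level pitch
    a = pid_alt.update(target_alt - altitude, dt)  # hold target altitude

    # Mix the corrections into the four motors (X configuration); yaw is
    # omitted for brevity but is handled the same way on real drones.
    set_motor_speeds(
        front_left=base_throttle + a + p + r,
        front_right=base_throttle + a + p - r,
        rear_left=base_throttle + a - p + r,
        rear_right=base_throttle + a - p - r,
    )
```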

Petroleum-powered quadcopters are available, but to overcome variations in engine speed and latency (the time from sensor input to action), they often utilize variable-pitch propellers with electronic actuators. These actuators allow for rapid and subtle changes in propeller pitch, adjusting for variable inputs from the sensors and the pilot. While gas-powered drones often provide greater thrust, for most applications modern drones are assembled using electric motors. These electric motors are extremely efficient, respond rapidly to subtle changes in voltage by delivering predictable rotational speeds, and are very lightweight. Coupled with highly efficient lithium batteries, they make an ideal platform for building drones.

The final components making these drones possible are advanced plastics and carbon fiber, which now provide very lightweight propellers that can take considerable abuse without fracturing or failing. When I grew up in the late 1960s and early 70s, it didn’t take much to break the rubber-band-driven red plastic propeller that came with the balsa wood planes of that era. Today I can crash my drone into the garage door at nearly full speed and all eight propeller blades remain scratch free.

So next time you interact with a product and wonder why it doesn’t perform to your expectations, perhaps the technology has still not caught up to the intent of the product designers. Happy Holidays.

Container Networking is Hard

Last night I had the honor of speaking at Solarflare’s first “Contain NY” event. We were also very fortunate to have two guest speakers, Shanna Chan from Red Hat, and Shaun Empie from NGINX. Shanna presented OpenShift then provided a live demonstration where she updated some code, rebuilt the application, constructed the container, then deployed the container into QA. Shaun followed that up by reviewing NGINX Plus with flawless application delivery and live action monitoring. He then rolled out and scaled up a website with shocking ease. I’m glad I went first as both would have been tough acts to follow.

While Shanna and Shaun both made container deployment appear easy, both of these deployments were focused on speed to deploy, not maximizing the performance of what was being deployed. As one dives into the details of how to extract the most from the resources we’re provided, we quickly learn that container networking is hard, and performance networking from within containers is an order of magnitude more challenging. Tim Hockin, a Google Engineering Manager, was quoted in the eBook “Networking, Security & Storage” from The New Stack as saying, “Every network in the world is a special snowflake; they’re all different, and there’s no way that we can build that into our system.”

Last night when I asked those assembled why container networking was hard, no one offered what I thought was the obvious answer: we expect to do everything we do on bare metal from within a container, and we expect that the container can be fully orchestrated. While that might not sound like “a big ask,” when you look at what is done to achieve performance networking within the host today, it actually is. Perhaps I should back up: when I say performance networking within a host, I mean kernel bypass networking.

For kernel bypass to work it typically “binds” the server NIC’s silicon resources pretty tightly to one or more applications running in userspace. This tight “binding” is accomplished using one of several common methods: Solarflare’s Onload, Intel’s DPDK, or Mellanox’s RoCE. Each approach has its own pluses and minuses, but that’s not the point of this blog entry. When using any of the above, it is this binding that establishes the fast path from the NIC to host memory. The objective of these approaches, this “binding,” runs counter to what one needs when doing orchestration: a level of abstraction between the physical NIC hardware and the application/container. That level of abstraction can then be rewired so containers can easily be spun up, torn down, or migrated between hosts.

It is this abstraction layer where we all get knotted up. Do we use an underlying network leveraging MACVLANs or IPVLANs, or an overlay network using VXLAN or NVGRE? Can we leverage a Container Network Interface (CNI) plugin to do the trick? This is the part of container networking that is still maturing. While MACVLANs provide the closest link to the hardware and afford the best performance, they’re a layer-2 interface, and running unchecked in large-scale deployments they could lead to a MAC address explosion, resulting in trouble with your switches. My understanding is that with this layer of connectivity there is no real entry point to abstract MACVLANs into, say, a CNI so one could use Kubernetes to orchestrate their deployment. Conversely, IPVLANs are a layer-3 interface and have already been abstracted into a CNI for Kubernetes orchestration. The real question is what performance penalty one can observe and measure between a MACVLAN-connected container and an IPVLAN-connected one. This is all work still to be done. Stay tuned…
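
If you want to experiment with the two underlay options yourself, the sketch below (illustrative only, not a CNI configuration) creates a MACVLAN and an IPVLAN interface on a parent NIC by shelling out to the standard Linux ip tool. The parent interface name eth0 and the addresses are placeholder assumptions.

```python
# Illustrative only: create a MACVLAN and an IPVLAN interface on a parent
# NIC using the standard Linux `ip` tool. Run as root; "eth0" and the
# addresses below are placeholder assumptions, not a recommended layout.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

PARENT = "eth0"  # physical NIC the containers will share

# Layer-2 approach: each MACVLAN gets its own MAC address on the wire.
sh(f"ip link add link {PARENT} name mv0 type macvlan mode bridge")
sh("ip addr add 192.168.10.50/24 dev mv0")
sh("ip link set mv0 up")

# Layer-3 approach: IPVLAN interfaces share the parent's MAC address,
# avoiding the MAC explosion issue at the cost of L2 transparency.
sh(f"ip link add link {PARENT} name ipv0 type ipvlan mode l3")
sh("ip addr add 192.168.20.50/24 dev ipv0")
sh("ip link set ipv0 up")
```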

OCP & the Death of the OEM Mezz

Since the early days of personal computing, we’ve had expansion cards. The first Apple and Radio Shack TRS-80 micro-computers enabled hackers like me to buy a foundational system from the OEM, then over time upgrade it with third-party peripherals. For example, my original TRS-80 Model III shipped with 4KB of RAM and a cassette tape drive (long-term data storage, don’t ask). Within a year I’d maxed the system out to 48KB of RAM (16KB per paycheck) and a pair of internal single-sided, single-density 5.25” floppy drives (90KB of storage per side). A year or so later the IBM PC debuted and transformed what was a hobby for most of us into a whole new market: personal computing (PC). For the better part of two decades IBM led the PC market with an open standards approach; yeah, they brought out MicroChannel Architecture (MCA) and PCNetwork, but we won’t hold that against them. Then in 2006, as the push towards denser server computing reached a head, IBM introduced the BladeCenter H, a blade-based computing chassis with integrated internal switching. This created an interesting new twist in the market: the OEM proprietary mezzanine I/O card format (mezz), unique to the IBM BladeCenter H.

At that time I was with another 10Gb Ethernet adapter company managing their IBM OEM relationship. To gain access to the new specification for the IBM BladeCenter H mezz card standard you had to license it from IBM. This required that your company pay IBM a license fee (a serious six-figure sum), or provide them with a very compelling business case for how your mezz card adapter would enable IBM to sell thousands more BladeCenter H systems. In 2006 we went the business case route, and in 2007 delivered a pair of mezz cards and a new 32-port BladeCenter H switch for the High-Performance Computing (HPC) market. All three of these products required a substantial amount of new engineering to create OEM-specific products for a captive IBM customer base. Was it worth it? Sure, the connected revenue was easily well into the eight figures. Of course, IBM couldn’t be alone in having a unique mezz card design, so soon HP and Dell debuted their blade products with their own unique mezz card specifications. Now having one, two or even three OEM mezz card formats to comply with isn’t that bad, but over the past decade nearly every OEM from Dell through SuperMicro, and a bunch of smaller ones, have introduced various unique mezz card formats.

Customers define markets, and huge customers can really redefine a market. Facebook is just such a customer. In 2011 Facebook openly shared their data center designs in an effort to reduce the industry’s power consumption. Learning from other tech giants, Facebook spun off this effort into a 501(c) non-profit called the Open Compute Project Foundation (OCP), which quickly attracted rock star talent to its board like Andy Bechtolsheim (Sun & Arista Networks) and Jason Waxman (Intel). Then in April of last year Apple, Cisco, and Juniper joined the effort, and by then OCP had become an unstoppable force. Since then Lenovo and Google have hopped on the OCP wagon. So what does this have to do with mezz cards? Everything. OCP is all about an open system design with a very clear specification for a whole new mezz card architecture. Several of the big OEMs and many of the smaller ones have already adopted the OCP specification. In early 1Q17 servers sporting Intel’s Skylake Purley architecture will hit the racks, and we’ll see the significant majority of them supporting the new OCP mezz card format. I’ve been told by a number of OEMs that the trend is away from proprietary mezz card formats and towards OCP. Hopefully, this will last for at least the next decade.

Solarflare at DataConnectors Charlotte NC

You can never know too much about Cyber Security. Later this month Data Connectors is hosting a Tech Security event on Thursday, January 21st at the Sheraton Charlotte Airport Hotel, and we’d like to offer you a free VIP Pass!

Solarflare will be at this event showing our latest 10Gb and 40Gb Ethernet server adapters and discussing how servers can defend themselves against a DDoS attack through our hardened kernel driver, which provides SYN cookies, white/black listing by port and address, as well as rate limiting. To learn more please stop by.

25/50/100GbE Facts

Several years ago the mega data center and Web 2.0 companies started looking for an alternative to the approved 40GbE standard, which is the link speed offered between 10GbE and 100GbE. They viewed the IEEE-approved implementations of both 40G and 100G, which were simply multiple 10G lanes, as very cumbersome. These mega data centers seek to leverage High-Performance Computing (HPC) concepts and desire that their exascale networks utilize fabrics that scale in even multiples. Installing 25/50GbE at the servers, and (2x25G) 50GbE or (4x25G) 100GbE for the switch-to-switch links, is much more efficient. It turns out two groups formed in parallel, the 25 Gigabit Ethernet Consortium and the 2550100 Alliance (that’s 25, 50, 100 for those that didn’t see it), to develop & promote a single-lane 25GbE specification as the next Ethernet. This approach would then be extended to two 25G lanes for 50G, and four 25G lanes for 100G. It should be noted that today the IEEE, the industry standards body, has not yet ratified a 25GbE standard (when it does it will be referred to as IEEE 802.3by). Once approved, this standard will be used to create Ethernet controller NIC silicon and compatible switch silicon. This work is underway, but won’t be completed until sometime in 2016.

The 25 Gigabit Ethernet Consortium was founded by Arista, Broadcom, Google, Mellanox & Microsoft, while the 2550100 Alliance comprises about fifty companies, the ten most notable of which are: Accton, Acer, Cavium, Finisar, Hitachi Data Systems, Huawei, Lenovo, NEC, QLogic, and Xilinx. Interestingly absent from the above lists are key Ethernet product companies: Chelsio, Cisco, Emulex, HP, Intel, and Solarflare. The focus of this piece will be on the Ethernet NIC controller silicon, because if you can’t connect the server then the whole discussion is just switch-to-switch interconnects, which is another class of problem for discussion in a different forum. Today there appear to be only two general-purpose Ethernet controller NIC chips that support a version of 25/50/100GbE: Mellanox’s ConnectX-4 and QLogic’s cLOM8514. For NIC silicon to actually be useful, though, it must be delivered on a PCI Express adapter. At this point QLogic has only demonstrated their 25GbE publicly, but has not announced a date to ship a PCIe adapter. This means that Mellanox is the solitary vendor shipping a production 25/50/100GbE adapter today. QLogic has not formally stated when it will be producing adapters with the cLOM8514.

In the home Wi-Fi networking market, hardware vendors typically race ahead of IEEE standards and produce products to secure “first mover advantage,” otherwise known to end users as the bleeding edge, but they can only do this because their products are for the most part stand-alone. Enterprise and data center markets are highly interconnected, and shipping a product ahead of the approved IEEE specification is inviting an avalanche of support calls. Today there remain significant open technical issues around 25/50/100GbE such as auto-negotiation, link training, and forward error correction. The IEEE has yet to resolve these, but they are being discussed. At the end-user level, interoperability is the key issue. If a company were to produce a stand-alone NIC product without an accompanying cable & switch ecosystem, they would be flooded with support requests. The converse is also true: if a company were to build a switch around the Broadcom silicon without offering a bundled-in server NIC, it would quickly become an interoperability situation as well. Those on the bleeding edge would surely then understand the true meaning of the phrase.

So why haven’t the more traditional 10GbE NIC vendors jumped on the 25/50/100GbE bandwagon? Simple: without an approved IEEE standard, the likelihood of profiting from your investment in 25/50/100GbE is fairly low. Today, exclusive of R&D, producing an Ethernet controller NIC chip is a multi-million dollar exercise. So to justify spinning a 25/50/100GbE NIC chip in early 2015 for a “first mover advantage,” one would need a plan showing it would produce well into the tens of millions in revenue. Couple this with the interoperability support nightmare of getting one vendor’s NIC working with a second vendor’s cable and a third vendor’s switch, and any profit that might exist could quickly be consumed.

Enterprise customers want choice, which by definition implies multivendor interoperability based on mature standards. Once the IEEE 802.3by standard (25/50/100GbE) is ratified next year it is expected that all the NIC vendors will begin shipping 25/50/100GbE NIC products.

7/27 Update: Broadcom announced a 25/50GbE NIC controller today.

A Path from GbE to 10GbE

Recently folks have asked how they could squeeze more out of their Gigabit Ethernet (GbE) infrastructures while they work to secure funding for a 10GbE upgrade in the future. I’ve been selling 10GbE NICs for 10 years and blogging for the past six. What I’ve learned as a former system architect, IT manager, server salesperson, and now network salesperson is that the least painful method for making the transition is to demonstrate the payback to your management in stages. First I’d upgrade my existing servers to 10GbE adapters, but run them on my existing GbE network to demonstrate that I was pushing that infrastructure to its full potential. It’s very likely that your existing multi-core servers are sometimes more CPU bound than bandwidth bound. Also, it is possible you may have some extra top-of-rack switch ports you can leverage. There are several interesting tricks worth considering. The first is to move to a current 10GbE controller, one that also supports GbE (1000BASE-T, the formal name for GbE over the familiar RJ-45 style modular connector). If this still doesn’t give you the performance bang you’re seeking, then you can consider testing an operating system bypass (OS Bypass) network driver.

Upgrading from the generic GbE port mounted on your server’s motherboard to a PCI Express option card with dual 10G ports means you’re moving from GbE chip technology designed 15 years ago to very possibly a state-of-the-art 10G Ethernet controller designed in the past year or two. As mentioned in other posts like “Why 10G” and “Four Reasons Why 10GbE NIC Design Matters,” some of today’s 10GbE chips internally offer thousands of virtual NIC interfaces, highly intelligent network traffic steering to CPU cores, and a number of advanced stateless network packet processing offloads (meaning that more work is done on the NIC that would otherwise have to be done by your Intel server CPUs). Much of this didn’t exist when your server’s GbE chip was initially designed back in 2000. So what is the best way to make the jump?

There are two methods to plug your existing RJ-45 terminated networking cables into new 10GbE server-class NICs. The first, and easiest, is to use a native dual-port 10GBase-T card that supports GbE, like Solarflare’s SFN5161T, which runs roughly $420. The second approach, which provides a much better path to 10GbE, is to use a dual-port SFP+ card like Solarflare’s SFN7002F with a pair of 1000Base-T modules. In this case, the adapter is $395, and each module is roughly $40 (be careful here because there are numerous Cisco products offered that are often just “compatible”). When you get around to migrating to 10GbE, both approaches will require new switches and very likely new network wiring. The 10GBase-T standard, which uses the familiar RJ-45 networking connector, will require that you move to the more expensive Cat6 cabling, and often these switches are more expensive and draw more power. If you have to rewire with Cat6, then you should seriously consider using passive DirectAttach (DA) cables with bonded SFP+ connectors, which start at $20-$25 for 0.5-2M lengths. By the time your network admin custom-makes the Cat6 cables for your rack, it’ll likely be a break-even expense (especially when you have to spend time diagnosing bad or failing cables). DA cables should be considerably more trouble-free over time; frankly, 10GBase-T really pushes the limits of both Cat6 cables and RJ-45 connectors.

Another thing to consider is leveraging an OS Bypass layer like Solarflare’s OpenOnload (OOL) for network-intense applications like Nginx, Memcached, and HAProxy. We saw that OOL delivered a 3X performance gain over running packets through the Linux OS, which was documented in this whitepaper. In the testing for this whitepaper, we found that for Nginx, serving content from memory would typically take six cores to respond to a full 10G link. Running OOL, it only required two cores. Turning this around a bit, with OOL on a dual-port 10G card you should only need roughly four cores to serve static in-memory content at wire-rate 10G to both ports. So suppose you have an eight-core server today with a pair of GbE links, and during peak times it’s typically running near capacity. By upgrading to a Solarflare adapter with OOL, still just utilizing both 10G ports as GbE ports, you could easily be buying yourself back some significant Intel CPU cycles. The above requires a very serious “your mileage may vary” type statement, but if you’re interested in giving it a try in your lab, Solarflare will work with you on the Proof of Concept (POC). It should be noted that adding OOL to an SFN7002F adapter will roughly double the price of the adapter, but compare that additional few hundred dollars in 10G software expense to the cost of replacing your server with a whole new one, installing all new software, perhaps additional new software licenses, configuration, testing, etc… Replacing the NIC and adding an OS Bypass layer like OOL is actually pretty quick, easy & painless.
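
For reference, OpenOnload is normally enabled without any application changes, either via its onload wrapper script or by preloading its acceleration library when the application starts. Below is a minimal sketch of the preload approach, with the library name and the Nginx arguments as assumptions; consult the Onload documentation for the tuning variables appropriate to your workload.

```python
# Illustrative launcher: run an existing application (here Nginx) under
# OpenOnload by preloading the Onload acceleration library so its TCP/UDP
# sockets bypass the kernel. The library name and nginx arguments are
# placeholder assumptions.
import os
import subprocess

env = dict(os.environ)
env["LD_PRELOAD"] = "libonload.so"   # assumes the Onload library is on the loader path

# Equivalent in spirit to: onload nginx -g "daemon off;"
subprocess.run(["nginx", "-g", "daemon off;"], env=env, check=True)
```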

If you’re interested in kicking off a GbE to 10GbE POC please send us a brief email.

Servers Can Protect Themselves From a DDoS Attack

Solarflare is completing SolarSecure Server Defense, a Docker container housing a state-of-the-art threat detection and mitigation system. This system dynamically detects new threats and updates the filters applied to all network packets traversing the kernel network device driver, in an effort to fend off future attacks in real time without direct human intervention. To do this Solarflare has employed four technologies: OpenOnload, SolarCapture Live, the Bro Network Security Monitor, and the SolarSecure Filter Engine.

OpenOnload provides an OS Bypass means of shunting copies of all packets making it past the current filter set to SolarCapture. SolarCapture provides a libpcap framework for packet capture, which then hands these copied packets off to Bro for analysis. Bro then applies a series of scripts to each packet, and if a script detects a hit it raises an event. Each class of event then triggers a special SolarSecure Filter Engine script, which creates a new network packet filter. This filter is then loaded in real time into the packet filter engine of the network adapter’s kernel device driver, to be applied to all future network packets. Finally, Server Defense can alert your admins as new rules are created on each server across your infrastructure.
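
To illustrate the general detect-then-react pattern, and this is only an analogue, not Solarflare’s actual scripts or APIs, here is a sketch that counts TCP SYNs per source with the Python scapy library and emits a hypothetical block rule once a threshold is crossed, standing in for the event-to-filter step described above.

```python
# Illustrative analogue of the detect-then-filter loop described above:
# watch for an excessive number of TCP SYNs from a single source and emit a
# "block" rule. In the real product the capture is done via SolarCapture,
# the analysis via Bro scripts, and the rule is pushed into the adapter's
# kernel driver; here we simply print the rule we would install.
from collections import Counter

from scapy.all import IP, TCP, sniff  # requires scapy and root privileges

SYN_THRESHOLD = 200          # SYNs seen from one source before we react
syn_counts = Counter()
blocked = set()

def install_filter(src_ip):
    # Placeholder for pushing a filter into the NIC driver.
    print(f"RULE: drop all packets from {src_ip}")
    blocked.add(src_ip)

def inspect(pkt):
    if IP in pkt and TCP in pkt and (int(pkt[TCP].flags) & 0x02):  # 0x02 = SYN bit
        src = pkt[IP].src
        if src in blocked:
            return
        syn_counts[src] += 1
        if syn_counts[src] > SYN_THRESHOLD:
            install_filter(src)

# Copy packets off the wire and run the inspection callback on each one.
sniff(filter="tcp", prn=inspect, store=False)
```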

SolarSecure Server Defense inspects all inbound, outbound, container-to-container, and VM-to-VM packets on the same physical server, and filters are applied to every packet. This uniquely positions Solarflare Server Defense as the only containerized cyber defense solution designed to protect each individual server, VM, or container within an enterprise from a wide class of threats, ranging from a simple SYN flood to a sophisticated DDoS attack. Even more compelling, it can defend against attacks originating from inside the same physical network, behind your existing perimeter defenses. It can even defend one VM from an attack launched by another VM on the same physical server!

To learn more please contact Scott Schweitzer at Solarflare.

3X Better Performance with Nginx

Recently Solarflare concluded some testing with Nginx that measured the amount of traffic Nginx could respond to before it started dropping requests. We then scaled up the number of cores provided to Nginx to see how additional compute resources impacted the servicing of web page requests, and this is the resulting graph:


As you can see from the above graph, most NIC implementations require about six cores to achieve 80% wire-rate. The major difference highlighted in this graph, though, is that with a Solarflare adapter and their OpenOnload OS Bypass driver, you can achieve 90% wire-rate performance utilizing ONLY two cores versus six. Note that the comparison above is against Intel’s most current 10G NIC, the X710.

What’s interesting here, though, is that OpenOnload can internally bond together up to six 10G links before a configuration file change is required to support more. This could mean that a single 12-core server running a single Nginx instance should be able to adequately service 90% wire-rate across all six 10G links, or theoretically 54Gbps of web page traffic. Now, of course, this assumes everything is in memory and the rest of the system is properly tuned. Viewed another way, this is 4.5Gbps/core of web traffic serviced by Nginx running with OpenOnload on a Solarflare adapter, compared to 1.4Gbps/core of web traffic with an Intel 10G NIC. This is a 3X gain in performance for Solarflare over Intel. How is this possible?

Simple: OpenOnload is a user-space stack that communicates directly with the network adapter in the most efficient manner possible to service UDP & TCP requests. The latest version of OpenOnload has also been tuned to address the C10K problem. What’s important to note is that by bypassing the Linux OS to service these communication requests, Solarflare reduces the number of kernel context switches per core and memory copies, and can more effectively utilize the processor cache. All of this translates to more available cycles for Nginx on each and every core.

To further drive this point home we did an additional test just showing the performance gains OOL delivered to Nginx on 40GbE. Here you can see that the OS limits Nginx on a 10-core system to servicing about 15Gbps. With the addition of just OpenOnload to Nginx, that number jumps to 45Gbps. Again another 3X gain in performance.

If you have web servers today running Nginx, and you want to give them a gargantuan boost in performance, please consider Solarflare and their OpenOnload technology. Imagine taking your existing web server, which has been running on a single Intel X520 dual-port 10G card, replacing that with a Solarflare SFN7122F card, installing their OpenOnload drivers, and seeing a 3X boost in performance. This is a fantastic way to breathe new life into existing installed web servers. Please consider contacting Solarflare today to do a 10G OpenOnload proof of concept so you can see these performance gains for yourself firsthand.

Beyond Gigabit Ethernet

Where wired connections can be made, they will always provide superior performance to that of wireless techniques. Since the commercialization of the telegraph over 175 years ago mankind has been looking for ever faster ways to encode & transfer information. The wired standard we’re all most familiar with today is Gigabit Ethernet (GbE). It runs throughout your office to your desktop, phone, printers, copiers, and wireless access points. It is the most pervasive method in the enterprise for reliably linking devices. So what’s next?

Two weeks ago, if you’d asked most technology professionals, they would have answered 10 Gigabit Ethernet (10GbE). That was the commonly accepted plan. Then Cisco, Aquantia, Freescale & Xilinx announced an alliance to further develop & promote a proposed next-generation (NBase-T) wired standard supporting 2.5GbE & 5GbE speeds over existing installed (Category 5e & 6) cables. We all know Cisco, and that’s enough to get pretty much everyone’s attention, but who are the other three? Aquantia is one of the leaders in producing the physical interface (PHY) chips that exist at both ends of the wire. Switch companies like Cisco use Aquantia, as do network interface card companies like Solarflare, Intel, and Chelsio. Aquantia has figured out how to take digital information and encode it into electrical signals designed to travel at very high speeds through very noisy wires. Then on the other end, their chips have the smarts to find the signal within the vast amounts of noise created by the wires themselves. Freescale & Xilinx are a bit further up the food chain; they make more programmable chips that can be positioned between Aquantia’s PHYs and Cisco’s switch logic, or the Intel processor in your computer.

So why did Cisco push to form the NBase-T Alliance, and what do they gain from this investment? It turns out that improvements in wireless networking are behind this, and Cisco has a large wireless business. In commercial environments, wireless access points now use a wider range of frequencies in parallel so they can service more of our wireless devices. These access points are pushing the limits of what GbE is capable of on the back end. Since most enterprises are already wired with Cat5e or Cat6, rewiring to support 10GbE would be very expensive. Hence the drive towards NBase-T.

The question, though, is what about performance desktop users? Folks doing video editing, simulation, or anything that is data intensive could easily push well beyond GbE. We’re now starting to see Apple & others ship 4K-resolution desktop computers and displays. These devices can be huge data consumers. What’s the plan for supporting them beyond GbE? The answer still appears to be 10GbE, but time will tell.

Your Server as the Last Line of Cyber Defense

Here is an excerpt from an article I wrote for Cyber Defense Magazine that was published earlier today:

Since the days of medieval castle design, architects have cleverly engineered concentric defensive layers, along with traps, to thwart attackers and protect the stronghold. Today many people still believe that the moat was a water obstacle designed to protect the outer wall, when in fact it was often inside the outer wall and structured as a reservoir to flood any attempt at tunneling in. Much like these kingdoms of old, today companies are leveraging similar design strategies to protect themselves from Internet attackers.

The last line of defense is always the structure of the wall and the guards of the castle keep itself. Today the keep is your network server, which provides customers with web content, partners with business data, and employees with remote access. All traffic that enters your servers comes in through a network interface card (NIC). The NIC represents both the wall and the guards for the castle keep. Your NIC should support a stateless packet-filtering firewall application that is authorized to drop all unacceptable packets. By operating within both the NIC and the kernel driver, this software application can drop packets from known Internet marauders, rate limit all inbound traffic, filter off SYN floods, and only pass traffic on acceptable ports. By applying all these techniques your server can be far more available for your customers, partners, and employees.
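
The article’s own code samples are in the full piece linked below; purely to illustrate the kinds of stateless checks just described, here is a minimal, hypothetical model of a per-packet decision that combines a blocklist, an allowed-port list, and a crude rate limit.

```python
# Hypothetical stateless packet filter, illustrating the checks described
# above (blocklist, allowed ports, rate limiting). A production filter runs
# in the NIC/kernel driver at line rate; this is only a readable model of
# the decision logic, with example values throughout.
import time
from collections import defaultdict

BLOCKLIST = {"203.0.113.9"}          # known Internet marauders (example address)
ALLOWED_PORTS = {80, 443}            # only pass traffic on acceptable ports
MAX_PKTS_PER_SEC = 10_000            # crude per-source rate limit

_window = defaultdict(lambda: [0.0, 0])   # src_ip -> [window_start, count]

def allow_packet(src_ip, dst_port, now=None):
    """Return True to pass the packet to the host, False to drop it."""
    now = time.time() if now is None else now

    if src_ip in BLOCKLIST:
        return False
    if dst_port not in ALLOWED_PORTS:
        return False

    start, count = _window[src_ip]
    if now - start >= 1.0:                 # start a new one-second window
        _window[src_ip] = [now, 1]
        return True
    if count >= MAX_PKTS_PER_SEC:          # over the rate limit: drop
        return False
    _window[src_ip][1] = count + 1
    return True
```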

For the rest of the article, with several cool sections of code that explain how to protect your server, please visit Cyber Defense Magazine.