Is Serverless Also Trafficless?

Recently I read the article “Why Now Is The Time To Go Serverless” by Romi Stein, the CEO of OpenLegacy, a composable platform company. I agree with several of the points Romi made about the importance of APIs, micro-service architectures, and cloud computing, and I agree that serverless doesn’t truly mean computing without a server, but rather computing on servers owned and provisioned by major cloud providers. My main point of contention is that large businesses executing mission-critical functions in public clouds may eventually come to regret the move to a “Serverless” architecture, because it may also be “Trafficless.” Recently we’ve seen a rash of colossal security vulnerabilities from companies like SolarWinds and Microsoft (Exchange Server). Events like these should make us all pause and rethink how we handle security. Threat detection, and the forensic analysis that follows a breach, may be impossible in a composable enterprise highly dependent on public cloud infrastructure, because the key data either doesn’t exist or isn’t available.


In a traditional on-premises environment, it is generally understood that the volume of network traffic within the enterprise is often 10X that of the traffic entering and leaving it. One of the more essential strategies for detecting a potential breach is to examine, ideally in near-real-time, both the internal and external network flows, looking for irregular traffic patterns. If you are notified of a breach, an analysis of these traffic patterns is often used to confirm that one has occurred. To service both of these tasks, copies are made of network traffic in flight; this is called traffic capture. The captured data may then be reduced and eventually shipped off to Splunk, or run through a similar tool, hopefully locally. Honestly, I was never a big fan of shipping copies of a company’s network traffic to a third party for analysis; many of a company’s trade secrets reside in these digital breadcrumbs.
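To make traffic capture concrete, here is a minimal Python sketch, using the Scapy library, that copies packets in flight, buckets them by flow, and flags irregular volumes. The interface name, window size, and threshold are invented for illustration; a real deployment would tap a span port or packet broker, not sniff on the host itself.

```python
# Minimal traffic-capture sketch: copy packets in flight, bucket them by
# five-tuple flow, and flag flows whose per-interval volume looks irregular.
# Assumes the Scapy library; interface, interval, and threshold are hypothetical.
import time
from collections import defaultdict

from scapy.all import IP, TCP, UDP, sniff

INTERVAL = 10           # seconds per measurement window
THRESHOLD = 50_000_000  # bytes per window considered irregular (hypothetical)

window_start = time.time()
flow_bytes = defaultdict(int)  # (src, dst, sport, dport, proto) -> bytes

def inspect(pkt):
    global window_start
    if IP not in pkt:
        return
    l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
    sport, dport = (pkt[l4].sport, pkt[l4].dport) if l4 else (0, 0)
    flow = (pkt[IP].src, pkt[IP].dst, sport, dport, pkt[IP].proto)
    flow_bytes[flow] += len(pkt)
    if time.time() - window_start >= INTERVAL:
        for f, b in flow_bytes.items():
            if b > THRESHOLD:
                print(f"irregular flow {f}: {b} bytes in {INTERVAL}s")
        flow_bytes.clear()
        window_start = time.time()

# Capture copies of packets on eth0 (requires root); nothing is forwarded
# or altered -- this is passive traffic capture.
sniff(iface="eth0", prn=inspect, store=False)
```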

Is a serverless environment also trafficless? Of course not, that’s ridiculous. But are public cloud providers willing to, or even capable of, sharing copies of all the network traffic your serverless architecture generates? If they were, what would you do with all that data? Wait, here’s another opportunity for the public cloud guys: they could sell everyone another service that captures and analyzes all your serverless network traffic to tell you when you’ve been breached! Seriously, this is something worthy of consideration.

Equifax & Micro-Segmentation

Earlier this week it was reported that an Equifax web service was hacked, creating a breach that existed for about 10 weeks. During that time the attackers used the breach to drain the private information of 143 million people. The precise technical details of the breach, which Equifax claims was detected and closed on July 29, have yet to be revealed. While Equifax says it has seen no other criminal activity on its main services since July 29, that’s of little comfort; Elvis has left the building. At 143 million, a majority of the adults in the US have been compromised. Outside of Equifax-specific code vulnerabilities and further database hardening, what could Equifax have done to thwart these attackers?

Most detection and preventative countermeasures that could have minimized Equifax’s exposure employ some variation of behavior detection at one network layer or another. They then shunt suspect traffic to a side-band queue for further detailed human analysis. Today the marketing trend for attracting venture capital investment is to call these behavior-detection algorithms Artificial Intelligence or Machine Learning; how intelligent they are, and to what degree they learn, is a topic for a future blog post. At the NGINX Conference this week we saw several companies selling NGINX layer-7 (application layer) plugins that analyze traffic prior to passing it to NGINX’s HTTP request processing engine. These plugins receive the entire HTTP request after the OS stack has assembled it from multiple network packets. They then do a rapid analysis of the request to determine whether it poses a threat; if not, the request is passed back to NGINX for the web application to respond to. Along the way, the plugin extracts metadata from the request and, in parallel, ships it up to the vendor’s cloud service for further evaluation. There the metadata is compared against prior history and real-time data from other customers of similar services to extract new potential threat vectors. As new threats are detected, rules are pushed back down into the plugin and applied to future requests.
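To make that flow concrete, here is a minimal Python sketch of a layer-7 filter in the same spirit, not any vendor’s actual plugin: it inspects the fully assembled HTTP request, blocks it if a rule matches, and otherwise passes it through while shipping metadata to a side-band analyzer. The rule signatures and the metadata sink are invented for illustration.

```python
# Sketch of a layer-7 (application layer) filter in the spirit of the
# NGINX plugins described above -- not any vendor's actual code.
# Rule patterns and the metadata sink are hypothetical.
import json
import queue
import re
import threading

RULES = [re.compile(p) for p in (
    r"(?i)union\s+select",      # crude SQL-injection signature
    r"\.\./\.\.",               # crude path-traversal signature
)]

metadata_q = queue.Queue()

def analyze_out_of_band():
    # In the real products this metadata is shipped to a cloud service,
    # compared against history, and new rules are pushed back down.
    while True:
        meta = metadata_q.get()
        print("side-band analysis:", json.dumps(meta))

threading.Thread(target=analyze_out_of_band, daemon=True).start()

def layer7_filter(app):
    """WSGI middleware: inspect the fully assembled request first."""
    def wrapper(environ, start_response):
        target = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        # Extract metadata and ship it off in parallel with the request.
        metadata_q.put({"client": environ.get("REMOTE_ADDR"), "target": target})
        if any(r.search(target) for r in RULES):
            # Threat detected: reject before the web application sees it.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"request blocked\n"]
        return app(environ, start_response)  # pass back to the application
    return wrapper
```

Wrapping any WSGI application with `layer7_filter` yields the same pre-screen-then-forward pattern those commercial plugins implement inside NGINX.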

Everything discussed above is layer-7, application layer, traffic analysis and mitigation. What does layer-7 have to do with network micro-segmentation? Nothing; what’s described above is the current prevailing wisdom, instantiated in several solutions that are all the rage today. There are several problems with a layer-7 solution. First, it competes with your web application for host CPU cycles. Second, if the traffic is determined to be malicious, you’ve already invested tens of thousands of CPU instructions, perhaps even in excess of one hundred thousand, to make that determination; all that compute time is lost once the message is dropped. Third, the attack is now deep inside your web server, and who’s to say the attacker hasn’t learned what he needed to move to a lower-layer attack vector to evade detection? Layer-7, while convenient, easy to use, and even easier to understand, is very inefficient.

So what is network micro-segmentation, and how does it fit in? Network segmentation is the act of altering the flow of traffic such that only what you want is permitted to pass. Imagine the factory that makes M&Ms. These days they use high-speed cameras and other analytics to look for deformed M&Ms, and when they spot one they steer it away from the packaging system. They are, in fact, segmenting the flow of M&Ms to ensure that only perfect candy-coated pieces ever make it into our mouths. The same is true for network traffic: segmentation is the process of allowing network packets to flow into or out of a given device only according to a specific policy or set of policies. Micro-segmentation is doing that down to the application level. At layer-3, the network layer, that means separating traffic by source and destination network address and port, while also taking the protocol into account (together these five elements are known as “the five-tuple”). When we filter traffic by network port, we can say we are doing application-level filtering, because ports are used to map network traffic to applications. When we also take the local IP address into account, we can filter by the local container (e.g. Docker) or virtual machine (VM), as these often get their own local IP addresses. Together, these elements define a very specific network micro-segmentation strategy, as the sketch below illustrates.
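Expressed in code, a five-tuple policy check is straightforward. Below is a minimal Python sketch with invented addresses, ports, and policies: a packet passes only if some policy matches its five-tuple.

```python
# Five-tuple micro-segmentation sketch: a packet passes only if some
# policy matches its (src IP, dst IP, src port, dst port, protocol).
# The example policies and addresses are invented.
from ipaddress import ip_address, ip_network

# Each policy: (src network, dst network, src port or None, dst port or None, protocol)
POLICIES = [
    (ip_network("10.0.1.0/24"), ip_network("10.0.2.10/32"), None, 5432, "tcp"),  # app tier -> Postgres
    (ip_network("10.0.2.10/32"), ip_network("10.0.1.0/24"), 5432, None, "tcp"),  # replies
]

def permitted(src, dst, sport, dport, proto):
    for p_src, p_dst, p_sport, p_dport, p_proto in POLICIES:
        if (ip_address(src) in p_src and ip_address(dst) in p_dst
                and p_sport in (None, sport) and p_dport in (None, dport)
                and p_proto == proto):
            return True
    return False

# Database traffic from the app tier is allowed...
assert permitted("10.0.1.7", "10.0.2.10", 44123, 5432, "tcp")
# ...but an SCP session out of the database server is not.
assert not permitted("10.0.2.10", "192.168.5.5", 51000, 22, "tcp")
```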

So now imagine a firewall inside a smart network interface card (NIC) that can filter both inbound and outbound packets using this network micro-segmentation. This is layer-3, network layer, micro-segmentation within the smart NIC. When detection is moved into the NIC, no x86 CPU cycles are consumed evaluating the traffic, and no host resources are lost if a packet is deemed malicious and dropped. Furthermore, if a malicious packet is stopped by a firewall in the NIC, the threat never enters the host CPU complex, and the system’s integrity is preserved. Consider how this can improve an enterprise’s security as it scales out, both with new servers and with added containers and VMs. So how can this be done?

Solarflare has been shipping its 8000 line of smart NICs since June of 2016, and later this fall it will release new firmware called ServerLock™. ServerLock is a first-generation firewall in the smart NIC that is centrally managed. Every second it sends a summary of the network flows through the NIC, in both directions, to a central ServerLock Manager system. That system lets administrators view these flows graphically and easily turn them into security roles and policies that can then be deployed. Policies can be deployed to a specific local IP address; to a collection of addresses called an “IP Set” (think Docker containers or VMs); or to a host or host group. Deployed policies can be placed in Monitor or Enforce mode. Monitor mode allows all traffic to flow but generates alerts for any traffic outside the defined policies for a local IP address. In Enforce mode, ONLY traffic conforming to the defined policies is permitted; traffic outside those policies generates an alert and is dropped. Once a network device begins to drop traffic on purpose, we say that the device is segmenting the network. So in Enforce mode, ServerLock smart NICs actively segment that server’s network by passing only traffic for supported applications, those for which a policy exists. This applies to traffic in both directions. For example, if an administrator walks into the data center, grabs a keyboard, and elects to Secure Copy (SCP) a file from a database server to his workstation, things will get interesting. If the ServerLock smart NIC in that database server doesn’t have a policy supporting SCP (port 22), his outbound request from the database server to his workstation will be dropped in the NIC. Likely unknown to him, an alert will be generated on the central ServerLock Manager console, calling out the application along with both the database server and his workstation, and he’ll have some explaining to do.
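The Monitor versus Enforce behavior can be sketched as a thin layer over the five-tuple check from the earlier sketch (it reuses `permitted()` and its policies). This is an illustration of the semantics described above with invented details, not the actual ServerLock firmware; the `print` stands in for an alert on the central ServerLock Manager console.

```python
# Sketch of Monitor vs. Enforce semantics layered on the five-tuple
# check above -- an illustration of the behavior described in the text,
# not the actual ServerLock firmware.
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"
    ENFORCE = "enforce"

def handle_packet(src, dst, sport, dport, proto, mode):
    if permitted(src, dst, sport, dport, proto):
        return "pass"
    # Outside every defined policy: always alert the central manager.
    print(f"ALERT: {proto} {src}:{sport} -> {dst}:{dport} outside policy")
    if mode is Mode.MONITOR:
        return "pass"   # Monitor mode lets the traffic flow
    return "drop"       # Enforce mode segments the network in the NIC

# The SCP example from above: alerted in both modes, dropped only in Enforce.
assert handle_packet("10.0.2.10", "192.168.5.5", 51000, 22, "tcp", Mode.MONITOR) == "pass"
assert handle_packet("10.0.2.10", "192.168.5.5", 51000, 22, "tcp", Mode.ENFORCE) == "drop"
```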

ServerLock begins shipping this fall, so while it’s too late for Equifax, it’s not too late for the next Equifax. How would this help moving forward? Simple: if every server, including web servers and database servers, had a ServerLock smart NIC, then every second those servers would report their flow data to the central Solarflare ServerLock Manager for further analysis. Solarflare is working with Cloudwick on real-time analysis of this layer-3 traffic, so that Cloudwick can proactively suggest, in real time, new roles and policies that ServerLock administrators can deploy to protect servers against all sorts of threats. More to come as this product is released.

9/11/17 Update – It was reported over the weekend that Equifax is now pointing the blame at an Apache Struts module. The exact module has yet to be disclosed, but it could be any one of several previously addressed vulnerabilities. On Saturday the Apache group replied, pointing to other sources that believe the breach might have been caused by exploiting a remote-code-execution bug in their REST plugin, as outlined in CVE-2017-9805. More to come.

9/12/17 Update – Alert Logic has the best analysis thus far.

Podcast on iTunes, Google Play & Stitcher

After several weeks we now have a critical mass of episodes of The Technology Evangelist Podcast, enough to justify posting it on iTunes, Google Play, and Stitcher. So now you can subscribe from your favorite service or player and stay current. Recently we completed episodes on high-performance packet capture, electronic trading, NVMe, and Hadoop.

We’re now scheduling and recording episodes on FPGAs, digital currencies (bitcoin), Intel Skylake, TensorFlow, containers, the race to zero, and much, much more. If you have ideas for topics or speakers, we always welcome suggestions, so please email The Technology Evangelist or comment on this blog post.

Four Container Networking Benefits

Container networking is walking in the footsteps taken by virtualization over a decade ago. Still, networking is a non-trivial task, as there are both underlay and overlay networks to consider. Underlay networks like bridge, MACVLAN, and IPVLAN are designed to map physical ports on the server to containers with as little overhead as possible (a sketch of setting one up follows the list below). Conversely, overlay networks require packet-level encapsulation, using technologies like VXLAN and NVGRE, to accomplish the same goals. Anytime network packets have to flow through hypervisors or layers of virtualization, performance suffers. Toward that end, Solarflare is now providing the following four benefits for those leveraging containers.

  1. NGINX Plus running in a container can now utilize ScaleOut Onload, and in doing so achieves a 40% performance improvement over standard host networking. With the introduction of Universal Kernel Bypass (UKB), Solarflare now includes both DPDK and ScaleOut Onload for FREE with all its base 8000-series adapters. This means that people wanting to improve application performance should seriously consider testing ScaleOut Onload.
  2. For those looking to leverage orchestration platforms like Kubernetes, Solarflare has provided the kernel organization with an Advanced Receive Flow Steering driver. This new driver improves performance in all the above-mentioned underlay networking configurations by ensuring that packets destined for a container are quickly and efficiently delivered to it.
  3. At the end of July, during the Black Hat cyber-security conference, Solarflare will demonstrate a new security solution that secures all traffic to and from containers that have enterprise-unique IP addresses, via a hardware firewall in the NIC.
  4. Early this fall, as part of its Container Initiative, Solarflare will deliver an updated version of ScaleOut Onload that leverages MACVLANs and supports multiple network namespaces. This version should further improve both performance and security.
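To ground the underlay idea mentioned before the list, here is a minimal sketch, using the Docker SDK for Python, of creating a MACVLAN network that maps a physical port to containers. The subnet, gateway, parent interface, and names are hypothetical.

```python
# Minimal underlay-network sketch: create a Docker MACVLAN network that
# maps the physical port eth0 to containers with minimal overhead.
# Subnet, gateway, parent interface, and names are hypothetical.
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()

macvlan_net = client.networks.create(
    name="underlay0",
    driver="macvlan",
    options={"parent": "eth0"},  # physical port the containers share
    ipam=IPAMConfig(pool_configs=[
        IPAMPool(subnet="192.168.10.0/24", gateway="192.168.10.1"),
    ]),
)

# Containers attached to underlay0 get their own address on the physical
# segment, so their packets avoid an overlay's encapsulation entirely.
container = client.containers.run(
    "nginx:latest", detach=True, network="underlay0", name="web0",
)
print(container.name, "attached to", macvlan_net.name)
```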

To learn more about all of the above, and to gain NGINX, Red Hat, and Penguin Computing’s perspectives on containers, please consider attending Contain NY next Tuesday on Wall St. You can click here to learn more.

Large Deployment Container Networking Challenges?

I’d like to hear from those of you in the comments section who are deploying software in containers in production today. My interest is specifically in large deployments across a number of servers, and the networking issues you’re having. I mean people really using Kubernetes and Docker Swarm, not those tinkering with containers on a single host, but DevOps teams who’ve suffered the real bruises and scrapes of setting up MACVLANs, IPVLANs, Calico, Flannel, Kuryr, Magnum, Weave, Contiv networking, etc.

Some will suggest I read the various mailing lists (check), join Slack channels (check), attend DockerCon (check), or even contribute to the projects they prefer (you really don’t want my code). I’m not looking for that sort of feedback, because in all those forums the problem I have, at my level of container networking experience, is separating the posers from the real doers. My hope is that those willing to suggest ideas can provide concrete examples of server-based container networking rough edges they’ve experienced, ones that, if improved, would make a significant difference for their company. If that’s you, then please comment publicly below, or use the private form to the right. Thank you for your time.

Ultra-Scale Breakthrough for Containers & Neural-Class Networks

Large Container Environments Need Connectivity for 1,000s of Micro-services


An epic migration is underway from hypervisors and virtual machines to containers and micro-services. The motivation is simple: there is far less overhead with containers, and the payback is huge. You get more apps per server as host operating systems, multiple guest operating systems, and hypervisors are replaced by a single operating system. Solarflare is seeking to advance the development of networking for containers. Our goal is to provide the best possible performance, the highest degree of connectivity, and the easiest-to-deploy NICs for containers.

Solarflare’s first step in addressing the special networking requirements of containers is the delivery of the industry’s first Ethernet NIC with “ultra-scale connectivity.” This line of NICs can establish virtual connections from a container micro-service to thousands of other containers and micro-services. Ultra-scale network connectivity eliminates the performance penalty of vSwitch overhead, buffer copying, and Linux context switching, and it gives application servers the capacity to provide each micro-service with a dedicated network link. This ability to scale connectivity is critical to the success of deploying large container environments within a data center, across multiple data centers, and across multiple global regions.

Neural-Class Networks Require Ultra-Scale Connectivity

A “Neural Network” is a distributed, scale-out computing model that enables AI deep learning, which is emerging as the core of next-generation application software. Deep learning algorithms use huge neural networks, consisting of many layers of neurons (servers), to process massive amounts of data for instant facial and voice recognition, language translation, and hundreds of other AI applications.

“Neural-class” networks are computing environments that may not be used for artificial intelligence but share the same distributed scale-out architecture and massive size. Neural-class networks can be found in the data centers of public cloud service providers, stock exchanges, large retailers, insurance providers, and carriers, to name a few, and they need ultra-scale connectivity. For example, in a typical neural-class network, a single 80-inch rack houses 38 dual-processor servers, each processor with 10 dual-threaded cores, for a total of 38 × 2 × 10 × 2 = 1,520 threads. In this example, for each thread to work together on a deep learning or trading algorithm without constant Linux context switching, virtual network connections are needed to over 1,000 other threads in the rack.
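For the curious, the rack arithmetic works out as follows:

```python
# Thread count for the example rack described above.
servers = 38
processors_per_server = 2   # dual-processor servers
cores_per_processor = 10
threads_per_core = 2        # dual-threaded (hyper-threaded) cores

threads_per_rack = (servers * processors_per_server
                    * cores_per_processor * threads_per_core)
print(threads_per_rack)      # 1520

# Each thread talking to every other thread in the rack would need
# threads_per_rack - 1 virtual connections, i.e. over 1,000.
print(threads_per_rack - 1)  # 1519
```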

Solarflare XtremeScale™ Family of Software-Defined NICs

XtremeScale Software-Defined NICs from Solarflare (the SFN8000 series) are designed from the ground up for neural-class networks. The result is a new class of Ethernet adapter with the ultra-high-performance packet processing and connectivity of expensive network processors at the low cost and power of general-purpose NICs. There are six capabilities needed in neural-class networks that can be found only in XtremeScale software-defined NICs:

  1. Ultra-High Bandwidth – In 2017, Solarflare will provide high-frequency trading, CDN, and cloud service provider applications with port speeds up to 100Gbps, backed by “cut-through” technology that establishes a direct path between VMs and NICs to improve CPU efficiency.
  2. Ultra-Low Latency – Data centers are distributed environments with thousands of cores that need to communicate with each other constantly. Solarflare kernel-bypass technologies provide sub-one-microsecond latency with industry-standard TCP/IP.
  3. Ultra-Scale Connectivity – A single densely populated server rack easily exceeds 1,000 cores. Solarflare can interconnect those cores for distributed applications with NICs supporting 2,048 virtual connections.
  4. Software-Defined – Using well-defined APIs, network acceleration, monitoring, and security can be enabled and tuned for thousands of separate vNIC connections with software-defined NICs from Solarflare.
  5. Hardware-Based Security – Approximately 90% of network traffic is within a data center. With thousands of servers per data center, Solarflare can secure entry to each server with hardware-based firewalls.
  6. Instrumentation for Telemetry – Network acceleration, monitoring, and hardware security are made possible by a new class of NIC from Solarflare that captures network packets at line speeds up to 100Gbps.

In May, Solarflare will release a family of kernel-bypass libraries called Universal Kernel Bypass (UKB). These range from an advanced version of DPDK, which delivers packets directly from the NIC to the container, to several versions of Onload, which provide higher-level socket connections from the NIC directly to containers.