Four Container Networking Benefits

Container networking is following the path that virtualization blazed over a decade ago. Still, networking is a non-trivial task, as there are both underlay and overlay networks to consider. Underlay networks such as bridge, MACVLAN, and IPVLAN are designed to map physical ports on the server to containers with as little overhead as possible. Conversely, overlay networks require packet-level encapsulation, using technologies like VXLAN and NVGRE, to accomplish the same goals. Any time network packets have to flow through hypervisors or layers of virtualization, performance suffers. To that end, Solarflare is now providing the following four benefits for those leveraging containers.
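
As a rough illustration of the underlay approach described above, the sketch below drives the standard iproute2 commands from Python to carve a MACVLAN child interface off a physical port and hand it to a container-style network namespace. The interface name, namespace name, and address (eth0, demo-ns, mv0, 192.0.2.10) are hypothetical placeholders, not part of any Solarflare tooling.

```python
import subprocess

def sh(cmd):
    """Run a single iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd.split(), check=True)

# Hypothetical names: physical parent port, namespace, MACVLAN child interface.
PARENT, NETNS, CHILD = "eth0", "demo-ns", "mv0"

sh(f"ip netns add {NETNS}")                                        # container-style namespace
sh(f"ip link add {CHILD} link {PARENT} type macvlan mode bridge")  # underlay: child of the physical port
sh(f"ip link set {CHILD} netns {NETNS}")                           # move the interface into the namespace
sh(f"ip netns exec {NETNS} ip addr add 192.0.2.10/24 dev {CHILD}") # example address from TEST-NET-1
sh(f"ip netns exec {NETNS} ip link set {CHILD} up")
```

Because the MACVLAN child sits directly on the physical port, traffic reaches the namespace without traversing a software bridge or an encapsulation layer, which is exactly the low-overhead property that makes the underlay options attractive.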

  1. NGINX Plus running in a container can now utilize ScaleOut Onload. In doing so, NGINX Plus achieves a 40% improvement in performance over standard host networking (a minimal launch sketch follows this list). With the introduction of Universal Kernel Bypass (UKB), Solarflare is now including both DPDK and ScaleOut Onload free of charge with all of its base 8000 series adapters. Anyone looking to improve application performance should seriously consider testing ScaleOut Onload.
  2. For those looking to leverage orchestration platforms like Kubernetes, Solarflare has contributed an Accelerated Receive Flow Steering (ARFS) driver to the Linux kernel. This new driver improves performance in all of the underlay networking configurations mentioned above by ensuring that packets destined for a container are delivered to it quickly and efficiently.
  3. At the end of July, during the Black Hat cyber security conference, Solarflare will demonstrate a new security solution that secures all traffic to and from containers carrying enterprise-unique IP addresses via a hardware firewall in the NIC.
  4. Early this fall, as part of Solarflare’s Container Initiative, the company will deliver an updated version of ScaleOut Onload that leverages MACVLANs and supports multiple network namespaces. This version should further improve both performance and security.
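
As referenced in item 1, here is a minimal, hedged sketch of what running NGINX under ScaleOut Onload can look like. It assumes the usual OpenOnload-style mechanism of interposing the user-space sockets library on an unmodified application (via LD_PRELOAD or the onload wrapper); the exact library name, profile, and invocation are illustrative assumptions rather than a documented deployment recipe.

```python
import os
import subprocess

# Assumption: Onload-style acceleration works by preloading its user-space
# sockets library; the application binary itself is unmodified.
env = dict(os.environ, LD_PRELOAD="libonload.so")

# Run NGINX in the foreground with the kernel-bypass library interposed
# on its socket calls.
subprocess.run(["nginx", "-g", "daemon off;"], env=env, check=True)

# Roughly equivalent idea using the wrapper script shipped with Onload:
#   onload --profile=latency nginx -g 'daemon off;'
```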

To learn more about all of the above, and to hear NGINX, Red Hat, and Penguin Computing’s perspectives on containers, please consider attending Contain NY next Tuesday on Wall St.

Ultra-Scale Breakthrough for Containers & Neural-Class Networks

Large Container Environments Need Connectivity for 1,000s of Micro-services

An epic migration is underway from hypervisors and virtual machines to containers and micro-services. The motivation is simple: there is far less overhead with containers, and the payback is huge. You get more apps per server, as the host operating system, multiple guest operating systems, and hypervisors are replaced by a single operating system. Solarflare is seeking to advance the development of networking for containers. Our goal is to provide the best possible performance, the highest degree of connectivity, and the easiest-to-deploy NICs for containers.

Solarflare’s first step in addressing the special networking requirements of containers is the delivery of the industry’s first Ethernet NIC with “ultra-scale connectivity.” This line of NICs can establish virtual connections from a container micro-service to thousands of other containers and micro-services. Ultra-scale network connectivity eliminates the performance penalty of vSwitch overhead, buffer copying, and Linux context switching. It gives application servers the capacity to provide each micro-service with a dedicated network link. This ability to scale connectivity is critical to the success of deploying large container environments within a data center, across multiple data centers, and across multiple global regions.

Neural-Class Networks Require Ultra-Scale Connectivity

A “Neural Network” is a distributed, scale-out computing model that enables AI deep learning, which is emerging as the core of next-generation application software. Deep learning algorithms use huge neural networks, consisting of many layers of neurons (servers), to process massive amounts of data for instant facial and voice recognition, language translation, and hundreds of other AI applications.

“Neural-class” networks are computing environments that may not be used for artificial intelligence but share the same distributed, scale-out architecture and massive size. Neural-class networks can be found in the data centers of public cloud service providers, stock exchanges, large retailers, insurance providers, and carriers, to name a few. These neural-class networks need ultra-scale connectivity. For example, in a typical neural-class network, a single 80-inch rack houses 38 dual-processor servers, each server with 10 dual-threaded cores, for a total of 1,520 threads. In this example, in order for each thread to work together on a deep learning or trading algorithm without constant Linux context switching, each thread needs virtual network connections to over 1,000 other threads in the rack.
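
The thread arithmetic in that example is easy to verify, and the short calculation below also shows where the "over 1,000 connections per thread" requirement comes from; the figures are simply the numbers quoted above worked out, not an additional vendor specification.

```python
# Example rack from the text: 38 dual-processor servers,
# 10 cores per processor, 2 hardware threads per core.
servers, sockets_per_server, cores_per_socket, threads_per_core = 38, 2, 10, 2

threads_in_rack = servers * sockets_per_server * cores_per_socket * threads_per_core
print(threads_in_rack)       # 1520 threads in the rack

# For a thread to reach every other thread in the rack directly,
# it needs this many virtual connections:
print(threads_in_rack - 1)   # 1519, i.e. "over 1,000"
```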

Solarflare XtremeScale™ Family of Software-Defined NICs

XtremeScale Software-Defined NICs from Solarflare (SFN8000 series) are designed from the ground up for neural-class networks. The result is a new class of Ethernet adapter that combines the ultra-high-performance packet processing and connectivity of expensive network processors with the low cost and power of general-purpose NICs. There are six capabilities needed in neural-class networks which can be found only in XtremeScale software-defined NICs:

  1. Ultra-High Bandwidth – In 2017, Solarflare will provide high-frequency trading, CDN and cloud service provider applications with port speeds up to 100Gbps, backed by “cut-through” technology establishing a direct path between VMs and NICs to improve CPU efficiency.
  2. Ultra-Low Latency – Data centers are distributed environments with thousands of cores that need to communicate with each other constantly. Solarflare kernel bypass technologies provide sub-microsecond latency with industry-standard TCP/IP.
  3. Ultra-Scale Connectivity – A single densely populated server rack easily exceeds 1,000 cores. Solarflare can interconnect the cores to each other for distributed applications with NICs supporting 2,048 virtual connections.
  4. Software-Defined – Using well-defined APIs, network acceleration, monitoring, and security can be enabled and tuned for thousands of separate vNIC connections with software-defined NICs from Solarflare.
  5. Hardware-Based Security – Approximately 90% of network traffic is within a data center. With thousands of servers per data center, Solarflare can secure entry to each server with hardware-based firewalls.
  6. Instrumentation for Telemetry – Network acceleration, monitoring, and hardware security are made possible by a new class of NIC from Solarflare which captures network packets at line speeds up to 100Gbps.

In May, Solarflare will release a family of kernel-bypass libraries called Universal Kernel Bypass (UKB). UKB ranges from an advanced version of DPDK, which delivers packets directly from the NIC to a container, to several versions of Onload, which provide higher-level sockets connections from the NIC directly to containers.
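
Because Onload interposes on the standard BSD sockets API, ordinary socket code does not need to change to benefit from UKB. The toy echo server below uses plain Python sockets with an arbitrary example port; the idea, hedged because deployment details vary, is that the same unmodified program picks up the kernel-bypass path when launched with the preload library or onload wrapper shown earlier.

```python
import socket

# A plain TCP echo server: nothing here is Onload- or UKB-specific.
# Acceleration comes from how the process is launched, not from the code.
def serve(host="0.0.0.0", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conn, _peer = srv.accept()
        with conn:
            while data := conn.recv(4096):   # echo until the peer closes
                conn.sendall(data)

if __name__ == "__main__":
    serve()
```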