Podcast on iTunes, Google Play & Stitcher

After several weeks we now have a critical mass of episodes of The Technology Evangelist Podcast to justify posting it on iTunes, Google Play, and Stitcher. So now you can subscribe from your favorite service or player and stay current. Recently we completed episodes on high performance packet capture, electronic trading, NVMe, and Hadoop.

We’re now scheduling and recording episodes on FPGAs, digital currencies (Bitcoin), Intel Skylake, TensorFlow, containers, the race to zero, and much, much more. If you have ideas for topics or speakers, we always welcome suggestions, so please email The Technology Evangelist or comment on this blog post.

Four Container Networking Benefits

Container networking is walking in the footsteps taken by virtualization over a decade ago. Still, networking is a non-trivial task, as there are both underlay and overlay networks one needs to consider. Underlay networks such as bridge, MACVLAN, and IPVLAN are designed to map physical ports on the server to containers with as little overhead as possible. Conversely, there are also overlay networks that require packet-level encapsulation using technologies like VXLAN and NVGRE to accomplish the same goals. Any time network packets have to flow through hypervisors or layers of virtualization, performance will suffer. To that end, Solarflare is now providing the following four benefits for those leveraging containers.
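The underlay/overlay distinction above can be made concrete with Docker's built-in network drivers. A minimal sketch follows; the parent interface (`eth0`) and the subnet values are illustrative assumptions, and the overlay driver requires the daemon to be part of a swarm:

```shell
# Underlay: MACVLAN maps container interfaces onto a physical port with
# minimal overhead; each container gets its own MAC on the parent link.
docker network create -d macvlan \
  --subnet=192.168.50.0/24 --gateway=192.168.50.1 \
  -o parent=eth0 macnet

# Overlay: VXLAN encapsulation lets containers on different hosts share a
# virtual segment, at the cost of per-packet encap/decap work.
docker network create -d overlay --attachable overnet

# Attach a container to the underlay network and inspect its address:
docker run --rm --network macnet alpine ip addr
```

The trade-off the paragraph describes shows up directly here: the MACVLAN network adds almost no per-packet work, while every packet on the overlay network pays for VXLAN encapsulation.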

  1. NGINX Plus running in a container can now utilize ScaleOut Onload, achieving a 40% performance improvement over standard host networking. With the introduction of Universal Kernel Bypass (UKB), Solarflare now includes both DPDK and ScaleOut Onload free of charge with all of its base 8000-series adapters. This means that people wanting to improve application performance should seriously consider testing ScaleOut Onload.
  2. For those looking to leverage orchestration platforms like Kubernetes, Solarflare has provided the kernel organization with an Advanced Receive Flow Steering driver. This new driver improves performance in all the above-mentioned underlay networking configurations by ensuring that packets destined for containers are quickly and efficiently delivered to that container.
  3. At the end of July, during the Black Hat cyber security conference, Solarflare will demonstrate a new security solution that secures all traffic to and from containers, with enterprise-unique IP addresses, via a hardware firewall in the NIC.
  4. Early this fall, as part of its Container Initiative, Solarflare will deliver an updated version of ScaleOut Onload that leverages MACVLANs and supports multiple network namespaces. This version should further improve both performance and security.
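For point 1 above, Onload acceleration is typically applied without modifying the application at all. A minimal sketch, assuming a Solarflare NIC with Onload installed (exact package and profile names may differ for ScaleOut Onload builds):

```shell
# Option 1: the onload launcher wraps the application, transparently
# replacing its socket calls with kernel-bypass equivalents.
onload --profile=latency nginx -g 'daemon off;'

# Option 2: preload the Onload library explicitly. This form is how the
# same acceleration can be injected into a container entrypoint.
LD_PRELOAD=libonload.so nginx -g 'daemon off;'
```

Because the acceleration is injected at the socket layer, NGINX itself needs no code changes or rebuilds, which is what makes the 40% claim testable against an unmodified container image.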

To learn more about all of the above, and to hear NGINX, Red Hat, and Penguin Computing’s perspectives on containers, please consider attending Contain NY next Tuesday on Wall Street. You can click here to learn more.

Large Deployment Container Networking Challenges?

I’d like to hear from those of you in the comments section who are deploying software in containers into production today. My interest is specifically in large deployments across a number of servers, and the issues you’re having with networking. I want to hear from people really using Kubernetes and Docker Swarm: not those tinkering with containers on a single host, but DevOps teams who’ve suffered the real bruises and scrapes of setting up MACVLANs, IPVLANs, Calico, Flannel, Kuryr, Magnum, Weave, Contiv, etc…

Some will suggest I read the various mailing lists (check), join Slack channels (check), attend DockerCon (check), or even contribute to projects they prefer (you really don’t want my code). I’m not looking for that sort of feedback because, in all those forums, the problem at my level of container networking experience is separating the posers from the real doers. My hope is that those willing to suggest ideas can provide concrete examples of server-based container networking rough edges they’ve experienced that, if improved, would make a significant difference for their company. If that’s you, then please comment publicly below, or use the private form to the right. Thank you for your time.

Ultra-Scale Breakthrough for Containers & Neural-Class Networks

Large Container Environments Need Connectivity for 1,000s of Micro-services

An epic migration is underway from hypervisors and virtual machines to containers and microservices. The motivation is simple: there is far less overhead with containers, and the payback is huge. You get more apps per server as host operating systems, multiple guest operating systems, and hypervisors are replaced by a single operating system. Solarflare is seeking to advance the development of networking for containers. Our goal is to provide the best possible performance, with the highest degree of connectivity, and the easiest-to-deploy NICs for containers.

Solarflare’s first step in addressing the special networking requirements of containers is the delivery of the industry’s first Ethernet NIC with “ultra-scale connectivity.” This line of NICs has the ability to establish virtual connections from a container microservice to thousands of other containers and microservices. Ultra-scale network connectivity eliminates the performance penalty of vSwitch overhead, buffer copying, and Linux context switching. It gives application servers the capacity to provide each microservice with a dedicated network link. This ability to scale connectivity is critical to the success of deploying large container environments within a data center, across multiple data centers, and across multiple global regions.

Neural-Class Networks Require Ultra-Scale Connectivity

A “neural network” is a distributed, scale-out computing model that enables the AI deep learning now emerging as the core of next-generation application software. Deep learning algorithms use huge neural networks, consisting of many layers of neurons (servers), to process massive amounts of data for instant facial and voice recognition, language translation, and hundreds of other AI applications.

“Neural-class” networks are computing environments that may not be used for artificial intelligence but share the same distributed scale-out architecture and massive size. Neural-class networks can be found in the data centers of public cloud service providers, stock exchanges, large retailers, insurance providers, and carriers, to name a few. These neural-class networks need ultra-scale connectivity. For example, in a typical neural-class network, a single 80-inch rack houses 38 dual-processor servers, each processor with 10 dual-threaded cores, for a total of 1,520 threads. In this example, for each thread to work together on a deep learning or trading algorithm without constant Linux context switching, virtual network connections are needed to over 1,000 other threads in the rack.
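The thread count in that example works out as follows (note the 1,520 figure requires counting 10 cores per processor, not per server):

```shell
# Worked arithmetic for the example rack: 38 dual-processor servers,
# 10 dual-threaded (SMT) cores per processor.
sockets_per_server=2
cores_per_socket=10
threads_per_core=2
servers_per_rack=38

threads_per_server=$((sockets_per_server * cores_per_socket * threads_per_core))
threads_per_rack=$((servers_per_rack * threads_per_server))

echo "$threads_per_server"   # 40 threads per server
echo "$threads_per_rack"     # 1520 threads per rack
```

So any one thread that needs a dedicated virtual link to every other thread in the rack requires 1,519 connections, which is why the NIC-level connection count has to scale into the thousands.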

Solarflare XtremeScale™ Family of Software-Defined NICs

XtremeScale Software-Defined NICs from Solarflare (SFN8000 series) are designed from the ground up for neural-class networks. The result is a new class of Ethernet adapter with the ultra-high-performance packet processing and connectivity of expensive network processors, and the low cost and power of general-purpose NICs. There are six capabilities needed in neural-class networks that can be found only in XtremeScale software-defined NICs:

  1. Ultra-High Bandwidth – In 2017, Solarflare will provide high-frequency trading, CDN and cloud service provider applications with port speeds up to 100Gbps, backed by “cut-through” technology establishing a direct path between VMs and NICs to improve CPU efficiency.
  2. Ultra-Low Latency – Data centers are distributed environments with thousands of cores that need to constantly communicate with each other. Solarflare kernel bypass technologies provide sub-microsecond latency with industry-standard TCP/IP.
  3. Ultra-Scale Connectivity – A single densely populated server rack easily exceeds 1,000 cores. Solarflare can interconnect the cores to each other for distributed applications with NICs supporting 2,048 virtual connections.
  4. Software-Defined – Using well-defined APIs, network acceleration, monitoring, and security can be enabled and tuned, for thousands of separate vNIC connections, with software-defined NICs from Solarflare.
  5. Hardware-Based Security – Approximately 90% of network traffic is within a data center. With thousands of servers per data center, Solarflare can secure entry to each server with hardware-based firewalls.
  6. Instrumentation for Telemetry – Network acceleration, monitoring, and hardware security are made possible by a new class of NIC from Solarflare which captures network packets at line speeds up to 100Gbps.

In May, Solarflare will release a family of kernel bypass libraries called Universal Kernel Bypass (UKB). UKB ranges from an advanced version of DPDK, which delivers packets directly from the NIC to the container, to several versions of Onload, which provide higher-level socket connections from the NIC directly to containers.