Four Container Networking Benefits

Container networking is walking the same path virtualization took over a decade ago. Still, networking is a non-trivial task, as there are both underlay and overlay networks to consider. Underlay networks such as bridge, MACVLAN, and IPVLAN are designed to map physical ports on the server to containers with as little overhead as possible. Overlay networks, by contrast, require packet-level encapsulation using technologies like VXLAN and NVGRE to accomplish the same goals. Any time network packets have to flow through hypervisors or layers of virtualization, performance suffers. Toward that end, Solarflare is now providing the following four benefits for those leveraging containers.
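
As a concrete illustration of the underlay approach, here is a minimal sketch, using the Docker SDK for Python, of creating a MACVLAN network bound directly to a physical port and attaching a container to it. The interface name `eth0`, the subnet, and the network name are assumptions for illustration, not details from this post.

```python
# Minimal sketch: an underlay-style MACVLAN network via the Docker SDK for Python.
# "eth0", the subnet, and the network name are illustrative assumptions.
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()

# Address pool the MACVLAN network will hand out to containers.
ipam = IPAMConfig(pool_configs=[IPAMPool(subnet="192.168.10.0/24",
                                         gateway="192.168.10.1")])

# The "parent" option binds the MACVLAN network directly to a physical port,
# so container traffic bypasses the docker0 bridge.
net = client.networks.create(
    name="macvlan-demo",
    driver="macvlan",
    options={"parent": "eth0"},
    ipam=ipam,
)

# Containers attached to this network get an address on the physical segment.
container = client.containers.run("nginx", detach=True, network="macvlan-demo")
print(container.name, "attached to", net.name)
```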

  1. NGINX Plus running in a container can now utilize ScaleOut Onload, and in doing so achieves a 40% improvement in performance over standard host networking. With the introduction of Universal Kernel Bypass (UKB), Solarflare now includes both DPDK and ScaleOut Onload for free with all of its base 8000-series adapters. Anyone wanting to improve application performance should seriously consider testing ScaleOut Onload.
  2. For those looking to leverage orchestration platforms like Kubernetes, Solarflare has contributed an Advanced Receive Flow Steering driver to the upstream kernel. This driver improves performance in all of the underlay configurations mentioned above by ensuring that packets destined for a container are delivered to it quickly and efficiently (a generic sketch of the kernel flow-steering knobs involved appears after this list).
  3. At the end of July, during the Black Hat cybersecurity conference, Solarflare will demonstrate a new security solution that secures all traffic to and from containers with enterprise-unique IP addresses, using a hardware firewall in the NIC.
  4. Early this fall, as part of its Container Initiative, Solarflare will deliver an updated version of ScaleOut Onload that leverages MACVLAN and supports multiple network namespaces (see the MACVLAN/namespace sketch after this list). This version should further improve both performance and security.
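
Regarding item 2: receive flow steering drivers build on the kernel's generic Receive Flow Steering (RFS) machinery. The sketch below shows only those standard Linux knobs, not Solarflare's driver itself; the device name `eth0` and the table sizes are illustrative assumptions.

```python
# Hedged sketch: the generic Linux Receive Flow Steering (RFS) knobs that
# flow-steering drivers build on. This is not Solarflare's driver itself.
# Device name "eth0" and the table sizes are illustrative assumptions.
import glob

DEV = "eth0"
GLOBAL_FLOW_ENTRIES = 32768          # total flow-table entries, system-wide
rx_queues = glob.glob(f"/sys/class/net/{DEV}/queues/rx-*")

# Size the global RFS socket flow table (requires root).
with open("/proc/sys/net/core/rps_sock_flow_entries", "w") as f:
    f.write(str(GLOBAL_FLOW_ENTRIES))

# Split the table across the device's receive queues so packets for a given
# socket (or container veth/MACVLAN flow) land on the CPU consuming them.
per_queue = GLOBAL_FLOW_ENTRIES // max(len(rx_queues), 1)
for q in rx_queues:
    with open(f"{q}/rps_flow_cnt", "w") as f:
        f.write(str(per_queue))

print(f"RFS enabled on {DEV}: {len(rx_queues)} queues, {per_queue} flows each")
```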
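
Regarding item 4: here is a minimal sketch, using pyroute2, of the plumbing that MACVLAN-per-namespace setups rely on: a MACVLAN child of a physical port is created on the host and handed into a network namespace standing in for a container. The interface names, namespace name, and address are assumptions for illustration.

```python
# Minimal sketch: a MACVLAN sub-interface handed into a network namespace,
# using pyroute2. Names and the address are illustrative assumptions.
from pyroute2 import IPRoute, NetNS

PARENT = "eth0"
NS_NAME = "demo-ns"

# NetNS() creates the named namespace if it does not already exist; here it
# stands in for a container's network namespace.
ns = NetNS(NS_NAME)

ipr = IPRoute()
parent_idx = ipr.link_lookup(ifname=PARENT)[0]

# Create a MACVLAN child of the physical port (bridge mode lets siblings talk).
ipr.link("add", ifname="mv0", kind="macvlan",
         link=parent_idx, macvlan_mode="bridge")
mv_idx = ipr.link_lookup(ifname="mv0")[0]

# Hand the interface to the namespace; it disappears from the host's view.
ipr.link("set", index=mv_idx, net_ns_fd=NS_NAME)

# Configure the interface from inside the namespace.
idx = ns.link_lookup(ifname="mv0")[0]
ns.addr("add", index=idx, address="192.168.10.50", prefixlen=24)
ns.link("set", index=idx, state="up")

ns.close()
ipr.close()
```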

To learn more about all of the above, and to hear NGINX, Red Hat, and Penguin Computing's perspectives on containers, please consider attending Contain NY next Tuesday on Wall St. Click here to learn more.

Large Deployment Container Networking Challenges?

I’d like to hear from those of you in the comments section who are deploying containerized software into production today. My interest is specifically in large deployments across a number of servers and the networking issues you’re running into: people really using Kubernetes and Docker Swarm, not those tinkering with containers on a single host, but DevOps folks who’ve suffered the real bruises and scrapes of setting up MACVLAN, IPVLAN, Calico, Flannel, Kuryr, Magnum, Weave, Contiv networking, etc.

Some will suggest I read the various mailing lists (check), join Slack channels (check), attend DockerCon (check), or even contribute to the projects they prefer (you really don’t want my code). I’m not looking for that sort of feedback, because in all of those forums the problem I have, at my level of container networking experience, is separating the posers from the real doers. My hope is that those willing to suggest ideas can provide concrete examples of the server-based container networking rough edges they’ve experienced, ones that, if improved, would make a significant difference for their company. If that’s you, please comment publicly below or use the private form to the right. Thank you for your time.