Video Streaming to Jumbo Boob Tubes (Part 2 of 3)

The cult classic Monty Python and the Holy Grail in standard definition (SD) from Apple’s iTunes service is a 1.00GB download, the HD (720p) file is 2.91GB, and the SuperHD (Apple just calls it 1080p) file is 4.84GB. This is a perfect real-world example of the problem: the base file is a movie most of us geeks are familiar with, and at exactly 1.00GB it clearly shows the expansion as we move from format to format. Also, it should be noted that this doesn’t even address UltraHD, which will grow files another 12X in size over SuperHD.

So with the data demand for video streaming growing exponentially as consumers move from SD to HD & SuperHD, and soon to UltraHD, what can be done to ease the burden on these servers? First, we need to take a quick low-level look at the problem. A movie in the “cloud” is nothing more than a file on storage somewhere that needs to be transferred through a network to a consumer. Focusing on the server side, there are several areas where one can further optimize performance:

  • Tracking all the clients & their associated streams.
  • Moving the bits that make up the movie from storage to memory.
  • Then moving those bits from memory to the network adapter.
  • Finally, transmitting these bits from the network adapter onto the wire so they can reach the consumer.
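The first three steps above can be sketched in a few lines of Python. This is the "boring" path mentioned below, not any vendor's implementation, and the function name is purely illustrative. Note the two copies it incurs: storage to a user-space buffer, then user-space buffer back into the kernel's socket buffer.

```python
import socket

CHUNK = 64 * 1024  # read and send the movie in 64 KiB pieces


def stream_file_naive(path: str, conn: socket.socket) -> int:
    """Send a file over an established socket the conventional way:
    storage -> user-space memory -> kernel socket buffer -> wire.
    Returns the number of bytes sent."""
    sent = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)   # copy #1: page cache -> user buffer
            if not chunk:
                break
            conn.sendall(chunk)     # copy #2: user buffer -> kernel
            sent += len(chunk)
    return sent
```

Every byte of the movie crosses the user/kernel boundary twice here, which is exactly the overhead the optimizations discussed below try to eliminate.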

Tracking network connections and mapping them to streams is a collaboration between the application and the OS. Transferring a file from storage to memory is an OS function requested by the application; these paths have been well optimized over the years, so there can only be marginal improvements here. Moving bits from memory to a network adapter is also considered an OS function, but it’s one that the HPC (High-Performance Computing) crowd has been optimizing for well over a decade through a technique called OS Bypass. This is still more art than engineering, so it’s not mainstream. The final step is often considered hardware, but it’s also the most intriguing. Normally packets arrive via the OS and have to be transmitted as-is onto the Ethernet, boring. If, on the other hand, they arrive via OS Bypass, then some real magic can happen.

It appears that two of the highest-profile web video streaming entities, Netflix & Hulu, along with a collection of others, have moved to a new web server platform called Nginx (pronounced “engine-x”) to address the C10K problem. C10K is the name given to a class of problems associated with keeping track of tens of thousands of web server clients and the data they are connected to. Nginx addresses this through a number of sophisticated techniques that are better explained on its website.
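The heart of Nginx’s answer to C10K is an event-driven loop over readiness notifications (epoll on Linux, kqueue on BSD) instead of a thread per client. The same idea can be sketched with Python’s selectors module; this echo handler is a stand-in for real stream-serving logic, and the port and names are illustrative. One process watches thousands of sockets at once:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll/kqueue/poll under the hood


def accept(server: socket.socket) -> None:
    """A new client arrived: register it in the same event loop."""
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)


def handle(conn: socket.socket) -> None:
    """A tracked client is readable: echo (stand-in for streaming)."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:                    # client closed: stop tracking it
        sel.unregister(conn)
        conn.close()


def serve(host: str = "127.0.0.1", port: int = 8080) -> None:
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1024)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:  # one thread multiplexes every client connection
        for key, _ in sel.select():
            key.data(key.fileobj)  # dispatch to accept() or handle()
```

Because the loop only touches sockets the kernel reports as ready, the cost of an idle client is a table entry, not a blocked thread.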

The next issue is that of transporting the movie, as a file, from storage to memory. Again, this falls into the realm of Nginx, as it’s both the application and the OS that are responsible for all the latency around file I/O. Nginx has had extensive tuning over the years to minimize this sort of latency.
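One of those tunings worth calling out is zero-copy transmission: Linux’s sendfile(2) asks the kernel to move file pages straight from the page cache to the socket, so the movie bytes never detour through a user-space buffer at all (Nginx enables this with its sendfile directive). A minimal sketch using Python’s wrapper for the same system call, with an illustrative function name:

```python
import os
import socket


def stream_file_zero_copy(path: str, conn: socket.socket) -> int:
    """Send a file over a socket via sendfile(2): the kernel moves
    pages from the page cache to the socket with no user-space copy.
    Returns the number of bytes sent."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset < size:
            # os.sendfile returns how many bytes were actually queued,
            # so we advance the offset and retry until the file is done
            offset += os.sendfile(conn.fileno(), f.fileno(),
                                  offset, size - offset)
    return offset
```

Compared with a read()/send() loop, this halves the memory traffic per streamed byte and avoids the user/kernel boundary crossings, which is exactly the kind of win that matters when one box is feeding thousands of streams.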

So now we’re left with what is considered both an OS and a hardware issue: moving the bits to the network adapter and putting them on the wire to the client. Nginx with any generic 10GbE adapter works fine, but the real magic happens when it’s coupled with an intelligent 10GbE adapter and FastStack™ VideoPump™, which utilizes HPC OS Bypass techniques to dramatically improve system performance. Next week, in part 3 of this series, we’ll show you what those improvements really are and give you a peek behind the curtain at the magic that makes this all possible. Stay tuned, same bat time, same bat channel…
