7 Things I Learned From the IEEE Hot Interconnects Panel on SmartNICs

For the second year, I’ve had the pleasure of chairing the panel on SmartNICs. Chairing this panel is like being both an interviewer and a ringmaster. You have to ask thoughtful questions and respond to answers from six knowledgeable panelists while watching the Slack channel for real-time comments or questions that might further improve the event’s content. Recently I finished transcribing the talk and discovered the following seven gems sprinkled throughout the conversation.

7. Software today shapes the hardware of tomorrow. This is almost a direct quote from one of the speakers, and nearly half of the participants echoed it several times in different ways. One said that vertical integration means moving work done today in code on an Arm core into gates tomorrow.

6. DPUs are evolving into storage accelerators. Perhaps the biggest vendor mentioned adding compression, which suggests they are serious about positioning their DPU as a computational storage controller in the near future.

5. Side-Channel Attacks (SCA) are a consideration. Only one vendor brought up this topic, but it was on the minds of several. Applying countermeasures in silicon to thwart side-channel attacks nearly doubles the number of gates for that specific cryptographic block. I understand that the countermeasures essentially consume the inverse power while also generating the inverse electromagnetic effects, so that external measurements of the chip package during cryptographic functions yield a completely fuzzed result.

4. Big Vendors are Cross-pollinating. We saw this last year with the announcement of the NVIDIA BlueField-2X, which includes a GPU on the SmartNIC, but this appeared to be a bolt-on. NVIDIA’s roadmap doesn’t integrate the GPU into the DPU until BlueField-4, several years out. Now Xilinx, which will soon be part of AMD, is hinting at similar things. Intel, which acquired Altera several years ago, is also bolting Xeons onto its Infrastructure Processing Unit (IPU).

3. Monterey will be the Data Center OS. VMware wasn’t on the panel, but some panelists had a lot to say on the topic. One mentioned that the data center is the new computer. This same panelist strongly implied that the future of DPUs lies in the control plane plugging into Monterey. Playing nicely with Monterey will likely become a requirement if you want to sell into future data centers.

2. The CPU on the DPU is fungible. The company seeking to acquire Arm mentioned they want to use a CPU micro-architecture in their DPU that they can tune. In other words, they want to extend the Arm instruction set found in their DPU with additional instructions designed to process network packets. Now that could be interesting.

Finally, here’s a nerdy plumbing-type item, but one that will change the face of SmartNICs and bring them enormous advantages: the move to chiplets. Today ALL SmartNICs or DPUs rely on a single die in a single package, then one or perhaps two packages on a PCIe card. In the future, a single chip package will contain multiple dies, each with different functional components, possibly fabricated at different process nodes, so….

1. The inflection point for chiplet adoption is integrated photonics. Chiplets will become commonplace in DPU packages when there is a need to connect optics directly to the die in the package. This will enable very high-speed connections over extremely short distances.

Data Types, Computation & Electronic Trading

I’ll return to the “Expanded Role of HPC Accelerators” in my next post, but before doing so, we need to take a step back and look at how data is stored to understand how best to operate on and accelerate that data. When you look under the hood of your car, you’ll find that there are at least six different types of mechanical fasteners holding things together.

Artificial Intelligence and Electronic Trading

Hoses often require a slotted screwdriver to remove, while others a Phillips head. Some panels require a star-like Torx wrench, while others a hexagonal Allen, but the most common ones you’ll find are hexagonal-headed bolts and nuts in both English and Metric sizes. So why can’t the engineers who design these products select a single fastener style? Simple: each problem has unique characteristics, and these engineers are choosing the most appropriate solution for each. The same is true for data types within a computer. Ask almost anyone, and they’ll say that data is stored in a computer in bytes, just like your engine has fasteners.

Computers process data in many end-user formats, from simple text and numbers to sounds, images, video, and much more. Ultimately, it all becomes bits organized, managed, and stored as bytes. We then wrap abstraction layers around these collections of bytes as we shape them into numbers that our computer can process. Some numbers are whole, also called “integers,” meaning they have no fractional component, for example, one, two, and three. Meanwhile, other “real” numbers contain a decimal point, with anywhere from one to an infinite number of digits to its right. Computers process data using both of these numerical formats.

Textual data is relatively easy to grasp as it is often stored as an integer without a positive or negative sign. For example, the letter “A” is stored using a standard that assigns it the value 65, which as a single byte is “01000001.” In Computer Science, how a number is represented can be just as important as the value of the number itself. One can store the number three as an integer, but when it is divided by the integer value two, some people would say the result is 1.5, and they could be wrong. Depending on the division operator used (some languages support several) and the data type assigned to the result, there could be several different answers, all of which would be correct within their context (see the short sketch following the list below). As you move from integers to real numbers or, more specifically, floating-point numbers, these numerical representations expand considerably based on the degree of precision required by the problem at hand. Today, some of the more common numerical data types are:

  • Integers, which often take a single byte of storage and can be signed or unsigned. Integers come in at least seven distinct lengths: a nibble (four bits), a byte (eight bits), a half-word (sixteen bits), a word (thirty-two bits), a double word (sixty-four bits), an octaword (one hundred twenty-eight bits), and arbitrary n-bit values.
  • Half Precision floating-point, also known as FP16, is where the real number is expressed in 16 bits of storage. The fractional component is stored in ten bits; then there are five bits for the exponent and a single sign bit.
  • Single Precision, also known as floating-point 32 or FP32, for 32 bits. Here we have 23 bits for the fractional component, eight for the exponent, and one for the sign.
  • Double Precision, also known as floating-point 64 or FP64, for 64 bits. Finally, we have 52 bits for the fractional component, 11 for the exponent, and one for the sign.
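To make the division point above concrete, here is a short Python sketch (NumPy is used only for convenience) showing how the same operands give different results depending on the operator chosen, along with the bit widths and approximate precision of the floating-point formats just listed:

```python
import numpy as np

# The same operands, different results depending on the operator and type used.
print(3 // 2)                         # 1   (integer division)
print(3 / 2)                          # 1.5 (floating-point division)
print(np.float16(3) / np.float16(2))  # 1.5, but held in just 16 bits

# Bit widths and approximate decimal precision of the formats listed above.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, ~{info.precision} significant decimal digits")
```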

There are other types, but these are the major ones. How computational units process data differs broadly based on the architecture of the processing unit. When we talk about processing units, we’re specifically talking about Central Processing Units (CPUs), Graphics Processing Units (GPUs), Digital Signal Processors (DSPs), and Field Programmable Gate Arrays (FPGAs). CPUs are the most general; they can process all of the above data types and much more. They are the BMW X5 sport utility vehicle of computation: they’ll get you pretty much anywhere you want, they’ll do it in style, and they’ll provide good performance. GPUs are that tricked-out Jeep Wrangler Rubicon designed to crawl over pretty much anything you can throw at it while still carrying a family of four. DSPs, on the other hand, are dirt bikes; they’ll get one person over rough terrain faster than anything available, but they’re not going to shine when they hit the asphalt. Finally, we have FPGAs, the Dodge Challenger SRT Demon of the pack; they never leave the asphalt, they just leave everything else on it behind. All that to say that GPUs and DSPs are designed to operate on floating-point data, while FPGAs do much better with integer data. So why does this matter?

Not every problem is suited to a slotted fastener; sometimes you’ll need a Torx, other times a hexagonal-headed bolt. The same is true in computer science. If your system’s primary application is weather forecasting, which is floating-point intense, you might want vast quantities of GPUs. Conversely, if you’re doing genetic sequencing, where data is entirely integer-based, you’ll find that FPGAs may outperform GPUs, delivering up to a 100X performance-per-watt advantage. Certain aspects of Artificial Intelligence (AI) have benefited more from FP16-based calculations than from FP32 or FP64. In this case, DSPs may outshine GPUs in calculations per watt. As AI emerges in key computational markets moving forward, we’ll see more and more DSP applications; one of these will be electronic trading.

Today the cutting edge of electronic trading platforms utilizes FPGA boards with many built-in ultra-high-performance networking ports; in some cases, these boards have upwards of 16 external physical network connections. Trading data and orders into markets are sent via network messages, which are entirely character, hence integer, based. These FPGAs contain logic blocks that rapidly process this integer data, but computations slow down considerably as some of these integers are converted to floating-point for computation. For example, some messages use a twelve-character format for the price where the first six digits are the whole number and the second six digits represent the decimal fraction. So a price of $12.34 would be represented as the character string “000012340000.” Other fields also use twelve-character values for a number, but the first ten digits are the whole number and the last two the decimal value. In this case, 12,572.75 shares of a stock would be represented as “000001257275.” Doing financial computations while keeping the price or quantity as characters is, of course, possible, but it would be far more efficient if each were recast as a single-precision (FP32) number; then the computation could be processed rapidly. Here’s where a new blend of FPGA processing, to rapidly move around character data, and DSP computing, for handling financial calculations using single-precision math, will shine. Furthermore, DSP engines are an ideal platform for executing trained AI-based algorithms that will drive financial trading in the future.
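To make that character-to-float conversion concrete, here is a small illustrative Python sketch that parses the two twelve-character fields quoted above into floating-point values; the field layouts come straight from the examples in this paragraph, and real exchange protocols vary:

```python
def parse_price(field: str) -> float:
    """Twelve characters: six whole-number digits followed by six decimal digits."""
    return int(field[:6]) + int(field[6:]) / 1_000_000

def parse_quantity(field: str) -> float:
    """Twelve characters: ten whole-number digits followed by two decimal digits."""
    return int(field[:10]) + int(field[10:]) / 100

print(parse_price("000012340000"))     # 12.34
print(parse_quantity("000001257275"))  # 12572.75
```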

Someday soon, we’ll see trading platforms that execute entirely on a single high-performance chip. This chip will contain a blend of large blocks of adaptable FPGA logic (we’re talking millions of logic tables), along with thousands of DSP engines and dozens of high-performance network connections. This will enable intelligent trading decisions to be made and orders generated in tens of billionths of a second!

SmartNICs and SmartSSDs, the Future of Smart Acceleration

Block Diagram for Xilinx’s U25 SmartNIC

For the past three years, I’ve been writing about SmartNICs. One of my most popular blog posts is “What is a SmartNIC” from July 2017, which has been read over 6,000 times. This year, for the second time, I’ve presented at the Storage Developer Conference (SDC). The title for this blog post was also the title of my breakout session video, which ran for 50 minutes, and went live earlier this week. Here is the abstract for that session:

Since the advent of the Smart Phone over a decade ago, we’ve seen several new “Smart” technologies, but few have had a significant impact on the data center until now. SmartNICs and SmartSSDs will change the landscape of the data center, but what comes next? This talk will summarize the state of the SmartNIC market by classifying and discussing the technologies behind the leading products in the space. Then it will dive into the emerging technology of SmartSSDs and how they will change the face of storage and solutions. Finally, we’ll dive headfirst into the impact of PCIe 5 and Compute Express Link (CXL) on the future of Smart Acceleration on solution delivery.

Scott Schweitzer, The Technology Evangelist, Xilinx, Sept 2020

In that talk, which was seen by over 100 people on YouTube in just the first 24 hours (I’m told this doesn’t include conference attendees), I shared much of what I’ve learned over the past few months while producing the following new items on SmartNICs:

And there’s more to come…

SmartNICs vs. DPUs, Who Wins?

Last week I hosted an IEEE Hot Interconnects Panel with the above title. We were lucky enough to secure some time from the following luminaries, and it made for an excellent panel:

Clicking on the image below should take you to the 90-minute YouTube video of our panel discussion. For those just interested in the highlights, read on for some of the interesting facts pulled from our discussion.

IEEE Hot Interconnects Panel: “SmartNICs vs. DPUs, Who Wins?”

Here are some key points that contain significant value from the above panel discussion:

  1. SmartNICs provide a second computing domain inside the server that could be used for security, orchestration, and control plane tasks. While some refer to this as an air-gapped domain, it isn’t, but it is far more secure than running inside the same x86 system domain. This can be used to securely enable bare-metal as a service. — Michael Kagan
  2. Several vendors are actively collaborating on a Portable NIC Architecture (PNA) designed to execute P4 code. When available, it would then be possible to deliver containers with P4 code that could run on any NIC that supported this PNA model. — Vipin Jain
  3. The control plane needs to execute in the NIC for two reasons: first, to offload the host CPU from what is quickly becoming a 30% overhead for processing network traffic, and second, to improve the determinism of the applications running on the server. — Vipin Jain
  4. App stores are inevitable; the question is when. While some think it could be years, others believe it will happen within a year. Xilinx has partnered with a company that already has an app store for FPGA accelerators, so the leap to SmartNICs shouldn’t be that challenging. — Gordon Brebner
  5. The ISA is unimportant; it’s the micro-architecture that matters. Fungible selected MIPS-64 because of its support for simultaneous multi-threaded execution with fine-grained context switching. — Pradeep Sindhu. Others feel that the ecosystem of tools and wide access to developers matter most, which is why they selected ARM.
  6. It should be noted that normally the ARM cores are NOT in the data plane.

The first 18 minutes are introductions and marketing messages. While these are educational, they are also somewhat canned. The purpose of the panel discussion was to ask questions the panelists hadn’t seen in advance so we could draw out honest perspectives and feedback from their years of experience.

IMHO, here are some of the interesting comments, with who made them and where to find them:

18:50 Michael – The SmartNIC is a different computational domain, a computer in front of a computer, and ideal for security. It can supervise or oversee all system I/O; the key thing is that it is a real computer.

23:00 Gordon – Offloading the host CPU to the SmartNIC and enabling programmability of the device is critically important. We’ll also see functions and attributes of switches being merged into these SmartNICs.

24:50 Andy – Not only data plane offload, but control plane offload from the host is also critically important. Also, hardware, in the form of on-chip logic, should be applied to data plane offload whenever possible so that ARM cores are NOT placed in the data plane.

26:00 Andy – Dropped the three-letter string that makes most hardware providers cringe: SDK. He stressed the importance of providing one. It should be noted that Broadcom, as far as I know, appears to be the only SmartNIC OEM that provides a customer-facing SmartNIC SDK at this point.

26:50 Vipin – A cloud-based device that is autonomous from the system and remotely manageable. It has its own brain, and it truly runs independently of the host CPU.

29:33 Pradeep – There is no golden rule, or rule of thumb like the 1Gb/sec/core figure AMD has cited. It’s important to determine what computations should be done in the DPU; multiplexing and stateful applications are ideal. General-purpose CPUs are made for processing single-threaded applications very fast and are horrible at multiplexing.

33:37 Andy – 1Gb/core is really low; I’d not be comfortable with that. I would consider DPDK or XDP, and they would blow that metric away. People shouldn’t settle for this metric.

35:24 Michael – The network needs to take care of the network on its own, so zero cores for an infinite number of Gigabits.

36:45 Gordon – The SmartNIC is a kind of filtering device, where sophisticated functions, like IPS, can be offloaded into the NIC.

40:57 Andy – The Trueflow logic delivers a 4-5X improvement in packet processing. There are a very limited number of people really concerned with hitting line-rate packets per second at these speeds. In the data center these PPS requirements are not realistic.

42:25 Michael – I support what Andy said, these packet rates are not realistic in the data center.

44:20 Pradeep – We’re having this discussion because general-purpose CPUs can no longer keep up. This is not black and white, but a continuum: where does general-purpose processing end and a SmartNIC pick up? gRPC, as an example, needs to be offloaded. The correct interface is not TCP or RDMA; both are too low level. gRPC is the modern level for this communication interface. We need architectural innovation because scale-out is here to stay!

46:00 Gordon – One thing about being FPGA based is that we can support tons of I/O. With FPGAs we don’t think in terms of cores; we look at I/O volumes. Several years ago we first started looking at 100GbE, then figured out how to do that and extended it to 400GbE. We can see the current approach scaling well into the Terabit range. While we could likely provide Terabit-range performance today, it would be far too costly; it’s a price-point issue, and nobody would buy it.

48:35 Michael – CPUs don’t manage data efficiently. We have dedicated hardware engines and TCAM, along with caches to service these engines; that’s the way it works.

49:45 Pradeep – The person asking the question perhaps meant control flow and not flow control; while they sound the same, they mean different things. Control flow is what a CPU does; flow control is what networking does. A DPU or SmartNIC needs to do both well to be successful. It appears, and I could be wrong, that Pradeep is using pipeline to refer to consecutive stages of execution on a single macro resource like a DPU, and chain to refer to a collection of pipelines that provide a complete solution.

54:00 Vipin – Sticking with fixed-function execution, line rate is possible. We need to move away from focusing on processing TCP packets and shift focus to messages with a run-to-completion model. It is a general-purpose program running in the data path.

57:20 Vipin – When it came to selecting our computational architecture it was all about ecosystem, and widely available resources and tooling. We [Pensando] went with ARM.

58:20 Pradeep – The ISA is an utter detail; it’s the macro-architecture that matters, not the micro instruction architecture. We chose MIPS because of its implementation, a simultaneous multi-threaded implementation with far and away better fine-grained context switching, much better than anything else out there. There is also the economic price/performance to be considered.

1:00:12 Michael – I agree with Vipin; it’s a matter of ecosystem, we need to provide a platform for people to develop. We’re not putting ARMs on the data path, so this performance consideration Pradeep has mentioned is not relevant. The key is providing an ecosystem that attracts as many developers as possible, and making their lives easier to produce great value on the device.

1:01:08 Andy – I agree 100%, that’s why we selected ARM; ecosystem drove our choice. With ARM there are enough Linux distributions, and you could be running containers on your NIC. The transition to ARM is trivial.

1:02:30 Gordon – Xilinx mixes ARM cores with programmable FPGA logic, and hard IP cores for things like encryption.

1:03:49 Pradeep – The real problem is the data path, but clearly ARM cores are not in the data path, so they are doing control plane functions. Everyone says they are using ARM cores because of the rich ecosystem, but I’d argue that x86 has a richer ecosystem. If that’s the case, then why NOT keep the control plane in the host? So why does the control plane need to be embedded inside the chip?

1:04:45 Vipin – The data path is NOT in ARM. We want it on a single die; we don’t want it hopping across many wires and killing performance. The kind of integration I can do by subsuming the ARM cores into my die is tremendous. That’s why it cannot be on Intel. [Once you go off die, performance suffers, so what I believe Vipin means is that he can configure on the die whatever collection of ARM cores and hard logic he wants, and wire it together however he sees fit to meet the needs of their customers. He can’t license x86 cores and integrate them on the same die as he can with ARM cores.] Plus, if he did throw an x86 chip on the card, it would blow his power budget [PCIe x16 slot cards are limited to 75W].

1:06:30 Michael – We don’t have as tight an integration with the data path and ARMs as Pensando. If you want to segregate computing domains between the application tier and the infrastructure tier, you need another computer, and putting an x86 on a NIC just isn’t practical.

1:07:10 Andy – The air-gap, bare-metal as a service, use case is a very popular one. Moving control plane functions off the x86 to the NIC, frees up x86 cores and enables a more deterministic environment for my applications.

1:08:50 Gordon – Having that programmable logic alongside the ARM cores gives you both control plane offload and the ability to dynamically modify the data plane locally.

1:10:00 Michael – We are all for users programming the NIC; we are providing an SDK and working with third parties to host their applications and services on our NICs.

1:10:15 Andy – One of the best things we do is outreach, where we provide NICs to university developers; they disappear for a few months, then return with completed applications or new use cases. Broadcom doesn’t want to tightly control how people use their devices; it isn’t open if it is limited by what’s available on the platform.

1:13:20 Vipin – Users should be allowed to own and define their own SDK to develop on the platform.

1:14:20 Pradeep – We provide programming stacks [libraries?] that are available to users through RestAPIs.

1:15:38 Gordon – We took an early lead in helping define the P4 language for programming network devices, which Barefoot Networks adopted for its switch chips, and we’ve embraced it since very early on. We actually have a P4-to-Verilog compiler so you can turn your P4 code into logic. The main SmartNIC functions inside Xilinx are written in P4. Then there are plug-ins where others can add their own P4 functions into the pipeline.

1:17:35 Michael – Yes, an app-store for our NIC, certainly. It’s a matter of how it is organized. For me it is somewhere users can go where they can safely download containerized applications or services which can then run on the SmartNIC.

1:18:20 Vipin – The App Store is a little way out there, but it is a good idea. We are working in the P4 community towards standards. He mentions PNA, the Portable NIC Architecture, as an abstraction. [OMG, this is huge, and I wish I wasn’t juggling the balls trying to keep the panel moving, as this would have been awesome to dig into. A PNA could enable containerized P4 applications that could potentially run across multiple vendors’ SmartNICs.] He also mentioned that you will need NIC-based applications and a fabric with infrastructure applications so that NICs on opposite sides of a fabric can be coordinated.

1:21:30 Pradeep – An App Store at this point may be premature. In the long term something like an App Store will happen.

1:22:25 Michael – Things are moving much faster these days; maybe just another year for SmartNICs and an App Store.

1:23:45 Gordon – We’ve been working with Pensando and others on the PNA concept with P4 for some time.

1:28:40 Vipin, ..more coming as I listen again on Wednesday.

For those curious, the final vote was three for DPU and two for SmartNIC, but in the end the customer is the real winner.

Banned from the Internet

First Ever Picture of a Black Hole

After nearly seventeen years with IBM, in July of 2000, I left for a startup called Telleo, founded by four IBM Researchers I knew and trusted. From 1983 through April 1994, I worked at IBM Research in NY and often dealt with colleagues at the Almaden Research Center in Silicon Valley. When they asked me to join, there was no interview; I had already impressed all four of them years earlier. This was in May of 2000. In March of 2001, the implosion of Telleo was evident. Although I’d not been laid off, I voluntarily quit just before Telleo stopped paying on their IBM lease, which I’d negotiated. The DotCom bubble burst in late 2000, so by early 2001, you were toast if you weren’t running on revenue. Now, if you didn’t live in Silicon Valley during 2001, imagine a large mining town where the mine had closed; this was close to what it was like, just on a much grander scale. Highway 101 had gone from packed during rush hour to what it typically looked like during the weekend. Venture Capitalists drew the purse strings closed, and if you weren’t running on revenue, you were out of business. Most dot-com startups bled red monthly and eventually expired.

Now imagine being an unemployed technology executive in the epicenter of the worst technology employment disaster in history, up until that point, with a wife who volunteered and two young kids. I was pretty motivated to find gainful employment. For the past few years, a friend of mine had run a small Internet Service Provider and had allowed me to host my Linux server there in return for some occasional consulting.

I’d set Nessus up on that server, along with several other tools, so it could be used to ethically hack clients’ Internet servers, only by request, of course. One day when I was feeling particularly desperate, I wrote a small Perl script that sent a simple cover letter to jobs@X.com, where “X” was a simple string starting with “aa” and eventually ending at “zzzzzzzz”. It would wait a few seconds between each email, and since these were sent to jobs@ addresses, I figured it was an appropriate email blast. Remember, this was 2001, before SPAM was a widely used term. I thought, “That’s what the ‘jobs’ account is for anyway, right?” My email was very polite, requested a position, and briefly highlighted my career.

Well, somewhere around 4,000 emails later, I got shut down, and my Internet domain, ScottSchweitzer.com, was Black Holed. For those not familiar with the Internet version of this term, it essentially means no email from your domain even enters the Internet. If your ISP is a friend and he fixes it for you, he runs the risk of getting sucked in, and all the domains he hosts get sucked into the void as well; death for an ISP. Fortunately, my friend who ran the ISP was a life-long IBMer, and he had networking connections at some of the highest levels of the Internet, so the ban stopped with my domain.

Cleaning this up required some emails and phone calls to fix the problem from the top down. It took two weeks and a fair amount of explaining to get my domain back online to the point where I could once again send out emails. Fortunately, I always have several active email accounts and domains. Also, this work wasn’t in vain, as I received a few consulting gigs as a result of the email blast. So now you know someone who was banned from the Internet!

Google, Memcache, and How Solarflare May Have Come Out on Top

This post was originally written in January of 2015, but due to a take-down letter I received a week later, this story has remained unpublished for the last seven years.

January 2015 Take-Down

Memcached was released in May of 2003 by Danga Interactive. This free, open-source software layer provides a general-purpose distributed in-memory cache that can be used by both web & app servers. Five years later, Google released version 1.1.0 of its App Engine, which also included its version of an in-memory cache called Memcache. This capability to store objects in a huge pool of memory spread across a large distributed fabric of servers is integral to the performance inherent in many of Google’s products. Google showed in a presentation earlier this year on Memcache that a query to a typical database requires 60-100 milliseconds, while a similar Memcache query only needs 3-8 milliseconds, a 20X improvement. It’s no wonder Google is a big fan of Memcache. This is why, in March of 2013, Google acquired both technology and people from a small networking company called Myricom, which had cracked the code to accelerate the network path to Memcache.

To better understand what Google acquired, we need to roll back to 2009, when Myricom ported its MX communications stack to Ethernet. MX over Ethernet (MXoE) was originally crafted for the High-Performance Computing (HPC) market to create a very limited, but extremely low latency, stack for their then two-year-old 10GbE NICs. MXoE was reformulated using UDP instead of MX, and this new low-latency driver was named DBL. It was then engineered to service the High-Frequency Traders (HFT) on Wall Street. Throughout 2010, Myricom evolved this stack by adding limited TCP functionality. At that time, half-round-trip performance (one send plus one receive) over 10GbE was averaging 10-15 microseconds; DBL did it in 4 microseconds. Today (2015), by comparison, Solarflare, the leader in ultra-low-latency network adapters, does this in well under 2 microseconds. So in 2012, Google was searching for a method to apply ultra-low-latency networking tricks to a new technology called Memcache. In the fall of 2012, Google invited several Myricom engineers to discuss how DBL might be altered to service Memcache directly. They were also interested in knowing whether Myricom’s new silicon, due out in early 2013, might be a good fit for this application. By March of 2013, Google had tendered an offer to Myricom to acquire their latest 10/40GbE chip, which was about to go into production, along with 12 of their PhDs who handled both the hardware & software architecture. Little is publicly known about whether that chip or the underlying accelerated driver layer for Memcache ever made it into production at Google.

Fast forward to today (2015): earlier this week, Solarflare released a new whitepaper highlighting how they’ve accelerated the publicly available free open-source layer called Memcached using their ultra-low-latency driver layer called OpenOnload. In this whitepaper, Solarflare demonstrated performance gains of 2-3 times that of a similar Intel 10GbE adapter. Imagine Google’s Memcache farm being only 1/3 the size it is today; we’re talking serious performance gains here. For example, leveraging a 20-core server, the Intel dual 10GbE adapter supported 7.4 million multiple-get operations while Solarflare provided 21.9 million, nearly a 200% increase in the number of requests. If we look at mixed throughput (get/set in a ratio of 9:1), Intel delivered 6.3 million operations per second while Solarflare delivered 13.3 million, a 110% gain. That’s throughput; how about latency performance? Using all 20 cores and batches of 48 get requests, Solarflare clocked in at 2,000 microseconds and Intel at 6,000 microseconds. Across all latency tests, Solarflare reduced network latency by, on average, 69.4% (the lowest reduction was 50%, and the highest 85%). Here is a link to the 10-page Solarflare whitepaper with all the details.
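For readers who haven’t used memcached, the “multiple get” operation benchmarked above is an ordinary part of the protocol. Here is a minimal sketch using the pymemcache Python client, assuming a memcached server is listening on localhost:11211:

```python
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

# Store a few small objects, then fetch them back individually and in a batch.
client.set("user:1", "alice")
client.set_many({"user:2": "bob", "user:3": "carol"})

print(client.get("user:1"))                             # b'alice'
print(client.get_many(["user:1", "user:2", "user:3"]))  # one round trip, many keys
```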

While Google was busy acquiring technology & staff to improve their own Memcache performance, Solarflare delivered it for their customers and documented the performance gains.

Half a Billion Req/Sec!

Can your In-Memory Key-Value Store handle a half-billion requests per second?

Many applications from biological to financial and Web2.0 utilize in-memory databases because of their cutting-edge performance, often delivering several orders of magnitude faster response time than traditional relational databases. When these in-memory databases are moved to their own machines in a multi-tier application environment, they often can serve 10 million requests per second, and that’s turning all the dials to 11 on a high-end dual-processor server. Much of this is due to how applications communicate with the kernel and the network.

Last year, my team used one of these databases, Redis; we bypassed the kernel, connected a 100Gbps network, and took that 10 million requests per second to almost 50 million. Earlier this year, we began working with Algo-Logic, Dell, and CC Integration to blow way past that 50 million mark. At RedisConf2020 in May, Algo-Logic announced a 1U Dell server they’ve customized that can service nearly a half-billion requests per second. To process these requests, we’re spreading the load across two AMD EPYC CPUs and three Xilinx FPGAs. All requests are serviced directly from local memory using an in-memory key-value store system. For those requests serviced by the FPGAs, the response time is measured in billionths of a second. Perhaps we should explain how Algo-Logic got here, and why this number is significant.

Some time ago, a new form of database came back into everyday use, classified as NoSQL because they don’t use Structured Query Language and are non-relational. These databases rely on clever algorithmic tricks to rapidly store and retrieve information in memory; this is very different from how traditional relational databases function. These NoSQL systems are sometimes referred to as key-value stores. With these systems, you pass in a key, and a value is returned. For example, pass in “12345,” and the value “2Z67890” might be returned. In this case, the key could be an order number, and the value returned is the tracking number or status, but the point is you made a simple request and got back a simple answer, perhaps in a few billionths of a second. What Algo-Logic has done is write an application for the Xilinx Alveo U50 that turns the 40 Gbps Ethernet port on this card into four smoking-fast key-value stores, each with access to the card’s 8GB of High Bandwidth Memory (HBM). Each Alveo U50 card with Algo-Logic’s KVS can service 150 million requests per second. Here is a high-level architectural diagram showing all the various components:

Architecture for 490M Requests/Second into a 1U Server

There are five production network ports on the back of the server, three 40Gbps and two 25Gbps. Each of the Xilinx Alveo U50 cards has a single 40Gbps port, and the dual 25Gbps ports are on an OCP-3 form factor card called the Xilinx XtremeScale X2562 which carries requests into the AMD EPYC CPU complex. Algo-Logic’s code running in each of the Xilinx Alveo cards breaks the 40Gbps channel into four 10Gbps channels and processes requests on each individually. This enables Algo-Logic to make the best use possible of the FPGA resources available to them.

Furthermore, to overcome network overhead issues, Algo-Logic has packed 44 get requests into a single 1408-byte packet. For those familiar with Redis, this is similar to an MGET, multiple get, request. Usually, a single 32-byte get request can easily fit into the smallest Ethernet payload, which is 40 bytes, but Ethernet then adds an additional 24 bytes of routing and a 12-byte frame gap. Using a single request per packet results in networking overhead consuming 58% of the available bandwidth. This is huge and can clearly impact the total requests-per-second rate. Packing 44 requests of 32 bytes each into a single packet means that the network overhead drops to roughly 3% of the total bandwidth, which means significantly greater requests-per-second rates.
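The back-of-the-envelope arithmetic behind those percentages, using the byte counts quoted above, looks roughly like this:

```python
# Rough sketch of the bandwidth-overhead arithmetic described above,
# using the per-packet byte counts quoted in this post.
REQUEST_SIZE = 32         # one get request, in bytes
MIN_PAYLOAD = 40          # smallest Ethernet payload cited above
FRAME_OVERHEAD = 24 + 12  # routing bytes plus the inter-frame gap

def overhead(requests_per_packet: int) -> float:
    useful = requests_per_packet * REQUEST_SIZE
    wire_bytes = max(useful, MIN_PAYLOAD) + FRAME_OVERHEAD
    return 1 - useful / wire_bytes

print(f"{overhead(1):.1%} overhead with 1 request per packet")     # ~58%
print(f"{overhead(44):.1%} overhead with 44 requests per packet")  # ~2.5%, i.e. roughly 3%
```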

What Algo-Logic has done here is extraordinary. They’ve found a way to tightly link the 8GB of High Bandwidth Memory on the Xilinx Alveo U50 to four independent key-value store instances that can service requests in well under a microsecond. To learn more, consider reaching out to John Hagerman at Algo-Logic Systems, Inc.

SmartNICs, the Next Wave in Server Acceleration

As system architects, we seriously contemplate and research the components to include in our next server deployment. First, we break the problem being solved into its essential parts; then, we size the components necessary to address each element. Is the problem compute, memory, or storage-intensive? How much of each element will be required to craft a solution today? How much of each will be needed in three years? As responsible architects, we have to design for the future, because what we purchase today, our team will still be responsible for three years from now. Accelerators complicate this issue because they can both dramatically breathe new life into existing deployed systems and significantly skew the balance when designing new solutions.

Today foundational accelerator technology comes in four flavors: Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Multi-Processor Systems on a Chip (MPSoCs), and most recently Smart Network Interface Cards (SmartNICs). In this market, GPUs are the 900-pound gorilla, but FPGAs have made serious market progress in the past few years with significant deployments in Amazon Web Services (AWS) and Microsoft Azure. MPSoCs, and now SmartNICs, blend many different computational components into a single chip package, often utilizing a mix of ARM cores, GPU cores, Artificial Intelligence (AI) engines, FPGA logic, Digital Signal Processors (DSPs), as well as memory and network controllers. For now, we’re going to skip MPSoCs and focus on SmartNICs.

SmartNICs place acceleration technology at the edge of the server, as close as possible to the network. When computational processing of network-intense workloads can be accomplished at the network edge, within a SmartNIC, it can often relieve the host CPU of many mundane networking tasks. Normal server processes require that the host CPU spend, on average, 30% of its time managing network traffic; this is jokingly referred to as the data center tax. Imagine how much more you could get out of a server if just that 30% were freed up, and what if more could be made available?

SmartNICs that leverage ARM cores and/or FPGA logic cells exist today from a growing list of companies like Broadcom, Mellanox, Netronome, and Xilinx. SmartNICs can be designed to fit into a Software-Defined Networking (SDN) architecture. They can accelerate tasks like Network Function Virtualization (NFV), Open vSwitch (OvS), or overlay network tunneling protocols like Virtual eXtensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). I know, networking alphabet soup, but the key here is that complex routing and packet encapsulation tasks can be handed off from the host CPU to a SmartNIC. In virtualized environments, significant amounts of host CPU cycles can be consumed by these tasks. While they are not necessarily computationally intensive, they can be volumetrically intense. With data center networks moving to 25GbE and 50GbE, it’s not uncommon for host CPUs to process millions of packets per second. This processing happens today in the kernel or hypervisor networking stack. With a SmartNIC, packet routing and encapsulation can be handled at the edge, dramatically limiting the impact on the host CPU.
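To get a feel for the encapsulation work being handed off, here is a small illustrative sketch built with Scapy (assuming a recent Scapy release, which ships a VXLAN layer); it wraps a tenant frame in a VXLAN overlay and shows how many extra bytes the overlay headers add to every single packet, work a SmartNIC can absorb instead of the host:

```python
from scapy.all import Ether, IP, UDP, TCP
from scapy.layers.vxlan import VXLAN

# The tenant's original (inner) frame.
inner = Ether() / IP(src="192.168.1.10", dst="192.168.1.20") / TCP(dport=80)

# The same frame wrapped in a VXLAN overlay (UDP port 4789), the kind of
# encapsulation a SmartNIC can add and strip on behalf of the host CPU.
outer = (
    Ether()
    / IP(src="10.0.0.1", dst="10.0.0.2")
    / UDP(dport=4789)
    / VXLAN(vni=42)
    / inner
)

print(len(inner), "bytes before encapsulation")
print(len(outer), "bytes after encapsulation")
```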

If all you were looking for from a SmartNIC is to offload the host CPU from having to do networking, thereby saving the data center networking tax of 30%, this might be enough to justify the expense. Most of the SmartNIC product offerings from the companies mentioned above run in the $2K to $4K price range. So suppose you’re considering a SmartNIC that costs $3K, and under load testing with the proper software you’ve found that it returns 30% of your host CPU cycles; what is the point at which the ROI makes sense? A simplistic approach would suggest that $3K divided by 30% yields a system cost of $10K. So if the cost of your server is north of $10K, then adding a $3K SmartNIC is a wise decision, but wait, there’s more.
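That break-even arithmetic is easy to generalize; here is a tiny sketch, using just the numbers quoted above:

```python
# If a SmartNIC frees up a given fraction of host CPU cycles, it pays for
# itself once that fraction of the server's cost exceeds the card's cost.
def breakeven_server_cost(nic_cost: float, cpu_fraction_freed: float) -> float:
    return nic_cost / cpu_fraction_freed

print(breakeven_server_cost(3_000, 0.30))  # 10000.0 -> servers above ~$10K justify the card
```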

SmartNICs can also handle many complex tasks like key-value stores, encryption and decryption (IPsec, MACsec, soon even SSL/TLS), next-generation firewalls, electronic trading, and much more. Frankly, the NIC industry is at an inflection point similar to when video cards evolved into GPUs to support the gaming and virtualization markets. While Sony coined the term GPU with the introduction of the PlayStation in 1994, it was NVIDIA five years later, in 1999, who popularized the GPU with the introduction of the GeForce 256. I doubt that in the mid-1990s, while NVIDIA was designing the NV10 chip, the heart of the GeForce 256, their engineers were also pondering how it might be used a decade later in high-performance computing (HPC) applications that had nothing to do with graphics rendering. Today we can look at all the ground covered by GPU and FPGA accelerators over the past two decades and quickly see a path forward for SmartNICs where they may even begin offloading the primary computational tasks of a server. It’s not inconceivable to envision a server with a half dozen SmartNICs all tasked with encoding video, acting as key-value stores or web caches, or even trading stocks on various exchanges. I can see a day soon when the importance of SmartNIC selection will eclipse server CPU selection when designing a new solution from the ground up.

Digital Currency: Money, Our Second Social Network

This is the first in a series designed to dispel the mystique of digital currency, think Bitcoin, but trust me we’ll go way beyond that. My goal is to explain in common language all you’ll need to know about digital currency, so you can confidently answer questions when you sit down with your first social network, your family.

As a species, our most significant evolutionary trait is our capability for building social networks. While social networks are found in many other high-order mammals, we really do take them to the next level. Let’s face it: for our first 36 months outside the womb, we’re pretty much a defenseless bag of water rolling around wherever we’re placed. We can’t effectively flee from even the most basic predator without assistance. From the moment we’re born, we establish strong bonds with our parents and siblings, who become our first social network, our family. In our formative years, this network fills all our basic needs. It is this first network that introduces us to the second network we join, often very early in life, and that is the network of money.

Initially, we learn how to spend our first social network’s money as we roll down the aisle of the market and point out the foods we like. Soon members of our first network are sharing their money with us in return for our time when we do a task or achieve a milestone. Many of you have possibly never viewed money as a social network; until recently, it hadn’t occurred to me either, but in fact, it is. On its face, money is nothing more than a worthless token designed exclusively to be exchanged. It is these exchanges that form a network of commerce. If you closely examine your tightest social bonds outside of your first network, they very likely were started or fueled by money. Outside of family, one of my closest friends exists because over a decade ago I exchanged money in return for joining another social network. While I haven’t been associated with that network for years, this friend still shows up whenever I need him most, and it’s no longer about money.

Terminology is critical to our understanding. Mentally, we often have trouble grasping something until we can assign it a name. For years you’ve carried money around in your pocket, but have you ever considered it as your fiat currency? Here in the US, our fiat currency is the dollar. It’s very possible you’ve taken it for granted so long you never even viewed it this way. A fiat currency is the one officially sanctioned and managed by the government, another network, under which you live. If you only need one currency, in my case the dollar, then the concept of fiat becomes moot, but what happens when you begin to use more than one?

Sitting here in North Carolina a month ago, for the first time in my life, I needed to spend some Bitcoin which I’d been gifted a few years earlier. Bitcoin is the most well-known digital currency, but it is NOT a fiat currency because it hasn’t been issued by a government. The vendor I wished to purchase a small digital currency mining rig from in China ONLY accepted Bitcoin, NOT my US dollars via credit card. At the time, Bitcoin was trading around $10,000 USD, so buying something for $250 USD meant I was spending a fraction of a Bitcoin. I’m only aware of a single fractional unit of a Bitcoin, and that’s called a Satoshi, which is equal to one hundred-millionth of a Bitcoin. So, I shelled out roughly 2.5M Satoshi for my rig and anxiously awaited its arrival. We’ll dive more into Bitcoin and Satoshi in future posts, but I thought it a prudent example where the fiat currency wasn’t accepted. Next year we’ll have Libra, Facebook’s digital currency, and that’s already giving those who manage our fiat currencies serious concerns.
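As a quick sanity check on that purchase, converting the USD price into bitcoin and satoshi at the quoted exchange rate looks like this:

```python
SATOSHI_PER_BTC = 100_000_000  # one satoshi is one hundred-millionth of a bitcoin

price_usd = 250        # cost of the mining rig
btc_usd_rate = 10_000  # bitcoin price at the time

btc_spent = price_usd / btc_usd_rate         # 0.025 BTC
satoshi_spent = btc_spent * SATOSHI_PER_BTC  # 2,500,000 satoshi
print(f"{btc_spent} BTC = {satoshi_spent:,.0f} satoshi")
```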

In computer science, there’s a concept called Metcalfe’s law, which states that the value of a network is proportional to the square of the number of nodes or members in that network. My extended family has roughly fifty members, so its potential value is fifty squared, or 2,500. My friend has nearly 100 in his extended family, so his family’s value is 10,000. Metcalfe’s law came along with computer networking, long after Alexander Graham Bell had invented the telephone, but Bell was aware that the value of his invention would only be truly realized once it had been widely adopted. Within 10 years of its invention, over 100,000 phones had been installed in the US. Bell died 46 years after his invention, knowing that his new network had changed the world.

Facebook is the largest social network our species has ever created. With over two billion active users, this means that one person in four uses this network monthly. Right now, the clear majority of those two billion people live their lives with their own fiat currency, and unless they travel, they rarely, if ever, deal with another. Next year Facebook will issue its own currency, Libra, and it will change everything. Now, it should be noted that Facebook isn’t a country, so the concept of it issuing currency has raised some serious concerns from those who do issue currency. Countries around the globe are taking Facebook’s Libra head-on because they know it represents a Pandora’s box of problems for their own monetary systems, and here’s the main reason why.

Looking at Facebook as a network, we could say its value is two squared, or four; for now, let’s drop all the billions, as they just make the numbers incomprehensibly large. The US has a population of 0.327 billion, so its value as a network (again without the billions) is 0.1, and China has a population of 1.386 billion, so the value of its network is 1.9. If we view these countries’ populations as networks, we see that Facebook’s value is twice that of the US and China combined. Extend that to currencies and you can see why all the fuss, and why you might need to understand digital currency.
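The comparison above reduces to a couple of lines of arithmetic; here is the same calculation, with populations expressed in billions so the squared values stay readable:

```python
# Metcalfe's-law comparison from the paragraph above (populations in billions).
populations = {"Facebook": 2.0, "US": 0.327, "China": 1.386}

values = {name: n ** 2 for name, n in populations.items()}
print(values)  # Facebook ~4.0, US ~0.11, China ~1.92

ratio = values["Facebook"] / (values["US"] + values["China"])
print(f"Facebook's network value is ~{ratio:.1f}x the US and China combined")  # ~2.0x
```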

Next, we’ll dismiss the intrinsic value argument; that’s where grandpa says Bitcoin is worthless because there’s nothing behind it. You can then counter that the US went off the gold standard decades ago, so why isn’t the dollar worthless? We’ll provide you with that side of the argument.

x86 Has Hit the Wall, and Now Come the Accelerators – Part 3

TV’s Original A-Team

Accelerators are like calling in a special forces team to address a serious competitive threat. By design, a special forces team, known as an “A Detachment” or “A-Team,” consists of two officers and ten sergeants, all of whom are cross-trained in five different skill areas: weapons, engineering, medical, communications, and operations intelligence. This enables the detachment to survive for months or even years in a hostile area without any operational support. Accelerators are the computational equivalent.

A well-designed accelerator has different blocks of silicon to address each of the four primary computational workloads we discussed in part two (a short sketch after the list illustrates each in miniature):

  • Scalar, working with integers and letters
  • Floating-point, the real numbers with decimal points
  • Vector, one-dimensional arrays of floating-point numbers
  • Artificial Intelligence (AI), vectors with low precision floating point mixed with integers
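Purely as an illustration of the four workload classes (run on a CPU with NumPy, not on an accelerator), each can be sketched in a line or two of Python:

```python
import numpy as np

scalar = ord("A") + 1                          # scalar: integer/character arithmetic
floating = 0.1 + 0.2                           # floating-point: real numbers with decimals
vector = np.arange(4, dtype=np.float32) * 2.0  # vector: one operation across an array
ai = (np.random.rand(2, 3).astype(np.float16)
      @ np.random.rand(3, 2).astype(np.float16))  # AI: low-precision matrix math

print(scalar, floating, vector, ai, sep="\n")
```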

If workload types are much like special forces skills, then what types of physical computational cores, optimized to address these specific tasks, can we leverage in an accelerator design?

For scalar problems, Intel’s x86 platform has led for decades, as far back as the early 1980s. Quietly over the last 25 years, the ARM architecture has evolved, and in the past five years, ARM has demonstrated everything necessary to be a serious data center player. Add to that ARM’s architecture licensing model, which has led to third parties developing their own instruction-set-compatible cores. Both of these factors have resulted in at least a dozen companies, from Apple to Samsung, developing their own ARM core designs. Today ARM cores can be found in everything from Nest Thermostats to Apple iPhones. Today the most popular architecture for workload acceleration is the ARMv8-A, specifically the Cortex-A72 design, which supports both 32-bit and 64-bit computing with one to four computational cores. Today the Broadcom Stingray, Mellanox BlueField, NXP Layerscape, and Xilinx Versal all use the ARM Cortex-A72.

When it comes to accelerating floating point, the current trend has been toward Graphics Processing Units (GPUs). While GPUs have been around for about a decade now, it wasn’t until the NVIDIA Tesla debuted that they were viewed as a real computational accelerator. GPUs are also suitable for the third workload model, vector processing; in essence, GPUs can kill two birds with one computational stone. Another solution that can accelerate certain types of floating-point operations is the digital signal processing (DSP) engine. DSPs are very good at real-world computational problems that have a high degree of multiply-accumulates and matrix operations. Here is where some accelerator boards are stronger than others. While the Broadcom Stingray has a cryptographic engine designed to handle single-pass hashing and encryption/decryption (both scalar tasks), it lacks any sort of added acceleration for floating-point math. Mellanox’s BlueField chip also doesn’t include any silicon specifically dedicated to floating point. What they do promote is the fact that BlueField provides GPUDirect so the processor can communicate directly with GPUs on another PCIe card. NXP only has ARM cores, so no additional floating-point support is provided. By contrast, Xilinx’s Versal architecture includes anywhere from 472 to 3,984 DSP engines, depending on the chip series and model.

Artificial Intelligence (AI) workloads leverage vector processing, but instead of high-precision floating point, they only require low-precision floating-point or integer numbers. Again, Broadcom, Mellanox, and NXP all fall short, as they don’t include any silicon to process these workloads directly. Mellanox, as mentioned earlier, does support GPUDirect for passing AI workloads to another PCIe board, but that’s a far cry from on-chip dedicated silicon. Xilinx’s Versal architecture includes anywhere from 128 to 400 AI engines for accelerating these workloads.

Finally, the most significant differentiator is the inclusion of FPGA logic, also known as adaptable engines. This is unique to Xilinx accelerator cards: the capability to take frequently called routines written in C/C++ and port them over to dedicated logic, which can improve the performance of a routine by at least 8X.

In the case of Xilinx’s new Versal architecture, the senior officer is an ARM Cortex-R5 for real-time workloads. The junior officer, and the one who does much of the work, is a quad-core ARM Cortex-A72 processor. The two ARM engines are primarily for control plane functions. Then Versal has AI cores, DSP engines, and adaptable engines (FPGA logic) to accelerate the volumetric workloads. When it comes to application acceleration in hardware, the Xilinx Versal is the A-Team!