SmartNICs vs. DPUs, Who Wins?

Last week I hosted an IEEE Hot Interconnects panel with the above title. We were lucky enough to secure time from a group of genuine luminaries, and it made for an excellent panel.

Clicking on the image below should take you to the 90-minute YouTube video of our panel discussion. For those interested only in the highlights, read on for some of the more interesting points pulled from our discussion.

IEEE Hot Interconnects Panel: “SmartNICs vs. DPUs, Who Wins?”

Here are some of the key takeaways from the panel discussion:

  1. SmartNICs provide a second computing domain inside the server that can be used for security, orchestration, and control plane tasks. While some refer to this as an air-gapped domain, it isn’t one, but it is far more secure than running those tasks inside the same x86 system domain. This can be used to securely enable bare-metal as a service. — Michael Kagan
  2. Several vendors are actively collaborating on a Portable NIC Architecture (PNA) designed to execute P4 code. When available, it would then be possible to deliver containers with P4 code that could run on any NIC that supported this PNA model. — Vipin Jain
  3. The control plane needs to execute in the NIC for two reasons: first, to offload the host CPU from what is quickly becoming a 30% overhead for processing network traffic, and second, to improve the determinism of the applications running on the server. — Vipin Jain
  4. App stores are inevitable; the question is when. While some think it could be years, others believe it will happen within a year. Xilinx has partnered with a company that already has one for FPGA accelerators, so the leap to SmartNICs shouldn’t be that challenging. — Gordon Brebner
  5. The ISA is unimportant; it’s the micro-architecture that matters. Fungible selected MIPS-64 because of its support for simultaneous multi-threaded execution with fine-grained context switching. — Pradeep Sindhu. Others feel that the ecosystem of tools and wide access to developers matter most, which is why they’ve selected ARM.
  6. It should be noted that normally the ARM cores are NOT in the data plane.

The first 18 minutes are introductions and positioning statements. While educational, they are also somewhat canned marketing messages. The purpose of the panel discussion was to ask questions the panelists hadn’t seen in advance, so we could draw out honest perspectives and feedback grounded in their years of experience.

IMHO, here are some of the interesting comments, with who made them and where to find them:

18:50 Michael – The SmartNIC is a different computational domain, a computer in front of a computer, and ideal for security. It can supervise or oversee all system I/O; the key thing is that it is a real computer.

23:00 Gordon – Offloading work from the host CPU to the SmartNIC and enabling programmability of the device is critically important. We’ll also see functions and attributes of switches being merged into these SmartNICs.

24:50 Andy – Not only data plane offload, but control plane offload from the host is also critically important. Also, hardware, in the form of on-chip logic, should be applied to data plane offload whenever possible so that ARM cores are NOT placed in the data plane.

26:00 Andy – Dropped the three-letter string that makes most hardware providers cringe when we hear it: SDK. He stressed the importance of providing one. It should be noted that Broadcom, at this point and as far as I know, appears to be the only SmartNIC OEM that provides a customer-facing SmartNIC SDK.

26:50 Vipin – A cloud-based device that is autonomous from the system and remotely manageable. It has its own brain, and it truly runs independently of the host CPU.

29:33 Pradeep – There is no golden rule, or rule of thumb like the 1Gb/sec/core figure AMD has cited. It’s important to determine which computations should be done in the DPU; multiplexing and stateful applications are ideal. General-purpose CPUs are made to process single-threaded applications very fast and are horrible at multiplexing.

33:37 Andy – 1Gb/core is really low; I’d not be comfortable with that. I would consider DPDK or XDP, and either would blow that metric away. People shouldn’t settle for this metric.

35:24 Michael – The network needs to take care of the network on its own, so zero cores for an infinite number of gigabits.

36:45 Gordon – The SmartNIC is a kind of filtering device, where sophisticated functions, like an IPS, can be offloaded into the NIC.

40:57 Andy – The Trueflow logic delivers a 4-5X improvement in packet processing. There are a very limited number of people really concerned with hitting line-rate packets per second at these speeds. In the data center these PPS requirements are not realistic.

42:25 Michael – I support what Andy said, these packet rates are not realistic in the data center.

44:20 Pradeep – We’re having this discussion because general-purpose CPUs can no longer keep up. This is not black and white but a continuum: where does general processing end and the SmartNIC pick up? gRPC, as an example, needs to be offloaded. The correct interface is not TCP or RDMA; both are too low level. gRPC is the modern level for this communication interface. We need architectural innovation because scale-out is here to stay!

46:00 Gordon – One thing about being FPGA based is that we can support tons of I/O. With FPGAs we don’t think in terms of cores; we look at I/O volumes. Several years ago we first started looking at 100GbE, figured out how to do that, and extended it to 400GbE. We can see the current approach scaling well into the Terabit range. While we could likely provide Terabit-range performance today, it would be far too costly; it’s a price-point issue, and nobody would buy it.

48:35 Michael – CPUs don’t manage data efficiently. We have dedicated hardware engines and TCAM, along with caches to service these engines; that’s the way it works.

49:45 Pradeep – The person asking the question perhaps meant control flow and not flow control; while they sound the same, they mean different things. Control flow is what a CPU does, flow control is what networking does. A DPU or SmartNIC needs to do both well to be successful. [It appears, and I could be wrong, that Pradeep uses “pipeline” to refer to consecutive stages of execution on a single macro resource like a DPU, and “chain” to mean a collection of pipelines that provide a complete solution.]

54:00 Vipin – If you stick with fixed-function execution, then line rate is possible. We need to move away from focusing on processing TCP packets and shift focus to messages with a run-to-completion model. It is a general-purpose program running in the data path.

57:20 Vipin – When it came to selecting our computational architecture it was all about ecosystem, and widely available resources and tooling. We [Pensando] went with ARM.

58:20 Pradeep – The ISA is an utter detail; it’s the macro-architecture that matters, not the micro instruction architecture. We chose MIPS because of the implementation, which is simultaneous multi-threaded with far and away better fine-grained context switching, much better than anything else out there. There is also the economic price/performance to be considered.

1:00:12 Michael – I agree with Vipin, it’s a matter of ecosystem; we need to provide a platform for people to develop on. We’re not putting ARM cores in the data path, so the performance consideration Pradeep mentioned is not relevant. The key is providing an ecosystem that attracts as many developers as possible and makes their lives easier, so they can produce great value on the device.

1:01:08 Andy – I agree 100%, that’s why we selected ARM; ecosystem drove our choice. With ARM there are enough Linux distributions, and you could be running containers on your NIC. The transition to ARM is trivial.

1:02:30 Gordon – Xilinx mixes ARM cores with programmable FPGA logic, and hard IP cores for things like encryption.

1:03:49 Pradeep – The real problem is the data path, but clearly ARM cores are not in the data path, so they are doing control plane functions. Everyone says they are using ARM cores because of the rich ecosystem, but I’d argue that x86 has a richer ecosystem. If that’s the case, then why NOT keep the control plane in the host? Why does the control plane need to be embedded inside the chip?

1:04:45 Vipin – The data path is NOT in ARM. We want it on a single die; we don’t want it hopping across many wires and killing performance. The kind of integration I can do by subsuming the ARM cores into my die is tremendous. That’s why it cannot be on Intel. [Once you go off die, performance suffers, so what I believe Vipin means is that he can configure on the die whatever collection of ARM cores and hard logic he wants, and wire it together however best suits the needs of their customers. He can’t license x86 cores and integrate them on the same die as he can with ARM cores.] Plus, if he did throw an x86 chip on the card, it would blow his power budget [PCIe x16 cards are limited to 75W from the slot].

1:06:30 Michael – We don’t have as tight an integration between the data path and the ARM cores as Pensando does. If you want to segregate computing domains between an application tier and an infrastructure tier, you need another computer, and putting an x86 on a NIC just isn’t practical.

1:07:10 Andy – The air-gapped, bare-metal-as-a-service use case is a very popular one. Moving control plane functions off the x86 to the NIC frees up x86 cores and enables a more deterministic environment for my applications.

1:08:50 Gordon – Having that programmable logic alongside the ARM cores gives you both control plane offload and the ability to dynamically modify the data plane locally.

1:10:00 Michael – We are all for users programming the NIC; we are providing an SDK and working with third parties to host their applications and services on our NICs.

1:10:15 Andy – One of the best things we do is outreach, where we provide NICs to university developers; they disappear for a few months, then return with completed applications or new use cases. Broadcom doesn’t want to tightly control how people use their devices; it isn’t open if it is limited to what’s available on the platform.

1:13:20 Vipin – Users should be allowed to own and define their own SDK to develop on the platform.

1:14:20 Pradeep – We provide programming stacks [libraries?] that are available to users through REST APIs.

1:15:38 Gordon – We took an early lead in helping define the P4 language for programming network devices. P4 later became associated with Barefoot Networks’ switch chips, but we’ve embraced it since very early on. We actually have a P4-to-Verilog compiler, so you can turn your P4 code into logic. The main SmartNIC functions inside Xilinx are written in P4, and there are plug-ins where others can add their own P4 functions into the pipeline.

1:17:35 Michael – Yes, an app store for our NIC, certainly. It’s a matter of how it is organized. For me it is somewhere users can go to safely download containerized applications or services, which can then run on the SmartNIC.

1:18:20 Vipin – The App Store is a little ways out, but it is a good idea. We are working in the P4 community towards standards. He mentions PNA, the Portable NIC Architecture, as an abstraction. [OMG, this is huge, and I wish I wasn’t juggling the balls trying to keep the panel moving, as this would have been awesome to dig into. A PNA could enable containerized P4 applications that could potentially run across multiple vendors’ SmartNICs.] He also mentioned that you will need NIC-based applications, and a fabric with infrastructure applications, so that NICs on opposite sides of a fabric can be coordinated.

1:21:30 Pradeep – An App Store at this point may be premature. In the long term something like an App Store will happen.

1:22:25 Michael – Things are moving much faster these days; maybe just another year for SmartNICs and an App Store.

1:23:45 Gordon – We’ve been working with Pensando and others on the PNA concept with P4 for some time.

1:28:40 Vipin – more coming as I listen again on Wednesday.

For those curious, the final vote was three for DPU and two for SmartNIC, but in the end the customer is the real winner.

Kobayashi Maru and LinkedIn’s SSI

Klingon Battle Cruisers

Fans of Star Trek immediately know the Kobayashi Maru as the no-win test given to all Starfleet officer candidates to see how they respond to a loss. Despite being one of LinkedIn’s first million members, I only recently found out that there is a score by which LinkedIn determines how effectively you use their platform. This score is out of 100 and is composed of four pillars, each worth 25 points. If you overachieve in any given pillar, you can’t earn more than 25 points; it’s a hard cap. Like the Kobayashi Maru, the only way to beat LinkedIn’s Social Selling Index (SSI) is to learn as much as you can about how it works, then hack it, or more accurately, “game the system.” Here is a link to your score. There are several articles out there that explain how the SSI is computed, some built on slides that LinkedIn supplied at some point, but here are the basics I’ve uncovered, and how you can “game the SSI.”

How LinkedIn computes the SSI is extremely logical. Someone can start fresh with the platform and leverage it to become a successful sales professional in very little time. As mentioned earlier, the SSI is computed from four 25-point pillars which, to some degree, build on each other:

  • Build your Brand 
  • Grow your Network 
  • Engage with your Network 
  • Develop Relationships with your Network
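To make the hard cap concrete, here is a tiny, purely illustrative Python sketch of how the rollup works: four pillars, each capped at 25, summed to a score out of 100. The pillar values are simply the starting numbers I quote in the sections below; this is not LinkedIn’s actual formula.

```python
# Purely illustrative: LinkedIn does not publish its SSI formula.
# Four pillars, each hard-capped at 25 points, summed to a score out of 100.
PILLAR_CAP = 25.0

def ssi_score(pillars):
    """Sum the pillar scores, capping each one at 25."""
    return sum(min(score, PILLAR_CAP) for score in pillars.values())

starting_scores = {
    "Build your Brand": 24.61,                 # my score when I found the SSI
    "Grow your Network": 15.25,
    "Engage with your Network": 14.35,
    "Develop Relationships": 22.80,
}

print(round(ssi_score(starting_scores), 2))    # 77.01
```

Those four starting values sum to about 77; after the relationship-pillar gains described below, the total landed at the 78 I mention later.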

The first pillar, “Building your Brand,” is almost entirely within your own control and can be mastered with a free membership. There are four elements to building your brand: complete your profile, include video in your profile, write articles, and get endorsements. The first three require only elbow grease, basic video skills, and some creative writing; most professionals should have some reasonable degree of competency with these skills, and if not, they can be quickly learned. Securing endorsements requires you to ask the people in your network who know you best to submit small fragments of text about your performance when you worked with them. If you want to be aggressive, you could write these for your former coworkers and offer them up to put in their own voice and submit on your behalf. Scoring 25 in this area is within reach of most folks; I scored 24.61 when I learned about the SSI.

To pull off a 25 in the second pillar, “Growing your Network,” requires a paid membership with LinkedIn and, for optimum success, a “Sales Navigator” membership at $80/month. If you’re a free member and you buy up to Sales Navigator, some documentation implies that this will give you an immediate 10-point boost in this category. Once you have a Sales Navigator membership, it then requires that you use the tool, “Lead Builder,” and connect with recommendations. The “free” aspects of this pillar are doing people searches and viewing those profiles, especially 3rd-degree folks and people totally outside your network. While I had a paid membership, it was not a Sales Navigator membership when I discovered the SSI, and when I bought up to Sales Navigator, my score in this pillar remained at 15.25. After going through the Sales Navigator training, my score did go up to 15.32, but clearly I need to make more effective use of Sales Navigator to pull my score up in this pillar. The expectation for those hitting 25 in this pillar is that you’ve used their tools to find leads and convert them into members of your network, and perhaps customers.

Engagement is the third pillar, and here LinkedIn uses the following metrics to determine your score: share posts WITH pictures, give and get likes, repost content from others, comment on and reshare posts from others, join at least 50 groups, and finally send InMails and get responses. InMails only come with a paid membership, so again you can’t achieve 25 in this pillar without a paid membership. In this section, I started at 14.35. I never send InMails, so that’s something that is going to change. Nor was I big on reposting content from others or resharing posts by others. I do like posts from others and get likes from others, so perhaps that’s a good contributing factor. I was already a member of 52 groups, and from what I’ve read, adding more above 50 doesn’t contribute to increasing your score.

Finally, the last pillar is Relationships. This score is composed of the number of connections you have and the degree to which you interact with those connections. It’s been said that a score of 25 in this group requires at least 5,000 connections; this is not true. If you carefully curate who you invite, you can get close to 25 with under 2,000 quality connections. If you’re a VP or higher, you get additional bonus points, and connections in your network that are VP or higher earn you more points than entry-level connections. The SSI is all about the value of the network you’ve built and can sell to. If your network is made up of decision-makers rather than contributors or influencers, then it’s more effective and hence more valuable. Here you get bonus points for connections with coworkers and for a high acceptance ratio on your connection requests. In other words, if you spam a bunch of people you have nothing in common with, you’re wasting your time. Those people will likely not accept your request, and if they do, LinkedIn will know you were spamming and that those who accepted were just being polite, not valuable network contacts. Here my score started at 22.8, and in just over 24 hours I was able to run it up to 24.05, a 1.25-point gain. It should be clear that I had 1,700 or so connections to start, and knowing everything above, I ran it up to 1,815 connections, and it paid off. I went through my company and offered to connect with anyone with whom I shared at least five connections. I also ground through people on LinkedIn with jobs near me geographically who shared five connections with me and invited them. The combination of these two activities yielded just over two hundred open connection requests, and very nearly half accepted within 24 hours.

After 24 hours, some rapid course corrections, and a few hours working my network while on a car ride on a Saturday, I’ve brought my score up 1.35 points. Now that you know what I know about the SSI, I wish you all the best. Several people who have written articles about the SSI are at or very close to 100. At 78, I’m still a rookie, but give me a few weeks.

SSI Score 79 – Sunday, June 28th, 2020

SSI Score 82 – Monday, June 29th, 2020 – Clearly what I learned above is working, five points in only a few days. Actually the score was 81.62, but LinkedIn rounds.

SSI Score 82 – Tuesday, June 30th, 2020 – Actually 81.77, only a minor gain from yesterday, as I throttled back to see if there was “momentum.” Below is my current screenshot from today; here you can see that I’ve maxed out “Build Relationships” at 25 and have nearly maxed “Establishing my Brand” at 24.78. Therefore my focus moving forward needs to be “Engage with Insight” and “Finding the Right People.” Engagement means utilizing all my InMails with the intent of getting back a reply of some kind. To improve on “Finding the Right People,” I need to leverage Sales Navigator to find leads to send InMails to; perhaps two birds with one stone.

SSI Score 84 – Sunday, July 5th, 2020 – So the gain was five points in a week, but for the most part I took Thursday through Sunday off for the US holiday and had to move my mom out of the FL Keys (I live in Raleigh, so we had to fly down and back to Miami). Thankfully, there was clearly some momentum going into the weekend.

Banned from the Internet

First Ever Picture of a Black Hole

After nearly seventeen years with IBM, in July of 2000, I left for a startup called Telleo, founded by four IBM researchers I knew and trusted. From 1983 through April 1994, I worked at IBM Research in NY and often dealt with colleagues at the Almaden Research Center in Silicon Valley. When they asked me to join, in May of 2000, there was no interview; I had already impressed all four of them years earlier. By March of 2001, the implosion of Telleo was evident. Although I’d not been laid off, I voluntarily quit just before Telleo stopped paying on their IBM lease, which I’d negotiated. The DotCom bubble burst in late 2000, so by early 2001 you were toast if you weren’t running on revenue. Now, if you didn’t live in Silicon Valley during 2001, imagine a large mining town where the mine had closed; this was close to what it was like, just on a much grander scale. Highway 101 had gone from packed during rush hour to what it typically looked like on a weekend. Venture capitalists drew the purse strings closed, and if you weren’t running on revenue, you were out of business. Most dot-com startups bled red monthly and eventually expired.

Now imagine being an unemployed technology executive in the epicenter of the worst technology employment disaster in history, up until that point, with a wife who volunteered and two young kids. I was pretty motivated to find gainful employment. For the past few years, a friend of mine had run a small Internet Service Provider and had allowed me to host my Linux server there in return for some occasional consulting.

I’d set Nessus up on that server, along with several other tools, so it could be used to ethically hack clients’ Internet servers, only by request, of course. One day when I was feeling particularly desperate, I wrote a small Perl script that sent a simple cover letter to jobs@X.com, where “X” was a simple string starting with “aa” and eventually ending at “zzzzzzzz.” It would wait a few seconds between each email, and since these went to jobs@x.com, I figured it was an appropriate email blast. Remember, this was 2001, before SPAM was a widely used term. I thought, “That’s what the ‘jobs’ account is for anyway, right?” My email was very polite, requested a position, and briefly highlighted my career.
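For the curious, here is a rough Python reconstruction of the enumeration the script performed; the original was Perl, and this sketch deliberately stops short of sending anything, it only walks the address space and prints a handful of candidates.

```python
# Illustrative only; the original was a small Perl script, and this version
# does not send email. It just enumerates jobs@aa.com ... jobs@zzzzzzzz.com.
import itertools
import string
import time

def candidate_names(max_len=8):
    """Yield 'aa' through 'zzzzzzzz' in order, shortest strings first."""
    for length in range(2, max_len + 1):
        for letters in itertools.product(string.ascii_lowercase, repeat=length):
            yield "".join(letters)

for name in itertools.islice(candidate_names(), 5):
    address = f"jobs@{name}.com"
    # The original composed a short, polite cover letter and mailed it here,
    # then waited a few seconds before moving on to the next address.
    print(address)
    time.sleep(1)
```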

Well, somewhere around 4,000 emails later, I got shut down, and my Internet domain, ScottSchweitzer.com, was black-holed. For those not familiar with the Internet version of this term, it essentially means no email from your domain even enters the Internet. If your ISP is a friend and he fixes it for you, he runs the risk of getting sucked in, and all the domains he hosts get sucked into the void as well, death for an ISP. Fortunately, my friend who ran the ISP was a life-long IBMer with networking connections at some of the highest levels of the Internet, so the ban stopped with my domain.

Cleaning this up required emails and phone calls to fix the problem from the top down. It took two weeks and a fair amount of explaining to get my domain back online to the point where I could once again send out emails. Fortunately, I always have several active email accounts and domains. Also, this work wasn’t in vain, as I received a few consulting gigs as a result of the email blast. So now you know someone who was banned from the Internet!

Analog Time in a Digital World

1880s Self-Winding Clock

Imagine buying a product today that your family would cherish and still be using in 2160. I’m not talking about a piece of quality furniture or artwork, but an analog clock with some parts that are in continuous motion. This past weekend I once again got my grandfather clock, pictured to the right, functioning. This is a pendulum clock initially made for and installed at the NY Stock Exchange, then moved into a law office, and later a private residence. I inherited it some forty years ago, before becoming a teen, and it has operated a few times since it was placed in my care. This timepiece was manufactured in the 1880s, and it is a self-winding, battery-powered unit. Batteries were a very new technology in 1880. This clock shipped with two wet cells that the new owner then had to set up. The instructions called for pouring powdered sulfuric acid from paper envelopes into each of two glass bottles, adding water, then stirring. The lids of the glass bottles contained the anode and cathode of each cell. The two cells were then wired in series to produce a three-volt battery.

When it was installed at the Exchange, around the time of Thomas Edison, it was modified with the addition of a red button on the side designed to synchronize the clock with the others on the Exchange. The button on this clock, and all others like it at the Exchange, would be pressed on the hour, prior to the opening of the market. It has since been rewired, so the button now triggers an out-of-cycle winding. During my childhood, this clock ran for a few years until it fell silent as a result of dead batteries. Over my adult life, it has run continuously several times, often only for a year or two at a stretch, until the batteries were depleted. The issue, more often than not, was simply access to replacement batteries. In the 1950s, the batteries this clock required, a pair of No. 6 dry cells, were available in most hardware stores. Many devices were designed to use the No. 6 cell, including some of the earliest automobiles; early in the 1900s it was a very popular power source.

We moved recently, and one of my wife’s conditions on hanging my grandfather clock in the living room was that it function. In 2005, after an earlier move from California to North Carolina, we hired a local clock repairman to restore this family timepiece. He cleaned out the old lubrication, replaced a few worn parts, hung the clock, and sold us his last pair of original No. 6 dry-cell batteries. For those not familiar with the No. 6, it was a 1.5-volt battery the size of a large can of beer, and its standout attribute was that it could provide high instantaneous current for a brief period of time. In the 1990s, No. 6 cells were banned in the US because they used mercury. The replacements offered had the same size and fit but couldn’t produce the required instantaneous current.

Last week, after some research and a little math, I realized that four dual D-cell battery boxes, connected via terminal strips to limit current loss, could produce about 30% more instantaneous current at three volts than the original pair of No. 6 cells. So I glued the boxes together to form a maintainable brick, added two five-terminal strips, one positive and one negative, then tested all the wiring and batteries. After rehanging the clock, leveling the case, installing the new battery box, and pressing the wind button, I raised the pendulum and let it go. The escapement rocked back and forth, enabling the second-hand gear to creep ahead one tooth at a time, but after ten minutes the clock fell silent once again.

Several more attempts, each lasting roughly ten minutes, and the internal switch eventually kicked in, and the batteries did their job. The clock wound automatically for the first time in well over a decade; I was elated. Alas, a few minutes later the clock came to rest once more. After some additional research into how the clock was losing energy, I came across a few suggestions. The hands were placed correctly, secured, and didn’t touch anything. I shoved my iPhone camera into the side of the case to get a view of the pendulum hanging on the escapement and found that it was hanging a bit askew. I then sprayed a small amount of synthetic lubricant on the escapement, crossed my fingers, and gave the pendulum another nudge. Fifteen minutes later, it coasted to a halt. After tinkering a few more times with the pendulum over the next few hours, and a bit more lubricant, the clock was eventually sustaining movement. That was Sunday; it’s now Tuesday night, and the clock is going strong and hasn’t stopped since. As of this morning, it was losing about three minutes every twenty-four hours.

Now the chase is on to improve the accuracy. This is done by changing the pendulum length via a nut below the pendulum’s bob. If you loosen the nut, it lowers the bob, making the pendulum longer, and the clock runs slower. Tighten the nut, and the pendulum is shorter, and the clock runs faster. Perhaps a successive series of turns over the next few days will get this 140-year-old device down to a few seconds a day!
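For the curious, the nut-and-bob adjustment follows directly from pendulum physics. Here is a rough Python sketch, assuming the ideal simple-pendulum relation T = 2π√(L/g), of how much the pendulum length needs to change for the daily losses mentioned in this post; a real clock pendulum only approximates the ideal, so treat it as a guide.

```python
# Rough guide only: assumes the ideal simple-pendulum relation T = 2*pi*sqrt(L/g),
# so the period scales with the square root of the length.
SECONDS_PER_DAY = 86_400

def fractional_shortening(loss_seconds_per_day):
    """Fraction by which to shorten the pendulum to correct a daily loss.

    Since T ~ sqrt(L), dT/T = dL/(2L), which gives dL/L = 2 * dT/T.
    """
    return 2 * loss_seconds_per_day / SECONDS_PER_DAY

for loss in (180, 61, 25):   # three minutes, 61 seconds, 25 seconds per day
    print(f"losing {loss:3d} s/day -> shorten the pendulum by {fractional_shortening(loss):.3%}")
```

In other words, the whole journey from three minutes a day down to 25 seconds a day amounts to shortening the pendulum by well under half a percent, which is why the adjustment nut only needs a few gentle turns.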

Update Friday, June 12, after some very gentle tightening of the pendulum, thereby shortening its length and speeding up its swing, we’re down from three minutes to 61 seconds a day. There may be a few threads left, so hopefully, we can get this loss down to less than 30 seconds a day.

Update Monday, July 6, the clock is running strong on the original eight D-cells from last month, I’m expecting about a year on each set, and it still sounds strong. Some additional tweaks and we’re now only losing 25 seconds a day, or about one second every hour, so I just need to add a minute every two days or so. Not bad.

Update Sunday, June 27, 2021, the clock has run non-stop for the past year and is still going strong on the initial set of eight Duracell D-cells. The clock is still winding regularly every hour, and it is losing less than 15 seconds/day, so I adjust it every week or so, setting it two minutes ahead. After inspecting all the cells for leakage, they look fine, and testing each one I’ve found they are all at 1.44V, plus or minus 0.001V, which is impressive. The four unused cells, let’s call them control cells, have all maintained a voltage of 1.61V. From my reading, it appears that until the voltage drops below 1.3V I should still get good performance out of them, so they will remain in service for another year. What an engineering marvel.

Google, Memcache, and How Solarflare May Have Come Out on Top

This post was originally written in January of 2015, but due to a take-down letter I received a week later, this story has remained unpublished for the last seven years.

January 2015 Take-Down

Memcached was released in May of 2003 by Danga Interactive. This free, open-source software layer provides a general-purpose distributed in-memory cache that can be used by both web and app servers. Five years later, Google released version 1.1.0 of its App Engine, which included its own version of an in-memory cache called Memcache. This capability to store objects in a huge pool of memory spread across a large distributed fabric of servers is integral to the performance of many of Google’s products. Google showed in a presentation earlier this year on Memcache that a query to a typical database requires 60-100 milliseconds, while a similar Memcache query needs only 3-8 milliseconds, roughly a 20X improvement. It’s no wonder Google is a big fan of Memcache. This is why, in March of 2013, Google acquired both technology and people from a small networking company called Myricom, which had cracked the code on accelerating the network path to Memcache.
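As a quick refresher on what that cache layer looks like to an application, here is a minimal sketch of the cache-aside pattern using the pymemcache client library; it assumes a memcached daemon on localhost:11211, and the slow database lookup is just a placeholder for the 60-100 millisecond query mentioned above.

```python
# Minimal cache-aside sketch; assumes memcached is running on localhost:11211.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def slow_database_lookup(user_id):
    # Placeholder for the 60-100 ms query against the backing database.
    return f"profile-for-{user_id}"

def get_profile(user_id):
    key = f"profile:{user_id}"
    cached = cache.get(key)                 # typically single-digit milliseconds
    if cached is not None:
        return cached.decode()
    value = slow_database_lookup(user_id)   # the slow path, taken on a miss
    cache.set(key, value, expire=300)       # keep it warm for five minutes
    return value

print(get_profile("12345"))
```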

To better understand what Google acquired, we need to roll back to 2009, when Myricom ported its MX communications stack to Ethernet. MX over Ethernet (MXoE) was originally crafted for the High-Performance Computing (HPC) market to create a very limited, but extremely low-latency, stack for their then two-year-old 10GbE NICs. MXoE was reformulated using UDP instead of MX, and this new low-latency driver was named DBL. It was then engineered to serve the High-Frequency Traders (HFT) on Wall Street. Throughout 2010 Myricom evolved this stack by adding limited TCP functionality. At that time, half-round-trip performance (one send plus one receive) over 10GbE averaged 10-15 microseconds; DBL did it in 4 microseconds. Today (2015), by comparison, Solarflare, the leader in ultra-low latency network adapters, does this in well under 2 microseconds. So in 2012, Google was searching for a method to apply ultra-low latency networking tricks to a new technology called Memcache. In the fall of 2012, Google invited several Myricom engineers to discuss how DBL might be altered to service Memcache directly. They were also interested in knowing whether Myricom’s new silicon, due out in early 2013, might be a good fit for this application. By March of 2013, Google had tendered an offer to Myricom to acquire their latest 10/40GbE chip, which was about to go into production, along with 12 of their PhDs who handled both the hardware and software architecture. Little is publicly known about whether that chip or the underlying accelerated driver layer for Memcache ever made it into production at Google.

Fast forward to today (2015): earlier this week, Solarflare released a new whitepaper highlighting how they’ve accelerated the publicly available, free, open-source layer called Memcached using their ultra-low latency driver layer, OpenOnload. In this whitepaper, Solarflare demonstrated performance gains of 2-3 times that of a similar Intel 10GbE adapter. Imagine Google’s Memcache farm being only 1/3 the size it is today; we’re talking serious performance gains here. For example, leveraging a 20-core server, the Intel dual 10GbE adapter supported 7.4 million multiple-get operations while Solarflare provided 21.9 million, nearly a 200% increase in the number of requests. If we look at mixed throughput (get/set in a ratio of 9:1), Intel delivered 6.3 million operations per second while Solarflare delivered 13.3 million, a 110% gain. That’s throughput; how about latency? Using all 20 cores and batches of 48 get requests, Solarflare clocked in at 2,000 microseconds and Intel at 6,000 microseconds. Across all latency tests, Solarflare reduced network latency by, on average, 69.4% (the lowest reduction was 50%, and the highest 85%). Here is a link to the 10-page Solarflare whitepaper with all the details.

While Google was busy acquiring technology & staff to improve their own Memcache performance, Solarflare delivered it for their customers and documented the performance gains.

Half a Billion Req/Sec!

Can your In-Memory Key-Value Store handle a half-billion requests per second?

Many applications from biological to financial and Web2.0 utilize in-memory databases because of their cutting-edge performance, often delivering several orders of magnitude faster response time than traditional relational databases. When these in-memory databases are moved to their own machines in a multi-tier application environment, they often can serve 10 million requests per second, and that’s turning all the dials to 11 on a high-end dual-processor server. Much of this is due to how applications communicate with the kernel and the network.

Last year, my team took one of these databases, Redis, bypassed the kernel, connected a 100Gbps network, and took that 10 million requests per second to almost 50 million. Earlier this year we began working with Algo-Logic, Dell, and CC Integration to blow way past that 50 million mark. At RedisConf2020 in May, Algo-Logic announced a 1U Dell server they’ve customized that can service nearly a half-billion requests per second. To process these requests, the load is spread across two AMD EPYC CPUs and three Xilinx FPGAs. All requests are serviced directly from local memory using an in-memory key-value store. For requests serviced by the FPGAs, the response time is measured in billionths of a second. Perhaps we should explain how Algo-Logic got here, and why this number is significant.

Some time ago, a new class of database came back into everyday use, classified as NoSQL because they don’t use the Structured Query Language and are non-relational. These databases rely on clever algorithmic tricks to rapidly store and retrieve information in memory; this is very different from how traditional relational databases function. These NoSQL systems are sometimes referred to as key-value stores. With these systems, you pass in a key, and a value is returned. For example, pass in “12345,” and the value “2Z67890” might be returned. In this case, the key could be an order number, and the value returned the tracking number or status, but the point is you made a simple request and got back a simple answer, perhaps in a few billionths of a second. What Algo-Logic has done is write an application for the Xilinx Alveo U50 that turns the 40 Gbps Ethernet port on this card into four smoking-fast key-value stores, each with access to the card’s 8GB of High Bandwidth Memory (HBM). Each Alveo U50 card with Algo-Logic’s KVS can service 150 million requests per second. Here is a high-level architectural diagram showing all the various components:

Architecture for 490M Requests/Second into a 1U Server
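To make the key-value idea concrete, here is a minimal sketch using the redis-py client and the order-number example from the paragraph above; it assumes a Redis server on localhost and is only meant to show the get/set and multi-get (MGET) pattern, not Algo-Logic’s FPGA implementation.

```python
# Minimal key-value illustration with redis-py; assumes Redis on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Key -> value: an order number maps to a tracking number.
r.set("12345", "2Z67890")
print(r.get("12345"))                      # -> 2Z67890

# MGET batches many lookups into a single round trip, the same idea behind
# packing 44 get requests into one packet, described below.
r.mset({"12346": "2Z67891", "12347": "2Z67892"})
print(r.mget(["12345", "12346", "12347"]))
```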

There are five production network ports on the back of the server: three 40Gbps and two 25Gbps. Each of the Xilinx Alveo U50 cards has a single 40Gbps port, and the dual 25Gbps ports are on an OCP-3 form-factor card, the Xilinx XtremeScale X2562, which carries requests into the AMD EPYC CPU complex. Algo-Logic’s code running in each of the Xilinx Alveo cards breaks the 40Gbps channel into four 10Gbps channels and processes requests on each individually. This lets Algo-Logic make the best possible use of the FPGA resources available to them.

Furthermore, to overcome network overhead, Algo-Logic packs 44 get requests into a single 1408-byte packet. For those familiar with Redis, this is similar to an MGET (multiple get) request. A single 32-byte get request easily fits into the smallest Ethernet payload, which is 40 bytes, but Ethernet then adds another 24 bytes of routing and a 12-byte frame gap, so using a single request per packet results in networking overhead consuming 58% of the available bandwidth. This is huge and can clearly impact the total requests-per-second rate. Packing 44 requests of 32 bytes each into a single packet drops the network overhead to about 3% of the total bandwidth, which means significantly higher requests-per-second rates.
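The overhead numbers above are easy to reproduce. The sketch below uses the framing figures quoted in this post (a 40-byte minimum payload, 24 bytes of routing overhead, and a 12-byte inter-frame gap); exact Ethernet accounting varies, so treat these as the article’s own back-of-the-envelope values.

```python
# Back-of-the-envelope reproduction of the overhead figures quoted above.
FRAME_OVERHEAD = 24 + 12            # routing bytes plus inter-frame gap

def overhead_fraction(useful_bytes, payload_bytes):
    """Share of wire bandwidth not carrying useful request data."""
    wire_bytes = payload_bytes + FRAME_OVERHEAD
    return 1 - useful_bytes / wire_bytes

# One 32-byte get request padded into the 40-byte minimum payload.
print(f"single request: {overhead_fraction(32, 40):.1%} overhead")             # ~58%

# 44 requests of 32 bytes each packed into a single 1408-byte payload.
print(f"packed requests: {overhead_fraction(44 * 32, 44 * 32):.1%} overhead")  # ~2.5%
```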

What Algo-Logic has done here is extraordinary. They’ve found a way to tightly link the 8GB of High Bandwidth Memory on the Xilinx Alveo U50 to four independent key-value store instances that can service requests in well under a microsecond. To learn more, consider reaching out to John Hagerman at Algo-Logic Systems, Inc.

SmartNICs, the Next Wave in Server Acceleration

As system architects, we seriously contemplate and research the components to include in our next server deployment. First, we break the problem being solved into its essential parts; then we size the components necessary to address each element. Is the problem compute-, memory-, or storage-intensive? How much of each element will be required to craft a solution today? How much of each will be needed in three years? As responsible architects, we have to design for the future, because our team will still be responsible for what we purchase today three years from now. Accelerators complicate this issue because they can both dramatically breathe new life into existing deployed systems and significantly skew the balance when designing new solutions.

Today foundational accelerator technology comes in four flavors: Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Multi-Processor Systems on a Chip (MPSoCs), and most recently Smart Network Interface Cards (SmartNICs). In this market, GPUs are the 900-pound gorilla, but FPGAs have made serious market progress over the past few years with significant deployments in Amazon Web Services (AWS) and Microsoft Azure. MPSoCs, and now SmartNICs, blend many different computational components into a single chip package, often utilizing a mix of ARM cores, GPU cores, Artificial Intelligence (AI) engines, FPGA logic, Digital Signal Processors (DSPs), as well as memory and network controllers. For now, we’re going to skip MPSoCs and focus on SmartNICs.

SmartNICs place acceleration technology at the edge of the server, as close as possible to the network. When computational processing of network-intense workloads can be done at the network edge, within a SmartNIC, it can relieve the host CPU of many mundane networking tasks. Normal server processes require that the host CPU spend, on average, 30% of its time managing network traffic; this is jokingly referred to as the data center tax. Imagine how much more you could get out of a server if just that 30% were freed up, and what if more could be made available?

SmartNICs that leverage ARM cores and/or FPGA logic cells exist today from a growing list of companies like Broadcom, Mellanox, Netronome, and Xilinx. SmartNICs can be designed to fit into a Software-Defined Networking (SDN) architecture. They can accelerate tasks like Network Function Virtualization (NFV), Open vSwitch (OvS), or overlay network tunneling protocols like Virtual eXtensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). I know, networking alphabet soup, but the key here is that complex routing and packet encapsulation tasks can be handed off from the host CPU to a SmartNIC. In virtualized environments, significant amounts of host CPU cycles can be consumed by these tasks. While they are not necessarily computationally intensive, they can be volumetrically intense. With datacenter networks moving to 25GbE and 50GbE, it’s not uncommon for host CPUs to process millions of packets per second. This processing happens today in the kernel or hypervisor networking stack. With a SmartNIC, packet routing and encapsulation can be handled at the edge, dramatically limiting the impact on the host CPU.

If all you were looking for from a SmartNIC is to offload the host CPU from having to do networking, thereby saving the datacenter networking tax of 30%, this might be enough to justify the expense. Most of the SmartNIC product offerings from the companies mentioned above run in the $2K to $4K price range. So suppose you’re considering a SmartNIC that costs $3K, and with the proper software, under load testing, you’ve found that it returns 30% of your host CPU cycles; at what point does the ROI make sense? A simplistic approach would suggest that $3K divided by 30% yields a system cost of $10K. So if the cost of your servers is north of $10K, then adding a $3K SmartNIC is a wise decision. But wait, there’s more.
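That break-even arithmetic is simple enough to capture in a couple of lines; the figures below are the ones used in this paragraph, not vendor pricing data.

```python
# Break-even sketch for the SmartNIC ROI argument above.
def breakeven_server_cost(nic_cost, cpu_fraction_returned):
    """Server price above which the reclaimed CPU pays for the SmartNIC."""
    return nic_cost / cpu_fraction_returned

nic_cost = 3_000      # a SmartNIC in the $2K-$4K range
returned = 0.30       # the ~30% "data center tax" handed back to the host

print(f"${breakeven_server_cost(nic_cost, returned):,.0f}")   # -> $10,000
```

Anything cheaper than that break-even figure and the card has to earn its keep through the additional capabilities discussed next.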

SmartNICs can also handle many complex tasks like key-value stores, encryption and decryption (IPsec, MACsec, soon even SSL/TLS), next-generation firewalls, electronic trading, and much more. Frankly, the NIC industry is at an inflection point similar to when video cards evolved into GPUs to support the gaming and virtualization markets. While Sony coined the term GPU with the introduction of the PlayStation in 1994, it was Nvidia five years later, in 1999, who popularized the GPU with the introduction of the GeForce 256. I doubt that in the mid-1990s, while Nvidia was designing the NV10 chip, the heart of the GeForce 256, its engineers were also pondering how it might be used a decade later in high-performance computing (HPC) applications that had nothing to do with graphics rendering. Today we can look at all the ground covered by GPU and FPGA accelerators over the past two decades and quickly see a path forward for SmartNICs where they may even begin offloading the primary computational tasks of a server. It’s not inconceivable to envision a server with a half dozen SmartNICs all tasked with encoding video, acting as key-value stores or web caches, or even trading stocks on various exchanges. I can see a day soon where the importance of SmartNIC selection will eclipse server CPU selection when designing a new solution from the ground up.

The Role of Trust in the IoT

Shutterstock Image

I’m actively seeking a new job, please see the above subtitle. If you want to learn more, consider visiting my LinkedIn profile.

A distinctive characteristic of our species, perhaps the most distinctive, and the one that has separated us from all the others, is the compounding effect of our technology. Each generation has added to our collective knowledge, improved our processes, and accelerated our development. Today we regularly craft complex products, with billions of internal components, from devices one-hundredth the diameter of a blood cell. Regardless of how small technology enables us to shrink our creations, one thing remains constant: we still need to place our trust in this technology for it to make a difference in our lives. Few people understand how Alexa takes our verbal request and turns it into an answer; the how is unimportant to most. What is important is that when she provides us with information, we can trust and act upon it. Without trust, technology loses all its advantage; it falls into disuse and is eventually pruned from our collective knowledge base. Trust is the cement that binds one innovation to the next, and it is vital to the advancement of technology. Trust is a fragile construct, though, one that can easily be destroyed.

My childhood was enjoyed in a suburb an hour north of the Big Apple. From the mid-1960s through most of the 1970s we never locked our doors; even the garage was often left open overnight. Our home was a simple raised-ranch tract structure built in a sleepy little town only a decade more advanced than Mayberry. Most Saturday mornings, I’d ride my bike five miles into town to turn in my paper-route money. Then I’d take my wages, buy a slushy near the firehouse, stop at the Radio Shack to see what was new, and finish with a Big Mac lunch at McDonald’s, arriving home by early afternoon. If the weather was beautiful, I’d swing by the house, pick up my rod, and head down to one of a half dozen or more fishing spots on the nearby reservoir. I only needed to be home for dinner. Life was simple, and trust wasn’t earned; it was our default setting. This was decades before my first pager or cell phone, but I always had a dime in my pocket to call home from a payphone in the event the weather turned or my bike failed. One Sunday, when I was twelve, we came home early from church to find our next-door neighbor’s son sitting on the back steps with several of our prized belongings in his hands. My parents’ trust, especially my mom’s, was shattered. This single event changed everything and established a new paradigm. We started locking our doors, and my mom gave me a brass key for the first time in my life.

Trust is an interesting attribute; we give it away for free, then we’re shocked when it’s abused or entirely disregarded.
The above brass key represented a simple technological solution designed to bridge the trust my mom had lost in our neighbors. It’s interesting to see how a single small piece of metal, nothing more than a token with a single function, can replace lost trust. Many years later, as a security professional, I learned how easily that custom piece of brass could be supplanted by two generic pieces of spring steel, some skill, and a few seconds. Technology is the distillation of our expertise, processes, and techniques in the production of goods or services, so why is trust important?

As we glide into the age of the Internet of Things (IoT), everything will become interconnected, and trust will be the cement in the foundation on which all this technology depends. I’m in the process of building a new home. It will feature the latest IoT: locks, garage door opener, doorbell, thermostats, smoke detectors, light fixtures, outlets, appliances, speakers, cameras, and even an elevator. Everything will be interconnected, and Alexa will have dominion over it all. As I come home, my garage door will open, and it will trigger a series of events throughout the house if nobody else is already back. The HVAC system will make the necessary adjustments based on my preferences and the time of year. Depending on the time of day, lights may come on in a predetermined sequence, and music will be playing. If my programming works out properly, the TV will display anomalous events since my departure skimmed from the various logs of all these IoT devices. I’ll then know if doors were opened while I was absent, and if so, I can call up and review all motion video captured at each of these points of entry. All of this will require each piece trusting that the others are performing correctly.

This is not to say that we haven’t seen trust in IoT devices be bypassed in the recent past. Three common agents can violate the trust inherent in any system: insiders, outsiders, and the manufacturer. By insiders, I generally mean the average non-technical system user; in the example above, that will be my wife, daughter, or parents when they visit. Outsiders are folks with malicious intent, whose objectives are not aligned with the users’ and whose goal is the exploitation of the system, often for some revenue-generating purpose. Finally, there is the manufacturer; until the past decade this was a non-issue, but we’ve seen growth in state-sponsored exploitation of technology both in design and within the supply chain.

A story came out last year where a Nest camera was used by a malicious outsider to terrorize an eight-year-old girl in her bedroom. While the camera was “hacked,” it was later revealed that the homeowner had a trivial password for the camera and had NOT enabled two-factor authentication (2FA). The attacker used nothing more than a basic web-crawling service to find the addresses of Nest cameras; then they likely proceeded to use a tool like Hydra to see if any of those cameras had a trivial password without 2FA enabled. Ultimately it was the homeowner who had left the “door open” for this attacker to walk through. While Nest shouldn’t make 2FA mandatory, they could have easily prevented the homeowner from assigning a trivial password to their account.

We’ve seen reports over the years that various smartphones have been susceptible to hot-mic vulnerabilities exploited by hackers. The malicious code is installed via a targeted spear-phishing attack or social engineering. Once the code is executed, that smartphone’s mic can be enabled or disabled at will by the attacker. This lets the attacker listen in not only on phone calls, but on all the sounds captured by that smartphone, regardless of what application is running or what state the phone is in (unless, of course, it’s off).

Finally, we have manufacturers who have, both knowingly and unknowingly, ended up including spyware in their products. Laptops have been a common platform for concern in this space, and several spyware apps have shipped with new laptops over the past decade. Servers are a bit harder to infect, as they often ship with no pre-installed applications beyond, possibly, the OS. Here we’ve heard stories of supply chains being compromised and covert spy hardware being physically inserted into these products, possibly without the manufacturers being aware of the transgression. In these cases it’s hard to know the true story.

So as IoT consumers, what can we do? Well, we have four possible courses of action:

1. Become a Luddite, ignore the trend in IoT, and remove all technology from your life. While this is a choice, if you’re reading this, it isn’t one any of us would find acceptable.

2. Be a sheep, blindly trust everyone, buy the latest gear, and auto-install every update. For the vast majority of folks, this is the only viable option. They likely aren’t technology literate much beyond creating a password, and their lives are focused on other more important pursuits.

3. Trust, but read industry news and form your own opinion, then upgrade when you’re confident it’s appropriate and an improvement. This is where the vast majority of IT folks will land. They’ll stay current with trends, follow Reddit, form their own opinions, and provide support for their families and friends.

4. Trust, but verify by actively doing your own network captures. This is the elite core of bleeding-edge folks who watch their home network on their smartphone for new devices. At least once or twice a year, they’ll do some network captures during quiet times to see which devices might be overly chatty and whether there are any latent security threats; a rough sketch of this kind of monitoring follows below. They may even have small autonomous systems like Raspberry Pis actively looking for threats, and perhaps even posing as honeypots.
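Here is the kind of rough sketch item 4 has in mind: passively sample your home network for a minute and count packets per source MAC address to spot unexpectedly chatty devices. It uses the scapy library, typically needs root privileges, and the interface, duration, and thresholds are all assumptions you would tune for your own network.

```python
# Passive "trust, but verify" sample: count packets per source MAC for a minute.
# Requires scapy and usually root (or CAP_NET_RAW); tune the timeout as needed.
from collections import Counter
from scapy.all import Ether, sniff

talkers = Counter()

def tally(pkt):
    if pkt.haslayer(Ether):
        talkers[pkt[Ether].src] += 1

sniff(timeout=60, prn=tally, store=False)   # sample for 60 seconds

for mac, count in talkers.most_common(10):
    print(f"{mac}  {count} packets")
```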

Since IoT devices are always on, they are ideal for co-opting into a distributed denial-of-service (DDoS) attack platform. We’ve seen this happen a number of times over the past few years; one security hole and thousands or even millions of products become launch platforms. IoT manufacturers need to enforce strong passwords on their gear and promote 2FA. They should also hire security professionals annually to test their products and services, and consider sharing those results with their customers in public Reddit groups. Oftentimes customers provide the best feedback to improve a product’s feature set and security stance.

Autopilot, the Next Killer Application

Tesla Autopilot on the Highway

Yesterday, during one of my many calls each week with my seventy-something mom, she mentioned that she might pass on going to her close friend’s 80th birthday party. When I asked why, she said that the four-and-a-half-hour drive up the Florida Turnpike was becoming too scary; people are continually cutting her off, and it makes her very fearful. Mom hasn’t had an accident in decades, and she doesn’t have any of the usual scratches and small dents that often deface the autos of our greatest generation. Her vision is excellent, her memory is intact, and her reflexes are still acceptable. My dad passed seven years ago of lung cancer, and in the final weeks of his life, we had to insist that he no longer drive. At that time, the O2 saturation in his blood would often drop when he sat for a few minutes, and he’d fall asleep through no fault of his own. Insisting your parent no longer drive and removing access to their car is not a pleasant task.

On relaying this story yesterday to a friend, she mentioned that her mom, also well into her seventies, had significant macular degeneration and was still driving. It wasn’t until her daughter noticed a dent that her mom volunteered her medical condition. Once that was exposed, they too had to face the task of removing her freedom to travel at will. Another friend has a mom with mild dementia, and while her driving skills are still sharp, she sometimes forgets where she is going or how to get home. They chose to put a tracker on her car and geofence her house, church, and market so that as long as she stays within a half-mile of this triangle, she can roam at will. If she gets worried or “lost,” family members can quickly look up where she is on their smartphones and calmly provide verbal directions to guide her to her destination. While I don’t agree with this approach, it’s not my place to tell them otherwise. Driving is a privilege, but over a certain age we often perceive it as a right, and taking it away from someone can be mentally crippling. Autopilot should be a fantastic feature for this demographic, but unfortunately, they aren’t, and never will be, intellectually prepared to adopt it. We need to get there in steps.

Many were surprised by a Super Bowl commercial this year, aptly named “Smaht Pahk,” in which a 2020 Hyundai Sonata parks itself in an otherwise tight spot. This feature is made possible by a new breed of computer chips that fuse computing and sensor processing on the same chip. When we say sensor processing, in this case we’re talking about receiving live data from 12 ultrasonic sensors around the car, four 180-degree fisheye cameras, two 120-degree front- and rear-facing cameras, GPS, and an inertial measurement unit (IMU). This is all consumed by some extremely smart Artificial Intelligence, which then finds and steers the car into a safe parking spot. This article, though, is about Autopilot, so why are we talking about self-parking?

As technology marketers, we’ve learned that cutting-edge features will quickly become a boat anchor if consumers aren’t intellectually prepared to accept them. My favorite example is the IBM Simon, arguably the first smartphone, brought to market 13 years before Steve Jobs debuted the “revolutionary” Apple iPhone. The Simon was on the market for only seven months and sold a mere 50K units. Even more surprising, the prototype was shown two years earlier at the November 1992 COMDEX. There will always be affluent, bleeding-edge early adopters, in the above case 50K of them, who will purchase revolutionary products, but the gulf between sales to these consumers and the mass market can be enormous. IBM was correct in pulling the Simon so quickly after its introduction because mass-market consumers were at least a decade behind in adoption. We needed to experience MP3 players in 1998 to accept the Apple iPod three years later in 2001. We also needed to carry around a wide assortment of cell phones, personal organizers, and multifunction calculators. Every one of these devices prepared consumers for the iPhone in 2007. As technology marketers, we need to help consumers walk before we can expect them to run.

Self-driving cars have appeared in science fiction movies many times over the years, one of my favorite scenes being Sandra Bullock in “Demolition Man” (1993), set in 2032. Self-driving isn’t even mentioned; she’s busy face-timing with her boss as her car speeds down the highway. In the foreground, the steering wheel is retracted and moving on its own. We need to slow-roll the public into becoming comfortable yielding control of driving to the car itself. Technologies like “Auto Emergency Braking” and “Lane Keeping Assist,” along with “Smart Park,” are feature inroads that will make self-driving commonplace. Given how consumers adopt technology, it wouldn’t be surprising at all if it’s 2032 before self-driving becomes standard in most vehicles. Now, Elon Musk and his team at Tesla are brilliant people, as was the IBM Simon team. The difference, though, is that Tesla is selling a car first while delivering a mobile computational platform. The IBM Simon was viewed as a digital assistant first and a phone second. The primary functionality is critical to consumer perception. Consumers know how to buy a car; heck, we have a century of experience in this market. Conversely, if Tesla had chosen to market their technology as a mobile computing platform, they’d have gone out of business years ago. I’m sure some readers are still scratching their heads at the notion of a mobile computing platform.

Consumers have become comfortable with their smartwatches, phones, tablets, and computers all autonomously upgrading while they sleep, so why should their car be any different? Imagine a car whose features are updated remotely and autonomously at night while it is charging. Today, Tesla’s Autopilot is restricted to highway driving, with smart features like lane centering, adaptive cruise control, self-parking, automatic lane changing, and summon. Later this year, via a nightly update, some models will pick up recognizing and responding to traffic lights and stop signs, and then automatic driving on city streets. So how is this possible? It all goes back to the technology behind self-park.

For all these advanced driving features to work, we need to put computing as close as possible to where the data originates. These computations also need to be instantiated in hardware that is easily reprogrammable, ruggedized, and able to run as a fully autonomous system. General-purpose CPUs, or even GPUs, won’t cut it; these applications are ideal for FPGAs coupled with complete systems on a chip. People aren’t going to wait while their car boots up and then loads software into all its systems. We are accustomed to pressing a button to start the car, shifting it into gear, and going.

A truly intelligent autopilot that could go from the home garage to a parking space at the destination and back would address all of the above issues for our greatest generation. My mom, who can still drive, should be content supervising a car while it maintains a reasonable highway speed and deftly avoids the automobiles around it. She could then roam from her home in the Florida Keys up both coasts to visit friends, because she’d once again be confident behind the wheel. Autopilot is the solution our aging boomers require to maintain their freedom until the very end. Unfortunately, many are too old to accept it intellectually, my mom included. The tail end of the Boomers, perhaps those born in the early 1960s, is the older edge of Tesla’s core demographic for this $7,000 Autopilot feature. It’s a shame that the underlying technology and its application came too late for my mom and her generation.

Could I Mine Bitcoin and Turn a Profit?

Large Scale Mining

When people find out that I’m connected to digital currency mining, the first question they often ask is the one above. Sadly, as an individual, the answer is no. It would cost you roughly $200 to get started, plus about $1.74/day out of pocket (yes, it’s a money pit right now) to call yourself a Bitcoin miner. Oh, and you’d probably drive your family crazy with all the noise. Here are the economics behind Bitcoin (BTC) mining at this moment.

Today, and today is important because all of the numbers below are very fluid, a new Bitcoin block is mined roughly every 11 minutes and produces a block reward of 12.73 BTC, which includes the transaction fees earned in the process. Using the following formula, we can see that roughly 70 BTC are earned hourly:

(12.73 BTC / Block) * (1 Block / 11 Minutes) * (60 Minutes / Hour) = 69.44 BTC / Hour

(69.44 BTC / Hour) * (24 Hours / Day) = 1,666.56 BTC / Day

At this moment in time, the total computational power working to mine BTC is 82,030 petahashes per second, or, converted to the more standard units miners are rated in, 82,030,000 terahashes per second (TH/s). One of the most affordable and efficient miners available now is the Bitmain Antminer S9K, which retails for $101, but after import taxes and shipping from China, expect it to run you about $200. This box produces 14 TH/s, so if you put one online, you’d represent 0.0000001707 of the total capacity right now. Multiply that by the daily BTC reward, and you could earn 0.0002344 BTC a day. With BTC trading at $7,608 USD, that would mean you could earn $1.78 USD/day before mining pool fees and the cost of power. Pool fees often run at 5%, so this brings your earnings down to $1.69 USD/day. The S9K draws 1.19 kW, so at $0.12/kWh, it requires $3.43/day in electricity, leaving you out of pocket $1.74/day.
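For anyone who wants to rerun this arithmetic with whatever the numbers are on the day they read this, here is a minimal Python sketch of the calculation. The variable names are mine, and the defaults are simply the point-in-time figures quoted above; since those inputs move constantly, expect your results to drift from the ones in this paragraph.

# Back-of-the-envelope Bitcoin mining economics using the point-in-time
# figures quoted above. Treat the output as a sketch, not a forecast.

block_reward_btc = 12.73           # average reward per block, fees included
minutes_per_block = 11             # observed average block interval
network_hashrate_ths = 82_030_000  # total network hash rate, in TH/s
miner_hashrate_ths = 14            # Bitmain Antminer S9K
btc_price_usd = 7_608
pool_fee = 0.05                    # typical 5% mining pool fee
miner_power_kw = 1.19              # S9K power draw
electricity_usd_per_kwh = 0.12

# Network-wide daily issuance, then this miner's proportional share of it.
blocks_per_day = 24 * 60 / minutes_per_block
network_btc_per_day = block_reward_btc * blocks_per_day
my_btc_per_day = (miner_hashrate_ths / network_hashrate_ths) * network_btc_per_day

gross_usd_per_day = my_btc_per_day * btc_price_usd
after_pool_fee = gross_usd_per_day * (1 - pool_fee)
power_usd_per_day = miner_power_kw * 24 * electricity_usd_per_kwh
net_usd_per_day = after_pool_fee - power_usd_per_day

print(f"BTC mined per day:  {my_btc_per_day:.7f}")
print(f"Gross revenue:      ${gross_usd_per_day:.2f}/day")
print(f"After 5% pool fee:  ${after_pool_fee:.2f}/day")
print(f"Electricity:        ${power_usd_per_day:.2f}/day")
print(f"Net profit (loss):  ${net_usd_per_day:.2f}/day")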

If we work the numbers backward, dividing the $3.43/day power bill by the 0.0002344 BTC mined each day, we find that BTC would need to be trading at something over $14,633/BTC for us to break even on electricity (before pool fees).

If we ever want to earn back our $200 capital investment, we would also assume a six-month return, the industry rule of thumb in mining today for ASIC rigs. That works out to earning at least $1/day after power costs, meaning we need to mine $4.43 worth of BTC per day; at 0.0002344 BTC/day, that requires the price of BTC to remain at or above roughly $18,900.
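Here is that backward calculation as a small Python sketch, using only the per-day figures quoted above and, like those estimates, ignoring the pool fee; the variable names are mine, and the $1/day target comes from the six-month rule of thumb.

# Working the numbers backward: what BTC price covers the power bill,
# and what price also pays the rig off in roughly six months?
# Inputs are the per-day figures computed above; pool fees are ignored,
# matching the break-even estimates in the text.

btc_per_day = 0.0002344      # daily yield from one Antminer S9K
power_usd_per_day = 3.43     # daily electricity cost at $0.12/kWh
payback_usd_per_day = 1.00   # ~$1/day recovers the $200 rig in ~6 months

# Price at which daily revenue just covers the electricity bill.
breakeven_price = power_usd_per_day / btc_per_day

# Price at which we also clear ~$1/day toward the capital cost.
payback_price = (power_usd_per_day + payback_usd_per_day) / btc_per_day

print(f"Break-even BTC price:         ${breakeven_price:,.0f}")
print(f"Six-month payback BTC price:  ${payback_price:,.0f}")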

Now, one could adopt the famous “mine and hold” strategy: hold onto every BTC you earn, and if, sometime six months or more in the future, BTC is trading above $18,900 when you sell, you’d at least break even.

It should be noted that in May of 2020, Bitcoin will go through a halving event, and the reward for each block mined after that will be cut in half; at that point, the price of BTC is expected to climb significantly. So mine and hold could pay off in spades.
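To see why miners are counting on that climb, here is a rough sketch of what the halving does to the break-even price if nothing else changed (in reality, the network hash rate and fee levels will shift too). The 12.5 and 6.25 BTC subsidy figures come from Bitcoin’s issuance schedule; treating the remaining 0.23 BTC of the quoted 12.73 BTC reward as fees that stay constant is my simplifying assumption.

# Effect of the May 2020 halving on the break-even price, holding the
# network hash rate, fees, and power cost constant (a simplification).

btc_per_day_now = 0.0002344          # S9K daily yield today, per the text
power_usd_per_day = 3.43             # daily electricity cost
reward_now = 12.73                   # current average reward: subsidy + fees
subsidy_now, subsidy_after = 12.5, 6.25
fees = reward_now - subsidy_now      # ~0.23 BTC of fees, assumed unchanged
reward_after = subsidy_after + fees

# Daily yield scales with the reward per block; break-even price is
# simply the power bill divided by the daily yield.
btc_per_day_after = btc_per_day_now * reward_after / reward_now
breakeven_now = power_usd_per_day / btc_per_day_now
breakeven_after = power_usd_per_day / btc_per_day_after

print(f"Break-even price before the halving: ${breakeven_now:,.0f}")
print(f"Break-even price after the halving:  ${breakeven_after:,.0f}")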

One final thing to consider: these BTC mining rigs are essentially two 5.25″ fans blowing air over hundreds of chips, so they are noisy and hot. They are 1,200W space heaters that happen to produce a little BTC. So if you do want to venture into this market as an individual, you should consider doing it in a sound-proofed room and then venting the heat somewhere useful, perhaps grandma’s room; she’s always cold.

*Note: These numbers were as of November 21st, when this piece was first written. Since then, Bitcoin has gone from $7,608 to $6,632, so it’s now even less profitable.