After nearly seventeen years with IBM, in July of 2000 I left for a startup called Telleo, founded by four IBM Researchers I knew and trusted. From 1983 through April 1994, I worked at IBM Research in NY and often dealt with colleagues at the Almaden Research Center in Silicon Valley. When they asked me to join, in May of 2000, there was no interview; I had already impressed all four of them years earlier. By March of 2001, the implosion of Telleo was evident. Although I hadn't been laid off, I voluntarily quit just before Telleo stopped paying on their IBM lease, which I'd negotiated. The dot-com bubble burst in late 2000, so by early 2001, you were toast if you weren't running on revenue. Now, if you didn't live in Silicon Valley during 2001, imagine a large mining town where the mine had closed; that was close to what it was like, just on a much grander scale. Highway 101 had gone from packed during rush hour to what it typically looked like on a weekend. Venture capitalists drew the purse strings closed, and if you weren't running on revenue, you were out of business. Most dot-com startups bled red monthly and eventually expired.
Now imagine being an unemployed technology executive in the epicenter of the worst technology employment disaster in history, up until that point, with a wife who volunteered and two young kids. I was pretty motivated to find gainful employment. For the past few years, a friend of mine had run a small Internet Service Provider and had allowed me to host my Linux server there in return for some occasional consulting.
I’d set Nessus up on that server, along with several other tools, so it could be used to ethically hack clients’ Internet servers, only by request, of course. One day, when I was feeling particularly desperate, I wrote a small Perl script that sent a simple cover letter to jobs@X.com, where “X” was a string starting with “aa” and eventually ending at “zzzzzzzz”. It would wait a few seconds between each email, and since these were going to the jobs@ address at each domain, I figured it was an appropriate email blast. Remember, this was 2001, before spam was a widely used term. I thought, “That’s what the jobs account is for anyway, right?” My email was very polite; it requested a position and briefly highlighted my career.
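The original Perl is long gone, but the enumeration it performed was just base-26 counting over domain names. A minimal Python sketch of that counting logic (the function name and the `.com` suffix are my own assumptions):

```python
import itertools
import string

def domain_names(max_len=8):
    """Yield candidate domains aa.com, ab.com, ..., zz.com, aaa.com, ...
    up to max_len letters, counting the way the script did."""
    letters = string.ascii_lowercase
    for length in range(2, max_len + 1):
        for combo in itertools.product(letters, repeat=length):
            yield "".join(combo) + ".com"

gen = domain_names()
print(next(gen), next(gen))  # first two candidates: aa.com ab.com
```

In practice the script also slept a few seconds between sends, which is why roughly 4,000 emails took long enough for someone to notice and shut it down.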
Well, somewhere around 4,000 emails later, I got shut down, and my Internet domain, ScottSchweitzer.com, was blackholed. For those not familiar with the Internet version of this term, it essentially means no email from your domain even enters the Internet. If your ISP is run by a friend and he fixes it for you, he runs the risk of getting sucked in, and all the domains he hosts get dragged into the void as well. Death for an ISP. Fortunately, my friend who ran the ISP was a lifelong IBMer with networking connections at some of the highest levels of the Internet, so the ban stopped with my domain.
Cleaning this up required some emails and phone calls to fix the problem from the top down. It took two weeks and a fair amount of explaining to get my domain back online to the point where I could once again send out emails. Fortunately, I always have at least several active email accounts and domains. Also, this work wasn’t in vain, as I received a few consulting gigs as a result of the email blast. So now you know someone who was banned from the Internet!
Imagine buying a product today that your family would cherish and still be using in 2160. I’m not talking about a piece of quality furniture or artwork, but an analog clock with parts that are in continuous motion. This past weekend, I once again got my grandfather clock, pictured to the right, functioning. This is a pendulum clock initially made for and installed at the NY Stock Exchange, then moved into a law office, and later a private residence. I inherited it some forty years ago, before becoming a teen, and it has operated a few times since it was placed in my care. This timepiece was manufactured in the 1880s as a self-winding, battery-powered unit. Batteries were a very new technology in 1880. The clock shipped with two wet cells that the new owner then had to set up. The instructions called for pouring powdered sulfuric acid from paper envelopes into each of two glass bottles, adding water, then stirring. The lids of the glass bottles contained the anode and cathode of each cell. The two cells were then wired in series to produce a three-volt battery.
When it was installed at the Exchange, around the time of Thomas Edison, it was modified with the addition of a red button on the side, designed to synchronize the clock with the others on the Exchange. The button on this clock, and all others like it at the Exchange, would be pressed on the hour, prior to the opening of the market. It has since been rewired, so the button now triggers an out-of-sync winding. During my childhood, this clock ran for a few years until it fell silent as a result of dead batteries. Over my adult life, it has run continuously several times, often only for a year or two at a stretch until the batteries were depleted. The issue, more often than not, was simply access to replacement batteries. In the 1950s, the batteries this clock required, a pair of No. 6 dry cells, were available in most hardware stores. Many devices were designed around the No. 6 cell, including some of the earliest automobiles; early in the 1900s, it was a very popular power source.
We moved recently, and one of my wife’s conditions for hanging my grandfather clock in the living room was that it function. In 2005, after an earlier move from California to North Carolina, we hired a local clock repairman to restore this family timepiece. He cleaned out the old lubrication, replaced a few worn parts, hung the clock, and sold us his last pair of original No. 6 dry-cell batteries. For those not familiar with the No. 6, it was a 1.5-volt battery the size of a large can of beer, but its standout attribute was that it could provide high instantaneous current for a brief period. In the 1990s, No. 6 cells were banned in the US because they contained mercury. The replacements offered had the same size and fit but couldn’t produce the required instantaneous current.
Last week, after some research and a little math, I realized that four dual D-cell battery boxes, connected via terminal strips to limit current loss, could produce about 30% more instantaneous current at three volts than the original pair of No. 6 cells. So I glued the boxes together to form a maintainable brick, added two five-terminal strips, one positive and one negative, then tested all the wiring and batteries. After rehanging the clock, leveling the case, installing the new battery box, and pressing the wind button, I raised the pendulum and let it go. The escapement rocked back and forth, enabling the second hand’s gear to creep ahead one tooth at a time, but after ten minutes, the clock fell silent once again.
After several more attempts, each roughly ten minutes, the internal switch eventually kicked in, and the batteries did their job. The clock wound automatically for the first time in well over a decade; I was elated. Alas, another few minutes later, the clock came to rest once more. After some additional research into how the clock was losing energy, I came across a few suggestions. The hands were placed correctly, secured, and didn’t touch anything. I shoved my iPhone camera into the side of the case to get a view of the pendulum hanging on the escapement and found that it was hanging a bit askew. I then sprayed a small amount of synthetic lubricant on the escapement, crossed my fingers, and gave the pendulum another nudge. Fifteen minutes later, it coasted to a halt. After tinkering a few more times with the pendulum over the next few hours, and a bit more lubricant, the clock was eventually sustaining movement. That was Sunday; it’s now Tuesday night, and the clock is going strong and hasn’t stopped since. As of this morning, it was losing about three minutes every twenty-four hours.
Now the chase is on to improve the accuracy. This is done by changing the pendulum’s effective length via a nut below the pendulum’s bob. Loosen the nut, and the bob drops, making the pendulum longer and the clock run slower. Tighten the nut, and the pendulum is shorter, and the clock runs faster. Perhaps a successive series of turns over the next few days will get this 140-year-old device down to a few seconds a day!
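The physics behind this adjustment is the pendulum period formula T = 2π√(L/g): since the period goes as the square root of length, the fractional change in length needed is twice the fractional timing error. A quick back-of-the-envelope check, assuming (my assumption, not verified for this clock) a pendulum that beats seconds:

```python
import math

g = 9.81                      # gravitational acceleration, m/s^2
seconds_lost_per_day = 180.0  # the clock is running 3 minutes slow
day = 86_400.0

# Fractional period error: each "second" the clock ticks is this much too long.
dT_over_T = seconds_lost_per_day / day

# T = 2*pi*sqrt(L/g)  =>  dT/T = (1/2) * dL/L  =>  dL/L = 2 * dT/T
dL_over_L = 2 * dT_over_T

# A seconds-beating pendulum (T = 2 s) is about 0.994 m long.
L = g * (2.0 / (2 * math.pi)) ** 2
dL = dL_over_L * L
print(round(dL * 1000, 1))  # millimeters the bob must rise: about 4 mm
```

A few millimeters of bob travel to absorb three minutes a day is why the adjustment nut needs only very gentle turns.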
Update, Friday, June 12: after some very gentle tightening of the pendulum nut, thereby shortening the pendulum and speeding up its swing, we’re down from three minutes to 61 seconds a day. There may be a few threads left, so hopefully we can get this loss down to less than 30 seconds a day.
Update, Monday, July 6: the clock is running strong on the original eight D-cells from last month; I’m expecting about a year from each set, and it still sounds strong. After some additional tweaks, we’re now only losing 25 seconds a day, or about one every hour. So I just need to add a minute every two days; not bad.
This post was originally written in January of 2015, but due to a take-down letter I received a week later, this story has remained unpublished for the last seven years.
Memcached was released in May of 2003 by Danga Interactive. This free, open-source software layer provides a general-purpose distributed in-memory cache that can be used by both web and app servers. Five years later, Google released version 1.1.0 of its App Engine, which included its own version of an in-memory cache called Memcache. The capability to store objects in a huge pool of memory spread across a large distributed fabric of servers is integral to the performance of many of Google’s products. Google showed in a presentation earlier this year on Memcache that a query to a typical database requires 60-100 milliseconds, while a similar Memcache query needs only 3-8 milliseconds, a 20X improvement. It’s no wonder Google is a big fan of Memcache. This is why, in March of 2013, Google acquired both technology and people from a small networking company called Myricom, which had cracked the code to accelerate the network path to Memcache.
To better understand what Google acquired, we need to roll back to 2009, when Myricom ported its MX communications stack to Ethernet. MX over Ethernet (MXoE) was originally crafted for the High-Performance Computing (HPC) market, to create a very limited but extremely low latency stack for their then two-year-old 10GbE NICs. MXoE was reformulated using UDP instead of MX, and this new low latency driver, named DBL, was then engineered to serve the High-Frequency Traders (HFT) on Wall Street. Throughout 2010, Myricom evolved this stack by adding limited TCP functionality. At that time, half-round-trip performance (one send plus one receive) over 10GbE averaged 10-15 microseconds; DBL did it in 4 microseconds. Today (2015), by comparison, Solarflare, the leader in ultra-low latency network adapters, does this in well under 2 microseconds. So in 2012, Google was searching for a method to apply ultra-low latency networking tricks to a new technology called Memcache. In the fall of 2012, Google invited several Myricom engineers to discuss how DBL might be altered to service Memcache directly. They were also interested in knowing whether Myricom’s new silicon, due out in early 2013, might be a good fit for this application. By March of 2013, Google had tendered an offer to Myricom to acquire their latest 10/40GbE chip, which was about to go into production, along with 12 of their PhDs who handled both the hardware and software architecture. Little is publicly known about whether that chip or the underlying accelerated driver layer for Memcache ever made it into production at Google.
Fast forward to today (2015): earlier this week, Solarflare released a new whitepaper highlighting how they’ve accelerated the publicly available, free, open-source Memcached layer using their ultra-low latency driver layer called OpenOnload. In this whitepaper, Solarflare demonstrated performance gains of 2-3 times that of a similar Intel 10GbE adapter. Imagine Google’s Memcache farm being only one-third the size it is today; we’re talking serious performance gains here. For example, on a 20-core server, the dual-port Intel 10GbE adapter supported 7.4 million multiple-get operations while Solarflare provided 21.9 million, nearly a 200% increase in the number of requests. Looking at mixed throughput (get/set in a ratio of 9:1), Intel delivered 6.3 million operations per second while Solarflare delivered 13.3 million, a 110% gain. That’s throughput; how about latency? Using all 20 cores and batches of 48 get requests, Solarflare clocked in at 2,000 microseconds and Intel at 6,000 microseconds. Across all latency tests, Solarflare reduced network latency by, on average, 69.4% (the lowest reduction was 50%, and the highest 85%). Here is a link to the 10-page Solarflare whitepaper with all the details.
While Google was busy acquiring technology & staff to improve their own Memcache performance, Solarflare delivered it for their customers and documented the performance gains.
Many applications from biological to financial and Web2.0 utilize in-memory databases because of their cutting-edge performance, often delivering several orders of magnitude faster response time than traditional relational databases. When these in-memory databases are moved to their own machines in a multi-tier application environment, they often can serve 10 million requests per second, and that’s turning all the dials to 11 on a high-end dual-processor server. Much of this is due to how applications communicate with the kernel and the network.
Last year, my team used one of these databases, Redis; we bypassed the kernel, connected a 100Gbps network, and took that 10 million requests per second to almost 50 million. Earlier this year, we began working with Algo-Logic, Dell, and CC Integration to blow way beyond that 50 million target. At RedisConf2020 in May, Algo-Logic announced a 1U Dell server they’ve customized that can service nearly a half-billion requests per second. To process these requests, the load is spread across two AMD EPYC CPUs and three Xilinx FPGAs. All requests are serviced directly from local memory using an in-memory key-value store system. For those requests serviced by the FPGAs, the response time is measured in billionths of a second. Perhaps we should explain how Algo-Logic got here and why this number is significant.
Some time ago, a new form of database came back into everyday use, classified as NoSQL because they didn’t use Structured Query Language and were non-relational. These databases rely on clever algorithmic tricks to rapidly store and retrieve information in memory; this is very different from how traditional relational databases function. These NoSQL systems are sometimes referred to as key-value stores: you pass in a key, and a value is returned. For example, pass in “12345,” and the value “2Z67890” might be returned. In this case, the key could be an order number and the value returned a tracking number or status, but the point is you made a simple request and got back a simple answer, perhaps in a few billionths of a second. What Algo-Logic has done is write an application for the Xilinx Alveo U50 that turns the 40Gbps Ethernet port on this card into four smoking-fast key-value stores, each with access to the card’s 8GB of High Bandwidth Memory (HBM). Each Alveo U50 card running Algo-Logic’s KVS can service 150 million requests per second. Here is a high-level architectural diagram showing all the various components:
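At its heart, a key-value store has a two-function contract; everything else is performance engineering. A toy sketch of that interface, with the order-number example from above (Algo-Logic’s version implements the same lookup in FPGA logic against HBM):

```python
# A key-value store at its simplest: a hash table mapping keys to values.
store = {}

def kvs_set(key, value):
    store[key] = value

def kvs_get(key):
    return store.get(key)  # None if the key is absent

kvs_set("12345", "2Z67890")   # order number -> tracking number
print(kvs_get("12345"))       # 2Z67890
print(kvs_get("99999"))       # None: no such order
```

The hardware version answers the same `get` in nanoseconds because the lookup runs in dedicated logic with the table held in High Bandwidth Memory rather than going through a CPU and kernel networking stack.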
There are five production network ports on the back of the server, three 40Gbps and two 25Gbps. Each of the Xilinx Alveo U50 cards has a single 40Gbps port, and the dual 25Gbps ports are on an OCP-3 form factor card called the Xilinx XtremeScale X2562 which carries requests into the AMD EPYC CPU complex. Algo-Logic’s code running in each of the Xilinx Alveo cards breaks the 40Gbps channel into four 10Gbps channels and processes requests on each individually. This enables Algo-Logic to make the best use possible of the FPGA resources available to them.
Furthermore, to overcome network overhead issues, Algo-Logic packs 44 get requests into a single 1,408-byte packet. For those familiar with Redis, this is similar to an MGET (multiple get) request. A single 32-byte get request easily fits into the smallest Ethernet payload, which is 40 bytes, but Ethernet then adds an additional 24 bytes of routing and a 12-byte frame gap; using a single request per packet results in networking overhead consuming 58% of the available bandwidth. This is huge and can clearly impact the total requests-per-second rate. Packing 44 requests of 32 bytes each into a single packet drops the network overhead to 3% of the total bandwidth, which means significantly greater requests-per-second rates.
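Those two percentages follow directly from the byte counts above. A small sketch that reproduces them, treating everything beyond the 32-byte requests themselves, including minimum-payload padding, as overhead:

```python
def overhead_fraction(requests_per_packet, request_bytes=32,
                      min_payload=40, framing_bytes=24 + 12):
    """Fraction of wire bandwidth that isn't request data, using the
    article's numbers: 24 bytes of routing plus a 12-byte frame gap."""
    payload = max(requests_per_packet * request_bytes, min_payload)
    total = payload + framing_bytes
    useful = requests_per_packet * request_bytes
    return 1 - useful / total

print(round(overhead_fraction(1), 3))   # 0.579: the 58% figure
print(round(overhead_fraction(44), 3))  # 0.025: roughly the 3% figure
```

With one request, only 32 of 76 bytes on the wire are useful; with 44 requests, 1,408 of 1,444 bytes are, which is the whole argument for batching.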
What Algo-Logic has done here is extraordinary. They’ve found a way to tightly link the 8GB of High Bandwidth Memory on the Xilinx Alveo U50 to four independent key-value store instances that can service requests in well under a microsecond. To learn more, consider reaching out to John Hagerman at Algo-Logic Systems, Inc.
As system architects, we seriously contemplate and research the components to include in our next server deployment. First, we break the problem being solved into its essential parts; then, we size the components necessary to address each element. Is the problem compute-, memory-, or storage-intensive? How much of each element will be required to craft a solution today? How much of each will be needed in three years? As responsible architects, we have to design for the future, because what we purchase today, our team will still be responsible for three years from now. Accelerators complicate this issue because they can both dramatically breathe new life into existing deployed systems and significantly skew the balance when designing new solutions.
Today foundational accelerator technology comes in four flavors: Graphical Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Multi-Processor Systems on a Chip (MPSoCs) and most recently Smart Network Interface Cards (SmartNICs). In this market, GPUs are the 900-pound gorilla, but FPGAs have made serious market progress the past few years with significant deployments in Amazon Web Services (AWS) and Microsoft Azure. MPSoCs, and now SmartNICs, blend many different computational components into a single chip package, often utilizing a mix of ARM cores, GPU cores, Artificial Intelligence (AI) engines, FPGA logic, Digital Signal Processors (DSPs), as well as memory and network controllers. For now, we’re going to skip MPSoCs and focus on SmartNICs.
SmartNICs place acceleration technology at the edge of the server, as close as possible to the network. When computational processing of network-intense workloads can be accomplished at the network edge, within a SmartNIC, it can often relieve the host CPU of many mundane networking tasks. Normal server processes require that the host CPU spend, on average, 30% of its time managing network traffic; this is jokingly referred to as the data center tax. Imagine how much more you could get out of a server if just that 30% were freed up, and what if more could be made available?
SmartNICs that leverage ARM cores and/or FPGA logic cells exist today from a growing list of companies like Broadcom, Mellanox, Netronome, and Xilinx. SmartNICs can be designed to fit into a Software-Defined Networking (SDN) architecture. They can accelerate tasks like Network Function Virtualization (NFV), Open vSwitch (OvS), or overlay network tunneling protocols like Virtual eXtensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). I know, networking alphabet soup, but the key here is that complex routing and packet encapsulation tasks can be handed off from the host CPU to a SmartNIC. In virtualized environments, significant amounts of host CPU cycles can be consumed by these tasks. While they are not necessarily computationally intensive, they can be volumetrically intense. With data center networks moving to 25GbE and 50GbE, it’s not uncommon for host CPUs to process millions of packets per second. This processing happens today in the kernel or hypervisor networking stack. With a SmartNIC, packet routing and encapsulation can be handled at the edge, dramatically limiting the impact on the host CPU.
If all you were looking for from a SmartNIC is to offload the host CPU from having to do networking, thereby saving the data center networking tax of 30%, this might be enough to justify the expense. Most of the SmartNIC offerings from the companies mentioned above run in the $2K to $4K price range. So suppose you’re considering a SmartNIC that costs $3K, and with the proper software, under load testing, you’ve found that it returns 30% of your host CPU cycles; what is the point at which the ROI makes sense? A simplistic approach would suggest that $3K divided by 30% yields a system cost of $10K. So if the cost of your servers is north of $10K, then adding a $3K SmartNIC is a wise decision. But wait, there’s more.
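That break-even arithmetic generalizes into a one-liner worth keeping around when comparing cards (the function name and the proportional-value assumption are mine):

```python
def breakeven_server_cost(nic_cost, cpu_fraction_returned):
    """Server price above which the SmartNIC pays for itself, under the
    simplistic assumption that reclaimed CPU cycles are worth a
    proportional share of the server's cost."""
    return nic_cost / cpu_fraction_returned

# The example from the text: a $3K NIC returning 30% of CPU cycles
# breaks even on servers costing $10K or more.
print(breakeven_server_cost(3_000, 0.30))
```

The same function lets you sanity-check a pricier card: a $4K NIC that only reclaims 20% of cycles would need a $20K server to justify itself on offload alone.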
SmartNICs can also handle many complex tasks like key-value stores, encryption and decryption (IPsec, MACsec, soon even SSL/TLS), next-generation firewalls, electronic trading, and much more. Frankly, the NIC industry is at an inflection point similar to when video cards evolved into GPUs to support the gaming and virtualization markets. While Sony coined the term GPU with the introduction of the PlayStation in 1994, it was Nvidia, five years later in 1999, who popularized the GPU with the introduction of the GeForce 256. I doubt that in the mid-1990s, while Nvidia was designing the NV10 chip, the heart of the GeForce 256, its engineers were pondering how it might be used a decade later in high-performance computing (HPC) applications that had nothing to do with graphics rendering. Today we can look at all the ground covered by GPU and FPGA accelerators over the past two decades and quickly see a path forward for SmartNICs, where they may even begin offloading the primary computational tasks of a server. It’s not inconceivable to envision a server with a half dozen SmartNICs, all tasked with encoding video, acting as key-value stores or web caches, or even trading stocks on various exchanges. I can see a day soon when the importance of SmartNIC selection will eclipse server CPU selection when designing a new solution from the ground up.
I’m actively seeking a new job, please see the above subtitle. If you want to learn more consider visiting my Linkedin Profile.
A distinctive characteristic of our species, perhaps the most unique, and the one that has separated us from all the others, is the compounding effect of our technology. Each generation has added to our collective knowledge, improved our processes, and accelerated our development. Today we regularly craft complex products with billions of internal components, from devices that are one-hundredth the diameter of a blood cell. Regardless of how small technology enables us to shrink our creations, one thing remains constant: we still need to place our trust in this technology for it to make a difference in our lives. Few people understand how Alexa takes our verbal request and turns it into an answer; the how is unimportant to most. What is important is that when she provides us with information, we can trust and act upon that information. Without trust, technology loses all its advantage; it falls into disuse and is eventually pruned from our collective knowledge base. Trust is the cement that binds one innovation to the next, and it is vital to the advancement of technology. Trust is a fragile construct, though, which can easily be destroyed.
My childhood was enjoyed in a suburb an hour north of the Big Apple. From the mid-1960s through most of the 1970s, we never locked our doors; even the garage was often left open overnight. Our home was a simple raised-ranch tract structure built in a sleepy little town, only a decade more advanced than Mayberry. Most Saturday mornings, I’d ride my bike five miles into town to turn in my paper route money. Then I’d take my wages, buy a slushy near the firehouse, stop at the Radio Shack to see what was new, finish with a Big Mac lunch at McDonald’s, and arrive home by early afternoon. If the weather was beautiful, I’d swing by the house, pick up my rod, and head down to one of a half dozen or more fishing spots on the nearby reservoir. I only needed to be home for dinner. Life was simple, and trust wasn’t earned; it was our default setting. This was decades before my first pager or cell phone, but I always had a dime in my pocket to call home from a payphone in the event the weather turned or my bike failed. One Sunday, when I was twelve, we came home early from church to find our next-door neighbor’s son sitting on the back steps with several of our prized belongings in his hands. My parents’ trust, especially my mom’s, was shattered. This single event changed everything and established a new paradigm. We started locking our doors, and my mom gave me a brass key for the first time in my life.
Trust is an interesting attribute; we give it away for free, then we’re shocked when it’s abused or entirely disregarded. The above brass key represented a simple technological solution designed to bridge the trust my mom had lost in our neighbors. It’s interesting to see how a single small piece of metal, nothing more than a token with a single function, can replace trust lost. Many years later, as a security professional, I learned how easily that custom piece of brass could be supplanted by two generic pieces of spring steel, some skill, and a few seconds. Technology is the distillation of our expertise, processes, and techniques in the production of goods or services, so why is trust important?
As we glide into the age of the Internet of Things (IoT), everything will become interconnected, and trust will be the cement in the foundation on which all this technology depends. I’m in the process of building a new home. It will feature the latest IoT: locks, garage door opener, doorbell, thermostats, smoke detectors, light fixtures, outlets, appliances, speakers, cameras, and even an elevator. Everything will be interconnected, and Alexa will have dominion over it all. As I come home, my garage door will open, and it will trigger a series of events throughout the house if nobody else is already back. The HVAC system will make the necessary adjustments based on my preferences and the time of year. Depending on the time of day, lights may come on in a predetermined sequence, and music will be playing. If my programming works out properly, the TV will display anomalous events since my departure skimmed from the various logs of all these IoT devices. I’ll then know if doors were opened while I was absent, and if so, I can call up and review all motion video captured at each of these points of entry. All of this will require each piece trusting that the others are performing correctly.
This is not to say that we haven’t seen trust in IoT devices bypassed in the recent past. Three common agents can violate the trust inherent in any system: insiders, outsiders, or the manufacturer. By insiders, I generally mean the average non-technical system user; in the example above, that would be my wife, daughter, or parents when they visit. Outsiders are folks with malicious intent, whose objectives are not aligned with the users’; their goal is the exploitation of the system, often for some revenue-generating purpose. Finally, there is the manufacturer; until the past decade this was a non-issue, but we’ve seen a growth in state-sponsored exploitation of technology both in design and within the supply chain.
A story came out last year where a Nest camera was used by a malicious outsider to terrorize an eight-year-old girl in her bedroom. While the camera was “hacked,” it was later revealed that the homeowner had a trivial password for the camera and had NOT enabled two-factor authentication (2FA). The attacker used nothing more than a basic web-crawling service to find the addresses of Nest cameras; then they likely used a tool like Hydra to see if any of those cameras had a trivial password without 2FA enabled. Ultimately, it was the homeowner who had left the “door open” for this attacker to walk through. While Nest perhaps shouldn’t make 2FA mandatory, they could easily have prevented the homeowner from assigning a trivial password to their account.
We’ve seen reports over the years that various smartphones have been susceptible to hot-mic vulnerabilities. The malicious code is installed via a targeted spear-phishing attack or social engineering. Once the code is executed, the smartphone’s mic can be enabled or disabled at will by the attacker. This lets the attacker listen in not only on phone calls but on all the sounds captured by that smartphone, regardless of what application is running or what state the phone is in (unless, of course, it’s off).
Finally, we have manufacturers, who have been both knowingly and unknowingly duped into including spyware in their products. Laptops have been a common platform for concern in this space, and several spyware apps have shipped with new laptops over the past decade. Servers are a bit harder to infect, as they often ship with no pre-installed applications, with the possible exception of the OS. Here we’ve heard stories of supply chains being compromised and covert spy hardware being physically inserted into these products, possibly without the manufacturers being aware of the transgression. It’s hard to know the true story.
So as IoT consumers, what can we do? Well, we have four possible courses of action:
1. Become a Luddite, ignore the trend in IoT, and remove all technology from your life. While this is a choice, if you’re reading this, it isn’t one any of us would find acceptable.
2. Be a sheep, blindly trust everyone, buy the latest gear, and auto-install every update. For the vast majority of folks, this is the only viable option. They likely aren’t technology literate much beyond creating a password, and their lives are focused on other more important pursuits.
3. Trust, but read industry news and form your own opinion, then upgrade when you’re confident it’s appropriate and an improvement. This is where the vast majority of IT folks will land. They’ll stay current with trends, follow Reddit, form their own opinions, and provide support for their families and friends.
4. Trust, but verify by actively doing your own network captures. This is the elite core of bleeding-edge folks who watch their home network on their smartphone for new devices. At least once or twice a year, they’ll do some network captures during quiet times to see which devices might be overly chatty and whether there are any latent security threats. They may even have small autonomous systems like Raspberry Pis actively looking for threats, perhaps even posing as honeypots.
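The core of option 4 is nothing more exotic than set difference: compare what’s on the wire now against a known-device list. A toy sketch (the MAC addresses are invented; in practice the current set would come from parsing `arp -a` output or a packet capture on something like a Raspberry Pi):

```python
def new_devices(current_macs, known_macs):
    """Return MAC addresses seen on the network that aren't on the
    household's known-device list."""
    return set(current_macs) - set(known_macs)

known = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}  # made-up addresses
seen = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "de:ad:be:ef:00:99"}
print(new_devices(seen, known))  # the one unexpected device
```

Run on a schedule, a few lines like this turn “watch the home network for new devices” from a chore into an alert.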
Since IoT devices are always on, they are ideal for co-opting into a distributed denial-of-service (DDoS) attack platform. We’ve seen this happen a number of times over the past few years: one security hole, and thousands or even millions of products become launch platforms. IoT manufacturers need to enforce strong passwords on their gear and promote 2FA. They should also hire security professionals annually to test their products and services, and consider sharing those results with their customers in public Reddit groups. Oftentimes, customers provide the best feedback to improve a product’s feature set and security stance.
Yesterday, during one of my many calls each week with my seventy-something mom, she mentioned that she might pass on going to her close friend's 80th birthday party. When I asked why, she said that the four-and-a-half-hour drive up the Florida Turnpike was becoming too scary; people are continually cutting her off, and it makes her very fearful. Mom hasn't had an accident in decades, and she doesn't have any of the usual scratches and small dents that often deface the autos of our greatest generation. Her vision is excellent, her memory is intact, and her reflexes are still acceptable. My dad passed seven years ago of lung cancer, and in the final weeks of his life, we had to insist that he no longer drive. At that time, the O2 saturation in his blood would often drop when he sat for a few minutes, and he'd fall asleep through no fault of his own. Insisting your parent no longer drive and removing access to their car is not a pleasant task.
On relaying this story yesterday to a friend, she mentioned that her mom, also well into her seventies, had significant macular degeneration and was still driving. It wasn't until her daughter noticed a dent that her mom volunteered her medical condition. Once that was exposed, they too had to face the task of removing her freedom to travel at will. Another friend has a mom with mild dementia; while her driving skills are still sharp, she sometimes forgets where she is going or how to get home. They chose to put a tracker on her car and geofence around her house, church, and market, so that as long as she stays within a half-mile of this triangle, she can roam at will. If she gets worried or "lost," family members can quickly look up on their smartphones where she is and calmly provide verbal directions to guide her to her destination. While I don't agree with this approach, it's not my place to tell them otherwise. Driving is a privilege, but over a certain age, we often perceive it as a right, and taking that away from someone can be mentally crippling. Autopilot should be a fantastic feature for this demographic, but unfortunately, they aren't, and never will be, intellectually prepared to adopt it. We need to get there in steps.
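The geofence described above is simple to reason about: the car is "inside" if it sits within half a mile of any anchor point. A minimal sketch, assuming made-up coordinates for the house, church, and market (commercial trackers expose similar alert rules):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical anchor points: home, church, and market.
ANCHORS = {
    "home":   (25.0865, -80.4473),
    "church": (25.0901, -80.4365),
    "market": (25.0823, -80.4512),
}

def inside_geofence(lat, lon, radius_miles=0.5):
    """True if the car is within the radius of any anchor point."""
    return any(
        haversine_miles(lat, lon, a_lat, a_lon) <= radius_miles
        for a_lat, a_lon in ANCHORS.values()
    )
```

When the check returns False, the tracker would push an alert to the family's phones.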
Many were surprised by a Super Bowl commercial this year aptly named "Smaht Pahk," in which a 2020 Hyundai Sonata parks itself in an otherwise tight spot. This feature is made possible by a new breed of computer chips that fuse computing and sensor processing on the same chip. When we say sensor processing, in this case we're talking about receiving live data from 12 ultrasonic sensors around the car, four 180-degree fisheye cameras, two 120-degree front- and rear-facing cameras, GPS, and an inertial measurement unit (IMU). All of this is then consumed by some extremely smart Artificial Intelligence, which finds and steers the car into a safe parking spot. This article, though, is about Autopilot, so why are we talking about self-parking?
As technology marketers, we've learned that cutting-edge features will quickly become a boat anchor if consumers aren't intellectually prepared to accept them. My favorite example is the IBM Simon: arguably the first smartphone, brought to market 13 years before Steve Jobs debuted the "revolutionary" Apple iPhone. The Simon was on the market for only seven months and sold a mere 50K units. Even more surprising, the prototype was shown two years earlier at the November 1992 COMDEX. There will always be affluent bleeding-edge early adopters, in the above case 50K of them, who will purchase revolutionary products, but the gulf between sales to these consumers and the mass market can often be enormous. IBM was correct in pulling the Simon so quickly after its introduction because mass-market consumers were at least a decade behind in adoption. We needed to experience MP3 players in 1998 to accept the Apple iPod three years later in 2001. We also needed to carry around a wide assortment of cell phones, personal organizers, and multifunction calculators. Every one of these devices prepared consumers for the iPhone in 2007. As technology marketers, we need to help consumers walk before we can expect them to run.
Self-driving cars have appeared in science fiction movies many times over the years; one of my favorite scenes features Sandra Bullock in "Demolition Man" (1993), set in 2032. Self-driving isn't even mentioned; she's busy face-timing with her boss as her car speeds down the highway. In the foreground, the steering wheel is retracted and moving on its own. We need to slow-roll the public into becoming comfortable yielding control of driving over to the car itself. Technologies like "Auto Emergency Braking," accepting help from "Lane Keeping Assist," and "Smart Park" are feature inroads that will make self-driving commonplace. Given how consumers adopt technology, it wouldn't be surprising at all if it's 2032 before self-driving becomes standard in most vehicles. Now, Elon Musk and his team at Tesla are all brilliant people, as were the IBM Simon team. The difference, though, is that Tesla is selling a car first while delivering a mobile computational platform. The IBM Simon was viewed as a digital assistant first and a phone second. The primary functionality is critical to consumer perception. Consumers know how to buy a car; heck, we have a century of experience in this market. Conversely, if Tesla had chosen to market their technology as a mobile computing platform, they'd have gone out of business years ago. I'm sure some readers are still scratching their heads at the notion of a mobile computing platform.
Consumers have become comfortable with their smartwatches and phones, tablets, and computers, all autonomously upgrading while we sleep, so why should their car be any different? Imagine a car whose features are updated remotely and autonomously at night while it is charging. Today Tesla’s Autopilot is restricted to highway driving, with smart features like lane centering, adaptive cruise control, self-parking, automatic lane changing, and summon. Later this year, via a nightly update, some models will pick up recognizing and responding to traffic lights and stop signs, then automatically driving on city streets. So how is this possible? It all goes back to the technology behind self-park.
For all these advanced driving features to work, we need to put computing as close as possible to where the data originates. These computations also need to be instantiated in hardware, easily reprogrammable, ruggedized, and run as fully autonomous systems. General-purpose CPUs or even GPUs won't cut it; these applications are ideal for FPGAs coupled with complete systems on a chip. People aren't going to wait while their car boots up and loads software into all its systems. We are accustomed to pressing a button to start the car, shifting it into gear, and going.
A truly intelligent autopilot that could go from the home garage to a parking space at the destination and back would address all the above issues for our greatest generation. My mom, who can still drive, should be content supervising a car while it maintains a reasonable highway speed and deftly avoids the automobiles around it. She could then roam from her home in the Florida Keys up both coasts to visit friends, because she'd once again be confident behind the wheel. Autopilot is the solution our aging boomers require to maintain their freedom till the very end. Unfortunately, many are too old to accept it intellectually, my mom included. The tail end of the Boomers, perhaps those born in the early 1960s, are the older side of Tesla's core demographic for this $7,000 Autopilot feature. It's a shame that the underlying technology and its application came too late for my mom and her generation.
When people find out that I'm connected to digital currency mining, the first question they often ask is the one above. Sadly, as an individual, the answer is no. It would cost you roughly $200 to get started and $1.74/day to call yourself a Bitcoin miner; yes, it's a money pit right now. Oh, and you'd probably drive your family crazy with all the noise. Here are the economics behind Bitcoin (BTC) mining at this moment.
Today, and today is important because all the numbers below are very fluid, a new Bitcoin block is mined roughly every 11 minutes and produces a block reward of 12.73 BTC, which includes the transaction fees earned in the process. Using the following formula, we can see that roughly 70 BTC are earned hourly, or about 1,667 BTC daily:
(60 min/hour ÷ 11 min/block) × 12.73 BTC/block ≈ 69.44 BTC/hour
69.44 BTC/hour × 24 hours/day ≈ 1,666.56 BTC/day
At this moment, the total computational power working to mine BTC is 82,030 Peta Hashes per second, or, in more standard units, 82,030,000 Tera Hashes per second. One of the most affordable and efficient miners available now is the Bitmain Antminer S9K, which retails for $101, but after import taxes and shipping from China, expect it to run you $200. This box produces 14 TH/sec, so if you put one online, you'd represent 0.0000001707 of the total capacity right now. Multiply that by the daily BTC reward, and you could earn 0.0002344 BTC a day. With BTC trading at $7,608 USD, that means you could earn $1.78 USD/day before mining pool fees and the cost of power. Pool fees often run at 5%, bringing your earnings down to $1.69 USD/day. The S9K draws 1.19 KW, so at $0.12/kWh it requires $3.43/day in electricity, leaving you out of pocket $1.74/day.
If we work the numbers backward, we find that BTC would need to be trading at something over $14,633/BTC for us to be breaking even.
If we ever want to earn back our $200 capital investment, we should assume a six-month payback, the industry rule of thumb for ASIC mining rigs today. That would require earning at least $1/day after costs, which in turn requires the price of BTC to remain at or above $18,900.
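The arithmetic above is easy to lose track of, so here is the whole chain in one place. This is a sketch using the figures quoted in the text, all of which fluctuate daily:

```python
def daily_mining_economics(btc_per_day, btc_price_usd,
                           rig_kw=1.19, usd_per_kwh=0.12, pool_fee=0.05):
    """Return (gross, after_fees, power_cost, net) in USD per day."""
    gross = btc_per_day * btc_price_usd
    after_fees = gross * (1 - pool_fee)
    power_cost = rig_kw * 24 * usd_per_kwh
    return gross, after_fees, power_cost, after_fees - power_cost

# One Antminer S9K (14 TH/s) at the figures quoted in the text.
gross, after_fees, power, net = daily_mining_economics(0.0002344, 7_608)
# gross ≈ $1.78, after fees ≈ $1.69, power ≈ $3.43, net ≈ -$1.73/day
# (the -$1.74 in the text comes from subtracting the rounded figures)

# Break-even BTC price on power alone, before pool fees:
breakeven_usd = (1.19 * 24 * 0.12) / 0.0002344  # ≈ $14,621
# (the text's $14,633 uses the rounded $3.43/day power figure)
```

Swapping in a different price, hash-rate share, or electricity rate shows immediately how sensitive the break-even point is.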
Now, one could adopt the famous "mine and hold" strategy: hold onto every BTC you earn, and sometime six months or more in the future, when BTC is trading above $18,900, sell and at least break even.
It should be noted that in May of 2020, Bitcoin will go through a halving event, after which the reward for each block will be cut in half; at that point, the price of BTC is expected to climb significantly. So mine and hold could pay off in spades.
One final thing to consider, these BTC mining rigs are essentially two 5.25″ fans blowing air over hundreds of chips, so they are noisy and hot. They are 1200W space heaters that happen to produce a little BTC. So if you do want to venture into this market as an individual, you should consider doing it in a sound-proof room and then venting the heat to someplace useful, perhaps grandma’s room, she’s always cold.
*Note: These numbers were from November 21st, when this piece was first written. Since then, Bitcoin has gone from $7,608 to $6,632, so it's now even less profitable.
It doesn't matter if you're panning for gold, drilling for oil, or mining Bitcoin; your success is bounded by your best answers to what, how, when, and where. Often the "what" and "how" are tightly linked. If you own oil drilling equipment, you're probably going to continue drilling for oil. If you buy an ASIC-based Bitcoin mining rig, you can only mine Bitcoin. Traditionally, "when" and "where" are the most fluid variables to address. A barrel of crude oil today is $57, but over the past year, it has fluctuated between $42 and $66. Similarly, Bitcoin, during the same year, has swung between $3,200 and $12,900, so answering the "when" can be very important. Fortunately, digital currencies can easily be mined and held, which allows us to artificially shift the "when" until the offer price of the commodity achieves the necessary profitability. In digital currency mining, the term is sometimes written HODL; originally a typo, it has since morphed into "Hold On for Dear Life," until the currency is worth more than it cost you. Finally, we have the "where," and I'm sure some are wondering why "where" matters in digital currency mining.
Moving backward through the above questions, and drilling down specifically into digital currency mining: "where" is the easiest one. You want to install your mining equipment wherever you can get the cheapest power, manage the excess heat, and tolerate the noise. Recently, two of the largest mining facilities, both around 300 MW, have been or are being stood up in former aluminum plants. When making aluminum, the single most costly input is electricity, and the process requires access to vast volumes of it. Often these facilities are located near hydroelectric plants where electricity is below $0.03/kWh. Also, since every watt of power is converted into heat or sound, you need a method for cost-effectively dealing with these byproducts. One of the mining operations mentioned earlier is located in the far northern region of Russia, which makes cooling exceptionally easy. With "where," you also need a local government that is friendly to digital-currency mining; in the Russian example above, it took nearly two years to secure the proper legal support. Some countries, like China until recently, were not supportive of digital-currency mining. Enthusiasts like myself locate our mining gear in out-of-the-way places like basements or closets, perhaps even insulating them for sound and channeling the excess heat somewhere useful.
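To see why cheap power dominates the "where" decision, consider the back-of-the-envelope electricity bill for a facility of the size mentioned above:

```python
# Daily electricity cost of a 300 MW facility at sub-$0.03/kWh hydro rates.
facility_mw = 300
usd_per_kwh = 0.03
daily_cost_usd = facility_mw * 1_000 * 24 * usd_per_kwh  # kW × hours × $/kWh
print(f"${daily_cost_usd:,.0f}/day")  # roughly $216,000 per day
```

At the $0.12/kWh residential rate used earlier, that same load would cost four times as much, which is the whole ballgame at this scale.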
Concerning "when," that should be now. The general strategy executed by most of us currently mining is known as "mine and hold." With the Bitcoin halving coming in May, the expectation is that Bitcoin will see a run-up to that point. In the prior two Bitcoin halvings, the price remained roughly the same before and after the event. The last halving was in July 2016, and since then, Bitcoin has gone from a niche commodity to a mainstream offering. Just this past week, Fidelity was awarded a trust license to operate its digital assets business, further proof Bitcoin has gone mainstream. As Bitcoin is the dominant digital currency, it is believed that as it rises, so shall many of the other currencies that use it as a benchmark. So holding some of the other mainstream digital currencies, like Ethereum, should also see a significant benefit from a substantial increase in the value of Bitcoin.
Back to the "what" and "how." With digital currency mining, you have two criteria to consider when answering the "how": efficiency or flexibility. If you purchase a highly efficient solution, it will be an ASIC-based mining rig. You will then soon learn, if you haven't already, that it has been designed to mine a single currency, and that's ALL it can ever mine. Conversely, if you want flexibility, an FPGA or GPU miner affords you various degrees of freedom, but again the trade-off between efficiency and flexibility comes into play. FPGA mining rigs are often 5X more efficient per watt than GPU-based rigs, but the selection of FPGA bitstreams is finite, though growing monthly. Both FPGA and GPU rigs can easily switch from mining one coin to another with nominal effort; it's the efficiency, and what can be mined, that separate the two.
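The efficiency-versus-flexibility trade-off is easy to make concrete. The sketch below uses purely illustrative hash rates and wattages, not vendor specs, chosen so the FPGA lands at the 5X-per-watt advantage mentioned above:

```python
# Purely illustrative rigs -- hash rates, wattages, and coin counts are
# invented for this sketch, and hashes across different mining algorithms
# are not directly comparable.
RIGS = {
    # name: (hashes_per_second, watts, number_of_minable_coins)
    "ASIC": (14e12, 1190, 1),   # one baked-in algorithm, forever
    "FPGA": (5e9, 300, 12),     # reprogrammable bitstreams, catalog growing
    "GPU":  (1e9, 300, 50),     # mines nearly anything, least efficient
}

def hashes_per_watt(hps, watts):
    """Efficiency metric: hashes per second per watt of draw."""
    return hps / watts

for name, (hps, watts, coins) in RIGS.items():
    print(f"{name}: {hashes_per_watt(hps, watts):.2e} H/s/W, "
          f"{coins} minable coin(s)")
```

The ASIC wins on raw efficiency by orders of magnitude, but its minable-coin count never grows; the FPGA and GPU columns do.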
Finally, I've neglected to address the most obvious question: "why?" This is both the root of our motivation to mine and the fabric of our most social network. "Our only hope, our only peace is to understand it, to understand the 'why.' 'Why' is what separates us from them, you from me. 'Why' is the only real source of power; without it you are powerless. And this is how you come to me, without 'why,' without power." – Merovingian, "The Matrix Reloaded" (2003)
With Bitcoin turning ten years old, we're still debating whether it is a currency, a commodity, or a security. Actually, we're talking about this whole family of digital tokens granted as rewards for doing some unit of digital work. The legal status of crypto came up again earlier this week as the popular communications platform Telegram sought to issue a new token called the Gram. The Securities and Exchange Commission (SEC) has alleged that Telegram's token should be classified as a security, and as such, it secured a court order last month blocking the issuance of the Gram.
In response, Telegram is seeking the court's assistance to legally define the status of the Gram token. In its filing, Telegram stated that the Plaintiff's (the SEC's) "claims are without merit as Telegram's private placement to highly sophisticated, accredited investors was conducted pursuant to valid exemptions to registration under the federal securities laws and Grams will not be securities when they are created at the time of launch on the TON Blockchain." Furthermore, the "[…] Plaintiff has engaged in improper 'regulation by enforcement' in this nascent area of the law, failed to provide clear guidance and fair notice of its views as to what conduct constitutes a violation of the federal securities laws, and has now adopted an ad hoc legal position that is contrary to judicial precedent and the publicly expressed views of its own high-ranking officials," the filing adds.
The Telegram team had "voluntarily engaged" with the SEC prior to the filing in an effort to stay on the proper side of the governing laws and regulations. Unfortunately, the SEC failed to assist Telegram and only engaged when it moved to enforce its newly established position. Telegram has since suspended the launch of the Gram. Perhaps when "Securities" is in the name of your agency, you default to viewing nebulous new tokens as securities, thereby expanding your domain of control and further establishing your relevance in the rapidly expanding crypto market.
We often seek to classify Bitcoin as a currency, but honestly, it fails even the most basic test. A currency is a "generally accepted medium of exchange for goods or services," and I'm sorry, but Bitcoin isn't "generally accepted," even after a decade of use; perhaps one day, but not today. I've bought mining hardware with Bitcoin, and there have been some trendy businesses willing to accept it, but it is by no means generally accepted. Wikipedia does view Bitcoin and some of its popular cousins, like Ethereum, Litecoin, and Monero, as "alternative currencies," but honestly, they are used as commodities. Wikipedia defines a commodity as a "marketable item produced to satisfy wants or needs." That sounds more like crypto. Furthermore, the "price of a commodity good is typically determined as a function of its market as a whole." In crypto, there are three ways to turn a profit: you can mine blocks for the reward, receive transaction fees, or trade. There are variations in the origin of mining revenue based on how the effort is applied (proof of work, proof of stake, and perhaps others), but in general, it is a reward for maintaining the blockchain and the network.
A hobbyist can still squeeze out a few bucks a day mining alternative currencies, while Bitcoin and Ethereum mining are for the really big boys, the institutional miners. Outside of these big institutional miners, most of us make money in crypto these days by trading it. Dozens of smartphone applications have cropped up to enable us to trade crypto 24/365. How many of us have stagnant limit orders to sell off some overpriced crypto we picked up in the run last week? All of this will change again when China releases its digital currency in the coming months, or when Facebook issues Libra next year. This week, "Facebook Pay" was rolled out as the next step in conditioning Facebook users to pay for goods and services within the platform. As nearly two billion users become familiar with using the platform as a method of payment, it will be trivial for Facebook to slot in Libra and, over time, phase out PayPal and other more traditional methods of payment. Once Libra becomes a "generally accepted" medium of payment, crypto will have made the jump from commodity to currency.