Digital Currency: Money, Our Second Social Network

This is the first in a series designed to dispel the mystique of digital currency (think Bitcoin, though trust me, we'll go well beyond that). My goal is to explain in plain language everything you'll need to know about digital currency, so you can confidently answer questions when you sit down with your first social network, your family.

As a species, our most significant evolutionary trait is our capability for building social networks. While social networks are found in many other higher-order mammals, we really do take it to the next level. Let's face it, for our first 36 months outside the womb we're pretty much a defenseless bag of water rolling around wherever we're placed. We can't effectively flee from even the most basic predator without assistance. From the moment we're born we establish strong bonds with our parents and siblings, who become our first social network, our family. In our formative years, this network fills all our basic needs. It is this first network that introduces us to the second network we join, often very early in life, and that is the network of money.

Initially, we learn how to spend our first social network's money as we roll down the aisle of the market and point out the foods we like. Soon members of our first network are sharing their money with us in return for our time when we do a task or achieve a milestone. Many of you have probably never viewed money as a social network; until recently it hadn't occurred to me either, but in fact, it is. On its face money is nothing more than a worthless token designed exclusively to be exchanged. It is these exchanges that form a network of commerce. If you closely examine your tightest social bonds outside of your first network, they very likely were started or fueled by money. Outside of family, one of my closest friends exists because over a decade ago I exchanged money in return for joining another social network. While I haven't been associated with that network for years, this friend still shows up whenever I need him most, and it's no longer about money.

Terminology is critical to our understanding. Mentally we often have trouble grasping something until we can assign it a name. For years you've carried money around in your pocket, but have you ever considered it as your fiat currency? Here in the US, our fiat currency is the dollar. It's very possible you've taken it for granted so long you never even viewed it this way. A fiat currency is the one officially sanctioned and managed by the government, another network, under which you live. If you only need one currency, in my case the dollar, then the concept of fiat becomes moot, but what happens when you begin to use more than one?

Sitting here in North Carolina a month ago, for the first time in my life I needed to spend some Bitcoin which I'd been gifted a few years earlier. Bitcoin is the most well-known digital currency, but it is NOT a fiat currency because it hasn't been issued by a government. The vendor in China I wished to purchase a small digital currency mining rig from ONLY accepted Bitcoin, NOT my US dollars via credit card. At the time Bitcoin was trading around $10,000 USD, so buying something for $250 USD meant I was spending a fraction of a Bitcoin. I'm only aware of a single fractional unit of a Bitcoin, and that's called a Satoshi, which is equal to one hundred-millionth of a Bitcoin. So, I shelled out roughly 2.5M Satoshi for my rig and anxiously awaited its arrival. We'll dive more into Bitcoin and Satoshi in future posts, but I thought it a prudent example where the fiat currency wasn't accepted. Next year we'll have Libra, Facebook's digital currency, and that's already giving those who manage our fiat currencies serious concerns.
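
To make the fractional math concrete, here is a small C sketch of the conversion just described; the price and purchase figures are simply the ones from this story.

```c
#include <math.h>
#include <stdio.h>

#define SATOSHI_PER_BTC 100000000LL  /* 1 Bitcoin = 100,000,000 Satoshi */

int main(void)
{
    double btc_price_usd = 10000.0;  /* the Bitcoin price from the story */
    double purchase_usd  = 250.0;    /* the cost of the mining rig */

    double    btc_spent = purchase_usd / btc_price_usd;         /* 0.025 BTC */
    long long satoshi   = llround(btc_spent * SATOSHI_PER_BTC); /* 2,500,000 */

    printf("%.4f BTC is %lld Satoshi\n", btc_spent, satoshi);
    return 0;
}
```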

In computer science, there’s a concept called Metcalfe’s law which states that the value of a network is equal to the square of the number of nodes or members in that network. My extended family has roughly fifty members, so its potential value is fifty squared or 2,500. My friend has nearly 100 in his extended family, so his family’s value is 10,000. Metcalfe’s law came along with computer networking, long after Alexander Graham Bell had invented the telephone, but Bell was aware that the value of his invention would only be truly realized once it had been widely adopted. Within 10 years of its invention over 100,000 phones had been installed in the US. Bell died 46 years after his invention knowing that his new network had changed the world.  

Facebook is the largest social network our species has ever created. With over two billion active users, roughly one person in four uses this network monthly. Right now, the clear majority of those two billion people live their lives with their fiat currency, and unless they travel they rarely, if ever, deal with another. Next year Facebook will issue its own digital currency, Libra, and it will change everything. Now it should be noted that Facebook isn't a country, so the concept of them issuing currency has raised some serious concerns from those who do issue currency. Countries around the globe are taking Facebook's Libra head-on because they know it represents a Pandora's box of problems for their own monetary systems, and here's the main reason why.

Looking at Facebook as a network, we could say its value is two squared, or four; for now, let's drop all the billions, as they just make the numbers incomprehensibly large. The US has a population of 0.327 billion, so its value as a network (again without the billions) is 0.1, and China has a population of 1.386 billion, so the value of its network is 1.9. If we view these countries' populations as networks, we see that Facebook's value is twice that of the US and China combined. Extend that to currencies and you can see why all the fuss, and why you might need to understand digital currency.
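
If you want to check the arithmetic yourself, here is a quick C sketch using the same population figures (in billions) quoted above.

```c
#include <stdio.h>

/* Metcalfe's law: a network's value grows with the square of its members. */
static double metcalfe(double members)
{
    return members * members;
}

int main(void)
{
    double facebook = 2.0;    /* monthly active users, in billions */
    double usa      = 0.327;  /* population, in billions */
    double china    = 1.386;  /* population, in billions */

    printf("Facebook %.1f, USA %.1f, China %.1f\n",
           metcalfe(facebook), metcalfe(usa), metcalfe(china));
    printf("Facebook vs. USA and China combined: %.1fx\n",
           metcalfe(facebook) / (metcalfe(usa) + metcalfe(china)));
    return 0;
}
```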

Next, we’ll dismiss the intrinsic value argument, that’s where grandpa says Bitcoin is worthless because there’s nothing behind it. You can then counter with the US went off the gold standard decades ago so why isn’t the dollar worthless? We’ll provide you with that side of the argument.    

Size Matters, Especially in Computing

Yes, this is a regular size coffee cup

The only time someone says size doesn't matter is when they have an abundance of whatever is being discussed. Back in the 1980s some of us took logic design and used discrete 7400 series chips to build out our projects. A 7400 has four two-input NAND gates, with four corresponding outputs, as well as power and ground pins. It is a simple 14-pin package, about 3/4 of an inch long and maybe a quarter-inch wide, that contains a grand total of sixteen transistors. Many of the basic gates we needed for our designs used that same exact package form factor, which made for great fun. Thankfully we had young eyes back then, because often we'd be up till all hours of the night breadboarding our projects. We knew it was too late when someone would invariably slip up, insert a chip backward, and we'd all enjoy the faint whiff of burnt silicon.

Earlier this month Xilinx set a new world record with the Virtex UltraScale+ VU19P, a field-programmable gate array (FPGA) that is a distant cousin of the 7400. Instead of 16 transistors, it has 35 billion, with a "B". Also, instead of four simple two-input, one-output logic gates, it has nine million programmable system logic cells. A system logic cell is a "box" with six inputs and one output that is fully configurable and highly networked. Each individual little "box" is programmed by providing a logic table that maps all the possible six-input combinations to the single output. So why does size matter?

Imagine you gave one child a quart-sized Ziplock bag of Legos and another several huge tackle boxes of pre-sorted bricks, including Lego's own robotics kit. Assuming both children have similar abilities and creativity, which do you think will create the most compelling model? The first child's solution wouldn't be much larger than an apple, and entirely static. While it could be revolutionary, it is limited to the constraints of the set of blocks provided. By contrast, the second child could produce a two-foot-tall robot that senses distance and moves freely about the room without bumping into walls. Which solution would you find more compelling? In this case size matters in both the number and type of bricks available to the builder.

The system logic cells mentioned above are much like small Lego bricks in that they can easily replicate the capability of more complex bricks by combining several smaller ones. FPGAs are also like Legos in that you can quickly tear down a model and reuse the building blocks to assemble a new one. For the past 30 years, FPGAs have had limitations that prevented them from going mainstream. First it was their speed and size, then it was the complexity of programming them. FPGAs were hard to configure, but the companies behind this technology learned from the Graphical Processing Unit (GPU) market and realized they needed tools to make programming FPGAs easier. Today new tools exist to port C/C++ programs into FPGA bitstreams. Some might say the 2010s were the age of the GPU, while the 2020s are shaping up to become the age of the FPGA.
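
Underneath those tools, everything ultimately maps onto the logic cells described earlier. Here is a deliberately simplified C sketch of the idea: a six-input lookup table is just a 64-entry truth table that maps every combination of the six inputs to a single output bit (the real silicon is far more capable, but the concept is the same).

```c
#include <stdint.h>
#include <stdio.h>

/* A six-input logic cell, conceptually: a 64-entry truth table where the
 * six input bits form an index and the table holds the output bit. */
static int lut6(uint64_t truth_table, unsigned inputs)
{
    return (int)((truth_table >> (inputs & 0x3F)) & 1);
}

int main(void)
{
    /* "Program" the cell as a six-input AND gate: the output is 1 only
     * when all six inputs are 1, i.e. only table entry 63 is set. */
    uint64_t and6 = 1ULL << 63;

    printf("all six inputs high -> %d\n", lut6(and6, 0x3F)); /* prints 1 */
    printf("one input low       -> %d\n", lut6(and6, 0x2F)); /* prints 0 */
    return 0;
}
```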

x86 Has Hit the Wall, and Now Come the Accelerators – Part 3

TV’s Original A-Team

Accelerators are like calling in a special forces team to address a serious competitive threat. By design, a special forces team, known as an "A Detachment" or "A-Team," consists of two officers and ten sergeants, all of whom are cross-trained in five different skill areas: weapons, engineering, medical, communications, and operations intelligence. This enables the detachment to survive for months or even years in a hostile area without any operational support. Accelerators are the computational equivalent.

A well-designed accelerator has different blocks of silicon to address each of the four primary computational workloads we discussed in part two:

  • Scalar, working with integers and letters
  • Floating-point, the real numbers with decimal points
  • Vector, one-dimensional arrays of floating-point numbers
  • Artificial Intelligence (AI), vectors of low-precision floating-point numbers mixed with integers

If workload types are much like special forces skills, then what types of physical computational cores, optimized to address these specific tasks, can we leverage in an accelerator design?

For scaler problems, Intel’s x86 platform has led for decades as far back as the early 1980s. Quietly over the last 25 years, the ARM architecture has evolved. In the past five years, ARM has demonstrated everything necessary for it to be a serious data center player. Add to that ARM’s architecture licensing model which has led to third parties developing their cores which are instruction set compatible. Both of these factors have resulted in at least a dozen companies from Apple to Samsung developing their ARM core designs. Today ARM cores can be found in everything from Nest Thermostats to Apple iPhones. Today the most popular architecture for workload acceleration is the ARMv8-A. Specifically, the Cortex-A72 design which supports both 32bit and 64bit computing, with 1-4 computational cores. Today the Broadcom Stingray, Mellanox Bluefield, NXP Layerscape, and Xilinx Versal all use the ARM Cortex-A72.

When it comes to accelerating floating point, the current trend has been towards Graphical Processing Units (GPUs). While GPUs have been around for a couple of decades, it wasn't until the NVIDIA Tesla line debuted that they were viewed as a real computational accelerator. GPUs are also suitable for the third workload model, vector processing; in essence, GPUs can kill two birds with one computational stone. Another solution that can accelerate certain types of floating-point operations is the digital signal processing (DSP) engine. DSPs are very good at real-world computational problems that have a high degree of multiply-accumulates and matrix operations. Here is where some accelerator boards are stronger than others. The Broadcom Stingray only has a cryptographic engine designed to handle single-pass hashing and encryption/decryption (both scalar tasks); it lacks any sort of added acceleration for floating-point math. Mellanox's Bluefield chip also doesn't include any silicon specifically dedicated to floating-point. What they do promote is the fact that Bluefield provides GPUDirect so the processor can communicate directly with GPUs on another PCIe card. NXP only has ARM cores, so no additional floating-point support is provided. By contrast, Xilinx's Versal architecture includes anywhere from 472 to 3,984 DSP engines, depending on the chip series and model.
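
To make the multiply-accumulate point concrete, here is a minimal C sketch of a dot product, the canonical MAC-heavy kernel of the sort DSP engines execute in dedicated silicon.

```c
#include <stdio.h>

#define N 8

/* Each loop iteration is one multiply-accumulate (MAC), exactly the
 * operation DSP engines perform in dedicated silicon, many per cycle. */
static float dot(const float *a, const float *b, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

int main(void)
{
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};

    printf("dot product = %.1f\n", dot(a, b, N)); /* prints 120.0 */
    return 0;
}
```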

Artificial Intelligence (AI) workloads leverage vector processing, but instead of high precision floating point, they only require low precision or integer numbers. Again Broadcom, Mellanox, and NXP all fall short, as they don't include any silicon to process these workloads directly. Mellanox, as mentioned earlier, does support GPUDirect for passing AI workloads to another PCIe board, but that's a far cry from on-chip dedicated silicon. Xilinx's Versal architecture includes anywhere from 128 to 400 AI engines for accelerating these workloads.

Finally, the most significant differentiator is the inclusion of FPGA logic, also known as adaptable engines. This is unique to Xilinx accelerator cards. It is the capability to take frequently called routines written in C/C++ and port them over to dedicated logic, which can improve the performance of a routine by at least 8X.

In the case of Xilinx’s new Versal architecture, the senior officer is an ARM Cortex-R5 for real-time workloads. The junior one officer, and the one who does much of the work, is an ARM Cortex-A72 quad-core processor. The two ARM engines are primarily for control plane functions. Then Versal has AI cores, DSP engines, and adaptable engines (FPGA logic) to accelerate the volumetric workloads. When it comes to application acceleration in hardware the Xilinx Versal is the A-Team!

x86 Has Hit the Wall, and Now Come the Accelerators – Part 2

Before we return to accelerators as a solution, we need to make a pit stop and explore the how behind the why. The why is simple; we buy a product or service to solve a problem. We intellectually evaluate stories and experiences, distill out the solutions that apply then affix those to tangible objects or services we can acquire. Rarely does someone buy an iPad to own an iPad, they have a specific use case in mind as their justification for that expense. The same holds for servers and accelerator cards. At this point in our technological evolution, the how for most remains a mystery which needs some explanation. 

When a technician visits your home to fix a broken appliance, they don't just walk in with a lone flat-bladed screwdriver. They carry a pretty large toolbox which was explicitly assembled for repairing appliances. The contents of that toolbox are different from those of a carpenter's or automotive mechanic's. While all three might have a screwdriver, only the carpenter would have a wood chisel, and only the mechanic a torque wrench. Different problems demand different tools. For the past several decades, many of us have viewed the x86 architecture as the computational tool to solve ALL our information processing issues. Guess what: a great many things don't optimize well to the x86 model, but if you throw enough clock cycles and CPU cores at most problems, a solution will eventually be reached.

The High-Performance Computing (HPC) market realized this many years ago, so they built heterogeneous computing environments with schedulers for each type of problem. They classified problems into scalar, floating-point, and vector. Since then we've added Artificial Intelligence (AI), also known as Machine Learning (ML). Scalar problems are the ones that deal with integers (numbers without a decimal point), which is often how we represent text. So, for example, a database lookup of your name to fetch your address is entirely a scalar problem. Next, we have floating-point, or calculations with a decimal point, the real numbers. These require different computational routines, and as early as the early 1980s we introduced special numerical co-processors (early accelerators) in our PCs to handle this specific class of problems (e.g., the Intel 8087). Today we can farm this class of problems out to Graphical Processing Units (GPUs) as they have many parallel cores explicitly designed for this purpose.

Then there’s the mysterious class called vector computing. A vector is a one-dimensional array of numbers. Some might argue that vectors are just a special case of floating-point problems, and they are, but their treatment at the processing level sets them far apart. Consider the Pythagorean theorem. Solving for C when you know A and B requires not only a floating-point processor but many steps to arrive at the value for C. For illustration let’s say it takes ten CPU instructions to arrive at a value for C, it’s probably more. Now imagine you have a set of 256 values for A and a corresponding set of 256 values for B, this would take 2,560 instructions to produce a solution, the complete set C. A vector processor will load the entire set of A and B values at the same time into CPU registers, square the results in one instruction, sum them in another, square-root the last result in another then present the solution set C in a final instruction, a few instructions instead of 2,560. Problems like weather forecasting map extremely well into the vector processing model.

Finally, there is the fourth, relatively new, class of problems that fall into the realm of AI or ML. Here the math being done is vector-based; it's a mix of both integer (scalar) and real numbers, but with intentionally low precision. The difference is that the value computed doesn't always need to be perfect, just close enough. Much like when you do your taxes and leave off the change in your calculations: the IRS is okay with whole numbers because they're good enough. Your self-driving car can drift an inch or so in any direction, and it won't make any difference, as it will still be more accurate than your Grandma Nat behind the wheel.

So now, back to the problem at hand: how do we accelerate today's complicated workloads? For the past three decades, we've been taking a scalar platform, the x86 processor with floating-point capabilities, and using it as a double-ended screwdriver with both a flat and a Phillips head to address every problem we have. How do we move forward?

Stay tuned for part three, where we cover hardware acceleration platforms.

x86 Has Hit the Wall, and Now Come the Accelerators

“… when you have access to the vastness of space, you realize there’s only one resource worth fighting over… even killing for: More time. Time is the single most precious commodity in the universe.”

— Kalique Abrasax, Jupiter Ascending (2015)

Computing is humanity's purest quest to convert time into work. In 2000 IBM demonstrated slicing one second into 10 billion units (10GHz) and then squeezing computational work out of each unit. At the time IBM had defined a new 130-nanometer process they called "CMOS 9S". It was planned for future generation PowerPC chips. In parallel IBM was ramping up production of the POWER4 at 1.9GHz. Now you may be asking yourself, "but wait a minute, I've never seen any production 10GHz CPUs, especially not 20 years ago," and you're correct. IBM's POWER6 was as close as we've gotten, with one version of that chip advertised at 5GHz and lab parts reaching 6GHz. I've also heard IBM reps brag about 7GHz with POWER8 if you turn half the cores off. So why has computing hit the wall at 4-5GHz, never reaching 10GHz over the last twenty years?

Intel explained this five years ago in the blog post, "Why has CPU frequency ceased to grow?" The problem has a name, the "conveyor level." Imagine a CPU as a conveyor-belt-driven assembly line with four workstations labeled A through D. Since an assembly line is a serial process, the worker at station B can't start until the worker at station A finishes. Ideally, each station is designed to take the same amount of time to finish its work, so the following station isn't impacted. The slowest worker then defines the speed of the conveyor on any given day. So if the most time-consuming stage in the CPU pipeline takes 250 picoseconds, then the clock frequency is capped at 4GHz.
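
The pipeline arithmetic is simple enough to sketch in a few lines of C.

```c
#include <stdio.h>

int main(void)
{
    /* The slowest pipeline stage sets the clock period:
     * a 250 picosecond stage limits the clock to 1 / 250 ps = 4 GHz. */
    double slowest_stage_ps = 250.0;
    double max_clock_ghz    = 1000.0 / slowest_stage_ps; /* 1000 ps per ns */

    printf("Slowest stage %.0f ps -> maximum clock %.1f GHz\n",
           slowest_stage_ps, max_clock_ghz);
    return 0;
}
```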

There is also the issue of heat. As an electron races through a computer circuit, it experiences a form of friction known as resistance. Just like rubbing your hands together on a cold day produces heat, so does an electron zipping through a computer circuit. When designing any chip, heat is the enemy. The smaller the chip geometry, today it's seven nanometers, the more devices you can pack into a given space on a chip. More devices mean more heat. That same square centimeter of space at 7nm still has the same thermal limitations it did at 130nm 20 years ago. Sure, we can use fancy liquid systems to rapidly wick heat away from the chip, instead of relying on airflow over an area-limited heat sink, but at the end of the day, every watt of power the chip consumes becomes heat. There are also individual circuits throughout the chip specifically designed to detect and respond to overheating. The last thing anyone wants is a smoldering piece of silicon where their CPU once was. In the 7GHz example above, the IBM representative said that if you viewed the POWER8 chip as a big chessboard and turned off all the CPU cores on the white squares, then all the cores on the black squares could be clocked at nearly twice the speed, or 7GHz. Why is this interesting?

For some computational problems it's much better to have two consecutive computations in the same unit of time than two unrelated ones. Electronic trading, also known as high-frequency trading (HFT), is the premier market-driven problem that benefits most from increasing clock frequency. Traders often ascribe a dollar value to a millionth of a second, and it varies from market to market based on the rules and volumes of each market. In the end, though, it always boils down to the trader's speed and response to a market signal. If I'm faster than you at making the right decision, then I win the business and book the profit. Sticking with HFT, where do accelerators fit in?

Traders lease connections to exchanges. The closer and faster they can respond to signals from those connections, the more competitive they will be. Suppose my trading platform requires signals from the market to travel through my server, then another switch on my private network, back through a second server, then finally out to the market. The networking alone, even with kernel bypass, through two servers and a switch could easily be several microseconds. Add a few more microseconds for trading logic in both servers, and you could be looking at almost ten microseconds to submit a trade in response to a signal. Two years ago Solarflare with LDA Technologies demonstrated 98 nanoseconds tick to trade. This was using accelerator technology, and compared to the trading platform mentioned above, it is roughly two orders of magnitude faster. That's the difference between walking from NYC to LAX versus flying at Mach 5 and arriving in about an hour. Time matters, and acceleration is not just for HFTs anymore. Why do you think Google bought Myricom, Amazon picked up Annapurna Labs, Nvidia purchased Mellanox, or Xilinx acquired Solarflare?

Please stay tuned, more to come in part two. In the meantime, feel free to check out my previous articles on this topic.

The Importance of “Local”

Binary translates to “Local”

We’ve all attended large industry international trade conferences hosting tens of thousands of people. These are spectacles designed to raise brand awareness, educate those in attendance about industry advances, network with colleagues you haven’t seen in a spell, all while promoting new products and services. By contrast there are also smaller regional industry trade shows that are scaled-down versions of these larger events with many of the same objectives, and then there are Security BSides events.

For those not familiar with BSides, they were started in 2009 to further educate folks on cybersecurity at the city and regional level. Think Blackhat, but on a Saturday at the local civic center, and with perhaps 200 people instead of 19,000. Let's face it, most security engineers are introverts, so socializing at significant events like Blackhat is uncomfortable, while bringing a few coworkers or friends to a BSides event on a Saturday can be downright fun. And honestly, who doesn't want to sit for 20-30 minutes in the lock-pick village with their friends to test their skills on some of MasterLock, Schlage or Kwikset's most common products? It's heartwarming to teach a NOOB (short for newbie) how to pick a lock, then watch their excitement when the hasp clicks open for the first time.

Then there’s always the Capture the Flag (CTF) or wireless CTF for when you’re not interested in the session(s) being offered. If you’ve not played a security capture the flag event before then you really are missing something. It is a challenging series of puzzles served up Jeopardy-style. Say 10 points if you can decrypt this phrase. Or 20 points if you can determine whose attacking your machine on five different ports. Perhaps another 50 points if you can write a piece of code that can read a web page, unscramble five words, and post the five proper words back to the website in three seconds before the clock expires and the words are no longer valid. It’s an intellectual problem solving competition at its finest, and did I mention there is a leaderboard. Often projected high on the wall for all to see throughout the day are the teams with the highest scores. It really warms the heart when your team is the second on the board and it stays in the top five most of the day. While we were the second on the board at BSides Asheville, we didn’t stay in the top five for long.

More seriously though, for a $20 entry fee (which includes a T-shirt), BSides offers an affordable local gathering for cybersecurity engineers and hobbyists. BSides gives socially challenged people the opportunity to step out of their shells and reach out to like-minded individuals while networking in a comfortable and technical space. You can bond over lock-picking, a CTF challenge, during lunch or between sessions. Bring one of your nerd friends as a wingman, or better yet several to form a CTF team, and make a day of it. If you'd like to check out an online CTF, one of our favorites is RingZer0. If you want to see the hacker side of the Technology Evangelist, W3bMind5, or read about his team's experiences at BSides Asheville, they can be found at RedstoneCTF.

The RedstoneCTF team may be attending BSidesCLT on September 28th and BSidesRDU on October 19th.

7nm, Miniaturization to Integration

Last night while channel surfing I came across Men in Black III and was dropped right into the scene where a 1969 Tommy Lee Jones was placing Will Smith into the Neuralizer pictured on the left. For those not familiar with the original 1997 MiB franchise, a Neuralizer is a cigar-sized plot device, normally carried inside an agent's jacket pocket, used for washing people's memories of an alien encounter. The writers were clearly poking fun at miniaturization and how much humanity has come to take it for granted.

Those of us who grew up in the 1960s and 70s lived through the miniaturization wave as the Japanese led the industry by shrinking radios and televisions from cabinet-sized living room appliances to handheld devices. One year for Father's Day in the late 70s we bought my dad a portable black-and-white TV with a radio that ran on batteries so he could watch it on the boat in the evenings. It was roughly the size of three laptops stacked on top of one another. It may sound corny now, but it was amazing back then. Today we watch theater-quality movies in color, on a much larger screen, from a device that drops into our pocket, and we don't think twice about it. We've grown accustomed to technology improving at a rapid rate, and it's now expected, but what happens when that rate is no longer sustainable?

Last year the industry began etching chips with a new seven nanometer process, which is equivalent to Intel's 10nm process. Apple's A12 Bionic chip, which powers the iPhone XR and XS series, is one of the first using this new 7nm process. This chip contains 6.9 billion transistors and is arguably one of the most advanced devices ever produced by mankind. By contrast, my first computer in 1983 was a TRS-80 Model III powered by the Zilog Z80 processor. The Z80 used a 4,000nm process and contained only 8,500 transistors. So in 35 years we've reduced the process size by roughly three orders of magnitude, resulting in a transistor count improvement of nearly six orders of magnitude, wow! How do we top that, and where are we in the grand scheme of the physics of miniaturization?
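
For those who like to check the math, here is a small C sketch of those orders-of-magnitude figures using the numbers quoted above.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Figures quoted above: the Zilog Z80 versus Apple's A12 Bionic. */
    double z80_nm          = 4000.0;
    double a12_nm          = 7.0;
    double z80_transistors = 8500.0;
    double a12_transistors = 6.9e9;

    printf("Process shrink:   %.0fx (about %.0f orders of magnitude)\n",
           z80_nm / a12_nm, log10(z80_nm / a12_nm));
    printf("Transistor count: %.0fx (about %.0f orders of magnitude)\n",
           a12_transistors / z80_transistors,
           log10(a12_transistors / z80_transistors));
    return 0;
}
```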

In a 1965 paper, Gordon Moore, then at Fairchild Semiconductor (which he co-founded) and later a co-founder and CEO of Intel, observed that the density of integrated circuits would double every year, a prediction now known as Moore's Law. From 1970 through 2014 this "law" essentially held true. Before Intel's current 10nm geometry, their prior generation was 14nm, achieved in 2014, so it has taken them five years to reach 10nm. Not exactly Moore's Law, but that's just the tip of the iceberg. As the industry goes from 14nm to 7nm/10nm, physics is once again throwing up a roadblock; this isn't the first one, but it could be the last. Chips are made using silicon, and silicon atoms have a diameter of about 0.2 nanometers. So at a seven nanometer node size, we're talking 35 or so silicon atoms across, which isn't a very large number. It turns out that below seven nanometers, as we have fewer and fewer silicon atoms to manage electron flows, things get dicey. Chips begin to experience quantum effects; most notably those pesky electrons, which are about a millionth of a nanometer in size, begin to exhibit something called quantum tunneling. This means they no longer behave the way they are supposed to, and they move between devices etched into the silicon with a sort of reckless disregard for the "normal" rules of physics. This has been known for some time, though.

Back in 2016 a team at Lawrence Berkeley National Laboratory demonstrated a one nanometer transistor device, but that leveraged carbon nanotubes to manage electron flow and stave off the quantum tunneling effect. For those not familiar with carbon nanotubes, think teeny tiny straws of pure carbon where the wall of the straw is one atom thick. While using carbon nanotubes to solve the problem is ingenious, it doesn't fit into how we make chips today, as you can't etch a carbon nanotube using conventional chip fabrication processes. So while it's a solution to the problem, it's one that can't easily be utilized, and we may be working at 7nm for some time to come. This only means that one aspect of miniaturization has ground to a halt. Where I've used the term chip above to represent an integrated circuit, the more precise term is actually a "die."

Until recently it was common practice to place a single “die” inside a package. A package is what most of us think of as the chip as it has a bunch of metal pins coming out of the bottom or sides. In recent years the industry has developed new techniques that allow us to layer multiple dies onto one another within the same physical package enabling the creation of very complex chips. This is similar to a seven-layer cake where different types of cake can be in each layer and the icing can be used to convey flavors across the cake layers. This means that a chip can contain several and eventually many dies, or layers. A recent example of this is Xilinx’s new Versal chip line.

Within the Versal chip package there are multiple dies that contain two different pairs of ARM CPU cores, hundreds of Artificial Intelligence (AI) engines, thousands of Digital Signal Processing (DSP) engines, a huge Field Programmable Gate Array (FPGA) area, several classes of memory, and multiple programmable memory, PCIe and Ethernet controllers. The Versal platform is a flexible toolbox of computational power, with the ARM cores handling traditional CPU and real-time processing tasks. The AI cores churn through new machine learning workloads, the DSPs are leveraged for advanced signal processing, think 5G, and the FPGA can be used as the versatile computational glue to pull all these complex engines together. Finally, we have the memory, PCIe and Ethernet controllers to interface with the real world. So while Intel and AMD focus on scaling the number of CPU cores on the chip and NVIDIA works to improve Graphical Processing Unit (GPU) density, Xilinx is the first to go all-in on chip-level workload integration. This is the key to accelerating the data center going forward.

So until we solve the quantum tunneling problem with new fabrication techniques, we can use advances in integration like those above to keep moving the industry forward.

Spearfishing vs. Spear Phishing

While in Hawaii recently on vacation, my millennial son tossed out a bucket-list suggestion that we both go deep water spearfishing. Immediately the iconic battle from the James Bond movie "Thunderball" leaped to mind: the scene where the villain Largo's minions in black wetsuits wage war against a platoon of US Navy Seals in red wetsuits. The whole sequence is fought with untethered spearguns and dive knives, safety first! Not one to back down from a challenge, I arranged the dive, and along the way we learned a few things worth sharing.

To further set the stage, back in 1992 I earned my PADI Open Water dive certification and have since made hundreds of dives, so pulling on a wetsuit, donning flippers, a mask and a snorkel is nothing new, or so I thought. This was a 2mm one-piece wetsuit which offered both thermal protection from the water and solar protection from burning exposed skin. The difference between this suit and my normal warm water one is that this one is decorated with an open water camouflage design. The purpose of the camouflage is to make the wearer look like a mass of seaweed to attract the smaller fish to the shade. The mask and snorkel are typical, but the fins were a whole different game. When spearfishing your objective is to not scare off the small fish, which would then alert the larger game fish. To do this you must minimize ALL your movements, including your kicks. Most of your time is spent drifting on the surface and lying in wait for your prey. Did I mention the chum? Yes, cut-up baitfish are introduced into the water near where you're drifting to draw in larger game fish, and sometimes sharks. Towards this end, when spearfishing you use free diving fins which are nearly a meter long, three feet for my friends in the US. These enable the diver to make subtle ankle movements that gently propel them through the water.

When prey arrives the hunter slowly moves the one-meter long wooden speargun from their side into a position in front of them. They then lock out their dominant arm holding the gun, support the stock with their free hand, and slowly scan left and right to ensure that no other divers are in harm's way. Finally, the hunter aligns the gun with the target and squeezes the trigger. The spear travels a maximum of five meters, with the optimum killing distance between three and five meters. Yes, you have to be very close to the fish, move with extreme care, and make your only shot count. If your shot is true and you hit the fish solidly in the head, then you're instructed to drop the gun. Now there are a few caveats that I've not yet covered. The dive master instructed us NOT to shoot any fish that appears to be larger than 100 pounds. It turns out that connected to the back of the speargun is about 100 feet of floating line (1/2″ thick) that ends with a buoy. Divers can easily get tangled up in this line if they're not careful while drifting. A 100-pound fish, with some room to run after being speared, can generate enough momentum to pull a fully grown diver underwater, potentially resulting in their death. We were instructed that if a fish larger than 100 pounds, but less than 200 pounds, was in the area, we should slowly pass the gun to the dive master so they could double check the area before taking a more experienced shot. Death from accidentally being speared, or dragged under by a fish, was presented as a very tangible threat. We had two spearguns, five divers, and five hours of hunting, and yet there was only one clear shot, and it proved fruitless. The fish felt the spear, but it did not penetrate the skin because the spear had reached the end of the line attaching it to the gun just as it touched the fish. So what does all this have to do with spear phishing?

Phishing is the process of using emails containing malware designed to compromise the computer on which they're read. Spear phishing is the act of specifically targeting a single individual using a custom-crafted email and phishing attachment. While generic phishing attacks are often "spray and pray" assaults, sometimes aimed at the employees of a given company or industry, spear phishing attacks are laser-focused on a single person. The attacker thoroughly researches their target, combing the web and social media, and perhaps even doing some real-world social engineering and reconnaissance, to learn everything they can. The attacker's objective is to select the most attractive strategy designed to elicit a response that results in the target opening an infected attachment. As in spearfishing, you may only get one shot, so it has to be your best.

In both of the above cases the hunter thoroughly researches their prey, looking for the most opportune places to hunt, the proper times, and the most alluring baits. They then choose the appropriate weapon and thoroughly practice with it to ensure that they can make it function properly with the single shot they might get at their target. They then select and distribute the proper baits, and lie in wait for their prey.

Something that is common and often overlooked is that in both spearfishing and spear phishing the hunter is far more exposed, and hence significantly more vulnerable, than they would be with ANY other method of attack. In spearfishing the hunter is in the water only meters from his prey, and if they're successful they need to move fast to land their catch on the boat before the arrival of sharks. A wounded fish instantly spills blood into the water and flails around in an effort to free itself. Sharks can detect blood in the water up to 1/3 of a mile away, and when they are near they sense the electrical impulses from a distressed fish's muscles, and its splashing, to zero in very quickly on what is now "their" prey. Sharks aren't known for being discriminating eaters, so it is not uncommon at this point for the hunter to also become the hunted. In spear phishing, if the attacker isn't meticulous in covering their tracks during their research, social engineering efforts, bait selection (the phishing email), and weapon design (the phishing exploit used within the email), these can often be used to uncover their identity.

So be ever vigilant as you approach your email, there will be times when you’re only one click away from being speared, and your system becoming compromised!

In Security, Hardware Trumps Software


Since the dawn of time humanity has needed to protect both people and things. Initial security methods were all "software based" in the sense that they relied on the user putting their trust in a process, people, and social conventions. At first, it was cavemen hiding what they most valued, leveraging security through obscurity, or posting a trusted associate to watch the entrance. Later we expanded our security methods to include some form of "Keep Out" signs through writings and carvings. Then around 600 BC along came Theodorus of Samos, who is credited with inventing the key. Warded locks had existed about three hundred years before Theodorus, but their "key" was just designed to bypass obstructions to its rotation, making it slightly more challenging to access the hidden trip lever inside. For a warded lock the "key" often looked like what we call a skeleton key today.

It could be argued that the lock represented our first "hardware based" security system, as the user placed their trust in a physical token or key based system. Systems secured in hardware require that the user present their token in person; it is then validated, and if it passes, the security measures are removed. It should be noted that we trust this approach because of both the presence of the token and the accountability of a person in the vicinity who knows how to execute the exact process with the token to ensure success.

Now every system man invents can also be defeated. One of the first skills most hackers teach themselves is how to pick a lock. This allows us to dynamically replicate the function of the key using two very simple and compact tools (a torsion bar and a pick). Whenever we pick a lock we risk exposure, something we avoid at all cost, because the process of picking a lock looks visually different from that of using a key. Picking a lock using the tools mentioned above requires two hands. One provides a steady rotational force using the torsion bar, while the other manipulates the pick to raise the pins until each aligns with the cylinder and hangs up. Both hands require a very fine sense of touch; too heavy-handed with the torsion bar and you can snap the last pin or two while freeing the lock, which breaks it for future key users and potentially exposes your attempted tampering. Too light or heavy with the pick and you won't feel the pins hanging up; it's more skill than science. The point is that while using a key takes seconds, picking a lock takes much longer, anywhere from a few seconds to well over a minute, or never, depending on the complexity of the cylinder and the person's skill. The difference between defeating a software system and a hardware one is typically this aspect of presence. While it's not always the case, defeating a hardware-based system often requires that the attacker be physically present, because defeating hardware commonly requires hardware. Hackers often operate from countries far outside the reach of law enforcement, so physical presence is not an option. Attackers are driven by a risk-reward model, and showing up in person is considered very high risk, so the reward needs to be exponentially greater.

Today companies hide their most valuable assets in servers located in large secure data centers. There are plenty of excellent real-world hardware and software systems in place to ensure proper physical access to these systems. These security measures are so good that hackers rarely try to evade them because the risk of detection and capture is too high. Yet we need only look at the past month, April 2019, to see that companies like Microsoft, Starwood, Toyota, GA Tech and Questcare have all reported breaches. In Microsoft's case, 6% of all MSN, HotMail, and Outlook accounts were breached, but they've not disclosed the details or the raw number of accounts. This is possible because attackers need only break into a single system within the enterprise to reach the data center and establish a beachhead from which they can then land and expand. Attackers usually obtain a secure foothold through a phishing email or clickbait.

It takes only one undereducated employee to open a phishing email in Outlook, launch a malicious attachment, or click on a rogue webpage link, and it's game over. Lockheed did extensive research in this area and produced their now famous Cyber Kill Chain model. At a high level, it highlights the process by which attackers seize control of an enterprise. Any one of these attack vectors can result in the installation of a remote access trojan (RAT) or a Zero-Day exploit that will give the attacker near unlimited access to the employee's system. From there the attacker will seek out a poorly secured server in the office or data center to establish a beachhead from which they'll launch their attack. The compromised employee system may not always be available, but it does make for a great point to retreat back to in the event that the primary beachhead server is discovered and sanitized.

Once an attacker has a foothold in the data center it's game over. Very often they can easily move laterally, east-west, through the data center to other systems. The MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework, while similar to Lockheed's approach, drills down much further. Specifically, on lateral movement strategies, MITRE uncovered 17 different methods for compromising internal servers. This highlights the point that very few defenses exist in the traditional data center, and those that do are often very well understood by attackers. These defenses are typically OS-based firewalls that all seasoned hackers know how to disable. Hackers will disable logging, then tear down the firewall. They can also sometimes leverage an island hopping attack to reach a vendor's or customer's systems through private networks or gateways. Or, in the case of Marriott's Starwood breach, the attackers got lucky: when the IT systems were merged, so were the exploited systems. This is known as a data lemon, an acquisition that comes with infected and unsecured systems. Also, it should be noted that malicious insiders, employees that are aware of a pending termination or just seeking to augment their income, make up over 30% of reported breaches. In this attack example, a malicious insider simply leverages their access and knowledge to drain all the value from their employer's systems. So what hardware countermeasures can be put in place to limit east-west or lateral attacks within the data center? Today you have three hardware options to secure your data center servers against east-west attacks: switch access control lists (ACLs), top of rack firewalls, or something uniquely innovative, Solarflare's ServerLock enabled NICs.

Often enterprises leverage ACLs in their top of rack 10/25/100G switches to protect east-west traffic within the data center. The problem with this approach is one of scale. IT teams can easily exhaust these resources when they attempt comprehensive application level segmentation at the server. These top of rack switches provide between 100 and 1,000 ACLs per port. By contrast, Solarflare’s ServerLock provides 5,000 ACLs per NIC, along with some foundational subnet level filtering.

In extreme cases, companies might leverage hardware firewalls internally to further zone off systems they are looking to secure. Here the problem is one of volume. Since these firewalls are used within the data center, they will be tasked with filtering enormous amounts of network data; typically the traffic inside a data center is 10X the traffic volume entering it. So for mission-critical clusters or server groups, which demand high bandwidth, these firewalls can become very expensive and directly impact application performance. Some of the fastest appliance-based firewalls designed to handle these kinds of high volumes are both expensive and add another 2.5 to 3.5 microseconds of latency in each direction. This means that if an intranet server were to fetch information from a database behind an internal firewall, the transaction would see an additional delay of 5 to 7 microseconds. While this honestly doesn't sound like much, think of it like compound interest. If the transaction is simple and there's only one request, then a few microseconds will go unnoticed, but what happens when that employee's request decomposes into hundreds or even thousands of database server calls? The added delay quickly accumulates into milliseconds that users can feel. By comparison, Solarflare's ServerLock NIC based ACL approach adds only 0.25 to 0.75 microseconds of latency in each direction.
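
Here is a rough C sketch of that compound-interest effect using the latency figures quoted above; the assumption of a request that decomposes into 1,000 database calls is purely illustrative.

```c
#include <stdio.h>

int main(void)
{
    /* Added round-trip latency per database call, in microseconds,
     * using the midpoints of the figures quoted above. */
    double firewall_us   = 3.0 * 2; /* ~2.5-3.5 us each way through an appliance */
    double serverlock_us = 0.5 * 2; /* ~0.25-0.75 us each way in the NIC */

    int calls = 1000; /* a request that decomposes into 1,000 database calls */

    printf("Appliance firewall adds roughly %.1f ms\n",
           calls * firewall_us / 1000.0);
    printf("ServerLock in the NIC adds roughly %.1f ms\n",
           calls * serverlock_us / 1000.0);
    return 0;
}
```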

Finally, we have Solarflare’s ServerLock solution which executes entirely within the hardware of the server’s own Network Interface Card (NIC). There are NO server side services or agents, so there is no attackable software surface area of any kind. Think about that for a moment, a server-side security solution with ZERO ATTACKABLE SURFACE AREA. Once ServerLock is engaged through the binding process with a centralized ServerLock DirectorOne controller the local control plane for the NIC that manages security is torn down. This means that even if a hacker or malicious insider were to elevate their privilege to root they would NOT be able to see or affect the security settings on the NIC. ServerLock can test up to 5,000 ACLs against a network packet within the NIC in just over 250 nanoseconds. If your security policies leverage subnet wildcards the worst case latency is under 750 nanoseconds. Both inbound and outbound network traffic is checked in hardware. All of the Solarflare NICs within a data center can be managed by ServerLock DirectorOne controllers. Today a single ServerLock DirectorOne can manage up to 1,000 NICs.

ServerLock DirectorOne is a bundle of code that is delivered as an ISO image and can be installed onto a bare metal server, into a VM or a container. It is designed to manage all the ServerLock NICs within an infrastructure domain. To engage ServerLock on a system you run a simple binding process that facilitates an exchange of secrets between the DirectorOne controller and the ServerLock NIC. Once engaged the ServerLock NIC will begin sharing new network flows with the DirectorOne controller. DirectorOne provides visibility to all the network flows across all the ServerLock enabled systems within your infrastructure domain. At that point, you can then begin defining security policies and place them in compliance or enforcement mode. In compliance mode, no traffic through the NIC will be filtered, but any traffic that is not in compliance with the defined security policies for that NIC will generate alerts. Once a policy is moved into “enforcement” mode all out of policy packets will have the default action applied to them.

If you’re looking for the most secure solution to protect your companies servers you should consider Solarflare’s ServerLock. It is the most affordable, and secure way to protect your valuable corporate assets.

828ns – A Legacy of Low Latency

Electronic trading, like no other industry, can directly link time and money. A decade ago when I started selling 10GbE NICs to Wall Street traders, they often shared with me the value of a single microsecond (millionth of a second) improvement in trading. Today these same traders are measuring gains in nanoseconds (billionths of a second). With each passing quarter our financial markets evolve, and trade execution times decrease. Trading platforms leveraging older hardware and software often can’t remain competitive as other traders continue to invest in the latest products which further reduce trade execution latency and improve order determinism.

For the past decade, Solarflare has led the market in accelerating server-side UDP/TCP networking for electronic trading with our Onload® software acceleration stack. In addition, Solarflare has regularly delivered a new generation of 10GbE network adapters that have further reduced network latency by 20-30% while also reducing jitter. Often these advances were the result of improvements in the hardware, but there were many significant enhancements to the Onload stack that contributed substantially to the overall system performance increases. The table below shows this reduction in Onload latency over time along with the gain from each new generation of Solarflare adapters. Keep in mind that Onload is fully compliant with the BSD Sockets standard, which means that developers don't have to change their code to use Onload.
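
Because Onload presents the standard BSD Sockets API, ordinary socket code is accelerated without modification. The sketch below is a plain C UDP sender; the destination address and port are placeholders, and the assumption is that you would launch it under the Onload wrapper (for example, onload ./sender) rather than change the code.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* A plain BSD-sockets UDP sender: nothing Onload-specific in the code.
 * Acceleration comes from launching it under Onload, not from changing it. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(12345);                    /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);  /* placeholder address */

    const char msg[] = "market data ping";
    if (sendto(fd, msg, sizeof(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```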

In the graph below you'll see how latency with Onload compares between Solarflare's SFN8522 and X2522 as message size increases. We've also included our next closest competitor, Mellanox, with their ConnectX-5 adapter and VMA offload stack.

About five years ago, Solarflare saw an opportunity to revisit TCP/UDP networking stacks within Onload and determined that it is possible to squeeze another 35-50% in performance gains if developers were willing to use a new C language application programming interface (API). This new API was built from the ground up focused on performance, and it implements only a subset of the complete BSD Sockets API. Every API call has been highly tuned to deliver optimum performance. On the road to formulating this API Solarflare has patented several new innovations, and in 2016 it leaped forward again by introducing this API and branding it TCPDirect. Initially, TCPDirect improved latency on Solarflare’s SFN8522 adapter by an astonishing 38%!

Recently TCPDirect was tested with Solarflare's latest X2522 cards, and it delivered a further 48% latency reduction over Onload on the same adapter (see the graph below). Today TCPDirect with the X2522 provides an amazing 828ns of latency with TCP. So how does this compare with Mellanox? The X2522 with TCPDirect is 39% faster than the Mellanox ConnectX-5 with VMA and Exasock! This gain is shown in the graph below. It should be noted that this testing was done using an older Intel Skylake processor with a 3.6GHz clock. Intel's newest Cascade Lake processors burst up to 4.4GHz, but they were not available at the time of this testing. Recent testing indicates that they should produce even more impressive results.

Trading and Time are interwoven into a single fabric, one cannot exist without the other. When trades are executing with a precision measured in nanoseconds you need a technology partner that is leading the industry, not following it. Solarflare also provides a precision time protocol (PTP) daemon that includes both IEEE-1588 (2008) and enterprise profiles. Additionally, Solarflare makes available an optional PCIe bracket kit enabling the direct connection of an external hardware master clock that can deliver a highly accurate one pulse per second (1PPS) signal.  This kit and Solarflare’s PTP daemon enable the adapter to maintain system time synchronization to within 200ns of the external master clock. Mellanox has stated that their PTP implementation “can see time locked to reference well within 500 nanoseconds of variation.”

Numerous STAC reports over the past decade, with all the major OEMs and the Linux distributions used in finance, have validated that Solarflare networking technology is the standard by which all others are measured. Innovations like those discussed above are the reason why over 90% of the stock exchanges, global investment banks, hedge funds, and cutting-edge high-frequency traders architect their systems with Solarflare hardware and software. Outside of the Linux kernel's own communications stack, no other TCP/UDP user-space communications stack is more heavily tested or in wider production than Solarflare's Onload platform. Today the world economy exists across hundreds of thousands of servers spread throughout the globe, and nearly all of those servers depend on Solarflare to provide the industry's best performance with the lowest jitter possible. Below are recent STAC Research reports from the past two years that back up our claims.

June 2018 – SFC180604b – UDP over 10GbE using Solarflare OpenOnload on Red Hat OpenShift 3.10 (pre-release) with RHEL 7.5 and Solarflare XtremeScale X2522 Adapters on Supermicro SYS-1029UX-LL1-S16 Servers

June 2018 – SFC180604a – UDP over 10GbE using Solarflare OpenOnload on Red Hat Enterprise Linux 7.5 with Solarflare XtremeScale X2522 Adapters on Supermicro SYS-1029UX-LL1-S16 Servers

October 2017 – SFC170831 – STAC-T0: Solarflare SFN8522-ONLOAD NIC with LDA Technologies LightSpeed TCP on an Alpha Data FPGA in a Penguin Computing Relion XE1112 Server

February 2017 – SFC170206 – UDP over 10GbE using OpenOnload on RHEL 6.6 with Solarflare SFN8522-PLUS Adapters on HPE ProLiant XL170r Gen9 Trade & Match Servers