Is Serverless Also Trafficless?

Recently I read the article “Why Now Is The Time To Go Serverless” by Romi Stein, the CEO of OpenLegacy, a composable platform company. I agree with several of the points Romi makes about the importance of APIs, micro-service architectures, and cloud computing. I also agree that serverless doesn’t truly mean computing without a server, but rather computing on servers owned and provisioned by major cloud providers. My main point of contention is that large businesses executing mission-critical functions in public clouds may eventually come to regret this move to a “Serverless” architecture, as it may also be “Trafficless.” Recently we’ve seen a rash of colossal security vulnerabilities from companies like SolarWinds and Microsoft (Exchange Server). Events like these should make us all pause and rethink how we handle security. Threat detection, and the resulting aftermath of a breach, especially in a composable enterprise highly dependent on public cloud infrastructure, may be impossible because key data doesn’t exist or isn’t available.


In a traditional on-premises environment, it is generally understood that the volume of network traffic within the enterprise is often 10X that of the traffic entering and leaving the enterprise. One of the more essential strategies for detecting a potential breach within an enterprise is to examine, hopefully in near-real-time, both the internal and external network flows, looking for irregular traffic patterns. If you are notified of a breach, an analysis of these traffic patterns is often used to confirm that a breach has occurred. To support both of these tasks, copies are made of network traffic in flight; this is called traffic capture. The data may then be reduced and eventually shipped off to Splunk, or run through a similar tool, hopefully locally. Honestly, I was never a big fan of shipping off copies of a company’s network traffic to a third party for analysis; many of a company’s trade secrets reside in these digital breadcrumbs.
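
To make the idea concrete, here’s a minimal sketch of that kind of flow monitoring in Python, assuming the scapy library is installed and the script has capture privileges. A real deployment would tap a SPAN port or packet broker, but the logic is the same: baseline each host’s traffic and flag the outliers.

```python
# Toy east-west traffic monitor: flag hosts whose packet rate jumps far
# above their own running baseline. Illustrative sketch only; assumes
# scapy is installed and the script runs with capture privileges.
from collections import defaultdict
from scapy.all import IP, sniff

WINDOW = 10        # seconds per sampling window
THRESHOLD = 5.0    # alert when a window exceeds 5x the baseline

baseline = defaultdict(lambda: 1.0)   # running average, packets per window
counts = defaultdict(int)             # packets seen in the current window

def on_packet(pkt):
    if IP in pkt:
        counts[pkt[IP].src] += 1

while True:
    counts.clear()
    sniff(prn=on_packet, store=False, timeout=WINDOW)
    for src, n in counts.items():
        if n > THRESHOLD * baseline[src]:
            print(f"ALERT: {src} sent {n} packets, baseline ~{baseline[src]:.0f}")
        # an exponential moving average keeps the baseline current
        baseline[src] = 0.9 * baseline[src] + 0.1 * n
```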

Is a serverless environment also trafficless? Of course not, that’s ridiculous, but are public cloud providers willing to, or even capable of, sharing copies of all the network traffic your serverless architecture generates? If they were, what would you do with all that data? Wait, here’s another opportunity for the public cloud guys. They could sell everyone another service that captures and analyzes all your serverless network traffic to tell you when you’ve been breached! Seriously, this is something worthy of consideration.

What is Confidential Computing?


Data exists in three states: at rest, in-flight, and in-use. Over recent years the security industry has done an excellent job of providing solutions for securing data at rest, such as data stored on a hard drive, and data in-flight, think web pages served via HTTPS. Unfortunately, those looking to steal data are very familiar with these advances, so they probe the entire system searching for new vulnerabilities. They’ll look at code that hasn’t been touched in years, or even decades (Shellshock), and architectural elements of the system which were previously trusted, like memory (Meltdown), cache, and CPU registers (Spectre). Confidential Computing addresses this third state, data in-use, by providing a hardware-based trusted execution environment (TEE). In 2019, the Linux Foundation realized that extensive reliance on public clouds demanded a more advanced holistic approach to security. Hence, they launched the Confidential Computing Consortium.

The key to Confidential Computing is building a TEE entirely in hardware. The three major CPU platforms all support some form of TEE: Intel’s Software Guard Extensions (SGX), AMD’s Secure Encrypted Virtualization (SEV), and ARM’s TrustZone. Developers can leverage these TEE platforms, but each is different, so code written for SGX will not work on an AMD processor. To defeat a TEE and access your most sensitive data, an attacker will need to profile the server hardware to determine which processor environment is in use. They will then need to find and deploy the appropriate vulnerability for that platform, if one exists. They also need to ensure that their exploit has no digital or architectural fingerprints that would make attribution back to them a possibility when the exploit is eventually discovered.

Creating a trusted execution environment in hardware requires the host CPU to be intimately involved in the process. AMD, ARM, and Intel each provide their own hardware support for building a TEE, and each has its benefits. Two security researchers, one from Wayne State University and the other from the University of Houston, produced an excellent comparison of AMD’s and Intel’s platforms. For Intel, they stated:

“We conclude that Intel SGX is suited for highly security-sensitive but small workloads since it enforces the memory integrity protection and has a limited amount of secure resources.”

Concerning AMD:

“AMD SME and SEV do not provide memory integrity protection. However, providing a greater amount of secure resources to applications, performing faster than Intel SGX (when an application requires a large amount of secure memory), and no code refactoring, make them more suitable for complex or legacy applications and services.” 

Based on the work of these researchers, it would seem that AMD has a more comprehensive platform and that their solution is considerably more performant than Intel’s SGX.

So how does confidential computing establish a trusted execution environment? Today the Confidential Computing Consortium has three contributed projects, and each takes its own approach to this objective:

For the past two decades, the US and the UK have used the construct of a Sensitive Compartmented Information Facility (SCIF, pronounced “skiff”) to manage classified data. A SCIF is an enclave, a private space surrounded by public space, with a very well-defined set of procedures for securely using data within this private space and moving data into and out of the private space. Intel adopted some of these same concepts when they defined Software Guard Extensions (SGX). SGX is a set of processor instructions that first appeared in Skylake. When SGX instructions are used, the processor builds a private enclave in memory where all the data and code in that memory region are encrypted. That region is further zoned off from all other processes, so they don’t have access to it, even those with higher privilege. As the processor fetches instructions or data from that enclave, it decrypts them in-flight, and if a result is to be stored back in the enclave, it is encrypted in-flight before it is stored.
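
Real SGX code is written in C against Intel’s SDK, but the enclave concept itself is easy to model. Here’s a toy Python analogy, using the pyca/cryptography package and nothing Intel-specific: pages live encrypted in memory, only the holder of the hardware key can decrypt on fetch and re-encrypt on store, and every other process sees only ciphertext.

```python
# A toy analogy of an enclave, NOT real SGX: pages live encrypted in
# memory, and only the "CPU" (holder of the key) decrypts on fetch and
# re-encrypts on store. Requires the pyca/cryptography package.
from cryptography.fernet import Fernet

class ToyEnclave:
    def __init__(self):
        self._key = Fernet(Fernet.generate_key())  # stands in for a CPU-fused key
        self._pages = {}                           # "enclave" memory, ciphertext only

    def store(self, addr, data: bytes):
        self._pages[addr] = self._key.encrypt(data)   # encrypted before it lands

    def fetch(self, addr) -> bytes:
        return self._key.decrypt(self._pages[addr])   # decrypted "in-flight"

    def peek(self, addr) -> bytes:
        return self._pages[addr]   # what any other process would see

enclave = ToyEnclave()
enclave.store(0x1000, b"secret keys and code")
print(enclave.fetch(0x1000))   # b'secret keys and code'
print(enclave.peek(0x1000))    # ciphertext, useless without the key
```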

When Intel rolled out SGX in 2015, it immediately became the safe that all safe crackers wanted to defeat. In computer science, safe crackers are security researchers, and in the five years since SGX was released, we’ve seen seven well-documented exploits. The two that exposed the most severe flaws in SGX were Prime+Probe and Foreshadow. Prime+Probe was able to grab the RSA keys that secured the encrypted code and data in the enclave. Within six months, a countermeasure was published to disable Prime+Probe. Foreshadow was a derivative of Meltdown that used flaws in speculative execution to read data out of the secure enclave. SGX is a solid start with regard to building a trusted execution environment in hardware. WolfSSL also adopted SGX and tied it to its popular TLS/SSL stack to provide a secure connection into and out of an SGX enclave.

The Open Enclave SDK claims to be hardware agnostic, a software-only platform for creating a trusted execution environment, yet it requires SGX with Flexible Launch Control (FLC) as a prerequisite for installation; it is an extension of SGX and only runs on Intel hardware. Recently, a technology preview was made available for the Open Portable Trusted Execution Environment (OP-TEE) OS on ARM that leverages TrustZone. At this point, there appears to be no support for AMD’s platform.

Enarx is also hardware agnostic, but it is an application launcher designed to support Intel’s SGX and AMD’s Secure Encrypted Virtualization (SEV) platforms. It does not require that applications be modified to use these trusted execution environments. When delivered, this would be a game-changer. “Enarx aims to make it simple to deploy workloads to a variety of different TEEs in the cloud, on your premises or elsewhere, and to allow you to have confidence that your application workload is as secure as possible.” At this point, Enarx hasn’t mentioned support for ARM’s TrustZone technology. There is tremendous promise in the work the Enarx team is doing, and they appear to be making some substantial progress.

The Confidential Computing Consortium is still less than a year old, and it has attracted all the major CPU and data center players as members. Their goal is an ambitious one, but with projects like Enarx well underway, there is hope that securing data in-use will soon become commonplace throughout on-premises and cloud environments.

*Note: this story was originally written for LinkedIn on July 12, 2020.

Banned from the Internet


After nearly seventeen years with IBM, in July of 2000 I left for a startup called Telleo, founded by four IBM researchers I knew and trusted. From 1983 through April 1994, I worked at IBM Research in NY and often dealt with colleagues at the Almaden Research Center in Silicon Valley. When they asked me to join, in May of 2000, there was no interview; I had already impressed all four of them years earlier. By March of 2001, the implosion of Telleo was evident. Although I’d not been laid off, I voluntarily quit just before Telleo stopped paying on their IBM lease, which I’d negotiated. The DotCom bubble burst in late 2000, so by early 2001, you were toast if you weren’t running on revenue. Now, if you didn’t live in Silicon Valley during 2001, imagine a large mining town where the mine had closed; this was close to what it was like, just on a much grander scale. Highway 101 had gone from packed during rush hour to what it typically looked like on a weekend. Venture Capitalists drew the purse strings closed, and most dot-com startups, bleeding red monthly, eventually expired.

Now imagine being an unemployed technology executive at the epicenter of the worst technology employment disaster in history up until that point, with a wife who volunteered and two young kids. I was pretty motivated to find gainful employment. For the previous few years, a friend of mine had run a small Internet Service Provider and had allowed me to host my Linux server there in return for some occasional consulting.

I’d set Nessus up on that server, along with several other tools, so it could be used to ethically hack clients’ Internet servers, only by request, of course. One day when I was feeling particularly desperate, I wrote a small Perl script that sent a simple cover letter to jobs@X.com, where “X” was a simple string starting with “aa” and eventually ending at “zzzzzzzz”. It would wait a few seconds between each email, and since these were addressed to jobs@X.com, I figured it was an appropriate email blast. Remember, this was 2001, before SPAM was a widely used term. I thought, “That’s what the ‘jobs’ account is for anyway, right?” My email was very polite, requested a position, and briefly highlighted my career.

Well, somewhere around 4,000 emails later, I got shut down, and my Internet domain, ScottSchweitzer.com, was Black Holed. For those not familiar with the Internet version of this term, it essentially means no email from your domain even enters the Internet. If your ISP is a friend and he fixes it for you, he runs the risk of getting sucked in, and all the domains he hosts get pulled into the void as well. Death for an ISP. Fortunately, my friend who ran the ISP was a life-long IBMer, and he had networking connections at some of the highest levels of the Internet, so the ban stopped with my domain.

Cleaning this up required some emails and phone calls to fix the problem from the top down. It took two weeks and a fair amount of explaining to get my domain back online to the point where I could once again send out emails. Fortunately, I always have at least several active email accounts and domains. Also, this work wasn’t in vain, as I received a few consulting gigs as a result of the email blast. So now you know someone who was banned from the Internet!

SmartNICs, the Next Wave in Server Acceleration

As system architects, we seriously contemplate and research the components to include in our next server deployment. First, we break the problem being solved into its essential parts; then, we size the components necessary to address each element. Is the problem compute, memory, or storage-intensive? How much of each element will be required to craft a solution today? How much of each will be needed in three years? As responsible architects, we have to design for the future, because what we purchase today, our team will still be responsible for three years from now. Accelerators complicate this issue because they can either dramatically breathe new life into existing deployed systems or significantly skew the balance when designing new solutions.

Today foundational accelerator technology comes in four flavors: Graphical Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Multi-Processor Systems on a Chip (MPSoCs), and most recently Smart Network Interface Cards (SmartNICs). In this market, GPUs are the 900-pound gorilla, but FPGAs have made serious market progress over the past few years with significant deployments in Amazon Web Services (AWS) and Microsoft Azure. MPSoCs, and now SmartNICs, blend many different computational components into a single chip package, often utilizing a mix of ARM cores, GPU cores, Artificial Intelligence (AI) engines, FPGA logic, Digital Signal Processors (DSPs), as well as memory and network controllers. For now, we’re going to skip MPSoCs and focus on SmartNICs.

SmartNICs place acceleration technology at the edge of the server, as close as possible to the network. When computational processing of network-intense workloads can be accomplished at the network edge, within a SmartNIC, it can often relieve the host CPU of many mundane networking tasks. Normal server processes require that the host CPU spend, on average, 30% of its time managing network traffic; this is jokingly referred to as the data center tax. Imagine how much more you could get out of a server if just that 30% were freed up, and what if more could be made available?

SmartNICs that leverage ARM cores and/or FPGA logic cells exist today from a growing list of companies like Broadcom, Mellanox, Netronome, and Xilinx. SmartNICs can be designed to fit into a Software-Defined Networking (SDN) architecture. They can accelerate tasks like Network Function Virtualization (NFV), Open vSwitch (OvS), or overlay network tunneling protocols like Virtual eXtensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). I know, networking alphabet soup, but the key here is that complex routing and packet encapsulation tasks can be handed off from the host CPU to a SmartNIC. In virtualized environments, significant amounts of host CPU cycles can be consumed by these tasks. While they are not necessarily computationally intensive, they can be volumetrically intense. With data center networks moving to 25GbE and 50GbE, it’s not uncommon for host CPUs to process millions of packets per second. This processing happens today in the kernel or hypervisor networking stack. With a SmartNIC, packet routing and encapsulation can be handled at the edge, dramatically limiting the impact on the host CPU.
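
To see why this work is volumetric rather than computational, look at what VXLAN encapsulation adds to every single packet. A quick sketch with scapy (an assumption; any packet library would do) makes the per-packet overhead visible:

```python
# Build a VXLAN-encapsulated frame with scapy to show the per-packet
# work a host CPU (or a SmartNIC) must do: every inner frame gets a
# full outer Ethernet/IP/UDP/VXLAN wrapper. Illustrative sketch only.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

inner = Ether() / IP(src="10.0.0.5", dst="10.0.0.9") / UDP(dport=80) / (b"x" * 64)

outer = (Ether()
         / IP(src="192.168.1.10", dst="192.168.1.20")   # underlay addresses
         / UDP(dport=4789)                              # IANA VXLAN port
         / VXLAN(vni=5001)                              # tenant network ID
         / inner)

print(len(inner), "bytes inner,", len(outer), "bytes on the wire")
# At millions of packets per second, building and parsing this wrapper
# in software is exactly the work a SmartNIC takes off the host CPU.
```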

If all you were looking for from a SmartNIC is to offload the host CPU from having to do networking, thereby saving the data center networking tax of 30%, this might be enough to justify the expense. Most of the SmartNIC product offerings from the companies mentioned above run in the $2K to $4K price range. So suppose you’re considering a SmartNIC that costs $3K, and with the proper software, under load testing, you’ve found that it returns 30% of your host CPU cycles; what is the point at which the ROI makes sense? A simplistic approach would suggest that $3K divided by 30% yields a system cost of $10K. So if the cost of your servers is north of $10K, then adding a $3K SmartNIC is a wise decision, but wait, there’s more.
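
That back-of-the-envelope calculation generalizes into one line, using the numbers above as placeholders:

```python
# Break-even server cost for a SmartNIC: if the NIC frees some fraction
# of host CPU cycles, it pays for itself once the server costs more than
# nic_price / fraction_freed. Figures below are the article's examples.
def break_even_server_cost(nic_price: float, fraction_freed: float) -> float:
    return nic_price / fraction_freed

print(break_even_server_cost(3_000, 0.30))   # 10000.0 -> a $10K server
```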

SmartNICs can also handle many complex tasks like key-value stores, encryption and decryption (IPsec, MACsec, soon even SSL/TLS), next-generation firewalls, electronic trading, and much more. Frankly, the NIC industry is at an inflection point similar to when video cards evolved into GPUs to support the gaming and virtualization markets. While Sony coined the term GPU with the introduction of the PlayStation in 1994, it was Nvidia five years later, in 1999, who popularized the GPU with the introduction of the GeForce 256. I doubt that in the mid-1990s, while Nvidia was designing the NV10 chip, the heart of the GeForce 256, their engineers were also pondering how it might be used a decade later in high-performance computing (HPC) applications that had nothing to do with graphic rendering. Today we can look at all the ground covered by GPU and FPGA accelerators over the past two decades and quickly see a path forward for SmartNICs where they may even begin offloading the primary computational tasks of a server. It’s not inconceivable to envision a server with a half dozen SmartNICs all tasked with encoding video, or acting as key-value stores, web caches, or even trading stocks on various exchanges. I can see a day soon where the importance of SmartNIC selection will eclipse server CPU selection when designing a new solution from the ground up.

Bitcoin Isn’t Anonymous

There is this misconception that one of the key features of Bitcoin as a currency is that it is anonymous; nothing could be further from the truth. In fact, it is even less anonymous than using your credit card, as every transaction is posted publicly on the Bitcoin blockchain. Last Wednesday, October 16th, 338 people across 38 countries worldwide learned this first hand. That day the US Department of Justice unsealed indictments against “Welcome to Video” (WTV) and its partners, distributors, and customers. With over one million users, WTV was the largest child pornography site ever shut down by law enforcement. Think “Plato’s Boys” and Ron, of “Ron’s Coffee,” in the June 2015 “Mr. Robot” pilot, only three times bigger! WTV was executing the complete dark web playbook for conducting illicit activity. They leveraged TOR, The Onion Router network, to distribute content, and Bitcoin to obfuscate funds distribution. What they didn’t know was that companies like Chainalysis exist that crawl through the Bitcoin blockchain and build transaction dependency graphs.


Bitcoin was the first and is the most popular digital currency, which makes it easier to use, but it was never designed for anonymity. Think about it: you share the same public wallet ID repeatedly, in the clear, to accept or send a payment; how can this be anonymous? While a wallet ID isn’t as cut and dried as a credit card number or a bank account number and routing ID, it is easily traceable through the blockchain. In real life the proceeds from illicit transactions need to eventually be spent on goods and services, otherwise, what’s the point? Doing this involves an exchange that turns Bitcoin into a fiat currency, like US Dollars or UK Pounds Sterling. These exchanges hold the key to translating a public wallet ID into a name and financial institution.
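
Conceptually, what firms like Chainalysis build is a graph walk over public data. Here’s a toy sketch with invented wallet IDs: every transaction is an edge, and tracing funds from an illicit wallet to a KYC’d exchange is just a breadth-first search.

```python
# A toy version of blockchain tracing: transactions are public edges
# (from_wallet, to_wallet), so finding a path from an illicit wallet to
# an exchange's wallet is a breadth-first search. Wallet IDs invented.
from collections import deque

transactions = [
    ("wtv_wallet", "mixer_1"),
    ("mixer_1", "mule_7"),
    ("mule_7", "exchange_hot_wallet"),   # the exchange knows this customer
    ("customer_42", "wtv_wallet"),
]

def trace(src, dst):
    graph = {}
    for a, b in transactions:
        graph.setdefault(a, []).append(b)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(trace("wtv_wallet", "exchange_hot_wallet"))
# ['wtv_wallet', 'mixer_1', 'mule_7', 'exchange_hot_wallet']
```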

Law enforcement, working in concert with charities focused on eliminating human trafficking, obtained the public wallet IDs used by WTV. Then, through Chainalysis’s dependency graph, they could trace customer payments made to WTV as well as payments WTV made to its content suppliers and distributors. WTV suggested six different Bitcoin exchanges to its customers and partners. The unsealed indictment provides samples from at least three of those exchanges where public wallet IDs were translated into the end user’s name and banking details. Just another case of following the money. Now, I’m not saying that ALL digital currencies lack anonymity. There are at least five newer privacy-based coins like Monero, Dash, ZCash, Verge, and Bitcoin Private that exist to provide anonymity, but they’re a story for another day.

The Importance of “Local”


We’ve all attended large international industry trade conferences hosting tens of thousands of people. These are spectacles designed to raise brand awareness, educate those in attendance about industry advances, and let you network with colleagues you haven’t seen in a spell, all while promoting new products and services. By contrast, there are also smaller regional industry trade shows that are scaled-down versions of these larger events with many of the same objectives, and then there are Security BSides events.

For those not familiar with BSides, they were started in 2009 to further educate folks on cybersecurity at the city and regional level. Think Black Hat, but on a Saturday at the local civic center, and with perhaps 200 people instead of 19,000. Let’s face it, most security engineers are introverts, so socializing at major events like Black Hat is uncomfortable, while bringing a few coworkers or friends on a Saturday to a BSides event can be downright fun. Who doesn’t want to sit for 20-30 minutes in the lock-pick village with their friends to test their skills on some of Master Lock, Schlage, or Kwikset’s most common products? It’s heartwarming to teach a NOOB (a newbie) how to pick a lock, then watch their excitement when the hasp clicks open for the first time.

Then there’s always the Capture the Flag (CTF) or wireless CTF for when you’re not interested in the session(s) being offered. If you’ve not played a security capture the flag event before, then you really are missing something. It is a challenging series of puzzles served up Jeopardy-style. Say 10 points if you can decrypt this phrase. Or 20 points if you can determine who’s attacking your machine on five different ports. Perhaps another 50 points if you can write a piece of code that can read a web page, unscramble five words, and post the five proper words back to the website in three seconds, before the clock expires and the words are no longer valid. It’s an intellectual problem-solving competition at its finest, and did I mention there is a leaderboard? Often projected high on the wall for all to see throughout the day are the teams with the highest scores. It really warms the heart when your team is the second on the board and stays in the top five most of the day. While we were the second on the board at BSides Asheville, we didn’t stay in the top five for long.
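
That unscramble challenge is a classic, and the heart of a solver is shorter than you might think. A sketch, with a hypothetical challenge URL and a Unix word list standing in for the real thing:

```python
# Core of a CTF word-unscramble solver: index a dictionary by sorted
# letters, fetch the scrambled words, answer within the time limit.
# The URLs and word-list path are hypothetical stand-ins.
import requests

with open("/usr/share/dict/words") as f:
    by_letters = {}
    for word in (w.strip().lower() for w in f):
        by_letters.setdefault("".join(sorted(word)), word)

resp = requests.get("https://ctf.example/challenge")      # hypothetical URL
scrambled = resp.text.split()                             # e.g. five scrambled words
answers = [by_letters.get("".join(sorted(w.lower())), "?") for w in scrambled]

requests.post("https://ctf.example/answer", data={"words": " ".join(answers)})
```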

More seriously though, for a $20 entry fee (which includes a T-shirt), these BSides events offer an affordable local venue for cybersecurity engineers and hobbyists. BSides gives socially challenged people the opportunity to step out of their shells and reach out to like-minded individuals while networking in a comfortable and technical space. You can bond over lock-picking or a CTF challenge, during lunch or between sessions. Bring one of your nerd friends as a wingman, or better yet several to form a CTF team, and make a day of it. If you’d like to check out an online CTF, one of our favorites is RingZer0. If you want to see the hacker side of the Technology Evangelist, W3bMind5, or read about his team’s experiences at BSides Asheville, they can be found at RedstoneCTF.

The RedstoneCTF team may be attending BSidesCLT on September 28th and BSidesRDU on October 19th.

Spearfishing vs. Spear Phishing

While in Hawaii recently on vacation, my millennial son tossed out a bucket-list suggestion that we both go deep-water spearfishing. Immediately the iconic battle from the James Bond movie “Thunderball” leaped to mind. It’s the scene where the villain Largo’s minions in black wetsuits wage war against a platoon of US Navy SEALs in red wetsuits. The whole sequence is fought with untethered spearguns and dive knives, safety first! Not one to back down from a challenge, I arranged the dive, and along the way we learned a few things worthy of sharing.

To further set the stage, back in 1992 I earned my PADI Open Water dive certification and have since made hundreds of dives, so pulling on a wetsuit and donning flippers, a mask, and a snorkel is nothing new, or so I thought. This was a 2mm one-piece wetsuit design which offered both thermal protection from the water as well as solar protection against burning exposed skin. The difference between this suit and my normal warm-water one is that this one is decorated with an open-water camouflage design. The purpose of the camouflage is to make the wearer look like a mass of seaweed to attract the smaller fish to the shade. The mask and snorkel are typical, but the fins were a whole different game. When spearfishing your objective is to not scare off the small fish, which then alert the larger game fish. To do this you must minimize ALL your movements, including your kicks. Most of your time is spent drifting on the surface and lying in wait for your prey. Did I mention the chum? Yes, cut-up bait fish are introduced into the water near where you’re drifting to draw in larger game fish, and sometimes sharks. Towards this end, when spearfishing you use free-diving fins which are nearly a meter long, three feet for my friends in the US. This enables the diver to make subtle ankle movements that gently propel them through the water.

When prey arrives, the hunter slowly moves the one-meter-long wooden speargun from their side into a position in front of them. They then lock out their dominant arm holding the gun, support the stock with their free hand, and slowly scan left and right to ensure that no other divers are in harm’s way. Finally, the hunter aligns the gun with the target and squeezes the trigger. The bolt travels a maximum of five meters, with the optimum killing distance between three and five meters. Yes, you have to be very close to the fish, move with extreme care, and you have to make your only shot count. If your shot is true and you hit the fish solidly in the head, then you’re instructed to drop the gun. Now there are a few caveats that I’ve not yet covered. The dive master instructed us NOT to shoot any fish that appears to be larger than 100 pounds. It turns out that connected to the back of the speargun is about 100 feet of floating line (1/2″ thick) that ends with a buoy. Divers can easily get tangled up in this line if they’re not careful while drifting. A 100-pound fish, with some room to run after being speared, can generate enough momentum to pull a fully grown diver under water, potentially resulting in their death. We were instructed that if a fish is in the area that is larger than 100 pounds, but less than 200 pounds, to slowly pass the gun to the dive master so they could double-check the area before taking a more experienced shot. Death from accidentally being speared, or being dragged under by a fish, was represented as a very tangible threat. We had two spearguns, five divers, and five hours of hunting, and yet there was only one clear shot, and it proved fruitless. The fish felt the spear, but it did not penetrate the skin because the spear had reached the end of the line attaching it to the gun just as it touched the fish. So what does all this have to do with spear phishing?

Phishing is the process of using emails containing malware designed to compromise the computer on which they are read. Spear phishing is the act of specifically targeting a single individual using a custom-crafted email and phishing attachment. While generic phishing attacks are often “spray and pray” assaults, sometimes aimed at the employees of a given company or industry, spear phishing attacks are laser-focused on a single person. The attacker thoroughly researches their target, combing the web, social media, and perhaps even doing some real-world social engineering and reconnaissance, to learn everything they can. The attacker’s objective is to select the most attractive strategy designed to elicit a response that results in the target opening an infected attachment. As in spearfishing, you may only get one shot, so it has to be your best.

In both of the above cases, the hunter thoroughly researches their prey, looking for the most opportune places to hunt, the proper times, and the most alluring baits. They then choose the appropriate weapon and thoroughly practice with it to ensure that they can make it function properly with the single shot they might get at their target. They then select and distribute the proper baits, and lie in wait for their prey.

Something that is common and often overlooked is that in both spearfishing and spear phishing the hunter is far more exposed, and hence significantly more vulnerable, than they might be had they used ANY other method of attack. In spearfishing the hunter is in the water only meters from their prey, and if they’re successful they need to move fast to land their catch on the boat before the arrival of sharks. A wounded fish instantly spills blood into the water and flails around in an effort to free itself. Sharks can detect blood in the water up to 1/3 of a mile away, and when they are near, they sense the electrical impulses from a distressed fish’s muscles and its splashing, zeroing in very quickly on what is now “their” prey. Sharks aren’t known for being discriminating eaters, so it is not uncommon at this point for the hunter to become the hunted. In spear phishing, if the attacker isn’t meticulous in covering their tracks during their research, social engineering efforts, bait selection (the phishing email), and weapon design (the phishing exploit used within the email), these can often be used to uncover their identity.

So be ever vigilant as you approach your email, there will be times when you’re only one click away from being speared, and your system becoming compromised!

In Security, Hardware Trumps Software


Since the dawn of time humanity has needed to protect both people and things. Initial security methods were all “software based” in the sense that they relied on the user putting their trust in a process, people, and social conventions. At first, it was cavemen hiding what they most valued, leveraging security through obscurity, or posting a trusted associate to watch the entrance. Eventually, we expanded our security methods to include some form of “Keep Out” signs through writings and carvings. Then in 600 BC along came Theodorus of Samos, who invented the key. Warded locks had existed for about three hundred years before Theodorus, but their “key” was just designed to bypass obstructions to its rotation, making it slightly more challenging to access the hidden trip lever inside. For a warded lock, the “key” often looked like what we call a skeleton key today.

It could be argued that the lock represented our first “hardware based” security system, as the user placed their trust in a physical token or key-based system. Systems secured in hardware require that the user present their token in person; it is then validated, and if it passes, the security measures are removed. It should be noted that we trust this approach because of both the presence of the token and the accountability of a person in the vicinity who knows how to execute the exact process with the token to ensure success.

Now, every system man invents can also be defeated. One of the first skills most hackers teach themselves is how to pick a lock. This allows us to dynamically replicate the function of the key using two very simple and compact tools (a torsion bar and a pick). Whenever we pick a lock we risk exposure, something we avoid at all costs, because the process of picking a lock looks visually different than that of using a key. Picking a lock using the tools mentioned above requires two hands. One provides a steady rotational force using the torsion bar, while the other manipulates the pick to raise the pins until each aligns with the cylinder and hangs up. Both hands require a very fine sense of touch; too heavy-handed with the torsion bar and you can snap the last pin or two while freeing the lock, which will break it for future key users and potentially expose your attempted tampering. Too light or heavy with the pick and you won’t feel the pins hanging up; it’s more skill than science. The point is that while using a key takes seconds, picking a lock takes much longer, somewhere between a few seconds and well over a minute, or never, depending on the complexity of the cylinder and the person’s skill. The difference between defeating a software system and a hardware one is typically this aspect of presence. While it’s not always the case, defeating a hardware-based system often requires that the attacker be physically present, because defeating hardware commonly requires hardware. Hackers often operate from countries far outside the reach of law enforcement, so physical presence is not an option. Attackers are driven by a risk-reward model, and showing up in person is considered very high risk, so the reward needs to be exponentially greater.

Today companies hide their most valuable assets in servers located in large secure data centers. There are plenty of excellent real-world hardware and software systems in place to ensure proper physical access to these systems. These security measures are so good that hackers rarely try to evade them because the risk of detection and capture is too high. Yet we need only look at the past month, April 2019, to see that companies like Microsoft, Starwood, Toyota, GA Tech, and Questcare have all reported breaches. In Microsoft’s case, 6% of all MSN, Hotmail, and Outlook accounts were breached, but they’ve not disclosed the details or the number of accounts. This is possible because attackers need only break into a single system within the enterprise to reach the data center and establish a beachhead from which they can then land and expand. Attackers usually obtain a secure foothold through a phishing email or clickbait.

It takes only one undereducated employee to open a phishing email in Outlook, launch a malicious attachment, or click on a rogue webpage link, and it’s game over. Lockheed did extensive research in this area, and they produced their now-famous Cyber Kill Chain model. At a high level, it highlights the process by which attackers seize control of an enterprise. Any one of these attack vectors can result in the installation of a remote access trojan (RAT) or a Zero-Day exploit that will give the attacker near-unlimited access to the employee’s system. From there the attacker will seek out a poorly secured server in the office or data center to establish a beachhead from which they’ll launch their attack. The compromised employee system may not always be available, but it does make for a great point to retreat to in the event that the primary beachhead server is discovered and sanitized.

Once an attacker has a foothold in the data center, it’s game over. Very often they can easily move laterally, east-west, through the data center to other systems. The MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework, while similar to Lockheed’s approach, drills down much further. Specifically, on lateral movement strategies, MITRE uncovered 17 different methods for compromising internal servers. This highlights the point that very few defenses exist in the traditional data center, and those that do are often very well understood by attackers. These defenses are typically OS-based firewalls that all seasoned hackers know how to disable. Hackers will disable logging, then tear down the firewall. They can also sometimes leverage an island-hopping attack into vendor or customer systems through private networks or gateways. Or, in the case of the Starwood breach, the attackers got lucky: when Marriott merged in Starwood’s IT systems, the exploited systems came along with them. This is known as a data lemon, an acquisition that comes with infected and unsecured systems. Also, it should be noted that malicious insiders, employees that are aware of a pending termination or just seeking to augment their income, make up over 30% of the reported breaches. In this attack example, a malicious insider simply leverages their access and knowledge to drain all the value from their employer’s systems. So what hardware countermeasures can be put in place to limit east-west or lateral attacks within the data center? Today you have three hardware options to secure your data center servers against east-west attacks: switch access control lists (ACLs), top-of-rack firewalls, or something uniquely innovative, Solarflare’s ServerLock enabled NICs.

Often enterprises leverage ACLs in their top-of-rack 10/25/100G switches to protect east-west traffic within the data center. The problem with this approach is one of scale. IT teams can easily exhaust these resources when they attempt comprehensive application-level segmentation at the server. These top-of-rack switches provide between 100 and 1,000 ACLs per port. By contrast, Solarflare’s ServerLock provides 5,000 ACLs per NIC, along with some foundational subnet-level filtering.

In extreme cases, companies might leverage hardware firewalls internally to further zone off systems they are looking to secure. Here the problem is one of volume. Since these firewalls are used within the data center, they will be tasked with filtering enormous amounts of network data. Typically the traffic inside a data center is 10X the traffic volume entering the data center, so mission-critical clusters or server groups will demand high bandwidth, and these firewalls can become very expensive and directly impact application performance. Some of the fastest appliance-based firewalls designed to handle these kinds of high volumes are both expensive and add another 2.5 to 3.5 microseconds of latency in each direction. This means that if an intranet server were to fetch information from a database behind an internal firewall, the transaction would see an additional delay of 5-6 microseconds. While this honestly doesn’t sound like much, think of it like compound interest. If the transaction is simple and there’s only one request, then 5-6 microseconds will go unnoticed, but what happens when that employee’s request decomposes into hundreds or even thousands of database server calls? The added delay quickly compounds into something users can feel. By comparison, Solarflare’s ServerLock NIC-based ACL approach adds only 0.25 to 0.75 microseconds of latency in each direction.
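
Run the numbers on that compounding effect using the latency figures above (a rough sketch; real transactions add queuing and processing time on top):

```python
# Added round-trip delay when every chained database call crosses a
# filtering device: firewall appliance (~3 us each way) vs. a NIC-based
# filter (~0.5 us each way), using the figures from the text.
def added_delay_ms(calls: int, one_way_us: float) -> float:
    return calls * 2 * one_way_us / 1000.0

for calls in (1, 100, 1_000, 10_000):
    print(f"{calls:>6} calls: firewall +{added_delay_ms(calls, 3.0):8.3f} ms, "
          f"NIC filter +{added_delay_ms(calls, 0.5):7.3f} ms")
```

At thousands of chained calls the appliance’s contribution is already measured in milliseconds per transaction, and an interactive workflow issues many such transactions.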

Finally, we have Solarflare’s ServerLock solution, which executes entirely within the hardware of the server’s own Network Interface Card (NIC). There are NO server-side services or agents, so there is no attackable software surface area of any kind. Think about that for a moment: a server-side security solution with ZERO ATTACKABLE SURFACE AREA. Once ServerLock is engaged through the binding process with a centralized ServerLock DirectorOne controller, the local control plane for the NIC that manages security is torn down. This means that even if a hacker or malicious insider were to elevate their privilege to root, they would NOT be able to see or affect the security settings on the NIC. ServerLock can test up to 5,000 ACLs against a network packet within the NIC in just over 250 nanoseconds. If your security policies leverage subnet wildcards, the worst-case latency is under 750 nanoseconds. Both inbound and outbound network traffic is checked in hardware. All of the Solarflare NICs within a data center can be managed by ServerLock DirectorOne controllers. Today a single ServerLock DirectorOne can manage up to 1,000 NICs.

ServerLock DirectorOne is a bundle of code that is delivered as an ISO image and can be installed onto a bare-metal server, into a VM, or a container. It is designed to manage all the ServerLock NICs within an infrastructure domain. To engage ServerLock on a system, you run a simple binding process that facilitates an exchange of secrets between the DirectorOne controller and the ServerLock NIC. Once engaged, the ServerLock NIC will begin sharing new network flows with the DirectorOne controller. DirectorOne provides visibility into all the network flows across all the ServerLock enabled systems within your infrastructure domain. At that point, you can begin defining security policies and place them in compliance or enforcement mode. In compliance mode, no traffic through the NIC will be filtered, but any traffic that is not in compliance with the defined security policies for that NIC will generate alerts. Once a policy is moved into “enforcement” mode, all out-of-policy packets will have the default action applied to them.
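
The compliance-versus-enforcement distinction is easy to picture in code. Here’s a hedged Python sketch of the logic only; the real checks run in NIC hardware, and the rule fields here are invented for illustration:

```python
# The logic of compliance vs. enforcement mode, sketched in Python.
# The real thing runs in NIC silicon; rule fields here are invented.
from dataclasses import dataclass

@dataclass
class Rule:
    src: str       # source subnet prefix, e.g. "10.1.2."
    dport: int     # destination port
    action: str    # "allow" or "drop"

policy = [Rule("10.1.2.", 5432, "allow")]   # app servers -> database

def check(src_ip: str, dport: int, mode: str) -> bool:
    """Return True if the packet is forwarded."""
    for r in policy:
        if src_ip.startswith(r.src) and dport == r.dport:
            return r.action == "allow"
    if mode == "compliance":
        print(f"ALERT: out-of-policy flow {src_ip} -> :{dport} (forwarded)")
        return True                       # alert only, nothing filtered
    print(f"DROP: {src_ip} -> :{dport}")  # enforcement: default action
    return False

check("10.1.2.7", 5432, "enforcement")    # allowed by policy
check("10.9.9.9", 22, "compliance")       # alert, but still forwarded
check("10.9.9.9", 22, "enforcement")      # dropped
```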

If you’re looking for the most secure solution to protect your company’s servers, you should consider Solarflare’s ServerLock. It is the most affordable and secure way to protect your valuable corporate assets.

Focus on the New Network Edge, the Server

For decades we’ve protected the enterprise at the network edges where the Internet meets our DMZ, and then again where our DMZ touches our Intranet. These two distinct boundary layers and the DMZ in between make up what we perceived as the network edge. It should be pointed out, though, that these boundaries were architected long before phishing and clickbait existed as part of our lexicon. Today anyone in the company can open an email, click on an attachment or a web page, and open Pandora’s box. A single errant click can covertly launch a platform that turns the computer into a beachhead for the attacker. This beachhead then circumvents all your usual well-designed edge-focused defenses as it establishes an encrypted tunnel enabling the attacker access to your network whenever they like.

Once an attacker has established their employee-hosted beachhead, they then begin the search for a secondary, server-based vantage point from which to operate. A server affords them a more powerful hardware system, and often one with a higher level of access across the entire enterprise. Finally, if the exploit is discovered on that server, the attacker can quickly revert to the fallback position on their initial beachhead system and wait out the discovery.

This is why enterprises must act as if they’ve already been breached. Accept the fact that there are latent attackers already inside your network seeking out your corporate jewels. So how do you prevent access to your company’s most valuable data? Attackers are familiar with the defense-in-depth model, so once they’re on your corporate networks, often all that stands between them and the data they desire is knowing where it is hidden and obtaining the minimum required credentials to access it. So how do they find the good stuff?

They start by randomly mapping your enterprise network in hopes that you don’t have internal honeypots or other mechanisms that might alert you to their activity. Once the network is mapped, they’ll use your DNS to assign names to the systems they’ve discovered, in hopes that this might give them a clue where the good stuff resides. Next, they’ll do a selective port scan against the systems that look like possible targets to determine what applications are running on them to fill in their attack plan further. At this point, the attacker has a detailed network map of your enterprise, complete with system names and the names of the applications running on those systems. The next step will be to determine the versions of the applications running on what appear to be the most critical systems, so they’ll know which exploits to leverage. It should be noted that even if your servers have a local OS-based firewall, you’re still vulnerable. The attackers at this point know everything they need to, so if you haven’t detected the attack by this stage, then you’re in trouble, because the next step is the exfiltration of data.

If we view each server within your enterprise as the new network edge, then how can we defend these systems? Solarflare will soon announce ServerLock, a system that leverages the Network Interface Card (NIC) in your server to provide a new defense-in-depth layer in hardware, a layer that not only shields the server from attack but can also camouflage it and report attempts made to access it, two capabilities not found in OS-based software firewalls. Furthermore, since all security is handled entirely within the NIC, there is no attackable surface area. So how does ServerLock provide both camouflage and reporting?

When a NIC has ServerLock enforcement enabled, only network flows for which a defined policy exists are permitted to enter or exit that server. If a new connection request is made to that server which doesn’t align with a security policy, say from an invalid address or to an invalid port, then that network packet will be dropped, and optionally an alert can be generated. The attacker will not receive ANY response packet and will assume that nothing is there. Suppose you are enforcing a ServerLock policy on your database servers which ONLY accepts connections from a pool of application servers, and perhaps two administrative workstations, on specific numeric ports. If a file server were compromised and used as an attack position, once it reaches out to one of those database servers via a ping sweep or an explicit port scan, it would get NOTHING back; the database server would appear as network dark space to the file server. On the ServerLock Manager console, alerts would be generated, and the administrator would know in an instant that the file server was compromised. Virtually every port on every NIC that is under ServerLock enforcement is turned into a zero-interaction honeypot.
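
You can see the difference between “closed” and “dark” from the scanner’s side with a few lines of standard-library socket code; a sketch for lab use against hosts you own:

```python
# What a scanner sees: a closed port answers with a RST (connection
# refused), while a drop-everything policy answers with nothing at all,
# making the host look like empty network space. Sketch for lab use.
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed (host answered with a RST -- something is there)"
    except socket.timeout:
        return "filtered (silence -- dark space, or a honeypot logging you)"
    finally:
        s.close()

print(probe("192.0.2.10", 5432))   # TEST-NET address, a stand-in target
```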

So suppose the attacker has established themselves on that file server, and the server then gets upgraded to ServerLock and put under enforcement. The moment that attacker steps beyond the security policies executing in the NIC on that server, the jig is up. Once they attempt any outbound network access that falls outside the security policies, those packets will be dropped in the NIC, and an alert will be raised at the ServerLock Management console. No data exfiltration today.

Also, it should be noted that ServerLock is not only firmware in the NIC to enforce security policies; it is also an entire tamper-resistant platform within the NIC. Three elements make up this tamper-resistant platform: only properly signed firmware can be executed, older firmware versions cannot be loaded, and any attempt to tamper with the hardware automatically destroys all the digital keys stored within the NIC. Valid NIC firmware must be signed with a 384-bit key utilizing elliptic curve cryptography. The Solarflare NIC contains the necessary keys to validate this signature, and as mentioned earlier, tampering with the NIC hardware will result in fuses blowing that will corrupt the stored keys forever, rendering them both unusable and unreadable.
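
For the curious, signing and verifying an image with a 384-bit elliptic curve key looks roughly like this; a sketch using the pyca/cryptography package, with the NIC performing the verify step in silicon rather than in Python:

```python
# Sketch of ECDSA P-384 firmware signing and verification with the
# pyca/cryptography package. On the NIC the verify step happens in
# hardware against keys that tampering physically destroys.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

vendor_key = ec.generate_private_key(ec.SECP384R1())   # held by the vendor
public_key = vendor_key.public_key()                   # burned into the NIC

firmware = b"...firmware image bytes..."
signature = vendor_key.sign(firmware, ec.ECDSA(hashes.SHA384()))

try:
    public_key.verify(signature, firmware, ec.ECDSA(hashes.SHA384()))
    print("signature valid: boot this image")
except InvalidSignature:
    print("signature invalid: refuse to load")
```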

Today enterprises should act as though they’ve already been compromised, and beef up their internal defenses to protect the new network edge, the server itself. In testing ServerLock, we put a web server protected by ServerLock directly on the Internet, outside the corporate firewall.

Compromised Server Supply Chains, Really?

2018 was shaping up nicely to become “The Year of the CPU Vulnerability”; what with Meltdown, Spectre, TLBleed, and Foreshadow, we had something going. Then along came Bloomberg and “The Big Hack” story. Flawed CPU designs just weren’t enough; now we have to covertly install “system on a chip (SoC)” spy circuits directly into the server’s baseboard management controller (BMC) at the factory. As if this weren’t enough, today Bloomberg dropped its second story in the series, “New Evidence of Hacked Supermicro Hardware Found in U.S. Telecom,” which exposes compromised RJ45 connectors in servers.

We learned recently that Edward Snowden’s cache of secret documents from five years ago included the idea of adding an extra controller chip to motherboards for remote command and control. Is it astonishing that several years later a nation-state might craft just such a chip? Today we have consumer products like the Adafruit Trinket Mini-Microcontroller; at $7USD, the whole board is 27mm x 15mm x 4mm. The Trinket is an 8MHz 8-bit Atmel ATtiny85 minicomputer that can be clocked up to 20MHz, with 8K flash, 512 bytes of SRAM, and 512 bytes of EEPROM ($0.54USD for just the microcontroller chip) in a single 4mm x 5mm x 1.5mm package. In the pervasive Maker culture that we live in today, these types of exploits aren’t hard to imagine. I’m sure we’ll see some crop up this fall using off-the-shelf parts like the one mentioned above.

In the latest Bloomberg story, one source, Yossi Appleboum, revealed that the Supermicro motherboards he found had utilized a compromised RJ45 Ethernet connector. This rogue connector was encased in metal, providing both camouflage for the hidden chip and a heat sink to dissipate the power it consumes. In this case, all one would need to do is craft a simple microcontroller with an eight-pin package, one pin for each conductor in the RJ45 connector. This controller would then draw its power directly from the network while also sniffing packets entering and leaving the BMC. Inconceivable? Hardly. The metal covering such a connector is somewhere around 12mm square, similar to the RJ45 on a Raspberry Pi; that’s four times more area than the ATtiny85 referenced above. Other microcontrollers, like the one powering the Raspberry Pi Zero, could easily fit into this footprint and deliver several orders more processing power. The point is that if someone had suggested this five years ago, at the time of the Snowden breach, I’d have said it was possible but unlikely, as it would have required leading-edge technology in the form of custom-crafted chips costing perhaps ten million or more US dollars. Today, I could recommend a whole suite of off-the-shelf parts, and something like this could very likely be assembled in a matter of weeks on a shoestring budget.

Moving forward, OEMs need to consider how they might redesign, build, and validate to customers that they’ve delivered a tamper-proof server. Until then, for OCP-compatible systems you should consider Solarflare’s X2552 OCP-2 NIC, which can re-route the BMC through its network ports and which includes Solarflare’s ServerLock™ technology that can then filter ALL network traffic entering and leaving the server, provided of course that you’ve disconnected the server’s own Gigabit Ethernet ports. If you’d like a ServerLock™ sample white-list filter file that shows how to restrict a server to internal traffic only (10.x.y.z or 192.168.x.y), then please contact me to learn more.
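
I can’t reproduce the white-list file format here, but the test such a filter performs is easy to express; a sketch with Python’s standard ipaddress module:

```python
# The logic of an "internal traffic only" white-list, sketched with the
# standard library. This is NOT ServerLock's file format, just the test
# such a filter performs on every packet's source and destination.
from ipaddress import ip_address, ip_network

INTERNAL = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

def permitted(src: str, dst: str) -> bool:
    return all(any(ip_address(a) in net for net in INTERNAL) for a in (src, dst))

print(permitted("10.1.2.3", "192.168.7.9"))   # True: stays internal
print(permitted("10.1.2.3", "8.8.8.8"))       # False: would leave the building
```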

UPDATE: This weekend I discovered an item offered both as a complete product called the “LAN Tap Pro” for $40, in a discreet square black case, and as a throwing-star kit for $15, with all the parts, some assembly and soldering required. This product requires NO external power source, and as such, it can easily be hidden. The chip which makes the product possible should answer the question of whether or not the above hacking scenario is a reality. While this product is limited to 10/100Mb and cannot do GbE, it has a trick up its sleeve: it down-speeds a connection so that the network can be easily tapped. Server monitoring/management ports often do not require high-speed connections, so it’s highly unlikely that down-speeding the connection would even be noticed. The point of all this rambling is that it’s very likely that the second Bloomberg article is true, given that the parts necessary to accomplish the hacking task are easily available through a normal retail outlet like the Hacker Warehouse.