7 Things I Learned From the IEEE Hot Interconnects Panel on SmartNICs

For the second year, I’ve had the pleasure of chairing the panel on SmartNICs. Chairing this panel is like being both an interviewer and a ringmaster. You have to ask thoughtful questions and respond to answers from six knowledgeable panelists while watching the Slack channel for real-time comments or questions that might further improve the event’s content. Recently I finished transcribing the talk and discovered the following seven gems sprinkled throughout the conversation.

7. Software today shapes the hardware of tomorrow. This is almost a direct quote from one of the speakers, but nearly half of the participants echoed it several times in different ways. One said that vertical integration means moving stuff done today in code for an Arm core into gates tomorrow.

6. DPUs are evolving into storage accelerators. Perhaps the biggest vendor mentioned adding compression, which means they are serious about positioning their DPU as a computational storage controller in the near future.

5. Side-Channel Attacks (SCA) are a consideration. Only one vendor brought up this topic, but it was on the mind of several. Applying countermeasures in silicon to thwart side-channel attacks nearly doubles the number of gates for that specific cryptographic block. As I understand it, the countermeasures essentially draw the inverse power profile while generating the inverse electromagnetic emissions, so that external measurements of the chip package during cryptographic functions yield a completely fuzzed result.

4. Big Vendors are Cross-pollinating. We saw this last year with the announcement of the NVIDIA BlueField-2X, which includes a GPU on their future SmartNIC, but this appeared to be a bolt-on. NVIDIA’s roadmap didn’t integrate the GPU into the DPU until BlueField-4, several years out. Now Xilinx, which will soon be part of AMD, is hinting at similar things. Intel, which acquired Altera several years ago, is also bolting Xeons onto its Infrastructure Processing Unit (IPU).

3. Monterey will be the Data Center OS. VMware wasn’t on the panel, but some panelists had a lot to say on the topic. One mentioned that the data center is the new computer. This same panelist strongly implied that the future of DPUs lies in the control plane plugging into Monterey. Playing nicely with Monterey will likely become a requirement if you want to sell into future data centers.

2. The CPU on the DPU is fungible. The company seeking to acquire Arm mentioned that they want to use a CPU micro-architecture in their DPU that they can tune; in other words, extending the Arm instruction set found in their DPU with additional instructions designed to process network packets. Now that could be interesting.

Finally, here’s a nerdy plumbing type of thing, but one that will change the face of SmartNICs and bring them enormous advantages: the move to chiplets. Today ALL SmartNICs or DPUs rely on a single die in a single package, then one or perhaps two packages on a PCIe card. In the future, a single chip package will contain multiple dies, each with different functional components, possibly fabricated at different process nodes, so….

1. The inflection point for chiplet adoption is integrated photonics. Chiplets will become commonplace in DPU packages when there is a need to connect optics directly to the die inside the package. This will enable very high-speed connections over extremely short distances.

9 Practical Resin Printing Suggestions

Just over six weeks and three liters of resin ago I received my Elegoo Mars 2 Pro Mono and the strongly suggested Elegoo Mercury Plus 2-in-1 washing and curing station. I ordered both of these on Amazon for about $500, and they were extremely easy to set up and get working. Along with this order, I added a five-pack of Elegoo Release Film, Elegoo 3D Rapid Resin in clear red, a gallon of 99% Isopropyl Alcohol, 400-grit sandpaper, and an AiBob Gun Cleaning Pad, 16”x60” (this is a must-have). I’ve printed in both translucent red (2.5L) and flat black (0.5L). Also, I’ve been careful to hollow out models in the slicer, Chitubox, so that I’m using the minimum amount of resin necessary, and I’ve printed many models with very little waste.

My resin printer setup, and yes a magnifying glass.

This printer is amazing. My prior experience was a few months, two years ago, with my son’s Creality Ender 3, a fused deposition modeling (FDM) printer, your typical 3D printer. Eventually, we got the Creality producing usable results, but the difference between the Creality and Elegoo units is night and day. It would often take several tries to get the Creality to produce a workable print, even though I’d installed the unit in a cabinet in my office so the temperature and airflow were strictly managed, and we’d modified the printer to reduce the noise, upgraded the print head, and improved the fans. But this post is about resin printing. My first resin print, and nearly every one since, has come out as expected. So here are my nine suggestions for those interested in trying resin printing using the Elegoo Mars 2 Pro.

  1. Don’t Print Flat. Never print your model flat on the build plate. Because the printer exposes a layer, rises a bit in the build tank, then lowers again, it creates shearing forces on the supports, and a flat model can fail early. Also, you always have to pry your model off the build plate, so having a raft and supports that may take damage on removal is always better than scratching up or breaking your model. I’ve found that rotating my model so it’s inclined 10 degrees from the build surface, then elevating it 10mm off the build plate, produces the best results. Chitubox will then create a raft bonded to the build plate, and it will raise the edges to make prying your model off easier.
  2. Supports, you can never have too many. Be generous, add more, but make sure you’re bridging them from existing supports, or adding supports that you can then bridge from. You can always sand your model with 400-grit paper later to remove support marks. Sometimes you can avoid placing supports on a finished surface, provided the faces toward the build plate are fully supported.
  3. Models should drain down. Make certain you orient your model so that it drains down into the build tank. Also, be sure you hollow out your model and set your wall thickness to something like 3mm. This can save you considerable resin, and using translucent resins with somewhat hollow models can create some interesting effects when viewing the model.   
  4. Different color resins require different slicer settings. For example, black requires almost 30% more exposure time than the translucent resins. The Chitubox V1.8.1 slicer is very flexible and makes it easy to make these adjustments. Here is a table that is invaluable when switching between resins.
  5. Never run out of resin during printing. I had this happen once, just this morning; now I’m soaking the tank in alcohol and will try to remove the resin that bonded to the clear film on the bottom. Otherwise, I’ll need to replace the film.
  6. Have a ceiling fan on during printing and curing. You can use your printer in an office environment with normal indoor temperatures, and if you work carefully, gloves and a mask can be avoided. This printer is very quiet and prints much more quickly than a traditional FDM unit. Resin printers use a single stepper motor installed in the base that drives a single screw to raise and lower the build plate. The printer is fully enclosed, and the Mars 2 Pro has a fan and carbon filter, so only a small amount of smell leaves the unit. I’ve had the printer, not the wash station, running in the background while on Zoom calls, and nobody has ever said anything about hearing it.
  7. Lay down a felt rubberized gun mat ($10) on your work surface before installing the printer and cleaning station. It makes for an ideal work surface and wicks up the few droplets of alcohol that inevitably fall. Before transferring a model from the printer to the cleaning tank, I tilt the build plate a little to drain off the excess resin, then carefully move the build plate to the cleaning station without dripping resin on the pad. I’ve found that two inches of spacing between the printer and the cleaning station is enough to make lifting the covers and reaching around back to turn things off easy, while also limiting the travel distance for models that may still drip.
  8. Removing the Cover. Lift the Mars 2 Pro lid with your fingers wrapped under the cover edge as you lift it off the printer. There is a silicone gasket on the bottom of the cover, and it will often rub the supports for the build tank, which can result in it falling off. If you carefully lift the cover and roll your fingers under the silicone gasket, you can prevent this from happening. I’ve considered gluing the gasket in place on the cover, but I think that would create other issues.
  9. Make sure you configure Chitubox with your specific printer model so it scales the build plate size properly and applies the other default settings. Chitubox and the Elegoo will allow the raft to fall slightly outside the build area and still print, but be careful. Chitubox is a simple slicer, and I’ve used several in the past, but it is very capable and does a nice job.

Well, that’s it for now. I’ll edit this a bit more later today, but I hope you have an awesome time with your new printer. I’m sure there will be people out there who will insist I wear a ventilator mask and rubber gloves when printing, cleaning, etc., but I have the ceiling fan on high, and my office is extremely clean and clutter-free, so that’s what works for me.

Kobayashi Maru and LinkedIn’s SSI

Klingon Battle Cruisers

Fans of Star Trek immediately know the Kobayashi Maru as the no-win test given to all Starfleet officer candidates to see how they respond to a loss. After being one of LinkedIn’s first million members, I recently found out that there is a score by which LinkedIn determines how effectively you use their platform. This score is out of 100, and it is composed of four pillars, each with a value of 25 points. If you overachieve in any given pillar, you can’t earn more than 25 points; it’s a hard cap. Like the Kobayashi Maru, the only way to beat LinkedIn’s Social Selling Index (SSI) is to learn as much as you can about the innards of how it works, then hack, or more accurately “game,” the system. Here is a link to your score. There are several articles out there that explain how the SSI is computed, some built on slides that LinkedIn supplied at some point, but here are the basics that I’ve uncovered, and how you can “game the SSI.”

How LinkedIn computes the SSI is extremely logical. Someone can effectively start with the platform and leverage it to become a successful sales professional in very little time. As mentioned earlier, the SSI is computed from four 25-point pillars which, to some degree, build on each other. They are:

  • Build your Brand 
  • Grow your Network 
  • Engage with your Network 
  • Develop Relationships with your Network
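
To make the hard cap concrete, here is a toy model in Python built from the pillar scores quoted later in this post. It illustrates the capped arithmetic only; it is emphatically not LinkedIn’s actual algorithm.

    # Toy model of a hard-capped, four-pillar score. The pillar names are
    # real; the scoring formula is my own illustration, NOT LinkedIn's.
    PILLAR_CAP = 25.0

    def pillar_score(raw):
        """Overachieving earns nothing extra: each pillar is capped at 25."""
        return min(raw, PILLAR_CAP)

    def ssi(pillars):
        return sum(pillar_score(p) for p in pillars.values())

    # My own pillar snapshots, as reported later in this post:
    mine = {
        "Build your Brand":         24.61,
        "Grow your Network":        15.25,
        "Engage with your Network": 14.35,
        "Develop Relationships":    22.80,
    }

    print(round(ssi(mine), 2))     # 77.01, right around the 78 I report below
    print(pillar_score(30.0))      # 25.0, the Kobayashi Maru hard cap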

The first pillar, “Building your Brand,” is almost entirely within your own control and can be mastered with a free membership. There are four elements to building your brand: complete your profile, include video in your profile, write articles, and get endorsements. The first three require only elbow grease, basic video skills, and some creative writing. These are skills that most professionals should have some reasonable degree of competency with, and if not, they can be quickly learned. Securing endorsements requires you to ask the closest members of your network to submit small fragments of text about your performance when you worked with them. If you want to be aggressive, you could write these for your former coworkers and offer them up to put in their own voice and submit on your behalf. Scoring 25 in this area is within reach of most folks; I scored 24.61 when I learned about the SSI.

Pulling off a 25 in the second pillar, “Growing your Network,” requires a paid membership with LinkedIn, and for optimum success a “Sales Navigator” membership at $80/month. If you’re a free member and you buy up to Sales Navigator, some documentation implies that this will give you an immediate 10-point boost in this category. Once you have a Sales Navigator membership, you then have to use the tool, “Lead Builder,” and connect with its recommendations. The “free” aspects of this pillar are doing people searches and viewing profiles, especially 3rd-degree folks and people totally outside your network. I had a paid membership, though not Sales Navigator, when I discovered SSI, and when I bought up to Sales Navigator, my score in this pillar remained at 15.25. After going through the Sales Navigator training, my score did go up to 15.32, but clearly I need to make effective use of Sales Navigator to pull my score up in this pillar. The expectation for those hitting 25 is that you’ve used these tools to find leads and convert them into members of your network, and perhaps customers.

Engagement is the third pillar, and here LinkedIn uses several metrics to determine your score. You need to share posts WITH pictures, give and get likes, repost content from others, comment on and reshare posts from others, join at least 50 groups, and finally send InMails and get responses. InMails only come with a paid membership, so again you can’t achieve 25 in this pillar without one. In this section, I started at 14.35. I never send InMails, so that’s something that is going to change. Nor was I big on reposting content from others, or resharing posts by others. I do like posts from others and get likes from others, so perhaps that’s a good contributing factor. I was already a member of 52 groups, and from what I’ve read, adding more above 50 doesn’t increase your score.

Finally, the last pillar is Relationships. This score is composed of the number of connections you have and the degree to which you interact with those connections. It’s been said that a score of 25 here requires at least 5,000 connections; this is not true. If you carefully curate who you invite, you can get close to 25 with under 2,000 quality connections. If you’re a VP or higher, you get additional bonus points, and connections in your network who are VP or higher earn you more points than entry-level connections. The SSI is all about the value of the network you’ve built and can sell to. If your network is made up of decision-makers versus contributors or influencers, then it’s more effective and hence more valuable. Here you also get bonus points for connections with coworkers and for a high connection-request acceptance ratio. In other words, if you spam a bunch of people with connection requests when you have nothing in common with them, you’re wasting your time. These people will likely not accept your request, and if they do, LinkedIn will know you were spamming and that those who accepted were just being polite, but aren’t valuable network contacts. Here my score started at 22.8, and in just over 24 hours I was able to run it up to 24.05, a 1.25-point gain. It should be clear that I had 1,700 or so connections to start, and knowing everything above, I skillfully ran it up to 1,815 connections, and it paid off. I went through my company and offered to connect with anyone with whom I shared at least five connections. I also ground through those on LinkedIn who had jobs near me geographically and who shared five connections with me, and invited those people. The combination of these two activities yielded just over two hundred open connection requests, and very nearly half accepted within 24 hours.

After 24 hours, some rapid course corrections, and a few hours working my network while on a car ride on a Saturday, I’ve brought my score up 1.35 points. Now that you know what I do about the SSI, I wish you all the best. Several people who have written articles about SSI are at or very close to 100. At 78, I’m still a rookie, but give me a few weeks.

SSI Score 79 – Sunday, June 28th, 2020

SSI Score 82 – Monday, June 29th, 2020 – Clearly what I learned above is working, four points in only a few days. Actually the score was 81.62, but LinkedIn rounds.

SSI Score 82 – Tuesday, June 30th, 2020 – Actually 81.77, only a minor gain from yesterday, as I throttled back to see if there was “momentum.” Below is my current screenshot from today, where you can see that I’ve maxed out “Build Relationships” at 25 and have nearly maxed “Establishing my Brand” at 24.78. Therefore my focus moving forward needs to be “Engage with Insight” and “Finding the Right People.” Engagement means utilizing all my InMails with the intent of getting back a reply of some kind. To improve “Finding the Right People,” I need to leverage Sales Navigator to find leads to send InMails to, perhaps killing two birds with one stone.

SSI Score 84 – Sunday, July 5th, 2020 – So the gain was five points in a week, but for the most part I took Thursday through Sunday off for the US holiday and had to move my mom out of the FL Keys (I live in Raleigh, so we had to fly down and back to Miami). Thankfully, there was clearly some momentum going into the weekend.

Autopilot, the Next Killer Application

Tesla Autopilot on the Highway

Yesterday, during one of my many calls each week with my seventy-something mom, she mentioned that she might pass on going to her close friend’s 80th birthday party. When I asked why, she said that the four-and-a-half-hour drive up the Florida Turnpike was becoming too scary: people are continually cutting her off, and it makes her very fearful. Mom hasn’t had an accident in decades, and she doesn’t have any of the usual scratches and small dents that often deface the autos of our greatest generation. Her vision is excellent, her memory is intact, and her reflexes are still acceptable. My dad passed seven years ago of lung cancer, and in the final weeks of his life, we had to insist that he no longer drive. At that time, the O2 saturation in his blood would often drop when he sat for a few minutes, and he’d fall asleep through no fault of his own. Insisting your parent no longer drive and removing access to their car is not a pleasant task.

On relaying this story yesterday to a friend, she mentioned that her mom, also well into her seventies, had significant macular degeneration and was still driving. It wasn’t until her daughter noticed a dent that her mom volunteered her medical condition. Once that was exposed, they too had to face the task of removing her freedom to travel at will. Another friend has a mom with mild dementia, and while her driving skills are still sharp, she sometimes forgets where she is going or how to get home. They chose to put a tracker on her car and geofence her house, church, and market so that as long as she stays within a half-mile of this triangle, she can roam at will. If she gets worried or “lost,” family members can quickly look up where she is on their smartphones and calmly provide verbal directions to guide her to her destination. While I don’t agree with this approach, it’s not my place to tell them otherwise. Driving is a privilege, but over a certain age we often perceive it as a right, and taking it away from someone can be mentally crippling. Autopilot should be a fantastic feature for this demographic, but unfortunately, they aren’t, and never will be, intellectually prepared to adopt it. We need to get there in steps.

Many were surprised by a Super Bowl commercial this year aptly named “Smaht Pahk,” where a 2020 Hyundai Sonata parks itself in an otherwise tight spot. This feature is made possible by a new breed of computer chips that fuse computing and sensor processing on the same chip. By sensor processing, in this case, we’re talking about receiving live data from 12 ultrasonic sensors around the car, four 180-degree fisheye cameras, two 120-degree front- and rear-facing cameras, GPS, and an inertial measurement unit (IMU). This is all consumed by some extremely smart artificial intelligence, which finds and steers the car into a safe parking spot. This article, though, is about Autopilot, so why are we talking about self-parking?

As technology marketers, we’ve learned that cutting-edge features will quickly become a boat anchor if consumers aren’t intellectually prepared to accept them. My favorite example is the IBM Simon, arguably the first smartphone, brought to market 13 years before Steve Jobs debuted the “revolutionary” Apple iPhone. The Simon was on the market for only seven months and sold a mere 50K units. Even more surprising, the prototype was shown two years earlier at the November 1992 COMDEX. There will always be affluent bleeding-edge early adopters, in the above case 50K of them, who will purchase revolutionary products, but the gulf between sales to these consumers and the mass market can be enormous. IBM was correct in pulling the Simon so quickly after its introduction because mass-market consumers were at least a decade behind in adoption. We needed to experience MP3 players in 1998 to accept the Apple iPod three years later in 2001. We also needed to carry around a wide assortment of cell phones, personal organizers, and multifunction calculators. Every one of these devices prepared consumers for the iPhone in 2007. As technology marketers, we need to help consumers walk before we can expect them to run.

Self-driving cars have appeared in science fiction movies many times over the years, one of my favorite scenes being Sandra Bullock in “Demolition Man” (1993), set in 2032. Self-driving isn’t even mentioned; she’s busy face-timing with her boss as her car speeds down the highway while, in the foreground, the steering wheel is retracted and moving on its own. We need to slow-roll the public into becoming comfortable yielding control of driving to the car itself. Technologies like Auto Emergency Braking, Lane Keeping Assist, and Smart Park are feature inroads that will make self-driving commonplace. Given how consumers adopt technology, it wouldn’t be surprising at all if it’s 2032 before self-driving becomes standard in most vehicles. Now Elon Musk and his team at Tesla are all brilliant people, as were the IBM Simon team. The difference, though, is that Tesla is selling a car first while delivering a mobile computational platform. The IBM Simon was viewed as a digital assistant first and a phone second. The primary functionality is critical to consumer perception. Consumers know how to buy a car; heck, we have a century of experience in this market. Conversely, if Tesla had chosen to market their technology as a mobile computing platform, they’d have gone out of business years ago. I’m sure some readers are still scratching their heads at the notion of a mobile computing platform.

Consumers have become comfortable with their smartwatches, phones, tablets, and computers all autonomously upgrading while we sleep, so why should their car be any different? Imagine a car whose features are updated remotely and autonomously at night while it is charging. Today Tesla’s Autopilot is restricted to highway driving, with smart features like lane centering, adaptive cruise control, self-parking, automatic lane changing, and Summon. Later this year, via a nightly update, some models will pick up recognizing and responding to traffic lights and stop signs, then automatic driving on city streets. So how is this possible? It all goes back to the technology behind self-park.

For all these advanced driving features to work, we need to put computing as close as possible to where the data originates. These computations also need to be instantiated in hardware, easily reprogrammable, ruggedized, and run as fully autonomous systems. General-purpose CPUs or even GPUs won’t cut it; these applications are ideal for FPGAs coupled with complete systems on a chip. People aren’t going to wait while their car boots up, then loads software into all its systems. We are accustomed to pressing a button to start the car, shifting it into gear, and going.

A truly intelligent autopilot that could go from the home garage to a parking space at the destination and back would address all the above issues for our greatest generation. My mom, who can still drive, should be content supervising a car while it maintains a reasonable highway speed and deftly avoids the automobiles around it. She could then roam from her home in the Florida Keys up both coasts to visit friends, because she’d once again be confident behind the wheel. Autopilot is the solution our aging boomers require to maintain their freedom till the very end. Unfortunately, many are too old to accept it intellectually, my mom included. The tail end of the Boomers, perhaps those born in the early 1960s, are the older side of Tesla’s core demographic for this $7,000 Autopilot feature. It’s a shame that the underlying technology and its application came too late for my mom and her generation.

In Security, Hardware Trumps Software


Since the dawn of time humanity has needed to protect both people and things. Initial security methods were all “software based” in the sense that they relied on the user putting their trust in a process, people, and social conventions. At first, it was cavemen hiding what they most valued, leveraging security through obscurity, or posting a trusted associate to watch the entrance. Eventually, we expanded our security methods to include some form of “Keep Out” signs through writings and carvings. Then in 600 BC along comes Theodorus of Samos, who invented the key. Warded locks had existed for about three hundred years before Theodorus, but their “key” was just designed to bypass obstructions to its rotation, making it slightly more challenging to access the hidden trip lever inside. For a warded lock, the “key” often looked like what we call a skeleton key today.

It could be argued that the lock represented our first “hardware based” security system, as the user placed their trust in a physical token, or key-based, system. Systems secured in hardware require that the user present their token in person; it is then validated, and if it passes, the security measures are removed. We trust this approach because it requires both the presence of the token and the accountability of a person in the vicinity who knows how to execute the exact process with the token to ensure success.

Now every system man invents can also be defeated. One of the first skills most hackers teach themselves is how to pick a lock. This allows us to dynamically replicate the function of the key using two very simple and compact tools (a torsion bar and a pick). Whenever we pick a lock we risk exposure, something we avoid at all costs, because the process of picking a lock looks visually different than that of using a key. Picking a lock with the tools mentioned above requires two hands. One provides a steady rotational force using the torsion bar, while the other manipulates the pick to raise the pins until each aligns with the cylinder and hangs up. Both hands require a very fine sense of touch: too heavy-handed with the torsion bar and you can snap the last pin or two while freeing the lock, which breaks it for future key users and potentially exposes your attempted tampering; too light or heavy with the pick and you won’t feel the pins hanging up. It’s more skill than science. The point is that while using a key takes seconds, picking a lock takes much longer, somewhere between a few seconds and well over a minute, or never, depending on the complexity of the cylinder and the person’s skill. The difference between defeating a software system and a hardware one is typically this aspect of presence. While it’s not always the case, defeating a hardware-based system often requires that the attacker be physically present, because defeating hardware commonly requires hardware. Hackers often operate from countries far outside the reach of law enforcement, so physical presence is not an option. Attackers are driven by a risk-reward model, and showing up in person is considered very high risk, so the reward needs to be exponentially greater.

Today companies hide their most valuable assets in servers located in large secure data centers. There are plenty of excellent real-world hardware and software systems in place to ensure proper physical access to these systems. These security measures are so good that hackers rarely try to evade them because the risk of detection and capture is too high. Yet we need only look at the past month, April 2019, to see that companies like Microsoft, Starwood, Toyota, GA Tech, and Questcare have all reported breaches. In Microsoft’s case, 6% of all MSN, Hotmail, and Outlook accounts were breached, but they’ve not disclosed the details or the absolute number of accounts. This is possible because attackers need only break into a single system within the enterprise to reach the data center and establish a beachhead from which they can then land and expand. Attackers usually obtain a secure foothold through a phishing email or clickbait.

It takes only one undereducated employee to open a phishing email in Outlook, launch a malicious attachment, or click on a rogue webpage link, and it’s game over. Lockheed did extensive research in this area and produced their now-famous Cyber Kill Chain model. At a high level, it highlights the process by which attackers seize control of an enterprise. Any one of these attack vectors can result in the installation of a remote access trojan (RAT) or a Zero-Day exploit that will give the attacker near-unlimited access to the employee’s system. From there the attacker will seek out a poorly secured server in the office or data center to establish a beachhead from which they’ll launch their attack. The compromised employee system may not always be available, but it does make for a great point to retreat to in the event that the primary beachhead server is discovered and sanitized.

Once an attacker has a foothold in the data center, it’s game over. Very often they can easily move laterally, east-west, through the data center to other systems. The MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework, while similar to Lockheed’s approach, drills down much further. Specifically, on lateral movement strategies, MITRE uncovered 17 different methods for compromising internal servers. This highlights the point that very few defenses exist in the traditional data center, and those that do are often very well understood by attackers. These defenses are typically OS-based firewalls that all seasoned hackers know how to disable: hackers will disable logging, then tear down the firewall. They can also sometimes leverage an island-hopping attack into vendor or customer systems through private networks or gateways. Or, as in the Starwood breach of Marriott, the attackers got lucky: when the IT systems were merged, so were the exploited systems. This is known as a data lemon, an acquisition that comes with infected and unsecured systems. Also, it should be noted that malicious insiders, employees who are aware of a pending termination or just seeking to augment their income, make up over 30% of reported breaches. In this attack example, a malicious insider simply leverages their access and knowledge to drain all the value from their employer’s systems. So what hardware countermeasures can be put in place to limit east-west, or lateral, attacks within the data center? Today you have three hardware options to secure your data center servers against east-west attacks: switch access control lists (ACLs), top-of-rack firewalls, or something uniquely innovative, Solarflare’s ServerLock enabled NICs.

Often enterprises leverage ACLs in their top-of-rack 10/25/100G switches to protect east-west traffic within the data center. The problem with this approach is one of scale. IT teams can easily exhaust these resources when they attempt comprehensive application-level segmentation at the server. These top-of-rack switches provide between 100 and 1,000 ACLs per port. By contrast, Solarflare’s ServerLock provides 5,000 ACLs per NIC, along with some foundational subnet-level filtering.
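
Some back-of-the-envelope math makes the scale problem obvious. The workload numbers below are purely illustrative, not drawn from any particular deployment:

    # Back-of-the-envelope ACL math; the workload figures are illustrative.
    services_per_server = 20    # apps/containers on one host
    peer_subnets        = 30    # subnets each service must talk to
    directions          = 2     # inbound and outbound rules

    rules_needed = services_per_server * peer_subnets * directions
    print(rules_needed)                       # 1200 rules for ONE server

    switch_port_acls = 1000                   # best case for a ToR port
    nic_acls         = 5000                   # per-NIC capacity cited above
    print(rules_needed <= switch_port_acls)   # False: the port is exhausted
    print(rules_needed <= nic_acls)           # True: it fits in the NIC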

In extreme cases, companies might leverage hardware firewalls internally to further zone off systems they are looking to secure. Here the problem is one of volume. Since these firewalls are used within the data center, they are tasked with filtering enormous amounts of network data; typically the traffic inside a data center is 10X the volume entering it. So for mission-critical clusters or server groups that demand high bandwidth, these firewalls can become very expensive and directly impact application performance. Some of the fastest appliance-based firewalls designed to handle these kinds of volumes are both expensive and add another 2.5 to 3.5 microseconds of latency in each direction. This means that if an intranet server were to fetch information from a database behind an internal firewall, the transaction would see an additional delay of 5-7 microseconds. While this honestly doesn’t sound like much, think of it like compound interest. If the transaction is simple and there’s only one request, then 5-7 microseconds will go unnoticed, but what happens when that employee’s request decomposes into hundreds or even thousands of serialized database calls? The added delay compounds into milliseconds, and for deeply nested workloads into lag a human can actually feel. By comparison, Solarflare’s ServerLock NIC-based ACL approach adds only 0.25 to 0.75 microseconds of latency in each direction.
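
To see the compounding, here is the arithmetic, using midpoints of the latency ranges above:

    # The "compound interest" of per-hop security latency. Figures are the
    # midpoints of the ranges above: appliance ~5.5 us round trip,
    # in-NIC ACLs ~1 us round trip.
    APPLIANCE_US  = 5.5
    SERVERLOCK_US = 1.0

    for calls in (1, 1_000, 100_000):
        print(f"{calls:>7,} serialized calls: "
              f"appliance adds {calls * APPLIANCE_US / 1e6:.4f}s, "
              f"in-NIC ACLs add {calls * SERVERLOCK_US / 1e6:.4f}s")

    #       1 call:  0.0000055s vs 0.0000010s -- both invisible
    #   1,000 calls: 0.0055s    vs 0.0010s    -- milliseconds creep in
    # 100,000 calls: 0.55s      vs 0.10s      -- now a human can feel it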

Finally, we have Solarflare’s ServerLock solution, which executes entirely within the hardware of the server’s own Network Interface Card (NIC). There are NO server-side services or agents, so there is no attackable software surface area of any kind. Think about that for a moment: a server-side security solution with ZERO ATTACKABLE SURFACE AREA. Once ServerLock is engaged, through the binding process with a centralized ServerLock DirectorOne controller, the local control plane for the NIC that manages security is torn down. This means that even if a hacker or malicious insider were to elevate their privilege to root, they would NOT be able to see or affect the security settings on the NIC. ServerLock can test up to 5,000 ACLs against a network packet within the NIC in just over 250 nanoseconds. If your security policies leverage subnet wildcards, the worst-case latency is under 750 nanoseconds. Both inbound and outbound network traffic is checked in hardware. All of the Solarflare NICs within a data center can be managed by ServerLock DirectorOne controllers; today a single DirectorOne can manage up to 1,000 NICs.

ServerLock DirectorOne is a bundle of code delivered as an ISO image that can be installed onto a bare-metal server, into a VM, or into a container. It is designed to manage all the ServerLock NICs within an infrastructure domain. To engage ServerLock on a system, you run a simple binding process that facilitates an exchange of secrets between the DirectorOne controller and the ServerLock NIC. Once engaged, the ServerLock NIC will begin sharing new network flows with the DirectorOne controller, which provides visibility into all the network flows across all the ServerLock-enabled systems within your infrastructure domain. At that point, you can begin defining security policies and placing them in compliance or enforcement mode. In compliance mode, no traffic through the NIC is filtered, but any traffic that is not in compliance with the defined security policies for that NIC will generate alerts. Once a policy is moved into enforcement mode, all out-of-policy packets will have the default action applied to them.
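
The compliance-versus-enforcement distinction is easiest to see in pseudocode. The Python below is my own sketch of the semantics just described, not Solarflare code; the real evaluation happens in the NIC’s hardware:

    # My own illustration of compliance vs. enforcement semantics; the real
    # ServerLock evaluation happens in NIC hardware, not in software.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        src: str       # source prefix, e.g. "10.1." (crude subnet wildcard)
        dst: str       # destination prefix
        port: int
        action: str    # "allow" or "drop"

    def matches(rule, pkt):
        return (pkt["port"] == rule.port
                and pkt["src"].startswith(rule.src)
                and pkt["dst"].startswith(rule.dst))

    def filter_packet(pkt, rules, mode, default_action="drop"):
        for rule in rules:
            if matches(rule, pkt):
                return rule.action
        # No rule matched, so this is out-of-policy traffic.
        if mode == "compliance":
            print("ALERT, out-of-policy flow:", pkt)  # report, don't filter
            return "allow"
        return default_action                          # enforcement mode

    rules = [Rule(src="10.1.", dst="10.2.", port=5432, action="allow")]
    pkt = {"src": "10.9.9.9", "dst": "10.2.0.5", "port": 5432}
    print(filter_packet(pkt, rules, mode="compliance"))   # alerts, "allow"
    print(filter_packet(pkt, rules, mode="enforcement"))  # "drop"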

If you’re looking for the most secure solution to protect your company’s servers, you should consider Solarflare’s ServerLock. It is the most affordable and secure way to protect your valuable corporate assets.

East-West Threat Made Real

Raspberry Pi 3B+ With Power over Ethernet Port in Plastic Case

Many in corporate America still don’t view east-west attacks as a real, let alone a significant, threat. Over the past several years, while meeting with corporate customers to discuss our future security product, it wasn’t uncommon to encounter the occasional ostrich. These are the 38% of people who responded to the June 2018 SANS Institute report stating that they’ve not yet been the victim of a breach. In security we have a saying: “There are only two types of companies, those that know they’ve been breached, and those that have yet to discover it.” While this sounds somewhat flippant, it’s a cold hard fact that thieves see themselves as the predators and view your company as the prey. Much like a pride of lionesses roaming the African savanna for a large herd, black-hat hackers go where the money is. If your company delivers value into a worldwide market, then rest assured there is someone out there looking to make an easy buck from the efforts of your company. It could be contractors hired by a competitor or nation-state actors looking to steal your product designs, a ransomware attacker seeking to extort money, or merely a freelancer surfing for financial records to access your corporate bank account. These threats are real, and if you take a close look at the network traffic attempting to enter your enterprise, you’ll see the barbarians at your gate.

A few months back my team had placed a test server on the Internet with a single “You shouldn’t be here” web page at a previously unused, unadvertised network address. This server had all its network ports secured in hardware so that only port 80 traffic was permitted. No data of any value existed on the system, and it wasn’t networked back into our enterprise. Within one week we’d recorded over 48,000 attempts to compromise the server. Several even leveraged a family of web exploits I’d discovered and reported to the Lotus Notes Domino development team back in 1997 (it warmed my heart to see these in the logs). This specific IP address was assigned to our company by AT&T, but it doesn’t show up in any public external registry as belonging to us, so there was no apparent value behind it, yet 48,000 attempts were made. So what’s the gizmo in the picture above?

In the January 2019 issue of “2600 Magazine, The Hacker Quarterly,” a hacker with the handle “s0ke” wrote an article entitled “A Brief Tunneling Tutorial.” In it, s0ke describes how to use a Raspberry Pi to set up a persistent SSH tunnel to a remote box under his control. This enables the attacker to access the corporate network just as if he were sitting in the office. In many ways, this exploit is similar to sending someone a phishing email that installs a Remote Access Trojan (RAT) on their laptop or desktop, but it’s even better, as the device is always on and available. Yesterday I took this one step further. Knowing that most corporate networks leverage IP phones for flexibility, and that IP phones require Power over Ethernet (PoE), I ordered a new Raspberry Pi accessory called a Pi PoE Switch Hat. This is a simple little board that snaps onto the top of the Pi and leverages the power found on the Ethernet port to power the entire server. The whole computer shown above is about the size of a pack of cigarettes with a good-sized matchbook attached. When the case arrives, I’ll utilize our 3D printer to make matching black panels that will be superglued in place to cover all the exposed ports and even the red cable. The only physically exposed port will be a short black RJ45 cable designed to plug into a Power over Ethernet port, plus two tiny holes so light from the power and signal LEDs can escape (a tiny patch of black electrical tape will cover these once deployed).

When the Raspberry Pi software bundle is complete and functioning correctly, as outlined in s0ke’s article, I’ll layer in accessing my remote box via The Onion Router (Tor) and push my SSH tunnel out through port 80 or 443. This should make it effectively transparent to enterprise detection tools, and Tor should mask the address of my remote box from their logs. In case my Pi is discovered, I’ll also install some countermeasures to wipe it clean when a local console is attached. At this point, with IT’s approval, I may briefly test it in our office to confirm it’s working correctly. Then it becomes a show-and-tell box, with a single PowerPoint slide outlining that east-west threats are real and that a determined hacker with $100 in hardware and less than one minute of unaccompanied access in their facility can own their network. The actual hardware may be too provocative to display, so I’ll lead with the slide. If someone calls me on it, though, I may pull the unit out of my bag and move the discussion from the hypothetical to the real. If you think this might be a bit much, I’m always open to suggestions on better ways to drive a point home, so please share your thoughts.

Raspberry Pi 3B+ with Pi PoE Switch Hat

P.S. The build is underway; the Pi and Pi PoE Switch Hat have arrived. To keep the image as flexible as possible, I’ve installed generic Raspbian on an 8GB Micro-SD card, applied all updates, and have begun putting on custom code. The system is generically named “printer” at this point. Also, a Power over Ethernet injector was ordered so the system can be tested in a “production like” power environment. It should be completed by the end of the month, perhaps in time for testing in my hotel during my next trip. Updated: 2019-01-20

A persistent automated SSH tunnel has been set up between the “printer” and the “dropbox” system, and I’ve logged into the “printer” by connecting via “ssh -p 9091 scott@localhost” on the “dropbox”; this is very cool. There is a flaw in the Pi PoE Switch board, or its setup at this point, as it is pulling the power off the Ethernet port but NOT switching the traffic, so for now the solution utilizes two Ethernet cables, one for power and a second for the signal. This will be resolved shortly. Updated: 2019-01-23
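
For readers wondering what the plumbing looks like, below is a minimal sketch of a persistent reverse tunnel in the spirit of s0ke’s tutorial, not his actual code; the dropbox hostname, account, and tunnel port are placeholders. Run on the “printer,” it exposes the Pi’s sshd on the dropbox’s localhost:9091, which is exactly why “ssh -p 9091 scott@localhost” works there:

    # Minimal sketch of a persistent reverse SSH tunnel (the technique from
    # s0ke's article, not his code). "tunnel@dropbox.example.com" and port
    # 9091 are placeholders. Runs on the Pi ("printer").
    import subprocess, time

    CMD = [
        "ssh", "-N",                         # no remote command, tunnel only
        "-o", "ServerAliveInterval=30",      # notice a dead tunnel quickly
        "-o", "ExitOnForwardFailure=yes",
        "-R", "9091:localhost:22",           # reverse-forward the Pi's sshd
        "tunnel@dropbox.example.com",
    ]

    while True:                              # re-dial forever if it drops
        subprocess.call(CMD)
        time.sleep(15)                       # brief backoff before redialing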

Raspberry Pi Zero on Index Finger

But why risk the Ethernet port not being a powered Ethernet jack, and who wants to leave behind such a cool Raspberry Pi 3B+ platform when something with less horsepower could easily do the job? So shortly after the above intrusion device was functional, I simply moved the Micro-SD card over to a Raspberry Pi Zero. A regular SD card is shown in the picture for the purpose of scale. The Pi Zero is awesome if you require a low-power, small system-on-a-chip (SoC) platform. For those not familiar with the Pi Zero, it’s a $5 single-core 1GHz ARM platform that consumes on average 100mW, so it can run for days on a USB battery. Add to that a $14 Ethernet-to-MicroUSB dongle and again you have a single-cable hacking solution that only requires a generic Ethernet port. Of course, it still needs a tight black case to keep it neat, but that’s what 3D printers are for.

Pi Zero, Ethernet Dongle
& USB Battery
(SD Card for Size Comparison)

Now, this solution will burn through its battery in a couple of days, but as a hacker, if you’ve not established a solid beachhead in that time, then perhaps you should consider another line of work. Some might ask why I’m telling hackers how to do this, but frankly, they’ve known for years, ever since SoC computers first became mainstream. So IT managers beware: solutions like these are more common than you think, and they are leaking into pop culture through shows like Mr. Robot. This particular show has received high marks for technical excellence, and MythBusters would have a hard time finding a flaw. One need only rewatch Season 1, Episode 5 to see how a Raspberry Pi could be used to destroy tapes in a facility like Iron Mountain. Sounds unrealistic? Then you must watch this YouTube video where they validate that this specific hack is in fact plausible. The point is no network is safe from a determined hacker, from the CAN bus in your car, to building HVAC systems, to industrial air-gapped control networks. Strong security processes and policies, strict enforcement, and honeypot detection inside the enterprise are all methods to thwart and detect skilled hackers. Updated: 2019-01-27

Idea + technology + {…} = product

If you’ve been in this industry a while, you may remember the IBM Simon or the Apple Newton, both great ideas for products, but unfortunately the technology just wasn’t capable of fulfilling the promise these product designers had in mind. The holidays provide a unique opportunity to reflect. They also create an environment for an impulse buy, followed by the pause every year to play with my kids (now 21 and 24). 2017 was no different, and so this year, for the first time ever, I picked up not one but three quadcopter drones. After dinner Christmas day, all three were simultaneously buzzing around our empty two-car garage attempting to take down several foam rubber cubes balanced on the garage door opener return beam. Perhaps I should bound this a bit more: a week earlier I’d spent $25 on each of the kids’ drones, not knowing if they would even be interested, and $50 on my own. We had a blast, and if you’ve not flown one, you should really splurge and spend $50 for something like the Kidcia unit; it’s practically indestructible. On the downside, the rechargeable lithium batteries only last about eight minutes, so I strongly suggest purchasing several extra batteries and the optional controller.

During the past week since these purchases, but before flying, I’ve wondered several times why we haven’t seen life-sized quadcopter drones deployed in practical real-world applications. It turns out this problem has waited 110 years for the technology. Yes, the quadcopter, or rotary-wing aircraft, was first conceived, designed, and demonstrated in tethered flight back in 1907. The moment you fly one of today’s quadcopters you quickly realize why they flew in tethered mode back in 1907: crashing is often synonymous with landing. These small drones, mine has folding arms and dual hinged propellers, take an enormous beating and continue to fly as if nothing happened. We put at least a dozen flights on each of the three drones on Christmas day, and we’ve yet to break a single propeller. Some of the newer, more costly units now include collision avoidance, which may actually take some of the fun away. So back to the problem at hand: why has it taken the quadcopter over 110 years to gain any traction beyond concept? Five reasons stand out, all technological, that have made this invention finally possible:

  • Considerable computing power and sophisticated programming in a single ultra-low-power chip
  • Six-axis solid-state motion sensors (3-axis gyroscope, 3-axis accelerometer) also on a single ultra-low power chip
  • Very high precision, efficient, compact lightweight electric motors
  • Compact highly efficient energy storage in the form of lithium batteries
  • Extremely low mass, highly durable, yet flexible propellers

That first tethered quadcopter back in 1907 achieved only two feet of altitude while flown by a single pilot and powered by a single motor driving four pairs of propellers. Two of the pairs were counter-rotating to eliminate the effects of torque, and four men, aside from the pilot, were required to keep the craft steady. Clearly there were far too many dynamically changing variables for a single person to process. Today’s quadcopter drones have an onboard computer that continuously adjusts all four motors independently while measuring the motion of the craft in six axes and detecting changes in altitude (via another sensor). The result is that a properly set up drone can be flown indoors and raised to an arbitrary altitude, where it will remain hovering in place until the battery is exhausted. When the pilot requests the drone move left to right, all four rotor speeds are independently adjusted via the onboard computer to keep the drone from rotating or losing altitude. Controlled flight of a rotary-wing craft, whether a drone or a flying car, requires considerable sensor input and enormous computational power.
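
The scale of that computation is easier to appreciate with a sketch. Below is a stripped-down, single-axis hover controller in Python, a generic PID loop of my own devising rather than any vendor’s flight firmware; the gains and the motor-mixing signs are illustrative. A real flight controller runs loops like this for roll, pitch, yaw, and altitude hundreds of times per second:

    # Generic single-axis PID controller; an illustration of the idea, not
    # any drone vendor's firmware. Gains and mixing signs are illustrative.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, setpoint, measured, dt):
            error = setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    altitude_pid = PID(kp=1.2, ki=0.4, kd=0.05)
    # Called every 4 ms with the altitude sensor's latest reading:
    climb = altitude_pid.update(setpoint=1.50, measured=1.42, dt=0.004)

    def motor_commands(base, climb, roll, pitch, yaw):
        """Quad mixer: correction terms are added to or subtracted from each
        motor so the craft climbs or tilts without unwanted rotation."""
        t = base + climb
        return (t + roll + pitch - yaw,   # front-left
                t - roll + pitch + yaw,   # front-right
                t + roll - pitch + yaw,   # rear-left
                t - roll - pitch - yaw)   # rear-right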

Petroleum-powered quadcopters are available, but to overcome issues with variations in engine speed and latency, the time from sensor input to action, they often utilize variable-pitch propellers with electronic actuators. These actuators allow for rapid and subtle changes in propeller pitch, adjusting for variable inputs from the sensors and the pilot. While gas-powered drones often provide greater thrust, for most applications modern drones are assembled using electric motors. These motors are extremely efficient, respond rapidly to subtle changes in voltage by delivering predictable rotational speeds, and are very lightweight. Coupled with highly efficient lithium batteries, they make an ideal platform for building drones.

The final components making these drones possible are advanced plastics and carbon fiber that now provide very lightweight propellers that can take considerable abuse without fracturing or failing. When I grew up in the late 1960s and early 70s, it didn’t take much to break the rubber-band-driven red plastic propellers that came with the balsa wood planes of that era. Today I can crash my drone into the garage door at nearly full speed and all eight propeller blades remain scratch-free.

So next time you interact with a product and wonder why it doesn’t perform to your expectations, perhaps the technology has still not caught up to the intent of the product designers. Happy Holidays.

Container Networking is Hard

Last night I had the honor of speaking at Solarflare’s first “Contain NY” event. We were also very fortunate to have two guest speakers, Shanna Chan from Red Hat and Shaun Empie from NGINX. Shanna presented OpenShift, then gave a live demonstration in which she updated some code, rebuilt the application, constructed the container, and deployed it into QA. Shaun followed that up by reviewing NGINX Plus with flawless application delivery and live-action monitoring, then rolled out and scaled up a website with shocking ease. I’m glad I went first, as both would have been tough acts to follow.

While Shanna and Shaun both made container deployment appear easy, both demonstrations focused on speed of deployment, not on maximizing the performance of what was being deployed. As one dives into the details of how to extract the most from the resources we’re given, we quickly learn that container networking is hard, and performance networking from within containers is an order of magnitude more challenging. Tim Hockin, a Google engineering manager, was quoted in the eBook “Networking, Security & Storage” by THENEWSTACK as saying, “Every network in the world is a special snowflake; they’re all different, and there’s no way that we can build that into our system.”

Last night, when I asked those assembled why container networking is hard, no one offered what I thought was the obvious answer: we expect to do everything we do on bare metal from within a container, and we expect that the container can be fully orchestrated. While that might not sound like “a big ask,” when you look at what is done to achieve performance networking within the host today, it actually is. Perhaps I should back up: when I say performance networking within a host, I mean kernel-bypass networking.

For kernel bypass to work, it typically “binds” the server NIC’s silicon resources pretty tightly to one or more applications running in userspace. This tight “binding” is accomplished using one of several common methods: Solarflare’s Onload, Intel’s DPDK, or Mellanox’s RoCE. Each approach has its own pluses and minuses, but that’s not the point of this blog entry. With any of the above, it is this binding that establishes the fast path from the NIC to host memory. The objective of this “binding,” though, runs counter to what one needs when doing orchestration: a level of abstraction between the physical NIC hardware and the application/container. That level of abstraction can then be rewired so containers can easily be spun up, torn down, or migrated between hosts.

It is this abstraction layer where we all get knotted up. Do we use an underlying network leveraging MACVLANs or IPVLANs, or an overlay network using VXLAN or NVGRE? Can we leverage a Container Network Interface (CNI) to do the trick? This is the part of container networking that is still maturing. While MACVLANs provide the closest link to the hardware and afford the best performance, they’re a layer-2 interface, and running unchecked in large-scale deployments they could lead to a MAC explosion, resulting in trouble with your switches. My understanding is that with this type of connectivity there is no real entry point to abstract MACVLANs into, say, a CNI so one could use Kubernetes to orchestrate their deployment. Conversely, IPVLANs are a layer-3 interface and have already been abstracted into a CNI for Kubernetes orchestration. The real question is what performance penalty one can observe and measure between a MACVLAN-connected container and an IPVLAN-connected one. All work to be done. Stay tuned…
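
For the curious, here is roughly what wiring up a MACVLAN child interface looks like from Python using the pyroute2 library. The interface names and address are examples, and a production CNI plugin does considerably more, moving the interface into the container’s network namespace and handling IPAM; treat this as a sketch of the underlying primitive, not a CNI:

    # Sketch of creating a MACVLAN child interface with pyroute2 (needs root).
    # "eth0", "mv0", and the address are examples; a real CNI plugin would
    # also move mv0 into the container's network namespace.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    parent = ipr.link_lookup(ifname="eth0")[0]   # physical NIC to ride on

    ipr.link("add",
             ifname="mv0",
             kind="macvlan",
             link=parent,
             macvlan_mode="bridge")              # siblings can reach each other

    idx = ipr.link_lookup(ifname="mv0")[0]
    ipr.addr("add", index=idx, address="192.168.1.50", prefixlen=24)
    ipr.link("set", index=idx, state="up")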

OCP & the Death of the OEM Mezz

Since the early days of personal computing, we’ve had expansion cards. The first Apple and Radio Shack TRS-80 micro-computers enabled hackers like me to buy a foundational system from the OEM, then over time upgrade it with third-party peripherals. For example, my original TRS-80 Model III shipped with 4KB of RAM and a cassette tape drive (long-term data storage, don’t ask). Within a year I’d maxed the system out to 48KB of RAM (16KB per paycheck) and a pair of internal single-sided, single-density 5.25” floppy drives (90KB of storage per side). A year or so later the IBM PC debuted and transformed what was a hobby for most of us into a whole new market: personal computing (PC). For the better part of two decades IBM led the PC market with an open-standards approach; yeah, they brought out Micro Channel Architecture (MCA) and PC Network, but we won’t hold that against them. Then in 2006, as the push toward denser server computing reached a head, IBM introduced the BladeCenter H, a blade-based computing chassis with integrated internal switching. This created an interesting new twist in the market: the OEM-proprietary mezzanine I/O card format (mezz), unique to the IBM BladeCenter H.

At that time I was with another 10Gb Ethernet adapter company managing their IBM OEM relationship. To gain access to the new specification for the IBM BladeCenter H mezz card standard, you had to license it from IBM. This required that your company pay IBM a license fee (a serious six-figure sum) or provide them with a very compelling business case for how your mezz card would enable IBM to sell thousands more BladeCenter H systems. In 2006 we went the business case route, and in 2007 delivered a pair of mezz cards and a new 32-port BladeCenter H switch for the High-Performance Computing (HPC) market. All three of these products required a substantial amount of new engineering to create OEM-specific products for a captive IBM customer base. Was it worth it? Sure, the connected revenue was easily well into eight figures. Of course, IBM couldn’t be alone in having a unique mezz card design, so soon HP and Dell debuted their blade products with their own unique mezz card specifications. Now, having one, two, or even three OEM mezz card formats to comply with isn’t that bad, but over the past decade nearly every OEM from Dell through Supermicro, and a bunch of smaller ones, has introduced its own unique mezz card format.

Customers define markets, and huge customers can really redefine a market. Facebook is just such a customer. In 2011 Facebook openly shared their data center designs in an effort to reduce the industry’s power consumption. Learning from other tech giants, Facebook spun this effort off into a 501(c) non-profit called the Open Compute Project Foundation (OCP), which quickly attracted rock-star talent to its board, like Andy Bechtolsheim (SUN & Arista Networks) and Jason Waxman (Intel). Then in April of last year Apple, Cisco, and Juniper joined the effort, and by then OCP had become an unstoppable force. Since then Lenovo and Google have hopped on the OCP wagon. So what does this have to do with mezz cards? Everything. OCP is all about an open system design with a very clear specification for a whole new mezz card architecture. Several of the big OEMs and many of the smaller ones have already adopted the OCP specification. In early 1Q17, servers sporting Intel’s Skylake Purley architecture will hit the racks, and we’ll see the significant majority of them supporting the new OCP mezz card format. I’ve been told by a number of OEMs that the trend is away from proprietary mezz card formats and toward OCP. Hopefully, this will last for at least the next decade.

Solarflare at DataConnectors Charlotte NC

You can never know too much about Cyber Security. Later this month Data Connectors is hosting a Tech Security event on Thursday, January 21st at the Sheraton Charlotte Airport Hotel, and we’d like to offer you a free VIP Pass!

Solarflare will be at this event showing our latest 10Gb and 40Gb Ethernet server adapters and discussing how servers can defend themselves against a DDoS attack through our hardened kernel driver, which provides SYN cookies, white/black listing by port and address, and rate limiting. To learn more, please stop by.