A 120 kW Cabinet and the Future of Power Demand

When I get together with friends, once in a while questions about AI come up, and invariably the discussion steers toward a reference to Skynet. For those not plugged into the zeitgeist, Skynet is the AI in “The Terminator” that is out to exterminate humanity. Now, as a chess player, I’ll acknowledge that while the possibility exists, the likelihood of humanity going down that path is extremely low. My latest concern is the coming battle over energy.

NVIDIA held its annual GTC (GPU Technology Conference) in San Jose earlier this week. Jensen Huang, their CEO, unveiled their next-generation DGX [48:24], an AI supercomputer in a single rack. For those not in technology, think of a cabinet-sized box six feet tall, two feet wide, and three feet deep that draws an astonishing 120 kW of power while performing 1.4 ExaFLOPs. For contrast, the DGX consumes energy at a rate equal to 100 average American homes (a home uses about 10,500 kWh per year). It does math at a rate equal to all eight billion people on the planet each doing one calculation per second on a calculator, non-stop, for roughly five and a half years. That’s what this machine can complete in one second.
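Both comparisons fall out of simple arithmetic. Here is a sketch of that back-of-envelope math using only the figures above (120 kW draw, 10,500 kWh/year per home, 1.4 ExaFLOPs, eight billion people at one calculation per second):

```python
# Back-of-envelope check of the two DGX comparisons above.
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000

dgx_power_kw = 120                          # rack power draw
home_kwh_per_year = 10_500                  # average American home
dgx_kwh_per_year = dgx_power_kw * 24 * 365  # 1,051,200 kWh of energy per year
homes_equivalent = dgx_kwh_per_year / home_kwh_per_year

dgx_ops_per_second = 1.4e18                 # 1.4 ExaFLOPs
people = 8e9                                # one calculation per second each
years_of_humanity = dgx_ops_per_second / people / SECONDS_PER_YEAR

print(f"{homes_equivalent:.0f} homes")      # → 100 homes
print(f"{years_of_humanity:.1f} years")     # → 5.5 years
```

In other words, one second of DGX output equals humanity punching calculators for about five and a half years.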

In November, there were precisely two publicly announced systems in the world capable of achieving an ExaFLOP: Frontier and Aurora, both US Department of Energy supercomputer clusters. One is in Oak Ridge, TN, the other at Argonne National Lab, and each consumes roughly 200X the power of the DGX above. These are massive systems, often in the 200-rack range, though the move to GPUs has improved this: Frontier has only 74 racks, while Aurora has 166. The main point Jensen was making is that a single DGX approaches the computational power of these data-center-scale clusters.

Those close to this technology would argue that NVIDIA is gaming its ExaFLOPs number because their calculations differ from those Frontier and Aurora used to make the Top500 list. Frontier and Aurora report their numbers while running the Linpack benchmark using double-precision 64-bit floating-point numbers. They cannot employ mathematical tricks that shorten number formats, reduce precision, or optimize matrix multiplication with innovative new algorithms. Jensen, on the other hand, is a magician performing unconstrained: he tosses out his ExaFLOPs number using the FP4 data type, the absolute smallest number format defined today, 1/16th the size of the numbers used for Linpack, and trust me, size matters in more ways than one. Furthermore, Jensen’s ExaFLOPs metric benefits from many of the latest tricks, including ways to shrink the number size and reduce the number of terms you need to operate on, some done in the networking cards.
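To make the size difference concrete, here is a sketch that enumerates every value an FP4 number can hold, assuming the E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) defined in the OCP Microscaling formats; FP64, by contrast, distinguishes nearly 2^64 values:

```python
# Enumerate all values representable in FP4 (E2M1: 1 sign, 2 exponent,
# 1 mantissa bit). The E2M1 layout is an assumption here, matching the
# OCP MX spec's FP4 definition rather than anything NVIDIA has published.
def fp4_e2m1_values():
    bias = 1
    values = set()
    for sign in (1.0, -1.0):
        for exponent in range(4):        # 2 exponent bits → 0..3
            for mantissa in range(2):    # 1 mantissa bit → 0..1
                if exponent == 0:        # subnormal: no implicit leading 1
                    v = sign * (mantissa / 2) * 2 ** (1 - bias)
                else:                    # normal: implicit leading 1
                    v = sign * (1 + mantissa / 2) * 2 ** (exponent - bias)
                values.add(v)
    return sorted(values)

vals = fp4_e2m1_values()
print(vals)        # [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, ...]
print(len(vals))   # → 15 distinct values (+0 and -0 collapse to one)
```

Fifteen distinct values, topping out at ±6. Every FP4 multiply is trivially cheap compared to a 64-bit one, which is exactly why the marketing ExaFLOPs soar.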

Let’s get back to power. The US energy grid is under intense pressure from the rapid growth in demand resulting from the widespread adoption of Electric Vehicles (EVs), including electric commercial trucks and, soon, tractor-trailers. Tesla is rolling out a new charging station every day, Shell and others are looking to jump into this market, and all this power for EVs must come from somewhere. Thankfully, while EV electric demand is growing, homeowners are increasingly installing solar panels on their roofs to offset their use, particularly if they have an EV or two in the garage. The US federal government recently adopted new rules designed to move most new vehicle production to EVs and hybrids by 2032. While the growth in solar may offset the drain EVs place on the grid, NVIDIA’s new DGX changes everything.

Current data centers were designed around racks consuming 10-20 kW each; some newer facilities permit up to 70 kW per cabinet with a 300 kW commitment, but this is pretty new. All of this includes the matching cooling, which is just as vital but often overlooked. NVIDIA’s new DGX consumes 6-12X more power than a typical deployed rack, and here lies the first part of the problem. As DGX systems begin shipping, we will see data center demand for power explode like never before. People have realized that the benefits of an AI grow as the models on which it is trained grow geometrically, and these larger models need even larger DGX-class systems. Unless something changes soon, at some point in the future, we may be competing with AI systems for electrical power.

6 More Reasons Tesla is Far More Than a Car Company

If nine reasons weren’t enough, here are six more that I forgot about in my prior post. Again, I have no financial interest in Tesla at the time of this posting.

  1. Semi Tractor-Trailer Trucks – Yes, they are producing trucks; in the video link, you’ll see one hauling a full load 500 miles in nine hours, and that was a year ago. In addition, their work on full self-driving and truck charging stations will eventually enable them to dominate this market.

  2. Data – Tesla receives real-time telemetry data from every vehicle that has a charge. This Friday, I checked on my Tesla, which is being delivered Monday, and the sales rep said sure, she could tell me where my car was. She said that GPS tracking is enabled from the moment it rolls off the factory floor; think Find My Tesla. Several mouse clicks later, she informed me that it was two states away and shared the exact city it was traveling through. Add all the traffic data and video footage to this, and it’s a wonder that Tesla has data centers big enough to digest it all. Then there’s the power data from home generation, charging centers, and home chargers, merged with weather and news data, and Tesla is sitting on a goldmine. I’ve read that Ford and others believe Tesla’s data alone gives it an 18-month competitive advantage over all the other EV companies.

  3. Artificial Intelligence (AI) – Here, Tesla is perhaps king of the hill. Their Full Self Drive version 12 is a fantastic bundle of AI code. The video footage on YouTube of it navigating a busy Costco parking lot on a Saturday morning is excellent. Any teenager can handle highway driving, but put them in a busy Costco on Saturday morning, and their knuckles become white as they turn the radio down to concentrate. I was shocked when the Tesla didn’t develop a case of road rage at some of the idiots pulling out of parking spaces without looking, even a police car backing up because he couldn’t get around someone [4:06]! This is only the very obvious tip of their AI iceberg; their code base extends well beyond self-driving.

  4. Chip Development – Developing a leading-edge AI chip today is a $50-100 million expense. This is not something to take lightly. Tesla used NVIDIA chips but shifted a few years ago to designing and producing its own. While Ford and others have to buy GPUs or custom ASICs from third parties, often for hundreds of dollars apiece, Tesla’s unit cost for a more custom-tailored and likely more powerful chip is a fraction of this. This is an area that will put serious distance between Tesla and its car and Robo-Taxi competitors.

  5. Robotics – Tesla is all about deploying industrial robots, think big arms, on their factory floors, but they’ve also shown off humanoid robots doing a wide range of tasks. This includes what appears to be learning. If you have not seen their Optimus robot, you’re missing out. This thing is a humanoid bot that has been built to replace people on the factory floor. Just check out this video from two months ago, and you’ll see how close they’re getting. This is a substantial improvement from their video of ten months ago. Tesla will be rolling out factories in the future that are designed to accommodate these robots. They’re not spending all this R&D on cute YouTube show-n-tell videos.

  6. Cobranding and Technology Sharing Between Musk Companies – This is already happening with the Roadster. Elon mentioned at the end of last year that the Roadster would deliver a sub-one-second zero to sixty thanks to technology they are “borrowing” from SpaceX. Imagine driving a mass-market consumer car with actual rocket technology; Jay Leno may be losing sleep over this one. Then there’s Neuralink. Sure, it’s about helping people today, through their first human trials, who’ve lost mental and physical function. But a decade into the future, our thoughts will power robots and cars, brought to you by Tesla enhanced with Neuralink.

So this brings us to 15 reasons why Tesla is far more than a car company. It may be the next Apple, only time will tell.  

9 Reasons Tesla is Far More Than a Car Company

If you’ve bought or watched Tesla’s stock over the past few years, it’s been a real roller coaster from under $20 to nearly $400 and today around $170, but all the while, they’ve kept innovating. Tesla is more than just a car company; it is positioning itself to become a global energy provider. To be clear, while I’ve invested in Tesla in the past, I have no financial positions connected with the company at the time of this writing.

  1. Solar Panels – Tesla’s panels are 20% efficient, which is generally where the market is now. Sure, in the lab, others have pushed these numbers over 40% using various optical and electrical techniques, but in practical residential deployments, 20% is the norm. Tesla’s panels are stylish, as stylish as black panels can be, with their main visual highlight being that they sit as seamless and flush to the roof as possible. In this case, though, the panels represent only part of the solution.
      
  2. Solar Roof Tiles – Every 20 to 30 years, most roofs, at least those with asphalt shingles, must be replaced. Tesla has a complete roof system that uses solar roof tiles designed to withstand 120 MPH winds while being roughly 15% efficient. Unlike solar panels, which you install only on roof lines that face the sun most of the day, the roof tiles cover the whole roof. While this costs more, it creates the look of a traditional roof. The point here is that Tesla offers a unique alternative to panels, which some homeowners’ associations have banned.
      
  3. Batteries – Be they for a home Powerwall system or their cars, this is an area where Tesla is leading innovation. They are spending heavily on R&D and have their own battery plants, nearly a half dozen, with more under construction, and have been in talks over the years about acquiring mining facilities to source the raw materials. The latest cars implement LFP technology batteries, but Tesla is looking well beyond this technology. Battery chemistry is central to nearly all things Tesla.
     
  4. Home Electronics – By this, I mean all the components that take power from solar and combine it with their Powerwall batteries, inverters, home car chargers, and the transfer switch, plus all the other bits of glue electronics necessary to deliver a complete home cogeneration system capable of distributing excess power, beyond the capacity of the Powerwalls and any plugged-in cars, back onto the grid. People want solutions, not point products, and together the above elements give homeowners a unique option.
      
  5. Their App – The Tesla smartphone application ties all the above together: home generation, grid power, car charging and power consumption, the works. This is the same App you use to manage your car(s). The App and all of the data it manages and collects could be one of Tesla’s biggest assets in years to come.
     
  6. Home Charging – With LFP batteries, I can leave the house in a Model 3 (Highland) daily at a full charge. When I visit my daughter in Richmond, VA (we live in Raleigh, NC), I’ll need to charge while we eat lunch or dinner for the ride home, but that will likely cost about the price of a single mixed drink. The same is true when we visit the in-laws in Charlotte. Home charging is a game changer. Duke Energy charges $0.19 per kWh fully burdened (taxes and all fees included); this works out to about $0.045 per mile, depending upon one’s driving. Not since we fed our horses the hay we grew in our own fields has the fuel for transportation been cheaper.
     
  7. Charging Stations – Tesla is putting in one new charging station every business hour of the day. Many of these new stations will have pull-through stalls so trucks can charge without having to unhook trailers. Even more interesting, Ford F-150 Lightning and Mustang Mach-E vehicles can now install the Tesla App and charge on the Tesla network. Other EV companies are following suit, and later this year, all GM EVs will be able to charge using the Tesla network. Once GM sells an EV, all the after-market charging revenue will go to Tesla for the home charger and the charging stations.
     
  8. Driverless Taxis – For the past year, Tesla has not permitted customers who’ve leased Model 3s or Ys to purchase their vehicles at the end of the lease. It is rumored that Tesla is looking to roll out its fleet of driverless taxis later this year or early next. If Full Self Driving version 12 is any clue, they are getting really close. This will totally crush Uber and Lyft, and validate Cruise and Waymo.

  9. Cogeneration – At some point, Tesla will have a critical mass of home solar customers in a market, region, or country, and it will begin testing branded cogeneration back into these grids using a federated model. This may deliver better rates to Tesla and homeowners and significantly more bargaining power than they currently have as individuals.  
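The home-charging math in item 6 is worth making explicit. The ~4.2 miles/kWh efficiency below is my own assumption for a Model 3; only the $0.19/kWh rate comes from above, and the gas price is invented for contrast:

```python
# Cost per mile of home charging vs. gasoline.
rate_per_kwh = 0.19          # Duke Energy, fully burdened ($/kWh)
miles_per_kwh = 4.2          # assumed Model 3 efficiency

cost_per_mile = rate_per_kwh / miles_per_kwh
print(f"${cost_per_mile:.3f} per mile electric")   # → $0.045 per mile electric

# For contrast: a 30 MPG gas car at an assumed $3.20/gallon
gas_cost_per_mile = 3.20 / 30
print(f"${gas_cost_per_mile:.3f} per mile gas")    # → $0.107 per mile gas
```

Under these assumptions, home charging runs less than half the per-mile cost of gasoline, before even counting reduced maintenance.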

Perhaps even more so than Apple or Google, Tesla positions itself as a critical player in the worldwide technology market. I haven’t even mentioned all the data it harvests from its cars, how that can be used, and the work they’re doing in AI; perhaps that’s a future blog.

The Huge Security Threat Posed by Chinese EVs

Last year, the US Congress held hearings on TikTok and debated the security of the platform, the data it collected, and what it may be sending back to China. This past month, we learned that Temu, a shopping app owned by Pinduoduo, China’s second most popular online shopping site, is very sophisticated spyware. This shopping application was heavily hyped during the last two Super Bowls with the slogan “Shop Like a Billionaire,” and this “free” application was the second most popular free app on Apple’s App Store following the Super Bowl. Researchers have found that the Android version can escalate its privileges and install a rootkit. At that point, its data collection engine is ALWAYS running in the background, even when you haven’t used Temu since the last time you rebooted your phone. The extent to which this program harvests data from Android phone users is still being determined, but we know it collects a user’s location, contacts, calendars, notifications, photo albums, social media account data, and chat sessions, all without consent. And this is all on a phone; imagine if it were a car.

Unlike regular Internal Combustion Engine (ICE) vehicles, Electric Vehicles (EVs) are rolling data centers. Most are designed to support some form of assisted driving and, eventually, full self-driving. Therefore, many have full 360-degree camera coverage outside the car and cameras inside it as well. If you own a Tesla, have you ever used “Dog Mode?” Tesla has nearly full camera coverage of everything inside the car, so for clueless teenagers and parents, the backseat is no longer for private “discussions.” While this is a Tesla-branded feature, you can bet that other manufacturers have similar in-vehicle cameras, even if they are not branded and announced features. “Dog Mode” is a side effect of having a camera to monitor driver engagement for self-driving enablement; Tesla just got clever and repurposed it.

Are conversations in an EV private? Doubtful. Since most EVs also have voice control, this means there’s always a hot mic. Most of these EVs also include a 5G cellular data connection back to the manufacturer for “over-the-air updates” and to send video, traffic, and mapping data back to their road and traffic mapping systems. Since data is flowing both ways, it’s open to being exploited.

Next week, I take delivery of a new Tesla Model 3. Honestly, I look forward to “over-the-air updates,” Sentry Mode, Dog Mode, voice commands, and the whole enchilada. Still, even then, I won’t do or say anything in or around my car that will give away any national secrets, mainly because I don’t have any. Product or technology secrets, on the other hand, are possible. As many of you know, I work in high-tech at a semiconductor company, so innovation and intellectual property are part of our business. I seriously doubt Tesla would jeopardize its reputation by storing cabin recordings.

On the other hand, EVs designed in China and built by Chinese firms are a whole different story. If I took a business meeting in an EV designed in China, I’d worry that what I said would be played out of speakers half a world away shortly after my call concluded. So I would never consider purchasing this class of vehicle. I fear that others will not be so savvy, and this will be just another example of our secrets being exfiltrated because the less informed were busy “Shopping Like a Billionaire!”

Freedom, Dignity & Self-Driving Cars

I vividly remember walking, then sprinting, back from the mailbox after I’d torn open the envelope addressed to me from the New York State Department of Motor Vehicles. It contained my NY State driver’s license, printed on thin blue cardboard. After seeing that little blue slip, I bolted the rest of the way up the driveway to my father that May. He was so proud and quickly agreed when I asked to take out the Pontiac Ventura, essentially a classier Chevy Nova, for a ride. That feeling of freedom, the rush of adrenaline once I cleared the neighborhood and hit the accelerator, has never left me. Since that Ventura, I’ve had a Road Runner, a Mustang, motorcycles, convertibles, and even a Slingshot, and that feeling of freedom and rush of acceleration is still just as acute today.

Several years ago, my mom, whom I love dearly and who is in her eighties, was diagnosed with a form of dementia. That month, we installed a tracker on her car in case of emergencies. While she has owned an iPhone for years, she never quite grasped how maps work, and the GPS in her car met most of her needs. Last summer, my younger brother and I, along with her neurologist, discussed the situation with Mom and determined that her condition had deteriorated to the point where she was no longer safe behind the wheel. While she easily carries on intelligent conversations about past and current events and can do fundamental math problems in her head, her reaction time and situational awareness had diminished to the point where she was at much greater risk behind the wheel than ever. Even though Mom was driving a bright red convertible and hadn’t yet gotten into an accident, a minor event triggered our path to this judgment. We also knew how much it would have torn her up if she had hurt someone else because of her condition. With the sale of her car and the surrender of her driver’s license, she was forever changed. That permanent loss of some measure of control has reduced her perception of self-worth and impacted her sense of dignity. It was a tough call. She has family, friends, and an aide to help her get out and run errands, but it’s not the same.

As we become adults, one of our most basic freedoms is being granted the privilege of driving whenever and wherever we want. With a simple twist of a key, placing our hands on the wheel, and shifting the car into gear, we’re in control, empowered to go wherever we like. Driving, at its very essence, is the ultimate example of power. When I talk about self-driving cars, especially with my generation and my mom’s, many become highly defensive, and I often hear, “I’ll never let a car drive me around.” Sure, my friends will quickly jump in the back seat of an Uber with a driver they’ve never met because a human is “in control.” Those same people, though, would be very reluctant, at this point, to be passengers in a driverless cab. They’ve said it point blank.

A few years into the future, we’ll be able to climb into a Tesla and say, “Take me to the market.” The car will open the garage, pull out, close the garage, and safely drop me off in front of the market a few miles and a dozen or so minutes later. When I’m done shopping, as I approach the check-out, I’ll tap a button in the Tesla app on my phone. As I exit the store, my car will pull up; I’ll load the groceries, then myself, into the car and ask it to “take me home.” Which it will. As we roll into the garage, the vehicle will pull in and center itself over the inductive charging mat on the floor, then close the garage, unlock the doors, and pop the trunk and/or frunk, depending on where it sensed I had placed the groceries. It may even alert me via my watch or phone to empty those compartments if they remain open and full too long.

During the ride to the market, while the car is self-driving, I could make suggestions to change lanes, go slower or faster, and, in doing so, retain some degree of control over my artificially intelligent pilot. Much like an admiral might instruct the flagship’s captain.

I expect that a self-driving car will help me retain my freedom well beyond the point, several decades from now, when my safety behind the wheel will become a point of discussion. Mom would never accept a self-driving car, even if it helped her retain her freedom, because her understanding and grasp of technology were never strong enough. There is still sufficient mistrust in self-driving because driving is a complex task with many nuances that are hard to define and test. For example, snow-covered roads in a blizzard or oncoming emergency vehicles with lights and sirens. I’ve seen humans make incorrect responses in both these cases many times, yet we expect, no, we require, that our artificial intelligence engines do better, and they will.

The most significant value proposition of self-driving might be how it will extend our freedom to roam well beyond the point at which we are safe behind the wheel. Though I’d never advocate standing on the driver’s seat while the vehicle is in motion and self-driving!

How AI Will End Everything, As We Know It

…and it’s not Skynet, not even remotely close.

At birth, every human is granted one gift: time. Some receive a few moments, while others have more than a century, but in the end, what we do with this time is what matters most. Did we enrich ourselves, our families, our communities, or humanity?

Many things compete for our time: families, the need to work to support them, and the hobbies that hopefully make us and our communities better. All of them actively seek our engagement, most with age-old hooks like love, romance, and greed. At the center of engagement is addiction. There is hardly a person alive who hasn’t felt the rush of scoring a goal, completing a challenging task, or checking a box on a to-do list; our brains are hardwired to reward us when we finish something. Many of us struggle our whole lives to master our addictions. One of mine was computer gaming. It started with arcades in the 1980s, which I thought I’d overcome with a bit of work and a limited supply of quarters. Then PC games arrived. Sure, I stumbled around in the dark cave of Zork for a while and hung with Leisure Suit Larry for perhaps a bit too long. Still, it wasn’t until Everquest that I understood how the algorithms in these games manipulated my internal reward system to sustain my engagement. I lost a week of my life to Everquest; then one Sunday morning, I stuffed it back in its box, sold it on eBay, and I’ve barely touched a computer game since.

Yesterday, 60 Minutes ran a story about the rise of mobile sports betting. Platforms like FanDuel, DraftKings, and many more now enable gamblers to place micro-bets during a game on the next play within that game. The story explained that behind these micro-bets is an AI engine that dynamically adjusts the odds in real time. It then chased the obvious human side of the addiction angle and missed the bigger story behind the story: how these AI engines are designed to leverage our addictive tendencies to increase revenue.

The real question is, what drives the AI engines to arrive at these odds? Many users likely assume it’s doing something similar to parimutuel betting, what you often see in horse racing: the AI sees all the bets being made in real time, adjusts the odds while accounting for the return the house requires, then pays winners from the betting pool. This is the most obvious explanation, likely the one these companies will give for how the odds are computed, and frankly the easiest for most people to understand. Unfortunately, it may be the furthest thing from the truth; my bet is that these AI engines are the next generation of a three-card Monte dealer: they are playing the man.
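For readers unfamiliar with parimutuel pricing, here is a minimal sketch of that model: all bets go into a pool, the house removes its takeout, and the remainder is divided among the winning bets. The pool amounts and the 15% takeout below are invented for illustration:

```python
# Minimal parimutuel pool: the house's return is fixed by the takeout,
# and the "odds" are just the remaining pool redistributed to winners.
def parimutuel_payout_per_dollar(pools, winner, takeout=0.15):
    """Payout per $1 bet on `winner`, given total $ bet on each outcome."""
    total_pool = sum(pools.values())
    net_pool = total_pool * (1 - takeout)   # house keeps the takeout
    return net_pool / pools[winner]

# Illustrative pools: $6,000 bet on "run", $4,000 on "pass"; "pass" wins.
pools = {"run": 6_000, "pass": 4_000}
print(parimutuel_payout_per_dollar(pools, "pass"))   # → 2.125 ($2.125 per $1)
```

Note what this model does not do: it never looks at who is betting. The house take is fixed, and the odds are purely a function of the pool, which is exactly why I doubt it is what these platforms actually run.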

These platforms exist to enrich their owners, so at their core, it’s entirely about engagement, because engagement drives revenue. Therefore, they’re optimized to keep their finger on our addiction buttons. It’s not unlike why Benny Binion in the 1950s gave the gamblers in his casino free booze: if you felt like a high roller, you’d begin acting like one. The AI engines in FanDuel and the other mobile sports gambling platforms exist for one reason, to make money, and Benny taught them how to improve revenue by increasing engagement. I have no proof of how these AI engines work, so the rest of my advice has no basis more reliable than my own meandering decades of software development experience…

A newbie arrives with no individual data set for the FanDuel AI engine to operate on, so it will default to a profile that drives engagement based on subtle details in their login profile. The AI will then tilt the odds in the newbie’s favor to achieve its objective: setting the hook in its new mark. Much like a three-card Monte dealer, who lets the mark win one or two before sucking them in for the big bet they will then lose. For those who aren’t aware, a perfect three-card Monte dealer is one part illusionist, one part accountant, and one part expert psychologist. The illusionist uses extremely well-practiced sleight of hand to create, at every choice offered, exactly the perception the audience is meant to have, so the mark “knows” which card is the obvious winner. The accountant tracks earnings and establishes forecasts based on various player profiles. The psychologist constantly reads the gambler and his influencers, if there are any, and quickly determines which player profile to apply, adjusting those assessments as things play out while always remaining focused on playing the man.

The FanDuel AI engine very likely operates precisely like the highly skilled three-card Monte dealer described above. It will offer several favorable, easy-to-win micro-bets in the initial series to create some wins and begin setting the hook in the newbie. It will then shift the odds against the newbie, hoping to force a few minor losses to build a realistic profile of the newbie’s decision-making patterns, their timing between bets, and the degree to which they are willing to sustain risk for future reward. During this early data-gathering phase, the AI engine will likely keep the player at an overall odds advantage to make them feel like a winner.
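To be explicit that this is speculation: the toy model below is my own sketch of the hook-setting behavior I just described, not anything FanDuel has disclosed. Every number in it is invented. It simply tilts a bet’s win probability in the player’s favor while they are new, then decays toward a house edge as the profile matures:

```python
# Toy model (pure speculation): odds tilt in the newbie's favor early,
# then decay toward a house edge as the engine builds a player profile.
def effective_win_probability(bets_placed, fair_p=0.5,
                              newbie_boost=0.15, house_edge=0.05,
                              profile_window=20):
    """Win probability offered to a player after `bets_placed` micro-bets."""
    maturity = min(bets_placed / profile_window, 1.0)  # 0 = brand new, 1 = fully profiled
    return fair_p + (1 - maturity) * newbie_boost - maturity * house_edge

for n in (0, 10, 20, 40):
    print(f"after {n:2d} bets: win probability {effective_win_probability(n):.2f}")
# after  0 bets: win probability 0.65   (hook-setting)
# after 10 bets: win probability 0.55
# after 20 bets: win probability 0.45   (house edge fully applied)
# after 40 bets: win probability 0.45
```

The point of the sketch is that such an engine prices the player, not the game, which is exactly what a parimutuel pool can never do.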

To prove my thesis, we need only two gamblers: a regular, experienced FanDuel gambler and a brand-new one, on different networks (one on Wi-Fi, the other on cellular, so they aren’t coming from the same ISP network address), sitting side by side to compare the odds offered for matching micro-bets. I assert that the odds these two gamblers see for the same micro-bet opportunities at the same time will be inconsistent.

Perhaps today, during the Super Bowl, someone will prove me wrong; I hope you do.

AI engines will be leveraged in a great many ways, and a great many will improve our lives. A few, like those mentioned above, will not. We need to recognize these for what they are, simple forms of entertainment, and not empower them to dominate our lives.

A Decade Old MacBook Pro

In 2012, I invested in my daughter, who was studying social work, by buying her a new MacBook Pro, the last one with a DVD drive. My road to Mac fanboy has been one of the longest. My first professional job, in 1983, was as a programmer hired to write educational software for the Apple II+. That lasted almost nine months until I left for college to finish my degree. It wasn’t until late in the 1980s, while working at IBM Research in New York doing final assembly and distribution on the as-yet-unannounced IBM RT/PC, that I encountered one of Steve Jobs’s NeXT workstations in a lab. Like a Ferrari, only in black; it looked fast just sitting on the lab bench. I requested an account and spent a few evenings exploring the system, wondering whether the IBM RT/PC even stood a chance against it. At the time, I also had access to Silicon Graphics systems, and while the RT/PC had a much better processor architecture than the MIPS R3000 and far more advanced compilers, it still looked like a PC trying to be something more. The RT/PC eventually became the RS/6000 and later Power Systems. Until 1994, I had access to and regularly used many of these systems, and I taught Unix to hundreds of people at IBM Research over the years.

It was nearly ten years later, in 2004, before I purchased my first Apple product, a PowerBook G4, because it had the PowerPC G4 chip. It was, in my opinion, the first serious Unix laptop with a pleasant graphical software environment, one whose heritage went back to NeXT and to Unix beyond that. I had run Linux on various systems in the 1990s and tried Linux on laptops until then, but at that time it always seemed like a science project, and I just needed something that would work and run versions of MS Office compatible, at the file-format level, with their Windows counterparts. Since then, I’ve had all manner of MacBook and Mac mini. Occasionally, my current employer drags me back into the Windows world. Windows machines are acceptable at a hardware level, but the operating system WILL NEVER be as robust and secure as OS X or Linux.

Back to my daughter: two weeks ago, she mentioned that her laptop, the 2012 MacBook Pro, had crapped out, so I had her drop it off with me on Friday. Several years back, I replaced her 512 GB spinning disk with a 1 TB SSD, so I doubted that was the issue. It took forever for the system to boot, but once it was up, I launched Activity Monitor and a Console window, and over two hours, as superuser, I weeded out three pieces of crap-ware that were hogging all the CPU cycles. Once those applications were removed, it was like a new system. Sure, Apple ended support for this platform precisely a year ago, after ten years, but with macOS Catalina it is still running strong. This morning, I swapped the two 4 GB memory modules for two 8 GB ones left over from another project and replaced the battery with a new one for $35. To avoid further problems, I hid Safari, which was still acting quirky, and Chrome, then installed Firefox and made it the default browser. She can now get another two years, and if she’s careful many more, from this system. Please don’t get on me about security updates. The only remaining moving part is the cooling fan. Besides a few minor dents in the aluminum case and a charger that only works in one of its two orientations, the system performs like a champ. The display looks great, and the keyboard and trackpad work flawlessly.

So why this post? We live in a consumable world. To remain relevant, those of us on the leading edge of technology replace our smartphones and laptops every two or three years, and our desktops, servers, and smart watches perhaps every five. As an Apple shareholder, I expect, no, require, that customers remain on this upgrade treadmill. We also have to recognize, though, that there is another tier of customer: one who needs a product that works for ten or more years. My 82-year-old mother has a MacBook Air that she bought at my request about eight years ago to replace her failing three-year-old Windows machine. She’s worn the letters off all the vowel keys and several consonants, but she knows what’s where, and it still performs as well as the day she bought it. I dread the day, in the coming years, when I’ll have to pry from her hands the iPhone 8, the last one with a home button, that she got six years ago. Her mind is no longer agile enough to handle the change to a home-button-less iPhone 15.

I’m writing this on my 13” MacBook Air M1. I lament that the M1’s tightly integrated system-on-a-chip and the sealed design of the MacBook Air line have eliminated any ability to upgrade the memory and storage beyond how the system initially shipped. Therefore, I bought up at the time of purchase to 16GB and 1TB. Next year, I’ll break down and move up to a 15” MacBook Air M3. Then, next Christmas, my daughter will get my M1 MacBook Air, but she doesn’t know it and isn’t expecting it, so let’s keep this between us.

The Future of 3D Printing & Plant-Based Resin

A few months ago, I switched over to Eco UV Resin; I wanted to be a good citizen of the planet and thought that moving to a plant-based, water-soluble product was the right thing to do. I’m on my third kilogram container of this product, and I’m beginning to have my doubts, but perhaps I should back up.

In March of 2020, I purchased an Elegoo Mars 2 Pro 3D resin printer and have since consumed at least 15 kg of standard resin in at least four colors, so I’m not your typical 3D resin noob. I also picked up an Elegoo Mercury Plus wash station at the same time, and typically wash 20-40 prints per gallon of 97% alcohol. To be clear, I’m not printing small figures, but more often than not 3-5″ models of things like custom Raspberry Pi cases, fan mounting brackets, and prototypes of future products. When using normal resin, the alcohol in the washbasin becomes discolored after a few washes, but I don’t switch colors much, and the prints are always clean, smooth, and free of any residual resin after washing. In fact, the rapid resin often stays suspended in the alcohol between print jobs, sometimes for a week or more, without settling. I’ve found that washing with less than 97% alcohol leads to stickier prints, and the alcohol becomes contaminated faster due to the additional water. Some suggest curing suspended resin in the wash tank by leaving it out in the sun for a bit, then filtering it. Recently, I started using translucent clear plant-based resin, and the experience is considerably different from that of rapid resin.

When I finish printing with translucent clear Eco resin, I put the build plate, with all the printed parts still attached, into a tank of soapy water in the wash station and run it for twenty minutes. The build plate is then removed, and the parts are separated from it and scrubbed to remove any trapped resin the wash station missed. Next, the supports are snapped off. One further scrub, some light sanding, and another rinse, then on to curing. Seriously, I’m probably consuming a few gallons of water per print, and I wash my hands thoroughly several times throughout the process. No, I don’t wear gloves; this has never been a problem for me because I’m extremely careful, and what resin has come in contact with my skin has never caused a problem. After I’m done, I then need to wash and scrub out the wash tank and build plate. This is far more labor-intensive than alcohol-soluble rapid resin, where I never clean the model beyond what the wash tank has done. It should be mentioned that I often print hollow models and very carefully place the drain and vent holes so they are hidden yet offer the best functionality. Trapped resin is rarely an issue. While Eco resin sounds environmentally friendly, once the volume of water consumed is taken into account, it may not be as friendly as it appears.

Today my $500 setup has a limited print volume of 5” x 3.2” x 6.3” and can produce a full-size single-color print in under eight hours; careful placement often brings that down to two or three hours. Other products in this 3D printer line are bigger, capable of print volumes nearly triple mine, but they are still single color. Producing a final product with multiple colors requires a separate print for each differently colored part, and then you need to assemble them. Between colors, the tank needs to be emptied and cleaned. Designing something from scratch that screws or snaps together requires some serious modeling and slicing skills, along with the professional version of the slicer. A slicer is a program that takes your 3D model and turns it into a file your printer can understand. Even more importantly, the slicer helps you lay out your model on the build plate, detect and fix print problems with the model, make adjustments like creating weep holes and vents so that resin isn’t trapped inside hollow parts, and define the supports connecting it to the build plate. How and where to place supports for a resin print requires different considerations than for a traditional deposition printer. So where is 3D printing headed?
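Why does careful placement cut hours off a print? On a resin printer, every layer takes the same time no matter how much of the build plate it covers, so print time is driven almost entirely by the model’s height along Z. A minimal sketch of that arithmetic, using illustrative timing numbers rather than Elegoo’s actual defaults:

```python
import math

def estimate_print_hours(model_height_mm, layer_height_mm=0.05,
                         exposure_s=2.5, motion_overhead_s=8.0,
                         bottom_layers=5, bottom_exposure_s=30.0):
    """Rough MSLA print-time estimate. Every layer costs one exposure
    plus the lift/retract motion, regardless of its footprint, so
    height alone drives the total. All timings here are illustrative."""
    layers = math.ceil(model_height_mm / layer_height_mm)
    normal_layers = max(layers - bottom_layers, 0)
    seconds = (min(layers, bottom_layers) * (bottom_exposure_s + motion_overhead_s)
               + normal_layers * (exposure_s + motion_overhead_s))
    return seconds / 3600.0

# A model standing 60mm tall vs. the same model tilted so its
# height along Z drops to 35mm:
upright = estimate_print_hours(60.0)   # ~3.5 hours
tilted = estimate_print_hours(35.0)    # ~2.1 hours
```

Tilting or laying a model over, as I do with careful placement, directly reduces the layer count and therefore the total print time.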

3D resin printers often utilize a 4K monochrome LCD panel facing up from underneath the build tank. A UV light then shines through the panel and cures resin wherever the layer image lets the light pass. Initially, the slicer produces a raft, a flat surface on the build plate with a lip around the edge, which lets you slide a scraper underneath the raft when printing is done to separate the print from the build plate. Supports then grow out of the raft and connect it with the model. Models are often suspended five to ten millimeters below the build plate by these supports, enabling them to be easily separated from the raft after the build is completed. Once the printed model is rinsed and separated from the build plate, you need to remove all the supports, further clean the model, and often sand the surfaces where supports were attached to remove any pitting. Does this sound user-friendly? It isn’t; right now it’s something only a devoted hobbyist or an employee would leverage to achieve an objective.
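The expose/lift/lower cycle described above can be sketched as a simple control loop. This is a toy simulation, not real printer firmware; the action names and motion distances are illustrative:

```python
def msla_cycle(num_layers, layer_mm=0.05, lift_mm=5.0, exposure_s=2.5):
    """Toy simulation of the MSLA loop: the plate starts one layer
    height above the film, the LCD masks the UV for each exposure,
    then the plate lifts to peel the cured layer off the film and
    lowers back to one layer height above the previous layer."""
    actions = []
    z = layer_mm  # plate height above the film, in millimeters
    for layer in range(num_layers):
        actions.append(("expose", layer, exposure_s))
        z += lift_mm                        # lift to peel the layer free
        actions.append(("lift_to", round(z, 2)))
        z -= lift_mm - layer_mm             # settle one layer higher
        actions.append(("lower_to", round(z, 2)))
    return actions
```

This peel on every lift is also why flat-on-plate prints fail: each cycle tugs the entire cured cross-section off the film at once.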

Many of us can envision a day when we order a product on Amazon, and while it’s in the cart, Amazon checks the supplies in our home “Amazon Replicator” to determine if the proper color resins and cleaning supplies are available to build the product. If so, the order is accepted, and sliced print files for my model of replicator are automatically downloaded and queued on the printer. When the parts eventually emerge in the product hopper, along with one-page assembly instructions, the customer is charged for the print, and additional replacement supplies are shipped out when necessary. The customer then assembles the product and has what they ordered in hours rather than days.
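The supply check in that scenario could amount to a few lines of inventory logic. A hypothetical sketch, with made-up field names and thresholds, of how such an ordering service might gate acceptance and trigger resupply:

```python
def accept_order(required_supplies, inventory, reorder_fraction=0.25):
    """Accept the order only if every required supply is on hand, and
    flag any supply the job would draw below a reorder threshold.
    All names and quantities here are hypothetical."""
    missing = {item: qty for item, qty in required_supplies.items()
               if inventory.get(item, 0) < qty}
    if missing:
        return {"accepted": False, "missing": missing, "reorder": []}
    reorder = [item for item, qty in required_supplies.items()
               if inventory[item] - qty <= reorder_fraction * inventory[item]]
    return {"accepted": True, "missing": {}, "reorder": reorder}

# An order needing 400 ml of clear resin against 500 ml on hand is
# accepted, and a resupply shipment is triggered.
result = accept_order({"clear_resin_ml": 400},
                      {"clear_resin_ml": 500, "soap_ml": 1000})
```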

Today we pour 200 ml or so of a single-color resin into the tank and print with that color until the job is complete. What if the printer sprayed an appropriately thick layer of a specific color of resin onto the tank bottom for the layers that need it next? Then, between layers requiring a color change, the tank bottom could be cleaned, drained, dried, and the new color sprayed thick enough for the next layers requiring it. When the print is completed, the tank would be drained, cleaned, and dried for the next job, and the build plate and the print fully washed. At this point, simple robotics would be needed to separate the print from the supports coming up from the raft, using one of several types of tools to clip, melt, or vaporize support resin where it meets the model. The model would then be dropped into the curing bin for a few minutes and released into the output bin. Meanwhile, the raft and supports are recycled, perhaps back into supplies, or returned to Amazon for reprocessing. With advances in modeling, slicing, and printing, we may eventually reach this point for some simple products, but given my experience, this is still a number of years away.
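The color-change choreography above reduces to grouping consecutive layers by color, with a drain/clean/dry/spray cycle between groups. A toy sketch of that planning step (the step names are hypothetical):

```python
def multicolor_plan(layer_colors):
    """Turn a per-layer color list into a sequence of printer steps:
    spray a color, print its consecutive layers, then drain, clean,
    and dry the tank before the next color. Purely illustrative."""
    plan = []
    current = None
    for i, color in enumerate(layer_colors):
        if color != current:
            if current is not None:
                plan += [("drain",), ("clean",), ("dry",)]
            plan.append(("spray", color))
            current = color
        plan.append(("print_layer", i))
    return plan
```

Note that every color boundary costs a full tank-service cycle, which is why a slicer for such a machine would want to reorder part placement to minimize color changes along the Z axis.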

Regardless, it’s awesome to think that even today we can take an idea, create a 3D model from it, then slice and print it, all in the same day! I really love technology.

7 Things I Learned From the IEEE Hot Interconnects Panel on SmartNICs

For the second year, I’ve had the pleasure of chairing the panel on SmartNICs. Chairing this panel is like being both an interviewer and a ringmaster: you have to ask thoughtful questions and respond to answers from six knowledgeable panelists, all while watching the Slack channel for real-time comments or questions that might further improve the event’s content. Recently, I finished transcribing the talk and discovered the following seven gems sprinkled throughout the conversation.

7. Software today shapes the hardware of tomorrow. This is almost a direct quote from one of the speakers, and nearly half of the participants echoed it several times in different ways. One said that vertical integration means moving what is done today in code on an Arm core into gates tomorrow.

6. DPUs are evolving into storage accelerators. Perhaps the biggest vendor mentioned adding compression, which means they are serious about soon positioning their DPU as a computational storage controller. 

5. Side-Channel Attacks (SCA) are a consideration. Only one vendor brought up this topic, but it was on the minds of several. Applying countermeasures in silicon to thwart side-channel attacks nearly doubles the number of gates for the affected cryptographic block. As I understand it, the countermeasures consume complementary power while generating complementary electromagnetic effects, so that external measurements of the chip package during cryptographic operations yield a completely fuzzed result.

4. Big Vendors are Cross-pollinating. We saw this last year with the announcement of the NVIDIA BlueField-2X, which adds a GPU to their SmartNIC, but this appeared to be a bolt-on; NVIDIA’s roadmap doesn’t integrate the GPU into the DPU until BlueField-4, several years out. Now Xilinx, which will soon be part of AMD, is hinting at similar things. Intel, which acquired Altera several years ago, is also bolting Xeons onto its Infrastructure Processing Unit (IPU).

3. Monterey will be the Data Center OS. VMware wasn’t on the panel, but some panelists had a lot to say on the topic. One mentioned that the data center is the new computer. The same panelist strongly implied that the future of DPUs lies in their control planes plugging into Monterey. Playing nicely with Monterey will likely become a requirement if you want to sell into future data centers.

2. The CPU on the DPU is fungible. The company seeking to acquire Arm mentioned wanting a CPU micro-architecture in their DPU that they can tune; in other words, extending the Arm instruction set found in their DPU with additional instructions designed to process network packets. Now that could be interesting.

Finally, here’s a nerdy, plumbing-type thing that will change the face of SmartNICs and bring them enormous advantages: the move to chiplets. Today, ALL SmartNICs and DPUs rely on a single die in a single package, then one or perhaps two packages on a PCIe card. In the future, a single chip package will contain multiple dies, each with different functional components, possibly fabricated at different process nodes, so….

1. The inflection point for chiplet adoption is integrated photonics. Chiplets will become commonplace in DPU packages once there is a need to connect optics directly to the die in the package. This will enable very high-speed connections over extremely short distances.

9 Practical Resin Printing Suggestions

Just over six weeks and three liters of resin ago, I received my Elegoo Mars 2 Pro Mono and the strongly suggested Elegoo Mercury Plus 2-in-1 washing and curing station. I ordered both on Amazon for about $500, and they were extremely easy to set up and get working. Along with this order, I added a five-pack of Elegoo release film, Elegoo 3D Rapid Resin in clear red, a gallon of 99% isopropyl alcohol, 400-grit sandpaper, and an AiBob gun cleaning pad, 16”x60” (this is a must-have). I’ve printed in both translucent red (2.5L) and flat black (0.5L). I’ve also been careful to hollow out models in the slicer, Chitubox, so that I’m using the minimum amount of resin necessary to print my models, and I’ve printed many with very little waste.

My resin printer setup, and yes, a magnifying glass.

This printer is amazing. My prior experience, a few months of it two years ago, was with my son’s Creality Ender 3, a fused deposition printer (FDP), your typical 3D printer. Eventually, we got the Creality producing usable results, but the difference between the Creality and Elegoo units is night and day. It would often take several tries to get the Creality to produce a workable print, even though I’d installed the unit in a cabinet in my office so the temperature and airflow were strictly managed, and we’d modified the printer to reduce the noise, upgraded the print head, and improved the fans. But this post is about resin printing. My first resin print, and nearly every one since, has come out as expected. So here are my nine suggestions for those interested in trying resin printing using the Elegoo Mars 2P.

  1. Don’t Print Flat. Never print your model flat on the build plate. Because the printer exposes a layer, rises a bit in the build tank, then lowers again, shearing forces act on the supports, and a flat model can fail early. Also, you always have to pry your model off the build plate, so having a raft and supports that can take the damage on removal is always better than scratching up or breaking your model. I’ve found that rotating my model so it’s inclined 10 degrees from the build surface, then elevating it 10mm off the build plate, produces the best results. Chitubox will then create a raft bonded to the build plate, with raised edges that make prying your model off easier.
  2. Supports: you can never have too many. Be generous, add more, but make sure you’re bridging them from existing supports or adding supports that you can then bridge from. You can always sand your model with 400-grit paper later to remove support marks. For finished surfaces, you can sometimes avoid supports entirely, provided the surface facing the build plate is fully supported.
  3. Models should drain down. Make certain you orient your model so that it drains down into the build tank. Also, be sure to hollow out your model and set the wall thickness to something like 3mm. This can save considerable resin, and using translucent resins with somewhat hollow models can create some interesting effects when viewing the model.
  4. Different color resins require different slicer settings. For example, black requires almost 30% more exposure time than the translucent resins. The Chitubox V1.8.1 slicer is very flexible and makes these adjustments easy. Here is a table that is invaluable when switching between resins.
      
  5. Never run out of resin during printing. I had this happen once, just this morning; now I’m soaking the tank with alcohol and will try to remove the resin that bonded to the clear film on the bottom. Otherwise, I’ll need to replace the film.
  6. Have a ceiling fan on during printing and curing. You can use this printer in an office environment at normal indoor temperatures, and if you work carefully, gloves and a mask can be avoided. The printer is very quiet and prints much more quickly than a traditional FDP. Resin printers use a single stepper motor installed in the base that drives a screw to raise and lower the build plate. The printer is fully enclosed, and the 2P has a fan and carbon filter, so only a small amount of smell leaves the unit. I’ve had the printer, though not the wash station, running in the background while on Zoom calls, and nobody has ever said anything about hearing it.
  7. Lay down a felt rubberized gun mat ($10) on your work surface before installing the printer and cleaning station. It makes an ideal work surface and wicks up the few droplets of alcohol that inevitably fall. Before transferring a model from the printer to the cleaning tank, I tilt the build plate a little to drain off the excess resin, then carefully move the build plate to the cleaning station without dripping resin on the pad. I’ve found that two inches of spacing between the printer and the cleaning station is enough to make lifting the covers and reaching around back to turn things off easy, while also limiting the travel distance for models that may still drip.
     
  8. Removing the Cover. Lift the Mars 2P lid with your fingers wrapped under the cover’s edge as you lift it off the printer. There is a silicone gasket on the bottom of the cover, and it will often rub against the supports for the build tank, which results in it falling off. If you carefully lift the cover and roll your fingers under the silicone gasket, you can prevent this from happening. I’ve considered gluing the gasket in place on the cover, but I think that would create other issues.
  9. Make sure you configure Chitubox with your specific printer model so it scales the build plate size properly and applies the other default settings. Chitubox and the Elegoo will let a raft fall slightly outside the build area and still print, but be careful. Chitubox is a simple slicer, and I’ve used several in the past, but it is very capable and does a nice job.
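To put the hollowing advice above in numbers: with a 3mm wall, the cavity you avoid filling accounts for most of a model’s volume. A quick illustrative calculation for a simple box-shaped model (the dimensions are made up):

```python
def hollow_saving_ml(outer_mm, wall_mm=3.0):
    """Resin saved by hollowing a rectangular model with the given wall
    thickness; 1000 mm^3 equals 1 ml. Ignores drain holes and supports,
    so this is purely illustrative."""
    x, y, z = outer_mm
    inner = [max(d - 2.0 * wall_mm, 0.0) for d in (x, y, z)]
    cavity_mm3 = inner[0] * inner[1] * inner[2]
    return cavity_mm3 / 1000.0

# A 60mm cube with 3mm walls: the 54mm cavity saves ~157 ml of the
# 216 ml a solid print would consume.
saved = hollow_saving_ml((60.0, 60.0, 60.0))
```

In other words, hollowing a model like this uses barely a quarter of the resin a solid print would, which is why the slicer’s hollowing step matters so much.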

Well, that’s it for now. I’ll edit this a bit more later today, but I hope you have an awesome time with your new printer. I’m sure there are people out there who will insist I wear a ventilator mask and rubber gloves when printing, cleaning, etc., but I have the ceiling fan on high, and my office is extremely clean and clutter-free, so that’s what works for me.