Tesla “Full Self-Driving” – Finally Usable

Last Thursday, I embarked on a road trip of more than 1,100 miles with the latest version of Tesla’s Self-Driving software. The first day’s mission was to travel 161 miles from Raleigh, North Carolina, to Richmond, Virginia. I planned to stay the night in Richmond with my daughter; then, early the next morning, we’d set out together for Danbury, Connecticut, 396 miles away, to see my mom at her place for a holiday visit. The first leg of the trip from Raleigh to a Supercharger just south of Richmond went flawlessly, and I didn’t have to touch the steering wheel for two hours. The car even found a space at the charger and parked. I played with “Hurry” and “Mad Max” modes while it drove and determined that on I-85, “Mad Max” was just too fast and would likely result in a speeding ticket.

After a quick charge just south of Richmond, I was off to my daughter’s, but not without an incident in the parking lot. Full Self-Drive stopped at one point, and I was perplexed. Just as I was about to override it, I noticed a car out of the corner of my eye coming from behind a dumpster. Self-Drive saved the day. Some 20 minutes later, Self-Drive pulled the car up and parked in front of my daughter’s house. Another flawless drive that required none of my assistance.

We awoke Friday morning to two inches of snow and light, steady flurries. By 8 AM, the road was clear, so we set out on the next 396-mile leg from Richmond to Danbury. We stopped at two Superchargers along the way: the first was an early lunch, and the second was a late-afternoon snack. The light snow followed us from Richmond through to the Delaware Bridge, and the software performed excellently as long as the windshield and cameras remained clean. There were several warnings along the way, but in general, it handled the precipitation very well. The only hitch on this leg that required my attention was the George Washington Bridge exit to 9A North, a rat’s nest of exits, where another car blocked our merge; frankly, most humans would have failed this interchange. A few seconds behind the wheel and things were back on course, and forty-five minutes later, we were parking in Danbury. For these first two legs, 560 miles total, I’d driven less than one mile with my hands on the wheel. I experimented with “Normal,” “Hurry,” and “Mad Max” modes, depending on traffic. “Mad Max” was my go-to when all the cars around us began to move faster, but in less congested situations, “Hurry” set the speed at 10 MPH over the posted limit and worked very well. In comparison, “Mad Max” would at times exceed 80 MPH in a 65 MPH zone, and without traffic around me, I’d be destined for a ticket.

That night, we stayed at my brother’s house in Wappingers Falls, New York. On the way there, Full Self-Drive made one incorrect lane choice at a light, which I had to correct, but otherwise the drive, including a short charging stop, went flawlessly.

In the morning, we put Full Self-Drive in “Normal” mode for the icy trip down the mountain and out to Danbury. We had two stops: The Home Depot and a car wash. After the second stop, it made another incorrect lane selection at a light, but again, it was a quick fix, with only a few moments of my hands on the wheel.

Later in the day, we took Mom out for dinner, and my daughter was in the driver’s seat using “Normal” for the short five-mile trip to the steakhouse. The car made no mistakes, and my daughter was suitably surprised. After dinner, on the drive back to New York, it made only a single lane-selection error at one light; again, some quick action on the wheel, and we were back on course.

The next morning at 7:30 AM, we hit the road for a sprint to an IHOP in East Windsor, New Jersey, for a combined Supercharger/breakfast stop. It made one minor mistake navigating the back of the parking lot as we pulled in, which required attention for only a moment, and then it found a parking space with a charger. The run to Richmond and back to Raleigh went flawlessly. We easily put 1,300 miles on my Tesla Model 3 Highland over the weekend, and all told, I might have driven one or two of them myself. The trip with the Full Self-Driving software went exceptionally well, far better than my two prior trials with Tesla over the past eighteen months, which lasted only a few miles each and were less than satisfactory. Late in this trip, I discovered that if I hit a blinker, Full Self-Drive would attempt to accommodate my lane-change request. This would have been nice to know earlier in the trip.

Tesla employs a camera-only approach to Full Self-Driving (version 14.2.1); it works well in clear weather, but it isn’t ideal. Full Self-Driving could easily have benefited from LIDAR for lane selection. One of the nagging problems we noted with Tesla’s system was stuttering lane selection. There were many times when it put on the blinker, then canceled a lane change because it didn’t properly predict the speed of a car approaching in the target lane. In many cases, it actually started the lane change, then aborted it. This is clearly an area for improvement in this code.

Given that I work from home, purchasing Full Self-Drive for $99/month isn’t wise. While I visit Mom monthly, I only drive there every few months, often when someone else is coming along; otherwise, I fly. Next time I’m driving back to New York, I will definitely purchase Full Self-Drive for the month.

I took this picture several days after returning home. It should be noted that I drove the car normally for a few days before the trip and a bit after, hence the 45 miles or so of human driving.

Photo taken on December 11, 2025

If it Sounds Too Good to be True…

Last week, I posted in “Quantum Teleportation & Financial Trading” that the effect, which Einstein called “Spooky Action at a Distance,” might lead to faster-than-light communication of data. The subtle part I missed is that while the distant entangled particle mirrors the quantum state, the state itself cannot be utilized without context. As you might now guess, the context must be communicated over traditional data channels at or below the speed of light, which means that faster-than-light communication is not possible.

If you wish to understand the specifics, Wikipedia has done an excellent job at explaining this in its article on “Quantum Teleportation,” and I doubt I could do it any better. 

Quantum Teleportation & Financial Trading

The straight-line distance between New York City and Chicago is 1,146 km (712 miles). Data through a standard fiber-optic cable travels at about 200,000 km/sec. A fiber-optic cable strung in a straight line between these two cities should therefore have an expected round-trip latency of about 11.5 milliseconds. Given the distance, there also need to be amplifiers, and the line will never be perfectly straight, so the actual travel time is longer. In 2010, Spread Networks began offering a service that could do this round trip in 13 milliseconds. Converted to one-way, also known as a half-round-trip, that latency is 6,500,000 nanoseconds (billionths of a second). It should be noted that the project cost roughly $300M USD at the time. Seven years later, after competition from microwave solutions entered the market, Spread Networks was sold for $117M.
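
To make the arithmetic above concrete, here is a quick back-of-the-envelope check in Python using only the figures from this paragraph.

```python
# Latency figures taken from the paragraph above.
DISTANCE_KM = 1_146          # straight-line New York City to Chicago distance
FIBER_SPEED_KM_S = 200_000   # approximate speed of light in optical fiber

# Ideal round trip over a perfectly straight fiber, in milliseconds.
ideal_round_trip_ms = 2 * DISTANCE_KM / FIBER_SPEED_KM_S * 1_000
print(f"Ideal round trip: {ideal_round_trip_ms:.1f} ms")        # ~11.5 ms

# Spread Networks' advertised round trip, converted to one-way nanoseconds.
spread_round_trip_ms = 13
one_way_ns = spread_round_trip_ms / 2 * 1_000_000
print(f"One-way latency: {one_way_ns:,.0f} ns")                 # 6,500,000 ns
```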

For perspective, when trading stocks, the current record on the STAC-T0 benchmark is 13.9 nanoseconds. Having helped define this benchmark and been on the team that set the record on two prior occasions, I know that some traders would pay handsomely, as the fiber example above shows, to move trading information frictionlessly from NYC to Chicago. By frictionless, I mean eliminating this bothersome 6.5 million nanoseconds of half-round-trip latency. Enter the Quantum Teleportation of Data. It should also be noted that NYC to Chicago is just one of many very profitable paths that could be exploited.
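
Putting the two numbers in this paragraph side by side shows just how dominant the geography is; a tiny sketch, again using only figures from the text.

```python
# How the wire latency dwarfs the fastest tick-to-trade time (figures from the text).
stac_t0_ns = 13.9            # record STAC-T0 tick-to-trade latency, in nanoseconds
path_one_way_ns = 6_500_000  # NYC-to-Chicago one-way fiber latency, in nanoseconds

ratio = path_one_way_ns / stac_t0_ns
print(f"The path latency is roughly {ratio:,.0f}x the tick-to-trade time")
# ~467,626x -- the geography, not the trading stack, is the bottleneck
```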

“Spooky Action at a Distance” is the phrase Einstein used to describe what we now call Quantum Teleportation of Data. It’s been proven through experimentation that if one member of a pair of entangled particles, separated by some arbitrary distance, has its spin (value) measured, then the corresponding spin of the other particle will align with the value of its entangled partner. A third particle can then be introduced to influence the spin of the first particle, and the second particle will again mirror the change in the first. In theory, particles one and two could be on opposite sides of the universe with no physical connection between them, yet remain instantaneously synchronized, with zero latency. Keep in mind that no physical media connects these particles; one could be on Earth and the other on Mars. This seriously concerned Einstein because the instantaneous communication of these spin changes from particle one to particle two, over any distance, violates the speed of light, a limit he was very fond of; quantum mechanics, however, does not follow the classical laws he worked with.

Recent experiments published in Science Advances have shown that even unentangled particles can exhibit nonlocality, suggesting that entangled photons may not be required for quantum teleportation. Another experiment, published yesterday, used fuzzy-state photons (not entangled), quantum dots, and off-the-shelf fiber cable to demonstrate quantum teleportation.

Other experiments using satellites to establish entanglement-based downlinks have been completed. A recent paper from Australia proposes satellite uplinks using entangled photons, so work toward a quantum network, or quantum Internet, is advancing.

In the follow-up article, “If it Sounds Too Good to be True…”, I show that everything above lacks context and, as such, faster-than-light communication is impossible. Sorry.

Available to Consult on SmartNICs and DPUs

At the end of March 2025, Achronix scaled back its business to focus on Artificial Intelligence and, with that, released me and several outstanding network-centric FPGA developers. If you are looking for world-class FPGA talent, please let me know, and I’ll connect you with some very fine people.

As for me, I’m available to assist companies in formulating their SmartNIC or DPU product definition and marketing plans, author technical documentation, craft marketing literature, define markets, and establish routes to market.

Some of my most recent posts have been on “SmartNICs Today,” a somewhat irregular newsletter I maintain on LinkedIn that often sees over 2,500 page views for new articles. Here are the last five articles posted:

If you wish to learn more, please drop me an email.

Here is the Uncle Dougie – Bash Memorial Show.

Here is an initial draft of the photobook we’re doing.

QPUs, the Fusion of Art, Design & Technology

IBM Quantum System Two as displayed at SC24

For the last two decades, November has meant my annual pilgrimage to Supercomputing (SC). My first SC was in 2004 with NEC, as the US product manager for their Intel Itanium Windows supercomputer. We were also featuring our next-generation SX-8 vector supercomputer. At that time, I was new to vector processors, but I learned that, in simple terms, a vector is a variable that represents a linear set of numerical values. Therefore, the formula:

3A + 5B = 7C

contains variables A and B, each holding up to 128 values. Then, in one CPU clock cycle, you get out the vector C, with 128 values, as the result. At the time, I didn’t know it, but the age of vectors was in decline, and Linux clusters were coming into their own. That year, vector machines still sat at or near the top of the Top500, the list of the world’s fastest supercomputers. However, a Linux cluster occupied the 5th position: Lawrence Livermore National Lab’s Thunder system, with 4,096 Intel Itanium cores. By SC08, the DOE’s Roadrunner Linux cluster had captured the number one position. Also, Graphics Processing Units, GPUs, had just started to enter the scene, with CUDA (Compute Unified Device Architecture) coming out a year earlier. Over the past few years, one of the emerging technologies at SC has been quantum computing.
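
To make the vector idea concrete, here is a small sketch in Python with NumPy. The 128-value vectors come from the paragraph above; NumPy itself is just a stand-in for the hardware and obviously not how the SX-8 was programmed.

```python
import numpy as np

# Two vector "variables," each holding 128 numerical values.
A = np.random.rand(128)
B = np.random.rand(128)

# Evaluate 3A + 5B = 7C element-wise, i.e., C = (3A + 5B) / 7. A vector
# processor produced all 128 results of an expression like this in a
# single clock cycle; here NumPy simply loops over the elements for us.
C = (3 * A + 5 * B) / 7
print(C.shape)  # (128,)
```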

Outside the national labs, where vectors still have a niche role today, supercomputing is dominated by GPUs; everywhere else, we use CPUs, GPUs, FPGAs, and AI/ML engines mixed together to complete the vast majority of our daily calculations. All four of these technologies have become commonplace and exist in most new computers, even laptops and phones. Let’s take a moment and get everyone on the same page. The above-mentioned computational platforms have become highly parallelized. CPUs are our general-purpose calculators; they have pushed well beyond 100 compute cores per package and handle all data types reasonably well. By contrast, GPUs have thousands of cores designed to process high-precision floating-point (decimal) numbers. FPGAs contain anywhere from a few hundred to several million lookup tables that take in integers and spit out integers in a single clock cycle, and they are massively parallel. AI or ML cores have been created to operate on very low-precision floating-point numbers, and we often see thousands of these cores per package. Depending on the use case, they are sometimes interspersed in the same package with FPGAs. So where does that leave quantum?

In IBM’s booth at SC24 was a two-meter glass cube, pictured above, housing a visually stunning display of engineering; some might even say breathtaking. Please excuse the opening photograph’s quality; it doesn’t do it justice. The booth was bustling, and I had to shoot quickly. Nothing in the display moved; there were no blinking lights, whirring fans, or grinding pumps to serve as a distraction, just sheer measured precision in copper, brass, chrome, stainless, aluminum, and glass. It could just as easily have hung in the lobby of the Manhattan St. Regis hotel and fit in perfectly. It was a feast for the eyes, almost as if its primary role as a computer was an afterthought. It’s unclear if this unit was functional during the show, but if it were, then some portion was held in a near-perfect vacuum with the qubits operating at temperatures near absolute zero. No liquid helium tanks were around, so it may have been a show-and-tell unit. The booth rep I was talking to didn’t even grasp how the physics translated to computation; perhaps the right person was on a break. The aluminum box (pictured in this paragraph) at the bottom of each of the three columns was labeled IBM Quantum; they use the term QPU. I was told that each QPU contained a 64-qubit processor, but IBM’s website states they each include a 133-qubit Heron QPU.

Furthermore, wiring these three QPUs together creates a quantum circuit that can process 5,000 operations. They don’t clarify the time domain for these 5,000 operations, or even exactly what constitutes an operation. Honestly, I’m lost at this point. I’ve read Wikipedia articles on qubits, quantum circuits, and the like, and the math is beyond me. As if 5,000 operations in a single circuit weren’t confusing enough, IBM also has some magic called quantum coupling.

This enables multiple Quantum System Twos (each with three Heron processors) to be tightly coupled together. With this technology, IBM plans to scale multiple System Twos to a total of 100 million operations within a single quantum circuit, and by SC33, they expect to support one billion operations in a single circuit. As I was crawling through all this, the one area I did start to grasp is that FPGAs are often used now for front-end quantum computing or as a “poor man’s” quantum computer. How is still a mystery, but that’s one I’ll look into over the holidays.

I hope all my friends return safely from SC24, and I look forward to seeing you again next year. To the right I’m modeling my vintage SC04 jacket. Happy Thanksgiving to all.

Scott Schweitzer, Technology Evangelist, CISSP #644767

A 120 KW Cabinet and The Future of Power Demand

When I get together with friends, questions about AI come up once in a while, and invariably they steer the discussion toward a reference to Skynet. For those not plugged into the zeitgeist, Skynet is the AI in “The Terminator” that is out to exterminate humanity. Now, as a chess player, I’ll acknowledge that while the possibility exists, the likelihood of humanity going down that path is extremely low. My latest concern is the coming battle over energy.

NVIDIA held its annual GTC (GPU Technology Conference) in San Jose earlier this week. Jensen Huang, their CEO, unveiled their next-generation DGX [48:24], an AI supercomputer in a single rack. For those not in technology, think of a cabinet-sized box six feet tall, two feet wide, and three feet deep that draws an amazing 120 kW of power while performing 1.4 ExaFLOPs. For contrast, the DGX consumes energy at a rate equal to 100 average American homes (a home uses about 10,500 kWh per year). It does math at a rate equal to all eight billion people on the planet doing one calculation on a calculator per second for more than four years, non-stop. That’s what this machine can complete in one second.
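
Both comparisons in that paragraph are easy to sanity-check; here is the arithmetic in Python, using only the figures given above.

```python
# Power comparison: DGX rack vs. average American homes (figures from the text).
DGX_POWER_KW = 120
HOME_KWH_PER_YEAR = 10_500
HOURS_PER_YEAR = 8_760

dgx_kwh_per_year = DGX_POWER_KW * HOURS_PER_YEAR
print(f"Homes equivalent: {dgx_kwh_per_year / HOME_KWH_PER_YEAR:.0f}")   # ~100

# Compute comparison: how long would all eight billion people, each doing one
# calculation per second, take to match one second of the DGX's 1.4 exaFLOPS?
DGX_FLOPS = 1.4e18
PEOPLE = 8e9
SECONDS_PER_YEAR = 365 * 24 * 3_600
print(f"Years of hand calculation: {DGX_FLOPS / (PEOPLE * SECONDS_PER_YEAR):.1f}")
# ~5.5 years, so the "more than four years" above is, if anything, conservative
```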

In November, there were exactly two publicly announced systems in the world capable of achieving an ExaFLOP: Frontier and Aurora, both US Department of Energy supercomputer clusters. One is in Oak Ridge, TN, and the other is at Argonne National Lab, and they each consume roughly 200X the power of the DGX above. It should also be noted that these are massive systems, often in the 200-rack range, though the move to GPUs has improved this: Frontier has only 74 racks, while Aurora has 166. The main point Jensen was trying to make is that a single DGX delivers computational power similar to these data-center-scale clusters.

Those close to this technology would argue that NVIDIA is gaming its ExaFLOPs number because its calculations differ from those Frontier and Aurora run to make the Top500 list. Frontier and Aurora report their numbers while running the Linpack benchmark using double-precision 64-bit floating-point numbers. They cannot employ mathematical tricks that shorten number formats, reduce the precision of results, or optimize matrix multiplication using innovative new algorithms. Jensen, on the other hand, is a magician performing unconstrained; he tosses out his ExaFLOPs number using the FP4 data type. This is the absolute smallest number format defined today; it’s 1/16th the size of the numbers used for Linpack, and trust me, size matters in more ways than one. Furthermore, Jensen’s ExaFLOPs metric benefits from many of the latest tricks, including ways to shrink the number size and reduce the number of terms you need to operate on, with some of the work done in the networking cards.
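
A rough sketch of why the two ExaFLOPs figures aren’t comparable: the Top500 number is measured on 64-bit values, while the DGX headline number counts 4-bit operations, so each operand carries 1/16th the bits. This only illustrates the format sizes, not either benchmark.

```python
# Bit widths behind the two headline numbers.
FP64_BITS = 64   # format required for the Linpack runs behind the Top500 list
FP4_BITS = 4     # format behind NVIDIA's 1.4 exaFLOPS claim

print(f"Each FP4 value is 1/{FP64_BITS // FP4_BITS} the size of an FP64 value")  # 1/16

# The same silicon and memory bandwidth can move and multiply far more 4-bit
# values per cycle than 64-bit ones, which is a large part of why the two
# "exaFLOPS" figures measure very different things.
```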

Let’s get back to power. The US energy grid is under intense pressure from the rapid growth in demand resulting from the widespread adoption of Electric Vehicles (EVs), including electric commercial trucks and, soon, tractor trailers. Tesla is rolling out a new charging station every day. Shell and others are also looking to jump into this market, and all this power for EVs must come from somewhere. Thankfully, while EV electric demand is growing, homeowners are increasingly installing solar panels on their roofs to offset their use, particularly if they have an EV or two in the garage. The US federal government recently adopted new emissions rules designed to push most new vehicle production toward EVs and hybrids by 2032. While the growth in solar may offset the drain EVs place on the grid, NVIDIA’s new DGX changes everything.

Current data centers were designed around racks consuming 10-20 kW each; some permit up to 70 kW per cabinet with a 300 kW commitment, but this is pretty new. All of this includes the matching required cooling, which is just as vital but often overlooked. NVIDIA’s new DGX consumes 6-12X more power than what is currently deployed, and that is the first part of the problem. As these DGX systems begin shipping, we will see data center demand for power explode like never before. People have realized that the benefits of an AI grow as the models on which it is trained grow geometrically. These larger models need even larger DGX-class systems to run on. Unless something changes soon, at some point in the future, we may be competing with AI systems for electrical power.

6 More Reasons Tesla is Far More Than a Car Company

If nine reasons weren’t enough, here are six more that I forgot about in my prior post. Again, I have no financial interest in Tesla at the time of this posting.

  1. Semi Tractor Trailer Trucks – Yes, they are producing trucks. In the video link, you’ll see one with a full load going 500 miles in nine hours, and this was a year ago. In addition, their work in full self-driving and truck charging stations will eventually enable them to dominate this market.

  2. Data – Tesla receives real-time telemetry data from every vehicle that has a charge. This Friday, I checked on my Tesla, which is being delivered Monday, and the sales rep said sure, she could tell me where my car was. She said that GPS tracking is enabled from the moment it rolls off the factory floor; think “Find My Tesla.” Several mouse clicks later, she informed me that it was two states away and shared the exact city it was traveling through. Add all the traffic data and video footage to this, and it’s a wonder that Tesla has data centers big enough to digest it all. Then there’s the power data from home generation, charging centers, and home chargers, merged with weather and news data, and Tesla is sitting on a goldmine. I’ve read that Ford and others believe that Tesla’s data alone gives it an 18-month competitive advantage over all the other EV companies.

  3. Artificial Intelligence (AI) – Here, Tesla is perhaps king of the hill. Their Full Self Drive version 12 is a fantastic bundle of AI code. The video footage on YouTube of it navigating a busy Costco parking lot on a Saturday morning is excellent. Any teenager can handle highway driving, but put them in a busy Costco on Saturday morning, and their knuckles become white as they turn the radio down to concentrate. I was shocked when the Tesla didn’t develop a case of road rage at some of the idiots pulling out of parking spaces without looking, or even a police car backing up because it couldn’t get around someone [4:06]! This is only the very obvious tip of their AI iceberg; their code base extends well beyond self-driving.

  4. Chip Development – Developing a leading-edge AI chip today is a $50-100 million expense. This is not something to take lightly. Tesla used NVIDIA chips but shifted a few years ago to designing and producing its own. While Ford and others have to buy GPUs or custom ASICs from third parties, often for hundreds of dollars each, Tesla’s unit cost for a much more custom-tailored and likely more powerful chip is a fraction of this. This is an area that will put serious distance between Tesla and its car and Robo-Taxi competitors.

  5. Robotics – Tesla is all about deploying industrial robots, think big arms, on their factory floors, but they’ve also shown off humanoid robots doing a wide range of tasks, including what appears to be learning. If you have not seen their Optimus robot, you’re missing out. This thing is a humanoid bot built to replace people on the factory floor. Just check out this video from two months ago, and you’ll see how close they’re getting. It’s a substantial improvement over their video from ten months ago. Tesla will be rolling out factories designed to accommodate these robots. They’re not spending all this R&D money on cute YouTube show-and-tell videos.

  6. Cobranding and Technology Sharing Between Musk Companies – This is already happening with the Roadster. Elon mentioned at the end of last year that the Roadster would deliver a sub-one-second zero-to-sixty thanks to technology they are “borrowing” from SpaceX. Imagine driving a mass-market consumer car with actual rocket technology. Jay Leno may be losing sleep over this one. Then there’s Neuralink. Sure, today it’s about helping people who’ve lost mental and physical function, through their first human trials. But a decade into the future, our thoughts will power robots and cars, brought to you by Tesla enhanced with Neuralink.

So this brings us to 15 reasons why Tesla is far more than a car company. It may be the next Apple; only time will tell.

9 Reasons Tesla is Far More Than a Car Company

If you’ve bought or watched Tesla’s stock over the past few years, it’s been a real roller coaster from under $20 to nearly $400 and today around $170, but all the while, they’ve kept innovating. Tesla is more than just a car company; it is positioning itself to become a global energy provider. To be clear, while I’ve invested in Tesla in the past, I have no financial positions connected with the company at the time of this writing.

  1. Solar Panels – Tesla’s panels are 20% efficient, which is generally where the market is now. Sure, in the lab, others have pushed these numbers over 40% using various optical and electrical techniques, but in practical residential deployments, 20% is the norm. Tesla’s panels are stylish, as stylish as black panels can be, with their main visual highlight being that they sit as seamless and flush to the roof as possible. In this case, the panels represent only part of the solution.
      
  2. Solar Roof Tiles – Every 20 to 30 years, most roofs, at least those with asphalt shingles, must be replaced. Tesla has a complete roof system that uses solar roof tiles designed to withstand 120 MPH winds while being roughly 15% efficient. Unlike solar panels, which you install only on roof lines that face the sun most of the time, the roof tiles cover the whole roof. While this costs more, the result looks like a traditional roof. The point here is that Tesla is offering a unique alternative to panels, which some homeowners’ associations have banned.
      
  3. Batteries – Be they for a home Powerwall system or their cars, this is an area where Tesla is leading innovation. They are spending heavily on R&D, have nearly a half dozen battery plants of their own with more under construction, and have been in talks over the years about acquiring mining facilities to source the raw materials. The latest cars use LFP (lithium iron phosphate) batteries, but Tesla is looking well beyond this chemistry. Battery chemistry is central to nearly all things Tesla.
     
  4. Home Electronics – By this I mean all the components that take power from solar and combine it with their Powerwall batteries: inverters, their home car chargers, and the transfer switch. This includes all the other bits of glue electronics necessary to deliver a complete home cogeneration system capable of distributing excess power, beyond the capacity of the Powerwalls and any plugged-in cars, back onto the grid. People want solutions, not point products; all the above elements give homeowners a unique option.
      
  5. Their App – The Tesla smartphone application ties all the above together: home generation, grid power, car charging and power consumption, the works. This is the same App you use to manage your car(s). The App and all of the data it manages and collects could be one of Tesla’s biggest assets in years to come.
     
  6. Home Charging – With LFP batteries, I can leave the house in a Model 3 (Highland) at a full charge every day. When I see my daughter in Richmond, VA (we live in Raleigh, NC), I’ll need to charge while we eat lunch or dinner for the ride home, but that will likely cost about the price of a single mixed drink. The same is true when we visit the in-laws in Charlotte. Home charging is a game changer. Duke Energy charges $0.19 per kWh fully burdened (taxes and all fees included); this works out to about $0.045 per mile, depending upon one’s driving (a quick cost sketch follows this list). Not since we fed our horses hay grown in our own fields has the fuel for transportation been this cheap.
     
  7. Charging Stations – Tesla is putting in one new charging station every business hour of the day. Many of these new stations will have pull-through stalls so trucks can charge without having to unhook their trailers. Even more interesting, Ford F-150 Lightning and Mustang Mach-E vehicles can now install the Tesla App and charge on the Tesla network. Other EV companies are following suit, and later this year all GM EVs will be able to charge using the Tesla network. Once GM sells an EV, the after-market charging revenue, for the home charger and the charging stations, will go to Tesla.
     
  8. Driverless Taxis – For the past year, Tesla has not permitted customers who’ve leased Model 3s or Ys to purchase their vehicles at the end of the lease. It is rumored that Tesla is looking to roll out its fleet of driverless taxis later this year or early next. If Full Self Driving version 12 is any clue, they are getting really close. This will totally crush Uber and Lyft, and validate Cruise and Waymo.
  9. Cogeneration – At some point, Tesla will have a critical mass of home solar customers in a market, region, or country, and it will begin testing branded cogeneration back into these grids using a federated model. This may deliver better rates to Tesla and homeowners and significantly more bargaining power than they currently have as individuals.  
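
The cost-per-mile figure in item 6 follows directly from the electric rate and the car’s efficiency. Here is a minimal sketch; the 4.2 miles-per-kWh efficiency is my assumption for a Model 3 Highland, not a number from the post.

```python
# Quick cost-per-mile check for item 6.
RATE_PER_KWH = 0.19     # Duke Energy, fully burdened, from the text
MILES_PER_KWH = 4.2     # assumed Model 3 Highland efficiency (my estimate)

cost_per_mile = RATE_PER_KWH / MILES_PER_KWH
print(f"${cost_per_mile:.3f} per mile")   # ~$0.045, matching the figure above
```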

Perhaps even more so than Apple or Google, Tesla positions itself as a critical player in the worldwide technology market. I haven’t even mentioned all the data it harvests from its cars, how that can be used, and the work they’re doing in AI; perhaps that’s a future blog.

The Huge Security Threat Posed by China EVs

Last year, the US Congress held hearings on TikTok and debated the security of the platform, the data it collected, and what it may be sending back to China. This past month, we learned that Temu, a shopping app owned by Pinduoduo, China’s second most popular online shopping site, is very sophisticated spyware. This shopping application was heavily hyped during the last two Super Bowls with the slogan “Shop Like a Billionaire.” This “free” application was the second most popular free app on Apple’s App Store following the Super Bowl. Researchers have found that the Android version can escalate its privileges and install a rootkit. At that point, its data collection engine is ALWAYS running in the background, even when you haven’t used Temu since the last time you rebooted your phone. The extent to which this program is harvesting data from those Android phone users is still being determined, but we know it collects a user’s location, contacts, calendars, notifications, photo albums, social media account data, and chat sessions, all without consent. This is all on a phone; imagine if it were a car.

Unlike regular Internal Combustion Engine (ICE) vehicles, Electric Vehicles (EVs) are rolling data centers. Most are designed to support some form of assisted driving and, eventually, full self-driving. Therefore, many have full 360-degree camera coverage around the outside of the car and in the car. If you own a Tesla, have you ever used “Dog Mode?” Tesla has nearly full camera coverage for everything inside the car, so for clueless teenagers and parents, the backseat is no longer for private “discussions.” While this is a Tesla-specific branded feature, you can bet that other manufacturers have similar in-vehicle cameras, even if they are not branded and announced features. “Dog Mode” is a side effect of having the camera to monitor driver engagement for self-driving enablement; Tesla just got clever and repurposed/rebranded it.

Are conversations in an EV private? Doubtful. Since most EVs also have voice control, this means there’s always a hot mic. Most of these EVs also include a 5G cellular data connection back to the manufacturer for “over-the-air updates” and to send video, traffic, and mapping data back to their road and traffic mapping systems. Since data is flowing both ways, it’s open to being exploited.

Next week, I take delivery of a new Tesla Model 3. Honestly, I look forward to “over-the-air updates,” Sentry Mode, Dog Mode, voice commands, and the whole enchilada. Still, even then, I won’t do or say anything in or around my car that will give away any national secrets, mainly because I don’t have any. Product or technology secrets, on the other hand, are possible. As many of you know, I work in high-tech at a semiconductor company, so innovation and intellectual property are part of our business. I seriously doubt Tesla would jeopardize its reputation by storing cabin recordings.

On the other hand, EVs designed in China and built by Chinese firms are a whole different story. If I took a business meeting in an EV designed in China, I’d worry that what I said would be played out of speakers half a world away shortly after my call concluded. So I would never even consider purchasing this class of vehicle. I fear that others will not be so savvy, and this will be just another example of our secrets being exfiltrated because the less informed were busy “Shopping Like a Billionaire!”

Freedom, Dignity & Self-Driving Cars

I vividly remember walking, then sprinting, back from the mailbox after I’d torn open the envelope addressed to me from the New York State Department of Motor Vehicles. It contained my NY State driver’s license, printed on thin blue cardboard. After seeing that little blue slip, I bolted the rest of the way up the driveway to my father that May. He was so proud and quickly agreed when I asked to take out the Pontiac Ventura, a classy Chevy Nova, for a ride. That feeling of freedom, the rush of adrenaline once I cleared the neighborhood and hit the accelerator, has never left me. Since that Ventura, I’ve had a Road Runner, a Mustang, motorcycles, convertibles, and even a Slingshot, and that feeling of freedom and rush of acceleration is still just as acute today.

Several years ago, my mom, whom I love dearly and who is in her eighties, was diagnosed with a form of dementia. That month, we installed a tracker on her car in case of emergencies. While she has owned an iPhone for years, she has never quite grasped how maps work, and the GPS in her car met most of her needs. Last summer, my younger brother and I, along with her neurologist, discussed the situation with Mom and determined that her condition had deteriorated to the point where she was no longer safe behind the wheel. While she easily carries on intelligent conversations about past and current events and can do fundamental math problems in her head, her reaction time and situational awareness have diminished to the point where she is at much greater risk behind the wheel than ever. Even though Mom was driving a bright red convertible and hadn’t yet gotten into an accident, a minor event set us on the path to this judgment. Also, we knew how much it would have torn her up if she had hurt someone else because of her condition. With the sale of her car and the surrender of her driver’s license, she has forever changed. That permanent loss of some measure of control has reduced her perception of self-worth and impacted her sense of dignity. It was a tough call. She has family, friends, and an aide to help her get out and run errands, but it’s not the same.

As we become adults, one of our most basic freedoms is being granted the privilege of driving whenever and wherever we want. With a simple twist of a key, placing our hands on the wheel, and shifting the car into gear, we’re in control, empowered to go wherever we like. Driving, at its very essence, is the ultimate example of power. When I talk about self-driving cars, especially with my generation and my mom’s, many become highly defensive, and I often hear, “I’ll never let a car drive me around.” Sure, my friends will quickly jump in the back seat of an Uber with a driver they’ve never met because a human is “in control.” Those same people, though, would be very reluctant, at this point, to be passengers in a driverless cab. They’ve said it point blank.

A few years into the future, we’ll be able to climb into a Tesla and say, “Take me to the market.” The car will open the garage, pull out, close the garage, and safely drop me off in front of the market a few miles and a dozen or so minutes later. When I’m done shopping, as I approach the checkout, I’ll tap a button in the Tesla app on my phone. As I exit the store, my car will pull up; I’ll put the groceries, then myself, into the car and ask it to “take me home.” Which it will. As we roll into the garage, the vehicle will pull in and center itself over the inductive charging mat on the floor, then close the garage, unlock the doors, and pop the trunk and/or frunk, depending on where it sensed I had placed the groceries. It may even alert me via my watch or phone to empty those compartments if they remain open and full too long.

During the ride to the market, while the car is self-driving, I could make suggestions to change lanes or go slower or faster, and, in doing so, retain some degree of control over my artificially intelligent pilot, much like an admiral might instruct the flagship’s captain.

I expect that a self-driving car will help me retain my freedom well beyond the point, several decades from now, when my safety behind the wheel becomes a point of discussion. Mom would never accept a self-driving car, even if it helped her retain her freedom, because her understanding and grasp of technology were never strong enough. There is still sufficient mistrust in self-driving because driving is a complex task with many nuances that are hard to define and test: for example, snow-covered roads in a blizzard, or oncoming emergency vehicles with lights and sirens. I’ve seen humans respond incorrectly in both of these cases many times, yet we expect, no, we require, that our artificial intelligence engines do better, and they will.

The most significant value proposition of self-driving might be how it will extend our freedom to roam well beyond the point at which we are safe behind the wheel. Though I’d never advocate standing on the driver’s seat while the vehicle is in motion and self-driving!