
For the last two decades, November has meant my annual pilgrimage to Super Computing (SC). My first SC was in 2004, when I was NEC's US product manager for their Intel Itanium Windows Super Computer. We were also featuring our next-generation SX-8 Vector Super Computer. At the time, I was new to vector processors, but I learned that, in simple terms, a vector is a variable that represents a linear set of numerical values. Therefore, the formula:
3A + 5B = 7C
contains vectors A and B, each holding up to 128 values, and in one CPU clock cycle you get out the vector C with 128 result values. At the time, I didn't know it, but the age of vectors was in decline, and Linux clusters were coming into their own. That year, vectors still ruled supreme atop the Top500, the list of the world's fastest Super Computers. However, a Linux cluster occupied the 5th position: Lawrence Livermore National Lab's Thunder system, with 4,096 Intel Itanium cores. By SC08, the DOE Roadrunner Linux cluster had captured the number one position. Also, Graphical Processing Units (GPUs) had just started to enter the scene, with CUDA (Compute Unified Device Architecture) having come out a year earlier. Over the past few years, one of the emerging technologies at SC has been quantum computing.
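To make the elementwise semantics concrete, here's a toy Python sketch of that formula (the vector lengths and values are illustrative). Where Python must loop over the lanes one at a time, a vector unit like the SX-8's would compute all 128 lanes in a single clock cycle:

```python
# Toy illustration of vector semantics: solve 3A + 5B = 7C for C, elementwise.
# A real vector processor computes every lane in one clock cycle;
# Python, of course, just loops.
A = [float(i) for i in range(128)]        # 128-element vector A
B = [float(2 * i) for i in range(128)]    # 128-element vector B

# C[i] = (3*A[i] + 5*B[i]) / 7 for every lane i
C = [(3 * a + 5 * b) / 7 for a, b in zip(A, B)]

assert len(C) == 128
assert C[7] == (3 * 7.0 + 5 * 14.0) / 7   # lane 7: (21 + 70) / 7 = 13.0
```

The point is that one vector instruction names entire operands (A, B, C) rather than individual numbers, which is what made a single clock cycle go so far.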
Outside the national labs, where vectors still play a niche role today, Super Computing is dominated by GPUs; everywhere else, we mix CPUs, GPUs, FPGAs, and AI/ML engines to complete the vast majority of our daily calculations. All four of these technologies have become commonplace and exist in most new computers, even laptops and phones. Let's take a moment and get everyone on the same page. The above-mentioned computational platforms have become highly parallelized. CPUs are our general-purpose calculators; they have pushed well beyond 100 compute cores per package and handle all data types reasonably well. By contrast, GPUs have thousands of cores designed to process high-precision floating-point numbers. FPGAs contain anywhere from a few hundred to several million lookup tables that take in integers and spit out integers in a single clock cycle, and they are massively parallel. AI or ML cores are built to operate on very low precision floating-point numbers, and we often see thousands of these cores per package. Depending on the use case, they are sometimes interspersed in the same package with FPGAs. So where does that leave quantum?
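You can get a feel for that "very low precision" from stdlib Python alone: the `struct` module supports the IEEE-754 half-precision (16-bit) format, the kind of narrow float many AI engines use to trade accuracy for throughput. A minimal sketch:

```python
import math
import struct

def to_half(x: float) -> float:
    """Round-trip a Python float (64-bit) through IEEE-754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

pi16 = to_half(math.pi)
print(pi16)             # 3.140625 -- only about three decimal digits survive
assert pi16 == 3.140625
assert pi16 != math.pi  # precision was lost on the way down to 16 bits
```

A 16-bit float carries only a 10-bit mantissa, which is why pi collapses to 3.140625; for neural-network weights that's usually accurate enough, and it halves (or quarters) the memory traffic versus the high-precision formats GPUs were built around.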

In IBM’s booth at SC24 was a two-meter square glass cube, pictured above, showing a visually stunning display of engineering; some might even say breathtaking. Please excuse the opening photograph’s quality; it doesn’t do it justice. The booth was bustling, and I had to shoot quickly. Nothing in the display moved; there were no blinking lights, whirring fans, or grinding pumps to serve as a distraction, just sheer measured precision in copper, brass, chrome, stainless, aluminum, and glass. It could just as easily have hung in the lobby of the Manhattan St. Regis hotel and fit in perfectly. It was a feast for the eyes, almost as if its primary role as a computer were an afterthought. It’s unclear if this unit was functional during the show, but if it were, then some portion is held in a near-perfect vacuum, with the qubits operating at temperatures near absolute zero. No liquid helium tanks were around, so it may have been a show-and-tell unit. The booth rep I was talking to didn’t grasp how the physics translated to computation; perhaaps the right person was on a break. The aluminum box (pictured in this paragraph) at the bottom of each of the three columns was labeled IBM Quantum; they use the term QPU (Quantum Processing Unit). I was told that each QPU contained a 64-qubit processor, but IBM’s website states they each include a 133-qubit Heron QPU.
Furthermore, wiring these three QPUs together creates a quantum circuit that can process 5,000 operations. They don’t clarify the time domain for these 5,000 operations, or even exactly what constitutes an operation. Honestly, I’m lost at this point. I’ve read Wikipedia articles on qubits, quantum circuits, and the like, and the math is beyond me. As if 5,000 operations in a single circuit isn’t confusing enough, IBM also has some magic called quantum coupling.
This enables multiple Quantum System Twos (each with three Heron processors) to be tightly coupled together. Today, with this technology, IBM can scale multiple System Twos to a total of 100 million operations within a single quantum circuit, and by SC33, they expect to support one billion operations within a single circuit. As I crawled through all this, the one area I did start to grasp is that FPGAs are now often used as front ends for quantum computers, or as a “poor man’s” quantum computer. How is still a mystery, but that’s one I’ll look into over the holidays.
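For what it's worth, at toy scale the qubit math is just linear algebra over complex amplitudes. Here's a stdlib-only, single-qubit sketch (purely illustrative, and nothing to do with IBM's actual Heron stack) of the Hadamard gate, the workhorse that puts a qubit into superposition:

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0; |b|^2, of measuring 1.
def hadamard(state):
    """Apply the Hadamard gate H = 1/sqrt(2) * [[1, 1], [1, -1]]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1 + 0j, 0 + 0j)      # the |0> state
plus = hadamard(zero)        # equal superposition of |0> and |1>

p0 = abs(plus[0]) ** 2
p1 = abs(plus[1]) ** 2
assert math.isclose(p0, 0.5) and math.isclose(p1, 0.5)

# Applying H twice returns the qubit to |0>: quantum gates are reversible.
back = hadamard(plus)
assert math.isclose(abs(back[0]) ** 2, 1.0)
```

If an "operation" in IBM's marketing roughly means one such gate application, then a 5,000-operation circuit is a chain of thousands of these applied across many entangled qubits before noise overwhelms the amplitudes; that's my reading, though, since IBM doesn't define the term.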
I hope all my friends return safely from SC24, and I look forward to seeing you again next year. To the right, I’m modeling my vintage SC04 jacket. Happy Thanksgiving to all.
Scott Schweitzer, Technology Evangelist, CISSP #644767
