Size Matters, Especially in Computing

Yes, this is a regular size coffee cup

The only time someone says size doesn’t matter is when they have an abundance of whatever is being discussed. Back in the 1980s some of us took logic design and used discrete 7400-series chips to build out our projects. A 7400 has four two-input NAND gates, each with its own output, as well as power and ground pins. It is a simple 14-pin package about three-quarters of an inch long and maybe a quarter-inch wide that contains a grand total of sixteen transistors. Many of the basic gates we needed for our designs came in that same exact package form factor, which made for great fun. Thankfully we had young eyes back then, because oftentimes we’d be up until all hours of the night breadboarding our projects. We knew it was too late when someone would invariably slip up, insert a chip backward, and we’d all enjoy the faint whiff of burnt silicon.

Earlier this month Xilinx set a new world record by producing a field-programmable gate array (FPGA), a distant cousin of the 7400, called the Virtex UltraScale+ VU19P. Instead of 16 transistors, it has 35 billion, with a “B”. And instead of four simple two-input, one-output logic gates, it has nine million programmable system logic cells. A system logic cell is a “box” with six inputs and one output that is fully configurable and highly networked. Each individual little “box” is programmed by providing a logic table that maps every possible combination of its six inputs to the single output. So why does size matter?
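Before answering, it helps to see what “programming a logic cell” actually means. Here is a minimal sketch in C of how such a six-input cell could be modeled in software: the cell is just a 64-bit truth table, one output bit for each of the 2^6 input combinations. The `logic_cell` struct and `lut6_eval` function are hypothetical names for illustration, and this is a simplification of the real hardware, not Xilinx’s actual cell architecture.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of one six-input logic cell: a 64-bit truth table
 * holding the output bit for each of the 2^6 possible input combinations. */
typedef struct {
    uint64_t truth_table;   /* bit i = output when the six inputs encode i */
} logic_cell;

/* Evaluate the cell: pack the six input bits into an index, then look it up. */
static int lut6_eval(const logic_cell *cell, const int inputs[6]) {
    unsigned index = 0;
    for (int i = 0; i < 6; i++) {
        index |= (unsigned)(inputs[i] & 1) << i;
    }
    return (int)((cell->truth_table >> index) & 1u);
}

int main(void) {
    /* "Program" a cell as a six-input AND: only index 63 (all ones) outputs 1. */
    logic_cell and6 = { .truth_table = 1ULL << 63 };
    int all_ones[6] = {1, 1, 1, 1, 1, 1};
    int mixed[6]    = {1, 0, 1, 1, 1, 1};
    printf("AND(1,1,1,1,1,1) = %d\n", lut6_eval(&and6, all_ones)); /* prints 1 */
    printf("AND(1,0,1,1,1,1) = %d\n", lut6_eval(&and6, mixed));    /* prints 0 */
    return 0;
}
```

Swap in a different 64-bit table and the same “box” becomes an OR, a parity checker, or any other six-input function, which is exactly why having nine million of them matters.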

Imagine you gave one child a quart-sized Ziploc bag of Legos and another several huge tackle boxes of pre-sorted bricks, including Lego’s own robotics kit. Assuming both children have similar abilities and creativity, which do you think will create the more compelling model? The first child’s solution wouldn’t be much larger than an apple and would be entirely static. While it could be revolutionary, it is limited by the constraints of the set of blocks provided. By contrast, the second child could produce a two-foot-tall robot that senses distance and moves freely about the room without bumping into walls. Which solution would you find more compelling? In this case size matters in both the number and type of bricks available to the builder.

The system logic cells mentioned above are much like small Lego bricks in that they can easily replicate the capability of more complex bricks when several are combined. FPGAs are also like Legos in that you can quickly tear down a model and reuse the building blocks to assemble a new one. For the past 30 years, FPGAs have had limitations that kept them from going mainstream. First it was their speed and size, then it was the complexity of programming them. FPGAs were hard to configure, but the companies behind this technology learned from the Graphics Processing Unit (GPU) market and realized they needed tools to make programming FPGAs easier. Today new tools exist to port C/C++ programs into FPGA bitstreams. Some might say the 2010s were the age of the GPU, while the 2020s are shaping up to become the age of the FPGA.
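To give a flavor of what “porting C/C++ into a bitstream” looks like, here is a minimal sketch of the kind of plain C function a high-level synthesis (HLS) tool can turn into FPGA logic. The pragma is written in the style of Xilinx’s Vitis/Vivado HLS tools and is an assumption about a particular toolchain, not part of standard C; an ordinary software compiler simply ignores it.

```c
/* A simple vector add, written as ordinary C. An HLS tool unrolls and
 * pipelines this loop into dedicated hardware built from logic cells. */
void vector_add(const int a[1024], const int b[1024], int out[1024]) {
    for (int i = 0; i < 1024; i++) {
        /* Vendor-style hint asking the tool to start a new addition every
         * clock cycle (assumed Vitis/Vivado HLS pragma syntax). */
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
```

The appeal is that the same source file can be tested as regular software on a laptop and then handed to the FPGA toolchain, which is the lesson the FPGA vendors took from how GPU programming tools lowered the barrier to entry.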
