Mining, and the Importance of Knowing What, How, When, and Where?

It doesn’t matter if you’re panning for gold, drilling for oil, or mining Bitcoin: your success is bounded by your best answers to what, how, when, and where. Often the “what” and “how” are tightly linked. If you own oil drilling equipment, you’re probably going to continue drilling for oil. If you buy an ASIC-based Bitcoin mining rig, you can only mine Bitcoin. Traditionally “when” and “where” are the most fluid variables to address. A barrel of crude oil today is $57, but over the past year it has fluctuated between $42 and $66. Similarly, Bitcoin has swung between $3,200 and $12,900 over the same year, so answering the “when” can be very important. Fortunately, digital currencies can easily be mined and held, which allows us to artificially shift the “when” until the market price makes selling profitable. In digital-currency circles this is often written HODL; originally a typo for “hold,” it has since morphed into “Hold On for Dear Life” until the currency is worth more than it cost you to mine. Finally, we have the “where,” and I’m sure some are wondering why “where” matters in digital-currency mining.

Let’s move backward through the above questions, drilling down specifically into digital-currency mining as the application. “Where” is the easiest one: you want to install your mining equipment wherever you can get the cheapest power, manage the excess heat, and tolerate the noise. Recently two of the most extensive mining facilities, both around 300 MW, have been or are being stood up in former aluminum plants. Electricity is the single most costly input when making aluminum, so those plants were built with access to vast volumes of it. Often these facilities are located near hydroelectric plants where electricity is below $0.03 per kWh. Also, since nearly every watt a rig draws is converted into heat or sound, you need a method for cost-effectively dealing with those byproducts. One of the mining operations mentioned earlier is located in the far northern region of Russia, which makes cooling exceptionally easy. “Where” also demands a local government that is friendly to digital-currency mining; in the Russian example above, it took nearly two years to secure the proper legal support, and some countries, like China until recently, have not been supportive of digital-currency mining at all. Enthusiasts like myself locate our mining gear in out-of-the-way places like basements or closets, perhaps even insulating them for sound and channeling the excess heat somewhere useful.
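To put that $0.03 per kWh figure in perspective, here is a rough back-of-the-envelope sketch of the monthly power bill for a single rig. The wattage and the residential rate used for comparison are numbers I picked purely for illustration, not measurements of any particular piece of hardware.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures for illustration only */
    double rig_watts       = 1200.0;      /* assumed draw of a small mining rig */
    double price_per_kwh   = 0.03;        /* cheap hydro power, as cited above  */
    double hours_per_month = 24.0 * 30.0;

    double kwh_per_month  = rig_watts / 1000.0 * hours_per_month;
    double cost_per_month = kwh_per_month * price_per_kwh;

    printf("Monthly energy: %.1f kWh\n", kwh_per_month);
    printf("Monthly cost:   $%.2f at $%.2f/kWh\n", cost_per_month, price_per_kwh);

    /* Same rig at an assumed typical residential rate, for comparison */
    printf("At $0.13/kWh:   $%.2f\n", kwh_per_month * 0.13);
    return 0;
}
```

At the cheap hydro rate the power bill is roughly a quarter of what the same rig would cost to run on a typical residential meter, which is exactly why “where” matters.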

Concerning “when,” that should be now. The general strategy executed by most of us currently mining is known as “mine and hold.” With the Bitcoin halving coming in May, the expectation is that Bitcoin will see a run-up to that point. In the prior two Bitcoin halvings, the price remained roughly the same immediately before and after the event. The last halving was in July 2016, and since then Bitcoin has gone from a niche commodity to a mainstream offering. Just this past week, Fidelity was awarded a trust license to operate its digital-assets business, further proof that Bitcoin has gone mainstream. As Bitcoin is the dominant digital currency, it is believed that as it rises, so will many of the other currencies that use it as a benchmark. So holders of other mainstream digital currencies like Ethereum should also see a significant benefit from a substantial increase in the value of Bitcoin.

Back to the “what” and “how.” With digital-currency mining, you have two criteria to weigh when answering the “how”: efficiency and flexibility. If you purchase a highly efficient solution, it will be an ASIC-based mining rig. You will then soon learn, if you haven’t already, that it has been designed to mine a single currency, and that’s ALL it can ever mine. Conversely, if you want flexibility, an FPGA or GPU miner affords you various degrees of freedom, but again the trade-off between efficiency and flexibility comes into play. FPGA mining rigs are often 5X more efficient per watt than GPU-based rigs, but the selection of FPGA bitstreams, while growing monthly, is still finite. Both FPGA and GPU rigs can switch from mining one coin to another with nominal effort; it’s the efficiency, and what can be mined, that separate the two.
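The per-watt argument is easiest to see with numbers. The sketch below compares hashes per joule for two hypothetical rigs; the hash rates and power draws are placeholders I made up to mirror the rough 5X ratio above, not benchmarks of any real GPU or FPGA product.

```c
#include <stdio.h>

/* Illustrative only: the figures below are invented placeholders,
 * not measurements of any specific GPU or FPGA mining rig. */
typedef struct {
    const char *name;
    double hashes_per_sec;
    double watts;
} rig_t;

int main(void) {
    rig_t rigs[] = {
        { "GPU rig (hypothetical)",  60.0e6, 1000.0 },
        { "FPGA rig (hypothetical)", 60.0e6,  200.0 },  /* ~5X better per watt */
    };

    for (int i = 0; i < 2; i++) {
        double eff = rigs[i].hashes_per_sec / rigs[i].watts;  /* hashes per joule */
        printf("%-26s %.1f MH/s at %4.0f W -> %.2f MH/J\n",
               rigs[i].name, rigs[i].hashes_per_sec / 1e6,
               rigs[i].watts, eff / 1e6);
    }
    return 0;
}
```

If both rigs earn the same revenue per hash, the one that delivers more hashes per joule keeps more of that revenue after the power bill.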

Finally, I’ve neglected to address the most obvious question: “why?” This is both the root of our motivation to mine and the fabric of our most social network. “Our only hope, our only peace is to understand it, to understand the ‘why.’ ‘Why’ is what separates us from them, you from me. ‘Why’ is the only real source of power; without it you are powerless. And this is how you come to me: without ‘why,’ without power.” – Merovingian, “The Matrix Reloaded,” 2003

Size Matters, Especially in Computing

(Photo caption: Yes, this is a regular-size coffee cup.)

The only time someone says size doesn’t matter is when they have an abundance of whatever it is that’s being discussed. Back in the 1980s, some of us took logic design and used discrete 7400-series chips to build out our projects. A 7400 has four two-input NAND gates, with four corresponding outputs, as well as power and ground pins. It is a simple 14-pin package, about 3/4 of an inch long and maybe a quarter-inch wide, that contains a grand total of sixteen transistors. Many of the basic gates we needed for our designs came in that exact same package form factor, which made for great fun. Thankfully we had young eyes back then, because oftentimes we’d be up until all hours of the night breadboarding our projects. We knew it was too late when someone would invariably slip up, insert a chip backward, and we’d all enjoy the faint whiff of burnt silicon.
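For anyone who never had the pleasure, the sketch below models one of those four NAND gates in a few lines of C. Because NAND is functionally complete, feeding NANDs into NANDs gives you NOT, AND, and OR as well, which is how a handful of 7400s could be stretched into an entire project. This is just an illustration, not a model of any particular board.

```c
#include <stdio.h>

/* Toy model of one gate from a 7400: a two-input NAND.
 * NAND is functionally complete, so the other basic gates
 * can be built from it, as the helpers below show. */
static int nand(int a, int b) { return !(a && b); }

static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }
static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }

int main(void) {
    printf("a b | NAND AND OR NOT(a)\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d |  %d    %d   %d    %d\n",
                   a, b, nand(a, b), and_(a, b), or_(a, b), not_(a));
    return 0;
}
```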

Earlier this month, Xilinx set a new world record by producing a field-programmable gate array (FPGA), a distant cousin of the 7400, called the Virtex UltraScale+ VU19P. Instead of 16 transistors, it has 35 billion, with a “B”. Also, instead of four simple two-input, one-output logic gates, it has nine million programmable system logic cells. A system logic cell is a “box” with six inputs and one output that is fully configurable and highly networked. Each individual little “box” is programmed by providing a logic table that maps all 64 possible combinations of its six inputs to the single output. So why does size matter?
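To make that “logic table” idea concrete, here is a minimal sketch of a six-input lookup table in C. The entire table fits in one 64-bit value, one output bit per input combination; the AND-gate configuration is simply an example I chose, and a real Xilinx logic cell adds flip-flops, carry logic, and routing that this toy ignores.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of a 6-input lookup table (LUT): the 64-bit constant holds
 * one output bit for each of the 2^6 = 64 possible input combinations.
 * Evaluating the "gate" is just indexing into that table. */
static int lut6(uint64_t truth_table, unsigned inputs /* 6-bit value */) {
    return (int)((truth_table >> (inputs & 0x3F)) & 1);
}

int main(void) {
    /* Configure the LUT as a 6-input AND gate: only the entry for
     * all-ones inputs (index 63) is set to 1. */
    uint64_t and6 = 1ULL << 63;

    printf("inputs=000000 -> %d\n", lut6(and6, 0x00));
    printf("inputs=111110 -> %d\n", lut6(and6, 0x3E));
    printf("inputs=111111 -> %d\n", lut6(and6, 0x3F));
    return 0;
}
```

Reprogramming the cell is nothing more than loading a different 64-bit table, which is why the Lego comparison below fits so well.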

Imagine you gave one child a quart-sized Ziploc bag of Legos and another several huge tackle boxes of pre-sorted bricks, including Lego’s own robotics kit. Assuming both children have similar abilities and creativity, which do you think will create the more compelling model? The first child’s solution wouldn’t be much larger than an apple and would be entirely static. While it could be revolutionary, it is limited by the constraints of the set of bricks provided. By contrast, the second child could produce a two-foot-tall robot that senses distance and moves freely about the room without bumping into walls. Which solution would you find more compelling? In this case, size matters in both the number and type of bricks available to the builder.

The system logic cells mentioned above are much like small Lego bricks in that they can easily replicate the capability of more complex bricks when several smaller ones are combined. FPGAs are also like Legos in that you can quickly tear down a model and reuse the building blocks to assemble a new one. For the past 30 years, FPGAs have had limitations that prevented them from going mainstream. First it was their speed and size, then it was the complexity of programming them. FPGAs were hard to configure, but the companies behind the technology learned from the Graphics Processing Unit (GPU) market and realized they needed tools to make programming FPGAs easier. Today, new tools exist to port C/C++ programs into FPGA bitstreams. Some might say the 2010s were the age of the GPU, while the 2020s are shaping up to become the age of the FPGA.
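As a flavor of what “porting C into a bitstream” looks like, here is a minimal sketch in the general style of Xilinx’s high-level synthesis (HLS) flow. The function, its name, and the pragma placement are illustrative assumptions on my part; check your tool’s documentation for the exact directives it supports. As ordinary C it compiles and runs anywhere, because a standard compiler simply ignores the unfamiliar pragma.

```c
#include <stdio.h>

/* vector_add is a hypothetical kernel: the kind of simple, loop-heavy C
 * function HLS tools are good at turning into FPGA logic. The pragma asks
 * the synthesizer to pipeline the loop; a normal C compiler ignores it. */
void vector_add(const int *a, const int *b, int *out, int n) {
    for (int i = 0; i < n; i++) {
#pragma HLS PIPELINE
        out[i] = a[i] + b[i];
    }
}

int main(void) {
    int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    vector_add(a, b, c, 4);
    for (int i = 0; i < 4; i++)
        printf("%d + %d = %d\n", a[i], b[i], c[i]);
    return 0;
}
```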