Our lives have been defined – no, transformed – by the availability of cheap, powerful, and ubiquitous computing power. Whether it is the multicore gigahertz processors that power my phone and laptop, or the scores of microcontrollers that make just about everything in my car work – everywhere you look there’s a computer doing something useful. I’m pretty sure I recently saw a Wi-Fi enabled light bulb. A light bulb with a computer inside!
In the press, the rise of ubiquitous computing is usually attributed to Moore’s Law. In case you’re not aware, Moore’s Law was first proposed by Gordon Moore, one of the founders of Intel. It states that the number of transistors on a chip doubles roughly every 1.7 years. Anytime something doubles on a regular basis, be prepared for dramatic change. Since you need roughly one transistor per bit of memory, the amount of memory on a chip doubled every 1.7 years too. Every other type of chip also became more sophisticated and powerful at the same rate.
Moore’s Law dramatically understates what was really happening in the semiconductor industry. At the same time as transistor counts were undergoing exponential growth, so were processor clock speeds, which increased from a very modest megahertz or two in 1980 to 5 GHz plus in the early twenty-teens – a few thousand times faster in just 30 years. In addition, this firehose of more and more transistors gave computer designers the ability to come up with fancy ways to improve processors: they introduced instruction pipelines, so that instead of taking four clock cycles to execute a single instruction, processors now execute almost two instructions per clock cycle. Processors went from eight bits, to sixteen bits, to thirty-two bits, to sixty-four bits – each step doubling processor throughput. And designers also started putting multiple cores on a single chip. Another factor of, say, fifty or more. And let’s not forget about the advances in software going on at the same time.
The thing is, this was a perfect storm of technological advancement that is not indicative of how technology usually progresses. The key determiner of how many transistors you can etch onto a chip is how small a feature you can draw. When you halve the width of the smallest line you can etch on silicon, you quadruple the number of transistors. The progress in reducing line widths is not nearly as stunning as the progress in the number of transistors. Further, reducing the line width is largely a matter of using higher and higher frequencies of electromagnetic radiation (a fancy term for light). The fact that the payoff for reducing line widths was so high meant investors were lining up to pour shiploads of money into the technology necessary to reduce them. It also didn’t hurt that the engineers were solving the same problem over and over again: how to use higher frequencies of light to print lines on semiconductors. And they had a schedule: every 1.7 years, transistors doubled. On top of that, as transistors got smaller, their capacitance went down and the distance between them decreased, so, for free, they got faster. Faster transistors allow higher clock frequencies and more powerful processors.
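The quadratic payoff from shrinking line widths can be sketched in a couple of lines of Python – a minimal illustration of the geometry, not real lithography math:

```python
def density_gain(linewidth_ratio):
    """Relative transistor count on a fixed-size die when the minimum
    feature width shrinks to `linewidth_ratio` of its old value.
    Each transistor's footprint scales with line width squared, so the
    count on the same die scales with 1 / ratio**2."""
    return 1 / linewidth_ratio ** 2

print(density_gain(0.5))  # halve the line width -> 4.0x the transistors
```

The same relation explains why even a modest 30% shrink (a ratio of 0.7) roughly doubles the transistor budget.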
I’m tempted to say easy peasy, but of course, it wasn’t. I massively simplified the challenges that confronted – and still confront – the chip designers who are transforming our lives. Nonetheless, they overcame many problems and transformed the world.
We have this one example of how a technology exploded and touched all aspects of our lives. If you look at any other technology that is advancing quickly, chances are a key enabler of that technology is advances in silicon chips. Now we expect that this is just how technology works. We expect everything to advance this quickly. From batteries, to green power, to cures for cancer, we think they all move forward at warp speed.
Except they don’t. Semiconductors were an aberration. Technological progress in most fields is much more grinding and much less explosive. Smaller, more incremental improvements that are easy to miss on a year-to-year basis add up to something significant over a decade or two.
Batteries are like this. The improvements in lithium-based batteries have been nowhere near as stunning as what we’ve come to expect from our computers. Batteries follow more of a leap-then-stagnate process. The leaps come when the industry switches from one chemistry to another. The last one came when we switched from nickel-cadmium batteries to lithium-based batteries back in the late 2000s or early 2010s. The next big jump in battery capacity will come when we switch to a new chemistry.
If battery capacity followed Moore’s Law, the Tesla Model S introduced in 2012 with a range of 426 km would have a range of roughly 11,117 km today in 2020. Instead, the maximum range of any production Tesla Model S is just under 600 km.
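The Tesla figure is straightforward compounding. A quick sketch of the arithmetic, using the 1.7-year doubling period and the 426 km baseline from the text:

```python
def moores_law_projection(base, years, doubling_period=1.7):
    """Project a quantity forward assuming it doubles every
    `doubling_period` years, per the Moore's Law figure in the text."""
    return base * 2 ** (years / doubling_period)

# 426 km in 2012, projected 8 years forward to 2020: eight years is
# about 4.7 doublings, a factor of ~26, landing around 11,117 km.
print(moores_law_projection(426, 8))
```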
Pity the poor battery designers – their path is nowhere near as clear as that of the chip designers. Where the chip designer had a clear path to higher and higher frequency lithography, it’s not at all clear what the next big thing in batteries will be. There are lots of teams working on the next battery. For each chemistry being pursued, a whole bunch of problems must be solved – all different from those of the other chemistries. In addition, there will be only one winner. If your chemistry doesn’t win, all your hard work and all your investors’ money go down the drain. Of course, investors know this, and so they are much more reluctant to put their money into unproven battery technology than they were to invest in higher frequency lithography.
When we think about, and plan for, the future, it is important to keep in mind that every technology advances at its own pace, that Moore’s Law is not the norm, and that most technologies move forward at a more measured pace.