The Future of Computing Beyond Moore's Law

Why today’s transistors are hitting a hard wall, and how the supercomputer of the future might be built on new tech.

How Moore became the law

We first had an inkling of the unique conductive properties of semiconductors as far back as 1833. That year, Michael Faraday, an English natural philosopher (really just a quaint term for a physicist in those days), observed the “extraordinary case” of electrical conduction in silver sulfide crystals, where higher temperatures led to increased conductivity. This was the reverse of the usual behavior of other metals like copper, which displayed reduced conduction when exposed to higher temperatures.

Now, nearly two centuries later, when semiconductors enable almost all the technology that comprises the fabric of 21st-century civilization, we are fast approaching the limits of the fundamental underpinning of the semiconductor industry.

In 1965, Intel co-founder Gordon Moore observed that the number of transistors that could be crammed onto a silicon chip was doubling every year, leading to similar increases in performance. Moore revised the time frame to a more realistic two years in 1975, and the rest, as they say, is history. Moore’s Law has since become the holy grail of chipmakers. Smaller process nodes allowed manufacturers to cram more transistors into the same area, leading to exponentially more powerful machines. The resulting chips also ended up consuming less power, all part of the automatic benefits that were assumed to follow smaller manufacturing processes.
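
To get a feel for what that doubling implies, here is a minimal back-of-the-envelope sketch in Python. The 1971 baseline (the Intel 4004, with roughly 2,300 transistors) is a real data point, but the projection itself is purely illustrative and not a model of any particular product line.

```python
# A toy illustration of Moore's observation: transistor counts roughly doubling
# every two years. The baseline is the Intel 4004 (~2,300 transistors, 1971);
# everything else is a back-of-the-envelope projection, not real product data.

def projected_transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Transistor count implied by one doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```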

But as manufacturers moved farther down the nanometer scale, the chipmaking process got more and more complicated, so the global semiconductor industry banded together to release a roadmap every two years that would chart the path for how to move forward together.

In the end, Moore’s Law continued apace not because of any incontrovertible scientific rule, but because the industry willed it to be so. New applications and use cases emerged to push the limits of previous generation chips, which in turn led to successive generations of silicon that improved by leaps and bounds. Moore’s Law held true, and an empirical observation became a law, right up till the point that it wasn’t anymore. 

The problem with transistors
In the past decade or so, we’ve come to realize the all too real limitations of Moore’s Law. In the early 2000s, manufacturing processes reached a point where the heat being generated by smaller processes was becoming untenable. As circuits become smaller, electrons move through them faster, which results in better performance, but also more heat. In short, things were just getting too small, and chipmakers were running up against a heat wall (some even worried that chips' power density would rival that of the surface of the sun!).
The solution to this was to stop increasing clock speeds, which have more or less stagnated since 2004. But in order to adhere to the projections of Moore’s Law, Intel also ditched its Tejas CPUs in favor of a new multi-core architecture that is pretty much the standard today. That’s worked out fine so far, but the signs are pointing to a new, larger – and possibly insurmountable – roadblock. We are approaching the physical limits of how far crucial chip features can be shrunk, with some now nearing the size of mere atoms.
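
The heat problem has a simple first-order explanation: the dynamic (switching) power a chip dissipates scales roughly with its switched capacitance, the square of its supply voltage, and its clock frequency. The sketch below evaluates that textbook relationship with invented component values – they are not measurements of any real processor – just to show how quickly power climbs as clocks are pushed higher.

```python
# Rough first-order model of dynamic switching power: P ~ alpha * C * V^2 * f.
# The activity factor, capacitance, and voltage below are invented for
# illustration; they do not describe any real chip.

def dynamic_power(capacitance_f, voltage_v, frequency_hz, activity=0.2):
    """Switching power (watts) for a lumped capacitance toggled at a given rate."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

chip_capacitance = 100e-9  # 100 nF of aggregate switched capacitance (hypothetical)
supply_voltage = 1.2       # volts (hypothetical)

for freq_ghz in (1, 2, 4, 8):
    watts = dynamic_power(chip_capacitance, supply_voltage, freq_ghz * 1e9)
    print(f"{freq_ghz} GHz -> ~{watts:.0f} W")
```
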
Problems like electron leakage aside, the industry will eventually get to the point where electron behavior is subject to quantum uncertainties, rendering transistors hopelessly unreliable. One example is a quantum phenomenon known as tunneling, where particles such as electrons turn up on the other side of an ostensibly impermeable barrier without ever passing through the intervening gap. Even Intel, the world’s largest chipmaker, has faltered. Its 10nm chips were originally slated for release this year, but the company now expects them to debut only in 2017.
More tellingly, it officially killed off its “tick-tock” manufacturing cycle in favor of a new “process-architecture-optimization” schedule. This essentially buys Intel more time to develop new and smaller transistor technologies – its next processor line, Kaby Lake, will be an optimized Skylake architecture still based on the 14nm process.
Either money or physics will stop Moore first
But the final nail in the proverbial coffin may have been the industry’s reaction to the new state of affairs. In March, the global semiconductor industry released for the first time a roadmap that did not assume the continuation of Moore’s Law. It lets applications – ranging from smartphones to data centers – take the lead, so the industry will work to design chips that fulfill more specific use cases, instead of having applications work to take full advantage of chip performance after the fact.
The semiconductor industry, once united, also shows signs of splintering, as companies pursue their own paths in moving beyond Moore’s Law. Intel, a towering figure once responsible for setting the pace of progress, isn’t even formally contributing to the forecasting process anymore.
And even if the realities of physics didn’t stand in the way, every step down the scale requires a new generation of more expensive equipment to manufacture. This is typically in the range of billions of dollars – an amount few companies can afford – and the cost is so prohibitive that some analysts even think that the expense alone makes the entire endeavor untenable. As Daniel Reed, a computer scientist and vice-president for research, says quite aptly, “My bet is that we run out of money before we run out of physics.”
But customers are less understanding of the limitations of physics. Are chip features becoming as small as a single atom? Are electrons leaking through transistor gates? That’s just too bad. They want faster chips to keep up with the latest applications, and they want these chips year after year.
That creates quite a conundrum for chipmakers. When the entire premise that your business has been based on begins to fail, what do you do? Increasingly, attention is turning to alternative technologies like quantum computing, and even silicon replacements like graphene that offer comparable performance but generate much less heat. To use a common phrase figuratively, the show must go on.
What’s a transistor?

A transistor is essentially a switch, where electrons flow through a channel from a “source” to a “drain.” A gate regulates this current flow between source and drain. In its resting state, the gate stops current from flowing through, but the channel becomes conductive when a voltage is applied to the gate. When this happens, the transistor is said to be switched on. The on and off states of the transistor correspond to the 0s and 1s in a binary system, and this is how chips process and store data.

But as transistors became smaller, this process started to fail. As the source and drain become closer to each other, the channel begins to leak, which means that there is a residual current flowing, even when the transistor is supposed to be off. This translates into excess heat and wasted power. And as current leakage becomes more severe, it may even become difficult to tell if the transistor is supposed to be on or off, which affects the reliability of the chip. 
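
As a rough mental model (not real device physics), you can picture a transistor as a threshold switch whose “off” state still passes a small leakage current, with that leakage growing as the channel shrinks. The toy Python sketch below uses entirely invented numbers to show how a shrinking on/off ratio makes the two states progressively harder to tell apart.

```python
# Toy model of a transistor as a threshold switch with leakage.
# All numbers are invented for illustration; this is not a device model.

def drain_current(gate_voltage, threshold=0.5, on_current=1.0, leakage=1e-9):
    """Normalised drain current: full current above threshold, leakage below."""
    return on_current if gate_voltage >= threshold else leakage

# Pretend leakage rises sharply as the channel gets shorter (purely illustrative).
for channel_nm, leak in [(90, 1e-9), (45, 1e-7), (22, 1e-5), (10, 1e-3)]:
    off_state = drain_current(0.0, leakage=leak)   # gate grounded: "off"
    on_state = drain_current(1.0, leakage=leak)    # gate driven: "on"
    print(f"{channel_nm} nm channel: on/off ratio ~ {on_state / off_state:.0e}")
```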


THE SUPERCOMPUTER OF THE FUTURE

How tomorrow’s supercomputer might be built.

A quantum leap forward

One of the more prominent solutions is quantum computing, a novel approach to computing that dispenses with the traditional binary system of 0s and 1s. Bits, which are the most basic units of information, come in two flavors – quantum and classical. But while they may seem like two different subtypes of the same thing, they’re actually radically different.

For starters, classical bits, which are really just the 0s and 1s featured in binary systems, can only take on one value at any one time. This means that it is either a 0 or a 1, but never both. On the other hand, the transistors in quantum computers, also known as quantum bits (or qubits), can be 0, 1, or both values at the same time thanks to a quantum mechanics principle called “superposition.”

As a direct corollary, two qubits can then hold four values simultaneously – 00, 01, 10, and 11 – and adding more qubits would theoretically result in a computer far more powerful than any supercomputer today. This is because a set of qubits in a quantum computer could exist in every possible combination at once, and where a regular machine would have to go through each configuration individually, a quantum computer could simultaneously process all combinations.
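
A quick way to see the “both values at once” idea is to write the state of a qubit register as a vector of amplitudes. The NumPy sketch below is a plain classical simulation for illustration – not real quantum hardware or any particular quantum library’s API – and it shows that two qubits in an equal superposition carry four amplitudes at once, one for each of 00, 01, 10, and 11.

```python
import numpy as np

# State-vector picture of qubits: a register of n qubits is described by a
# vector of 2**n complex amplitudes. Classical simulation, for illustration only.

ket0 = np.array([1, 0], dtype=complex)                # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # gate that creates a superposition

plus = hadamard @ ket0                                # (|0> + |1>) / sqrt(2)
two_qubits = np.kron(plus, plus)                      # equal superposition over 00, 01, 10, 11

for label, amplitude in zip(["00", "01", "10", "11"], two_qubits):
    print(f"|{label}>: amplitude {amplitude.real:.2f}, probability {abs(amplitude)**2:.2f}")
```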

Helping this along is a second quantum phenomenon called “entanglement”, whereby a change in one particle also immediately affects others with which it is entangled. This is essentially what allows a quantum computer to manipulate all its qubits together.

In a nutshell, this means the ability to crunch huge amounts of data at once. And as the number of combinations increases exponentially with the number of qubits, quantum computers could tackle problems in days that would take today’s computers a millennium to solve. It was even demonstrated as early as 1996 that quantum computers could drastically speed up the searches of massive databases (the idea isn’t new, and the conceptual bedrock was laid down as far back as the 1970s).
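
The database-search result mentioned above is Grover’s 1996 algorithm, and it is a convenient way to put rough numbers on the speed-up: an unstructured search over N items needs on the order of N classical lookups, but only about (π/4)·√N quantum queries. The snippet below simply evaluates those two formulas – it does not simulate the algorithm itself.

```python
import math

# Rough query counts for finding one marked item among N: a classical scan needs
# ~N/2 checks on average, while Grover's algorithm needs ~(pi/4) * sqrt(N) queries.
# This only evaluates the formulas; it is not a simulation of the algorithm.

for exponent in (10, 20, 30, 40):
    n = 2 ** exponent
    classical_checks = n / 2
    grover_queries = (math.pi / 4) * math.sqrt(n)
    print(f"N = 2^{exponent}: classical ~ {classical_checks:,.0f} checks, "
          f"Grover ~ {grover_queries:,.0f} queries")
```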

Ultimately, because there’s so much computing power waiting to be harnessed, chipmakers don’t have to scramble to increase performance by shrinking transistors and cramming more of them onto successive generations of chips.

The problem with quantum computers

But quantum systems are notoriously fragile, and the industry is still a long way from a commercially viable design, which is why the industry isn’t rushing to implement it just yet. IBM may have recently unveiled a working five-qubit quantum computer that is available for public use, but it is still nowhere close to realizing the full potential of the technology.

Yet another issue is that there is no consensus as to what to build qubits out of. IBM’s approach is to represent qubits as currents flowing through superconducting wires, where the presence or absence of a current – or alternatively, its direction – equates to a 0 or a 1. Then there are proposals to use photons, a combination of ions and laser beams, and esoteric-sounding methods involving quasi-particles called anyons (your guess is as good as mine).

Is quantum computing even possible?

That aside, progress has been swift. One of the key challenges in quantum systems is to maintain a quantum superposition without it decohering, or collapsing, when you don’t want it to. In 2012, the record for maintaining a superposition outside of silicon stood at two seconds. Three years later, in 2015, this had leapt to six hours.

Last year, a team led by John Martinis, a Google quantum physicist, published a paper describing a nine-qubit system, where four could be measured without collapsing the other five. This allowed for significant strides in error correction as they could then check for and rectify mistakes, which is key to navigating the probabilities involved in getting the right solutions from quantum systems.

The possibilities are enticing. One application that has been frequently floated is codebreaking. Documents leaked in early 2014 by National Security Agency (NSA) contractor Edward Snowden showed that the NSA was actually working on quantum computers for just this purpose. Another application is in artificial intelligence (AI), where quantum computers could theoretically accelerate training of deep learning networks.

What’s more, these systems behave in exceedingly strange ways. While qubits can exist in multiple states, they need to be measured if you want the machine to spit out the answer to whatever problem it was given. But if you attempt to observe the state of a quantum system, it “decoheres,” which is scientific jargon for saying that the qubit stops existing as both a 0 and a 1, and falls into one state or the other.

The problem is that there’s no way of knowing for sure which state the qubit will collapse into, so the success of any attempt to build a quantum computer rests on the ability to harness the probability that a qubit will decohere into one state as opposed to the other. If the measurement isn’t done correctly, the resulting answer will just be one of its countless possible states, and almost definitely the wrong one.
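
One way to picture measurement is as a weighted draw over the register’s amplitudes: the probability of seeing a given outcome is the squared magnitude of its amplitude, and each run of the machine yields exactly one outcome. The NumPy sketch below samples repeatedly from a hand-written two-qubit state to make that concrete; the amplitudes are arbitrary illustrative values, not the output of any real quantum device.

```python
import numpy as np

# Measurement as probabilistic collapse: each run yields exactly one basis state,
# drawn with probability |amplitude|^2. The amplitudes below are arbitrary.

rng = np.random.default_rng(0)

amplitudes = np.array([0.1, 0.2, 0.3, 0.9], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)          # normalise so probabilities sum to 1

probabilities = np.abs(amplitudes) ** 2
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probabilities)

for state in ("00", "01", "10", "11"):
    count = np.count_nonzero(outcomes == state)
    print(f"|{state}> observed {count} times out of 1000")
```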

To further complicate matters, because of quantum entanglement, any observation of one qubit could end up collapsing others, a phenomenon that Albert Einstein famously referred to as “spooky action at a distance.”

On top of that, qubits are supremely sensitive to external interference. A passing electromagnetic wave, or even a tiny whiff of heat, can cause them to decohere and ruin any ongoing calculations. As a result, they generally have to be painstakingly isolated from external influences (usually involving some sort of refrigerator chilled to near absolute zero).

Going 3D

For all the scrambling and indecision over what would best sustain Moore’s Law, manufacturers have hardly been standing still. Over the past few years, we’ve seen changes to the design of the transistor to mitigate problems like electron leakage arising from smaller process nodes.

The most notable of these was the shift to the 3D FinFET structure, a switch Intel made in 2012 when it moved to the 22nm process node. While previous transistors were planar, FinFET transistors feature a raised channel that is then wrapped by the gate on its three exposed sides. This gives the gate much more control over the channel, and enables the transistors to switch faster while still consuming less power.

The next proposed step involves so-called “gate-all-around” transistors, where the gate surrounds the channel on all four sides. This gives the most amount of control, but also makes the manufacturing process more complex. With some luck, this could enable manufacturers to reach the 5nm process, but bigger changes are required beyond that. 


THE SEARCH FOR NEW MATERIALS

A different approach to solving the transistor problem. 

Graphene

A different approach to solving the transistor problem involves a search for a suitable replacement for silicon, one with better properties like higher electron mobility and lower heat output. Silicon itself is already approaching its physical limits, and one alternative that has risen to prominence is graphene, a sheet of carbon that is only one atom thick.

Graphene shares silicon’s ability to remain stable across a wide range of temperatures, while also being a better conductor of electrons, which means transistors can switch on and off more quickly. This equates to better performance and lower power consumption. Graphene transistors could operate hundreds or even thousands of times faster than the highest performing silicon devices.

The only problem is that graphene is almost too good a conductor, and it cannot effectively switch between discrete on and off states. Various workarounds have been attempted, including doping, squashing, and squeezing graphene, or applying electric fields to modify its electrical properties.

One solution that has presented itself in recent years is the use of graphene ribbons narrower than 10nm. These behave like proper semiconductors, so they can switch on and off, and researchers have even demonstrated a new chemical-based approach that would make it more feasible to manufacture them in an industry setting.

SiGe alloy

Another option that has been floated is to create transistors with channels made from a silicon-germanium (SiGe) alloy. Like graphene, SiGe is a better conductor than silicon, at least in certain respects. To be specific, it conducts “holes” better. Holes are places in a semiconductor where an electron could be but happens not to be, and they behave much like positively charged electrons.

Using this approach, IBM – as part of a consortium including names like Samsung and GlobalFoundries – even leapfrogged the next-generation 10nm process to create a working 7nm chip last year. Of course, it’s exceedingly expensive to make, and uses cutting-edge technology like extreme ultraviolet (EUV) lithography to etch features onto the 7nm chips.

But the impressive part is that IBM claims the technique will eventually allow for 7nm chips to be crammed with a whopping 20 billion transistors. In comparison, the highest transistor count on an existing chip is 7.2 billion, on a 22-core 14nm Xeon processor.
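
As a rough sanity check on those figures (and bearing in mind that node names like “14nm” and “7nm” are largely marketing labels rather than literal feature sizes), halving a linear dimension would ideally quadruple the number of transistors that fit in the same area. The back-of-the-envelope calculation below just applies that ideal scaling to the numbers quoted above; the gap between the result and IBM’s 20-billion claim is easily explained by differences in die size and design rules.

```python
# Idealised area scaling: halving the linear feature size quadruples transistor
# density. Node names are marketing labels, so treat this purely as a
# back-of-the-envelope check on the figures quoted in the article.

current_node_nm, next_node_nm = 14, 7
current_count_billion = 7.2           # 22-core Xeon on the 14nm process

ideal_density_gain = (current_node_nm / next_node_nm) ** 2   # = 4x
projected_billion = current_count_billion * ideal_density_gain

print(f"Ideal density gain from {current_node_nm}nm to {next_node_nm}nm: {ideal_density_gain:.0f}x")
print(f"Projected count at the same die area: ~{projected_billion:.1f} billion transistors")
```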

But SiGe has its downsides as well. While it conducts holes much better than silicon, it is a less able conductor of electrons, and a viable long-term path forward would require SiGe to be used with other compounds, such as indium gallium arsenide alloys, to improve its electron conduction properties.

Spintronics

A more ambitious idea involves something called “spintronics”, short for “spin transport electronics.” Unfortunately, this is still a highly experimental field, even though research into spintronic transistors has been going on for over 15 years. Having said that, it’s not all entirely science fiction either, and spintronics is already used in certain hard drives.

While traditional electronics uses the charge of an electron to represent information and encode data, spintronics utilizes “spin,” an intrinsic property of electrons. Spin can take one of two forms, up or down, which is useful because the two states can then represent a 0 and a 1. Electron spins can essentially be transferred without the flow of any electric current, and this “spin current,” as it is called, can transfer information without generating heat in devices.

Ultimately, the most appealing thing about spintronics is the fact that the voltage required to drive these transistors is extremely small, in the range of a mere 10 to 20 millivolts. This is hundreds of times lower than a regular transistor, so chips would no longer have any problem with excess heat. But these tiny voltages come with problems of their own, as it becomes difficult to distinguish between a 0 and a 1 at such low voltages. 


THERE’S STILL MORE TO COME

Moore’s Law is dead. Long live Moore’s Law.

While there are countless possibilities, we’re still not anywhere close to an authoritative solution that will allow Moore’s Law to continue unabated. But does this mean that we’re doomed to stagnating computing capabilities within the space of the next decade? Hardly.

We haven’t even scraped the surface of what’s being considered by the computing industry. Quantum computing, spintronics, and innovative new materials aside, others have proposed things like neuromorphic computing and increasingly specialized chips – for instance, Movidius’ visual processing chips – as the way forward. Then there are carbon nanotubes and nanowires, and even the prospect of 3D chips that involve stacking layers of components on top of a single silicon die.

The point is that innovation hasn’t stopped, and it isn’t going to. Moore’s Law is first and foremost an economic observation, not a natural law, and the economics will ensure that it stays alive in some form or other. If you take Moore’s Law at its literal meaning, where it simply involves a lockstep march to more transistors and smaller process nodes, then yes, it is on its last legs. But Moore’s Law is also about cheaper costs and generally better designs. Over the past half century, the semiconductor industry has achieved this by relying on the automatic benefits derived from smaller and better process technologies.

With that route closed, the path ahead is paved with smarter and more creative designs. And as long as consumers continue to demand more of the same, there’s little reason to doubt that the industry will eventually deliver. In fact, if we were to take the optimistic view, this might spur design innovations that could kickstart a new era in computing, now that manufacturers can no longer rest on their laurels and expect automatic improvements from smaller processes.

The spirit of Moore’s Law might just live on after all.
