Amid the recent explosion of startups and venture capital investment in quantum computing, there has been much talk of an inevitable “quantum winter”.

[Image: Map of Winter Storm Quantum]

No, this is not some bleak doomsday scenario where our enemies win the race to develop a quantum computer and thrust our society into a winter of defeat and despair.

Instead, it’s the fear that the hype around quantum computing will far outpace the realities, investors will get frustrated by the failure to meet the inflated expectations, and funding for the industry and associated research will collapse. Let’s take a look at the basis for this fear.

The famous AI winter

The most famous “winters” in the history of technology were the AI winters of the late 20th century. From the moment the first learning algorithm – the perceptron – appeared in the late 1950s, the popular press was enamored with the potential of this new breed of technology. Famously, the New York Times reported in 1958 on the potential for a computer that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Government funding for AI research exploded in the 1950s and 1960s, but people got frustrated (no robots yet? where are my robots?!), funding was cut, and the 1970s became sort of the first “winter” for AI. By the 1980s, funding had returned, and the field was again on an upslope. But experts worried that hype was again outpacing reality, and indeed, the late 1980s and 1990s brought another collapse in funding and the failure of many AI-focused companies.

Why we fear a quantum winter

Levels of government funding and industry investment in quantum computing are unprecedented, and announcements of new funding seem to arrive nearly weekly. But those of us in the field know all too well that, for practical purposes, quantum computers are still completely useless. Sure, there’s a ton of great work being done that will pay dividends in the future, but most realists don’t expect widespread adoption of quantum computing for practical problems for many years, or even decades. Will investors be patient? We hope so. For every hype-filled article, there are plenty of experts trying to manage expectations and soften the inevitable disappointment.

This is by no means a unique situation. Gartner publishes a “Hype Cycle” every year for emerging technologies, with a prominent “Peak of Inflated Expectations” – a typically crowded list of over-hyped technologies just waiting for their proverbial bubble to burst. In their most recent analysis, quantum computing is still on the rising edge of this curve, almost unnoticeable among a tidal wave of AI-related technologies (seems like the AI spring has sprung).

So at some point, we should certainly expect the hype around quantum computing to subside. This is the typical trend for new technologies, and building a quantum computer is a long slog with a much more extended time frame than, say, the next blockchain. The current hype is unsustainable. But does that also mean that, as with AI in the past, funding will dry up? Is a quantum winter around the corner? I would argue that this is extremely unlikely.

Why quantum computing will not suffer the fate of AI

1. Technological maturity. Quantum computing today is a far more mature field than AI was in the 1950s. Modern AI research didn’t really begin until around the 1940s, and so the field was only about a decade old when the first massive wave of investment came in the 1950s. People had grand dreams, but no one knew what AI would truly be capable of.

By contrast, quantum computing research began in earnest in the 1980s (spurred in part by Richard Feynman), and so at this point the field has nearly four full decades of research behind it. And the technological feasibility of quantum computing is not just wishful thinking (some people would beg to differ with this statement, but they are a small minority). The principles of quantum physics underlying quantum computing have been around since the 1920s and have been experimentally tested many times over the last century, with astonishing success. And quantum error correction – the key to making quantum computers fault-tolerant and scalable – has been on a firm mathematical foundation since Shor and Steane developed their codes in the 1990s.
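To give a flavor of that foundation, here’s a minimal sketch of the 3-qubit bit-flip code, the simplest precursor of the Shor and Steane codes. It’s a classical statevector simulation in plain Python/NumPy (the helper functions are toy constructions for illustration, not any real quantum library’s API), and it shows the core trick: encode one logical qubit redundantly, then use parity-check “syndrome” measurements to locate an error without ever measuring – and destroying – the encoded superposition.

```python
import numpy as np

def encode(alpha, beta):
    """Encode the logical qubit a|0> + b|1> as a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def bit_flip(state, qubit):
    """Apply an X (bit-flip) error to one qubit (0 = leftmost)."""
    mask = 1 << (2 - qubit)
    out = np.zeros_like(state)
    for basis in range(8):
        out[basis ^ mask] = state[basis]
    return out

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). On real hardware these
    are measured via ancilla qubits; crucially, both branches of the
    encoded superposition give the SAME parities, so the measurement
    locates the error without collapsing the logical information."""
    basis = next(b for b in range(8) if abs(state[b]) > 1e-12)
    bits = [(basis >> (2 - i)) & 1 for i in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

alpha, beta = 0.6, 0.8                    # arbitrary logical state
state = bit_flip(encode(alpha, beta), 1)  # a single error hits qubit 1

# Each syndrome value points to a unique correction (or to none at all).
hit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
if hit is not None:
    state = bit_flip(state, hit)          # X is self-inverse: reapply to fix

assert np.allclose(state, encode(alpha, beta))  # logical qubit recovered
```

This toy code only handles bit flips; Shor’s and Steane’s achievement in the 1990s was showing that the same syndrome-measurement idea extends to arbitrary errors, including phase flips, which is what put fault tolerance on firm mathematical footing.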

2. Frankenstein’s monster. It’s easy for people’s imaginations to run wild when thinking about robots. When the perceptron was invented in the 1950s, no one had any realistic plan for developing a conscious machine. But people bought into the idea anyway, in part because it had been the stuff of science fiction for so long. People had an intuitive sense of what this technology could look like, and what impact it could have. If you were expecting a walking, talking, reproducing robot, and all you got were a few algorithms that could classify images, you’d lose faith, too.

Most technologies can’t capture the imagination the way AI does, and quantum computing is no exception. Sure, there is an international spy novel written about quantum computing (currently ranked #1492 on the Espionage Thrillers bestsellers list at Amazon!), and there are plenty of misunderstandings of what a quantum computer will be able to do, but we don’t run a serious risk of investors being influenced by their knowledge of science fiction.

3. Factoring, factoring, factoring. For those of us in the field, it has become cliché to mention Shor’s algorithm, by which quantum computers will be able to quickly factor extremely large integers and thereby break the RSA encryption scheme used to secure basically everything on the Internet. And while most algorithms research today is focused on near-term applications of smaller (“NISQ-era”) quantum computers, it’s impossible to overstate the importance of factoring to the field as a whole. For essentially anything transmitted over the Internet today (or for the foreseeable future, until a quantum-safe encryption standard broadly replaces RSA), an attacker who wants to decrypt it need only store the encrypted data and wait for a fault-tolerant quantum computer to exist. Sure, that may be decades away – but maybe not. The potential value to industry investors is enormous, and will no doubt be worth the risk and the wait. And no government can afford to take the chance that a rival nation might get a quantum computer first. As long as factoring remains the killer app of quantum computing (and assuming no one discovers an efficient factoring algorithm for classical computers), it’s hard to envision a scenario where funding for quantum computing dries up, despite the inevitable decline in hype.
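To make the threat concrete, here’s a minimal sketch of why factoring breaks RSA. The numbers are toy values chosen purely for illustration (real RSA moduli are 2048 bits or more); the point is that once the secret primes p and q of the public modulus are known – which is exactly what Shor’s algorithm would deliver – deriving the private key is elementary modular arithmetic.

```python
# Toy illustration: knowing the factors of an RSA modulus yields the
# private key. All values are tiny, made-up numbers for demonstration;
# real RSA uses primes hundreds of digits long.
# (Requires Python 3.8+ for pow(e, -1, phi).)

p, q = 61, 53              # secret primes: what Shor's algorithm recovers
N = p * q                  # public modulus (3233)
e = 17                     # public exponent
phi = (p - 1) * (q - 1)    # Euler's totient, computable only from p and q
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, N)    # what an eavesdropper stores today
recovered = pow(ciphertext, d, N)  # what they read once they can factor N
assert recovered == message
```

Everything hard about breaking RSA is concentrated in the factoring step; the rest, as the sketch shows, is a few lines of arithmetic.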