Quantum Blockchain
At a recent book event, I was asked an interesting question: how would blockchain systems be affected if quantum computing becomes a practical reality?
I think it’s fun to speculate about this, but I want to do so without getting bogged down in two very hard issues: (1) Is quantum computing likely to become a large-scale practical reality? And (2) if it does, what specific capabilities would that large-scale practical quantum computing have?
So instead I’m going to consider an imaginary cousin of quantum computing. We will assume some kind of quasi-magical new form of “Q-computing” – maybe it’s quantum, maybe it’s not – and ask what the implications might be for blockchain systems. Q-computing solves problems in a way that’s so different from conventional computing that we have to revisit our underlying assumptions.
(As an aside, I am currently a little skeptical about the practicality of large-scale quantum computing. Having written a couple of chapters in Bits to Bitcoin that explain the digital model to a non-technical reader, I find that quantum computing feels like a move in the wrong direction. It moves away from some of the things that make “conventional” digital computing so appealing: specifically, in a noisy universe, digital systems offer us the opportunity to reliably separate signal from noise. In contrast, quantum systems seem to be very temperamental and fragile, subject to unexpected glitches and failures. Still, one might well have said similar things about the fragility of early computers, with their vacuum tubes and mercury delay lines. Maybe the clumsiness we’re seeing in the first quantum computer realizations will prove to be transient.)
This imaginary Q-computing might well violate crucial assumptions that underpin current blockchain systems. The first area of concern is digital signatures. Blockchain systems depend on those signatures to maintain the integrity of data. (The overall approach is rather ingenious, but those details don’t need to concern us here. You can read a simplified explanation of how signatures are used in Bitcoin in chapter 26 of Bits to Bitcoin.) The “trick” underpinning such a signature is that there is some class of computations where a solution is expensive to generate, but cheap to check. When we create or change data, we have to run the expensive computation to produce a new corresponding signature; but if we’re just checking that data is unchanged, we can run the cheap computation. In a blockchain system, we actually chain these signatures so that it becomes really expensive to change anything in the recorded data, with the number of signatures steadily increasing over time and providing us additional reassurance. There’s a kind of “computational treadmill” running in a blockchain system that means an attacker has a hard time keeping up.
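To make the chaining idea concrete, here’s a minimal sketch in Python. It uses SHA-256 hashing purely as a stand-in for whatever “expensive to generate, cheap to check” computation a real system uses; the record contents are invented, and the point is simply that each link depends on everything before it, so a change anywhere forces recomputing the whole tail of the chain.

```python
import hashlib

def link(previous_link: str, record: str) -> str:
    """Chain a record to everything before it.

    SHA-256 is only a stand-in for the generic "expensive to generate,
    cheap to check" computation described above; what matters for the
    sketch is that each link depends on the previous one.
    """
    return hashlib.sha256((previous_link + record).encode()).hexdigest()

records = ["alice pays bob 3", "bob pays carol 1", "carol pays alice 2"]

chain = ["genesis"]            # an arbitrary starting value
for record in records:
    chain.append(link(chain[-1], record))

# Tampering with an early record changes its link, and therefore every later
# link, so a forger must recompute the whole tail to look consistent.
records[0] = "alice pays mallory 300"
print(link("genesis", records[0]) == chain[1])  # False: the change is detectable
```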
Even with mainstream digital computing, it’s difficult to really prove that these signatures can’t be broken – e.g., by finding shortcuts in the supposedly-expensive computation. Instead, we just have a sort of empirical evidence that they have held up well against attacks. (Except, of course, in those occasional cases where they haven’t held up well. Oops!)
If we have a dramatically different kind of computing, there’s no reason why the costs should be similar. So we might well find that it was very easy for our Q-computer to break digital signatures. If the Q-computer could quickly recompute all the signatures required after some change we’ve made to the blockchain’s stored data, then we could put out a bogus version of the blockchain that would look OK to anyone who hadn’t seen us change it. (There are other protective mechanisms we might have to address as well – I’ll get to those a little later).
This threat to digital signatures might not be a huge problem if Q-computers still suffer from the main limitation of today’s quantum computers: it’s hard to build big machines. It’s not very hard to make a stronger digital signature: Adding a single bit to the length of a digital signature makes the space of possible signatures twice as large. Although that increase in size means the digital computer is doing more work, it’s possible that the additional cost grows faster for the Q-computer – and ideally that the Q-computer eventually hits some kind of feasibility limit related to its fragile construction.
What if the Q-computer doesn’t share the problems of today’s quantum computers? Then we should assume that today’s conventional version of blockchain can’t be trusted against a possible Q-computer attack.
What about adapting the idea of digital signatures to Q-computing? That direction would depend on whether there is a suitable class of computations in which finding a solution is hard but checking a solution is easy. The general paradigm of a large sparse space of potential solutions seems to be widespread in physical phenomena, so it seems plausible that there would be something that could be used for “Q-signatures.” Then we could use those Q-signatures to chain together elements of a “Q-blockchain.” We’ll say some more about this at the end. However, it’s also worth noting that Q-computing might not have the right features to build these signatures: even if we know how to use Q-computing to crack today’s digital signatures, we don’t necessarily know how to build Q-signatures.
Now, what about those other protective mechanisms? Beyond digital signatures and the “computational treadmill” described previously, some (but not all) blockchain systems have another “crackable” part: some kind of randomized choice of who gets to extend the chain. The selection is not usually a literally random selection among all the players, because that approach provides an incentive for people to generate fake players (to get more chances, and thus perhaps monopolize the chain). Instead, there’s some kind of weighting that corresponds to a cost. A reasonable analogy is to a lottery – the chances in a fair lottery correspond to the number of tickets bought, which in turn correspond to money spent.
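As a rough illustration of that weighting – not how any particular system actually implements it – here is a toy lottery in Python. The participants and their “costs” are invented.

```python
import random

# Toy weighted lottery: each participant's chance of being picked to extend
# the chain is proportional to some cost they have paid (tickets bought,
# computational effort, etc.). The names and weights are invented.
participants = {"alice": 5, "bob": 1, "carol": 2}

def pick_extender(weights):
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Over many draws, alice wins about 5/8 of the time, carol 2/8, bob 1/8.
print(pick_extender(participants))
```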
Some blockchain systems use proof of work, where the cost is solving some computational challenge. For example, each block in the chain can constitute a solution to a kind of puzzle implied by the previous block (Bitcoin is the best-known example of a system that works this way). This approach makes it hard for an attacker to reliably monopolize the extension of the blockchain, because it’s hard to know who will win the race to solve the puzzle. The puzzle in one of these systems is itself somewhat like a digital signature: it’s hard to produce (solve), but easy to check. Just as the imaginary Q-computer could upend our cost assumptions about digital signatures, it might be able to produce these kinds of puzzle solutions much faster than a conventional computer. Accordingly, a conventional blockchain could be vulnerable to attack.
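Here’s a small sketch of that kind of puzzle, again in Python. It mimics the shape of Bitcoin-style proof of work – find a nonce so that the block’s hash falls below a target – but the difficulty and block format are made up; it’s an illustration of “hard to solve, easy to check,” not real consensus code.

```python
import hashlib

DIFFICULTY_BITS = 18  # an arbitrary toy difficulty, not Bitcoin's real target
TARGET = 1 << (256 - DIFFICULTY_BITS)

def solve_puzzle(previous_hash: str, data: str) -> int:
    """Hard to produce: try nonces until the block's hash falls below the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{previous_hash}|{data}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < TARGET:
            return nonce
        nonce += 1

def check_puzzle(previous_hash: str, data: str, nonce: int) -> bool:
    """Easy to check: a single hash tells us whether a claimed solution works."""
    digest = hashlib.sha256(f"{previous_hash}|{data}|{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < TARGET

nonce = solve_puzzle("0000abcd", "next batch of transactions")
print(check_puzzle("0000abcd", "next batch of transactions", nonce))  # True
```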
If we wanted to keep the blockchain working despite this attack, is there a solution? We could make the puzzle harder, which could foil the Q-computer. Unfortunately, a harder puzzle is harder for conventional systems as well. The problem here is a little different from what we saw when we considered digital signatures.
When we were concerned about forging digital signatures, we didn’t want forging to be feasible for anyone. As long as forgery is infeasible for Q-computing, it’s OK if it’s super-infeasible for conventional computing. That won’t work for the puzzles. These proof-of-work puzzles are trickier, because we need them to be solvable. Even worse, we need them to be solvable at a controlled rate. For example, in the normal operation of Bitcoin, the puzzle gets harder when solutions are happening too rapidly (and correspondingly easier when solutions are happening too slowly).
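That adjustment can be sketched as a simple feedback rule. The function below captures roughly how Bitcoin’s retargeting behaves – difficulty scales with how fast the last batch of blocks actually arrived, clamped to a maximum adjustment – though the constants and clamping here are simplified from the real rule.

```python
def retarget(current_difficulty: float,
             actual_seconds: float,
             target_seconds: float = 14 * 24 * 3600) -> float:
    """Roughly Bitcoin-style difficulty retargeting (simplified).

    If the last batch of blocks arrived faster than intended, difficulty
    rises in proportion; if slower, it falls. The real rule also clamps
    each adjustment to a factor of four, approximated here.
    """
    ratio = target_seconds / actual_seconds
    ratio = max(0.25, min(4.0, ratio))
    return current_difficulty * ratio

# Blocks arrived twice as fast as intended, so difficulty doubles:
print(retarget(1000.0, actual_seconds=7 * 24 * 3600))  # 2000.0
```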
Our problem is that if Q-computing solves these puzzles much faster than conventional computing, then the puzzle will quickly adjust itself so that only Q-computers can solve it! That’s not what we want at all.
It might be possible to change the proof-of-work puzzles, finding some task that is really hard for Q-computers but still feasible for digital computers. But if that’s not possible, we’ll have to accept that Q-computers kill off the trustworthiness of those conventional blockchain systems.
Even if that happens, Q-computing won’t necessarily kill off blockchain systems as a paradigm. We can probably still have trustworthy blockchain systems: we’ll just have to build them around Q-computing. As I mentioned earlier, that would depend on having some kind of Q-computations with the “hard to solve, easy to check” characteristic. (If we don’t have the right kind of Q-computations, there’s no straightforward adaptation of blockchain.)
If we have those kinds of computations, we could use them to build signatures and puzzles. We could then reuse the structure of today’s blockchain systems. The proof-of-work puzzles being solved will be Q-computing puzzles, and the signatures will be Q-computing signatures.
In such a Q-blockchain system, we will still have a sequence of data items protected by overlapping signatures, in a way that makes it easy to extend the sequence but very hard to modify it. We will still have the right to extend the sequence granted unpredictably as the result of solving a hard puzzle. And we will still have a community actively solving puzzles, competing to extend the sequence.
The details of each of those activities will be completely different in the presence of Q-computing. But intriguingly, it’s still sensible to use a community to defeat an attacker. In addition, it’s still sensible to have a constantly changing starting point as a way of defeating attacks. So this speculation about the effect of quantum computing lets us see that certain parts of the blockchain paradigm are potentially reusable even if we imagine a radical shift in the computing model.