New Codes Could Make Quantum Computing 10 Times More Efficient | Quanta Magazine

In the world of quantum error correction, an underdog is coming for the king.

Last week, new simulations from two groups reported that a rising class of quantum error-correcting codes is more efficient by an order of magnitude than the current gold standard, known as the surface code. The codes all work by transforming a horde of error-prone qubits into a much smaller band of “protected” qubits that rarely make mistakes. But in the two simulations, low-density parity check — or LDPC — codes could make protected qubits out of 10 to 15 times fewer raw qubits than the surface code. Neither group has implemented these simulated leaps in actual hardware, but the experimental blueprints suggest that these codes, or codes like them, could hasten the arrival of more capable quantum devices.

“It really looks like it’s coming to fruition,” said Daniel Gottesman of the University of Maryland, who studies LDPC codes but was not involved in the recent studies. “These [codes] could be practical things that can greatly improve our ability to make quantum computers.”

Classical computers run on bits that rarely misfire. But the particle-like objects — qubits — that power quantum computers lose their quantum mojo when just about anything jostles them out of their delicate state. To coax future qubits into usefulness, researchers plan to use quantum error correction, the practice of using extra qubits to redundantly encode information. It’s similar in spirit to protecting a message from static by speaking each word twice, spreading out the information among more characters.
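
To make that analogy concrete, here is a minimal classical sketch, not the quantum scheme itself: each bit is copied a few times, sent through a noisy channel, and recovered by majority vote. The function names and the 10% error rate are illustrative only.

```python
import random

def encode(bit, copies=3):
    """Redundantly encode one bit by repeating it (the classical analogue
    of spreading information across extra qubits)."""
    return [bit] * copies

def noisy_channel(bits, flip_prob=0.1):
    """Flip each copy independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Recover the original bit by majority vote."""
    return int(sum(bits) > len(bits) / 2)

message = 1
received = noisy_channel(encode(message))
print(decode(received))  # almost always 1, even though each copy fails 10% of the time
```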

The Canonical King

In 1998, Alexei Kitaev of the California Institute of Technology and Sergey Bravyi, then of the Landau Institute for Theoretical Physics in Russia, introduced the quantum error-correcting surface code. It organizes qubits into a square grid and executes something like a game of Minesweeper: Each qubit connects to four neighbors, so checking designated helper qubits allows you to discreetly snoop on four data-carrying qubits. Depending on whether the check returns a 0 or a 1, you can infer whether some of the neighbors have erred. By checking around the board, you can deduce where the errors are and fix them.
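
As a rough illustration of what one of those checks reports, here is a simplified classical stand-in. It assumes the "correct" state is all zeros and simply computes the parity (XOR) of the four data bits a helper qubit watches; the real check is a quantum measurement that never reads the data qubits directly.

```python
def parity_check(data_bits, watched):
    """Classical stand-in for one helper-qubit check: the parity (XOR) of the
    four data bits it watches. 0 means consistent; 1 means an odd number of
    those bits have flipped."""
    parity = 0
    for i in watched:
        parity ^= data_bits[i]
    return parity

data = [0, 0, 0, 0, 0]     # five data bits, all in the "correct" state
data[2] ^= 1               # a single bit-flip error
print(parity_check(data, watched=[0, 1, 2, 3]))  # -> 1, the check trips
```

Combining the outcomes of many overlapping checks, like clues in Minesweeper, is what lets a decoder work out which qubit actually flipped.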

Through these checks — and more subtle tweaks of the iffy qubits — you can also hide a reliable qubit throughout the square block’s data-carrying qubits, not exactly here or there but sort of everywhere. As long as the iffy qubits keep the Minesweeper operations humming along smoothly, the hidden qubit stays safe and can be manipulated to perform operations. In this way, the surface code elegantly fuses many shoddy qubits into a single qubit that rarely errs.

“The slightly annoying thing for me is that the surface code is the simplest thing you can think of,” said Nikolas Breuckmann, a physicist turned mathematician at the University of Bristol who has spent years trying to improve on the scheme. “And it performs remarkably well.”

The code became the gold standard for error correction; it was highly tolerant of misbehaving qubits, and the grid was easy to visualize. As a result, the surface code influenced the design of quantum processors and quantum road maps.

“It’s been the thing to do,” said Barbara Terhal, a quantum information theorist at the QuTech research institute in the Netherlands. “This is the chip you have to make.”

The downside of the surface code (which itself has yet to be fully demonstrated in practice) is an insatiable appetite for qubits. Protecting the reliable qubit more strongly requires bigger blocks of shoddy qubits, and making multiple protected qubits requires stitching together multiple blocks. For researchers dreaming of running quantum algorithms on many protected qubits, these are onerous burdens.

In 2013, Gottesman saw a potential way out of this mess.

Researchers including Terhal and Bravyi had found evidence suggesting that, for a flat code that only connected neighbors to neighbors, the surface code did as well as you could hope. But what if you allowed each check to link far-flung qubits together? Quantum information theorists had already begun to explore codes featuring such “nonlocal” connections, which are casually called LDPC codes. (Confusingly, the surface code is technically an LDPC code too, but in practice the term often refers to the more exotic clan members with nonlocal checks.)

Gottesman then showed that certain LDPC codes could be far less ravenous: They could cram multiple protected qubits into a single block, which would help avoid the surface code’s ballooning qubit requirements for larger algorithms.
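
A back-of-the-envelope way to see what "multiple protected qubits per block" means is classical parity-check counting: the number of protected bits is the block size minus the number of independent checks. The toy code below only illustrates that counting; it uses a tiny classical Hamming code, not a quantum LDPC code, so the numbers are purely pedagogical.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code: each row is one
# check, and each check touches only a few of the 7 bits.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

n = H.shape[1]                     # 7 raw bits in the block
checks = np.linalg.matrix_rank(H)  # 3 independent checks (real rank; it matches
                                   # the rank mod 2 for this particular matrix)
print(n - checks)                  # -> 4 protected bits living in one block
```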

But Gottesman’s work was highly idealized and considered essentially infinite swarms of qubits. The practical challenge was seeing whether researchers could scale down LDPC codes to work in real quantum devices, while preserving their oomph.

Demonstrating Virtual Protection

Over the last two years, Breuckmann and other researchers have started scrutinizing the performance of LDPC codes that can run on smaller and smaller systems. The hope was that some might fit into today’s devices, which can furnish perhaps 100 raw qubits.

Last week, a team of researchers at IBM led by Bravyi unveiled a simulation of the smallest and most concrete LDPC blueprint yet, based on an LDPC code from a little-known paper published in 2012. It started with the surface code’s check of four neighboring qubits and added two carefully chosen “nonlocal” qubits.
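
Schematically, each check in such a code touches six qubits rather than four: the usual grid neighbors plus two distant partners. The sketch below shows only the shape of such a check; which qubits each check actually connects to in the IBM code is chosen very differently, and the indices here are placeholders.

```python
def extended_check(bits, neighbors, faraway):
    """Parity over four grid neighbors plus two carefully chosen distant qubits.
    The index choices are placeholders, not the IBM code's real layout."""
    parity = 0
    for i in list(neighbors) + list(faraway):
        parity ^= bits[i]
    return parity

bits = [0] * 288      # one block of raw qubits, as in the simulation described below
bits[200] ^= 1        # an error far outside the check's grid neighborhood
print(extended_check(bits, neighbors=[10, 11, 26, 27], faraway=[150, 200]))  # -> 1
```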

They simulated the various errors that could arise if the code were run on a real circuit, a process that is like sticking a digital fighter jet in a digital wind tunnel and seeing how it flies. And they found that their code could protect its reliable qubits far more efficiently than the surface code. In one test run, the code took 288 raw qubits that failed 0.1% of the time and used them to create 12 protected qubits with a failure rate 10,000 times lower. For the same task, the team estimated, the surface code would have required more than 4,000 input qubits.
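
Taking the reported figures at face value, the per-qubit overhead works out roughly as follows (the surface-code number is a lower bound, since the estimate was "more than 4,000"):

```python
# Raw-qubit overhead per protected qubit, from the figures quoted above.
ldpc_raw, protected = 288, 12
surface_raw = 4000              # lower bound on the surface-code requirement

ldpc_overhead = ldpc_raw / protected         # 24 raw qubits per protected qubit
surface_overhead = surface_raw / protected   # ~333 raw qubits per protected qubit

print(ldpc_overhead, round(surface_overhead, 1), round(surface_overhead / ldpc_overhead, 1))
# -> 24.0 333.3 13.9  (roughly the 10- to 15-fold savings quoted earlier)
```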

“We were very surprised by that,” said Andrew Cross, a researcher on the IBM team.

The simulation teases the possibility of getting tomorrow’s error correction today, because while no one has access to 4,000 qubits, devices with hundreds of qubits are right around the corner.

“You could see quite a substantial amount of fault tolerance with devices which have a number of qubits that we have today,” Gottesman said.

A day after the IBM preprint appeared, a multi-institution collaboration headed by Mikhail Lukin of Harvard University and Liang Jiang of the University of Chicago posted similar results. (The researchers declined to discuss their work, which has been submitted to a peer-reviewed journal.) They had dusted off two other LDPC codes, modified them for simulation, and found that they too required roughly one-tenth as many input qubits as the surface code to make dozens to hundreds of good qubits.

But building an F-35 is harder than simulating an F-35, and building an LDPC-code-ready device will also be extremely challenging. “Two main things could stop these things from actually taking over,” Gottesman said.

First, creating nonlocal connections between qubits is tough, especially for companies like IBM that make qubits out of immobile superconducting circuits. Connecting those circuits with their neighbors is natural, but creating links between distant qubits is not.

Second, LDPC codes excel when their protected qubits are used for memory, as they were in the IBM simulation. But when it comes to using those nebulous, overlapping qubits for calculations, the tangled, nonlocal code structure makes it much harder to select and steer the desired qubits.

“We know it’s possible in principle to do these computations,” said Gottesman, who sketched out a scheme for doing so in his 2013 work. “But we don’t know if it’s possible to do it in a really practical way.”

Lukin and colleagues made modest steps toward addressing both of these weaknesses. For one thing, the team simulated end-to-end computation by melding an LDPC-protected quantum memory with a surface-code-protected quantum processor. In that scheme, the qubit savings largely survived the burden of calculation, but at the cost of longer run times.

Furthermore, Lukin’s team tailored their simulations to a type of free-roaming qubit that is a natural fit for arranging long-range connections. Unlike the stationary superconducting circuits, their qubits are atoms held by laser beams. By moving the lasers, they can bring distant qubits into contact. “This is awesome for LDPC codes,” Breuckmann said.

When — or even if — LDPC codes will become practical remains uncertain. Demonstrations of tens of reliable memory qubits are likely at least a few years away in even the rosiest of forecasts, and calculations remain further off. But the recent simulations make the surface code seem increasingly like a steppingstone on the path to quantum computation, rather than the destination.

“There’s a reason the surface code has been around for 20 years,” Breuckmann said. “It’s hard to beat, but now we have evidence that we can actually beat it.”
