Hey there, folks! We’ve got some exciting stuff to talk about today. You see, building a fully error-corrected quantum computer is a massive undertaking, but it’s also a game-changer. A full-scale quantum computer has the potential to solve problems that are simply impossible for classical computers. Now, we’re still a few years away from achieving that dream, but in the meantime, we’re using our current noisy quantum processors for some pretty cool experiments.
Here’s the deal. Our noisy quantum processors are a lot more limited than error-corrected ones will be. We can only run a few thousand quantum operations, or gates, before noise starts corrupting the quantum state. But that hasn’t stopped us from hitting some major milestones. In 2019, we ran an experiment called random circuit sampling on our quantum processor and showed that it outperformed state-of-the-art classical supercomputers at that computational task.
But wait, there’s more! We’ve also used our processors to observe some crazy physical phenomena like time crystals and Majorana edge modes. And we’ve made all sorts of experimental discoveries, like robust bound states of interacting photons and the noise-resilience of Majorana edge modes of Floquet evolutions. It’s mind-blowing stuff, really.
Now, even though our quantum processors operate in this noisy regime, we believe there are still computational applications waiting to be discovered there. By “computational applications” we mean quantum experiments whose results can be obtained far faster than any classical supercomputer could calculate them. No one has pulled off such a beyond-classical application yet, but we’re determined to make it happen.
But here’s the thing, folks. How do we even compare a quantum experiment on our processors to the computational cost of a classical application? It’s not as straightforward as comparing error-corrected quantum algorithms to classical algorithms. That’s where our framework comes in. In our paper “Effective quantum volume, fidelity and computational cost of noisy quantum processing experiments,” we introduce the concept of “effective quantum volume” to measure the computational cost of a quantum experiment. This volume represents the number of quantum operations or gates that contribute to a measurement outcome. It’s a way for us to quantify the computational resources involved.
We put this framework to the test with three recent experiments: random circuit sampling, measuring “out of time order correlators” (OTOCs), and a Floquet evolution related to the Ising model. We’re particularly stoked about OTOCs because they allow us to measure the effective quantum volume of a circuit, and that’s no easy task for classical computers. OTOCs are also important in fields like nuclear magnetic resonance and electron spin resonance spectroscopy. So, we think OTOC experiments could be the ticket to achieving that elusive computational application of quantum processors.
Now, let’s talk about computational cost. When we run a quantum circuit on a noisy quantum processor, we face a trade-off. On one hand, we want to do something that’s hard to achieve classically. That difficulty depends on the circuit’s effective quantum volume: the larger the volume, the higher the classical simulation cost, and the more our quantum processor outshines classical computers. But here’s the catch: each quantum gate introduces an error into the calculation. The more gates, the more errors, and the lower the fidelity of the quantum circuit; roughly speaking, fidelity decays exponentially with the number of noisy gates. So we might prefer simpler circuits with a smaller effective volume, but those are easily simulated by classical computers. It’s a delicate balance between maximizing computational resources and minimizing errors.
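To make that trade-off concrete, here’s a toy model (our own illustration, not a calculation from the paper): if each gate independently succeeds with probability 1 − ε, the circuit’s fidelity falls off exponentially with its effective volume. The 0.5% per-gate error rate below is a made-up round number, not a measured device spec.

```python
# Toy model of the fidelity vs. effective-volume trade-off.
# Assumption (illustrative, not from the paper): each gate fails
# independently with probability EPS, so overall circuit fidelity
# decays as (1 - EPS) ** volume.

EPS = 0.005  # hypothetical per-gate error rate (0.5%)

def fidelity(volume: int, eps: float = EPS) -> float:
    """Estimated fidelity of a circuit whose effective volume is `volume` gates."""
    return (1.0 - eps) ** volume

for v in (100, 500, 1000, 3000):
    print(f"volume={v:5d}  fidelity~{fidelity(v):.6f}")
```

With these numbers, a few hundred gates still leave meaningful fidelity, while a few thousand gates drive it toward zero, matching the “few thousand operations” ceiling mentioned earlier.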
This balance is what we call the “computational resource,” and it’s the key to unlocking the full potential of quantum processors. Take random circuit sampling, the “hello world” program for quantum processors and the first demonstration of a quantum processor outperforming a classical computer. Because an error in any single gate can corrupt the result, it’s a challenging experiment to run with high fidelity. And even though it demonstrates immense computational power, it isn’t particularly useful on its own. To get both usefulness and computational power, we need that balance of computational resource.
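For a feel of what random circuit sampling involves, here’s a minimal NumPy sketch of the *ideal*, noiseless, classically simulable version on a handful of qubits (our own toy, not the hardware experiment): a brickwork circuit of Haar-random two-qubit gates, scored with the linear cross-entropy benchmark (XEB) of its output distribution. The qubit count and depth are illustrative; the whole point of the real experiment is that this simulation becomes intractable at scale.

```python
# Minimal random-circuit-sampling sketch: state-vector simulation of a
# brickwork circuit of Haar-random two-qubit gates, plus the linear
# cross-entropy benchmark (XEB) of the ideal output distribution.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim: int) -> np.ndarray:
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases for Haar measure

def apply_2q(state: np.ndarray, gate: np.ndarray, q0: int, n: int) -> np.ndarray:
    """Apply a two-qubit gate on adjacent qubits (q0, q0 + 1) of an n-qubit state."""
    psi = state.reshape((2,) * n)
    psi = np.moveaxis(psi, (q0, q0 + 1), (0, 1)).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape((2, 2) + (2,) * (n - 2)), (0, 1), (q0, q0 + 1))
    return psi.reshape(-1)

n, depth = 6, 10  # illustrative size; real experiments use dozens of qubits
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q0 in range(layer % 2, n - 1, 2):  # brickwork pattern of gates
        state = apply_2q(state, haar_unitary(4), q0, n)

p = np.abs(state) ** 2                 # ideal output distribution
xeb = 2 ** n * np.sum(p ** 2) - 1      # ~1 for a well-scrambled circuit, 0 for uniform
print(f"linear XEB of ideal distribution: {xeb:.3f}")
```

In the actual experiment, the XEB score is estimated from hardware samples against classically computed ideal probabilities, and the measured score tracks the circuit fidelity.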
Then we’ve got experiments like OTOCs and Floquet evolution, where we measure specific local physical observables. The effective quantum volume of a local observable can be smaller than that of the full circuit used in the experiment, because not every operation in the circuit influences the observable. Think of a light cone from relativity: only the operations inside the observable’s cone of influence, its “butterfly cone,” whose width grows at the so-called butterfly speed, can affect the measurement. The larger the butterfly speed, the wider the cone, and the harder the observable is to simulate classically. The effective quantum volume is essentially the volume of this butterfly cone: it counts the causally connected quantum operations.
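Here’s a small sketch of that light-cone picture (our own toy, assuming a 1D brickwork circuit where the cone widens by one qubit per layer on each side): sweep backward in time from a single final-time observable and count only the gates that can influence it. That count is the observable’s effective quantum volume, and it can be much smaller than the circuit’s total gate count.

```python
# Toy light-cone bookkeeping for a 1D brickwork circuit: walking backward
# in time from one final-time observable, a gate matters only if it
# touches a qubit already inside the cone. The number of such gates is
# the observable's effective quantum volume; everything outside the cone
# can be dropped from a classical simulation.

def effective_volume(n_qubits: int, depth: int, observable_qubit: int):
    """Count gates causally connected to a single final-time observable."""
    cone = {observable_qubit}  # qubits that can influence the observable
    volume = 0
    for layer in reversed(range(depth)):
        for q0 in range(layer % 2, n_qubits - 1, 2):  # brickwork pairs
            if q0 in cone or q0 + 1 in cone:
                cone |= {q0, q0 + 1}  # the gate pulls both qubits into the cone
                volume += 1
    return volume, len(cone)

n_qubits, depth = 20, 6  # illustrative sizes
vol, width = effective_volume(n_qubits, depth, observable_qubit=10)
total_gates = sum(len(range(l % 2, n_qubits - 1, 2)) for l in range(depth))
print(f"light-cone volume: {vol} of {total_gates} gates, cone width: {width} qubits")
```

Even in this tiny example the cone contains well under half of the circuit’s gates, which is why a local observable can be much cheaper to simulate than full-circuit sampling.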
All in all, folks, we’re making some serious progress in the development and understanding of quantum processors. We’re pushing the boundaries of what’s possible, even in this noisy regime. And we’re on a mission to find those computational applications that will redefine what we can accomplish with quantum computers. It’s an exciting time to be in the field, and we can’t wait to see what the future holds.
Catch you on the next episode, peace out!