Random circuits are really only good for proving that you are in fact operating a quantum computer: you can test the results, and random quantum circuits cannot be efficiently simulated by classical computers. So quantum volume is not a totally meaningless metric, but it doesn't necessarily indicate any capability for the real problems people would like to use quantum computers on: factoring, solving discrete logs, general search problems, or simulating quantum physics systems.
In any field you need representative benchmarks. The fastest ¼-mile time doesn't make the best car, and the highest GHz doesn't make the best classical computer. The problem with quantum computers is that they can't yet do anything well, so there isn't really anything to benchmark.
I've asked the contrary question -- "why is quantum volume a good measure?" -- and haven't been satisfied with the answers. But don't take my word for it; I'm not a QC expert (though some of my friends are).
> Quantum Volume would also benefit the CEO or investor who lacks the in-depth technical knowledge necessary to make confident investment decisions in the technology. Additionally, reported variations in Quantum Volume from company to company would likely stimulate more articles by the media, which would serve to educate the general public further.
Look at your computer. It's got a processor with a certain number of cores, a maximum clock frequency, a hierarchical cache system with varying sizes; it's got a certain amount of RAM with its own operating frequency; a graphics card with some number of cores, its own RAM and clock frequency. Imagine that I hand you a single number that's supposed to encapsulate all that information -- let's say something of the form frequency * cores * ram. And... maybe that kinda makes sense? But it doesn't actually correspond to any kind of real-world performance.
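To make that concrete, here's a toy sketch (the `frequency * cores * ram` figure of merit is the made-up one from above, not any real benchmark) showing how such a composite number can rank two very different machines identically:

```python
# A made-up composite score of the form frequency * cores * ram.
# The point: one number can collapse very different machines
# into the same "performance" figure.

def composite_score(freq_ghz: float, cores: int, ram_gb: int) -> float:
    return freq_ghz * cores * ram_gb

laptop = composite_score(4.0, 8, 16)   # few fast cores
server = composite_score(2.0, 16, 16)  # many slow cores
print(laptop, server)  # both 512.0 -- identical score, very different machines
```

The two machines would behave completely differently on real workloads (single-threaded vs. embarrassingly parallel), yet the composite can't distinguish them.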
The issue isn't so much that parts of the system are lost in the measure -- rather, they've thrown in the kitchen sink and it's hard to say what it all really means. Except for convincing management: "big numbers go zoom!"
Can't really argue with that sort of hedge. I'd even go so far as to say it does correlate with performance on a completely artificial benchmark. But that's a far cry from "it faithfully describes computational power" that would justify the headline in question.
My understanding is that quantum volume was an attempt to unify the previously used metric of qubit count with some notion of being able to do useful work -- the latter being why these metrics are elusive. Adding more transistors to a processor is only useful if they can all actually be used. Same with qubits: in particular, if you can't error-correct your qubits, you can't be confident in the result. So improvement in quantum volume seems to give a better yardstick for the roadmap toward general quantum computing.
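For reference, the way IBM-style quantum volume combines width and fidelity is roughly: QV = 2^m, where m is the largest width for which random square circuits (m qubits, depth m) still pass a "heavy output" statistical test. A minimal sketch of that structure, with `passes_heavy_output_test` as a hypothetical stand-in for the actual benchmark run:

```python
# Sketch of the quantum-volume recipe: QV = 2**m for the largest m
# such that random m-qubit, depth-m circuits pass the heavy-output test.
# passes_heavy_output_test(m) is a stand-in for running the real benchmark.

def quantum_volume(passes_heavy_output_test, max_width: int) -> int:
    best = 0
    for m in range(1, max_width + 1):
        if passes_heavy_output_test(m):
            best = m
    return 2 ** best

# Toy example: pretend circuits up to width 5 pass the test.
print(quantum_volume(lambda m: m <= 5, 8))  # → 32
```

This is why noisy qubits cap the score: adding qubits without improving fidelity doesn't raise the largest *square* circuit you can run reliably, so QV stays put.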
I'd be thankful to better understand what parts of the complicated system are lost in the measure.