Quantum Computing Progress is Way Faster Than Classical Computers

IBM says that, using 250 petabytes of storage and the world’s best supercomputer, it could match Google’s 53-qubit quantum computer on a particular problem.

Scott Aaronson notes that this clearly shows that quantum supremacy is emerging.

The IBM paper says that on the 200-petaflop Summit supercomputer at Oak Ridge National Lab, with its 250 petabytes of hard disk space, one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip. Summit could then simulate the Google quantum chip’s computation in about 2.5 days, more or less by brute-force updating of the entire state vector, rather than in the 10,000 years that Google had estimated on the basis of Aaronson and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).
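
To make concrete what “updating the entire state vector by brute force” means, here is a minimal Python sketch of a state-vector simulator (a toy illustration only, not the IBM paper’s actual implementation). Every gate application touches all 2^n amplitudes; at 53 qubits that is roughly 9×10^15 of them.

    # Toy brute-force state-vector simulation: each gate updates all 2^n amplitudes.
    import numpy as np

    def apply_single_qubit_gate(state, gate, qubit, n_qubits):
        """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
        psi = state.reshape([2] * n_qubits)            # expose each qubit as its own axis
        psi = np.tensordot(gate, psi, axes=([1], [qubit]))
        psi = np.moveaxis(psi, 0, qubit)               # put the updated qubit axis back in place
        return psi.reshape(-1)

    n = 5                                              # toy size; Sycamore would need 2^53 amplitudes
    state = np.zeros(2**n, dtype=np.complex64)
    state[0] = 1.0                                     # start in |00...0>
    H = np.array([[1, 1], [1, -1]], dtype=np.complex64) / np.sqrt(2)
    for q in range(n):
        state = apply_single_qubit_gate(state, H, q, n)
    print(np.allclose(np.abs(state) ** 2, 1 / 2**n))   # True: uniform superposition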

If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you would need 33 Summits. At 70 qubits, you would need enough Summits to fill a city, and so on.
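
As a rough back-of-the-envelope check of that scaling (an estimate only: the exact figures depend on how many bytes the simulation spends per amplitude), assume single-precision complex amplitudes at 8 bytes each:

    # Memory needed to hold an n-qubit state vector, assuming 8 bytes per amplitude.
    SUMMIT_PB = 250                    # Summit's quoted storage, in petabytes
    BYTES_PER_AMPLITUDE = 8            # assumption; double precision would double these numbers

    for n in (53, 55, 60, 70):
        petabytes = 2**n * BYTES_PER_AMPLITUDE / 1e15
        print(f"{n} qubits: {petabytes:,.0f} PB (~{petabytes / SUMMIT_PB:,.1f} Summits)")

    # 53 qubits: 72 PB        (~0.3 Summits)      -> fits on one Summit
    # 55 qubits: 288 PB       (~1.2 Summits)      -> already exceeds 250 PB
    # 60 qubits: 9,223 PB     (~36.9 Summits)     -> dozens of Summits
    # 70 qubits: 9,444,733 PB (~37,778.9 Summits)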

The three-minute quantum solution versus 2.5 days on Summit is still a quantum speedup by a factor of about 1,200. If we instead compare the amount of computation performed, roughly 5 billion quantum gate operations against about 200 million trillion floating-point operations, the quantum speedup is a factor of roughly 40 billion.
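
Both speedup figures follow from simple arithmetic on the numbers quoted above:

    # Quick check of the two speedup factors.
    quantum_seconds = 3 * 60                     # ~3 minutes on the quantum chip
    classical_seconds = 2.5 * 24 * 3600          # ~2.5 days on Summit
    print(classical_seconds / quantum_seconds)   # 1200.0 -> wall-clock speedup

    quantum_gates = 5e9                          # ~5 billion quantum gate operations
    classical_flops = 2e20                       # ~200 million trillion floating-point operations
    print(classical_flops / quantum_gates)       # 4e10 -> ~40 billion, operation-count speedup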

SOURCES: Scott Aaronson, Google, IBM
Written by Brian Wang, Nextbigfuture.com

28 thoughts on “Quantum Computing Progress is Way Faster Than Classical Computers”

  1. Well, you are self-aware if you include yourself in your picture of reality; you are conscious if you include your mental processes in it.

  2. Yes, the types of problems a general computer can’t solve in a reasonable amount of time. That’s why they’re developing Q computers; they’re not doing it so you can load Facebook faster.

  3. I also don’t believe they know how to do non-local interactions. And “quantum” supremacy is impossible without those, I think.

    There must be much better approximate algorithms for “classical” computers to solve those “quantum” problems on which “quantum” computers have advantage so far.

  4. I don’t know if current implementations are general, but AFAIK there’s no reason in principle why it can’t be made general. It’s just a question of which gates you implement, and how you combine them. For example, a Fredkin gate is a quantum logic gate that can perform all of the classical logic operations: https://en.wikipedia.org/wiki/Fredkin_gate

    The difficulty is that the architecture is very different, so programming for QC is also very different. (edit: I guess it’s also a question of choosing the right tool for the job. You wouldn’t usually do integer math on a floating point processor, even though you could. QCs excel at a different class of problems from classical computers. That doesn’t mean they can’t do the classical tasks.)

  5. If a sufficiently advanced quantum computer can simulate a classical computer (which seems likely, eventually), then you could play video games on it. But I think we’re not there yet.

    In terms of raw processing power, QC should be able to run games much earlier than it could simulate a classical computer. But the programming is very different. Porting a game to a QC would probably be trickier than porting a Windows game to Linux. But as QC progresses, there will be new libraries, APIs and software tools that will gradually make it easier.

  6. Sure it’s faster… but it only solves one type of problem. It’s not a general computer that you can do anything with.

  7. Actually, it’s the opposite. It can instead break the encryption your bank uses, so it can have you pay for its owners’ favourite video game. There are asymmetric key algorithms which are quantum-proof, but they have key sizes in the tens-of-kilobytes range instead of elliptic curve’s 32 bytes. This might sound the death knell of the blockchain, actually.

  8. Seems like if you could get it small and reliable enough, it’d make a nice complementary ‘core’ in a classical processor. Sort of how we have hardware acceleration for certain problems like HEVC (H.265) or AES.

  9. I would say it’s not ‘outside of the universe’. As far as I understand, these quantum things exist without space and time. So, it cannot be outside of another thing…
    We get our data from a pool of ‘everything’.

  10. To the extent that consciousness is “awareness of internal or external existence” (per Wikipedia), we can at least fake it. Autonomous systems have to be aware of their outside, and can be programmed to also be aware of their internal state. They can also be programmed to respond in ways that emulate self-awareness, promote self-preservation, etc.

    A truly intelligent system would need to be able to model and predict the effect of its own actions on its surroundings, and vice versa. So it would need some representation of “self”. By some definitions, that would already be at least a form of self awareness.

    That said, self-awareness and consciousness may not be quite the same thing, though they are probably related. The first problem is that consciousness is difficult to define, and may be open to interpretation. But we do know what it looks like from the outside. So we can at least emulate its external appearance (another term for faking it).

    Then, following Turing’s ideas, if given a black box that behaves convincingly like a conscious being, one can reasonably assume it is conscious.

  11. If there’s a noncomputable process in the universe, couldn’t that process be used to make a noncomputable machine, which simulates another universe that also contains noncomputable processes?

  12. So far, Q-computing seems like a very specialized hammer looking for a unique and rare nail. At the end of the day you have to ask, can you play your favorite video game on it?

  13. As far as I know, simulating a universal quantum system is exponential on classical computers, but isn’t on quantum computers. So in fact, yes, a BQP quantum computer can efficiently simulate a universal quantum system of similar complexity [1], but the simulator ought to be more complex than the simulation.

    But I’m not assuming full quantum simulations, just simplified models with shortcuts made to look as if the simulation was real except under close inspection (or maybe even under it, with some tricks).

    References:
    [1] https://science.sciencemag.org/content/273/5278/1073

  14. I had the honor, as a mere printer, of printing a rough draft, simply typed, of a book by Wheeler explaining all of this to the public, in the late ’70s. But it was full of formulae!

  15. Quantum computers are only good at speeding up BQP problems, which is a smallish subset.

    “future humanity could run entire simulations of its own past, in nearly infinite variations”

    So, no, that isn’t feasible. It isn’t a BQP problem to simulate something like the planet. So your “planet simulator” quantum computer would be bigger than the planet it is simulating.

  16. You strike close to my favorite potentially non-computable problems: the problems of Hard-AI and qualia.

    We don’t know what makes conscious beings tick, so we don’t know if we can make conscious computers; probably we can’t.

    This, however, can be refuted by any counter-example. From the look of where things are going, humanity will have plenty of computing power for testing this one out.

    In any case, something among the weird apparently observer-dependent and retro-causal phenomena in QM could fit the bill as deal breakers for simulation theory too.

  17. I’m utterly convinced of non-computability. Wheeler relies on the uncomputed entangled U particle to *happen* as we select the path from this end, by perception. If I understand!

  18. Personally, I’m not convinced every process we see in the universe is computable. If some of them aren’t, we can’t be living in a simulation.

    This is just a belief, not a certitude. But I’m certainly very interested in any physics results proving some processes in reality are beyond computation. At least we know there are some physical processes we can only approximate (the three-body problem, for example), and some others might be undecidable but still exist in nature.

    So, my current standing is: all this is interesting for humanity’s future developments, but not something to guide our metaphysics on. Not yet.

  19. It increasingly seems we are heading toward a post-computing-scarcity world, where computing power will exceed our wildest dreams and achieve nearly unimaginable levels, going beyond the limits of speed (Bremermann’s limit), communication delays (Margolus-Levitin theorem) and energy (Landauer’s limit) in terms of operations per second effectively achievable.

    The only remaining exception seems to be the limits of information storage density (Bekenstein bound) given all the information processed needs to have some physical equivalent (e.g. atoms). Unless quantum memory finally arrives and then memory could grow beyond that as well.

    This miraculous situation will come from leveraging the capacities of quantum computers, which ‘steal’ their computing power from somewhere else, possibly outside of the universe altogether (the multiverse, in the many-worlds interpretation).

    This is kind of a surprising development, and somewhat eerie, really. What are the implications of such a situation in the long term?

    If we can simulate our world with approximations of physics instead of real, expensive simulated particles, future humanity could run entire simulations of its own past, in nearly infinite variations. After all, computation capability will be nearly infinite, while storage could still have some price (or be nearly free and equally unbounded).

    This, in fact, makes the probability that we are living in a simulation much higher.
