Michele Mosca (Institute for Quantum Computing and Department of Combinatorics & Optimization, University of Waterloo; St. Jerome’s University; and Perimeter Institute for Theoretical Physics) reviews Quantum Algorithms. August 2008 [71 pages]

Some of the categories of algorithms:

Factoring, Discrete Logarithms and the Abelian Hidden Subgroup Problem: e.g., Shor’s algorithm.

Quantum walks, sometimes called quantum random walks, are quantum analogues of (classical) random walks, which have proved to be a very powerful algorithmic tool in classical computer science. The quantum walk paradigm is still being developed. For example, the relationship between the continuous time and discrete time models of quantum walks is still not fully understood. In any case, the best known algorithms for several problems are some type of quantum walk.
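To illustrate the discrete-time model, here is a minimal NumPy sketch (the function name and parameters are illustrative, not from the review) of a Hadamard-coined quantum walk on a line. Its position distribution spreads linearly in the number of steps, in contrast to the square-root spreading of a classical random walk:

```python
import numpy as np

def hadamard_walk(steps, size=201):
    """Discrete-time coined quantum walk on a line (embedded in a cycle)."""
    # amp[x, c]: amplitude at position x with coin state c
    # (coin 0 moves left, coin 1 moves right).
    amp = np.zeros((size, 2), dtype=complex)
    start = size // 2
    # Symmetric initial coin state (|0> + i|1>)/sqrt(2) gives a symmetric walk.
    amp[start, 0] = 1 / np.sqrt(2)
    amp[start, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin operator
    for _ in range(steps):
        amp = amp @ H.T                              # toss the coin
        amp = np.stack([np.roll(amp[:, 0], -1),      # coin 0: shift left
                        np.roll(amp[:, 1], 1)],      # coin 1: shift right
                       axis=1)
    return (np.abs(amp) ** 2).sum(axis=1)            # position distribution

prob = hadamard_walk(80)
pos = np.arange(201) - 100
sigma = np.sqrt((prob * pos ** 2).sum() - (prob * pos).sum() ** 2)
print(sigma)   # grows ~linearly in steps; a classical walk gives sqrt(80) ~ 8.9
```

The ballistic (linear-in-time) spreading shown here is the basic resource behind quantum-walk speed-ups.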

Amplitude amplification: an example is Grover’s quantum search algorithm, which gives a quadratic speed-up over the best possible classical algorithm.
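A sketch of amplitude amplification in its simplest form, Grover search for one marked item among N = 2^n, simulated classically with a dense state vector (names and parameters are illustrative):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's search for a single marked item among N = 2**n items."""
    N = 2 ** n_qubits
    # Start in the uniform superposition over all N basis states.
    state = np.full(N, 1 / np.sqrt(N))
    # Optimal number of iterations is about (pi/4) * sqrt(N).
    iters = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        state[marked] *= -1                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state    # diffusion: reflect about the mean
    return state

state = grover_search(10, marked=123)       # N = 1024 items, ~25 iterations
probs = state ** 2
print(probs[123])   # close to 1 (~0.999)
```

About sqrt(N) oracle queries suffice, versus the ~N/2 expected classically; this quadratic gap is provably optimal for black-box search.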

Most of the known quantum algorithms can be phrased as black-box algorithms solving black-box problems. A black box, or oracle, is a subroutine or subcircuit that implements some operation or function. It does so in a way that provides no information other than simply taking an input and giving the prescribed output. One cannot, for example, look at the inner workings of the circuit or device implementing the black box to extract additional information. For example, Shor’s factoring algorithm can be viewed as an algorithm that finds the order of an element in a black-box group.
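To make the black-box idea concrete, here is a small classical sketch (class and function names are illustrative, not from the review): the group operation is wrapped as an oracle that answers queries and nothing else, and order finding proceeds by repeated multiplication, costing O(r) queries; Shor’s algorithm solves the same black-box problem in polynomial time in the input size:

```python
class Oracle:
    """A black box: callers get input -> output access and a query count,
    but no view of the hidden implementation."""
    def __init__(self, f):
        self._f = f           # hidden implementation
        self.queries = 0
    def __call__(self, *args):
        self.queries += 1
        return self._f(*args)

def order(mul, g, identity=1):
    """Find the least r > 0 with g^r = identity, using only oracle queries."""
    r, acc = 1, g
    while acc != identity:
        acc = mul(acc, g)
        r += 1
    return r

# Multiplication mod 15 as a black-box group operation (illustrative numbers).
mul = Oracle(lambda a, b: (a * b) % 15)
print(order(mul, 2))   # 4, since 2^4 = 16 = 1 (mod 15)
```

The classical loop above needs a number of queries proportional to the order itself, which can be exponential in the number of digits; that gap is what Shor’s algorithm closes.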

Some directions in which future progress might be made are listed below.

• The complexity of the non-Abelian hidden subgroup problem will hopefully be better understood. This includes addressing the question: does there exist a quantum polynomial-time algorithm for solving the graph isomorphism problem? Of course, a proof that no such quantum algorithm exists would imply that P ≠ PSPACE. So, more likely, we might develop a strong confidence that no such algorithm exists, in which case this could form the basis of quantum computationally secure cryptography.

• There are many examples where amplitude amplification was used to speed up a classical algorithm in a non-trivial way. That is, in a way that was more than just treating a classical algorithm as a guessing algorithm A and applying amplitude amplification to it. There are likely countless other algorithms that can be improved in non-trivial ways using amplitude amplification.

• The quantum walk paradigm for quantum algorithms emerged roughly 10 years ago, has recently been applied to find the optimal black-box algorithm for several problems, and has become a standard approach for developing quantum algorithms. Some of these black-box problems are fairly natural, and the black boxes can be substituted with circuits for actual functions of interest. For example, collision finding can be applied to find collisions in actual hash functions used in cryptography. We will hopefully see more instances where black-box algorithms can be applied to solve a problem without a black box, or where there is no black box in the first place.
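Collision finding is a good example of a black-box problem with a concrete instantiation. The sketch below (illustrative; it truncates SHA-256 so that collisions are reachable on a laptop) runs the classical birthday search, which needs about sqrt(N) hash evaluations for an N-value range; the quantum collision-finding algorithm brings the black-box query count down to about N^(1/3):

```python
import hashlib

def find_collision(nbits=20):
    """Classical birthday search for a collision of SHA-256 truncated to
    nbits; a collision is expected after roughly 2**(nbits/2) queries."""
    def h(x: int) -> int:
        d = hashlib.sha256(str(x).encode()).digest()
        return int.from_bytes(d, "big") >> (256 - nbits)
    seen = {}          # truncated hash -> first input that produced it
    x = 0
    while True:
        y = h(x)
        if y in seen:                       # two inputs, same truncated hash
            return seen[y], x
        seen[y] = x
        x += 1

a, b = find_collision(16)
print(a, b)   # two distinct inputs colliding on the 16-bit truncation
```

The truncation parameter is only there to make the demo finish quickly; against the full 256-bit hash, both the classical sqrt(N) and the quantum N^(1/3) query counts are astronomically large.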

• In addition to the development of new quantum walk algorithms, we will hopefully have a more elegant and unified general theory of quantum walks that unites continuous and discrete walks, coined and non-coined walks, and quantum and classical walks.

• The adiabatic algorithm paradigm has not reached the level of success of quantum walks, partly because it is hard to analyze the worst-case complexity of the algorithms. To date there is no adiabatic algorithm with a proof that it works more than quadratically faster than the best known classical algorithm. Can we do better with an adiabatic algorithm? If and when we have large-scale quantum computers, we will be able to just test these algorithms to see if indeed they do have the conjectured running times on instances of interesting size.
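The adiabatic idea can at least be explored numerically for tiny systems today. Below is a toy single-qubit sketch (entirely illustrative, not from the review) that interpolates H(s) = (1-s)(-X) + s(-Z) from s = 0 to 1 and checks that a slow sweep keeps the state close to the instantaneous ground state, ending near |0>:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def adiabatic_run(T, steps=2000):
    """Evolve under H(s) = (1-s)(-X) + s(-Z) with s = t/T, using exact
    matrix exponentials on short time slices."""
    dt = T / steps
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ground state of -X
    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1 - s) * (-X) + s * (-Z)
        # Exact exp(-i H dt) for Hermitian H via its eigendecomposition.
        w, V = np.linalg.eigh(H)
        state = V @ (np.exp(-1j * w * dt) * (V.conj().T @ state))
    return abs(state[0]) ** 2   # overlap with |0>, the ground state of -Z

fidelity = adiabatic_run(T=50)
print(fidelity)   # close to 1 for a slow sweep
```

Here the minimum spectral gap along the path is large, so a modest total time T suffices; the hard question in the paradigm is exactly how the required T scales when the gap becomes tiny.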

• The topological algorithms have received limited attention to date. This is partly because the initial work in this field was largely inaccessible to researchers without substantial familiarity with topological quantum field theory and related areas of mathematics. The more recent work summarized in this paper and other recent papers is a sign that this approach could mature into a fruitful paradigm for developing important new quantum algorithms.

• The paradigm of measurement-based computation has to date been focused mostly on its utility as a possible route to implementing a scalable fault-tolerant quantum computer. We might see the development of algorithms directly in this paradigm. The same applies to globally controlled architectures.

• There is also a growing group of researchers looking at the computational complexity of various computational problems in physics, in particular of simulating certain Hamiltonian systems, often coming from condensed matter physics. Much of the work has been complexity-theoretic, such as proving the QMA-hardness of computing ground states of certain Hamiltonians. Other work has focused on understanding which quantum systems can be simulated efficiently on a classical computer. This work should lead to the definition of some simulation problems that are not known to be in BPP, nor believed to be NP-hard or QMA-hard, and thus might be good candidates for a quantum algorithm. There has been a language and culture barrier between physicists and theoretical computer scientists when it comes to discussing such problems. However, it is slowly breaking down, as more physicists become familiar with algorithms and complexity, and more quantum computer scientists become familiar with language and notions from physics. This will hopefully lead to more quantum algorithms for computational problems in physics, and to new algorithmic primitives that can be applied to a wider range of problems.
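As a concrete, entirely illustrative instance of such a simulation problem, the sketch below compares first-order Trotterized time evolution of a tiny transverse-field Ising chain against exact evolution. Classically the state vector grows as 2^n in the number of spins, which is precisely why Hamiltonian simulation is a natural target for quantum algorithms:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def ising_terms(n, J=1.0, h=0.7):
    """ZZ-coupling and transverse-field terms of an n-spin Ising chain."""
    terms = []
    for i in range(n - 1):
        ops = [I2] * n; ops[i] = Z; ops[i + 1] = Z
        terms.append(-J * kron_all(ops))
    for i in range(n):
        ops = [I2] * n; ops[i] = X
        terms.append(-h * kron_all(ops))
    return terms

def expm_h(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

n, t, r = 3, 1.0, 100
terms = ising_terms(n)
exact = expm_h(sum(terms), t)
# First-order Trotter: apply each term's exponential for time t/r, r times.
step = reduce(np.matmul, [expm_h(h_j, t / r) for h_j in terms])
trotter = np.linalg.matrix_power(step, r)
err = np.linalg.norm(trotter - exact, 2)
print(err)   # small; first-order Trotter error shrinks like t**2 / r
```

On a quantum computer each Trotter slice becomes a short circuit acting on n qubits, so the cost scales polynomially in n rather than with the 2^n-dimensional matrices used here.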

FURTHER READING

D-Wave Systems’ published papers and references to quantum computing papers

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.


Thanks for the link. It has a good analysis.

I think the Cell processor, GPGPUs and FPGA acceleration are all very promising.

IBM provides workstation support for the Cell processor or you can cobble together PS3s as others have done.

The specific application will determine which is superior. The article you provided talked about a 50-times speed improvement from the Cell. Some GPGPU and FPGA situations achieve several-hundred-times acceleration.

All three methods will be used for new supercomputers.

I will try to examine each and the issues for the three more closely.

There is a discussion presently going on comparing FPGA, GPU and Cell/B.E. processors for scientific computing. See here: http://www.simbiosys.ca/blog/2008/05/03/the-fast-and-the-furious-compare-cellbe-gpu-and-fpga/

I am interested in your opinions on GPU versus Cell processor.