Google and D-Wave have been working on sparse coding, deep learning, and unsupervised machine learning, with D-Wave's quantum computer helping to get better and faster results in some cases.
Google Research discusses the use of quantum computers for AI and machine learning.
[Google has] already developed some quantum machine learning algorithms. One produces very compact, efficient recognizers -- very useful when you’re short on power, as on a mobile device. Another can handle highly polluted training data, where a high percentage of the examples are mislabeled, as they often are in the real world. And we’ve learned some useful principles: e.g., you get the best results not with pure quantum computing, but by mixing quantum and classical computing.
Can we move these ideas from theory to practice, building real solutions on quantum hardware? Answering this question is what the Quantum Artificial Intelligence Lab is for. We hope it helps researchers construct more efficient and more accurate models for everything from speech recognition, to web search, to protein folding. We [Google] actually think quantum machine learning may provide the most creative problem-solving process under the known laws of physics.
Nextbigfuture covered an earlier article about sparse coding at D-Wave.
Hartmut Neven, Google Director of Engineering, on Quantum Machine Learning
Machine learning is highly difficult. It’s what mathematicians call an “NP-hard” problem. That’s because building a good model is really a creative act. As an analogy, consider what it takes to architect a house. You’re balancing lots of constraints -- budget, usage requirements, space limitations, etc. -- but still trying to create the most beautiful house you can. A creative architect will find a great solution. Mathematically speaking, the architect is solving an optimization problem, and creativity can be thought of as the ability to come up with a good solution given an objective and constraints.
Classical computers aren’t well suited to these types of creative problems. Solving such problems can be imagined as trying to find the lowest point on a surface covered in hills and valleys. Classical computing might use what’s called “gradient descent”: start at a random spot on the surface, look around for a lower spot to walk down to, and repeat until you can’t walk downhill anymore. But all too often that gets you stuck in a “local minimum” -- a valley that isn’t the very lowest point on the surface.
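The gradient-descent trap described above can be sketched in a few lines of Python. This is an illustrative toy, not Google's or D-Wave's code; the two-valley objective function is an arbitrary example chosen to show the failure mode:

```python
# Plain gradient descent on a 1-D "hilly" objective with two valleys:
# f(x) = x^4 - 3x^2 + x has a shallow valley near x ~ 1.13 and a
# deeper (global) valley near x ~ -1.30.

def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=1000):
    # Repeatedly step downhill along the negative gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting at x = 1.0, descent rolls into the nearby shallow valley
# and stays there -- it never finds the deeper valley near x ~ -1.30.
stuck = gradient_descent(1.0)
```

The algorithm only ever moves downhill, so whichever valley the starting point drains into is where it ends up.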
That’s where quantum computing comes in. It lets you cheat a little, giving you some chance to “tunnel” through a ridge to see if there’s a lower valley hidden beyond it. This gives you a much better shot at finding the true lowest point -- the optimal solution.
Quantum Machine Learning Singularity from Google, Kurzweil and D-Wave?
Ray Kurzweil, one of the leading figures in the field of artificial intelligence, joined Google late last year. Kurzweil is perhaps the most prominent proponent of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using deep learning techniques to produce an artificial brain, and the subsequent hiring of Geoffrey Hinton, the godfather of computer neural nets.
Now add the D-Wave quantum computer, which can already accelerate machine learning algorithms useful to Google's AI efforts:
* producing very compact, efficient recognizers
* improved handling of highly polluted [real-world] training data
* the best results not with pure quantum computing, but by mixing quantum and classical computing
* 2000-qubit and 8000-qubit systems expected in roughly two and four years, which D-Wave projects will solve nearly all machine learning optimization problems in about one second, once they are properly formulated for the D-Wave system
In a few years, Kurzweil could combine the classical computing power of Google's data centers with quantum computing power potentially beyond that of all classical computers, to drive his pursuit of greater-than-human artificial intelligence.
Successful, accelerated unsupervised machine learning on polluted data should enable classification and organization of nearly any picture, video, or pattern.
D-Wave is on track to reach eight thousand qubits by about 2017.
A chart appears to show that as qubits are added, the solving time stays at about one second.