Researchers have proposed a model for perceptrons that can be directly implemented on near-term quantum processing devices, and they have experimentally tested it on a 5-qubit IBM quantum computer based on superconducting technology. Their algorithm presents an exponential advantage in encoding resources over classical perceptron models, which they demonstrated explicitly by representing and classifying 4-bit strings using 2 qubits and 16-bit strings using only 4 qubits.
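The encoding advantage comes from storing a string of 2^n binary values in the amplitudes of an n-qubit state, with the perceptron's response given by the overlap between the input state and the weight state. A minimal classical simulation of that idea (an illustrative sketch using ±1-valued strings, not the paper's actual circuit construction) might look like:

```python
import numpy as np

def encode(bits):
    """Amplitude-encode a ±1 string of length 2^n into an n-qubit state vector.
    Classical simulation for illustration only."""
    v = np.array(bits, dtype=float)
    return v / np.linalg.norm(v)

def perceptron_activation(input_bits, weight_bits):
    """Squared overlap |<w|i>|^2 between weight and input states —
    on hardware this is obtained by measuring an ancilla qubit."""
    return abs(np.dot(encode(weight_bits), encode(input_bits))) ** 2

# A 4-bit pattern needs only 2 qubits (4 amplitudes).
pattern = [1, -1, 1, -1]
print(perceptron_activation(pattern, [1, -1, 1, -1]))  # matching weights -> 1.0
print(perceptron_activation(pattern, [1, 1, 1, 1]))    # orthogonal weights -> 0.0
```

Doubling the pattern length (16 bits) only adds two more qubits, since the state vector length grows as 2^n.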
It might then be possible to design a version of the proposed quantum perceptron algorithm that works with approximately encoded quantum states instead of exact ones, which would have the potential to scale exponentially better than any classical algorithm implementing a perceptron model. In this respect, it is also worth pointing out that the procedure is fully general and could be implemented and run on any platform capable of universal quantum computation. While the researchers employed quantum hardware based on superconducting qubits, a very promising alternative is the trapped-ion quantum computer, in which multi-qubit entangling gates might be readily available.
In the present work, they restricted the whole analysis to binary inputs and weight vectors (the so-called "McCulloch-Pitts" neuron model), mainly for clarity and simplicity of implementation. An obvious improvement to the algorithm presented would be to encode continuously valued vectors (equivalent to grayscale images).
Finally, a potentially very exciting continuation of this work would be to connect multiple layers of these quantum perceptrons to build a feedforward deep neural network, which could be run entirely on dedicated quantum hardware. In such a network, each neuron could use two ancilla qubits: one measured to introduce the nonlinearity, as done in this work, and the second used to propagate the information from each neuron to the successive layer in a fully quantum-coherent way. As such, the work is a concrete first step toward an actual application of near-term quantum processors (i.e., with a few tens of non-error-corrected qubits) as fast and efficient trained artificial quantum neural networks.
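The layered architecture described above can be sketched classically by stacking the overlap-based activation into feedforward layers. This is only a toy illustration: the weights and network shape below are hypothetical, and the real proposal would keep inter-layer information quantum-coherent via the second ancilla rather than passing classical values between layers.

```python
import numpy as np

def activation(inp, w):
    """Quantum-perceptron-style activation: squared overlap of the
    normalized input and weight vectors. On hardware, the threshold
    nonlinearity is realized by measuring an ancilla qubit."""
    a = np.asarray(inp, dtype=float)
    b = np.asarray(w, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return abs(a @ b) ** 2

def layer(inp, weights):
    """One feedforward layer: each row of `weights` is one perceptron."""
    return np.array([activation(inp, w) for w in weights])

# Hypothetical two-layer network acting on a 4-bit input.
x = [1, -1, 1, -1]
W1 = [[1, -1, 1, -1], [1, 1, -1, -1]]  # first layer: 2 neurons
h = layer(x, W1)                        # hidden activations
W2 = [[1, 0]]                           # second layer: 1 neuron
out = layer(h, W2)
print(out)
```

Each "neuron" here collapses its output to a classical number, which is exactly the step the proposed second ancilla qubit would avoid.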
Artificial neural networks are at the heart of machine learning algorithms and artificial intelligence protocols. Historically, the simplest implementation of an artificial neuron traces back to the classical Rosenblatt "perceptron", but its long-term practical applications may be hindered by the fast scaling up of computational complexity, which is especially relevant for the training of multilayered perceptron networks. Here we introduce a quantum information-based algorithm implementing the quantum computer version of a perceptron, which shows an exponential advantage in encoding resources over alternative realizations. We experimentally test a few-qubit version of this model on an actual small-scale quantum processor, which gives answers in remarkably good agreement with the expected results. We show that this quantum model of a perceptron can be used as an elementary nonlinear classifier of simple patterns, a first step toward the practical training of artificial quantum neural networks to be efficiently implemented on near-term quantum processing hardware.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.