A team has shown that a quantum circuit can learn to sift through reams of data from atom-smashing experiments in search of a new particle. Their proof-of-principle study, performed using a machine built by quantum-computing company D-Wave and applied to the now-familiar case of the Higgs boson, does not yet show a clear advantage over conventional techniques. But the authors say that quantum machine learning could make a difference in future experiments, when the amounts of data will grow even larger.
In 2012, two experiments at the Large Hadron Collider (LHC) at CERN, Europe’s high-energy physics lab near Geneva, Switzerland, announced that they had found the Higgs boson, the last missing piece in the standard model of particle physics. The two experiments, called CMS and ATLAS, found evidence of the boson in proton collisions from the way it decayed into more-common particles, such as pairs of high-energy photons. But each time the LHC collides two protons, hundreds of other particles are created, some of which can be misinterpreted as photons when they hit the detectors.
To help speed up their search for the Higgs, ATLAS and CMS physicists used simulated data to train machine-learning algorithms to tell wheat from chaff — photons from impostors.
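The classifier-training step can be sketched in miniature. The snippet below is a hypothetical illustration, not the experiments' actual pipeline: it trains a plain logistic-regression classifier on invented "simulated event" features to separate photon-like signal from impostor background.

```python
import numpy as np

# Hypothetical sketch: the feature names and distributions are made up,
# standing in for real detector quantities (e.g. shower shape, isolation).
rng = np.random.default_rng(0)

n = 2000
# Simulated training set: photon-like signal vs. impostor background.
signal = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(n // 2, 2))
background = rng.normal(loc=[-1.0, -1.0], scale=0.8, size=(n // 2, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

# Plain gradient-descent logistic regression (no external ML library).
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted photon probability
    w -= lr * (X.T @ (p - y)) / n           # gradient step on weights
    b -= lr * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the study was to see whether a quantum device could reach a good classifier with far fewer such training examples than a conventional method needs.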
More recently, particle physicist Maria Spiropulu, who helped lead the Higgs search at CMS, wanted to know whether a quantum computer could help to make the training process more efficient, in particular by reducing the amount of simulated data required to train the system.
The idea was to have the quantum machine find the optimal criteria that an ordinary computer could then use to look for the photon signatures of the Higgs in real data. To test their theory, the team gained access to a D-Wave machine at the University of Southern California in Los Angeles. The experiment was successful, Spiropulu says: “We can train with small data sets and find the optimal solution.”
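To give a flavour of that optimization step: in annealing-based approaches, the search for optimal selection criteria is recast as minimizing a binary quadratic (QUBO/Ising) energy, the form of problem a D-Wave machine samples in hardware. The toy sketch below is an assumption-laden stand-in — the Q matrix is invented, and it is solved with classical simulated annealing rather than a quantum device — but it shows the shape of such a problem.

```python
import numpy as np

# Toy illustration only: minimize  E(s) = s^T Q s  over binary s in {0,1}^n,
# the QUBO form a quantum annealer samples. Q here is invented, and we use
# classical simulated annealing, not D-Wave hardware.
rng = np.random.default_rng(1)

Q = np.array([
    [-2.0,  1.0,  0.5],
    [ 1.0, -3.0,  1.5],
    [ 0.5,  1.5, -1.0],
])
n = Q.shape[0]

def energy(s):
    return float(s @ Q @ s)

s = rng.integers(0, 2, n).astype(float)
best_s, best_E = s.copy(), energy(s)
T = 2.0
for _ in range(2000):
    i = rng.integers(n)            # propose flipping one bit
    cand = s.copy()
    cand[i] = 1.0 - cand[i]
    dE = energy(cand) - energy(s)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s = cand                   # accept move (Metropolis rule)
        if energy(s) < best_E:
            best_s, best_E = s.copy(), energy(s)
    T *= 0.999                     # gradual cooling

print("lowest-energy bitstring:", best_s.astype(int), "energy:", best_E)
```

The low-energy bitstrings returned by the annealer play the role of the "optimal criteria" that a conventional computer then applies to real collision data.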
The machine hasn’t performed better than a virtual version of itself that Spiropulu and her team ran on a conventional computer. And there is still a long way to go to demonstrate that these techniques are more efficient than existing machine-learning algorithms that can train on relatively small data sets.
“The interesting thing is that this whole thing works,” says study co-author Alex Mott.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep-technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.