
New research shows that near-term quantum computers can learn to reason

IBM's quantum computer

The applications and development of quantum computers have steadily picked up pace in the last few years. We've seen researchers apply this novel method of computation in a variety of domains, including quantum chemistry, fluid dynamics, open problems, and even machine learning, all with promising results.

Continuing this trend, UK-based startup Cambridge Quantum Computing (CQC) has now demonstrated that quantum computers "can learn to reason". Although the claim may sound confusing at first, it is based on new research coming out of CQC. Dr. Mattia Fiorentini, Head of Quantum Machine Learning at the firm, and his team of researchers investigated the use of quantum computers for Variational Inference.

Variational Inference is a process through which we approximate a given probability distribution using stochastic optimization and other machine learning techniques. Jargon aside, this means a quantum computer can output potential answers to inferential questions such as: given that the grass is wet and the sky is cloudy, what is the more probable cause, rain or the sprinklers? Formally, the question is posed as follows:

“What is the state of the unobserved variables given observed data?” These outputs can then be used in downstream tasks, such as finding the likeliest explanation for the available data or predicting future outcomes and their confidence.
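
To make the cloud-sprinkler-rain question concrete, here is a short Python sketch that computes the exact posterior for that tiny Bayesian network by brute-force enumeration. The conditional probabilities are illustrative placeholders, not values from the paper; variational inference becomes necessary precisely when networks grow too large for this kind of exhaustive calculation.

```python
# Exact inference on the classic cloud-sprinkler-rain Bayesian network.
# The conditional probabilities below are illustrative placeholders.
from itertools import product

p_cloudy = 0.5
p_sprinkler_given_cloudy = {True: 0.1, False: 0.5}
p_rain_given_cloudy = {True: 0.8, False: 0.2}
# P(grass wet | sprinkler, rain)
p_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.90, (False, False): 0.01}

def joint(cloudy, sprinkler, rain, wet):
    """Joint probability of one full assignment of the four variables."""
    p = p_cloudy if cloudy else 1 - p_cloudy
    p *= p_sprinkler_given_cloudy[cloudy] if sprinkler else 1 - p_sprinkler_given_cloudy[cloudy]
    p *= p_rain_given_cloudy[cloudy] if rain else 1 - p_rain_given_cloudy[cloudy]
    p *= p_wet[(sprinkler, rain)] if wet else 1 - p_wet[(sprinkler, rain)]
    return p

# Observed: the grass is wet and it is cloudy. Sum out the unobserved variables.
evidence = sum(joint(True, s, r, True) for s, r in product([True, False], repeat=2))
p_rain = sum(joint(True, s, True, True) for s in [True, False]) / evidence
p_sprinkler = sum(joint(True, True, r, True) for r in [True, False]) / evidence

print(f"P(rain | wet, cloudy)      = {p_rain:.3f}")
print(f"P(sprinkler | wet, cloudy) = {p_sprinkler:.3f}")
```

With these placeholder numbers, rain comes out as the far more probable cause, which is the kind of answer an inference engine is expected to return given the evidence.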

The team's work, titled Variational inference with a quantum computer, has been published on the pre-print repository arXiv and highlights what the firm believes to be a promising indicator that quantum computers are well suited to Variational Inference and, by extension, to reasoning.

Outputs from a quantum computer appear random. However, quantum computers can be programmed to output random sequences with certain patterns. These patterns are discrete and can become so complex that classical computers cannot reproduce them in a reasonable amount of time. This is why quantum computers are natural tools for probabilistic machine learning tasks such as reasoning under uncertainty.

In the paper, the researchers demonstrate their results on Bayesian networks. Three different problem sets were tested. First was the classic cloud-sprinkler-rain problem described above. Second was the prediction of market regime switches (bull or bear) in a hidden Markov model of simulated financial time series. Third was the task of inferring likely diseases in patients given some information about their symptoms and risk factors.

Using adversarial training and the kernelized Stein discrepancy, the details of both of which can be found in the paper, the firm optimized a classical probabilistic classifier and a probabilistic quantum model, called a Born machine, in tandem.
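
To make the idea of a Born machine more concrete, below is a minimal, purely classical state-vector simulation of one: a small parameterized circuit whose measurement outcomes are sampled according to the Born rule. The circuit layout, parameter values, and NumPy implementation are illustrative assumptions, not CQC's actual model or training code.

```python
# Toy two-qubit Born machine: parameterized single-qubit rotations plus an
# entangling gate, with bitstrings sampled according to the Born rule.
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def born_machine_probs(params):
    """Distribution over 2-bit strings produced by the parameterized circuit."""
    state = np.zeros(4)
    state[0] = 1.0                                  # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state                            # entangle the qubits
    return np.abs(state) ** 2                       # Born rule

rng = np.random.default_rng(0)
probs = born_machine_probs([0.7, 1.9])              # arbitrary example parameters
samples = rng.choice(4, size=1000, p=probs)
for i in range(4):
    print(f"P({i:02b}) = {probs[i]:.3f}, empirical = {np.mean(samples == i):.3f}")
```

In the paper, the parameters of such a circuit are tuned so that its output distribution approximates the posterior of interest; the sketch above only shows the sampling side of that picture.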

Diagrams: the adversarial method and the kernelized Stein discrepancy method

Once the models were trained, inference was carried out on the three problems defined earlier, both on a quantum simulator and on IBM Q's real quantum computers. In the truncated histograms shown below, the magenta bars represent the true probability distribution, the blue bars indicate outputs from a quantum computing simulator, and the grey bars indicate the output from real quantum hardware from IBM Q. The results on real quantum hardware are marred by noise, which causes slower convergence compared to the simulation. That is to be expected in the NISQ era, however.

Truncated histograms of the posterior distributions for a hidden Markov model and for a medical diagnosis task

The probability distribution from the quantum simulator closely resembles the true probability distribution, indicating that the quantum algorithm trained well and that the firm's adversarial training and kernelized Stein discrepancy methods are effective for the intended purpose.
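
As a rough illustration of what "closely resembles" means quantitatively, one common measure is the total variation distance between the sampled and true distributions. The numbers below are made-up placeholders; the paper reports its own figures of merit.

```python
# Total variation distance between a true posterior and sampled estimates.
import numpy as np

def total_variation(p, q):
    """Half the L1 distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

true_posterior = [0.65, 0.25, 0.08, 0.02]   # illustrative values only
simulator_est  = [0.63, 0.27, 0.07, 0.03]
hardware_est   = [0.55, 0.30, 0.10, 0.05]   # noisier, as on real devices

print("simulator TVD:", total_variation(true_posterior, simulator_est))
print("hardware  TVD:", total_variation(true_posterior, hardware_est))
```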

As the researchers summarize in the paper:

"We demonstrate the approach numerically using examples of Bayesian networks, and implement an experiment on an IBM quantum computer. Our techniques enable efficient variational inference with distributions beyond those that are efficiently representable on a classical computer."

The firm believes this is yet another indicator that "sampling from complex distributions is the most promising way towards a quantum advantage with today’s noisy quantum devices", and that its new inference methods "can incorporate domain expertise". Moving forward, the firm envisions "a combination with other machine learning tasks in generative modeling or natural language processing for a meaningful impact."

Further details can be found in this blog post and in the paper on arXiv. If you are interested, you can also check out Dr. Fiorentini's interview on YouTube here.
