The Role of Statistical Science in Quantum Information - Theory and Application

 

I will give an intensive course over three days, beginning with a one-hour introductory lecture that is open to all who are interested. Each course day will consist of a one-hour talk in the morning and a two-hour talk in the afternoon, with plenty of time for discussion, questions, and comments. I hope that those who follow the course will actually interrupt me with questions whenever they don’t understand something. The topic will certainly contain a lot of unfamiliar material for almost every participant. I want to bridge the two worlds of modern quantum information and modern statistical science: I want to explain to mathematical statisticians why they should be interested in quantum information, and to explain to quantum information folk why it would pay off for them to know more about modern statistics. Most importantly, I want to spark new scientific collaborations between people from these two separate worlds.

 

The lecture which introduces the course is meant to be accessible to a very broad audience of interested scientists, including mathematicians, physicists, and statisticians of all kinds. It is aimed especially at statisticians who know nothing at all (and probably, so far, did not want to know anything at all) about quantum information, quantum communication, quantum computing, quantum entanglement, quantum cryptography, or quantum tomography. I want to emphasize that no physics background at all is needed; but no statistics background is needed either! I have given this one-hour talk successfully to very mixed audiences: a large audience of high school students, a large audience of pure mathematicians, a large audience of applied physicists.

 

First, I will reproduce the abstract of the introductory talk. I call it “hour 0”. After that, I’ll give an outline of the next lectures, grouping them in pairs, called “hour 1” up to “hour 8”. I’ll give some references to introductory material which some participants might like to look at in advance, or perhaps more likely, to consult after the lecture. Or even during it, if they are bored… At the end is some kind of catalogue of further stuff which we might want to fit in somewhere, perhaps in some “extra time”.

 

Hour 0: Introductory public lecture:

“The Role of Statistical Science in Quantum Information - Theory and Application”

 

In medical research, everyone agrees that research should be “evidence based”, that the evidence is largely statistical, and that the gold standard of evidence-based research is the correctly analyzed double-blind randomized clinical trial. Physicists, on the other hand, traditionally have little or no use for randomization or for double blinding, and in fact very little respect for statistics. Lord Rutherford famously said: “If you need statistics, you did the wrong experiment”. Yet the incredible advances in quantum information theory and quantum information applications which are being made right now depend essentially on probability theory and on statistical thinking. Quantum mechanics is a probabilistic theory of the world. In order to find out whether one has indeed implemented quantum algorithms, one needs to measure quantum systems and test hypotheses about, or estimate parameters of, those systems. The outcomes are random; quantum theory tells us what the probabilities should be. The quantum information revolution needs statisticians.
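
As a tiny illustration of that last point (a sketch of my own, not part of the lecture abstract): the Born rule says that measuring a qubit with state vector psi in the computational basis yields outcome k with probability |psi_k|^2. The statistician never sees psi, only a sample of random outcomes.

    import numpy as np

    # Born rule for a single qubit: measuring the state psi = (psi_0, psi_1)
    # in the computational basis gives outcome k with probability |psi_k|^2.
    psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])  # a hypothetical example state
    probs = np.abs(psi) ** 2                      # -> [0.8, 0.2]

    # What an experimenter actually records: i.i.d. random outcomes.
    rng = np.random.default_rng(0)
    print(rng.choice([0, 1], size=20, p=probs))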

 

I will show how randomization and martingale theory are involved in the most conclusive experiments yet performed to test the most far-reaching predictions of quantum theory: the famous experiments testing Bell’s theorem. Bell’s (1964) theorem is a theoretical analysis which shows that quantum mechanics predicts real-world phenomena which cannot be replicated by any “behind the scenes” classical theory of physics. The same features enable quantum cryptography and quantum key distribution technologies, some of which are already commercially implemented. The security of these systems depends on the creation of particular quantum states, and on the implementation of particular quantum transformations and measurements of states.
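
To make the contrast concrete (a minimal sketch of my own, using the standard CHSH setup rather than anything specific to the experiments discussed below): a “behind the scenes” classical model assigns ±1 outcomes via a shared hidden variable, and no such model can push the CHSH correlation statistic S beyond 2, whereas quantum mechanics predicts up to 2√2 ≈ 2.83. The toy local model below saturates the classical bound exactly.

    import numpy as np

    rng = np.random.default_rng(0)

    # Standard CHSH measurement angles for Alice (a) and Bob (b).
    a = [0.0, np.pi / 2]
    b = [np.pi / 4, -np.pi / 4]

    # A simple local hidden variable model: each particle pair carries a
    # shared hidden direction lam; each outcome is the sign of the cosine
    # of the angle between the local setting and lam.
    n = 200_000
    lam = rng.uniform(0, 2 * np.pi, n)

    def outcome(angle):
        return np.sign(np.cos(angle - lam))  # +/-1 outcomes

    def corr(angle_a, angle_b):
        return np.mean(outcome(angle_a) * outcome(angle_b))

    S = corr(a[0], b[0]) + corr(a[0], b[1]) + corr(a[1], b[0]) - corr(a[1], b[1])
    print(f"local model: S = {S:.3f}  (any local model: |S| <= 2)")
    print(f"quantum:     S = {2 * np.sqrt(2):.3f}  (singlet-state prediction)")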

 

Already in the early 1980s, experiments were done (by Alain Aspect at Orsay, near Paris) which seemed to confirm Bell’s predictions. But the experimental design was limited by the technology of the time, and though the experiments confirmed the quantum mechanical predictions, they did not exclude there being a “classical” explanation behind the scenes of what was going on. As the years went by, technology slowly improved, and finally in 2015 it was possible to perform the experiment which John Stewart Bell had essentially dreamed of in 1964, under the precautions needed to ensure that the experiment not only confirmed quantum mechanics but that its results excluded any reasonable classical explanation of what was going on “behind the scenes”.

 

References:

 

https://arxiv.org/abs/1207.5103

Statistics, Causality and Bell’s Theorem

Richard D. Gill

Statistical Science 2014, Vol. 29, No. 4, 512-528

Comment: Written just before the loophole-free experiments of 2015!

 

https://hal.archives-ouvertes.fr/jpa-00220688

Bertlmann’s socks and the nature of reality

John S. Bell

Journal de Physique Colloques, 1981, 42 (C2), pp.C2-41-C2-62. 

doi:10.1051/jphyscol:1981202

Comment: In my opinion, this paper by J.S. Bell is his masterpiece, written 15 years after the 1964 and 1965 papers which made him famous.

 

https://doi.org/10.1103/PhysRev.47.777

Can Quantum-Mechanical Description of Physical Reality be Considered Complete?

A. Einstein, B. Podolsky, N. Rosen (1935)

Physical Review 47 (10): 777–780.

Comment: The historical paper which started the whole thing (less than four pages, take a look!)

 

 

 

Hours 1 and 2: “Teleportation into quantum statistics”

 

I will introduce the basic ingredients of quantum information theory and show you how quantum teleportation works. I will cover the fundamental notions of state, preparation, and measurement. Participants should be familiar with the basic properties of finite dimensional complex vectors and matrices, and in particular know about orthonormal bases (diagonalization, singular value decomposition). We will find out that “quantum teleportation” is not exactly what it seems, but that there is still some mystery in how the transfer over an ordinary public phone line of two bits of information enables the perfect copying (at the cost of the destruction of the original) of a “qubit”: a two-level quantum system whose state actually depends on three real numbers.
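
For the curious, here is a minimal numpy sketch of the protocol (my own illustration, following the standard circuit: Alice makes a Bell measurement on her two qubits, sends the two resulting classical bits over the public channel, and Bob applies the corresponding Pauli correction).

    import numpy as np

    rng = np.random.default_rng(1)

    # Single-qubit gates.
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # A random qubit state to teleport: psi = alpha|0> + beta|1>.
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)

    # Qubit order: (1) Alice's unknown qubit, (2) Alice's half of the Bell
    # pair, (3) Bob's half. Bell pair = (|00> + |11>)/sqrt(2) on qubits 2, 3.
    bell = np.zeros(4)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    state = np.kron(psi, bell)  # 8-dimensional joint state vector

    # Alice's Bell measurement: CNOT (control 1, target 2), H on qubit 1,
    # then measure qubits 1 and 2 in the computational basis.
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    state = np.kron(H, np.kron(I, I)) @ (np.kron(CNOT, I) @ state)

    # Outcome probabilities for the four results (m1, m2); sample one.
    state = state.reshape(2, 2, 2)  # indices: qubit 1, qubit 2, qubit 3
    probs = np.sum(np.abs(state) ** 2, axis=2).ravel()
    m = rng.choice(4, p=probs)
    m1, m2 = m >> 1, m & 1

    # Collapse to Bob's conditional state and normalize.
    bob = state[m1, m2, :]
    bob = bob / np.linalg.norm(bob)

    # Bob's correction, determined by the two classical bits Alice sends.
    if m2:
        bob = X @ bob
    if m1:
        bob = Z @ bob

    print("fidelity:", np.round(abs(np.vdot(psi, bob)) ** 2, 12))  # -> 1.0

Note the statistical point: Bob’s corrected state equals psi exactly, but verifying this in a real laboratory means estimating the state from measurement outcomes, which is precisely the tomography problem of hours 5 and 6.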

 

https://arxiv.org/abs/math/0405572

Teleportation into Quantum Statistics

Richard D. Gill

Journal of the Korean Statistical Society 30 (2001), 291-325

 

https://www.amazon.com/Quantum-Computation-Information-10th-Anniversary/dp/1107002176

Quantum Computation and Quantum Information: 10th Anniversary Edition

Michael A. Nielsen & Isaac L. Chuang

Comment: this book is still the bible of quantum information. It has nice introductions to all the main components (classical information theory, classical computational complexity, the elementary basics of quantum mechanics, …) as well as to the famous algorithms of Shor and Grover.

 

 

Hours 3 and 4: “Bell’s theorem, and the quest for a loophole-free experimental test of Bell’s theorem”

 

J.S. Bell’s theorem states that certain predictions of quantum mechanics cannot possibly, even approximately, be reproduced by any “classical-like” physical theory, where “classical-like” is characterized by three attributes: locality, realism, and no-conspiracy. The term “realism” could be replaced by “pseudo-determinism”; in the modern language of causality, one also uses the phrase “counterfactual definiteness”. Essentially, it means that in a mathematical model of the physical situation being studied, the outcomes of experiments which you did not actually perform can be supposed to exist (in the mathematical sense) alongside the outcome of the experiment which you did perform. Future quantum technology (some of it already available commercially), of great interest both to major corporations and major state institutions, depends strongly on the failure of such classical-like descriptions. On the other hand, quantum mechanics has uncertainty relations which mean that quantum noise will eventually put a stop to Moore’s law; we are already reaching the quantum limits of classical computing. But the phenomenon of quantum entanglement also means that some things are possible in reality which, before the discovery of quantum mechanics, no one would ever have believed.
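
The counterfactual-definiteness assumption has a strikingly simple consequence (a sketch of my own, using the standard CHSH statistic; the lecture develops this properly). If all four potential outcomes A1, A2, B1, B2 ∈ {−1, +1} exist simultaneously, then A1·B1 + A1·B2 + A2·B1 − A2·B2 = A1·(B1 + B2) + A2·(B1 − B2) = ±2, since one bracket is 0 and the other ±2; taking expectations gives the CHSH bound |S| ≤ 2. Exhaustive enumeration confirms it:

    from itertools import product

    # Counterfactual definiteness: all four potential outcomes exist at once,
    # each equal to +1 or -1. Enumerate all 16 joint assignments.
    chsh = [a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
            for a1, a2, b1, b2 in product([-1, 1], repeat=4)]
    print(sorted(set(chsh)))  # [-2, 2]: every expectation lies in [-2, 2],
    # while quantum mechanics reaches 2*sqrt(2) = 2.828...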

 

https://arxiv.org/abs/1207.5103

https://projecteuclid.org/euclid.ss/1421330545

Statistics, Causality and Bell’s Theorem

Richard D. Gill

Statistical Science 2014, Vol. 29, No. 4, 512-528

Comment: Written just before the loophole-free experiments of 2015!

 

 

Hours 5 and 6: “Quantum state reconstruction and quantum state tomography”

 

Experimental physicists and engineers are building quantum devices which exploit quantum entanglement and other weird features of quantum mechanics in order to build quantum computers, perform quantum cryptography, and so on. The general theory is incredibly accurate and incredibly well tested (to much higher accuracy than the other foundation of modern physics, general relativity). In order to show that they have succeeded in doing what they planned, the experimenters need to measure the quantum systems which they have created and reconstruct their state, to sufficient accuracy to show that the engineering was indeed successful.

Now: the reconstruction (from measurement results) of the joint quantum state of, say, 15 qubits is “just” an inverse statistical problem. Theoretically the state depends on (2 to the power 15) squared, i.e. about a billion, real numbers: the entries of a 2^15 by 2^15 density matrix. Typically, one can quite easily measure each of the 15 qubits separately, getting a binary outcome (“spin up or spin down, in a chosen measurement direction”). So this is not just an inverse problem but also a “wide data” problem: there are many, many more real parameters than binary measurement outcomes in the data. Hence we are going to need notions of sparsity to reduce the number of parameters drastically, introducing bias but decreasing variance. There are physical constraints on the parameters, too. Or, to put it a different way, one needs to find a parametrization which makes the parameter space as nice as possible, while allowing us to define sub-models of much lower dimension which can still describe the data well, and which contain the features the experimenters are hoping to prove they have engineered.
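
The one-qubit case already shows the structure of the problem (a toy sketch of my own; real tomography of 15 qubits is the high-dimensional version of this, and the references below compare serious estimation methods). A qubit state is a density matrix rho = (I + r·sigma)/2 with Bloch vector |r| ≤ 1; measuring each Pauli observable on n copies gives binomial counts, and linear inversion of the empirical means estimates r, though with few counts the estimate can violate the positivity constraint and must be projected back:

    import numpy as np

    rng = np.random.default_rng(2)

    # Pauli matrices: any qubit state is rho = (I + r . sigma)/2, |r| <= 1.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Hypothetical true state: Bloch vector of length 0.9.
    r_true = 0.9 * np.ones(3) / np.sqrt(3)

    # Measure each Pauli on n copies: outcomes +/-1 with P(+1) = (1 + r_k)/2.
    n = 500
    r_hat = np.array([2 * rng.binomial(n, (1 + rk) / 2) / n - 1 for rk in r_true])

    # Linear inversion can leave the Bloch ball (a non-positive "state");
    # project back onto the physical parameter space if necessary.
    if np.linalg.norm(r_hat) > 1:
        r_hat = r_hat / np.linalg.norm(r_hat)

    rho_hat = 0.5 * (np.eye(2) + r_hat[0] * sx + r_hat[1] * sy + r_hat[2] * sz)
    print("estimated Bloch vector:", np.round(r_hat, 3))
    print("estimated state:\n", np.round(rho_hat, 3))

With 15 qubits the same logic applies, but the number of Pauli expectations explodes to 4^15, which is exactly why sparsity and low-dimensional sub-models are needed.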

 

https://arxiv.org/abs/math/0405595

An invitation to quantum tomography

L.M. Artiles, R.D. Gill, M.I. Guta

Journal of the Royal Statistical Society (B) 67, 109-134 (2005)

 

https://arxiv.org/abs/1901.07991

https://iopscience.iop.org/article/10.1088/1751-8121/ab1958

A comparative study of estimation methods in quantum tomography

Anirudh Acharya, Theodore Kypraios, Madalin Guta

Journal of Physics A: Mathematical and Theoretical, Volume 52, Number 23 

 

 

Extra time: Any other business; closing loose ends; discussion; conclusion

 

If there is time for anything more, the content of a possible extra lecture will depend very much on the taste of the audience who remain (one expects the number of participants to decline slowly as the week progresses!) and, of course, on whether or not I succeeded in covering what I had hoped to cover in the preceding lectures. I see room for several possible final topics, as well as, of course, the possibility of simply revisiting the earlier topics.

 

1) How to overcome all the experimental loopholes known so far in Bell-type experiments (loopholes which the 2015 experiments more or less managed to deal with). This is a return to classical thinking and classical probability theory, since the question we are looking at is whether a classical stochastic theory could reproduce certain experimental results.

 

2) In view of recent successful experiments and recent theoretical work, what should we presently think about the foundations of quantum mechanics? Physicists on the whole agree that quantum theory is correct, but many still like to have a way of thinking about what it actually means. Some physicists actually believe this is a waste of time; there is a famous slogan, “shut up and calculate”: we have the math, and lots of tricks to deal with it, and it is hard enough as it stands, even with all those sophisticated tricks, so we don’t really need any idea of what it actually means. Others think that the interpretational problem (what is actually going on, in those experiments, at the level of individual particles?) has been solved by the “many worlds interpretation”, which, I must admit, I think is just “many words”: it hides the actual problem under verbiage. The famous Schrödinger’s cat thought experiment shows (to me at least, convincingly) that there is a mathematical/logical contradiction in the mathematical framework (“unitary, no-collapse QM”) which the professional quantum “calculators” are so skilled at deploying; moreover, there is no satisfactory derivation of Born’s law, the rule which tells us the probabilities of what we actually see. There is QBism (the “B” stands for subjective Bayesian). There are more options out there. I presently put my money on so-called “collapse theories”. This means that, right now, I think that quantum non-locality is for real, and I believe that, thanks to the existence of irreducible randomness, it does not violate relativistic causality. This is not a popular option, but I know that it is actually the strong belief of several of the most important experimenters and theorists. At some time it will be possible to experimentally distinguish between these “interpretations”, and at that time there will also be technological implications.

 

3) A number of my students have explored with me the possibility of generalizing LeCam’s theory of convergence of experiments to the quantum realm. There is a serious obstruction, but there have also been some remarkable successes, and the latest results are now being used by experimenters and engineers to get good approximations and close-to-optimal solutions, just as classical asymptotic statistical theory is used in practice to get good approximations and to do what is practically close to optimal in real, finite-n situations.

 

4) The quantum information folk are now throwing themselves into a new field called “quantum learning”. What is it? Will it work?

 

My homepage and my recent SlideShare talks can give participants a preview of what is possible, including several statistical topics which (as far as I know) have nothing whatsoever to do with quantum physics.

 

https://www.math.leidenuniv.nl/~gill

https://www.slideshare.net/gill1109/presentations

 

 

Richard Gill

Leiden University

16 October 2019.