Came out this morning: http://arxiv.org/abs/1508.05949 - the first successful "loophole-free" Bell experiment, in Delft. They use techniques from probability theory which I introduced to this area: martingale theory, to neutralise possible effects of memory and time-inhomogeneity. The experiment was invented (by J.S. Bell) in 1964. We've had to wait 51 years.

What they did
=============

The experimenters succeeded 245 times in bringing the electron spins at two defects ("holes") in two diamonds, 1400 meters apart, into a state of quantum entanglement. (In each defect, two carbon atoms have been replaced by a nitrogen atom and a hole or "vacancy".) The spins are excited with lasers, which causes photons to be emitted; the photons meet one another half way. You measure the two photons (as they meet) in a special way, and if both photons say "yes" then the two NV- spins are entangled.

OK, you have two entangled spins, in two different laboratories in Delft. In those two laboratories there are experimenters whom we always call "Alice" and "Bob". Alice and Bob now have about 4 microseconds to do the following (because a light signal takes about 4 microseconds to travel 1300 meters, and we believe that nothing can go faster). They each toss a fair coin and, depending on the outcome (heads or tails), they measure their spin in one way or another: along one particular direction or another, say. We call this the measurement setting. The spin then says "up" or "down". That is the measurement outcome.

After a number of repetitions of this, we collect the measurement settings and measurement outcomes together and look at how often the two measurement outcomes were the same (both up or both down) for each combination of the two measurement settings. And they saw (approximately) the following, which is moreover what is predicted by quantum mechanics:

HH: 85% of the time the same measurement outcome
HT: 15% of the time the same measurement outcome
TH: 15% of the time the same measurement outcome
TT: 15% of the time the same measurement outcome

Notice: for just one combination of settings the two spins mostly do the same thing, but for the other three combinations they mostly do the opposite of one another.

According to Einstein, the measurement outcomes must have been determined in advance by "hidden variables" which, restricted by relativity theory, cannot propagate faster than the speed of light. Under that assumption the "best" (most extreme) that those two spins could achieve would be:

HH: 75% of the time the same measurement outcome
HT: 25% of the time the same measurement outcome
TH: 25% of the time the same measurement outcome
TT: 25% of the time the same measurement outcome

The difference between these two tables is, despite the small number of trials, statistically significant. "Local realism" or "local hidden variables" has to be rejected. This is the definitive end to a discussion which started with EPR (Einstein, Podolsky and Rosen, 1935) and was turned on its head by Bell (1964)...
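To see where both tables come from, here is a minimal sketch in Python, with illustrative analyser angles of my own choosing (not the actual Delft settings). The first part computes the quantum predictions for a maximally entangled spin pair (the singlet state); the second checks by brute force that no local deterministic strategy, and hence no mixture of them driven by a hidden variable, can beat the 75% / 25% pattern.

    import math
    from itertools import product

    # Part 1: quantum prediction.  For the singlet state, the probability of equal
    # outcomes is sin^2((alpha - beta)/2) when the two spins are measured along
    # directions at angles alpha and beta (in one plane).
    alice = {"H": 0.0, "T": -90.0}      # coin result -> analyser angle in degrees (illustrative)
    bob   = {"H": -135.0, "T": -45.0}
    for a, b in product("HT", repeat=2):
        delta = math.radians(alice[a] - bob[b])
        print(f"{a}{b}: P(same outcome) = {math.sin(delta / 2) ** 2:.2f}")
    # prints 0.85 for HH and 0.15 for HT, TH and TT

    # Part 2: a local deterministic strategy fixes each party's answer (+1 or -1)
    # for each coin result, independently of the other party's coin.  Count how many
    # of the four setting pairs match the target pattern "same for HH, different
    # otherwise"; a hidden variable just selects (a mixture of) such strategies,
    # so it can never beat the best deterministic score.
    best = max((aH == bH) + (aH != bT) + (aT != bH) + (aT != bT)
               for aH, aT, bH, bT in product([+1, -1], repeat=4))
    print("best local score:", best, "out of 4")    # prints 3, i.e. 75% / 25% at best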
My contribution
===============

The problem with this experiment is that the 245 individual trials are performed again and again at the same place with the same materials. There might perhaps be memory effects by which quantum correlations can be "faked" using classical-like physics. Also, as time goes by, all kinds of time shifts can take place. There is no reason at all to imagine that we have 245 independent and identically distributed sub-experiments. Maybe time-inhomogeneity and memory can generate what appear to be "quantum correlations" in a completely classical physical way.

I showed a few years ago how you can take account of this. The key is that, 245 times, we tossed two fair coins. I shift the statistical probability calculations from the physics - choosing another two random particles again and again - to the experimental design, randomisation: tossing two coins again and again. In other words, we treat this like a "randomized clinical trial" in order to neutralise the possible effects of confounding factors - in this case *time*. I did that in 2003: http://arxiv.org/abs/quant-ph/0301059

I wanted to make a bet with someone who claimed they could imitate quantum correlations on a network of classical computers, thereby having proved that Bell was wrong, Bohr was wrong, and EPR (Einstein) had won. I saw how you could use martingale theory (the theory of the random process of your net capital, as a function of time, in the course of repeated plays at a game of chance like roulette) to design a bet which I had a chance of at most one in a million of losing. We were going to bet 5000 Euro. The questions were how many trials would be needed at least, and what the criterion for win/lose should be. My opponent might well use memory and time ... but I was going to insist on choosing settings with fair coin tosses (a sketch of the kind of bound involved follows at the end of this section).

My result got picked up by the experimenters. In recent years they have refined my rough probability inequalities (got them as sharp as possible). They have been preparing this experiment in Delft for two years. There are about 5 top experimental groups in the world who all want to be first. Two years ago they got very close with experiments with photons (Christensen et al 2013, Giustina et al 2013). Photons are slippery creatures and you lose a lot of them. Maybe the photons which you do catch are not a random sample of all the photons. The experiment of Aspect et al (1981) confirms quantum mechanics, but his results do not conflict with "local realism", as it is called, unless you make an extra and untestable assumption called the "fair-sampling" assumption. http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem
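For concreteness, here is a minimal sketch of the type of calculation behind such a bet, using an Azuma-Hoeffding bound for supermartingales in the "CHSH game" formulation. The 80% threshold and the numbers printed below are illustrative; they are not the actual terms of the bet, nor the exact constants of the 2003 paper.

    import math

    # Each trial: settings come from two fresh fair coin tosses.  Under local
    # realism the conditional probability of "winning" the trial (outcomes equal
    # exactly when both coins are heads, different otherwise), given everything
    # that happened before, is at most 3/4 -- memory and time-variation allowed.
    # The number of wins W_n then satisfies the Azuma-Hoeffding bound
    #     P(W_n >= 0.75*n + t) <= exp(-2*t^2 / n).
    # Quantum mechanics predicts a winning fraction of about 0.85, so a threshold
    # between the two, say 0.80, gives a fair test.

    def trials_needed(threshold=0.80, local_bound=0.75, alpha=1e-6):
        """Smallest n such that a local realist reaches the threshold with
        probability at most alpha, using the bound above with
        t = (threshold - local_bound) * n."""
        gap = threshold - local_bound
        return math.ceil(math.log(1.0 / alpha) / (2.0 * gap ** 2))

    print(trials_needed())             # about 2764 trials for a one-in-a-million bound
    print(trials_needed(alpha=0.05))   # about 600 trials for an ordinary 5 percent level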
Conclusion
==========

From now on we are forced to believe in *irreducible randomness* as a fundamental feature of nature. Nature is intrinsically non-deterministic, and in a way which can cause bizarre effects at great distance. Precisely because of the large amount of randomness in these phenomena, they do not contradict relativity theory: you cannot use these correlations to transmit signals at speeds greater than light-speed.

"I cannot say that action at a distance is required in physics. But I cannot say that you can get away with no action at a distance. You cannot separate off what happens in one place with what happens at another" -- John Bell, https://www.youtube.com/watch?v=V8CCfOD1iu8

"Nature produces chance events (irreducibly chance-like!) which can occur at widely removed spatial locations without anything propagating from point to point along any path joining those locations. ... The chance-like character of these effects prevents any possibility of using this form of non locality to communicate, thereby saving from contradiction one of the fundamental principles of relativity theory according to which no communication can travel faster than the speed of light" -- Nicolas Gisin, "Quantum Chance: Nonlocality, Teleportation and Other Quantum Marvels", Springer, 2014.

Keywords and phrases: entanglement, spooky action at a distance, EPR (Einstein-Podolsky-Rosen), Bell

Nowadays these phenomena are the basis for upcoming technology (quantum cryptography, quantum communication, quantum computation). But as long as a loophole-free Bell-type experiment could not be done, it was all just a dream. If you can only do that experiment by cheating, then you cannot build secure communication and cryptography protocols on it, because someone might be cheating: they might be faking quantum entanglement by classical physical means.

Appendix
========

Some more remarks on the data and the statistics. In fact, the experimenters did not quite get the most extreme possible quantum statistics (85%, 15%, 15%, 15%); they got something more like 80%, 20%, 20%, 20%. The point is that this is statistically significantly better than the best that Einstein says is possible: 75%, 25%, 25%, 25%.

The conventional way to analyse these results is to shift from probabilities of equality to correlations. If X and Y are two random variables taking the values +/-1, then E(XY) = 2 Prob(X = Y) - 1. We calculate four correlations, getting one large one and three small ones. We subtract the three small ones from the one large one and get a number called S. Einstein says that S is not bigger than 2. Quantum mechanics says that S could be as large as 2 sqrt 2 = 2.828... In Delft they saw S = 2.4, which is nicely halfway between Einstein's bound and QM's best.

But could this result be due to chance? This is where statistical calculations, p-values, significance and standard errors come into play. An empirical correlation between binary variables based on a random sample of size N has a variance of (1 - rho^2)/N. The worst case is rho = 0 and variance 1/N. In fact we are looking at four empirical correlations equal to approximately +/- 0.6, so if we believe that we have four random samples of pairs of binary outcomes, then each empirical correlation has a variance of about 0.64 / N, where N is the number of pairs of observations for each pair of settings. If the four samples are statistically independent, the variance of S is about 4 * 0.64 / N with N = 245 / 4. This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is not larger than 2. Since N is large enough that the normal approximation is not bad, we can say that we have a 0.4 / 0.2 = 2 standard deviation departure from local realism. The chance that this occurs by chance is 0.023.

However, if we are a bit paranoid, we might not trust the assumptions of statistical independence and constant probabilities ... and we might not trust the normal approximation which I just used. Then we have to be more careful. Using more advanced probability techniques (martingale inequalities), the Delft team found that the probability (assuming local realism) of a result as extreme as the one they got (S >= 2.4) is not bigger than 0.039. Actually I think it is amazing that the rough and ready approximate result is not really different from the paranoid / conservative exact result.

Clearly it would be nice to have the experiment redone with, say, ten (approximately 9 = 3 squared) times as many trials. Then the standard errors would be three times smaller (everything else remaining the same): 6 standard deviations instead of 2. Then we would have the degree of confidence usually demanded in particle physics (the Higgs boson, for instance).
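The rough calculation above can be checked in a few lines (a sketch, assuming exactly the i.i.d. sampling and normal approximation that the martingale analysis avoids; the conservative 0.039 figure is not reproduced here).

    import math

    p_same = [0.80, 0.20, 0.20, 0.20]            # observed agreement frequencies (approx.)
    corr   = [2 * p - 1 for p in p_same]         # E(XY) = 2 Prob(X = Y) - 1  ->  +0.6, -0.6, -0.6, -0.6
    S      = corr[0] - sum(corr[1:])             # "one large minus three small"  ->  2.4

    N_per_setting = 245 / 4                      # trials per combination of settings
    var_corr = (1 - 0.6 ** 2) / N_per_setting    # variance of each empirical correlation
    var_S    = 4 * var_corr                      # four (assumed independent) correlations
    se_S     = math.sqrt(var_S)                  # about 0.2

    z = (S - 2) / se_S                           # distance from the local-realist bound S <= 2
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail probability
    print(round(S, 2), round(se_S, 2), round(z, 2), round(p_value, 3))
    # prints 2.4, 0.2, 1.96, 0.025 (the "2 s.d., p = 0.023" above uses the rounded s.e. of 0.2)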
Bell's instructions (for a loophole-free experiment)
====================================================

Bell (1981):

"You might suspect that there is something specially peculiar about spin-1/2 particles. In fact there are many other ways of creating the troublesome correlations. So the following argument makes no reference to spin-1/2 particles, or any other particular particles.

"Finally you might suspect that the very notion of particle, and particle orbit, freely used above in introducing the problem, has somehow led us astray. Indeed did not Einstein think that fields rather than particles are at the bottom of everything? So the following argument will not mention particles, nor indeed fields, nor any other particular picture of what goes on at the microscopic level. Nor will it involve any use of the words ‘quantum mechanical system’, which can have an unfortunate effect on the discussion. The difficulty is not created by any such picture or any such terminology. It is created by the predictions about the correlations in the visible outputs of certain conceivable experimental set-ups.

"Consider the general experimental set-up of Fig. 7. To avoid inessential details it is represented just as a long box of unspecified equipment, with three inputs and three outputs. The outputs, above in the figure, can be three pieces of paper, each with either ‘yes’ or ‘no’ printed on it. The central input is just a ‘go’ signal which sets the experiment off at time t1. Shortly after that the central output says ‘yes’ or ‘no’. We are only interested in the ‘yes’s, which confirm that everything has got off to a good start (e.g., there are no ‘particles’ going in the wrong directions, and so on). At time t1 + T the other outputs appear, each with ‘yes’ or ‘no’ (depending for example on whether or not a signal has appeared on the ‘up’ side of a detecting screen behind a local Stern–Gerlach magnet). The apparatus then rests and recovers internally in preparation for a subsequent repetition of the experiment. But just before time t1 + T, say at time t1 + T - delta, signals a and b are injected at the two ends. (They might for example dictate that Stern–Gerlach magnets be rotated by angles a and b away from some standard position). We can arrange that c delta << L, where c is the velocity of light and L the length of the box; we would not then expect the signal at one end to have any influence on the output at the other, for lack of time, whatever hidden connections there might be between the two ends.

"Sufficiently many repetitions of the experiment will allow tests of hypotheses about the joint conditional probability distribution P(A,B|a, b) of results A and B at the two ends for given signals a and b. Now of course it would be no surprise to find that the two results A and B are correlated, i.e., that P does not split into a product of independent factors: P(A,B|a,b) != P1(A|a)P2(B|b). But we will argue that certain particular correlations, realizable according to quantum mechanics, are locally inexplicable. They cannot be explained, that is to say, without action at a distance."
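To connect Bell's set-up with the bound S <= 2 used in the appendix, here is the standard textbook derivation of the CHSH inequality from the local factorization he describes (my notation, not Bell's).

    % Textbook sketch: CHSH from local hidden variables.
    % Outcomes A, B take values in [-1, +1] and depend only on the local
    % setting and a shared variable lambda:
    \[
      E(a,b) = \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, \mathrm{d}\lambda .
    \]
    % For every lambda, since |u - v| + |u + v| <= 2 whenever |u|, |v| <= 1,
    \[
      \bigl| A(a,\lambda)\,[B(b,\lambda) - B(b',\lambda)]
           + A(a',\lambda)\,[B(b,\lambda) + B(b',\lambda)] \bigr| \le 2 ,
    \]
    % and averaging over lambda gives the CHSH inequality
    \[
      S = E(a,b) - E(a,b') + E(a',b) + E(a',b') \in [-2, 2] .
    \]
    % Relabelling one party's outcomes for one of its settings
    % (A(a',\lambda) -> -A(a',\lambda)) turns this into the "one large
    % correlation minus the three small ones" combination used in the appendix,
    % so the same bound |S| <= 2 applies there, while quantum mechanics allows
    % values up to 2*sqrt(2).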