

Experimental tests of Bell's inequality

When Bell's result was published, the existing experimental record was reviewed for evidence against locality. None was found. Thus an effort began to develop direct tests of Bell's inequality. A series of experiments conducted by Alain Aspect and his colleagues culminated in one reported in 1982 in which the polarizer angles were changed at a rapid enough rate to prevent light from either polarizer from reaching the more distant detector in time to influence the result[2]. The experimental design was like that shown in Figure 8.4.

The only reservation Aspect and his colleagues expressed about this experiment concerned the lack of randomness in the polarizer settings.

A more ideal experiment with random and complete switching would be necessary for a fully conclusive argument against the whole class of supplementary-parameter theories obeying Einstein's causality[2].
This is a little surprising, because in a previous paper Aspect had pointed out another problem.
Only two loopholes remain open for advocates of realistic theories without action at a distance. The first one, exploiting the low efficiencies of detectors, could be ruled out by a feasible experiment. The second one, exploiting the static character of all previous experiments, could also be ruled out by a ``timing experiment'' with variable analyzers now in progress[3].
The experiment ``now in progress'' was the one widely considered to be conclusive at the time. This was true even though the detector efficiency problem remained unaddressed until an experiment in 2001[44]. That later experiment lacked adequate timing constraints and thus is itself inconclusive.

Aspect claimed conclusive results in an experiment prior to both of the above reports.

Our results, in excellent agreement with the quantum mechanical predictions, strongly violate the generalized Bell's inequalities, and rule out the whole class of realistic local theories[1].

Why all the confusion about what constitutes a conclusive experiment? There are no doubt many reasons. Perhaps the biggest problem is the conviction among most physicists that quantum mechanics is correct about this. When one is certain about the expected results of an experiment, one's critical faculty is handicapped. It is hard to fully consider all the possible ways that an experimental claim may be overly broad.

Many physicists have an emotional investment in the outcome of these experiments that compromises their objectivity. Einstein is universally regarded as the greatest physicist of the twentieth century, even though he barely lived past the middle of that century and all of his major work was done by 1935, the date of the paper known as EPR that led to these experiments. There is a strong conviction that physics has moved far beyond the naive realism of Einstein and that these experiments are the objective proof of that. If they ultimately turn out to vindicate Einstein, this will be an enormous blow to the ego of many physicists working in the foundations of quantum mechanics.

Aspect's experiment was widely regarded at the time as conclusive, especially in the popular press. The reservation about randomly varying the polarizer angles seemed like nitpicking. Were the photons supposed to figure out the pattern and use it to time their detections?

In 1985 James D. Franson published a paper showing that the timing constraints in this experiment were not adequate to confirm that locality was violated[24]. The difficulty lies in establishing the time of detection, for a detection starts as a microscopic event, like the atomic decay that determines the fate of Schrödinger's cat (see Section 7.3). While few believe the cat's fate remains undecided until one opens the box, the exact time at which that fate becomes certain is unclear. For the timing in a test of Bell's inequality to be conclusive, we must time the occurrence of a macroscopic event. The trouble is that there is no clear definition of what a macroscopic event is. Franson observed the following.

The time interval over which the probability amplitudes discussed above may simultaneously exist and interact in the experiment by Aspect, Dalibard, and Roger could conceivably be comparable to the 89-nsec lifetime of the excited atomic state which produces the pair of photons. If the photon emission time remains indeterminate for this length of time, then it is plausible that the final outcome of the event may remain indeterminate for a comparable amount of time[24].

Franson introduced the phrase ``delayed determinism''. This sounds very strange but, as he was at pains to point out in his paper, it is an integral part of quantum mechanics and may well be part of a local realistic theory. There is nothing in existing theory that says when an event is finally determined. Microscopic systems can exist in a superposition of states, like the dead and live cat, because interference effects are observed from both states simultaneously. Any conclusive result must involve an unambiguously macroscopic measurement of time. On those grounds Aspect's experiment failed.

In spite of Franson's objections and the additional problem of detector efficiencies, the belief remained widespread that Aspect's experiment was decisive. ``Proposal for a Loophole-free Bell Inequality Experiment''[38], published in 1993, detailed the problems in existing experiments and how they might be overcome.

A major stride in addressing the timing loophole was described in ``Violation of Bell's inequality under strict Einstein locality''[52]. This provided a much tighter (although not absolutely conclusive) constraint on timing by separating the detectors by 400 meters; fiber optics made this practical. It takes light about 1.3$\mu s$ (1$\mu s$ is one millionth of a second) to travel 400 meters, while the time between changing a polarizer setting and registering a detection was less than 100$ns$ (1$ns$ is one billionth of a second).
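
To see the margin explicitly (this is just my arithmetic using the rounded figures above):

$t = 400\ \mathrm{m} / c = 400\ \mathrm{m} / (3 \times 10^{8}\ \mathrm{m/s}) \approx 1.3\ \mu s \gg 100\ ns$

so each local measurement, from the setting change to the registered detection, was complete more than ten times faster than any light-speed signal from the other station could arrive. The paper draws the following conclusions.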

While our results confirm the quantum theoretical predictions, we admit that, however unlikely, local realistic or semi-classical interpretations are still possible. Contrary to all other statistical observations we would then have to assume that the sample of pairs registered is not a faithful representative of the whole ensemble emitted. While we share Bell's judgment about the likelihood of that explanation, we agree that an ultimate experiment should also have higher detection/collection efficiency, which was 5% in our experiment.
Further improvements, e.g. having human observers choose the analyzer directions, would again necessitate major improvements of technology as was the case in order to finally, after more than 15 years, go significantly beyond the beautiful 1982 experiment of Aspect et al.[2]. Expecting that any improved experiment will also agree with quantum theory, a shift of our classical philosophical positions seems necessary. Among the possible implications are nonlocality or complete determinism or the abandonment of contrafactual conclusions. Whether or not this will finally answer the eternal question: ``Is the moon there, when nobody looks?''[40], is certainly up to the reader's personal judgment[52].

The authors admit that the detector efficiencies make the experiment less than conclusive, yet they are completely confident that no future experiment in this area will contradict quantum mechanics. The problem with this attitude is illustrated by their speculation on what it all means. They can ask ``Is the moon there when nobody looks?'' because quantum mechanics says nothing about the physical state between observations. The only thing that evolves between observations is a wave function of probability densities in configuration space. I suspect something physical does happen between observations, and that alone makes quantum mechanics incomplete. Given the complete ignorance about what is happening between the creation of the photon pairs and their detection, a higher degree of skepticism about claims of a truly conclusive experiment would seem to be called for.
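
To make the fair-sampling worry concrete, here is a toy simulation of my own (it is only an illustration, not a model proposed in any of these papers and not a candidate physical theory). It uses a deterministic local hidden-variable model whose full ensemble of pairs respects the CHSH bound of 2, but in which whether a detector fires depends on both the hidden variable and the local analyzer setting. The detected subsample then appears to violate the bound even though nothing nonlocal is going on.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
lam = rng.uniform(0.0, 2.0 * np.pi, n)   # hidden variable carried by each pair

def outcome(setting, lam):
    # Deterministic local outcome, +1 or -1, fixed by the hidden variable.
    return np.where(np.cos(lam - setting) >= 0.0, 1.0, -1.0)

def fires(setting, lam, threshold=np.cos(np.pi / 6)):
    # Toy detector: fires only when the hidden variable is well aligned with
    # the local analyzer setting, so detection is setting-dependent (unfair sampling).
    return np.abs(np.cos(lam - setting)) > threshold

def correlation(a, b, detected_only):
    A, B = outcome(a, lam), outcome(b, lam)
    if detected_only:
        keep = fires(a, lam) & fires(b, lam)
        return np.mean(A[keep] * B[keep])
    return np.mean(A * B)

# Standard CHSH settings.
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4

for detected_only in (False, True):
    S = (correlation(a1, b1, detected_only) + correlation(a1, b2, detected_only)
         + correlation(a2, b1, detected_only) - correlation(a2, b2, detected_only))
    which = "detected pairs only" if detected_only else "all pairs"
    print(f"CHSH S, {which}: {S:+.2f}")
    # Typical output: S is about +2.0 for all pairs (the local bound)
    # but about +4.0 for the detected subsample.

Nothing in this toy resembles a real detector; it only shows why, with a 5% collection efficiency, the assumption that the detected pairs are a fair sample of the whole ensemble is doing real work, and why closing the detection loophole matters.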

There was a similar experiment at about the same time that achieved an even greater separation of the detectors[47].

A recent experiment finally addressed the detector efficiency problem. The following portion of the abstract describes what was accomplished in this experiment.

Here we have measured correlations in the classical properties of massive entangled particles ($^9$Be$^+$ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's `signal' is 2.25 $\pm$ .03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called `detection' loophole[44].

At the end of the article the authors say they were not able to overcome the timing or ``light cone'' loophole in this experiment. Thus there are experiments that individually address each of these two loopholes, but no single conclusive experiment. In addition, the timing loophole is somewhat ill defined because of the lack of a clear distinction between microscopic and macroscopic.
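
For quantitative context, the ``signal'' quoted above is, as I read the paper, the CHSH combination of correlations, for which any local realistic theory must satisfy

$S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,$

while quantum mechanics allows values of $|S|$ up to $2\sqrt{2} \approx 2.83$ for suitably chosen settings. The reported $2.25 \pm 0.03$ thus lies well above the local bound and below the quantum maximum.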

A recent theoretical paper[23] analyzes noise in these experiments and raises new and difficult issues. Following is the abstract and conclusion of this paper.

We emphasize the difficulties of an experiment that can definitely discriminate between local realistic hidden variables theories and quantum mechanics using the Bell CHSH inequalities and a real measurement apparatus. In particular we analyze some examples in which the noise in real instruments can alter the experimental results, and the nontrivial problem to find a real ``fair sample'' of particles to test the inequalities.
[...]
Bell's inequality tests necessitate major improvements of technology in order to finally, after more than 15 years, go significantly beyond the 1982 experiment of Aspect et al.[2]. While expecting that any improved experiment will also agree with quantum theory, actually the final answer to the eternal question: ``Is the moon there, when nobody looks?'', is certainly up to our judgement capability. But sometime also the question ``Is the moon there when we look at it by a noisy telescope?'' appears very hard to address.

It is not difficult to see why most physicists are confident in their expectations for these experiments. Quantum mechanics is a spectacularly successful theory, producing extraordinary predictions, many with astounding accuracy far surpassing anything possible with classical physics. One can easily understand Bell's skepticism about the detection efficiency loophole.

...it is hard for me to believe that quantum mechanics works so nicely for inefficient practical set-ups and is yet going to fail badly when sufficient refinements are made[7].
How can a theory that has been so spectacularly reliable and successful suddenly falter because of improved detector efficiency? That is one way to look at things and the way most physicists do.

An alternative view focuses on how extraordinary these predictions are and on how convoluted and improbable a theory quantum mechanics is. Locality is the most powerful simplifying assumption in physics. Without it, any event in the universe can influence any other, and physical theories become problematic if not impossible. How is it that the universe violates locality but only does so in obscure and difficult experiments that retain significant loopholes? One would expect a universe containing the complexity required for nonlocality to be spectacularly nonlocal. One would hardly expect a theory like relativity, which is local at its core, to be one of the two dominant theories in such a universe. Of course the universe does not have to live up to our expectations, but simplicity and elegance have often been a guide to deeper and richer physical theory, and these predictions of quantum mechanics are about as far from simplicity and elegance as one can get.

Bell proved that the configuration space model of quantum theory cannot be mapped into physical space except with an explicitly nonlocal model such as Bohm's[8]. The alternatives are action at a distance (which is not relativistic) or a model, like configuration space, that is not relativistic in its structure but can be, to a limited degree, relativistic in its predictions. In addition, quantum mechanics says nothing about what happens in physical space between observations. If it is meaningful to talk about an objective physical state that changes continuously, then quantum mechanics, which lacks such a description, is incomplete.

Quantum mechanics fails to define measurement. Does it take a conscious human observation? Is a macroscopic event enough? If so, how big is macroscopic? Is a gauge that can be read by a human enough? How about the change in state of a single bit in a computer's memory? If there is an objective definition of measurement, then quantum mechanics is an incomplete theory.

One can argue that the philosophy of contemporary physics is to define as meaningless what one does not understand. It is one thing to avoid such problems because one has no idea how to deal with them and quite another to say fundamental philosophical principles need to be changed. These predictions deserve a high level of skepticism no matter how many inconclusive experiments agree with quantum mechanics.

The conviction that quantum mechanics will not be falsified by these experiments stems in part from the difficulty of imagining how an alternative local theory could account for the existing experimental record. In the next section I speculate about the properties of discrete models based on the wave equation. Such a model might account for the existing experimental results. This speculation is far from being a new theory, but that is how one must start if, as seems likely, such models can only be developed in conjunction with experiments.

