Decoherence Interpretation Falsified?
"Decoherence" is both a real phenomenon and part of an interpretation purporting to tell us why we don't observe macroscopic quantum superpositions like Schroedinger's Cat. ("Explanation" is too strong a word, often avoided even by supporters.) Some would say the decoherence interpretation (DI) avoids the paradoxical quantum measurement problem of "collapse of the wave function." This supposedly comes about because, roughly, decoherence and entanglement scramble the phase relations between the components of a superposition (like "dead" + "alive") and thereby effectively convert it into a mixture (like classical particles, roughly speaking). Some say the unactualized alternative slips into another universe. (All easy to find online.) I don't agree, and have made rebuttals in other posts here and elsewhere. I am heartened that Roger Penrose made similar complaints in e.g. Shadows of the Mind, and that critics like N. P. Landsman have picked at various loose ends and problems. Yet few DI advocates are swayed by critics. As Landsman writes: "Like capitalism, decoherence seems here to stay."
I thought of an experiment we could do to demonstrate the weakness of the DI. It shows we can recover information that would have been lost if indeed "decoherence converts a superposition into a mixture," as some have (IMHO too boldly) proclaimed. That's better than mere arguments against the DI. Briefly, it shows that true but decohered superpositions would produce one set of results in this system, while a true mixture (or anything, by definition, "indistinguishable" from a mixture) would, if present where detectors are usually deployed, produce a different set of results. Hence they can't both describe the situation. Since I gather there is agreement ("conventional QM") that the first outcome is the correct result, that causes difficulties for the DI. (Note: the importance of this argument may go beyond the DI, if such information "should" simply have been "lost," period, apart from any interpretive framework.) The predicted outcomes are already derivable from known quantum optics, so I assembled the case from existing knowledge. (It still should be empirically verified.) I hold, therefore, that the "quantum measurement paradox" remains unresolved, perhaps the deepest mystery about the nature of our world.
My proposal is fairly easy to describe. (The math is less simple, but not hard to work through.) For technical reasons I use some ASCII conventions, e.g. "*" for multiplication where needed. Synopsis: even if we completely scramble the relative phases of split waves over a history of instances and then recombine them, we can recover their original amplitudes when the secondary outputs are recombined again at a subsequent beamsplitter. This would not be possible for a genuine mixture, as opposed to an apparent one, which is what the case below is revealed to be. Hence decoherence does not always convert an ensemble of superpositions of random phase into a mixture.
Consider a Mach-Zehnder interferometer (as shown above) with a first beamsplitter BS1 that does not divide intensity equally. For sample values we'll use intensity along bottom leg L1 = 64% and top leg L2 = 36%; hence the relative amplitudes are a = 0.8 and b = 0.6. We apply a 90-degree (i) phase change at each half-silvered mirror and treat full reflections and transmissions as not changing phase (following Roger Penrose's custom; any convention is fine as long as it's consistent). We can represent what happens to a single photon entering BS1 as: transmitted state a|1> goes along the bottom, and reflected state ib|2> along the top.
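As a numerical check of this beamsplitter convention, here is a minimal sketch (my own illustrative code, assuming NumPy; the function name `beamsplitter` is mine, not a standard API):

```python
import numpy as np

# Convention used in the text: transmission leaves the phase unchanged,
# reflection multiplies the amplitude by i.
# For a splitter with amplitude transmissivity t and reflectivity r
# (t^2 + r^2 = 1), the 2x2 transfer matrix on the two input ports is:
def beamsplitter(t):
    r = np.sqrt(1.0 - t**2)
    return np.array([[t, 1j * r],
                     [1j * r, t]])

# BS1 with intensities 64% / 36% along the two legs: t = 0.8, r = 0.6.
BS1 = beamsplitter(0.8)
state = BS1 @ np.array([1.0, 0.0])   # one photon enters the first port
print(state)   # a|1> = 0.8 along L1, ib|2> = 0.6i along L2
```

The matrix is unitary, so intensities 0.64 and 0.36 add to one, as they must.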
However, suppose some interaction introduces a new phase shift to the wave in L2, written as a complex phase factor φ. That changes the amplitude in L2 to iφb|2>. Then we recombine the beams at BS2, a 50/50 splitter/recombiner. It combines relative phases just as BS1 did, with the output from the lower face of BS2 called channel CA2, and that from the other face channel CB2. (This keeps the numbering consistent and allows easy reference to the original output from BS1.) I will use "s" for sqrt(0.5) ~ 0.7; a half-silvered mirror reduces intensity to 1/2 and thus multiplies amplitude by s. Hence,
(1) CA2 superposition = s[ia|1> + iφb|2>],
CB2 superposition = s[a|1> - φb|2>].
We find the intensity (and the photon statistics, if we collect at this point) by inserting into
(2) I = A^2 + B^2 + 2AB cos theta,
where the net (superposed) amplitudes A and B are built from the combinations of |1> and |2>, and theta is their relative phase. If a = b and φ = 1 (no extra phase shift), the two terms in CA2 are in phase, so the intensity out of CA2 = 1/4 + 1/4 + 1/2 = 1, and out of CB2 = 1/4 + 1/4 - 1/2 = 0. That is equivalent to bright and dark fringes. If a = 0.8 and b = 0.6, we get CA2 = 0.32 + 0.18 + 0.48 = 0.98 and CB2 = 0.32 + 0.18 - 0.48 = 0.02: hence, lower-contrast fringes. If a further phase difference is introduced between |1> and |2>, the relative intensities change accordingly, as can be calculated (but they must still add to one, of course). If we introduce photons one by one into such a device, the statistics of detection are the same. To make the argument and experiment about "the wave function of a single photon," that is what we'd do. (Despite some states of unclear photon number, an effective "one photon at a time" *can* be introduced into such a device. It means, basically, one net "click" from arrays of ideal photon detectors covering all avenues of escape.)
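As a check on the arithmetic above, here is a short numerical sketch of the BS2 output intensities (assuming NumPy; the helper name `bs2_outputs` is mine):

```python
import numpy as np

s = np.sqrt(0.5)           # amplitude factor of a 50/50 splitter
a, b = 0.8, 0.6            # amplitudes out of BS1

def bs2_outputs(phi):
    """BS2 output amplitudes for extra phase factor phi = e^{i*theta} in L2."""
    CA2 = s * (1j * a + 1j * phi * b)   # |1> reflected (x i), |2> transmitted
    CB2 = s * (a - phi * b)             # |1> transmitted, |2> reflected: i*(i*phi*b) = -phi*b
    return CA2, CB2

CA2, CB2 = bs2_outputs(phi=1.0)        # no extra phase
print(abs(CA2)**2, abs(CB2)**2)        # ~0.98 and ~0.02, as in the text
```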
Now, what if we introduced complete decoherence into the picture, in a manner like Chad Orzel uses (similar in effect to models found elsewhere), e.g. in the post at http://scienceblogs.com/principles/2008/11/manyworlds_and_decoherence.php? (We had quite a debate there and elsewhere; I admit being testy sometimes, but think I'm in the right in the end. I add that Chad seems not to be an advocate of the strongest claims about the DI. Commenters also suggested http://www.ipod.org.uk/reality/reality_decoherence.asp by Andrew Thomas.) Now we have to integrate over a range of randomly varying φ and divide by that full range to get the mean value. This is easy for a uniform distribution of phase differences, since
(3) integral (Eq. 2) d theta = (A^2 + B^2)theta + 2AB sin theta + C.
We pick a range that completely scrambles the phases ("complete decoherence"), such as between +/- pi, substitute into the integral, and divide by the range 2*pi. Then (since the sine at each limit = 0) we find the result out of either channel is simply A^2 + B^2 = (a^2 + b^2)/2. This destroys the statistical interference pattern of photon hits, even in the case a ≠ b. The output acts like a "mixture" of photons, each exiting BS2 from either CA2 or CB2 with 50/50 chance, but not "both at the same time."
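The phase averaging can be verified numerically (a sketch assuming NumPy; I approximate the integral by a uniform grid of phase angles over +/- pi):

```python
import numpy as np

s = np.sqrt(0.5)
a, b = 0.8, 0.6

# Average the BS2 output intensities over a uniformly distributed phase
# theta in [-pi, pi] ("complete decoherence"). The interference term
# 2AB*cos(theta) averages away, leaving (a^2 + b^2)/2 = 0.5 per channel.
thetas = np.linspace(-np.pi, np.pi, 100001)
phi = np.exp(1j * thetas)
I_CA2 = np.abs(s * (1j * a + 1j * phi * b))**2
I_CB2 = np.abs(s * (a - phi * b))**2
print(I_CA2.mean(), I_CB2.mean())   # both ~0.5: fringes are gone
```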
A strong DI follower would say (following an ensemble interpretation): what would have been a coherent superposition is now a mere "mixture," despite being comprised of interacting waves. They follow an essentially positivist yet post-modern tack: "we couldn't tell the difference, so the output should be regarded as the same as a 'real' mixture." Hence, somehow, we don't have to worry about why a hit occurred at the CA counter instead of the CB counter, even though under old-fashioned (!) QM there are still wave amplitudes (usually) at both counters - and a mysterious collapse is still needed to sweep the whole big mess into one little atom that absorbed it all. Their argument sounds circular (what causes any "statistics" in the first place, instead of distributed amplitudes, so as to allow comparing one set of stats to another?), and I'm fortified by seeing similar misgivings from e.g. Roger Penrose. But the DI is popular because it lets the perplexed brush off their worries about paradoxical features of reality. I can't blame them for wanting to try, but Nature is what it is ...
So, is the DI view really apt - even in its own terms? I think not - but that depends on a crucial distinction. Instead of intercepting photons and collecting statistics right out of BS2, let's recombine outputs CA2 and CB2 at a third beamsplitter, BS3. Since amplitudes are again reduced by s and reflection multiplies the amplitude by i, the new combined output is:
CA3 = s[iCA2 + CB2] = s[s(-a|1> - φb|2>) + s(a|1> - φb|2>)] = -φb|2>
CB3 = s[CA2 + iCB2] = s[s(ia|1> + iφb|2>) + s(ia|1> - iφb|2>)] = ia|1>
Note that since φ is just a variable complex phase factor, the amplitudes out of BS3 are the same in each trial, and therefore the final average over a range of φ will reflect this as well. So, from BS3 we recover the respective original amplitudes a and b (and hence the same original statistics) that came out of BS1! Of less importance, IMHO, is that we can later recover the phase relation that entered BS2. That information was hidden in the relationship between the wave outputs from BS2. It would not show up in the raw statistics of hits if we used detectors right after BS2 instead of letting the waves continue on through BS3.
This result could not happen for a mixture coming out of BS2, since photons that came from either CA2 or CB2, but not "both at the same time," would just scatter as individuals at BS3. Their statistics from BS3 would be a 50/50 output instead of a^2, b^2. (However, if we imagine that "a mixture" enters BS2 instead, then there is no discrepancy.) This demonstrates the continued wave nature of the output from BS2, despite total scrambling of phases, which on some views destroys the superposed character of the photon wave function. The recovery of the BS1 exit amplitudes from BS3 shows that. So one of the deep mysteries of reality remains a challenge.
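The full chain - random phase each trial, no fringes at BS2, yet recovery of the BS1 amplitudes at BS3 - can be simulated in a few lines (a sketch assuming NumPy; the Monte Carlo setup is my own):

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.sqrt(0.5)
a, b = 0.8, 0.6

# A fresh random phase each "trial": complete decoherence between the legs.
theta = rng.uniform(-np.pi, np.pi, size=100000)
phi = np.exp(1j * theta)

# BS2 outputs (superposition picture):
CA2 = s * (1j * a + 1j * phi * b)
CB2 = s * (a - phi * b)

# Detectors right after BS2 would see no interference: ~50/50 per channel.
print(np.mean(np.abs(CA2)**2), np.mean(np.abs(CB2)**2))

# Recombine at BS3 (reflection x i, transmission unchanged):
CA3 = s * (1j * CA2 + CB2)   # algebra gives -phi*b |2>
CB3 = s * (CA2 + 1j * CB2)   # algebra gives  i*a  |1>

# The BS1 amplitudes come back, trial by trial, independent of phi:
print(np.mean(np.abs(CA3)**2), np.mean(np.abs(CB3)**2))   # b^2 = 0.36, a^2 = 0.64

# A genuine mixture entering BS3 (each photon definitely in CA2 *or* CB2,
# not both) would instead split 50/50 at BS3 - a distinguishable prediction.
```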
IMHO, Bye bye decodance! [snark snipped for peaceful purposes!]
Regards, and Happy New Year (and New Decade, so they say !)
[Notes: My post provoked an unfortunate reaction at another blog. It was acerbically critiqued in the post "Neil Bates Owes Me $160," with a key comment there. In that comment, Prof. Chad Orzel of the Uncertain Principles sci-blog admitted that my math was not wrong, as he had earlier claimed it was. He still doesn't think it proves any good point about quantum measurement, and he may be right (not that it's always clear what the implications of a quantum experiment are, anyway!). But readers should decide that for themselves.
Even though his checking a few things could have avoided misunderstandings, I don't blame him for most of that. First, my presentation was not adequately clear, for various reasons. Also, it was not fair of me to fish for a response from him (in the hope of garnering attention and "publicity"), which was partly why I referenced his earlier posts. I didn't mean to anger anyone or start a feud. Sadly I did, and I apologize for that. My main reason for noting his MZI example was that it corresponds to how I could show there is a distinction between superpositions and mixtures at the critical juncture. Also, it (or something similar) is in his new book, which is selling well and deservedly so.
As to where the "mixture" is to be imagined: if we can tell the difference between a real mixture and the actual superpositions, even after decoherence, then the claim "there's no way to tell the difference" is wrong. Chad Orzel used the similar phrase "since the end result is indistinguishable from a situation in which you have particles that took one of two definite paths" in the post linked above.
However, he was imagining a mixture occurring even before we recombine at BS2. Now we have to ask: in which treatments of this problem would the output from BS2 be considered a mixture? Well, in order to have any significance for the measurement problem in the case of detectors just past BS2, the mixture would have to be present outside BS2. But then the result (50/50) would not agree with this experiment. OTOH - and this is a supreme irony - if you take the mixture as present before entering BS2, then the entering photons would turn back into a superposition anyway! In that case we'd get my result, but it wouldn't help show selection at detectors. So the DI either fails to be relevant to detection issues, or it is factually wrong.
BTW I wish no continuing "feud" as such, just fresh looks at this problem. tx!]