December 31, 2009

Decoherence Interpretation Falsified?

[Note: this post has been revised here and there, off and on. I gave up specifying each change in []s; just NB that it's been changed. The essential point is the same, but with a major new caveat about just where the conversion from superposition to "mixture" is imagined to take place. See also the note below about the controversy provoked at another blog. Perhaps most important: I am defending orthodox QM from the Decoherence Interpretation, not challenging it with some alternative theory. Traditional QM says that wave functions continue to evolve together in superposition until some mysterious measurement process selects and localizes an eigenstate, as per the projection postulate. Hence we have no idea why Schroedinger's Cat doesn't stay both alive and dead. DI enthusiasts like W. Zurek say that decoherence gives a "natural" explanation (in some sense which is often unclear) of why we don't find such macro superpositions. I propose that my experimental setup would distinguish the two ways of imagining the evolution of wave functions. Hence they would no longer be mere "interpretations," but readily subject to experimental testing. I continue to refer to DI as an "interpretation" for convenience and by tradition. I also posted similar material to sci.physics and sci.optics.]

"Decoherence" is both a real phenomenon and part of an interpretation purporting to tell us why we don't observe macroscopic quantum superpositions like Schroedinger's Cat. ("Explanation" is too strong, often avoided even by supporters.) Some would say the DI avoids the paradoxical quantum measurement problem of "collapse of the wave function." This supposedly comes about because, roughly, decoherence and entanglement of the phase relations between superpositions (like "dead" + "alive") effectively convert them into mixtures (like classical particles, roughly speaking). Some say the unactualized alternative slips into another universe. (All easy to find online.) I don't agree, and have made rebuttals in other posts here and elsewhere. I am heartened that Roger Penrose made similar complaints in e.g. Shadows of the Mind, and critics like N. P. Landsman have picked at various loose ends and problems. Yet few DI advocates are swayed by critics. As Landsman writes: "Like capitalism, decoherence seems here to stay."

I thought of an experiment we could do to demonstrate the weakness of the DI.  It shows we can recover information that would have been lost if indeed "decoherence converts a superposition into a mixture" as some have IMHO too-boldly proclaimed. That's better than just arguments per se against DI. Briefly, it shows that true but decohered superpositions would produce one set of results in this system, but a true mixture (or anything, by definition, "indistinguishable" from a mixture) would (if present where detectors are usually deployed) produce a different set of results. Hence they can't both describe the situation. Since I gather there is agreement ("conventional QM") that the first outcome is the correct result, that causes difficulties for the DI. (Note: the importance of this argument may go beyond DI, if such information "should" simply have been "lost" period, apart from any interpretative framework.) The predicted outcomes are already derivable through known quantum optics, so I assembled a case from existing knowledge. (It still should be empirically verified.) I hold therefore that the "quantum measurement paradox" remains unresolved, perhaps the deepest mystery about the nature of our world.

My proposal is fairly easy to describe. (The math is less simple but not hard to work through.) I use some ASCII conventions for now due to tech issues, so "*" for multiplication where needed, etc. Synopsis: even if we completely scramble the relative phases of split waves over a history of instances and then recombine them, we can recover their original amplitudes when the secondary outputs are recombined again at a subsequent beamsplitter. This would not be possible from a genuine mixture, as opposed to an apparent one, which is what the case below is revealed to be. Hence decoherence does not always convert an ensemble of superpositions of random phase into a mixture.

Consider a Mach-Zehnder interferometer (as shown above) with a first beamsplitter BS1 that does not divide intensity equally. For sample values we'll use intensity 64% along bottom leg L1 and 36% along top leg L2. Hence, relative amplitudes are a = 0.8 and b = 0.6. We use a 90 degree (factor i) phase change at each half-silvered surface and treat full reflections and transmissions as not changing phase (per the custom of Roger Penrose; OK as long as we're consistent). We can represent what happens to a single photon entering BS1 as: transmitted state a|1> goes along the bottom, and reflected state ib|2> along the top.
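For concreteness, the bookkeeping above can be sketched numerically. This is a minimal Python sketch of my own (variable names are mine, not part of the setup), representing the state after BS1 as a pair of complex amplitudes:

```python
# Minimal sketch: the state after BS1 as complex amplitudes over (|1>, |2>).
# Convention as in the text: reflection off a half-silvered surface -> factor i.
a, b = 0.8, 0.6                  # amplitudes from intensities 0.64 and 0.36
state_after_BS1 = [a, 1j * b]    # a|1> transmitted (bottom), ib|2> reflected (top)

total = sum(abs(c) ** 2 for c in state_after_BS1)
print(round(total, 12))          # 1.0 : the leg intensities sum to one
```

This just confirms the chosen sample values respect unitarity (the two leg intensities add to one), which is needed for the later statistics to be probabilities.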

However, suppose some interaction introduces a new phase shift to the wave in L2, written as a complex factor φ = e^(iu) for phase angle u. That changes the state in L2 to iφb|2>. Then we recombine the beams at BS2, a 50/50 splitter/recombiner. It combines relative phases as did BS1, with the output from the lower face of BS2 called channel CA2, and that from the other face called channel CB2. (This keeps the numbering consistent and allows easy reference to the original output from BS1.) I will just use "s" for sqrt(0.5) ~ 0.7. A half-silvered mirror reduces intensity to 1/2 and thus multiplies amplitude by s. Hence,

(1) CA2 superposition = s[ia|1> + iφb|2>],
      CB2 superposition = s[a|1> - φb|2>].

We find intensity (and photon statistics if we collected at this point) by inserting into

(2) I = A^2 + B^2 + 2AB cos theta,

where A and B are the net (superposed) amplitudes of the |1> and |2> components, and theta is their relative phase. If a = b and u = 0 (so the legs arrive in phase, aside from a common overall factor i), then the intensity out of CA2 = 1/4 + 1/4 + 1/2 = 1, and out of CB2 = 1/4 + 1/4 - 1/2 = 0. That is equivalent to bright and dark fringes. If a = 0.8 and b = 0.6, we get CA2 = 0.32 + 0.18 + 0.48 = 0.98 and CB2 = 0.32 + 0.18 - 0.48 = 0.02. Hence, lower-contrast fringes. If a further phase difference is introduced between |1> and |2>, then the relative intensities change accordingly, as can be calculated (but they still must add to one, of course). If we introduce photons one by one into such a device, the statistics of detection are the same. To make the argument and experiment about "the wave function of a single photon," that is what we'd do. (Despite some states of unclear photon number, an effective "one photon at a time" *can* be introduced into such a device. It means, basically, one net "click" from arrays of ideal photon detectors covering all avenues of escape.)
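As a check on the arithmetic, here is a sketch (Python; the names are my own) that evaluates the BS2 outputs directly from the complex amplitudes of Eq. (1), rather than via Eq. (2):

```python
import cmath

# Check of the BS2 intensities for a = 0.8, b = 0.6 and no extra phase (u = 0).
# Reflection off a half-silvered surface multiplies the amplitude by i;
# the 50/50 splitter contributes the factor s = sqrt(0.5).
a, b = 0.8, 0.6
s = cmath.sqrt(0.5)
u = 0.0
phi = cmath.exp(1j * u)             # the complex phase factor called phi in the text

CA2 = s * (1j * a + 1j * phi * b)   # Eq. (1), one output of BS2
CB2 = s * (a - phi * b)             # Eq. (1), the other output

I_A = abs(CA2) ** 2                 # detection probability at channel CA2
I_B = abs(CB2) ** 2
print(round(I_A, 2), round(I_B, 2))  # 0.98 0.02, matching the numbers above
```

Varying u shifts intensity between the two channels, but the two probabilities always sum to one.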

Now, what if we introduced complete decoherence into the picture, in a manner like the one Chad Orzel uses (similar in effect to models found elsewhere) to model decoherence in e.g. the post at ? (We had quite a debate there and elsewhere. I admit being testy sometimes, but think I'm in the right in the end. I add that Chad seems not to be an advocate of the strongest claims about DI; a similar treatment was also suggested by commenter Andrew Thomas.) Now we have to integrate over a range of randomly varying phase u and divide by that full range to get the mean value. This is easy for a uniform distribution of phase differences, since

(3) integral of (Eq. 2) d theta = (A^2 + B^2) theta + 2AB sin theta + C.

We pick a range that completely scrambles the phases ("complete decoherence"), such as between +/- pi, substitute into the integral, and divide by the range 2 pi. Then (since the sine of each limit = 0) we find the result out of either channel is simply A^2 + B^2 = (a^2 + b^2)/2. This destroys the statistical interference pattern of photon hits, even in the case a != b. The output acts like a "mixture" of photons, each exiting BS2 from either CA2 or CB2 with 50/50 chance, but not "both at the same time."
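The phase averaging can also be confirmed numerically. A sketch, assuming the same uniform phase distribution over +/- pi (names are mine):

```python
import cmath, math

# Average the BS2 intensities over a completely scrambled phase u in [-pi, pi].
# The interference term 2AB cos(u) averages to zero, leaving A^2 + B^2.
a, b = 0.8, 0.6
s = cmath.sqrt(0.5)
N = 10000
mean_A = mean_B = 0.0
for k in range(N):
    u = -math.pi + 2 * math.pi * (k + 0.5) / N   # uniform grid over [-pi, pi]
    phi = cmath.exp(1j * u)
    mean_A += abs(s * (1j * a + 1j * phi * b)) ** 2 / N
    mean_B += abs(s * (a - phi * b)) ** 2 / N

print(round(mean_A, 3), round(mean_B, 3))        # 0.5 0.5 = (a^2 + b^2)/2 each
```

Either channel collects half the photons on average, with no fringe contrast left in the raw hit statistics.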

A strong DI follower would say (following an ensemble interpretation): what would have been a coherent superposition is now a mere "mixture," despite being composed of interacting waves. They would follow an essentially positivist yet post-modern tack: "we couldn't tell the difference, so the output should be regarded as being the same as a 'real' mixture." Hence, somehow we don't have to worry about why a hit occurred at the CA counter instead of the CB counter, when under old-fashioned (!) QM there are still wave amplitudes (usually) at both counters, and a mysterious collapse was still needed to sweep the whole big mess into one little atom that absorbed it all. Their argument sounds circular (what causes any "statistics" in the first place instead of distributed amplitudes, to allow comparing one set of stats to another, etc.?), and I'm fortified by seeing similar misgivings from e.g. Roger Penrose. But DI is popular because it lets the perplexed brush off their worries about paradoxical features of reality. I can't blame them for wanting to try, but Nature is what it is ...

So, is the DI view really apt, even in its own terms? I think not, but that depends on a crucial distinction. Instead of intercepting photons and collecting statistics right out of BS2, let's recombine outputs CA2 and CB2 at a third beamsplitter, BS3. Since amplitudes are again reduced by s and reflection multiplies by i, the new combined outputs are:

CA3 = s[iCA2 + CB2] = s[s(-a|1> - φb|2>) + s(a|1> - φb|2>)] = -φb|2>
CB3 = s[CA2 + iCB2] = s[s(ia|1> + iφb|2>) + s(ia|1> - iφb|2>)] = ia|1>

Note that since φ is just a variable unit complex factor, the amplitude ratios out of BS3 are the same each trial, and therefore the final average over a range of φ reflects this as well. So from BS3 we recover the respective original amplitudes a and b (and hence the same original statistics) that came out of BS1! Of less importance, IMHO, is that we could later recover the phase relation that entered BS2. That information was hidden in the relationship between the wave outputs from BS2; it would not show in raw hit statistics if we used detectors right after BS2 instead of letting the waves continue on through BS3.

This result could not happen from a mixture coming out of BS2, since photons that came from either CA2 or CB2, but not "both at the same time," would just scatter as individuals at BS3. Their statistics from BS3 would be 50/50 output instead of a^2, b^2. [However, if we imagine that "a mixture" enters BS2 instead, then there is no discrepancy.] This demonstrates the continued wave nature of the output from BS2, despite total mixing of phases, which on some views destroys the superposed character of the photon wave function. The recovery of the BS1 exit amplitudes at BS3 shows that. One of the deep mysteries of reality remains a challenge.
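Putting the pieces together, the two predictions at BS3 can be compared in one sketch: the decohered superposition carried through BS3, versus a photon "really" exiting BS2 via one definite channel. (Again, the structure and names here are my own illustration of the argument, under the same phase conventions.)

```python
import cmath, math, random

# Compare BS3 statistics for (a) the decohered superposition and
# (b) a true mixture of photons exiting BS2 via one definite channel.
a, b = 0.8, 0.6
s = cmath.sqrt(0.5)

# (a) Superposition: propagate amplitudes through BS2 and BS3 with a
# fresh random phase each trial; CA3 comes out as -phi*b|2> every time.
p_CA3 = 0.0
trials = 2000
for _ in range(trials):
    phi = cmath.exp(1j * random.uniform(-math.pi, math.pi))
    CA2 = s * (1j * a + 1j * phi * b)
    CB2 = s * (a - phi * b)
    CA3 = s * (1j * CA2 + CB2)      # reflection into CA3 carries the factor i
    p_CA3 += abs(CA3) ** 2 / trials
print(round(p_CA3, 2))               # 0.36 = b^2: the BS1 amplitudes recovered

# (b) Mixture: a whole photon in CA2 (or CB2) alone meets the 50/50 BS3
# and splits evenly, regardless of a and b.
p_mix = 0.5 * abs(s * 1j) ** 2 + 0.5 * abs(s) ** 2
print(round(p_mix, 2))               # 0.5: no trace of a, b survives
```

The superposition case yields b^2 at CA3 on every trial, so phase averaging changes nothing; the mixture case is pinned at 50/50. That is the experimentally distinguishable difference the post argues for.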

IMHO, Bye bye decodance! [snark snipped for peaceful purposes!]

Regards, and Happy New Year (and New Decade, so they say !)

Neil Bates

[Notes: My post provoked an unfortunate reaction at another blog. It was acerbically critiqued at, with key comment here: Neil Bates Owes Me $160.  In that comment Prof. Chad Orzel of Uncertain Principles sci-blog admitted that my math was not wrong as he had claimed. He still doesn't think it proves any good point about quantum measurement, and he may be right (not that it's always clear what the implications of a quantum experiment are anyway!) But readers should decide that for themselves.

Even though his checking some things could have avoided misunderstandings, I don't blame him for most of that. First, my presentation was not adequately clear, for various reasons. Also, it was not fair for me to fish for a response from him (in the hope of garnering attention and "publicity"), which was part of why I referenced his former posts. I didn't mean to anger anyone or start a feud. Sadly it did, and I apologize for that. My main reason for noting his MZI example was that it corresponded to how I could show there is a distinction between superpositions and mixtures at the critical juncture. Also, it or something similar is in his new book, which is selling well and deservedly so.

As to where the "mixture" is to be imagined found: if we can tell the difference between a real mixture and the actual superpositions, even after decoherence, then the claim "there's no way to tell the difference" is wrong. Chad Orzel used the similar phrase "since the end result is indistinguishable from a situation in which you have particles that took one of two definite paths" in the above link.

However, he was imagining a mixture occurring even before we recombined at BS2. Now we have to ask: in which treatments of this problem would the output from BS2 be considered a mixture? Well, in order to have any significance for the measurement problem in the case of detectors just past BS2, the mixture would have to be present outside BS2. But then the result (50/50) would not agree with this experiment. OTOH (and this is a supreme irony), if you take the mixture as present before entering BS2, then the entering photons would turn back into a superposition anyway! In that case we'd get my result, but it wouldn't help show selection at detectors. So DI either fails to be relevant to detection issues, or it is factually wrong.

BTW I wish no continuing "feud" as such, just fresh looks at this problem. tx!]



Blogger Phil Warnell said...

Hi Neil,

I thought I’d have a look-see at what you came up with to debunk decoherence, and unfortunately must admit that unless you include a few diagrams I may never quite get what you are driving at; which is more likely my failure than yours. However, what I think you’re saying here is that although the experimental setup should have destroyed the wave function, information can still be gathered about it after the fact, and thus decoherence just can’t serve as an actual explanation of the observations. From the sound of your closing remarks that the measurement problem still remains unresolved, I would say you are correct in respect to any singular ontological approach. I guess for one reason or another you’ve never given a serious look at the pilot wave picture, which has a real wave and a real particle, and thus there never is any actual collapse of anything to be observed as a particle other than the particle, which never collapsed to start with.

Anyway, there are lots of things that are unaccounted for in orthodox QM, such as entanglement, which although a required result of the formalism is left without an interpretational explanation. Also, a more classic one is what’s going on in Wheeler’s delayed choice experiment, which is claimed to show not only a temporal violation, but also to demonstrate the entangled (non-local) nature of QM as somehow connected with the result.

You also might find it interesting that a group including the famous Alain Aspect conducted the Wheeler delayed choice experiment just a few years back, and in their covering paper they adhered to such an interpretation of the results as being in confirmation of orthodox QM. However, just after the paper was published, Travis Norsen published a critique of it which clearly demonstrates (as J.S. Bell had done long ago) that with the Bohmian interpretation what appears as a temporal violation is simply the causal action expected, and that although the pilot wave picture mandates the theory have a non-local component, this has nothing to do with the observations or any part of the explanation of the experiment.

I offer here an excerpt of the Aspect et al. paper’s conclusion, followed by Norsen’s critique of the same:

Our realization of Wheeler’s delayed-choice Gedanken Experiment demonstrates beyond any doubt that the behaviour of the photon in the interferometer depends on the choice of the observable which is measured, even when that choice is made at a position and a time such that it is separated from the entrance of the photon in the interferometer by a space-like interval. In Wheeler’s words, since no signal traveling at a velocity less than that of light can connect these two events, “we have a strange inversion of the normal order of time. We, now, by moving the mirror in or out have an unavoidable effect on what we have a right to say about the already past history of that photon” (7). Once more, we find that Nature behaves in agreement with the predictions of Quantum Mechanics even in surprising situations where a tension with Relativity seems to appear.

”Tragically, decades later, physicists (who apparently still remain ignorant of the important lessons of de Broglie and Bohm) are still making the latter choice (apparently without even realizing they are making a choice). One can only hope that the more reasonable choice – of acknowledging the real existence of the pilot wave theory and learning the important lessons it has to teach – will be not much longer delayed.”



3/1/10 16:03  
Blogger Phil Warnell said...

Hi Neil,

As my comment was too long to be in one piece, this is the conclusion.

So Neil, although you would get no argument from me that decoherence cannot serve to complete orthodox QM, as it already has been shown to have more holes than Swiss cheese, I would still insist that your experimental argument could not be applied the same way to the pilot wave interpretation of David Bohm. All that said, I personally don’t think the Bohm picture is the last word on the subject; yet as Bell reminded and Norsen here echoes, there is a lot to be learned from the model that might eventually lead to something which all will have no choice but to say adequately represents nature as an explanation of its actions.



3/1/10 16:04  
Blogger Neil B said...

Yes Phil, you put it well:
"... although the experimental setup should have destroyed the wave function, information can still be gathered about it after the fact and thus decoherence just can’t serve as an actual explanation of the observations." That is correct. I explain that even after complete decoherence between the first MZ legs, and destruction of the (simplistically defined) interference pattern, we can still extract the original amplitude distinctions from the first BS. (That is done by subsequent recombination.) Hence the DI is wrong: they would say the output from BS2 must be a "mixture" (equivalent to photons going out one channel or the other, but not portions going out of both at the same time). But the results I describe require the recombination of two separate waves. My argument is based on consistency with classical wave addition, with which quantum experiments are always consistent in such arrangements.

You may be right that some theories of QM don't have a problem because they accept the continuing realness of guidance from something like a wave function - e.g. pilot wave theory. Of course DI would still be wrong, and that is my main point. (I'm glad Roger Penrose criticized it, no kool-aid drinker he.) But I have issues with the PW theory too. For example, what sort of entity then is a traveling photon, if only "guided" by a PW?

3/1/10 18:31  
Blogger Phil Warnell said...

This comment has been removed by the author.

3/1/10 21:00  
Blogger Phil Warnell said...

Hi Neil,

Please don’t sign me up just yet as convinced that your described experiment serves as a valid argument against decoherence as a resolution of the measurement problem, as I’m still unable to follow the setup all the way down so I can grasp it spatially, so as to have less chance of missing something important. However, the argument as far as I do understand it serves to demonstrate what actually appears consistent with the objections Prof. Goldstein cited in that Stanford piece I linked to over at Backreaction in our discussion.

That basic argument is that the decoherence solution doesn’t work for orthodox QM, as it doesn’t have what’s required for it to work to begin with; in the pilot wave model that is satisfied by a real particle and a real wave, with no actual collapse. You state the standard model would be required to have two separate yet still coexisting wave functions in some way, so as to have one collapse while the other remains, which is totally unreasonable. So if you ever get around to some diagrams it would be appreciated, and I would guess by more than just me, in being better able to assess things.



3/1/10 21:05  
Blogger Neil B said...

Phil, what I mean is: you understand in general what it is I claim happens and why that would prove the point - not to be confused with, you agreeing with my specific claim, that my setup would indeed accomplish that. I do need a diagram, and am working on getting an illustration uploaded etc. As I said, the result would violate the combination of traditional QM + decoherence argument, leaving many other interpretations still viable.

But other interpretations have their own issues with "reasonableness." Like I asked, what sort of "objects" does the PW theory leave us with? Oddly, trying to imagine them is perhaps more taxing than the waves themselves - are they little nugget things? That is especially hard to imagine in the case of a photon, which needs (?) some wave character for traits like polarization (on two degrees of freedom because of both orientation and eccentricity of ellipse) to be intelligible.

BTW, I am grateful to such as the Stanford Ency. and Roger Penrose for not drinking the increasingly pervasive decoherence kool-aid.

3/1/10 21:22  
Blogger Phil Warnell said...

This comment has been removed by the author.

3/1/10 21:50  
Blogger Phil Warnell said...

Hi Neil,

Thanks for the assurance that you haven’t signed me up just yet, although if after you diagram it you still remain convinced, you might be tempted to send it along to Prof. Goldstein or one of the other Bohmians, perhaps Norsen, whose paper I cited earlier. Although you may not have the trust in the pilot wave perspective that they do, this would certainly be common ground, as I would think an experimental demonstration of their contention would be most welcome to them.

As for your query about the nature of the photon in regard to the Bohmian approach, that actually is something you should be concerned with, as it worried Bohm quite a bit right up until his passing, with nothing that I found convincing, and I suspect neither did he. That’s actually where I part with the contemporaries of Bohm, as they don’t see either his theory or his concerns about it in quite the same way. I then find myself more sympathetic to his point of view, rather than some others’. Then again, who am I to talk, as I’m not truly a physicist?

Nonetheless, I’ve had some ideas as to how this photon thing could be dealt with, yet it somewhat changes the relationship between wave and particle and as such brings on other difficulties. This is something I would never post, for two reasons: first, I’m not committed enough to ever want to seriously work on such a thing, and most importantly, I don’t have the mathematical prowess or most of the rest of the tool set required of a theorist to present it as something structured enough to even be a model. All I know is that if I can conceive such a thing, there are bound to be many others who will come up with even better ideas, which I expect to see someday, and I find reason to think that will come sooner than many think.



3/1/10 21:55  
Blogger Neil B said...

Just for clarification: typically, a wave has its phase changed by 90 degrees (multiplied by i = sqrt(-1)) when it reflects off a partially silvered surface. I simplified by pretending it was 180 degrees, which gives the same net answer. But if we take the usual i-shift, I can easily explain the results (it switches the channels for final output; no matter). Consider that after the first split, the lower route for |1> into CA2 bounces twice off a PSS, and the top route not at all. Hence we combine (-|1> + |1>)/2 = 0. The lower route for |2> into CA2 goes twice off a PSS, and the upper route twice as well: (-|2> - |2>)/2 = -|2>, so the full squared amplitude b^2 is found there by statistics. You can verify the converse for CB2, so the output is the state |1> with its associated squared amplitude a^2.

This better explains why you get the differential output disproving the DI hypothesis, that the output from BS2 was a "mixture." (Actually, it's a bigger deal than that - most physicists would say, the output from BS2 should act like a mixture, regardless of whether they think such an equivalence solves the measurement problem!)

Maybe in other posts and venues I'll just call these states |a> and |b>, and include the amplitudes directly.

5/1/10 20:20  
Blogger Phil Warnell said...

Hi Neil,

”The output from BS2 in my setup "should" be a "mixture" in QM, ie equivalent to whole photons randomly going out one face or the other. But if not, then the fundamentals have to be reworked and we can't use traditional DM or mixture language.”

I hope you don’t mind me addressing the comment you made on BackReaction over here, as this actually is the proper place for it. First, like I’ve said before, until you flesh your experiment out with diagrams (since logic for me has to be spatially manifested), I’m still not certain what it is that you feel I agree with, other than what I’ve stated earlier. Also, what has me convinced of your position, more than your experiment, is that the decoherence concept just doesn’t work for a theory whose ontological basis is singular.

Again I repeat that, on the other hand, with theories requiring a dual ontology, decoherence not only works, but is mandated to be present in the theory. I would also agree with your contention that, taking this into account, any density matrix talk is simply meaningless. However, with Bohmian mechanics and other such concepts this is not the same, for although the density matrix as usually conceived is not a valid concept, what is called a ‘conditional density matrix’ is applicable and can also lend clarity to such argumentation. Not that I wish or would want to turn you to Bohm, yet if you review the paper highlighted here it may help you to crystallize your position so as to know better how to proceed.

"So in the de Broglie–Bohm theory a fundamental significance is given to the wave function, and it cannot be transferred to the density matrix."

- J.S. Bell: “De Broglie–Bohm, delayed-choice double-slit experiment, and density matrix”, Int. J. Quant. Chem. 14, 155-159 (1980); reprinted in Speakable and Unspeakable in Quantum Mechanics, p. 111.



17/1/10 10:06  
Blogger Neil B said...

Phil, thanks for dropping by again. Yes, I need a diagram here. Meanwhile, imagine "cascaded" MZ interferometers, where output from BS2 becomes the input for BS3. BS3 would be the second, recombiner BS if it were in a single MZI.

Note again that this isn't just a matter of whether decoherence can show why we don't see macroscopic superpositions etc. The information is generally expected to be lost by decoherence. That much is not part of the argument over what further consequences that presumed loss should have.

17/1/10 12:52  
Blogger Steven Colyer said...

Neil, I clicked on your link to Chad Orzel's site, and got "Not Found." Please examine, thanks.

20/1/10 14:39  
Blogger Neil B said...

Steven, I fixed the link. It seems merely leaving a period at the end of the active URL itself spoils it, which is rather pathetic. So look at Orzel's post-modern style argument: even if we have a "model" in which amplitudes continue to be superposed at various relative phases, the fact that hit patterns (never explaining how the localized hits get achieved to start with, in this circular argument) are like the randomness of a classical mixture supposedly explains why they "are" in effect such a mixture, so wow, we don't have to wonder why the waves collapse.

But like I said at BR, the argument over that framing issue is pretty moot now that I provide a way to actually extract the supposedly lost data. Note also the illustration and the "confuser" to represent relative phase variation.

20/1/10 20:43  
Blogger Phil Warnell said...

This comment has been removed by the author.

21/1/10 06:05  
Blogger Phil Warnell said...

Hi Neil,

After looking at it again, I still see no problem with your argument as far as it actually disqualifying decoherence as an explanation with respect to what should happen on the Copenhagen ontological view. However, as I said at the outset, if waves act as waves and a particle is guided by the subsequent action of them, decoherence still holds as the correct action you would observe. What I suggest is you write this up formally as a paper and publish it. The other suggestion I have is you send it to Travis Norsen at Marlboro College and ask what he thinks of it, and perhaps suggest this could be extended as an argument to differentiate standard QM from Bohmian Mechanics. One thing I can tell you: either he will discover a flaw that I can’t find, or if not, this idea could come to be taken more seriously if Norsen (who is far more skilled than I at such evaluation) were to translate this all into Bohmian terms.



21/1/10 06:13  
Blogger Steven Colyer said...

I second all of Phil's motions, especially sending it to Dr. Norsen, the widely known (to the few who know about him) Ayn Rand Objectivist (feh) and severe critic of fellow physics PhD Dr. Lewis Little of "The Theory of Elementary Waves" fame/infamy.

But before doing any of that ...

... forgive me Neil if you've already done this, but please, please, please click here, which is considered by many reputable sources as THE CLEAREST explanation of Quantum decoherence on the internet, then read the replies, then think and comment.

I should very much like to see the discussion you and Andrew have at that webpage.

Pass the popcorn, Phil. I'll supply the drinks. What's your pleasure?

21/1/10 08:03  
Blogger Neil B said...

I appreciate the interest and the suggestions. I'll look at that; meanwhile: I may have picked the wrong title for this post. Maybe I should have dryly but boldly titled it "Recovery of amplitude information thought lost by decoherence." I now think the biggest issue isn't whether the decoherence interpretation is falsified. The big deal is the very fact we can recover the information at all. Regardless of anyone's stance on how to interpret the WF, or whether the existence of decoherence can in some sense explain why we don't see macroscopic superpositions etc., the commonly-held view is that the information is destroyed by decoherence. So one person might think that would have explained (loosely) the apparent collapse, and another think it didn't; but heretofore both would have agreed that the information was lost. That is the game-changer, if I may be so bold (and I have to be, to live up to my title ;-). If so, the popcorn is worth it.

But this still IMHO wrecks decoherence as a possible way to look at collapse, because DI hereby loses the operationality to even have a chance (so to speak). Also, I don't want to sound like I'm ridiculing PW etc. so much, as at Backreaction. I just think it has problems, and I can't imagine what a particle-like photon is in flight.
PS - I second that feh.

21/1/10 09:42  
Blogger Phil Warnell said...

Hi Steven,

Now you have me impressed, for outside of the few that are hard-core foundationalists, not many people are aware of Prof. Norsen. It’s also true that he’s a devout Randian, even though saying that seems to be a bit of a contradiction. Although I can’t say I’m a fellow follower of the philosophy, it does resonate with me insofar as what Plato called the “good” is necessarily tied to reason. Also, some of Rand’s fictional novels such as The Fountainhead contain relevant social comment that I wish many of the Wall Street sector would have paid a little closer attention to. After the economic collapse I would have at least had all the executives from Lehman Brothers and all the bailed-out bank executives forced to watch the movie rendition of it at least once a day for a year :-)

With all that said, I can vouch for Norsen being a very penetrating rational thinker, who I would trust to discover whether Neil's experimental argument has any holes or not. The nice thing about running this by a studied Bohmian is that it was in Bohm's 1952 papers where the concept of decoherence was first developed as a vital aspect of turning the pilot wave theory, as initially proposed by de Broglie, into a solid theory. So I would advise that, rather than resorting to second-hand information, he refer to those two watershed 1952 papers to get it straight from the horse's mouth.

As for your attempt at throwing down the gauntlet to Neil to square off with Andrew, I would rather see either of them together with Travis Norsen, Sheldon Goldstein, or better yet Antony Valentini. Arrange that one and I'll furnish both the food and beverage :-)



21/1/10 21:27  
Blogger Neil B said...

I plan to get back to you guys Friday or Sat. I'd love a big saloon brawl with the other characters too, here or anywhere - I don't know them well enough to drag them in; maybe Steven can. Preview - Andrew's stuff looks so far like a more sophisticated version of the same IMHO fallacious, circular argument. Like I say, it slips collapse "in the back door" inside the argument on the way to supposedly deriving it, or its equivalent "appearance."

21/1/10 22:13  
Blogger Phil Warnell said...

Hi Neil,

This problem you have with the photon I suppose stems from it being the only particle that exists at c and has no rest mass. It's interesting that it should bother you, as it did me for some time. However, I have come up with scenarios where I can imagine a photon - consistent with Bohmian ontology, and lending some explanation for the source of the wave as it relates to other particles - yet I've never really discussed them with anyone. The one who is furthest ahead in all this is Antony Valentini, who I consider out of all of them a true extension of Bohm, and someone I had always wished I would have had the opportunity to speak to one on one.

Just in the more general sense, he published a paper a little over a year ago that was to dismiss the idea, held by several Many Worlders, that Bohmian Mechanics is just Many Worlds in denial. In the paper he raises the decoherence issue as being central, and perhaps you might like to have a look for yourself. Also, just to be fair, Dr. Harvey Brown wrote a critique of the Valentini paper shortly after, which you might like to review. I've also followed Prof. Brown's work, including reading his book entitled Physical Relativity. I find Prof. Brown to be a very interesting character, for on one hand with relativity he would argue Einstein to be wrong, as there must be a fixed frame and thus an aether (the aether part being my words, not his). However, at the same time he would deny the existence of the pilot wave as an element of reality. The only thing this has resolved for me is that if anything is to be found consistent among Everettians, it's that they are more logically schizophrenic than the waves of Bohr :-)



22/1/10 07:19  
Blogger Neil B said...

OK ... And getting back to decoherence: Those of you who believe in PW etc. are at least acknowledging that the DI is inadequate. There's no point in trying to solve collapse/localization (Penrose's "R") with guided particles if decoherence of unguided particles/waves would solve the problem. While I'm waiting to write up a good new put-down of DI, you and I should agree that diddling with the phases between waves (which is what WFs are, pace Orzel in his "explanation" for the dogs) - and especially bad, pretending to explain what happens in a given instance by appeal to what happens in other trials - is not going to plop a "particle" down at here v. there; unless it either was a particle all along, or unless some weird R comes along to intervene. (Or, unless we can't model the universe anyway.)

I don't know how Steven takes PW etc. but he seems OK with decoherence. I want to better explain why it is literally a fallacious argument (but I already at least named the main fallacy.)

22/1/10 08:57  
Blogger Phil Warnell said...

Hi Neil,

If you write up your experiment formally and send it to Prof. Norsen, I can almost assure you he will offer a response. Another thing you might do is post your proposition on the Bohmian Mechanics discussion board, along with a link to your blog post within your query. The spirit I suggest is to ask those in the group how your experiment might be interpreted in the pilot wave context. I'm dead serious about this: as far as I can tell it does dispel decoherence as a way to explain the measurement problem for standard QM, while rather being one that reinforces the pilot wave ontological position. You can also mention to either Norsen or the forum that I prompted you to do this if you wish, as I'm somewhat known to both.



23/1/10 08:40  
Blogger Steven Colyer said...

Hi Neil,

Yes, and unless you can prove otherwise I have no problem with Decoherence; what I have a problem with is the Consistent Histories approach of Gell-Mann, Hartle, Omnes, etc.

They may be right, they may not be right. Time will tell, and for the moment I'm not worried about it.

More advice:

In order to defend yourself better, you really SHOULD read Omnes' book, cover to cover.
- Read the next chapter on Andrew Thomas' website, which is Quantum Reality. It's the fifth webpage and ties the first 4 together: basically Double-Slit, Heisenberg, Schrodinger, and Decoherence.

Good luck, mate.

P.S. Ironically, I've written only one paper in my life based on experimental data, and I used a Mach-Zehnder Interferometer to obtain the data. More on that in the coming months, but for the moment I choose to explore other paths besides Quantum Interpretation.

23/1/10 11:31  
Blogger Steven Colyer said...

This comment has been removed by the author.

23/1/10 14:04  
Blogger Steven Colyer said...

Holy crap, Neil, somebody SERIOUSLY doesn't like you.

Click here to see what I'm talking about.

Pass the popcorn, Phil. ;-)

(P.S. I'll have a Warsteiner's thanks, the light Pilsner one. Bleep that dark ale/stout crap, thankee)

Fore! lol

23/1/10 14:05  
Blogger Neil B said...

Yes, indeed! Chad Orzel struck back at "Neil Bates Owes Me $160."

Below I reproduce part of my first comment (it got snagged by the character limit); I added another note a bit later.

Well, thanks for the free publicity Chad, but I'm not ready to pay you for that yet [see below]! Note, I mentioned you but wasn't harsh about it, maybe you noticed the snarky indirect comment about "the doghouse" per your book? (Mainly, it was convenient to me to use an MZ setup similar to yours, and there are various explanations of DI around.) Some readers in my comments said they think I'm on to something. Well, maybe. I want readers there, here, and anywhere to check it out and decide for themselves. I want someone to do an experiment too. I'll study the critique and comments here, but for now a summary and to warn everyone the bad guy is walking down Main Street. Still, I want to have a fruitful discussion, no reason why not!

Very important, as I note: this is not just about the decoherence interpretation, that claims we can account in some way for not seeing macro superpositions. As I wrote: Its [the experiment's] importance goes beyond DI, since such information should simply have been "lost" period, apart from any interpretative framework.

Since my post is rather long, I will make my own briefer summary here (check the diagram too; sorry I haven't gotten the better fonts in - having trouble there): BS1 splits the incoming photon WF into different amplitudes, such as 0.8 in one leg and 0.6 in the other. They are recombined at symmetrical BS2 in the usual way. If in phase, the combination produces a certain output (not 100%, as it would be with 50/50 input from BS1). It's easy to forget: the output from BS2 isn't just "statistics" of hits. There is a specific WF along each output channel (very important!). We recombine them at BS3 in terms of the rules governing the amplitudes and phase relations.

If we introduce random confusion of the phase relations, the output from BS2 is 50/50 from each channel (even with asymmetrical input). AFAIK, this is considered equivalent to a "mixture" (as I understand from Chad's post that I referenced). But that is only in terms of bare statistics of hits. Those stats ignore the remaining, hidden relationship between the WFs exiting along the BS2 channels. When we recombine them again at BS3, we find we can recover the original amplitudes. So the output from BS3 will again be 0.8 and 0.6 (64% and 36% intensity/count stats).

So what? Well, if the output from BS2 was just a 50/50 mixture (as if one photon at a time came out either channel A2 or B2, but not both), then the output from BS3 would be too. (Easy to figure.) This is no mere quantum eraser, since you're recovering info post-decoherence. So therefore I say, the output can't be treated like a mixture - like a "collapse" of the WF at BS2 from decoherence, like becoming discrete photons at that point.
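For what it's worth, the chain I describe can be sketched numerically. This is my own minimal model (nobody's published code): each beamsplitter acts as a 2x2 unitary on the two-leg wave function, with reflection carrying the "i" factor I use above, and I assume no extra path phases between elements. Averaged over the random confuser phase, the stats just past BS2 are 50/50, yet the BS3 stats come out a^2 and b^2:

```python
import numpy as np

def bs(t, r):
    """2x2 unitary beamsplitter: t = transmission amplitude, r = reflection
    amplitude, with reflection picking up the factor i."""
    return np.array([[t, 1j * r], [1j * r, t]])

a, b = 0.8, 0.6                               # asymmetric BS1 (a^2 + b^2 = 1)
BS1 = bs(a, b)
BS2 = BS3 = bs(1 / np.sqrt(2), 1 / np.sqrt(2))  # ordinary 50/50

rng = np.random.default_rng(0)
trials = 20000
bs2_counts = np.zeros(2)
bs3_counts = np.zeros(2)
for _ in range(trials):
    u = rng.uniform(0, 2 * np.pi)             # random "confuser" phase
    psi = BS1 @ np.array([1.0, 0.0])          # photon enters one BS1 port
    psi[1] *= np.exp(1j * u)                  # confuser acts on one leg
    psi2 = BS2 @ psi
    bs2_counts += np.abs(psi2) ** 2           # detection stats just past BS2
    psi3 = BS3 @ psi2
    bs3_counts += np.abs(psi3) ** 2           # detection stats past BS3

print(bs2_counts / trials)   # ~ [0.5, 0.5]: dephasing washes out interference
print(bs3_counts / trials)   # [0.36, 0.64] = [b^2, a^2]: amplitudes recovered
```

Note the BS3 numbers hold exactly for each individual value of u, not just on average - which is the whole point.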

23/1/10 17:40  
Blogger Neil B said...

Maybe some of you misunderstood the point? I don't claim to recover phi, the phase difference itself. That doesn't matter. It's the output stats from BS3 that matter, that can't be as predicted by feeding a "mixture" into it. Sorry if - if - it wasn't so well formed but the point should be clear from the concluding remarks.

Citing other work: Hey, that's a blog post and I wanted to get the idea out there. I did mention that Penrose makes nearly identical complaints in Shadows of the Mind (and uh, is anyone here impressed by those? He picked on the foundational argument of DI; similar points to my complaints appear here and there, in some pieces at the Stanford Ency. of Phil., etc.) Yes, comparisons matter, but the relevance of a counter-argument stands on its own too. And again (I can hardly over-emphasize): the recovery of this info, thought "lost" in most treatments AFAICT, is important regardless. Best way IMHO to regard it: ask yourself, if the outcome is as I describe, then what does that show you? (Of course, if you think I got the outcome wrong, or the implications wrong, pls. LMK. But be aware I followed rules for combining wave states that have always worked for interference calculations. As for the reflection shift, I use the 90 degree "i" shift (as did Penrose) because I think it's convenient, and it works out the same as long as the consistency is right.) As for being enamored of one's own cleverness, that is one of my complaints about DI. I may be "guilty" too. Whatever.

Chad: If this concept of mine gets attention, I turn out to be right, and get credit for it, your efforts would be well worth $160. I'll pay you when it's clear you deserve reward for making me famous (or infamous, if that's what decoherence enthusiasts will call it.)

My second comment up to now, may help clarify:

(PS, quick follow-up: the angle of phase shift I call u (to avoid using a symbol, which I'm having tech troubles with) is, as you would expect, not a constant. It's a variable, changing around each time the "confuser" or environment meddles in the path. I couldn't care less about finding it later.) BTW also, Roger Penrose is not an outsider. And don't outsiders (~) often come up with important insights? No ad hominem sentiments or appeals, please. It is what it is.

23/1/10 17:43  
Blogger Phil Warnell said...

Hi Steven,

As for the beer, I'll have to look around, for all I currently have on hand is Keith's, and it being an ale it might not be to your liking, although I do have lots of popcorn. As to the rebuttal you noted, there's a lot to go through here, and by the looks of the time spent on the graphics and equations Prof. Orzel has taken this as a serious challenge. I will have to look it all over as best as I'm able, my first reaction being that I have no problem with what he says QM should show by way of its formalism (mathematical axioms), but rather with what he assigns (or rather doesn't assign) as being the mechanism behind it. The statements that he specifically made that I have to think about more carefully are as follows:

“When you add those together, they destructively interfere, wiping out the earlier interference effect. It's true that, as Neil claims, the phase shift introduced by decoherence doesn't matter for the final result, but that's because there is no interference.”

When coupled with him later insisting:

“When you actually work out the details, his conclusions are wrong, and the normal quantum understanding of the world is correct.”

The fact is, today many quantum theories have events (observations) reliant upon decoherence, yet as far as the orthodox version goes there is nothing in either its formalism or its interpretation that would have you know when it is or is not to be considered relevant, other than what is necessitated by the result of actual experimental observation. In short, decoherence in the context of a theory with a singular ontology lends no true insight as to what is happening, so as to be a solution for the measurement problem. I guess that is to say, when Prof. Orzel can explain to me when, where and how decoherence is to be considered as part of what is observed, I might then better be able to understand how in this instance there can be no interference to be decohered. A better way to express what I'm saying can be found in the paper by Durr et al., 'Quantum Equilibrium and the Role of Operators as Observables in Quantum Theory', section 2.3 (Decoherence). That's to say, if Neil is confused as to when decoherence is to be considered relevant within the context of standard QM, he is at least so confused for good reason and in good company, as Durr et al. have so clearly explained.



23/1/10 18:05  
Blogger Neil B said...

Phil, I suggest that you guys actually go on down to the big "brawl" at Uncertain Principles and not just dream of popcorn and beer as spectators. I think I caught Chad making an error about the final output from BS3. He says it would be 50/50 even if BS1 had unequal reflectivities. But if one follows (as you apparently did) through directly with the history of combinations of WFs up to the "end" (stats found outside of BS3), the output is in the same ratio as out of BS1.

Very important: the amplitude output is independent of the value of the phase shift "u" (which I use due to tech issues with symbols). It is a or b each time, so the statistical intensity averages over variation in u must be a^2 and b^2.
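To back that up with a quick check (my own sketch, same "i" reflection convention as in my post, and assuming no extra path phases): sweep u over a range of values and confirm the BS3 intensities never budge from b^2 and a^2:

```python
import numpy as np

# Check that the BS3 output intensities are independent of the confuser
# phase u. Reflection carries the factor i; a = 0.8, b = 0.6 as in the post.
a, b = 0.8, 0.6
BS50 = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter

for u in np.linspace(0, 2 * np.pi, 13):
    psi = np.array([a, 1j * b * np.exp(1j * u)])   # state after BS1 + confuser
    out = BS50 @ (BS50 @ psi)                      # through BS2, then BS3
    probs = np.abs(out) ** 2
    # For every u, the intensities are exactly b^2 and a^2: u has dropped out.
    assert np.allclose(probs, [b ** 2, a ** 2])

print("BS3 output is [b^2, a^2] =", [b ** 2, a ** 2], "for all u")
```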

Confused about when the "mixture" takes place etc? I am just following from what I hear DI supporters say.

24/1/10 13:10  
Blogger Steven Colyer said...

Would it be too much to ask you to provide a link to the particular response of Chad's at his website Uncertain Principles, Neil?

Plato and I and Phil are in our 50's, that makes us old enough to be your Dad.

We obviously like your fresh and passionate mind, but we're "old" (relatively speaking) dudes.

Would it be that much of a bother to make our lives easier?

25/1/10 13:49  
Blogger Neil B said...

This comment has been removed by the author.

25/1/10 15:06  
Blogger Neil B said...

OK, this is better:
Old enough to be my dad? Brother, maybe ... I am flattered that I come across as young, but I'm in my 50s too! Maybe you're fooled by young singer Kate Voegele being an FBF? As for that specific link:

See my comment below that. Chad should correct the technical error in the main post, not just comments. BTW I softened my digs at him since he isn't really a deco-radical anyway, and some subtle caveats apply to the whole issue.

BTW, I found out that it matters greatly whether the "mixture" is considered to be entering BS2, versus exiting BS2. See my fix above, mostly at the end.

25/1/10 15:55  
Blogger Phil Warnell said...

Hi Neil,

Just so you know, I did place a post on Chad's blog, though more of a recommendation as to how everyone might become better acquainted with decoherence, its history, application and significance as it relates to quantum theory interpretations.



26/1/10 07:57  
Blogger Steven Colyer said...

Hi Neil, as I wrote at my own blog:

we're cool, but wavefunction collapse isn't my thing. I question the What? moreso than the How? or Why? Meaning, I question wavefunctions collapsing as a valid question. The How? and Why? are fascinating and important, yes, but the What? comes first, and I don't think the proper one has been asked, and no I don't have a clue as to what the proper What? actually is. Just not interested, BUT, I'm glad to see others are on the job. Seems more like Philosophy than Physics to me.

To which I would add:

You certainly have the PASSION of a young 20-30 yr old. Good for you, and I wish I could add more to this discussion which is neither my area of expertise nor interest.

I would ask though:

Have you contacted a local University to run your experiment?

If not, try Rowan University Dept. of Mechanical Engineering. Hell, if they'd run stuff for noted con artist Randell Mills of crackpot "Hydrino" theory, they'll run anyone's, seriously.

It's all about selling yourself, Neil. Not your ideas. You.

26/1/10 14:06  
Blogger Neil B said...

Phil - thanks
Steven - you too. As for running the experiment - most of the "feudees" now agree on the outcome (b/a amplitude from BS3 instead of 50/50) - given the vital assumption of using "superpositions" throughout! It's still worth doing, but the big deal now is the "interpretation" of those results (presuming that.) I have sharpened up the final caveats, but again:

Now we have to ask: in which treatments of this problem would the output from BS2 be considered a mixture? Well, in order to have any significance to the measurement problem in the case of detectors just past BS2, the mixture would have to be present outside BS2. But then the result (50/50) [and easy to prove] would not agree with this experiment. OTOH - and this is a supreme irony - if you take the mixture as present before entering BS2, then the entering photons would turn back into a superposition anyway! In that case we'd get my result, but it wouldn't help show selection at detectors. So DI either fails to be relevant to detection issues, or it is factually wrong.

IOW, DI refers to "mixtures" being FAPP manifested at certain points, and that *is* testable (amazing as it may seem.)
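The two placements can be compared in a few lines. A rough sketch under my conventions (reflection = "i", 50/50 BS2 and BS3, no extra path phases): a "mixture" outside BS2 means feeding BS3 photons localized in one input port or the other, while a "mixture" before BS2 means feeding BS2 localized photons, which it promptly re-superposes:

```python
import numpy as np

BS50 = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter
a2, b2 = 0.64, 0.36   # branch probabilities from asymmetric BS1 (a=0.8, b=0.6)

def port(k):
    """Photon localized in input port k (one branch of the 'mixture')."""
    psi = np.zeros(2, dtype=complex)
    psi[k] = 1.0
    return psi

# Case A: mixture taken to exist OUTSIDE BS2 -> localized states enter BS3,
# each branch splitting 50/50, so the totals are 50/50.
stats_A = 0.5 * np.abs(BS50 @ port(0)) ** 2 + 0.5 * np.abs(BS50 @ port(1)) ** 2
print(stats_A)   # [0.5, 0.5]: disagrees with the superposition prediction

# Case B: mixture taken to exist BEFORE BS2 -> BS2 re-superposes each branch,
# and BS3 then recombines them coherently.
stats_B = a2 * np.abs(BS50 @ (BS50 @ port(0))) ** 2 \
        + b2 * np.abs(BS50 @ (BS50 @ port(1))) ** 2
print(stats_B)   # [0.36, 0.64]: same stats as the all-superposition case
```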

As for selling myself: indeed, I need to do a better job. It isn't easy; I'm roughly in Phil's position as far as not being quite "in the profession" (true?), so I have to try harder. But look at Carl Brannen, an ethanol engineer IIRC who has made contributions to particle physics. Less snark, less insistence etc. would help.
BTW Phil is already a FBF, check the page sometime.

27/1/10 13:17  
Blogger Steven Colyer said...

You're welcome, Neil?

What is an FBF? When I google or go to dictionary dot com all that pops up is "Fat Burning Furnace."

I have an MBA specializing in Marketing, and worse have considerable business and life experience in selling. I say "worse" because I consider Honesty the most important thing in conducting one's life (for selfish survival reasons, not altruistic ones ... as in you can't be caught in a lie if you don't lie), and I swear to God if I lacked morals like shameless Randell Mills of "Hydrino" infamy, I could make millions. But I wouldn't enjoy them cuz I couldn't sleep with myself.

(For the record, I actually think Loyalty is tied for first with Honesty, but that's another subject for another day).

Hey Neil, a bit more background if you would. What did you teach and at what grade level? What college degrees do you have? I have a BSME in addition to my MBA.

28/1/10 06:15  
Blogger Neil B said...

Steven, sorry - I assumed everyone knows what "FBF" means: Facebook Friend. I'm surprised it isn't available in acro lists, but then again "Facebook" is one word per se. Teaching: high school. I've got a BA in Anthropology and an AAS in mech. design, but I give a vague answer to what came after that. Various things didn't gel into a higher degree or firm career track. I've done too many little things. My material "is what it is." Look up that Carl Brannen for examples of good "amateur" (sort of) work.

28/1/10 09:04  
Blogger Steven Colyer said...

lol, I hate Facebook and MySpace, no wonder I didn't know. lol

Anthropology, huh? Cool. My close relation is Anthropologist Harold Colyer Conklin of Yale. Ever heard of him? Social, not Physical, Anthropologist. Retired but Professor Emeritus there still.

Also, what is "J-Lab?"

28/1/10 18:31  
Blogger Phil Warnell said...

Neil ,

How does someone whose initial interest deals with the past end up concerned about decoherence? Then again, Einstein always complained about how the Copenhagen crowd could believe that the moon only existed when they looked at it. If one extends this argument further, that is, to include the past, then all your ancestors who were necessary for you to be real at this instant never existed, but were only figments of my imagination until my observation had you all become real. The question is whether decoherence serves as a good enough explanation to have this not be a concern :-)



29/1/10 07:16  
Blogger Neil B said...

Steven, I think I heard of Harold but my memory is vague after all those years. "J-Lab" is the Thomas Jefferson National Accelerator Facility in Newport News, VA. They have a high-energy (several GeV; I haven't checked the latest level) continuous electron beam (meaning not just pulses, though it doesn't "run all the time"). They do nuclear research with it, and also run side projects like a free-electron laser etc.

Phil: my point is, about possible experimental consequences. See also my comment at new UP thread:
BTW, it's supposed to snow like the dickens around here tonight and tomorrow!

29/1/10 21:39  
Blogger Andrew Thomas said...

Hi Neil, I don't know if Chad Orzel is right or wrong about your stuff, but I do agree with him that it just looks like a heck of a lot of work to decipher it. Surely you could express the basic principle in a simpler form, preferably in words? Something like John Wheeler did when he described his "Delayed Choice" experiment? The maths here just seems like a real pain to hack through, like Chad said.

31/1/10 05:59  
Blogger Neil B said...

This comment has been removed by the author.

31/1/10 10:47  
Blogger Neil B said...

Andrew: First, that my write-up was rather confusing, and had snark and spotlighting of an individual such that it drew ire and fire, was my fault. That didn't make it easy for readers. I do need a Wheeler-style metaphor and will try. My fault too for making Prof. Orzel seem like a strong DI advocate, and I don't want you feeling put upon even if you are one (whatever SDI really means!) I do think it was clear that the initial BS split was not 50/50, but a/b amplitudes (thus a^2, b^2 statistics) instead. It was not so clear that the final output corresponded to that ratio, but I'm glad everyone can now agree on that much.

I also learned not to needle someone like that. It was not fair for me to "bait" him to draw a response (in the hope of garnering attention and "publicity"), and I didn't mean to anger anyone or make a feud. Sadly it did, and I apologize for that.

As for interpretation - he may be right that it doesn't prove anything interesting about quantum measurement. However, I have seen phrases like "decoherence turns superpositions into mixtures" - I suppose that is "strong DI." (?) I wondered if there was a way to tell the difference. If you follow through the problem, I think you can verify that a true mixture coming out of BS2 would have different statistics out of BS3 than if we follow the process as a "wave function" all the way through. That was my big deal.

I noted in my conciliatory comment at the later UP thread: oddly enough, if the decohered output from BS1 "became a mixture" before entering BS2 instead, it would get "split" again into a new superposition anyway! How ironic - so then how can it be useful in sorting out mixture-like detector response on past BS2? However, I think we can also distinguish mixture from superposition in the case of a mixture entering BS2. It requires other processing than in the posted example (mix out of BS2); more on that later. Hence, decohered superpositions and mixtures are not "indistinguishable" - that is the simple grand point. And if they aren't, isn't that a big deal?

tx for dropping in. I am impressed yet humbled just to get professional responses, even if no more comes of it. But I hope something does. BTW I think Bee's new thread is very relevant and interesting.

31/1/10 11:11  
Blogger Andrew Thomas said...

Hi Neil, sorry you've had nasty arguments. It's very important we all try to be friendly.

I haven't read your description in great detail, but the very first part where the beamsplitter does not divide intensity equally is a very strange thing. Just because of that, this experiment is surely totally different from any of the other interference experiments. Surely, you are then getting "which way" information, so when a particle hits the screen you can tell which path it took, no?

That beamsplitter is essentially observing the particle and spitting it out again along a certain path. It's basically just the same as putting a particle detector in the path.

Of course, if you have which way information you can never get any sort of interference pattern.

Or have I misunderstood?

1/2/10 11:41  
Blogger Neil B said...

Andrew - Unbalanced beamsplitters output unequal amplitude states in superposition. That is still as much a case of superposition of two different location states as is the symmetrical case. So we have a|1> in one direction and b|2> going in another direction. (Such as a = 0.8 and b = 0.6.) That makes, of course, for interference patterns not the same as from 50/50. But being a superposition, it still does not provide WWI ("which way" information), because you don't know where an individual photon came from. (Other than banalities about direction, which would have told you even with an equal split - I mean, intrinsically.) The amplitude does not tag the photon. It is not a property that can be measured about any one photon, like energy etc. Nor do we know which state will get picked by detectors. What you get is statistically more hits coming along one "leg" than the other, in proportion a^2, b^2.

What I gather from explanations of how decoherence makes detection to be like a classical situation, is that decoherence effectively converts superpositions like the above into "mixtures." (Isn't that essentially what you say at your site?) In explanations like Chad's (which I reference because it's been seen a lot and is in his new book that sells well) the detectors are on past BS2. So I asked: can we test if the output from BS2 is genuinely "indistinguishable from a mixture"? By definition, finding any way to tell the difference would mean, "no".

I decided to use asymmetric BS1 but ordinary 50/50 for BS2 and BS3. I found that if we recombine outputs from BS2 into BS3, the answer we get depends on what we assume.
1. Assume the superpositions remain superpositions all the way through the system despite decoherence. The amplitudes at the final detectors on past BS3 determine the statistics. Then the outcome is as I described above: different statistics for each BS3 output channel, i.e. a^2 and b^2 instead of half and half. This is now agreed as correct, given that assumption.
2. Assume instead that a "mixture" came out of BS2, since decoherence occurred (or did it?) from the confuser. Half the time a count can be found outside one BS2 channel, and half the time from the other (since the "interference pattern" has been destroyed, it is like "gray" on a screen). If true mixture photons went from BS2 into BS3, they would come out 50/50 from BS3's output channels. That is easy to confirm: just note each one has a 50/50 chance of transmitting or reflecting at BS3. That is only true if they are "localized" and not superposed along both inputs into BS3.

That is clearly a discrepancy. What does it mean? I am not sure at this point (note "?" and "IMHO" etc. above and caveats.) I suggest reflection (heh) on it. One weird irony: if we imagine a further option:
3. Assume the photons became a "mixture" before entering BS2, since after all the decoherence spoiled any interference pattern they could have shown at a screen placed between them and BS2. In that case, they'd get turned back into new superpositions by BS2 anyway, right? Then the result is the same as #1 after all! You can see why this becomes a bone of contention.
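Another way I like to see the difference between #1 and #2 is in density-matrix terms (my own sketch, same conventions as my post: reflection = "i", no extra path phases). Averaging the post-BS2 state over the random phase u kills the u-dependent terms, but an off-diagonal coherence proportional to a^2 - b^2 survives - exactly the "hidden relationship" that BS3 reads out. A true mixture has no off-diagonals at all, and gives 50/50:

```python
import numpy as np

a, b = 0.8, 0.6
BS50 = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter

def rho_after_BS2(u):
    """Pure-state density matrix just past BS2 for confuser phase u."""
    psi = BS50 @ np.array([a, 1j * b * np.exp(1j * u)])
    return np.outer(psi, psi.conj())

# Average over the random phase: the u-dependent coherence washes out...
us = np.linspace(0, 2 * np.pi, 400, endpoint=False)
rho_avg = np.mean([rho_after_BS2(u) for u in us], axis=0)
print(np.round(rho_avg, 3))
# ...but an off-diagonal term of size (a^2 - b^2)/2 = 0.14 survives. A true
# mixture would be diag(1/2, 1/2) with no off-diagonals at all.

# BS3 reads that residual coherence out as b^2 / a^2 statistics:
probs = np.real(np.diag(BS50 @ rho_avg @ BS50.conj().T))
print(np.round(probs, 3))            # [0.36, 0.64]

# The true mixture, fed through BS3, gives 50/50 instead:
rho_mix = np.eye(2) / 2
print(np.real(np.diag(BS50 @ rho_mix @ BS50.conj().T)))   # [0.5, 0.5]
```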

1/2/10 17:37  
Blogger Andrew Thomas said...

Hi Neil, I'm getting closer to understanding this - the way you have described it here is much easier to understand.

I'm not sure this has anything to do with decoherence in particular, though, it's just the way quantum mechanics predicts the photons will behave. It doesn't matter what interpretation you choose.

So you're saying that if we don't have the detector (confuser?) we will get an interference pattern, but if we add the detector we destroy the pattern and we get a 50/50 detection at the end. Am I right? I presume I am wrong because this doesn't sound particularly remarkable, it's just what we would expect. What am I missing here - can you say why this is strange behaviour a bit more clearly?

But, like I say, this has nothing in particular to do with decoherence. It's just how quantum mechanics is predicted to behave. The many worlds interpretation would predict exactly the same behaviour.

2/2/10 16:39  
Blogger Neil B said...

Andrew: Yes, you are close to appreciating my point, and it shouldn't be hard, since I draw (abstractly) on your own explanation of decoherence to make it. The first logical part is to see what "standard" ("old") QM has to say about what should happen. (Maybe I should have done things in a more logical order up there.) That is based on the Copenhagen-type stance of following the wave function all the way through the system until it is time to collect statistics at detectors. Then we square the incident amplitudes and get answers. The confuser is not a detector. It is like putting pieces of glass of different thickness in and out of the path. It introduces a random phase shift between one output from BS1 (of amplitude a) and the other output along the other leg (amplitude b), as shown in the diagram. Despite their initial mixing at BS2, the random phase alteration can be separated out by running the BS2 output back through BS3. Then the random phase cancels out in one channel, and mutually reinforces in another. Hence we get a signal of amplitude a in one channel, b in another. As Evan Berkowitz explained in comments at the critical Orzel post, this is what QM predicts. They just didn't appreciate why it matters, and that it contradicts DI as stated by its own adherents.

So if it is, so what? Well, this result contradicts what the strongest DI enthusiasts claim in their own words. So first, decoherence prevents the interference that we would have found with detectors past BS2 in its absence. The output from BS2 is made by decoherence to be 50/50 each way - NB! Also, if we had detectors, one or the other of them pings - they don't remain in superposition like the Cat idea. Hence the idea arose that the superposition was converted into a mixture, or something "indistinguishable" from one. (This is what Chad illustrated in a few places. I belatedly give him credit for not thinking it makes a literal mixture.) You didn't use that phrasing at your page on "quantum decoherence," but what you show happening to a density matrix is the same idea.

Why does this matter? You and most others are under the impression that if a mixture, or something just like it, came out of BS2, the results would be just the same as if we treat the output as a superposition the whole way through. But we can find that the results are not the same. Again, run a mixture output from BS2 through BS3, and the results (50/50 in turn) are different than from a true superposition all the way through. That is a verifiable fact based on comparing assumptions. You could think maybe people didn't mean "really a mixture," but they have used the phrase "indistinguishable" - and that means what it means. Any distinction makes that claim false by definition. So we can't imagine the detector behavior we would get outside BS2 as being from a mixture. So it has everything to do with the claims made by advocates, at least, and is remarkable and noteworthy, since a popular concept can now be shown divergent from reality. (REM comparison: it took awhile to realize that Bell inequalities could actually, empirically disprove some local realist theories.) Note the irony: I am defending classic QM against the upstart notions of DI.

2/2/10 22:22  
Blogger Andrew Thomas said...

Hi Neil, the impression I get is that you're thinking that what comes out of BS2 is either a particle mixture which will ping one of two detectors, or else it is a superposition which can cause interference past BS3, but it can never be both. But this is not the case. What comes out of BS2 is entirely dependent on the observation we perform at BS3 (and whether or not we put particle detectors past BS2). It's as if what happens at BS2 is dependent on something which has not yet happened - the observation we decide we are going to perform at BS3. This is the basis of Wheeler's delayed choice experiment. It's as if quantum mechanics works "backwards in time".

So if we decide to put detectors past BS2, just one of the detectors will ping. But if we have no detector, we will get the interference observed at BS3. It's hard to imagine how this could happen, but it's the way QM works: it looks at the final method of observation and works "backwards in time" to decide what happened earlier on in the experiment. There's a New Scientist article on this here.

So I'm not saying there's a mixture coming out of BS2, and I'm not saying there's a superposition coming out of BS2. I'm saying what comes out is yet to be determined, and will only be determined by the observation we eventually perform past BS3.

3/2/10 11:32  
Blogger Andrew Thomas said...

It's really working just like the delayed choice experiment. What happens at the end determines what happened in the past.

3/2/10 11:35  
Blogger Neil B said...

OK ... You correctly understand the distinction in results. As for the interpretation of it, I see what you are saying. Maybe that's the best way to look at the issue, and maybe most thinkers do. But some advocates have phrased the issue in ways implying that decoherence really "turns" the superposition into a mixture, or something "indistinguishable" from one. (That's what I call "Strong DI.") They also imply it is unrecoverable: it cannot be undone by further processing; it becomes part of the world. SDI makes a point of emphasizing "the environment" and its effects in themselves, and of de-emphasizing the special status of measuring devices and measurement. That goes beyond mere delayed choice. You might think this is merely a "school" quarrel, but that school is rather influential, and I simply judge them by their own accessible words.

However, I think my more significant proposal is the one above this one, now called "Proposal summary: can we find circularity of a single photon along a range of values?" Granted, it would be really hard to do the experiment. It is, however, theoretically very important to have a credible concept of how to find the proportions of circular states in a single photon. Normally we can only get binary answers to eigenstate questions. It is similar to Yakir Aharonov's "weak measurements" but easier to describe in popular terms.

Please read it and get back to me, and we might give decoherence a rest. I want to flesh it out and add a diagram; the version there was a test of compact description.

A description of this thought experiment might make a fitting post at your blog. I think you'd get interest there: an older version of the same at sci.physics.research propelled me to the top of search results for "quantum measurement paradox," and I am back up there with recent posts and controversies.

3/2/10 17:47  
Blogger Phil Warnell said...

Hi Andrew,

The explanation you've given Neil as to how his experiment would be considered under standard (Copenhagen) QM is of course correct. That, however, is not to insist it stands as an explanation of anything, unless of course you count the answer "because" as an adequate response to the question "how." You are also of course correct that it's in many respects analogous to the Wheeler delayed choice experiment, which is something I pointed out to Neil earlier in a comment.

The fact is, what is going on here is dealt with in a realistic way only by the de Broglie-Bohm interpretation. That goes both for Neil's decoherence concerns and for the Wheeler experiment. Although I would be the last to insist this is the last word on the subject, if one is interested in a clear, unambiguous and concise description it leaves all the rest well behind.

Actually, I think the thing that's going to burst the bubble on all this will be the ongoing push toward realizing quantum computing, in relation to what these machines will be able to do and what problems they can deal with. What's revealed will, in my opinion, eventually move things toward the Bohmian perspective, with the discovery that there is truly no superposition (or decoherence) of the particle, only of the information contained in the guiding wave. The fact is, even currently most who work in quantum computing take many worlds as their conceptual model of choice, and as such I believe they avoid the Bohmian approach: although its conceptualization is more straightforward, actually working with it is much more complex. It's more of a practical decision as opposed to worrying about working with the most correct model.

So Neil, if you're reading this, I once again suggest you post a link to your experiment on the de Broglie-Bohm forum I mentioned, and ask one of them to explain it precisely in Bohmian terms. I think you will discover two things: first, your experiment doesn't actually demonstrate decoherence, at least not as it's considered from the Bohmian conceptualization of it; and second, the action is all quite reasonable, almost to the point of rendering it trivial.



3/2/10 22:04  
Blogger Andrew Thomas said...

Hi Neil: "some advocates have phrased the issue in ways implying that decoherence really "turns" the superposition into a mixture, or "indistinguishable" from one. (That's what I call "Strong DI.") They also imply it is unrecoverable - cannot be undone by further processing etc - it becomes part of the world."

Well, yes, that's what I would say. Decoherence occurs when we make an observation, and explains why we never see superpositions in the real world. That's all.

So if you put a screen up past BS3 you will see the particles making points on the screen: decoherence makes the particles appear like points. And if you put a detector up past BS2, only one detector will fire, decoherence again making the particles appear like points. So whenever you make an observation, you just find point particles; you never see superpositions. That's all decoherence does.

But what is vital is that you can't break the experiment up into constituent parts and examine each part individually. You have to consider the experiment (and the environment, for that matter) as a whole. So when you put your screen up past BS3, the result you get will, at that point, depend on what is happening back at BS2 (whether there is a detector there or not). The whole thing must be considered as one system when you make the observation. What you cannot do is break the experiment up into bits and say something like "a mixture comes out of BS2"; you cannot take the output of bits of the experiment and combine things together. QM really shows us how important it is to consider everything as being connected: we can't consider anything in isolation, and we can't even consider the measuring apparatus as being isolated from the environment.

Good luck with your work, Neil. I'll have a look at your new proposal.

4/2/10 11:01  
Blogger Neil B said...

This comment has been removed by the author.

4/2/10 13:18  
Blogger Neil B said...

Thanks for more helpful comments. I don't want to carry on much more about decoherence because the more interesting problem is now posted and deserves attention. Yet tying up loose ends, this from Andrew is notable:
"Decoherence occurs when we make an observation, and explains why we never see superpositions in the real world."
If that is your opinion, fine (well, except that we then don't understand why the lost member of the superposition goes away, and we are back to square one with the Cat). And Wheeler's DCE is also based on the "old" QM assumption that measurements are special acts, creating "particle-like" nature when "applied."

But again, that is specifically not what the strong advocates of DI, as a new approach or insight into the quantum measurement problem, say, as I read them. (BTW, Orzel is not one of them.) They claim to take away the special importance of "measurement" and point to the environment as a generator of mixtures "by itself," from the action of decoherence on formerly coherent superpositions.

That's why Penrose complained: he saw no way for evolving wave functions to be collapsed by the environment per se, unless some novel intervening agency like quantum gravity was involved. Phil and the Bohm school also understand that old QM, whatever the environment, did not explain how any specific outcome was "chosen" by nature (saying "random" is just a phrase). I will look at the forum when I have more time.

The DI attempted to explain why we don't get all the outcomes together. But their explanation is false if taken at face value, and worthless if merely figurative (as I said before). According to my reading, they would expect a "mixture" effectively taking place at BS2 regardless of measuring devices being there, and that has testable consequences further down. But the only way to know is to ask them, not assume (however apparent it is to me that this is just what they mean by extension).

But again, the newer problem is more important to theory and doesn't clash with the schools. Think about it; it perplexes. Diagram up soon, I hope.

[Sorry if anyone got lots of similar versions in notifications. I should compose first and then go for it.]

4/2/10 13:24  
Blogger Neil B said...

For those seeking brevity: most quantum specialists say decoherence occurs intrinsically in the environment, apart from any attempt to measure anything. (It might be a byproduct of what we do, but need not be.) I agree with that much; it just happens due to noise etc. Stronger DI advocates say this process, prior to any "measurements," somehow sorts out superpositions all by itself and makes them like mixtures, wherever it occurs. Then, when we finally look, we see a given example; looking/measuring only reveals it to someone. My double MZI would provide literal falsification of that strong view.
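[Editor's sketch of the "it just happens due to noise" picture, under my own assumptions: pure phase noise as the only environmental effect, an equal superposition, and an arbitrary sample size.]

```python
# Average an equal superposition over many random environmental phases:
# the density matrix's diagonal populations survive, while the
# off-diagonal (interference) terms wash out. "Looks like a mixture"
# means exactly this, in this basis.
import numpy as np

a = b = 1 / np.sqrt(2)
rng = np.random.default_rng(1)
N = 100_000

phis = rng.uniform(0, 2 * np.pi, N)
psis = np.stack([np.full(N, a, dtype=complex),
                 b * np.exp(1j * phis)], axis=1)   # N sample states
rho = psis.T @ psis.conj() / N     # ensemble-averaged density matrix

print(np.round(np.abs(rho), 2))    # ~[[0.5, 0.0], [0.0, 0.5]]
```

The diagonal entries stay at 0.5 exactly, while the off-diagonals shrink toward zero as N grows; whether that diagonal matrix thereafter behaves like a classical mixture under every later operation is precisely what the proposed double MZI is meant to probe.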

Again, we can IMHO learn more by studying cases of repetitive interaction as in the current top post.

4/2/10 15:21  
Anonymous Ian Durham said...


I think I'm understanding your setup, but I need to run through the math on my own first before I comment (though I have an inkling of what's going on). I'm what is called a "processor" (someone told me that in a meeting once) - it means I have to mull things over first. Anyway, I'll take a look and let you know. Maybe I'll put up a blog post about it.

18/2/10 17:07  
Blogger Neil B said...

Ian, I appreciate your interest and hope the point becomes clear for you. In my comment at 6/1/10 01:20, I simplify the path explanation (and also settle on a 90 degree phase shift from half-silver reflection; I had used 180, and my original version was confusing, no wonder some critics got mixed up).

The basic point is that a mixture (or, by definition, anything "indistinguishable from" a mixture!) coming out of BS2 would give a different result after recombining at BS3 than would a true continued superposition. I hear different things from different people about whether waves affected by decoherence have become just like mixtures, and where in the scheme they do. But some enthusiasts like W. H. Zurek basically say it solves the measurement problem (e.g., the Cat paradox).

Yes, I would appreciate someone else doing a post about it (how about a paper too?) I also am encouraged by your general agreement on the interference issue and want to read your FQXi article.

18/2/10 20:57  
