Home › Forums › 2015 International Workshop on Quantum Foundations › Retrocausal theories › Retrocausation vs Retrodiction
This topic contains 11 replies, has 3 voices, and was last updated by Ken Wharton 4 years, 7 months ago.


July 13, 2015 at 8:44 pm #2693
Excuse me for starting another topic, but this is a matter that seems to overlap several discussion topics with ‘Retrocausal’ in the title, and it raises points I do not see discussed elsewhere.
By “retrodiction” I mean using present data to infer what happened in the past. I remember what I ate for breakfast this morning. On the other hand, “retrocausal” means (I think) deciding now to alter what happened in the past, something which falls outside my experience–I cannot now change what I earlier ate for breakfast–and is thus counterintuitive. Granted, quantum mechanics is not very intuitive, but does it really help to replace the familiar idea of retrodiction with something that seems better at home in science fiction?
Let us consider EPR-Bohm and assume that Alice is a competent experimentalist who has built and tested her apparatus, and is confident she knows how it works. So when it is set up to measure S_x and the pointer reads S_x = -1/2, she is quite confident that the spin-half particle that arrived a few moments ago, and has now been eaten up by the apparatus, had the property S_x = -1/2 when it entered her detector. This and some theoretical calculation–she knows that Charlie prepared the pair in a spin singlet state–allows her to infer the state S_x = +1/2 for Bob’s particle at a time before he measured it. No need for superluminal influences. If you want the details, see my “EPR, Bell, and Quantum Locality”, Am. J. Phys. 79 (2011) 954; arXiv:1007.4281.
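This inference is straightforward to verify in the standard formalism. The following numpy sketch (an illustration added here, not part of the original post) projects the singlet state onto Alice finding S_x = -1/2 and checks that Bob’s particle is then certainly S_x = +1/2:

```python
import numpy as np

# Basis states for a single spin-1/2 particle (hbar = 1 throughout)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

# Spin-singlet state of the pair: (|up,dn> - |dn,up>)/sqrt(2)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# S_x eigenstates |x+> and |x->
xp = np.array([1, 1], dtype=complex) / np.sqrt(2)
xm = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Projector for Alice (first particle) finding S_x = -1/2
P_alice_xm = np.kron(np.outer(xm, xm.conj()), np.eye(2))

# Probability of that outcome, and the post-measurement state
p = np.real(singlet.conj() @ P_alice_xm @ singlet)
post = P_alice_xm @ singlet
post = post / np.linalg.norm(post)

# Conditional probability that Bob's particle has S_x = +1/2
P_bob_xp = np.kron(np.eye(2), np.outer(xp, xp.conj()))
p_bob = np.real(post.conj() @ P_bob_xp @ post)

print(p, p_bob)  # Alice gets -1/2 half the time; Bob is then certainly +1/2
```

No superluminal mechanism appears anywhere: the conditional probability is pure inference from the singlet preparation plus Alice’s outcome.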
I would call Alice’s inference from the later pointer position back to S_x = -1/2 RETRODICTION, and see no reason to think of it as RETROCAUSAL. When my colleagues in experimental particle physics say that their detector was triggered by a fast muon coming from where two proton beams were colliding, they do not think that their detector somehow caused the muon to come in this direction–why should they? I think they would have a great deal of difficulty understanding their experiments if they thought that their apparatus caused, rather than reacted to, the muon’s arrival.
The problem of retrodiction in this context is what I have elsewhere called the “second measurement problem”. The first measurement problem is the one you’re familiar with: how to get the pointer to stop wiggling around and be in one place. When this has been solved one encounters the second problem: what does the pointer’s position tell us about the prior (microscopic) state of affairs the apparatus was designed to measure? You may find it helpful to take a look at the specific example in Sec. 3 of the attachment.
I have a certain suspicion–please forgive me if I step on a few toes–that a lot of talk about retrocausal effects arises from the inability of certain interpretations of quantum mechanics, including the “shut up and calculate” approach of textbooks, to provide a satisfactory solution to the second measurement problem using retrodiction.
There again, maybe stepping on a few toes will get a useful discussion going…
Bob Griffiths
July 14, 2015 at 1:47 am #2713
Thanks, Bob, for raising an excellent and important issue!
I’m fine with your definition of retrodiction.
On your definition of “retrocausality”, since I’m a block universe fan, I’m always pouncing on phrases like “alter the past” as meaningless: Any event in spacetime is what it is, and can’t “change” or “be altered”. (Just as you describe your breakfast, or as one might assign some electric field to a spacetime point E(x,t). It’s logically impossible for that value to “change”, for a fixed value of “t”.)
But there are things about the past that are hidden. (You don’t know what I had for breakfast, for example.) If you could make a decision that would determine what *I* have already had for breakfast, that would be retrocausal. It wouldn’t “alter” my breakfast; it would always have been what it was, but you still would have *caused* it. That’s the only meaningful block-universe sense of retrocausality. It *requires* hidden variables. Really, to avoid paradox, it requires intrinsically hidden variables, hidden from everyone. (Which makes the uncertainty principle handy… or at least some obvious interpretations of it.)
In your singlet-state example, it’s possible to show that it’s actually retrocausal, and not merely retrodictive. You are happy with a hidden variable, it seems (the original orientation of the two spins is hidden), so that piece is in place. The question is whether the future experimenter *causes* the earlier hidden spin, or merely *learns* about it.
The key tool to distinguish between these two options is what’s usually known as “statistical independence” (SI). SI is when the statistics on the possible values of the hidden variable are independent of the future measurement choices. SI is the natural assumption of most of the no-go theorems that lead to the conclusion there’s no spacetime-local account of entanglement.
But in your example, as in any retrocausal story, SI is generally violated. And if SI is violated, that’s direct evidence for retrocausality. (At the very least, if a future choice is causing a past hidden variable as I described retrocausality above, that would be a blatant and direct violation of SI.)
So why is SI violated? In your example, Alice makes a choice: she can align her measurement any way she wants. And lo and behold, she always infers a prior spin that is aligned with her choice of measurement (or anti-aligned). That’s a very strange initial spin distribution, always aligned with a future free choice. Any initial hidden variable distribution that obeyed SI could not accomplish this amazing match; therefore your account is retrocausal.
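The structure of this argument can be caricatured in a few lines of code (an illustrative toy added here, not Ken’s actual model; see his linked paper for that). If the retrodicted prior spin always tracks the freely chosen axis, then the hidden-variable distribution conditioned on the setting is manifestly setting-dependent, which is exactly a violation of SI:

```python
import numpy as np

rng = np.random.default_rng(42)

def inferred_prior_spin(setting_axis):
    # In the account under discussion, the retrodicted prior spin is always
    # aligned or anti-aligned with whichever axis the experimenter chose.
    sign = rng.choice([-1.0, 1.0])
    return sign * setting_axis

x_axis = np.array([1.0, 0.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])

# Sample the "hidden variable" conditional on two different free choices
lam_given_x = [inferred_prior_spin(x_axis) for _ in range(1000)]
lam_given_z = [inferred_prior_spin(z_axis) for _ in range(1000)]

# SI would require the lambda distribution to be the same for both settings;
# instead every sample tracks the chosen axis.
frac_on_x_given_x = np.mean([abs(l @ x_axis) == 1.0 for l in lam_given_x])
frac_on_x_given_z = np.mean([abs(l @ x_axis) == 1.0 for l in lam_given_z])
print(frac_on_x_given_x, frac_on_x_given_z)  # 1.0 vs 0.0: SI violated
```

Whether this setting-dependence should be read as causation or as mere inference is, of course, precisely the point in dispute in the posts that follow.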
It’s probably not a coincidence that your account of this experiment is (modulo some details) basically the same account that both Nathan Argaman and I have posted in this very forum. You might be better off reading Nathan’s, as it’s much simpler: it’s basically what you have described here except with an Alice-Bob symmetry restored, and an acknowledgement that this account is explicitly retrocausal.
Best, Ken
July 14, 2015 at 1:57 am #2714Hi Bob,
Retrocausal approaches were designed to explain (among other things) spacelike separated correlated experimental outcomes that violate Bell’s inequality without resorting to superluminal mechanisms. I don’t see how such outcomes would be explained by CH from reading your attachment. For example, how does CH explain the outcomes of the Mermin device attached?
Thanks,
Mark
July 16, 2015 at 2:06 am #2785
Dear Ken,
Thank you for your comments, but I think you may have misunderstood my point. When Alice says, on the basis of the output of her apparatus, that the particle had the property S_x = -1/2 when it entered her detector, she is NOT referring to a “hidden variable” in the sense that term is commonly used these days, as something in addition to or in place of the quantum Hilbert space. Instead, S_x = -1/2 refers to the quantum property represented by the subspace of the spin-half Hilbert space that contains the ket |x-> and whose projector is [x-] = |x-><x-|. Well-designed measurements measure quantum properties, and what I call the second measurement problem is to show, using quantum mechanics without ‘measurement’ as an axiom, that Alice’s conclusion is correct. See my ‘Consistent Histories Essentials’ in this workshop for additional comments; in particular my “EPR, Bell, and quantum locality”, Am. J. Phys., 79:954–965, 2011, arXiv:1007.4281 gives a lot of details.
Regarding your use of statistical independence and its violation in the case of hidden variables as evidence for retrocausality, let me respond as follows. First, one can construct a perfectly consistent probability theory for quantum mechanics using Hilbert subspaces and taking account of the fact that the projectors for different properties in general do not commute: in the spin-half case [x-] and [z+] do not commute, so you cannot put both of them into the same sample space. Among other things this gets rid of any notion of mysterious nonlocal influences in the EPR-Bohm situation–more on that in my reply to Mark Stuckey–thus undermining one reason you think you need retrocausality.
An important consequence of taking proper account of the noncommutation of quantum projectors is in blocking the following paradox: Alice measures S_x and determines an earlier value, say S_x = -1/2. But she could instead have measured S_z and then would have obtained a value for this at the earlier time. Therefore a spin-half particle has both an S_x and an S_z value at the same time. (Doesn’t this sound a bit like EPR?) It is clear that such cannot be the case in Hilbert space quantum mechanics, since there is no ray in the Hilbert space that combines the two incompatible properties. And it supplies the real reason behind the textbook assertion that one cannot simultaneously measure both S_x and S_z for a spin-half particle: what does not exist cannot be measured!
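The non-commutation that blocks this paradox is easy to exhibit numerically; a quick check (an illustration added here, not part of the original post):

```python
import numpy as np

# Projectors onto the properties S_x = -1/2 and S_z = +1/2
xm = np.array([1, -1], dtype=complex) / np.sqrt(2)
zp = np.array([1, 0], dtype=complex)
P_xm = np.outer(xm, xm.conj())
P_zp = np.outer(zp, zp.conj())

# The projectors do not commute, so the two properties cannot be put
# into the same sample space...
comm = P_xm @ P_zp - P_zp @ P_xm
print(np.allclose(comm, 0))            # False

# ...and their product is not itself a projector: there is no ray in the
# Hilbert space representing "S_x = -1/2 AND S_z = +1/2".
prod = P_xm @ P_zp
print(np.allclose(prod @ prod, prod))  # False
```

The failed idempotence in the second check is the formal counterpart of “what does not exist cannot be measured”: no subspace corresponds to the conjunction of the two properties.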
Alice’s choice of which measurement to perform is best regarded as determining the TYPE of information she acquires about the earlier state of the spin-half particle, rather than (retro)causing it to have some property at this earlier time. The trouble with your assertion that Alice “always infers a prior spin that is aligned with her choice of measurement (or anti-aligned)” is that it is based upon a common but inadequate way of visualizing a spin-half particle. We tend to think of such things as a little gyroscope with a precisely aligned spin axis. This is misleading from a quantum perspective because the gyroscope has zero angular momentum components in directions perpendicular to its spin axis, whereas we know that the sum of the squares in the quantum case is 3/4 hbar^2, not 1/4 hbar^2. While any classical picture will mislead to some extent, I think the following is preferable, and it is what I recommend to my students. Think of the gyroscope axis as randomly aligned, and of |z+> as a random alignment except that the z component of its angular momentum is positive. Alice can only measure one component of angular momentum, so her answer will depend upon which component she chooses to measure, but the value (for that component) is already there in advance. As I say, one has to treat such classical analogies cautiously; they are not a substitute for a proper analysis based on Hilbert subspaces, but they do help to provide some intuitive feel for what can otherwise be rather abstract mathematics. When classical analogies are taken too seriously one ends up with various hidden variable oddities, such as nonlocal influences and contextuality of quantum measurements, which a proper quantum analysis shows are incorrect.
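The 3/4 hbar^2 figure quoted here follows from S^2 = s(s+1) hbar^2 with s = 1/2; a quick numerical check (an illustration added here, not part of the original post):

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# Total squared angular momentum: s(s+1) hbar^2 = (3/4) hbar^2 for s = 1/2,
# three times the (1/4) hbar^2 of a classical gyroscope whose angular
# momentum lies entirely along its spin axis.
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 0.75 * hbar**2 * np.eye(2)))  # True
```

So two thirds of the squared angular momentum sits in components perpendicular to any chosen axis, which is what makes the precisely-aligned-gyroscope picture misleading.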
In summary, I do not think retrocausality is needed to understand EPR-Bohm. Things are local, causal, and noncontextual, provided you use Hilbert subspaces rather than (classical) hidden variables.
Bob Griffiths
July 16, 2015 at 2:07 am #2786Dear Mark,
I have discussed in considerable detail why the outcomes violating Bell’s inequality are perfectly consistent with the complete absence of superluminal influences. There is a very complete, but not all that readable, treatment in Ch. 23 of my book (chapters available at http://quantum.phys.cmu.edu/CQT/) where I show by explicit calculation that nothing at all happens to particle ‘b’ when a measurement is made on particle ‘a’. And in the next Ch. 24 I make explicit use of Mermin’s example in the article you called to my attention, to show that there are no nonlocal influences (in contrast to nonlocal correlations)–when you throw out (classical) hidden variables and instead do the quantum mechanics properly. Continuing on, Mermin was the referee on my paper “Quantum Locality,” Found. Phys. 41 (2011) 705; arXiv:0908.2914, and did his very best to poke holes in the argument, but without success. In it I prove, using Hilbert subspaces and consistency conditions, a form of Einstein locality: Objective properties of isolated individual systems do not change when something is done to another noninteracting system. This last is in certain ways quite technical, whereas the Am. J. Phys. item I mentioned in replying to Ken Wharton above is more accessible, and a simpler read than Ch. 24 of my book.
Bob Griffiths
July 16, 2015 at 8:25 pm #2808Hi Bob,
Thanks for your detailed response… I do think we almost have exactly the same perspective about how to make sense of these very experiments. So it’s interesting that I see this perspective as retrocausal, while you merely see it as retrodictive. Let me try once more to see if I can figure out exactly where this divergence comes from.
First of all, however the term “hidden variable” is typically used, and whatever negative connotations you think it may have, I hope we can agree to use it whenever there is *something* to be retrodicted, something that is hidden/unknown, whatever that something might be.
In this case, QM’s maximally informative preparation of an entangled state (we know it is in a singlet state) does not tell us *everything* there is to know when it comes to the consistent histories viewpoint. I think we agree on that; otherwise you couldn’t even be invoking retrodiction. There is some hidden variable in play, by my above definition.
The hidden variable you’re using seems to have two aspects: (1) which Hilbert-space subspace should be considered (which sample space, which relevant set of consistent histories), and (2) which possibility in this subspace actually occurred. We can leave (2) aside, I think, based on your latest response: this part really is retrodiction, at least for this singlet case. The retrocausality is happening for (1).
The evidence for this claim follows from: A) There’s no way to know the right sample space based on the complete QM preparation alone (it’s a hidden variable), B) the right sample space is correlated with the choice of future measurements, and C) this correlation violates the statistical independence condition on this sample space.
Given this (and the arguments I made in the previous post), the only reasonable way to wriggle out of the retrocausality is to claim that the sample space is not properly ontological, that it’s not a real thing which can be associated with causation. But you apparently think that what *does* exist lies within this sample space, and you also clearly stated just now that structures *outside* this sample space do not “exist”. (Even though they *would* exist under a counterfactual future measurement with a different future setting.) This seems to be all that’s needed, then, to deduce that the actual history has been retrocausally influenced by the future decision choice.
Again, this is all precisely compatible with the general approach and specific toy models that I lay out in http://www.mdpi.com/20782489/5/1/190 : the future measurement settings/geometry allow one to reduce these huge configuration spaces associated with multiple particles to a reasonable subspace in which one can assign ordinary probabilities. As I see it, this is exactly what you’re doing, just without noting the retrocausal aspect. Do you see any other differences that might explain our divergent interpretations?
Best, Ken
July 17, 2015 at 1:57 am #2818Dear Ken,
I appreciate your comments. However, I think only confusion and not clarity will result from trying to put the CH idea of a ‘framework choice’, specifying a quantum sample space, in the same bin with ‘hidden variables.’ Let me explain the difference.
A framework is chosen by a physicist constructing a stochastic quantum description; by contrast, hidden variables are regarded by their advocates as part of the ontology, the ‘beables’ to use Bell’s term. The closest analog to a framework in classical physics is a coarse graining of phase space in classical statistical mechanics: a division of phase space into cells of some size useful for discussing things like the microscopic origin of hydrodynamics. Just as for a coarse graining, the choice of a quantum framework is up to the physicist making up a quantum description, and the choice typically depends on the framework’s utility. It is not a physical property. See Sec. 4.3 in [1] for more details.
An important case comes up in the way CH deals with the well-known (or ‘first’ in my terminology) measurement problem. The big superposition |Psi> of the outcome pointer positions is, according to CH, perfectly fine, but if you put this into your framework you must pay attention to the single framework rule which prohibits combining incompatible quantum frameworks. Thus in the infamous example due to Schrodinger, it is totally misleading to think of |Psi> as representing a dead-or-alive cat, because it is incompatible with any of a large number of catlike properties (fur, etc.) represented by quantum projectors which do not commute with |Psi><Psi|. Conversely, any framework that allows a discussion of the cat as dead or alive cannot contain |Psi><Psi|. This is the same principle that prevents assigning to a spin-half particle values of both S_x and S_z: the combination is nonsense. Similarly for the measurement outcome pointer. If you want to help Alice understand the outcome of her experiment, the “pointer basis” is much more useful than the state arising from unitary evolution, and the pointer basis is perfectly good CH quantum mechanics.
The same approach resolves the second measurement problem, relating the measurement outcome to the earlier property that was measured in a way that makes sense. If Alice’s apparatus is set to measure S_x and we are concerned about whether it is functioning properly, we should use a framework in which the S_x projectors make sense at the previous time. They do not commute with the singlet state (i.e., its projector). So if we insist on using the singlet state, that, too, is perfectly good CH quantum mechanics, but it will be of no help in interpreting Alice’s result as a spin measurement, and it will not allow Alice to infer, on the basis of her outcome indicating S_x = -1/2, that Bob’s particle has the property S_x = +1/2 before he measures it. Note again that a proper framework, one containing S_x for Bob’s particle, must be used in order to arrive at this conclusion.
Thus in CH the same general principles are used for both predictions and retrodictions. Framework choice, absolutely essential in quantum physics and less essential in classical physics where the properties all commute, is neither causal nor retrocausal. Hidden variables theories, by contrast, start off by ignoring noncommutation, hoping or assuming that it is not true, and on this basis come up with all sorts of theorems that conflict with quantum mechanics, and when put to experimental tests the hidden variables always lose. If you’re interested in more details of how everything hangs together in a CH analysis of EPRBohm, I again recommend [2].
Bob Griffiths
[1] “The New Quantum Logic,” Found. Phys. 44 (June, 2014) pp. 610–640; arXiv:1311.2619.
[2] “EPR, Bell, and quantum locality”, Am. J. Phys., 79:954–965, 2011; arXiv:1007.4281.
July 17, 2015 at 11:17 am #2824Hi Bob,
I read Ch 24 and I don’t see what you’re proposing for an ontology that accounts for the Mermin device outcomes. All I see are principles of quantum mechanical formalism, which don’t provide any ontology. What physically, not formally, explains the correlations?
Thanks,
Mark
July 19, 2015 at 2:07 am #2890
Thanks, Bob – I have a pretty clear idea of where our paths diverge now.
Still, I happen to think that far more clarity is gained than lost when thinking about the CH framework choice as a hidden variable. (It’s certainly “hidden” at first, right?) Specifically, when thinking this way, it becomes far clearer which aspect of Bell’s theorem doesn’t go through. (Statistical independence of the allowed histories, because those histories must be in the proper framework associated with the future measurement settings.)
I also don’t seem to be able to simultaneously hold the two views that:
A) The framework choice is merely that which is useful, and any framework choice is in principle valid, if perhaps awkward: therefore it is not ontological.
and
B) Histories that are not in the proper CH framework are not “real”, need not be assigned probabilities, or even said to exist.
Since you brought up the stat mech analog, one point you might consider is the case of finite-edge effects. The way these effects work is that for finite systems, the possibility space in the bulk is a function of how big the system is (because each microstate is assigned an equal probability, and those states are defined globally). Changing the edge geometry changes the bulk properties. If I took the viewpoint that this possibility space had no ontological significance, then I would be unable to explain this causal channel – or at the very least, I would lose an important perspective on why the bulk behaved differently when the edge geometry was changed.
Best,
Ken
July 19, 2015 at 10:17 pm #2907Dear Mark,
The ontology of CH is explained in some detail in [1]. Fundamentally it is based on the assumption that physical properties can be represented by subspaces of a Hilbert space. This last is, indeed, a mathematical formalism, but it enables one to reason rationally about the physical world. If you want to address the question of whether it is shorter in distance to fly from New York to Los Angeles with an intermediate stop in Chicago or an intermediate stop in Dallas, you have to somehow translate this into an abstract formalism of distances on the surface of a sphere. However, I suspect your question may have a bit of the flavor of “Explain those correlations in intuitive terms that make good classical sense”, to which I have to respond that I don’t think there is such an explanation. In developing an intuition about the quantum world we have to pay attention to the way in which it differs from the world of classical physics. I agree that intuition, not just formalism, is needed in order to do good physics, and when I wrote Ch. 24 I hoped it would make a significant contribution to an intuitive understanding of some of the oddities of EPRBohm. That is why I construct various frameworks in Sec. 24.2 in order to deal with the conundrum stated in what I hope are understandable physical terms in Sec. 24.1. Working through them helped me gain some measure of intuitive understanding of what goes wrong with Mermin’s device, and I hope that it will serve this purpose for others–although someone else might be able to word it better than I have.
Bob Griffiths
[1] R. B. Griffiths, “A Consistent Quantum Ontology”, Stud. Hist. Phil. Mod. Phys. 44 (2013) 93; arXiv:1105.3932
July 19, 2015 at 11:35 pm #2909Dear Ken,
I myself do not see where anything but confusion results from identifying framework choices with hidden variables. If you call a choice of coarse graining of the classical phase space a ‘hidden variable’ you are certainly not using the latter term in the way it is employed in Bohmian mechanics, which was one of Bell’s motivations: the particle positions in BM are NOT part of the Hilbert space; they are an added mathematical structure.
I cannot make any sense of your reference to “which aspect of Bell’s theorem doesn’t go through” somehow associated with “statistical independence of the allowed histories”. Would you care to clarify?
Regarding your views (A) and (B). View (A) I think I understand and probably agree with, but (B) seems confused. There is, to begin with, no “proper” CH framework; have you a particular task in mind? I am not sure what you mean by “real”. Various incompatible frameworks can be assigned probabilities, but they cannot be combined. Regarding “existence” I don’t understand what you are getting at. Given that you have trouble holding (A) and (B) simultaneously, my suggestion would be to get rid of (B), as it doesn’t seem to make sense.
Regarding your final paragraph. I am slightly familiar with boundary conditions on many particle systems in statistical mechanics, but I don’t understand your point. What is your ‘possibility space’? Changing boundary conditions can change bulk properties in an equilibrium ensemble; e.g., when one has a phase transition, but I miss the significance of this for the topic under discussion.
Bob Griffiths
July 20, 2015 at 12:57 pm #2911> I cannot make any sense of your reference to “which aspect of Bell’s theorem doesn’t go through” somehow associated with “statistical independence of the allowed histories”. Would you care to clarify?
I have a detailed but very simple toy Ising model showing how statistical independence can fail at the level of allowed histories in http://www.mdpi.com/20782489/5/1/190 . If you’re interested, it probably will also better explain what I mean about finite-edge effects, because it is built from a perfectly analogous model that doesn’t have time in it at all. (The histories are analogous to instantaneous global states.)
> Regarding your views (A) and (B). View (A) I think I understand and probably agree with, but (B) seems confused. There is, to begin with, no “proper” CH framework; have you a particular task in mind?
Instead of “proper”, I guess I should have said whatever “chosen” framework one uses to calculate probabilities. (You may be able to choose one arbitrarily, which is a great feature of CH, but you still have to choose one.)
>I am not sure what you mean by “real”. Various incompatible frameworks can be assigned probabilities, but they cannot be combined. Regarding “existence” I don’t understand what you are getting at.
It’s the failure-to-combine that concerns me; I want to retrodict which history really happens (or at least a set of histories for which one of them has really happened). This is the same goal that Gell-Mann and Hartle started with in a recent piece they did on CH.
I think the biggest difference between us is that you (and Gell-Mann/Hartle) are willing to bend the rules of classical probability, and I’m not. In your Am. J. Phys. piece you clearly explain that there’s this new type of quantum logic where you’re simply not allowed to consider the probability of histories that lie outside of any “chosen” framework. If you’re willing to give up on classical probability theory, maybe that’s fair.
But consider this: CH doesn’t fall apart if one sticks with classical probability! Instead, it works in almost the same way. There’s always some natural history framework, as you note, determined by the future measurement settings. Even when using classical probability, the results of CH go through if *these* special histories are the only ones that are allowed to really happen. (The other histories aren’t ruled out on the basis of some new “single framework rule”; they’re ruled out by the boundary constraints from the future settings.) In other words, you can swap out nonclassical probability for retrocausality.
Whether or not you’re inclined to explore this tradeoff, I hope you at least see that your single framework rule can be effectively replicated by a truly retrocausal theory, without such a rule. (The only difference is that there would have to be one “proper” framework, not any “chosen” framework.)
But this raises the question: if your “quantum logic” can be replicated by “classical logic + retrocausality”, is there retrocausality hiding in your quantum logic in the first place? I’m sure you would say no, but perhaps that’s because you’ve already come to terms with this new style of thinking about histories that lie in different frameworks. But I haven’t come to terms with it. And because it’s impossible to tell which histories lie in different frameworks until one knows the future settings, your logic looks retrocausal to me.
Cheers,
Ken
