2015 International Workshop on Quantum Foundations › Retrocausal theories › Quantum causal models, faithfulness and retrocausality (onl. 7/16 @ 11pm UTC+10)
June 30, 2015 at 3:53 am #2448
Wood and Spekkens (2015) argue that any causal model explaining the EPRB correlations and satisfying no-signalling must also violate the assumption that the model faithfully reproduces the statistical dependences and independences—a so-called “fine-tuning” of the causal parameters; this includes, in particular, retrocausal explanations of the EPRB correlations. I consider this analysis with a view to enumerating the possible responses an advocate of retrocausal explanations might propose. I focus on the response of Näger (2015), who argues that the central ideas of causal explanations can be saved if one accepts the possibility of a stable fine-tuning of the causal parameters. I argue that, in light of this view, a violation of faithfulness does not necessarily rule out retrocausal explanations of the EPRB correlations, although it certainly constrains such explanations. I conclude by considering some possible consequences of this type of response for retrocausal explanations.
July 3, 2015 at 7:53 pm #2461
The argument that retrocausality in a block universe violates faithfulness is essentially reflected by an anonymous referee in footnote 3 of our paper http://www.ijqf.org/wps/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf:
“I do not see how anything truly ‘retrocausal,’ in a dynamical sense, can occur given global time-symmetric constraints on spacetime. The authors seem to me to be too charitable here, a future boundary condition implies an adynamical block world, in which talk of dynamics or intervention is superfluous at best, and inconsistent at worst.”
We really need a direction for the undirected link in Evans’ Fig 4 to have “objective” causality. I think it’s best to view causality as a matter of perspective within the block universe (“subjective” causality per Price, as Evans explains). Given the 4D perspective of the block universe, an “objective” explanatory mechanism need not involve causation in “retro-time-evolved” form or otherwise, as we argue on pp 128-130 of our paper.
Thus, in RBW, we have proposed a formal counterpart to the “nomological dependence” of Hausman and the “non-causal dependency constraints” of Woodward that we call the adynamical global constraint (AGC). A paragraph explaining the AGC conceptually is on p 130 and a mathematical explanation is provided on pp 144-145. An application to the twin-slit experiment is on pp 146-154. The AGC brings Price’s Helsinki toy model with its “global constraints” to fruition.
July 12, 2015 at 12:47 am #2648
Thanks for this interesting and useful paper! I like the changes in 4.3 that I noticed from the original version, concerning how the final boundaries are implemented. And of course there’s a lot of connection between our papers here; the “internally cancelling paths” you refer to has got to be due to some *symmetry*, wouldn’t you think?
More thoughts…. I realize you’re throwing out a grab-bag of possible responses to Wood-Spekkens, but in 4.3 I kind of lost track of how all these various options relate to each other: which options were related and which were distinct approaches.
Concerning the analysis of Figure 4 that blends together A, B and $\lambda$ into a “whole”, this sounds a little sketchy the way it’s described. The whole *point* of these retrocausal pictures is to get things back into spacetime, and this ‘holistic’ perspective backs off from that perhaps more than you need to. Why can’t they just be linked via “mutual causation”?
An explicit example might help here… and as you might guess, I have in mind the normal modes of a laser cavity, always my go-to example for an “all at once” account. You could make the analogy where “A” is the position of the left cavity mirror, “B” the position of the right cavity mirror, and $\lambda$ is the wavelength in between. The entanglement analog would then be a known probability distribution over possible wavelengths.
This example features what you’re going for, I think. Given a known distribution on $\lambda$, you can treat A as causing a probability distribution for B, but you can also treat B as causing A. Furthermore, A and B together can “cause” particular values of $\lambda$, etc. So all three of these parameters are *distinct*, but still maintain this mutual-causation relationship which I think is what you are going for with this Woodward-style account…?
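To make the mutual-constraint reading concrete, here is a minimal numerical sketch of the cavity analogy. It assumes the standard standing-wave condition $\lambda_n = 2L/n$ for a cavity of length $L = B - A$; the function names and numbers are mine, purely illustrative:

```python
# Toy version of the laser-cavity analogy: mirror positions A and B and the
# wavelength lambda mutually constrain one another through the single
# standing-wave condition lambda_n = 2(B - A)/n.

def allowed_wavelengths(A, B, n_max=5):
    """Read the constraint as 'A and B cause lambda': positions fix the modes."""
    L = B - A
    return [2.0 * L / n for n in range(1, n_max + 1)]

def right_mirror_position(A, lam, n):
    """Read the same constraint as 'A and lambda cause B'."""
    return A + n * lam / 2.0

# One global constraint, two causal 'directions' of reading it:
A, B = 0.0, 1.0
for n, lam in enumerate(allowed_wavelengths(A, B), start=1):
    assert abs(right_mirror_position(A, lam, n) - B) < 1e-12
```

Each variable can be computed from the other two, so which one counts as the "cause" is a matter of which reading we adopt: mutual causation under a single constraint.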
I need to think a bit more about your treatment of amplitudes. My first instinct was that I didn’t like it, because I can’t imagine an “amplitude” is in any way ontological. But on another read I see that you’re just using mathematical cancellations, without much commitment as to what these amplitudes might mean. One warning I’ll give, though, is that the notion of “cancelling amplitudes” is not generally a spacetime-local concept, and only showed up that way in those papers of mine because we were using a discrete-path version of the path integral.
Best, Ken
July 12, 2015 at 1:57 am #2653
Thanks for your interesting paper. For what it’s worth I vote for Fig. 4 of your paper.
I got a bit stuck on Sect. 4.2. How do we choose the values of $\alpha_2$ and $\beta_2$? Do we have to anticipate what angles we are going to choose in other repetitions of the experiment?
On a more general note, is there any mileage in thinking about the SEPRB approach or more formal methods of mapping a bipartite system (i.e. with a causal fork) onto a single quantum system? As you explain on p. 2, the idea is to discover causal structure from observed statistical correlations. The statistical correlations are the same in the two cases just referred to, so does that mean the causal structure should be the same? If so, maybe that can be used to support Fig. 4, and even, with more maybes, backward-pointing arrows!?
July 13, 2015 at 6:15 am #2675
Thanks, Mark – appreciate the input! A couple of comments…
On the footnote comment, I don’t think that dynamical and adynamical explanation should be exclusive. There are plenty of cases in science where two different explanations can be provided for one and the same phenomenon. For example (I can’t remember where I first heard this; maybe Harvey Brown’s book, or Jim Woodward’s), a helium balloon held by a passenger in a plane that is taking off floats towards the front of the plane. One explanation is that the air in the cabin moves inertially to the rear of the plane, making the air at the front less dense, so the balloon moves forward. Another explanation is that the plane is an accelerating reference frame, which can be modelled as a gravitational field with its source toward the rear, so again the balloon moves forward. We have a formal relationship between the two explanations, embodied by Einstein’s equivalence principle.
While the relationship between the 3D and 4D cases of explanation the comment refers to is slightly different in form from this example, we have no reason to forgo our 3D explanations in favour of a 4D one (despite the beneficial elements of the 4D one). Keeping in mind that we humans are without exception dynamical beings (we experience the world dynamically, in time), it seems disingenuous to deny the possibility of a causal (dynamical) explanation.
Keep in mind, also, that anything one says about global constraints and retrocausation must also be said about ordinary causation. Seeing as we’ve been able to produce very successful (albeit incomplete, according to this forum) dynamical theories modelling the world, and we regularly employ causal/interventionist concepts as we explore the world, if the world turned out to obey global constraints as per RBW, we should be able to provide a story joining the two explanations together. If we could do that for ordinary causation, there seems to be no reason it wouldn’t work for retrocausation.
Thus there’s no reason why retrocausality, dynamics or interventions should be “superfluous”. Yes, a properly retrocausal picture, like Ken’s Lagrangian framework and your RBW, is best represented as adynamical, but one would hope a dynamical picture can be extracted therefrom (of the sort that can give us causal and retrocausal explanations). Dynamics, causation, interventions, and so on look a bit different to what we would usually think (and it is an interesting, and largely incomplete, task to spell this out), but they are not, by my lights, inconsistent with the adynamical picture.
I’m just missing the connection, though, between this footnote comment and the issue of faithfulness. Could you please give a bit more detail for how these relate?
Pete
July 13, 2015 at 8:32 am #2677
I’ll try to respond to your comments in the paragraph order you list them.
1. Agreed. In the source of the internal cancelling paths idea, a paper by Paul Näger, it is a quirk of the causal relationships that provides the cancelling. The model that I present, originating from Gerard Milburn, employs no such quirk, and the internal cancelling paths arise from restricting the possible amplitudes that contribute to the final joint probability. Insofar as you say, “imposing symmetries is itself a sort of fine tuning, in that a large parameter space is restricted to a special (symmetrical) subset”, I think we are both catching on to the very same idea (though obviously with different detail).
(In fact, putting it this way could provide a more general statement of the conditions under which an internal cancelling paths mechanism could work retrocausally.)
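A generic linear toy model (my own illustration, not the Milburn model) shows the kind of faithfulness violation that cancelling paths produce: two causal routes from X to Y are fine-tuned to cancel, so X and Y come out statistically independent despite being causally connected.

```python
import random

# Illustrative fine-tuned linear model: X -> M -> Y and X -> Y directly,
# with the direct coefficient chosen so the two causal paths cancel
# exactly: total effect = a*b + c = 0.
a, b = 2.0, 0.5
c = -a * b  # the fine-tuning; any other value exposes the dependence

random.seed(1)
N = 200_000
xs, ys = [], []
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    m = a * x + random.gauss(0.0, 1.0)          # X -> M
    y = b * m + c * x + random.gauss(0.0, 1.0)  # M -> Y and X -> Y
    xs.append(x)
    ys.append(y)

mean_x = sum(xs) / N
mean_y = sum(ys) / N
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / N
# cov_xy is ~0: the statistics 'hide' the causal paths, violating faithfulness
```

On Wood and Spekkens' terms this independence is unstable: perturb c slightly and the dependence reappears. The question at issue here is whether a symmetry can make such a cancellation stable rather than conspiratorial.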
2. I think “grab-bag” is the right sort of phrase to describe the mess I left at the end of the paper! The main thrust of that section is that the analyses of Hausman and Woodward are essentially the same, and the view therein is amenable to being interpreted along perspectival lines. I’m hoping that this can be an illuminating exercise for understanding how to tell some sort of dynamical story about the dependence relations (a different story for each perspective)—this is related to the point I was making in reply to Mark above—and to illustrate that the right dynamical story may not be representable in an orthodox causal model.
3/4/5. Yes, sketchy it is. This issue also relates a little to the appropriateness of the instrument of causal modelling. Causal modelling requires distinct variables with distinct causal mechanisms relating them. Of course, it’s partly up to us as modellers to choose the most suitable variables, etc, and explicate the relations between them. I take your question to be, why can’t we describe the relation as mutual causation rather than some more “holistic” story? Well, I would like to think that that is the way I’m describing it. But my “grab-bag” obviously isn’t doing that job very well! Point taken.
I think your laser example is great—in fact, as I re-read it a couple of times it really does capture a lot of what I want to say about the EPRB case. This might provide a robust analogy… let me consider it some more.
6. In what sense do you mean spacetime-local? Am I right in assuming you mean Lorentz invariant or, equivalently, action-by-contact? Hmmm… perhaps you could say a bit more please about this problem?
Pete
July 13, 2015 at 8:56 am #2679
Cheers for having a look at the paper!
The intermediate settings in Section 4.2 are supposed to be the alternative to whatever the final setting is in that run of the experiment (and so other runs are not relevant to the calculation). This then gives probabilities for the actual measurement pairs that match the EPRB probabilities. (Since the intermediate measurements are summed over, it’s not as though they’re actually measured.)
Theoretically, they’re chosen as those settings simply because, as a result, the amplitudes cancel to give the correct joint probabilities. More practically (this is the original idea that led Gerard to think of the model), under the right circumstances, one could tune the intermediate measurement to be (i) no measurement at all (normal EPRB correlations), (ii) a projective measurement (destroying the correlations) or (iii) a partial measurement with correlations somewhere between these extremes. This would then allow an experimenter to tune between quantum and classical phenomena.
On your more general point, yes—this is exactly the sort of thing I’m thinking about following on from this. I think there’s scope to point out some simple SEPRB-type cases that involve fine-tuning (one very simple system features in a pre-print of Huw and Ken’s that Huw kindly showed me) and then argue that they are, rather counter-intuitively, subject to Wood and Spekkens’ analysis too.
Pete
July 15, 2015 at 2:31 pm #2771
If you don’t see the connection between my footnote and faithfulness, it’s likely because I don’t properly understand faithfulness and there is no such connection. I was thinking faithfulness implies no ad hoc causal mechanisms, such as fine-tuned future boundary conditions. A time-like causal link that isn’t directed constitutes fine tuning. That’s what I was thinking, but I’m clearly confused 🙂
Concerning your comments directed at the footnote per se, I agree that an adynamical explanation in 4D doesn’t preclude a corresponding 3D time-evolved explanation in general. However, in the case of the EPRB correlations, it’s not a time-evolved but a retro-time-evolved explanation that is invoked to “save the appearances” (of dynamism). Given the co-reality of the present and future required to ground retrocausal explanation in, for example, TI and TSVF, one must invoke “pseudo-time” processes to create a dynamical story. Where is this process taking place? If instead you have a “global constraint” (Huw’s language and ours) that explains the distribution of the 4D ontological entities in the BW (e.g., TI’s completed transactions), then the “pseudo-time” process is certainly superfluous from a physics standpoint. That is, when a dynamical explanation, motivated by our dynamical perspective, requires extraneous mechanisms relative to an empirically equivalent adynamical explanation, those mechanisms are superfluous. As Ken points out in his essay, and as you acknowledge in your post, the desire for dynamical explanation is based on our biased dynamical perspective. Nature doesn’t seem to care about our biases: e.g., our Earth-bound perspective clearly indicates Earth is the center of the universe, and our low-velocity perspective clearly indicates that velocities add without limit. I haven’t seen TI or TSVF mention a corresponding “global constraint,” but RBW provides one (cf. the RBW and TSVF explanations of the Danan et al. experiment starting on p 131, section 2, of http://www.ijqf.org/wps/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf).
That’s not to say one can’t construct a robust time-evolved counterpart to RBW; I think PTI might be exactly that. And you do actually gain something explanatory in PTI that you don’t have in RBW, so its additional mechanisms aren’t superfluous, i.e., PTI contains a robust model of Now (the experience of a preferred present moment). Some (most?) physicists argue that physics needn’t bother attempting to model Now, but I think the lack of a robust Now in the BW creates a serious objection to the BW that is arguably justified, as I posted in the General Block Universe Discussion.
I’ll stop prattling on and let you respond 🙂
July 16, 2015 at 12:57 pm #2794
Let me start by saying that I don’t disagree with anything you’ve said there. But I also don’t think you’ve quite understood my point. I’m not trying to advocate a position wherein we need to add some dynamical element to our 4D description in order to satisfy our temporally biased perspective; in fact, quite the opposite. Let me try to explain…
Let us take it as given, for the sake of this discussion, that reality is best understood as a 4D block that obeys a global constraint of the sort you advocate (and a few others on this forum also advocate). If this is indeed the case, then since it is also the case that we occupy this reality and we have been able to provide rather successful causal and dynamical models representing the phenomena around us, then there must be some sort of story explaining how we can do this given that reality is actually a 4D block obeying said constraints.
But we don’t do this by giving some competing dynamical story in terms of time evolving laws. What we’re after is a dynamical story that arises as a result of a combination of the global constraints and our spatiotemporally embedded perspective.
If the global constraints were of a sort that gave us a kind of Newtonian reality, then the task of telling that dynamical story would be trivial: the 4D picture would be one that was consistent with the sort of time evolving laws that we usually take to describe/explain ordinary phenomena. But we know that there are significant problems created for this sort of dynamical view by the EPRB correlations; time evolving laws just won’t “save the appearances”.
When the global constraints contain final boundary conditions, the appropriate dynamical picture that we tell about our local experience of the phenomena will involve some sort of retrocausality. But we shouldn’t need to introduce any fancy new dynamical mechanism (like the pseudotemporal processes you mention); we just take note of the global constraints, and our particular perspective, and take these into account when formulating our theory describing the phenomena. In particular, our spatiotemporally embedded perspective is going to mean that we have epistemic access only to initial boundary conditions, and so we’re going to be providing some sort of probabilistic story based on these and in the absence of any final boundary conditions.
That this story is dynamical is a feature of how we tell it, not as a result of some objective “dynamicism” in reality. And I think this is the most fruitful way to understand retrocausality.
And just a last point on fine-tuning. Faithfulness is the assumption that statistical independences shouldn’t hide causal dependences—and if they did, the causal parameters would have to be fine-tuned to do so. Thus the fine-tuning under consideration is of the causal mechanisms relating variables, such that they hide the statistical dependence that we would expect to see from such mechanisms. I think this might be slightly different from a claim that simply rules out ad hoc causal mechanisms.
July 16, 2015 at 2:37 pm #2799
I agree completely with your premise that “since it is also the case that we occupy this reality and we have been able to provide rather successful causal and dynamical models representing the phenomena around us, then there must be some sort of story explaining how we can do this given that reality is actually a 4D block obeying said constraints.” This is essentially (and formally) the claim that correspondence with successful existing theories is required of all new theories. However, this is a bit tricky for a new theory (like RBW) that underwrites quantum physics (a successful existing theory) because it’s precisely quantum physics that we want to interpret. So, if Bob is looking for a dynamical interpretation of quantum physics and is given an adynamical theory underwriting quantum physics, then Bob is not going to be satisfied.
I also agree completely that “we don’t do this by giving some competing dynamical story in terms of time evolving laws. What we’re after is a dynamical story that arises as a result of a combination of the global constraints and our spatiotemporally embedded perspective,” unless of course you’re using the formalism of that competing dynamical story to make formal correspondence with existing theories. In that case, you might then also satisfy Bob.
Finally, I agree completely with you about the epistemic (rather than ontic) view of retrocausality in a BW with a global constraint: “That this story is dynamical is a feature of how we tell it, not as a result of some objective ‘dynamicism’ in reality. And I think this is the most fruitful way to understand retrocausality.” But, I suspect Bob would disagree, as he seems to desire an ontic basis for retrocausality.
Thus, there doesn’t seem to be any disagreement between us, but does Bob have to abandon his desideratum? I don’t think so. I think it may be possible to construct a corresponding (ontic) dynamical model that isn’t superfluous relative to the 4D adynamical global constraint model, e.g., PTI and RBW. These two models aren’t ontologically equivalent, but they are complementary. It just comes down to how you view consciousness and experience.
For example, if you adopt an “Eastern” worldview a la Nisargadatta Maharaj (author of “I Am That”), the dynamical, time-evolved experience is not fundamental, as seen in this quote (442, 1973):
‘Who am I’. The identity is the witness of the person and sadhana consists in shifting the emphasis from the superficial and changeful person to the immutable and ever-present witness.
In that view, accounts in 4D with adynamical global constraints are (probably) fundamental to 3D time-evolved accounts with dynamical laws. However, in the “Western” worldview, physics is done by 3D time-evolved beings who can imagine a 4D perspective (and alter their perceptions through meditation, for example). But, just because we can imagine it doesn’t make it ontic. Thus, in the “Western” worldview, the 3D time-evolved accounts with dynamical laws are fundamental to accounts in 4D with adynamical global constraints. Note: A mere dynamical dressing for a 4D account a la “pseudo-time” processes, doesn’t constitute a fundamental dynamical account. I’m thinking, again, of something like the relationship between PTI and RBW where the formal mechanisms of PTI aren’t superfluous relative to those of RBW. That’s the sense in which I mean they’re complementary and not equivalent.
Hopefully, I haven’t strayed too far from rigorous discourse 🙂July 16, 2015 at 3:00 pm #2800
I think I’m close to understanding faithfulness. Let me respond to your last paragraph so you can correct me as necessary.
Retrocausality avoids the non-locality conclusion of Bell inequality violations by denying statistical independence (SI). It does this by providing a causal mechanism that hides a true statistical dependence (where we thought we had SI); thus Wood and Spekkens claim that such causal mechanisms are fine-tuned a la superdeterminism (and therefore undesirable). Is this correct?
July 20, 2015 at 8:19 am #2910
Yes – if by SI you mean that any hidden variables are independent of the future measurement settings, then this is a good gloss of the position.
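For concreteness, here is a minimal sketch (my own construction, not a model from the thread) of why denying SI is so powerful: if the hidden variable is allowed to depend on both future settings, it can simply be a joint outcome pair drawn with the quantum probabilities, reproducing the singlet correlation E(a, b) = -cos(a - b) with no further mechanism at measurement.

```python
import math

def outcome_distribution(a, b):
    """Joint outcome probabilities at settings a, b for the singlet state.

    With SI denied, the hidden variable lambda may be taken to be the
    outcome pair itself, sampled from this settings-dependent distribution.
    """
    p_same = 0.5 * math.sin((a - b) / 2.0) ** 2  # P(+1,+1) = P(-1,-1)
    p_diff = 0.5 * math.cos((a - b) / 2.0) ** 2  # P(+1,-1) = P(-1,+1)
    return {(+1, +1): p_same, (-1, -1): p_same,
            (+1, -1): p_diff, (-1, +1): p_diff}

def correlation(a, b):
    """Expectation of the product of the two outcomes."""
    return sum(x * y * p for (x, y), p in outcome_distribution(a, b).items())

# Matches the quantum prediction E(a, b) = -cos(a - b):
for a, b in [(0.0, 0.0), (0.0, math.pi / 3.0), (0.2, 1.4)]:
    assert abs(correlation(a, b) + math.cos(a - b)) < 1e-12
```

The fine-tuning worry is then precisely that this settings-dependence, whether superdeterministic or retrocausal, must be tuned so as to still reproduce the no-signalling statistics.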