Explicit models of retrocausation
Tagged: retrocausality, Stochastic Quantization, toy models


July 8, 2015 at 7:27 am #2524
First, the big picture:
Quantum Mechanics is now 90 years old, and still nobody understands it. It is remarkable for having many formulations – wave mechanics and matrix mechanics were introduced from the outset, and Bohmian mechanics was one of the primary motivations for Bell’s work. Bell’s nogo theorem implies (at least to many of the participants of this session) that the leading route for developing an understanding of QM is to seek a reformulation of QM which involves retrocausation.
In fact, there exists a reformulation of Quantum Field Theory, called Stochastic Quantization and developed by Parisi and Wu in 1981, which perhaps can achieve this. Unfortunately, it is generally applied to Euclidean field theory, and its application to Minkowski space, in which a time axis is singled out, has not been sufficiently explored yet. Furthermore, in order to achieve an understanding of QM based on it, one would need to discuss measurements within this theory. This would involve singling out not only a time axis but also a direction of time, perhaps by imposing low-entropy boundary conditions in the past (in addition, it would involve introducing macroscopic pointer variables). This has not been seriously attempted yet, but perhaps by the time QM becomes 100, an understanding of QM along these lines will be developed.

Now to my modest contribution:
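For readers unfamiliar with Stochastic Quantization, the core Parisi–Wu idea can be caricatured in zero dimensions: evolve a variable in a fictitious time under Langevin dynamics driven by the Euclidean action, and its stationary distribution reproduces the path-integral weight exp(−S). The sketch below (my own illustration, not code from any of the papers discussed) uses the simplest Gaussian action:

```python
import math
import random

def langevin_endpoint(dS, n_steps, dtau, rng, x0=0.0):
    """Evolve dx/dtau = -dS/dx + noise in fictitious time tau and return
    the endpoint.  The stationary distribution of this process is
    proportional to exp(-S(x)), the Euclidean path-integral weight."""
    x = x0
    sigma = math.sqrt(2.0 * dtau)
    for _ in range(n_steps):
        x += -dS(x) * dtau + sigma * rng.gauss(0.0, 1.0)
    return x

# Zero-dimensional "field theory": Gaussian action S(x) = x^2/2, dS/dx = x.
# After equilibration in fictitious time, samples should look standard normal.
rng = random.Random(42)
samples = [langevin_endpoint(lambda x: x, 1000, 0.01, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # near 0 and 1
```

As Nathan notes below, one may equally well stipulate the equilibrium distribution directly (here, just draw standard normals) and dispense with the fictitious time altogether.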
I worked on this for a while several years ago. The main result was an introductory article in Am. J. Phys. which I would like to discuss here:
http://scitation.aip.org/content/aapt/journal/ajp/78/10/10.1119/1.3456564
In order to demonstrate the importance of the arrow of time in discussions of Bell’s theorem, an explicit retrocausal toy model, capable of reproducing the predictions of QM for Bell-type experiments, was presented. In comparison with the explicit nonlocal toy model which Bell himself included in his original paper, this model appears to be much more natural, i.e., it is not contrived. It appears unlikely that corresponding toy models can be developed for the other options which have been put on the table, involving denial of free will, various interpretations of what denying realism may imply, or accepting conspiratorial options.
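To make the flavor of such a model concrete, here is a minimal retrocausal sketch in this spirit (my own paraphrase for illustration – consult the article itself for the actual construction): the hidden polarization λ leaving the source is conditioned on Alice’s *future* setting, and Bob’s outcome follows Malus’s law. The resulting correlation matches the quantum prediction E(a, b) = cos 2(a − b) for polarization-entangled photons, which violates the Bell inequalities:

```python
import math
import random

def run_pair(a, b, rng):
    """One photon pair.  Retrocausal step: the hidden polarization lam is
    aligned (or anti-aligned) with Alice's *future* setting a."""
    if rng.random() < 0.5:
        lam, A = a, +1                    # polarization along Alice's axis
    else:
        lam, A = a + math.pi / 2, -1      # orthogonal polarization
    # Bob's detector fires "+" with Malus's-law probability cos^2(lam - b)
    B = +1 if rng.random() < math.cos(lam - b) ** 2 else -1
    return A, B

def correlation(a, b, n=100_000, seed=1):
    rng = random.Random(seed)
    return sum(A * B for A, B in (run_pair(a, b, rng) for _ in range(n))) / n

# Quantum prediction for polarization-entangled photons: E = cos 2(a - b)
a, b = 0.0, math.pi / 8
E = correlation(a, b)
print(E)  # close to cos(pi/4), about 0.71
```

Note the model is local in the ordinary sense at each wing; the Bell-violating correlation arises entirely from the source distribution depending on a future setting.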
The topic of explicit models has already been brought up in this section, in the work of our chair, Ken Wharton, and collaborators. My model is essentially identical with the zero-gamma limit of theirs.

The connection with Stochastic Quantization was considered by the editors to be too much to include in an Am. J. Phys. article, and has not been properly published. It is available in a preprint version, on arXiv:
http://arxiv.org/abs/0807.2041v1
see Sec. IV there. I would love to see comments on this.

I would also like to bring up the issue of the history of such models, in the hope that the various participants can help in clarifying the picture. Retrocausation has been brought up repeatedly in the discussion. The first instance even predated Bell’s work: Costa de Beauregard introduced it in the context of EPR correlations. Nevertheless, I was unable to find *explicit* retrocausal models in the literature at the time of my work. I still think it likely that an earlier explicit model exists somewhere out there. Can any of you help me find it?
Thanks, Nathan.
July 12, 2015 at 1:14 am #2649
Hi Nathan,
I’m still meaning to get to the “Stochastic Quantization” literature you mentioned, but I did look at your summary of it, and I’m a bit concerned about this “extra” time dimension you noted in a footnote. (Some discussion of “metatime” has also recently come up in Cohen’s topic and Elitzur’s topic, and I hope to get a broader discussion about that issue going soon. Also, it’s relevant for Kastner’s general approach…)
But for now I would like to delve into your model a bit. I do think it is the first explicit retrocausal entanglement model in the literature. (Perhaps Rod Sutherland would lay claim to that with his Bohmian approach, actually; he’s just posted an updated version of that model in this forum.) I do think that your original model is almost the same as my model (in the gamma-to-zero limit), but I see a few small distinctions and want to raise them here.
First of all, you’re using information about the future settings to impose a special *initial* boundary constraint on the hidden polarization. In contrast, I find it more natural to impose a *final* boundary constraint that leads to much the same effect. Do you see this as an essential difference, or have a strong opinion about where the boundary should be imposed? One thing I’ll note is that if there is some intermediate polarization rotation between the entanglement source and the final measurement, you’ll have to change your initial distribution to account for it. But if you put the boundary constraint on the *end*, then it’s accounted for automatically.
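The point about intermediate rotations can be made concrete in a few lines (the rotation angle and function names below are hypothetical, purely for illustration): with an initial boundary, the source distribution must pre-compensate for whatever the optics does in flight, while a final boundary constraint never mentions it.

```python
import math

THETA = math.radians(30.0)   # hypothetical in-flight polarization rotation

def evolve(lam):
    """What the optics does to the hidden polarization between source
    and detector."""
    return lam + THETA

def initial_boundary(a):
    """Source-side constraint: must pre-compensate for THETA so the
    polarization *arrives* aligned with the future setting a."""
    return a - THETA

def final_boundary(a):
    """Detector-side constraint: demand alignment at arrival; THETA
    never appears in the constraint itself."""
    return a

a = math.radians(45.0)
same = math.isclose(evolve(initial_boundary(a)), final_boundary(a))
print(same)  # True
```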
Another difference between our models arises at the one future measurement where the angles don’t match. I’m not positive exactly what you see happening to the polarization at that point, as I didn’t see it noted explicitly. Does your model imply a sort of “collapse” of the mismatched polarization at that point? (Where it snaps to alignment, one way or the other, according to Malus’s Law?) If so, I wonder if you see this as a time-asymmetry at that particular detector. Because run backward in time, the polarization would “snap” from a *matching* angle to a *mismatching* angle. Or maybe you’re envisioning a less-collapsey version of Malus’s Law, more along the lines of the anomalous rotations in my model?
Best,
Ken
July 12, 2015 at 4:26 pm #2665
Hi again, Ken,
Regarding the “extra” time, it was there in the original Stochastic Quantization paper and subsequent works, but it is also possible (as they occasionally note) to simply stipulate an “equilibrium distribution” to begin with, and then there’s no need for the “equilibration” to “occur” as this “extra” time tends to infinity (of course, the “equilibration” here is distinct from that of Bohmian mechanics). So if you don’t like the “extra” time, you can simply do without it. I myself would also prefer it that way (of course, that raises the question of how the different parts of spacetime “know” about each other, but I don’t think we should allow ourselves to be bothered by that; think of Newton, who disliked the long-range instantaneous character of gravitational forces, but developed his theory anyway).
Regarding the model, I was trying to stay as close as possible to Bell’s analysis, so I initially used $\lambda$ to denote all of the relevant hidden variables. I then focused on the one which represents the photons’ polarization as they leave the source (or, if you like, the direction of the angular momentum of the intermediate state of the emission cascade, assuming a source of doubly-excited Ca-40 atoms). The remaining variables are then essentially redundant. Thus, I would look at your model as a more detailed version, where you describe the whole sequence of angles. You could have, say, a distinct angle for every picosecond of photon flight, but in the gamma-to-0 limit there’s only one dominant rotation, so they’re mostly redundant. My lambda is then the angle corresponding to the moment the photons leave the source. It was just not necessary to give a more detailed description.
I like the details of your model which you stressed – the fact that it’s not “collapsey,” and the fact that it’s clearly determined by the boundaries at the time of measurement. I did mention in my work that there’s something special about irreversible measurements that must be a determining factor: if you think of Alice just letting her photon go through a polarizing cube, she may later recombine the two partial beams (with another cube) to recreate the original photon polarization (i.e., she may construct an interferometer and regain the original photon state, or a rotated one), and then she can measure with a different orientation by using yet another cube. In order for the toy model to work, only the irreversible measurement counts, with the proper rotations taken into account.
Best, Nathan.
