Are retrocausal accounts of entanglement unnaturally fine-tuned?


    #2427
    Ken Wharton
    Member

    An explicit retrocausal model is used to analyze the general Wood-Spekkens argument that any causal explanation of Bell-inequality violations must be unnaturally fine-tuned to avoid signaling. The no-signaling aspects of the model turn out to be robust under variation of the only free parameter, even as the probabilities deviate from standard quantum theory. The ultimate reason for this robustness is then traced to a symmetry assumed by the original model. A broader conclusion is that symmetry-based restrictions seem a natural and acceptable form of fine-tuning, not an unnatural model-rigging. And if the Wood-Spekkens argument is indicating the presence of hidden symmetries, this might even be interpreted as supporting time-symmetric retrocausal models.

    (Joint work with SJSU students. Online Discussion: TBA)

    #2502
    Nathan Argaman
    Participant

    Great work! To my mind, this is just the right sort of reply to Wood and Spekkens.
    It is worth mentioning that our colleagues seeking foundational theories of nature in high-energy physics generally consider “protection by symmetry” to be a legitimate, and in fact standard, form of fine-tuning.

    #2544
    Ken Wharton
    Member

    Thanks, Nathan… But I wonder what sort of “fine-tuning” high-energy physicists are trying to justify using symmetries. Not this same “no-signalling” issue, I assume? Or is it essentially the same: are they worried about how to maintain the perfect commutation of spacelike-separated operators, and is that what you’re talking about here?

    #2551
    Nathan Argaman
    Participant

    No. In high-energy physics it is typically the mass of a particle which is protected. Because of renormalization, the parameters of the Lagrangian change their values (“running coupling constants”), so the observed mass of a particle should “naturally” be on the order of the energy scale of the theory. Some masses are much smaller. The prime example is the photon mass. It is zero because of gauge symmetry (a mass term for the photon would break gauge symmetry). Another typical example is the pion, which has a small mass because of an approximate symmetry.
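
    To make the photon example concrete, here is the standard textbook observation (nothing specific to the models discussed here): under a gauge transformation $A_\mu \to A_\mu + \partial_\mu \chi$, the kinetic term is invariant but an explicit mass term is not,

    $$ F_{\mu\nu}F^{\mu\nu} \to F_{\mu\nu}F^{\mu\nu}, \qquad \tfrac{1}{2} m^2 A_\mu A^\mu \to \tfrac{1}{2} m^2 \left( A_\mu + \partial_\mu \chi \right)\left( A^\mu + \partial^\mu \chi \right) \neq \tfrac{1}{2} m^2 A_\mu A^\mu, $$

    so any $m \neq 0$ breaks the symmetry, and $m = 0$ is protected rather than fine-tuned.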

    #2583
    Nathan Argaman
    Participant

    However, your planned attempt to generalize this to partially entangled states leads me to think that the symmetry principle may not always work: it appears that introducing asymmetric states will be easy.

    More likely, the relevant physical principle is the increase of entropy, or the fact that the entropy was low in the past (I say this partly because of the relation between “information causality” and Tsirelson’s bound). After all, isn’t that always what prevents us from signalling into the past? If we didn’t have a “resource” of low entropy in the past, we wouldn’t be able to signal to the future either, in the following sense.

    Think of a protocol where Alice sends a spin-1/2 particle to Bob (who is to her future), and they both perform measurements on it. The usual thing is that Alice can encode one bit per particle, by “pre-selecting” the output of her measurement. That means that she acts upon the result of her measurement, passing the particle to Bob only if its spin is in the direction which corresponds to the data she wants to send. But we can design a protocol where she is prohibited (by fiat) from doing so – the particle is passed to Bob regardless of her output. In that case all she can control is the measurement she makes. Thinking in terms of causation, it would appear that she can still signal to Bob, but in fact all Bob will be able to measure are EPRB-type correlations with her.
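
    Here is a minimal numerical sketch of that protocol (my own illustration, assuming for concreteness that the source emits a maximally mixed spin and that both measurements lie in the x-z plane):

    ```python
    # Alice measures a random spin-1/2 along angle a, is forbidden (by fiat)
    # from post-selecting, and forwards the collapsed state to Bob, who
    # measures along angle b. Assumptions: maximally mixed source,
    # measurement directions in the x-z plane.
    import numpy as np

    def proj(theta):
        """Projector onto spin-up along angle theta in the x-z plane."""
        v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return np.outer(v, v)

    def run(a, b):
        """Return Bob's marginal p(+|b) and the Alice-Bob correlator E(a,b)."""
        rho = np.eye(2) / 2                    # maximally mixed source
        p_bob_plus, corr = 0.0, 0.0
        for s_a, Pa in ((+1, proj(a)), (-1, np.eye(2) - proj(a))):
            p_a = np.trace(Pa @ rho).real      # probability of Alice's outcome
            rho_post = Pa @ rho @ Pa / p_a     # collapsed state sent to Bob
            for s_b, Pb in ((+1, proj(b)), (-1, np.eye(2) - proj(b))):
                p_b = np.trace(Pb @ rho_post).real
                if s_b == +1:
                    p_bob_plus += p_a * p_b
                corr += s_a * s_b * p_a * p_b
        return p_bob_plus, corr

    for a in (0.0, np.pi / 3, np.pi / 2):
        p, E = run(a, b=np.pi / 4)
        print(f"a={a:.2f}: Bob's p(+) = {p:.3f}, E(a,b) = {E:.3f}")
    # Bob's marginal stays at 1/2 for every a (no signal), while
    # E(a,b) = cos(a-b): correlations only, just as in EPRB.
    ```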

    In fact, it is clear why: in order to convey information she has to pass to Bob a system or particle which has a large phase space (or Hilbert space), and to encode the information by restricting the state of the particle to part of that space, with the different parts corresponding to different messages. By pre-selecting, she does just that. If you consider the entropy of the particle or system she sends in order to convey one bit of information, it must be smaller by at least one bit than the entropy it would have were she to send a random signal (namely, the log of the size of the space of its possible states). However, if she can’t act on the output of her measurement, she can’t reduce the entropy – it remains at least as large as it was prior to her measurement.
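
    Schematically, with $N$ the number of equally likely available states: to encode one bit she must confine the system to one of two disjoint halves of its state space, so the entropy of what she sends obeys

    $$ S_{\text{sent}} \;\le\; \log_2 N - 1 \text{ bit} \;<\; S_{\text{random}} = \log_2 N, $$

    and without acting on her measurement outcome she has no way to achieve this reduction.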

    This type of entropic analysis works in both classical and quantum descriptions of a system, and I guess it will have to work in any “good” retrocausal model as well. I don’t think that the retrocausal toy models we have worked with so far are “good” in this sense.

    What do you think?

    I want to make one more minor point, regarding your discussion of Tsirelson’s bound. You use the term in two distinct ways: one which I think accords with the usual usage, where the Tsirelson bound is fixed at $2\sqrt{2}$, and one where you imply that it may change when you vary $\gamma$. What changes is the maximum value that the relevant combination of correlators can take in your model, not the Tsirelson bound itself. In fact, I don’t think it is a coincidence that your model always conforms to the bound (by some finite margin for non-zero values of $\gamma$). But I’m no longer clear on how one could best hope to demonstrate that this is necessary for such models – by symmetry or by entropy considerations.
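
    For definiteness (this is just the standard statement): with correlators $E(a,b)$, the CHSH combination

    $$ \mathcal{S} = \left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| $$

    obeys $\mathcal{S} \le 2$ for local hidden-variable models and $\mathcal{S} \le 2\sqrt{2}$ in quantum theory; only the latter fixed number is the Tsirelson bound, while the maximum of $\mathcal{S}$ in your model is a model-dependent quantity that can vary with $\gamma$.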

    #2595
    Ken Wharton
    Member

    Hi Nathan; Thanks for the catch on the occasional misuse of the phrase “Tsirelson Bound”… That’s an easy fix, at least!

    On the more substantive issues, I do think that it will turn out that symmetry will play the main no-signaling role even for partially entangled states. You’re right that some of the symmetries will disappear in those cases, but some of the no-signaling disappears as well, in a certain sense. Namely, for maximally entangled states, there’s not even *self-signaling*; Alice can’t even signal to her own output, let alone Bob’s. (Thanks to Pete Evans for bringing this to my attention.) Once the maximal symmetry is broken, by going to partially entangled states, self-signaling reappears (although of course Alice-Bob signaling does not). I can’t quite prove that the remaining no-signaling is due to a symmetry, since I don’t have the partially-entangled model working quite yet, but stay tuned…

    You’re absolutely right about the entropy issue being connected with not being able to signal to the past, of course. Take a look at section 4 of a piece I wrote with Huw Price, which is essentially the same argument you make above. (So maybe these models are “good” in the sense you mention after all.)

    That said, I still haven’t properly sorted out the objective and subjective roles of entropy in signaling… Some of the signaling asymmetry is certainly due to the effect of entropy on consciousness (we don’t know the future), and some is due to the direct accessibility of low-entropy sources by experimenters. Right now I’m leaning towards putting most of the explanatory burden on the subjective (consciousness) side, and very little on the objective (source) side, but I need to set aside some time to think about all this more carefully and systematically. Your post has motivated me to do just that! 🙂

    #2640
    Nathan Argaman
    Participant

    Thanks, Ken. I have now read your work with Price, and indeed my point above largely overlaps with your discussion there.

    Regarding consciousness, please don’t bring that into the discussion – I’m sure it won’t help, just as the introduction of the concept of “free will” led to much discussion, with only a fraction of it pertaining to the relevant quantum phenomena. These notions are too human, and therefore much harder to understand than physical phenomena, even quantum phenomena! For the purposes of discussing and studying quantum phenomena, measuring devices which irreversibly register the results (in their memories) are completely sufficient. (In the free-will case, “free variables” is the completely well-defined mathematical concept which plays the relevant role in the discussion, without leading one astray from quantum physics into human affairs.)

    That said, I completely agree with you that one needs to spend some time clarifying these issues for oneself. I did so recently, re-reading parts of Price’s book in the process. The upshot is that the fact that we can only remember the past and not the future (or, to be more careful, that our computer memories can only register information from their past…) is yet another instance of the Principle of Independent Incoming Influences, and is tightly related to the fact that there are sources of low entropy in the past. For example, if you want to store a bit A in memory, you can take a blank bit M=0 of memory and perform a controlled-not operation, with A as the control. After this, M will “remember” A. The controlled-not operation is reversible, but you can’t run the procedure in reverse, because there’s no way to bring in a blank (low-entropy) bit of memory from the future.
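
    In code (a trivial sketch; A and M are just illustrative names):

    ```python
    # Recording a bit A into a blank memory bit M via controlled-not.
    # The operation is reversible (it is its own inverse), but *using* it
    # to record requires a supply of blank (M = 0) bits -- the low-entropy
    # resource that is only available from the past.
    def cnot(control, target):
        """Controlled-not: flip target iff control is 1. Self-inverse."""
        return control, target ^ control

    for A in (0, 1):
        M = 0                    # blank memory bit, prepared in the past
        A, M = cnot(A, M)        # afterwards M == A: the memory "remembers"
        assert M == A
        # Applying cnot again erases the record -- but only because we knew
        # the memory was blank to begin with; there is no way to obtain a
        # guaranteed-blank bit "from the future" to run the story in reverse.
        A, M2 = cnot(A, M)
        assert M2 == 0
    print("CNOT records A into M and is self-inverse, given blank bits.")
    ```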

    I hope this helps. Nathan.

    #2762
    Ken Wharton
    Member

    Hi Nathan,

    You’re precisely right that the issue of “not remembering the future” is entropy-related, and it’s just as true for machines as it is for humans. I’m fine with taking humans out of the equation.

    And given this connection, I suppose it doesn’t make sense to imagine time-reversing the direction one remembers things without also time-reversing where the low-entropy sources lie. So I guess I shouldn’t be parsing up the analysis into separate “subjective” and “objective” aspects: they both should always go together. If low-entropy sources lay in both the future and the past then we could remember them both.

    That’s useful. One of the cases that was confusing me was signaling between agents with opposite arrows of time (even though years ago I published a science fiction story about this 🙂 ). But in such a case, any interaction would essentially provide both agents with low-entropy sources in both directions, so neither of them would be restricted to “remembering” in just one direction. I think that insight solves the biggest problems I was encountering, so thanks!

    But I still wish I could make sense of signaling on a micro-scale, fine-grained, below the level at which one could make entropy-related arguments. Sure, there are no agents at that level to send or receive signals, but this mismatch makes it hard to see how the no-signaling issue fits together with my low-level ontological models. Any further insight you have on this would be much appreciated…

    #2810
    Nathan Argaman
    Participant

    Hi Ken,

    There’s one point which has been nagging at the back of my mind these last few days: When I said “good” retrocausal models, what I meant is that they should be clear about what the ontic variables and the epistemic variables are, and that there would be a natural way to take the log of the number of possible ontic states and associate it with an entropy. Intuitively, I think in my model lambda does not represent an ontic variable – it is an angle which seems to divide the available phase-space into parts which lead to different outcomes. When the parameter settings are the same, there are just two relevant parts. When Alice and Bob choose different settings, apparently the structure of the available phase-space is different. You could think that it’s only the way the phase-space is subdivided, but you can’t go too far in that direction and here’s why: If the structure of the phase space is not changed then there’s no apparent reason for the probability density to change, and for such models Bell’s original analysis works (with lambda representing the phase-space variable), so they cannot violate the inequality and won’t explain anything.
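
    To spell out that last step: if the distribution $\rho(\lambda)$ over the phase-space variable is independent of the settings, Bell’s factorization

    $$ P(A,B\,|\,a,b) = \int d\lambda \, \rho(\lambda) \, P(A\,|\,a,\lambda) \, P(B\,|\,b,\lambda) $$

    already implies the CHSH inequality $\mathcal{S} \le 2$, so such a model cannot reproduce the quantum correlations.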

    Now I’m not saying it’s going to be simple if the parameter settings affect the structure of the phase space, but I think that’s a thing to explore. Also, in this sense your model is different, because you do have an explicit description of something rotating along the path, so it looks very much like an ontic variable. Again, you can think of my lambda as the value of that variable at the point along the path which corresponds to the source. I think that’s just one way that you can subdivide a phase-space: the space of functions is clearly divided into classes which share the same value at a point. But intuitively I think that that’s not the relevant subdivision. I would think that the entropy would refer to the phase space of the values of the ontic variable at the source, at just one instant. So we have to keep looking.

    Best, Nathan.

    #2834
    Ken Wharton
    Member

    Hi Nathan,

    I think you’re almost exactly right about what should be considered a “good” variable, but I’ll throw one suggested change at you: Instead of taking the log of the number of possible ontic *states*, what about the log of the number of possible ontic *histories*? This associates “entropy” with regions of spacetime rather than instantaneous slices of some system, so it’s a bit different than what we’re used to entropy-wise, but I think this has to be the right way to think about block-universe retrocausality, as I outline in the toy models in http://www.mdpi.com/2078-2489/5/1/190 .
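
    Schematically (just to fix notation; this is my shorthand, not a formula from the linked paper): for a spacetime region $\Omega$ with its boundary data fixed,

    $$ S(\Omega) \;=\; \log_2 N\left(\text{ontic histories in } \Omega \text{ consistent with the data on } \partial\Omega\right), $$

    so the entropy is assigned to a 4D region of spacetime rather than to a 3D slice of a system.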

    This viewpoint also makes it natural to change the structure of the phase space, exactly as you describe. (Again, see the linked paper.) The 3D analog is finite-edge effects in stat. mech., where the structure of the border of the region changes the state space for which one calculates the probabilities inside the bulk. Now, extend this analysis to 4D, and you have a clear-cut way to allow the structure of the 4D border (including the future!) to change the state-space in the enclosed spacetime region.

    As for how this might tie into my rotating-spin-vector, you can try wading through http://arxiv.org/abs/1301.7012 for some ideas, but there are plenty of unresolved issues here, many of which I’ve placed on the back burner while I’m working on partially-entangled states.

    #2899
    Dustin Lazarovici
    Participant

    Hi Ken,

    I also wanted to give you some feedback on your paper. Unfortunately, I don’t have that much to add, because I think your discussion is very much on point. 🙂 The Schulman model is quite interesting (I didn’t know it before) and your arguments concerning symmetry are, of course, correct.

    It’s not so much a factual critique as a personal feeling that you’re still giving too much credit to the Wood-Spekkens argument. To me, it’s just one of those meta-results that seem deep but are actually quite irrelevant. In a toy model, where any postulation of probability distributions is ad hoc and where you might have to introduce some artificial variables, the “fine-tuning” objection seems to have some bearing. In any more serious theory, where the probability distribution is either part of – or better, derivable from – the fundamental postulates/laws of the theory, the Wood-Spekkens argument amounts to the claim: if the theory were different, it’d be wrong. I mean: if the Boltzmann distribution were different, pigs might be able to fly. But who cares?

    Still, the Schulman model is nice as an “intermediate step”, because it demonstrates how the “correct” (i.e. non-signalling) distributions can be justified by deeper principles (e.g. symmetry).

    Best, Dustin

    #2913
    Ken Wharton
    Member

    Hi Dustin; Thanks for the kind words!

    As for giving the Wood-Spekkens argument too much credit… If there were an existing retrocausal model they were attacking, one that already gave the right probabilities, I’d certainly agree with you. But they’re framing it as an argument against trying to develop such a model in the first place. And since most people’s instincts are aligned against retrocausality to start with, I think it’s an argument that many people would be inclined to accept. So I’m not inclined to dismiss it quite so readily.

    And I do think it’s a reasonable argument… after all, I still can’t quite answer it definitively, because moving to partially-entangled states breaks some of the symmetries that make the two-particle version of Schulman’s model work so nicely. A closely related issue, that I’m having even more trouble with, is framing classes of retrocausal models for which you can’t signal into the past. (You can see some of the above discussion with Nathan relating to this point).

    Cheers!

    Ken

    #2968
    Nathan Argaman
    Participant

    Hi Ken,

    I’ve finally read not only your “information” article but also your 1307 arXiv preprint, which indeed required some “wading.” I must say I think you’re on the right path, with the most appropriate motivations I’ve seen yet (that is, of course, to the best of my judgement). And there’s a lot to do. I wonder why there aren’t more people working along such lines. For example, do you know what Rob Spekkens thinks about your line of argument? Does he accept now that retrocausation is worth pursuing?

    Two things on the technical level: (a) I think you will need a measure for paths, as in Feynman path integrals; they’re not discrete, and you can’t just count them with integers; (b) even accepting your idea of an entropy for 4D histories, I still think we’ll need the 3D concept in addition, e.g., so as to be able to identify low-entropy past boundary conditions, etc.
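
    On point (a), the standard time-slicing construction is the sort of thing I have in mind: the “sum over paths” is defined as a limit of ordinary integrals with an explicit normalization,

    $$ \int \mathcal{D}[x(t)]\, e^{iS[x]/\hbar} \;=\; \lim_{N\to\infty} \frac{1}{\mathcal{N}_N} \int \prod_{k=1}^{N-1} dx_k \; e^{iS(x_0,\dots,x_N)/\hbar}, $$

    so that path-counting is replaced by a measure from the outset.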

    And one more thing: I think that by the time we’ve learned how to describe a measurement in a retrocausal theory of local beables, we will see that our ontic variables describe waves (with quantum noise, not just a solution of a PDE), but our epistemic variables are nevertheless “corpuscular,” in the sense that once something has been measured irreversibly, even just a single click in the detector, it’s either there or it isn’t. So the “it from bit” ideas will still apply in some sense, but only in the limited sense relevant to the epistemic variables. (And, of course, unitary evolution will be natural – you simply can’t change the information in your epistemic state between updates).

    OK, I guess that’s it. Thank you very much for your efforts in organizing this, and especially for inviting me to join in. Please tell me if you’d like to look at the stochastic quantization ideas and discuss them as well.

    Cheers, Nathan.
