Thinking about QM as a variational principle (online 7/9 @ 9am UTC-6)

Forums › 2015 International Workshop on Quantum Foundations › Retrocausal theories › Thinking about QM as a variational principle (online 7/9 @ 9am UTC-6)

This topic contains 8 replies, has 3 voices, and was last updated by Ken Wharton 5 years ago.

    Alan Harrison

    I have been thinking for some years about foundational issues, with the (rather contrarian) viewpoint that what is needed is not a new interpretation of existing theory, but a new theory, expressible for instance as a modified wave equation replacing the Schrödinger or Dirac equation. The approach that appeals to me is that of a variational principle, where the functional to be extremized is an integral over spacetime. Since this type of analysis would produce a solution for all spacetime, it can naturally accommodate nonlocal effects, and it is agnostic about the direction of time. I regard this as philosophically similar to Cramer’s transactional interpretation, but with possibly more predictive power, since I claim the freedom to deviate from the standard QM wave equation(s). (On the other hand, I don’t yet understand how an approach in which one solves for all times “at once” can be used to make a “prediction”; ultimately that will depend on what type of spacetime boundary conditions, including initial and final conditions, must be specified.)

    I originally attempted to construct a relativistically covariant functional with a term resembling the uncertainty $\delta x^2 \delta p^2$ of the wavefunction (integrated over all time), so that minimizing such a functional over the period between two successive measurements (or preparations) would force the system to stay close to an eigenstate of whatever operator was implied by the nature of the first measurement, and then evolve smoothly to something close to an eigenstate corresponding to the second measurement. Here “first” and “second” are defined according to the observer’s sense of the direction of time; the explanation works equally well in both directions. This would account for the observation that the second measurement would show the wavefunction to have “collapsed”—although “decayed” would be a more accurate characterization. The functional used in this analysis should be construed as including the measurement apparatus, which in practice must always include macroscopic components at some level; therefore at the time of the measurement the apparatus always outvotes the microscopic system under study and ends up in an eigenstate of whatever variable it is designed to measure (“pointer position”). This is how the wavefunction is always observed to collapse into an eigenstate of the variable measured.
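    Schematically (a one-dimensional, non-covariant sketch in my own notation, not the covariant form from the paper), the kind of functional I mean is

```latex
% Illustrative only: an uncertainty-product functional to be minimized
% over the interval (t_1, t_2) between two successive measurements.
J[\psi] \;=\; \int_{t_1}^{t_2}
    \big\langle (\hat{x}-\langle\hat{x}\rangle)^2 \big\rangle_{\psi(t)}\,
    \big\langle (\hat{p}-\langle\hat{p}\rangle)^2 \big\rangle_{\psi(t)}\;
    \mathrm{d}t ,
```

    which is small when the wavefunction stays near a minimum-uncertainty state throughout the interval and evolves smoothly between the two measured configurations.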

    My initial exposition of this approach is laid out in (Unfortunately, an approximation I made in section 3.5 of that paper turned out to be unjustified, so the proof of consistency with the Born rule is invalid.) I demonstrated the approach in an “all-at-once” solution of the electron two-slit experiment in, published at Phys. Scr. 2012 014006.

    Shortly thereafter, I discovered Ken Wharton’s persuasive essay The Universe is not a Computer, in which he motivated a variational-principle approach as a “Lagrangian Schema Universe.” With the benefit of those ideas and a long discussion with Ken, I am now trying to work from the starting point of a Lagrangian appropriate to relativistic quantum mechanics (possibly QFT), rather than constructing ad hoc functionals with the characteristics I desire.

    • This topic was modified 5 years, 1 month ago by Alan Harrison.
    Ken Wharton

    Hi Alan,

    I’m glad you found my perspective useful… can you give us any insight into how you’ve been trying to implement the Lagrangian viewpoint?

    It looks like you’re planning to be online right about now, but I haven’t had a chance to refresh my memory about your prior work… Still, you did raise this problem about how to extract a “prediction” from an all-at-once Lagrangian-style approach. I don’t know if you recall my low-level explanation of this from the ‘Universe is not a Computer’ essay, but here’s a higher-level account:

    Suppose you have a time-symmetric, all-at-once theory that tells you the joint probability P(A,B) that an initial preparation A will be associated with a final outcome B. You also need a theory that tells you the allowed final outcomes B_i for any given measurement geometry G. But given these two pieces of the puzzle, I think predictions are fairly straightforward.

    Essentially, what you do is use your knowledge of the actual initial preparation A and the actual future measurement settings/geometry G, to figure out the *possible* all-at-once solutions (A,B_i). Each of these all-at-once solutions has a joint probability P(A,B_i). You can then predict the conditional probability for any particular outcome B_j by the usual connection between conditional and joint probabilities:

    $P(B_j \mid A) = P(A,B_j) \,\big/ \sum_i P(A,B_i)$

    Of course, this only works if you know the future measurement settings/geometry; if G is unknown, you’re forced to use a big configuration space of possibilities, a space that collapses to something more reasonable once you know what type of measurement is coming up.
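    As a toy numerical illustration (the outcome labels and probability values here are invented, not from any real experiment), the recipe above is just a renormalization of the joint probabilities over the allowed outcome set:

```python
# Toy sketch of the prediction recipe: given the joint probabilities
# P(A, B_i) for one fixed preparation A and the outcomes B_i allowed
# by the measurement geometry G, the conditional probability of each
# outcome is its joint probability renormalized over the allowed set.

def conditional_probabilities(joint):
    """joint maps outcome label B_i -> P(A, B_i); returns B_i -> P(B_i | A)."""
    total = sum(joint.values())          # sum_i P(A, B_i)
    return {b: p / total for b, p in joint.items()}

# Invented example: two allowed outcomes with unnormalized joint weights.
joint = {"up": 0.3, "down": 0.1}
cond = conditional_probabilities(joint)
# cond == {"up": 0.75, "down": 0.25}, and the values sum to 1.
```

    Note that the conditional probabilities always sum to 1 regardless of how the joint weights are normalized, which is the point of the denominator.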

    Does this solution seem like it might mesh with your general approach? Any problems you see with this?


    Alan Harrison

    Hi Ken,

    I had originally been trying to devise a variational principle (VP) with the desired characteristics essentially as a mathematical exercise, deferring the question of how to justify it physically. The Lagrangian viewpoint tells me that I should use the extremization of the action as my VP, so my current approach is to start with a given Lagrangian and see if the resulting VP does what I want it to. I can find candidate Lagrangians in QFT, where interestingly enough the theory makes it natural to pursue the VP approach, but the conventional approach is to go in another direction altogether (interaction picture, path-integral methods and so on). So I’m trying to see what I can learn from that.
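    For concreteness, one such candidate (standard textbook material, nothing new) is the free Dirac Lagrangian density in natural units; extremizing its action with respect to $\bar{\psi}$ recovers the Dirac equation:

```latex
\mathcal{L} \;=\; \bar{\psi}\,(i\gamma^{\mu}\partial_{\mu} - m)\,\psi ,
\qquad
S[\psi,\bar{\psi}] \;=\; \int \mathcal{L}\,\mathrm{d}^{4}x ,
\qquad
\frac{\delta S}{\delta\bar{\psi}} = 0
\;\Rightarrow\;
(i\gamma^{\mu}\partial_{\mu} - m)\,\psi = 0 .
```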
    I agree with your use of the relationship between joint and conditional probabilities to understand what happens in a measurement. But I’ve been hopeful that an improved theory might allow one to calculate more than just a joint probability P(A,B). If the theory doesn’t take us any further than that, we still need a collapse model of some sort to explain how nature picks a particular outcome, and I’d like to see a theory that resolves that issue as well.
    In the work I wrote up previously, I proposed that the phase of the wavefunction (or equivalently, the start time of a measurement) might serve as a hidden variable that controls the final choice among possible outcomes. (Of course, here I use “final” in a logical rather than temporal sense!) [It turns out that Pearle proposed the same idea in 1976 (Phys. Rev. D 13, 857–868).] So my idea of prediction would ultimately include something like this, and my comment above alluded to the fact that I don’t understand this mechanism in detail yet. I’d like to find a promising VP first, and then return to this issue.

    Ken Wharton
    Ken Wharton

    Thanks — I need to run right now, but I’ll respond in more detail later.

    Do keep in mind that you may not want to exactly extremize the action in all cases — that way leads classical theory. In the path integral, non-extremized action histories are also important.
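    Schematically (again, just the standard textbook form): the propagator sums over all histories, each weighted by a phase fixed by its action, and the stationary-action history dominates only in the classical limit:

```latex
\langle x_f, t_f \mid x_i, t_i \rangle
\;=\; \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar} ,
\qquad
\hbar \to 0 \;\Rightarrow\; \text{dominant contribution from } \delta S = 0 .
```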

    Cheers! -Ken

    Michael B. Heaney

    Hi Alan,

    In your paper, you say collapse does not need to be instantaneous. Won’t this violate the conservation laws?




    Alan Harrison

    Well, I don’t think so, but if I’ve misunderstood something I’d appreciate your enlightening me. The point is that if I make a measurement (or prepare a state) at time t1 and again at t2 > t1, with no intervening measurement, then I have no experimental evidence for *how* the system got into the state I observe at t2. The Copenhagen interpretation tells me that it collapsed in an instant, at t2, but if instead I assert (as I do!) that it changed smoothly during the interval (t1,t2), there is no experimental evidence that can prove me wrong. Therefore I conclude that (A) my explanation cannot be refuted by the experimental record and (B) in particular, if there’s any failure of conservation, it can’t be measured.


    Actually, upon reconsidering what I just wrote, it occurs to me that weak measurements may actually give some information about what happens between the two “strong” measurements at t1 and t2, contrary to what I just said. I’m not certain that that’s the case–I’ll have to think more about weak measurements–but if it is, then the following items appear to support my idea of a continuous transition instead of an instantaneous collapse:
    Nature 511, 570–573 (31 July 2014), doi:10.1038/nature13559
    Nature 511, 538–539 (31 July 2014), doi:10.1038/511538a [This one is a Nature commentary on the first article.]


    One nice thing about a smooth transition between the states measured at two different times is that it looks the same for both directions of time–in keeping with the “Time-Symmetric Theories” theme.

    Ken Wharton

    Hi Alan,

    Sorry it took me so long to get back here…

    On the joint-probability issue, I see two ways to go. In my first response, way above, I mentioned that one also needs the rule that tells one which allowed outcomes correspond to which measurement settings/geometries. I would think that this rule would solve the problem you mention, in that it would be this rule that told you the outcomes with non-zero probability, and then P(A,B) would allow you to work out the conditional probabilities for the allowed outcomes.

    But the other, better way to go would be to have a function P(A,B) that goes to zero (or nearly zero) whenever B is a non-allowed (non-eigenstate) outcome. That’s what I was trying to accomplish in , and I should try to get back to that style of story at some point (I’ve put it aside for now). From an experimental-verification perspective, I find this a very promising path forward: if there were some tiny-but-nonzero probability of a non-eigenstate outcome, it might be detectable with current technology, and it is also plausible that most experimentalists would have thrown out any prior indications of such outcomes as experimental error (since there’s no framework in standard QM that would predict such things).

    As far as the hidden variable space goes, I absolutely agree that the global phase should be part of that space (it’s by far the most natural piece!), but you might like to see where this starting point leads for the case of spin-1/2 systems (where the global phase is hard to define without hitting coordinate singularities). The result is a much larger hidden variable space. (Also, such larger spaces are generally needed if we’re going to expand the sort of hidden-variable models that Nathan Argaman and I are developing to partially-entangled states.) If you’re interested, you can take a look at the recent JPA paper: Another motivation for such a large hidden variable space comes from relativity, where second-order-in-time equations are far more natural than the first-order ones (standard quantum theory knows how to handle the latter).

    Best, Ken

