Does Protective Measurement Bear on the Meaning of the Wave Function?

  • #661

    From the start, the technical result of protective measurement has been claimed to have implications for the interpretation of quantum mechanics. In particular, it has been suggested that protective measurement establishes the reality of the wave function. Here I examine the entanglement and state disturbance arising in a protective measurement and argue that these inescapable effects doom the claim that protective measurement establishes the reality of the wave function. An additional challenge to this claim results from the exponential number of protective measurements required to reconstruct multi-qubit states. I suggest that the failure of protective measurement to settle the question of the meaning of the wave function is entirely expected, for protective measurement is but an application of the standard quantum formalism, and none of the hard foundational questions can ever be settled in this way.
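    (To give a rough sense of this scaling, with a count offered purely for illustration: a pure state of $n$ qubits is specified, up to normalization and a global phase, by $2(2^n - 1)$ real parameters, so a reconstruction from protectively measured expectation values requires a number of distinct measurements that grows exponentially with $n$. Already for $n = 10$ this amounts to roughly 2,000 parameters; for $n = 50$, roughly $2 \times 10^{15}$.)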

    #917

    Hi Max,

    As you may already know, I answered one of your objections in the PM book I edited. Here is a brief summary, though I am not sure whether it is successful.

    I think your objection, based on the fact that a realistic protective measurement can never be performed on a single quantum system with absolute certainty, does indeed apply to the usual Einstein-Podolsky-Rosen (EPR) criterion of reality. However, in my opinion, one may avoid this objection by resorting to a somewhat different criterion of reality, which seems more reasonable and also better suited to realistic protective measurements.

    The new criterion of reality is that if, with an arbitrarily small disturbance on a system, we can predict with probability arbitrarily close to unity the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.

    Although a realistic protective measurement with finite measurement time T can never be performed on a single quantum system with absolute certainty, the uncertainty and the disturbance of the measured system can (in theory) be made arbitrarily small as the measurement time T approaches infinity. Thus, according to this criterion of reality, realistic protective measurements also support the reality of the wave function.
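    To put the limiting behavior into rough formulas (notation mine, meant only to fix ideas): if the coupling is switched on smoothly over a time $T$ with its integrated strength held fixed, first-order perturbation theory gives a transition amplitude out of the protected state of order $1/(T\,\Delta E)$, where $\Delta E$ is the energy gap providing the protection, so

    $P_{\mathrm{transition}} \sim \frac{1}{(T\,\Delta E)^2} \;\to\; 0 \quad \text{as } T \to \infty.$

    This is the sense in which the disturbance can be made arbitrarily small in theory, although it is never exactly zero at any finite T.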

    I would like to know your response.

    Shan

    #918

    Moreover, I also addressed the scaling problem you pointed out in your PM paper.

    I think in order to argue for the reality of the wave function in terms of protective measurements, it is not necessary to directly measure the wave function of a single quantum system, and measuring the expectation value of an arbitrary observable on a single quantum system is enough.

    If the expectation values of observables are physical properties of a single quantum system, then the wave function, which can be reconstructed from the expectation values of a sufficient number of observables, will also represent the physical property or physical state of a single quantum system.
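    As a schematic illustration of this reconstruction (notation mine): for a $d$-dimensional system, choose a basis of Hermitian operators $\{B_i\}$, $i = 1, \ldots, d^2$, orthonormal in the Hilbert-Schmidt inner product, $\mathrm{tr}(B_i B_j) = \delta_{ij}$. Then

    $\rho = \sum_{i=1}^{d^2} \langle B_i \rangle \, B_i,$

    so the expectation values $\langle B_i \rangle$ determine the state completely, and for a pure state they determine the wave function up to a global phase.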

    Shan

    #925

    Hi Shan,

    Thank you for your comments. Here are my responses.

    First of all, in your paper you say that measurability of an unknown quantum state is a condition for the reality of the wave function that is “too stringent to be true.” But it was precisely this condition (albeit with reference to an unknown eigenstate of the Hamiltonian) that the architects of protective measurement used to define the reality of the wave function. For example, Aharonov and Vaidman (1993) write: “We show that it is possible to measure the Schroedinger wave of a single quantum system. This provides a strong argument for associating physical reality with the quantum state of a single system.” They attached no qualification to the term measurement: what can be measured is considered real; protective measurement (they claim) allows us to measure the wave function; hence the wave function must be real.

    You also say that if the condition (measurability of an unknown quantum state) were true, “then no argument for the reality of the wave function including the PBR theorem could exist.” But you can have conditions that do not refer to measurability. Indeed, the PBR theorem is precisely of this kind — it does not infer the reality of the wave function from some notion of measurability, but rather from a clash between an “overlap” assumption (the possibility that a single real state is associated with two distinct nonorthogonal quantum states), a preparation independence assumption, and the predictions of QM.

    In my paper, I have argued that protective measurement does not allow one to measure the wave function in the sense needed to establish its reality, simply because you can never transcend the statistical (indeterministic) aspect. There’s always a nonzero probability of projecting the system into an orthogonal subspace, and moreover there’s no way of telling whether this has happened (i.e., whether your measurement has failed). You suggest circumventing this objection by introducing a new criterion for reality (“if, with an arbitrarily small disturbance on a system, we can predict with probability arbitrarily close to unity the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity”). I don’t find this terribly convincing, for two reasons:

    First, the criterion seems ad hoc, tailored precisely to what protective measurements can accomplish, and just so that the objection mentioned above can be circumvented. There is a well-established and well-motivated notion of reality based on (reliable) measurability, and I just don’t see how your criterion could be motivated in a similar way.

    Second, we are not talking about making an arbitrarily precise measurement of some physical constant, like adding more and more digits to improve numerical precision. What we are talking about in the context of protective measurement is an unpredictable, unavoidable disturbance of the system (no protective measurement leaves the initial state unchanged) that may lead to a completely new (indeed orthogonal) quantum state. These are two fundamentally different types of disturbance (or lack of “precision”). If protective measurement allowed me to measure the initial wave function every time, plus or minus an arbitrarily small deviation from its initial value (something like |psi> + delta|phi>, where delta can be made arbitrarily small and |psi> is the initial state), then your criterion might become sensible. But protective measurement doesn’t do that.
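    Schematically (in my own notation, just to make the contrast explicit): the post-interaction system-apparatus state has the form

    $\sqrt{1-\epsilon}\; |\psi\rangle |\phi_{\langle O \rangle}\rangle \;+\; \sqrt{\epsilon}\; |\psi^\perp\rangle |\phi_\perp\rangle,$

    where $|\phi_{\langle O \rangle}\rangle$ is the pointer state registering the expectation value and $\epsilon$ can be made small but never zero. The readout then either succeeds outright or fails outright, with nothing flagging which of the two has happened. That is a branching, not a small deformation of the initial state.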

    As a side note, to reconstruct the wave function, we need to carry out many protective measurements. If, to become a meaningful condition for reality, a single measurement needs to be made essentially infinitely long, then how can we ever carry out more than one such measurement?

    Finally, I do not understand your reply to the “scaling problem” I pointed out. (The problem, to recap, is about the extraordinarily large number of protective measurements required to reconstruct the wave function.) You say that “in order to argue for the reality of the wave function in terms of protective measurements, it is not necessary to directly measure the wave function of a single quantum system, and measuring the expectation value of an arbitrary observable on a single quantum system is enough.” Why? The expectation value is just a number and, on its own, doesn’t tell us anything about the state of the system.

    So far my thoughts. Perhaps they will also help spark discussion among the participants.

    #929

    Hi Max,

    Without wanting to put words in anyone’s mouth, I took Shan’s second point to be that if we can establish the reality of the expectation value of an arbitrary observable, then we have established the reality of the expectation values of all observables, and the latter is (more than) sufficient to reconstruct the wave function.

    Yours,
    Matt

    #930

    Hi Matt,

    Thanks for the clarification. But I’m not sure if the conclusion from the reality of one expectation value to all expectation values is so straightforward. Suppose I identify “reality” with “can be measured.” So the first expectation value would be real because I was able to measure it (somehow). But if I cannot subsequently measure any other expectation values, then why should I consider them real as well? I think QM tells us many cautionary tales of why counterfactual reasoning may fail. Unless I can actually (simultaneously or consecutively) measure all the required expectation values, I don’t see why measuring one value (and therefore deeming it real) should allow me to say that the expectation values of all other observables should be real too. And in protective measurement we have precisely a situation where such multiple measurements may be fundamentally impossible (if we take the limit of an infinitely long measurement) or practically impossible (because the number of required measurements increases exponentially with the dimension of the Hilbert space — that’s the scaling problem I mentioned).

    Best,
    Max

    #931
    Robert Griffiths
    Participant

    Dear Max,

    I read the article you wrote with Claringbold, arXiv:1402.1217v2, which I assume is close enough to what you recently posted here and to what will appear in the book that the following comments are relevant.

    First, I think that your demanding infinite precision, or even extremely high precision, is too strong a requirement, and I tend to agree with Shan on this. Genuine experimental physicists (I am not one) appreciate that none of their measurements can be made with infinite precision, and most physicists would accept that after a while things have been pretty well demonstrated, even if not with certainty. In this sense we differ from mathematicians, who demand a rigorous proof, and from certain philosophers who have a similar attitude. The argument that protective measurement gets you pretty close 90% of the time is, it seems to me, a fairly good argument that what you are measuring has something to do with the microscopic reality which we (at least some of us non-QBists) think is there. Yesterday I heard a talk by an experimentalist working at the LHC, where they have obtained evidence of the existence of the Higgs boson. Not conclusive, but enough to be quite convincing. That is the way physicists think, and it is probably essential to our profession’s making significant progress in our understanding of the world.

    Second, with respect to what protective measurements really do tell us about quantum theory, I would argue that they are much like the tomographic measurements used to study density operators in situations where people think they can reliably repeat “the same” preparation procedure. The low-energy scattering processes by which one might hope to approximately determine p(x) = |psi(x)|^2, where x is position, are probing a probability distribution, and by combining these with other measurements (sensitive to phase variations) one can recover [psi] = |psi><psi| and thus |psi> up to a phase. As I will be arguing tomorrow, one way to interpret |psi> is as a pre-probability, and in this sense it is somewhat less, but certainly no more, real than a probability distribution, and most of us would not make probability distributions part of reality in either the classical or the quantum world. Hence I would not be quite content with the Aharonov-Vaidman-Gao argument that this sort of thing proves that |psi> is real. So maybe in this sense I agree with you and Claringbold. By the way, I like to think of “protection” as serving to return the system to its initial state after its interaction with a probe, and from this perspective carrying out multiple successive measurements with probes is something like carrying out measurements on many identically prepared systems.

    Bob Griffiths

    #932
    Richard Healey
    Participant

    I have a question about what can be inferred from the final system-apparatus state in Max’s equation (11) (which he takes to be just the zeroth-order approximation). It seems that to infer from (11) that we have measured the expectation value of O, we need to appeal to some general principle connecting state assignments to value assignments. One such principle is the eigenstate-eigenvalue link (in that direction). EPR explicitly used this, as well as their reality criterion, in arguing for the incompleteness of the quantum mechanical description, though Einstein’s own later versions of the argument did not.
    I think both the original EPR reality criterion and this eigenstate-eigenvalue link are controversial interpretative assumptions; moreover, I think they are both false. What do you think?

    #933

    Dear Bob,

    Many thanks for your comments.

    First, let me reply to your suggestion that a “FAPP” solution may well be acceptable. I’m happy with FAPP solutions to most problems, even some foundational ones. But I’m not sure that protective measurement falls into that category. As mentioned in my reply to Shan above, in protective measurement we’re not determining the initial quantum state with a certain degree of precision, in the sense that the measured state will always be close to the initial state. (In which case I’d happily endorse such a FAPP solution.) Rather, in some measurements we’re obtaining information about the initial state, and in others we’re obtaining information about an orthogonal state (with the system then projected onto that state). There is an irreducible randomness in this process, which is simply a reflection of the unavoidable randomness in the outcome of any quantum measurement of a state that is not already known. In this way, protective measurement is just like any other quantum measurement, including the trade-off between disturbance and information gain. So given this, how could we get any foundational mileage out of protective measurement, which is, after all, essentially just a long weak measurement?

    I’ll respond to your second point in a moment.

    Best,
    Max

    #934
    Ken Wharton
    Member

    Hi Max (and Shan…),

    I enjoyed your paper, and I think I buy most (if not all) of your big points… Of course I’m interested in what you *do* think the wavefunction represents, but perhaps that’s too big a can of worms… 🙂

    So perhaps a question for Shan instead: You seem to be somehow linking “expectation values” and “physical properties”. One of the earliest examples I give to my intro physics classes about expectation values is the expected return from a hand of $5 blackjack. Say the expected return is minus $0.15. Is this expectation value a “property” of a hand of blackjack?

    Better yet, say I knew all of the expectation values, and could recover the full probability distribution over all outcomes of a hand of blackjack. In what sense is this distribution a physical property?
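    (To make the arithmetic concrete, with made-up probabilities: if a $5 hand wins $5 with probability 0.44, loses $5 with probability 0.47, and pushes with probability 0.09, then the expected return is 0.44 x (+$5) + 0.47 x (-$5) + 0.09 x ($0) = -$0.15. Note that no individual hand ever returns -$0.15.)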

    Best,
    Ken

    #935

    Bob,

    Now regarding your second comment. I agree with your drawing of an analogy between protective measurements on a single system (assumed to remain in the initial state throughout) and tomographic measurements on an ensemble of identically prepared systems. There’s certainly a formal similarity here: we’re trading impulsive single measurements on a collection of systems, as in tomography, for a collection of weak measurements on a single system.

    There’s also another connection here. In protective measurement, we could reliably determine the expectation values if each measurement were infinitely long and weak. Similarly, in tomography, if we had infinitely many systems in the ensemble, then we could reliably determine the quantum state of each system. So why does nobody claim that the latter fact implies the reality of the wave function? Advocates of applying protective measurement to foundational problems would say, I think, that it’s because we’re dealing with an ensemble, not a single system. But as far as the limiting procedures go (infinite ensemble size vs. infinite measurement time), they strike me as similarly questionable, and therefore their ability to justify sweeping foundational conclusions about the wave function strikes me as similarly questionable, too.
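    To make the parallel slightly more quantitative (rough orders only, notation mine): in tomography the statistical uncertainty in an estimated expectation value shrinks as $1/\sqrt{N}$ with ensemble size N, while in protective measurement the residual entanglement and the transition probability vanish only as inverse powers of the measurement time T (corrections of order 1/T, of the kind analyzed by Dass and Qureshi). In both cases, certainty lives only in the $N \to \infty$ or $T \to \infty$ limit.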

    Best,
    Max

    #939

    Richard,

    Thanks, that’s a good question. In the usual formulation, protective measurement follows a von Neumann-style description of measurement. That is, Eq. (11) is taken to describe a situation in which information about the different energy eigenstates |n> is transferred to the mean position of the pointer wave packet, encoded in the corresponding quantum correlations. This, of course, calls for a second measurement stage, the actual readout of the pointer (which in turn is subject to the usual QM indeterminism, something that Dass and Qureshi discuss in their Phys. Rev. A paper).

    So protective measurement is essentially silent on any particular interpretation of the measurement process: once the correlations are established, it defers to the usual rules of QM. The eigenstate-eigenvalue link would only come into play once we have made a secondary, projective measurement on the pointer, collapsing the entangled state on the right-hand side of Eq. (11). But protective measurement does not explicitly deal with that stage.
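    Schematically, and in my notation here rather than the paper’s: to zeroth order, a system protected in the energy eigenstate |n> stays put while the pointer wave packet is displaced by the expectation value of O,

    $|n\rangle |\phi(x)\rangle \;\longrightarrow\; |n\rangle |\phi(x - g\langle n|O|n\rangle)\rangle \;+\; O(1/T)\ \text{(entangling terms)},$

    with g the integrated coupling strength. The information about <O> thus sits in the pointer’s mean position, and extracting it requires an ordinary projective readout of the pointer, with all the indeterminism that entails.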

    I hope I understood your question correctly; if not, please follow up!

    Best,
    Max

    #940
    Robert Griffiths
    Participant

    Dear Max,

    Responding to your #933, you do agree that part of the time, maybe 90% of the time, I will get some information about the initial state. Well, that suggests that you would agree that there was an initial state to get information about. Now that we agree that we have seen the Higgs with 5-sigma certainty, we can be looking for further details with reasonable confidence that it won’t simply vanish into the noise. And my colleagues do think it is there. If it isn’t there, that would really shake the foundations of modern particle physics.

    Bob G

    #941
    Richard Healey
    Participant

    Max,

    Thanks, that’s helpful. It seems that protective measurement avails itself of the usual “buck-passing” move of assuming that at some stage in the von Neumann chain one can either appeal to a collapse or read a superposition as a mixture FAPP.

    #942

    Hi Ken,

    Thanks! As far as my own view of the wave function is concerned, well, this may be opening a can of worms indeed. I’m happy to say, however, that I’m partial to what people call “epistemic” approaches. That is, I like to think of the wave function as a calculational tool for organizing information/knowledge/beliefs (I’m not exactly sure which notion is the least troublesome) about future measurement outcomes (or perhaps “experiences,” as QBism puts it). But I try not to be dogmatic about it.

    Ah, I should have never said anything.

    Best,
    Max

    #943

    Dear Bob,

    Responding to your #933, you do agree that part of the time, maybe 90% of the time, I will get some information about the initial state. Well, that suggests that you would agree that there was an initial state to get information about.

    Yes, certainly there “is” an initial state, in the sense that the system has been prepared in some (unknown) eigenstate of the Hamiltonian. So now I can go and try to extract information about that state using protective measurement. But in doing so, I will always disturb the state, in the sense that an entangled system-apparatus state will be created. Now, in the final readout of the pointer, I may happen to project the system back onto its initial state (and in the process get information about that state). But this, of course, is an indeterministic process, and in the end there’s no way for me to know whether whatever I have extracted is information about the initial state or information about some other state. Moreover, wouldn’t you agree that this uncontrollable back-projection is different from a situation in which the system has remained in its initial state at all times and I have learned something about it? The latter is what I would say we’d need to establish the reality of the wave function, but QM won’t let us have it, and protective measurement can’t change that fact, since it’s just a (clever) application of QM.

    Best,
    Max

    #944
    Robert Griffiths
    Participant

    Dear Max and Shan,

    I credit the two of you with inducing me to think about the protective measurement problem afresh. I have known about it for many years, but have not given it much thought until the past couple of days.

    Let me remark, as a matter of historical interest, that Jeeva Anandan, who was part of the Aharonov and Vaidman team in the early days of protective measurement, received his PhD at the University of Pittsburgh, which is a short distance from my office at Carnegie-Mellon. He later went on to become a faculty member at South Carolina, where Aharonov had some sort of appointment, and they were advisers to one of my postdocs, Shengjun Wu, when he got his PhD there. Alas, Jeeva died (I do not recall the cause) while Shengjun was still a graduate student; otherwise he might have been an interested party to our discussion.

    Back to my main topic, and to Max. If you employ the (consistent/decoherent) histories approach, then |psi> can also, under suitable circumstances, be considered part of the ontology, thus ‘real’ in what I think is the proper quantum sense of real. The circumstance that it has been appropriately prepared then gives it ontic status, at least until it gets hit by, and to a small extent entangled with, a probing projectile. This status is one that, as I said, follows from the preparation, not the protective measurement. Thus I would not regard the latter as a “proof” of the reality of |psi> (in the sense of a quantum property or subspace of the Hilbert space), but nonetheless I would express my admiration for Aharonov, Anandan, and Vaidman for a clever idea that was pointing in the right direction. Good physicists tend to have an excellent physical intuition, even if they don’t get things precisely right when seen in the light of later developments.

    And, by the way, the ontic status of |psi> just before the probe arrives justifies using it as a pre-probability for the properties which the probe is probing, so things hold together in a reasonable way.

    But in this I am perhaps agreeing with you, Max, in that I think the fundamental questions are best solved by having a fully consistent theory that agrees with known experiments, which is what we arrogant historians claim we have. Nonetheless, wouldn’t you allow that protective measurement is an interesting idea and at least an indication that the folk who think that quantum states can only be associated with ensembles might want to think again?

    Bob G

    #945

    Dear Bob,

    Thank you for sharing these fascinating recollections about Jeeva.

    Nonetheless, wouldn’t you allow that protective measurement is an interesting idea … ?

    Absolutely! I think it’s a brilliant measurement scheme, just as brilliant as weak measurement (though the latter seems to have received more attention and gained more experimental traction). I very much admire Aharonov, Vaidman, and their co-workers for coming up with it. It has definitely enriched our conception of a quantum measurement.

    So I’m not here to dismiss protective measurement itself; I’m just trying to point a cautioning finger at the bolder foundational claims that have been associated with it. But even if none of those claims goes through, I think protective measurement is an important contribution that deserves attention and will hopefully, some day, be implemented experimentally as well.

    Best,
    Max

    #946

    Bob has raised an interesting question, namely: to what extent does protective measurement challenge the viewpoint that the wave function only describes ensembles, as Ballentine and others once suggested?

    My own sense is that there is no real challenge, just as doing quantum experiments on single quantum systems does not amount to a challenge. The reason is that even when we deal with single systems, there always remains an irreducibly random (indeterministic) element in the quantum description, which is ultimately cashed out in statistical terms. And such statistical terms may always be interpreted as only meaningful in the sense of statistics of measurements on identically prepared systems, i.e., ensembles. At least this is how I imagine Ballentine might reply.

    What do you think?

    #947
    Robert Griffiths
    Participant

    Dear Max,

    Relative to your #943. I think we are actually pretty close, maybe close enough to call it FAPP. I certainly agree that there is some probability that you will kick the system into a different energy eigenstate (assuming energy provides the protection), and in that case you will draw a wrong conclusion. I as a physicist may be a bit more bold and rush the result off to Phys. Rev. Letters (which publishes plenty of mistakes). And if my probe kicks the system into a different state, then it is in a different state; no denying that. And that textbook QM is not a good place to go if you want to understand what really goes on in a measurement of any sort, that I have to concede.

    Bob G

    #949
    Robert Griffiths
    Participant

    Dear Max,

    Re your #946. I confess that while I think I understand probability theory, I am not sure what people mean by ensembles. Let us imagine that the NSA has built their quantum computer to factor some long number and break somebody’s code. Shor tells them they have a probability of succeeding of, say, 75% when they run the machine. They run the machine and they succeed on the first try, after which they disassemble the machine. (If they didn’t succeed the first time, they would run the machine again.) How would the ensemble folk understand this application of quantum mechanics?

    Bob Griffiths

    #950

    Dear Bob,

    I certainly agree that there is some probability that you will kick the system into a different energy eigenstate (assuming energy provides the protection), and in that case you will draw a wrong conclusion.

    Sure, and from a practical point of view, if I can make that probability small enough I might well be perfectly happy. I have no problem with that. But I don’t think that’s enough to draw fundamental conclusions about the meaning of the wave function, which is the conclusion I’m challenging.

    I as a physicist may be a bit more bold and rush the result off to Phys. Rev. Letters (which publishes plenty of mistakes).

    You are bold indeed — Aharonov, Anandan, and Vaidman only rushed it off to Phys. Rev. A!

    Thank you for all your comments. I’m glad Shan and I motivated you to think about protective measurements again.

    Best,
    Max

    #951
    Richard Healey
    Participant

    To Max at #946,

    While I basically agree with you, I do think that protective measurement is helpful by forcing us to think about the significance of applying quantum mechanics to a single actual system, where one has no real statistics.

    Here it is important to distinguish between probability and statistics. As we all (should!) know, probability cannot be non-circularly defined in terms of relative frequencies, either actual or “virtual”, even though statistics can provide the best evidence for probabilistic claims when they are available.

    The kind of view of the wave-function you were suggesting (which I think is basically right) is better expressed by calling the wave-function a source of probabilities rather than a description of the statistical properties of ensembles, whatever those are supposed to be (something I have never seen clearly explained). Probabilistic claims can be applied to any number of systems, and they function in the same way whatever that number: as a guide to coherent degrees of belief.

    We can go on to argue about what probabilities are, but I doubt that talk of statistics in ensembles will be helpful in this debate.

    #955

    Dear Bob,

    Re your #949. Your question is essentially about how to understand probability assignments. I.e., what does a 75% chance of success mean if there’s just a single trial? And there are lots of different interpretations and ways to motivate that (see QBism for a radical and interesting take). I would say that for an ensemble person, the only meaningful interpretation of “75% chance of success” is based on relative frequencies of outcomes of repeated trials. And the quantum state of your quantum computer then reflects that property; you are welcome to assign a quantum state to an individual machine, but its operational meaning will only be cashed out in terms of statistics of many “trials.”
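    To spell out the frequency reading for this one-shot case (numbers purely for illustration): the “run it again if it fails” protocol itself supplies the repeated trials, since the probability of at least one success within k independent runs is $1 - 0.25^k$, about 98.4% for k = 3 and about 99.9% for k = 5. A single run, taken in isolation, simply succeeds or fails.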

    Well, I may be leading myself down a rabbit hole here, so I better stop.

    Best,
    Max

    #956

    Hi all,

    Thanks so much for all your excellent comments. I have to dash and hold office hours now; my students are banging down the door with questions about Newtonian physics. But I’ll be back tomorrow to answer any further comments and queries.

    Also, look for Matt Pusey’s and Shan’s talks on the same topic. We can continue the discussion there.

    Thanks again,
    Max

    #960

    Dear Richard,

    The kind of view of the wave-function you were suggesting (which I think is basically right) is better expressed by calling the wave-function a source of probabilities rather than a description of the statistical properties of ensembles.

    I’m happy to go with that. But what exactly do you mean by “source of probabilities”? Is that source fed (determined) by something external, objective, physical? Or is it a “source” just in the sense of a collection of my personal beliefs, as a subjective Bayesian would have it? I think you’re on the latter bandwagon, but I’d like to make sure.

    I do concur that the ensemble conception is confusing (see my confused attempt to explain it in #955), and certainly the derivation of probabilities from relative frequencies is circular, as is well known.

    Best,
    Max

    #971
    Richard Healey
    Participant

    Hi again, Max,

    My position lies between the two options you lay out. The wave function is a source of probabilities only in the shallow sense that in the conventional formulation one applies quantum mechanics by calculating probabilities from the wave function by applying the Born rule. Neither the wave function nor the probabilities one calculates from it are physical objects, fields, beables, or anything like that.
    But both wave function and probabilities are objective in ways subjective Bayesians deny.
    To make one thing clear, both wave functions and probabilities are relational: a system does not have a wave function, and a wave function is not (in general) uniquely assigned. But what these are relative to is not the actual epistemic state of any agent but something physical, representing the situation of a hypothetical localized agent. An important aspect of this physical situation is the space-time location of such a hypothetical agent. That’s why Alice and Bob (whether or not they exist!) should assign different wave functions to Alice’s photon immediately after Bob’s measurement, and use the Born rule to assign different probabilities to Alice’s outcomes.
    Given such a physical specification of a hypothetical agent’s situation, there is a right answer to what the wave function and probabilities are, no matter what any actual agent may think. That’s an important difference from the subjective Bayesians’ view. What makes an answer right is again physical conditions in the world, which I take to be specifiable by true magnitude claims. Bob’s correct wave function assignment to Alice’s photon, for example, is what it is because of Bob’s outcome in his measurement. And what makes the entangled Bell state the correct state for any agent in Alice’s or Bob’s position to assign to the photon pair is the physical conditions involved in preparing that state.
    So wave function assignments and Born probability assignments are objectively true (or false) depending on how the world is: but this does not make wave functions or probabilities “elements of physical reality”. It is also objectively true that we are currently exchanging views about probability, but that is not a physical fact, even though it is physical facts that make it true.

    #974

    Hi Max,

    Thanks for your answers! I admit one of your objections to the new criterion is valid; there are indeed two fundamentally different types of disturbance.

    I think the new criterion can be further improved to avoid this new objection. It is:

    “If, by disturbing a system with probability arbitrarily close to zero, we can predict with probability arbitrarily close to unity the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”

    Think of the modern definition of the derivative. I think this criterion can be regarded as equivalent to the original EPR criterion, and it is more suitable for the quantum case.
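    To spell out the analogy (my gloss): the derivative is defined as

    $f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h},$

    that is, entirely in terms of arbitrarily good approximations, with no exact evaluation at h = 0 ever required. In the same spirit, the criterion assigns reality on the basis of predictions that become arbitrarily good as the disturbance goes to zero, without requiring a strictly disturbance-free measurement.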

    Shan

    PS. Based on your objections, I am also beginning to doubt the direct road from measurability to reality. I think a reductio ad absurdum argument like the PBR argument is better. See my PBR-like argument for psi-ontology in terms of protective measurements. We will discuss this later.

    #979

    Hi Shan,

    Thank you for your reply.

    As for your revised criterion, well, I think ultimately it’s a matter of taste which criterion each of us deems satisfactory for establishing the reality of the wave function. I would still say that “disturbing a system with probability arbitrarily close to zero” is not sufficient, because any nonzero disturbance whatsoever implies an irreducible indeterministic (statistical) element.

    The identification of (ideal) measurability with reality of course has its origin in classical physics, where we consider something a physical property (and hence “real”) if we can measure it. But since QM forbids reliable measurement of an unknown quantum state, this criterion of measurability cannot be applied to the wave function. Which is why I would agree with you in saying that equating measurability with reality (that is, deducing reality from measurability) is too simplistic an approach for quantum mechanics — and it is why, I suppose, PBR had to go a different route. (In my opinion, they still did not succeed in establishing the reality of the wave function, but their argument is nonetheless interesting for various other reasons.)

    But we should discuss more during your talk!

    Best,
    Max

    #984
    editor
    Keymaster

    Thanks, Max. See you then!
