Latest Papers on Quantum Foundations - Updated Daily by IJQF

Authors: Run-Qiu Yang, Yu-Sen An, Chao Niu, Cheng-Yong Zhang, Keun-Young Kim

We comment on some shortcomings of non-unitary-invariant and non-bi-invariant complexity in quantum mechanics/field theory and argue that unitary-invariant, bi-invariant complexity remains a competitive candidate there, in contrast to quantum circuits in quantum computation. Based on the unitary invariance of the complexity and on intuitions from holographic complexity, we propose a novel complexity formula between two states. Our proposal shows that i) the complexity between certain states in two-dimensional CFTs is given by the Liouville action, which is compatible with path-integral complexity; and ii) it gives a natural interpretation of both the CV and CA holographic conjectures and identifies the reference states in both cases. The proposal explicitly reproduces the conjectured time dependence of the complexity: linear growth in chaotic systems. Last but not least, we present an interesting relation between the complexity and the Lyapunov exponent: the Lyapunov exponent is proportional to the complexity growth rate in the linear-growth region.
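
As a hedged restatement (our notation, not the authors'): in the linear-growth regime the claimed relation reads

$$\frac{dC(t)}{dt} \;\propto\; \lambda_L,$$

i.e., the complexity grows roughly as $C(t) \approx \lambda_L\, t + \text{const}$ while the chaotic linear-growth behaviour lasts, with $\lambda_L$ the Lyapunov exponent.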

Authors: G.W. Gibbons

Two lectures given in Paris in 1985. They were circulated as a preprint, "Solitons and Black Holes in Four Dimensions, Five Dimensions," G.W. Gibbons (Cambridge U.), PRINT-85-0958 (CAMBRIDGE), received December 1985, 14 pp., and appeared in print in De Vega, H.J. and Sanchez, N. (eds.), Field Theory, Quantum Gravity and Strings, pp. 46-59.

I have scanned the original, reformatted it, and corrected various typos.

Authors: M.D.C. Torri, V. Antonelli, L. Miramonti

This work explores a possible extension of the Standard Model (S.M.) that violates Lorentz invariance while preserving space-time isotropy and homogeneity. In this sense, HMSR represents an attempt to introduce an isotropic Lorentz Invariance Violation into the elementary-particle S.M. The theory is constructed starting from a modified kinematics that takes into account supposed quantum effects due to interaction with the space-time background. The space-time structure itself is modified, resulting in a pseudo-Finsler manifold. The S.M. extension provided here is inspired by effective field theories, but it preserves covariance with respect to newly introduced modified Lorentz transformations. Geometry perturbations are not considered universal, but particle-species dependent. The non-universal character of the amended Lorentz transformations makes it possible to obtain visible physical effects, detectable in experiments by comparing the different perturbations associated with different species of interacting particles.

Authors: Louis Marchildon

All investigators working on the foundations of quantum mechanics agree that the theory has profoundly modified our conception of reality. But there ends the consensus. The unproblematic formalism of the theory gives rise to a number of very different interpretations, each of which has consequences on the notion of reality. This paper analyses how the Copenhagen interpretation, von Neumann's state vector collapse, Bohm and de Broglie's pilot wave and Everett's many worlds modify, each in its own way, the classical conception of reality, whose local character, in particular, requires revision.

Authors: Gabriel R. Bengochea, Gabriel León, Elias Okon, Daniel Sudarsky

Recently it has been argued that a correct reading of the quantum fluctuations of the vacuum could lead to a solution to the cosmological constant problem. In this work we critically examine such a proposal, finding it questionable due to conceptual and self-consistency problems, as well as issues with the actual calculations. We conclude that the proposal is inadequate as a solution to the cosmological constant problem.

Authors: S. Sołtan, D. Dopierała, A. Bednorz

A recent group of experiments tested local realism with random choices prepared by humans. These tests were subject to additional assumptions, which lead to loopholes in the interpretation of almost all of the experiments. Among these assumptions are fair sampling, no signaling, and a faithful quantum model. We examined the data from 9 of the 13 experiments and analyzed the anomalies that occur in view of the above assumptions. We conclude that further tests of local realism need better setup calibration to avoid apparent signaling or the need for a complicated underlying quantum model.

Authors: A.P. Balachandran, I.M. Burbano, A.F. Reyes-Lega, S. Tabban

The algebraic approach to quantum physics emphasizes the role played by the structure of the algebra of observables and its relation to the space of states. An important feature of this point of view is that subsystems can be described by subalgebras, with the partial trace being replaced by the more general notion of restriction to a subalgebra. This, in turn, has recently led to applications to the study of entanglement in systems of identical particles. In the course of those investigations on entanglement and particle identity, an emergent gauge symmetry was found by Balachandran, de Queiroz and Vaidya. In this letter we establish a novel connection between that gauge symmetry, entropy production and quantum operations. Thus, let A be a system described by a finite-dimensional observable algebra and $\omega$ a mixed faithful state. Using the Gelfand-Naimark-Segal (GNS) representation we construct a canonical purification of $\omega$, allowing us to embed A into a larger system C. Using Tomita-Takesaki theory, we obtain a subsystem decomposition of C into subsystems A and B, without making use of any tensor product structure. We identify a group of transformations that acts as a gauge group on A while at the same time giving rise to entropy-increasing quantum operations on C. We provide physical means to simulate this gauge symmetry/quantum operation duality.

Sullivan, Emily (2019) Universality Caused: The case of renormalization group explanation. [Preprint]
Chen, Eddy Keming (2019) Realism about the Wave Function. Philosophy Compass.
Dardashti, Radin and Hartmann, Stephan and Thebault, Karim P Y and Winsberg, Eric (2015) Hawking Radiation and Analogue Experiments: A Bayesian Analysis. [Preprint]
De Haro, Sebastian and Butterfield, Jeremy (2019) On symmetry and duality. Synthese. pp. 1-41.
McKenzie, Alan (2019) Levels of reality: emergent properties of a mathematical multiverse. [Preprint]
Cushman, Matthew (2019) Anthropic Indexical Sampling and Implications for The Doomsday Argument. [Preprint]
De Haro, Sebastian and Butterfield, Jeremy (2019) A Schema for Duality, Illustrated by Bosonization. Foundations of Mathematics and Physics one Century after Hilbert.

Authors: Krishan Saraswat, Niayesh Afshordi

Two seemingly distinct notions regarding black holes have captured the imagination of theoretical physicists over the past decade: First, black holes are conjectured to be fast scramblers of information, a notion that is further supported through connections to quantum chaos and the decay of mutual information via AdS/CFT holography. Second, the black hole information paradox has motivated exotic quantum structures near the horizons of black holes (e.g., gravastars, fuzzballs, or firewalls) that may manifest themselves through delayed gravitational-wave echoes in the aftermath of black hole formation or mergers, and are potentially observable by the LIGO/Virgo observatories. By studying various limits of charged AdS/Schwarzschild black holes we show that, if properly defined, the two seemingly distinct phenomena happen on an identical timescale of $\log(\text{Radius})/(\pi \times \text{Temperature})$. We further comment on the physical interpretation of this coincidence and the corresponding holographic interpretation of black hole echoes.
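
Stated schematically in our own notation rather than the authors' precise definitions: for a black hole of horizon radius $R$ and Hawking temperature $T$, both the scrambling time and the echo delay are argued to take the common form

$$t \;\sim\; \frac{\log R}{\pi\, T},$$

where the argument of the logarithm is presumably made dimensionless by an appropriate microscopic cutoff (e.g., a Planck-scale length), a detail the abstract leaves implicit.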

Authors: Steven B. Giddings

A succinct summary is given of the problem of reconciling observation of black hole-like objects with quantum mechanics. If quantum black holes behave like subsystems, and also decay, their information must be transferred to their environments. Interactions that accomplish this with `minimal' departure from a standard description are parameterized. Possible sensitivity of gravitational wave or very long baseline interferometric observations to these interactions is briefly outlined.

Authors: P. B. Lerner

The Gedankenexperiment advanced by Frauchiger and Renner in their Nature paper is based on an implicit assumption that one can synchronize stochastic measurement intervals between two non-interacting systems. This hypothesis, the author demonstrates, is equivalent to the complete entanglement of these systems. Consequently, Frauchiger and Renner's postulate Q is too broad and, in general, meaningless. An accurate reformulation of the postulate, Q1, does not seem to entail any paradoxes with measurement. This paper is agnostic with respect to particular interpretations of quantum mechanics. Nor does it refer to the collapse of the wavefunction.

Authors: Yuri G Rudoy, Enock O Oladimeji

In this paper a detailed investigation of one of the most interesting models in the non-relativistic quantum mechanics of a single massive particle, the model introduced by G. Poeschl and E. Teller in 1933, is presented. This model includes as particular cases two of the most popular and valuable models: the quasi-free particle in a box with impenetrable hard walls (i.e., the model with confinement) and the Bloch quantum harmonic oscillator, which is unconfined in space; both models are frequently and effectively exploited in modern nanotechnology, e.g., in quantum dots and magnetic traps. We give an extensive and elementary exposition of the potentials, wave functions, and energy spectra of all these interconnected models. Moreover, the pressure operator is defined following the lines of Hellmann and Feynman, who were the first to introduce this idea in quantum chemistry in the late 1930s. By these means the baroenergetic equation of state is obtained and analyzed for all three models; in particular, the absence of pressure for the Bloch oscillator due to the infinite width of the box is demonstrated. The generalization of these results to the case of nonzero temperature will be given later.

Authors: A. Hariri, D. Curic, L. Giner, J. S. Lundeen

The weak value, the average result of a weak measurement, has proven useful for probing quantum and classical systems. Examples include the amplification of small signals, investigating quantum paradoxes, and elucidating fundamental quantum phenomena such as geometric phase. A key characteristic of the weak value is that it can be complex, in contrast to a standard expectation value. However, typically only either the real or imaginary component of the weak value is determined in a given experimental setup. Weak measurements can be used to, in a sense, simultaneously measure non-commuting observables. This principle was used in the direct measurement of the quantum wavefunction. However, the wavefunction's real and imaginary components, given by a weak value, are determined in different setups or on separate ensembles of systems, putting the procedure's directness in question. To address these issues, we introduce and experimentally demonstrate a general method to simultaneously read out both components of the weak value in a single experimental apparatus. In particular, we directly measure the polarization state of an ensemble of photons using weak measurement. With our method, each photon contributes to both the real and imaginary parts of the weak-value average. On a fundamental level, this suggests that the full complex weak value is a characteristic of each photon measured.
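
For reference, the quantity under discussion is the standard (textbook) weak value of an observable $\hat A$ for preselected state $|\psi_i\rangle$ and postselected state $|\phi_f\rangle$,

$$A_w \;=\; \frac{\langle \phi_f|\hat A|\psi_i\rangle}{\langle \phi_f|\psi_i\rangle},$$

which is in general complex because it is a ratio of transition amplitudes rather than an expectation value; the experiment's goal is to read out ${\rm Re}\,A_w$ and ${\rm Im}\,A_w$ in a single apparatus.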

In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”

The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.

Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)
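
Schematically, and suppressing all the structure the article itself sets aside, the no-boundary wave function is defined by a Euclidean path integral of the form (our paraphrase of the standard construction, not a formula from the article)

$$\Psi[h_{ij},\phi] \;=\; \int_{\mathcal C} \mathcal{D}g\, \mathcal{D}\phi\; e^{-S_E[g,\phi]/\hbar},$$

where the sum runs over compact ("no-boundary") four-geometries $g$ and matter configurations $\phi$ whose only boundary carries the three-geometry $h_{ij}$ and boundary field value $\phi$; in the minisuperspace truncation described above, $g$ collapses to a single scale factor and $\phi$ to the homogeneous inflaton value.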

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.
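
In the minisuperspace setting the disagreement can be compressed (again in our schematic notation) into the choice of contour for an ordinary integral over the lapse $N$,

$$\Psi(a) \;=\; \int_{\mathcal C} dN \int \mathcal{D}a\; e^{\,i S[a,N]/\hbar}.$$

Turok, Feldbrugge and Lehners take $\mathcal C$ to run over real, positive values of $N$ (a Lorentzian integral evaluated with Picard-Lefschetz methods), which lands on the wildly fluctuating saddle point, whereas the contours favored by Hartle, Hawking and their collaborators pass through complex values of $N$ and pick up the smooth, universe-like saddle point instead.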

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.  

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.
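
The "minus sign" can be made explicit in the usual schematic way (our notation, with factors of order unity omitted): writing $\Lambda$ for the effective vacuum energy driving inflation, the no-boundary and tunneling wave functions weight the nucleation of a universe as

$$\Psi_{\rm NB} \;\propto\; e^{+|S_E|}, \qquad \Psi_{\rm T} \;\propto\; e^{-|S_E|}, \qquad |S_E| \;\sim\; \frac{3\pi}{\hbar G \Lambda},$$

so the no-boundary weighting favors small $\Lambda$ (hence the nearly empty universes mentioned below), while the tunneling weighting favors large $\Lambda$ and vigorous inflation.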

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”

Correction: This article was revised on June 6, 2019, to list Latham Boyle and Kieran Finn as co-developers of the CPT-symmetric universe idea.




When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A new experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published on Monday in Nature. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

All Too Random

The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
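
Bohr's rule can be stated in one line (the standard textbook relation, not something specific to this article): a jump between allowed energies $E_m$ and $E_n$ absorbs or emits a photon of frequency $\nu$ satisfying

$$h\nu \;=\; |E_m - E_n|,$$

which is why each element absorbs and emits its own characteristic set of wavelengths.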

Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.

Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

Seeing Without Looking

To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped, at random moments, between a “bright” state, in which they could emit a photon of light, and a “dark” state that did not emit, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported jumps that correspond to what they called “the birth, life and death of individual photons.”

In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior  — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the “dark” state — because it remains hidden from direct view.

The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

To Catch a Jump

The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.
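
As an illustration only, here is a toy simulation of that detection rule (invented parameters and a deliberately simplified click model, not the Yale team's analysis code): generate a record of bright-state clicks interrupted by dark-state hiatuses, then flag a precursor whenever a no-click interval significantly exceeds the typical gap between clicks.

import random

# Toy parameters, chosen only to mimic the orders of magnitude quoted in the text.
MEAN_CLICK_GAP_US = 3.0     # typical microseconds between bright-state clicks
BRIGHT_SPELL_US = 300.0     # typical length of an uninterrupted clicking spell
DARK_DWELL_US = 100.0       # typical silent dwell time in the dark state
THRESHOLD_FACTOR = 5.0      # "significantly exceeds" the typical click gap

def simulate_click_times(total_us, seed=0):
    """Return click timestamps: Poisson clicks interrupted by dark-state hiatuses."""
    rng = random.Random(seed)
    t, clicks = 0.0, []
    while t < total_us:
        bright_end = t + rng.expovariate(1.0 / BRIGHT_SPELL_US)
        while True:  # Poisson clicks while the system cycles through the bright state
            t += rng.expovariate(1.0 / MEAN_CLICK_GAP_US)
            if t >= bright_end or t >= total_us:
                break
            clicks.append(t)
        # Silent hiatus: the system has jumped to the dark state.
        t = bright_end + rng.expovariate(1.0 / DARK_DWELL_US)
    return clicks

def find_precursors(clicks, mean_gap=MEAN_CLICK_GAP_US, factor=THRESHOLD_FACTOR):
    """Flag the start of every no-click interval longer than factor * mean_gap."""
    return [earlier for earlier, later in zip(clicks, clicks[1:])
            if later - earlier > factor * mean_gap]

if __name__ == "__main__":
    clicks = simulate_click_times(5000.0)
    warnings = find_precursors(clicks)
    print(len(clicks), "clicks;", len(warnings), "suspected jumps to the dark state")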

That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.
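
In the language of quantum trajectory theory (a schematic paraphrase in our notation, not the paper's exact parametrization), the reconstructed state during a jump can be written as a continuously rotating superposition of the ground state $|G\rangle$ and the dark state $|D\rangle$,

$$|\psi(t)\rangle \;=\; \cos\tfrac{\theta(t)}{2}\,|G\rangle \;+\; e^{i\varphi(t)}\,\sin\tfrac{\theta(t)}{2}\,|D\rangle,$$

with the mixing angle $\theta(t)$ sweeping smoothly from $0$ to $\pi$ over those few microseconds; the tomographically reconstructed weights are then $\cos^2\tfrac{\theta}{2}$ and $\sin^2\tfrac{\theta}{2}$.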

What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

Flash of Insight

The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal has not been proposed or measured before,” he said.

Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start if the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” To achieve the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.



Gryb, Sean and Palacios, Patricia and Thebault, Karim P Y (2019) On the Universality of Hawking Radiation. [Preprint]
Making a Difference: Essays on the Philosophy of Causation. Edited by Helen Beebee, Christopher Hitchcock and Huw Price. Oxford University Press, 2017. xii + 336 pp.

Nature, Published online: 03 June 2019; doi:10.1038/s41586-019-1287-z

Experiment overturns Bohr’s view of quantum jumps, demonstrating that they possess a degree of predictability and when completed are continuous, coherent and even deterministic.

Authors: Steven B. Giddings, Seth Koren, Gabriel Treviño

Two new observational windows have been opened to strong gravitational physics: gravitational waves, and very long baseline interferometry. This suggests observational searches for new phenomena in this regime, and in particular for those necessary to make black hole evolution consistent with quantum mechanics. We describe possible features of "compact quantum objects" that replace classical black holes in a consistent quantum theory, and approaches to observational tests for these using gravitational waves. This is an example of a more general problem of finding consistent descriptions of deviations from general relativity, which can be tested via gravitational wave detection. Simple models for compact modifications to classical black holes are described via an effective stress tensor, possibly with an effective equation of state. A general discussion is given of possible observational signatures, and of their dependence on properties of the colliding objects. The possibility that departures from classical behavior are restricted to the near-horizon regime raises the question of whether these will be obscured in gravitational wave signals, due to their mutual interaction in a binary coalescence being deep in the mutual gravitational well. Numerical simulation with such simple models will be useful to clarify the sensitivity of gravitational wave observation to such highly compact departures from classical black holes.

Authors: Peter Holland

We develop a trajectory construction of solutions to the massless wave equation in n+1 dimensions and hence show that the quantum state of a massive relativistic system in 3+1 dimensions may be represented by a stand-alone four-dimensional congruence comprising a continuum of 3-trajectories coupled to an internal scalar time coordinate. A real Klein-Gordon amplitude is the current density generated by the temporal gradient of the internal time. Complex amplitudes are generated by a two-phase flow. The Lorentz covariance of the trajectory model is established.

Authors: Andreas Aste

This paper addresses the question why quantum mechanics is formulated in a unitary Hilbert space, i.e. in a manifestly complex setting. Investigating the linear dynamics of real quantum theory in a finite-dimensional Euclidean Hilbert space hints at the emergence of a complex structure. A widespread misconception concerning the measurement process in quantum mechanics and the hermiticity of observables is briefly discussed.

Authors: Michael K.-H. Kiessling

This contribution inquires into Clausius' proposal that "the entropy of the world tends to a maximum." The question is raised whether the entropy of 'the world' actually does have a maximum; and if the answer is "Yes!," what such states of maximum entropy look like, and if the answer is "No!," what this could entail for the fate of the universe. Following R. Penrose, 'the world' is modelled by a closed Friedmann--Lemaitre type universe, in which a three-dimensional spherical 'space' is filled with 'matter' consisting of $N$ point particles, their large-scale distribution being influenced by their own gravity. As 'entropy of matter' the Boltzmann entropy for a (semi-)classical macrostate, and Boltzmann's ergodic ensemble formulation of it for an isolated thermal equilibrium state, are studied. Since the notion of a Boltzmann entropy is not restricted to classical non-relativistic physics, the inquiry will take into account quantum theory as well as relativity theory; we also consider black hole entropy. Model universes having a maximum entropy state and those which don't will be encountered. It is found that the answer to our maximum entropy question is not at all straightforward at the general-relativistic level. In particular, it is shown that the increase in Bekenstein--Hawking entropy of general-relativistic black holes does not always compensate for the Boltzmann entropy of a piece of matter swallowed by a black hole.
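
For orientation, the two entropies being compared are standard (the formulas below are textbook expressions, not results of the paper): the Bekenstein--Hawking entropy of a black hole with horizon area $A$,

$$S_{\rm BH} \;=\; \frac{k_B c^3 A}{4 G \hbar},$$

and the Boltzmann entropy $S_B = k_B \log W$ of the matter macrostate; the paper's claim is that the growth of the former upon swallowing a piece of matter does not always compensate for the loss of the latter.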

Hetzroni, Guy (2019) Gauge and Ghosts. The British Journal for the Philosophy of Science. ISSN 1464-3537
Rivat, Sébastien (2019) Renormalization Scrutinized. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198
Rédei, Miklós (2019) On the tension between physics and mathematics. [Preprint]
Campbell, Douglas Ian and Yang, Yi (2019) Does the Solar System Compute the Laws of Motion? [Preprint]

Authors: Florio M. Ciaglia, Alberto Ibort, Giuseppe Marmo

A new picture of Quantum Mechanics based on the theory of groupoids is presented. This picture provides the mathematical background for Schwinger's algebra of selective measurements and helps to clarify its scope and eventual applications. In this first paper, the kinematical background is described using elementary notions from category theory, in particular the notion of 2-groupoids as well as their representations. Some basic results are presented, and the relation with the standard Dirac-Schr\"odinger and Born-Jordan-Heisenberg pictures is succinctly discussed.

Authors: Steven B. Giddings

The impressive images from the Event Horizon Telescope sharpen the conflict between our observations of gravitational phenomena and the principles of quantum mechanics. Two related scenarios for reconciling quantum mechanics with the existence of black hole-like objects, with "minimal" departure from general relativity and local quantum field theory, have been explored; one of these could produce signatures visible to EHT observations. A specific target is temporal variability of images, with a characteristic time scale determined by the classical black hole radius. The absence of evidence for such variability in the initial observational span of seven days is not expected to strongly constrain such variability. Theoretical and observational next steps towards investigating such scenarios are outlined.

Authors: Samir D. Mathur

The vacuum must contain virtual fluctuations of black hole microstates for each mass $M$. We observe that the expected suppression for $M\gg m_p$ is counteracted by the large number $\exp[S_{bek}]$ of such states. From string theory we learn that these microstates are extended objects that are resistant to compression. We argue that recognizing this `virtual extended compression-resistant' component of the gravitational vacuum is crucial for understanding gravitational physics. Remarkably, such virtual excitations have no significant effect on observable systems like stars, but they resolve two important problems: (a) gravitational collapse is halted outside the horizon radius, removing the information paradox; (b) spacetime acquires a `stiffness' against the curving effects of vacuum energy; this ameliorates the cosmological constant problem posed by the existence of a Planck-scale $\Lambda$.

Authors: Klaus Renziehausen, Ingo Barth

Bohm developed Bohmian mechanics (BM), in which the Schr\"odinger equation is transformed into two differential equations: a continuity equation and an equation of motion similar to the Newtonian equation of motion. This transformation can be carried out both for single-particle systems and for many-particle systems. Later, Kuzmenkov and Maksimov used basic quantum mechanics to derive many-particle quantum hydrodynamics (MPQHD), including one differential equation for the mass balance and two differential equations for the momentum balance, and we extended their analysis in previous work [K. Renziehausen, I. Barth, Prog. Theor. Exp. Phys. 2018, 013A05 (2018)] to the case where the particle ensemble consists of different particle sorts. The purpose of this paper is to show how the differential equations of MPQHD can be derived for such a particle ensemble with the differential equations of BM as a starting point. Moreover, our discussion clarifies that the differential equations of MPQHD are more suitable for an analysis of many-particle systems than those of BM, because the MPQHD equations depend only on a single position vector while the BM equations depend on the complete set of all particle coordinates.
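
For context, the single-particle form of the two equations the abstract refers to is standard (written here in our notation, not the authors'): substituting $\psi = \sqrt{\rho}\, e^{iS/\hbar}$ into the Schr\"odinger equation yields a continuity equation and a Newton-like (quantum Hamilton-Jacobi) equation,

$$\partial_t \rho + \nabla\!\cdot\!\Big(\rho\,\frac{\nabla S}{m}\Big) = 0, \qquad \partial_t S + \frac{(\nabla S)^2}{2m} + V + Q = 0, \qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}},$$

whose many-particle versions depend on the full set of configuration-space coordinates --- precisely the dependence that the MPQHD balance equations avoid.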

Authors: F. Laloë

We discuss a model where a spontaneous quantum collapse is induced by the gravitational interaction, treated classically. Its dynamics couples the standard wave function of a system with the Bohmian positions of its particles, which are considered as the only source of the gravitational attraction. The collapse is obtained by adding a small imaginary component to the gravitational coupling. It predicts extremely small perturbations of microscopic systems, but very fast collapse of QSMDS (quantum superpositions of macroscopically distinct quantum states) of a solid object, varying as the fifth power of its size. The model does not require adding any dimensional constant to those of standard physics.

Gao, Shan (2019) Are there many worlds? [Preprint]
Halvorson, Hans (2019) There is no invariant, four-dimensional stuff. [Preprint]
Esfeld, Michael (2019) From the measurement problem to the primitive ontology programme. [Preprint]

Author(s): Lev Vaidman

Counterfactual communication, i.e., a communication without particles traveling in the transmission channel, is a bizarre quantum effect. Starting from interaction-free measurements many protocols achieving various tasks from counterfactual cryptography to counterfactual transfer of quantum states w...


[Phys. Rev. A 99, 052127] Published Wed May 29, 2019

Author(s): Andrew Lucas

There may be a universal bound on the dissipative timescale in a many-body quantum system for the decay of a small operator into a combination of large operators.


[Phys. Rev. Lett. 122, 216601] Published Wed May 29, 2019

Nature, Published online: 29 May 2019; doi:10.1038/d41586-019-01592-x

It is extremely difficult to observe the radiation that is thought to be emitted by black holes. The properties of this radiation have now been analysed using an analogue black hole comprising a system of ultracold atoms.

Author(s): Siddhant Das, Markus Nöth, and Detlef Dürr

It is well known that orthodox quantum mechanics does not make unambiguous predictions for the statistics in arrival time (or time-of-flight) experiments. Bohmian mechanics (or de Broglie–Bohm theory) offers a distinct conceptual advantage in this regard, owing to the well-defined concepts of point ...


[Phys. Rev. A 99, 052124] Published Tue May 28, 2019

Author(s): Paul Boes, Jens Eisert, Rodrigo Gallego, Markus P. Müller, and Henrik Wilming

The von Neumann entropy is a key quantity in quantum information theory and, roughly speaking, quantifies the amount of quantum information contained in a state when many identical and independent (i.i.d.) copies of the state are available, in a regime that is often referred to as being asymptotic. ...


[Phys. Rev. Lett. 122, 210402] Published Tue May 28, 2019

Authors: Valentina Baccetti, Sebastian Murk, Daniel R. Terno

In the case of spherical symmetry, the assumptions of finite-time formation of a trapped region and regularity of its boundary --- the apparent horizon --- are sufficient to identify the limiting form of the metric and the energy-momentum tensor in its vicinity. By comparison with the known results for quasi-static evaporation of black holes, we complete the identification of their parameters. Consistency of the Einstein equations determines two possible types of higher-order terms in the energy-momentum tensor, and by using its local conservation we provide a method for their identification, explicitly determining the leading-order regular corrections. Contraction of a spherically symmetric thin dust shell is the simplest model of gravitational collapse. Nevertheless, the inclusion of collapse-triggered radiation in different extensions of this model inevitably leads to apparent contradictions. Using our results we resolve these contradictions and demonstrate how gravitational collapse may be completed in finite time according to a distant observer.

Authors: Mariano Bauer, Cesar Augusto Aguillon, Gustavo Garcia

The perspective is advanced that the time parameter in quantum mechanics corresponds to the time coordinate in a Minkowski flat spacetime that locally approximates the actual dynamical curved spacetime of General Relativity, rather than to an external Newtonian reference frame. There is then no incompatibility of the kind generally assumed in the extensively discussed "problem of time" in Quantum Gravity.

Nature Physics, Published online: 27 May 2019; doi:10.1038/s41567-019-0533-5

Strong quantum correlations in an ultracoherent optomechanical system are used to demonstrate a displacement sensitivity that is below the standard quantum limit.

Nature Physics, Published online: 27 May 2019; doi:10.1038/s41567-019-0537-1

According to the Unruh effect, for an accelerating observer the vacuum is filled with thermal radiation. Experiments now simulate this effect, recreating the statistics of Unruh radiation in the matter-wave field of a Bose–Einstein condensate.
Relational quantum mechanics suggests physics might be a science of perceptions, not observer-independent reality

-- Read more on ScientificAmerican.com



Author(s): Philippe Faist, Mario Berta, and Fernando Brandão

Thermodynamics imposes restrictions on what state transformations are possible. In the macroscopic limit of asymptotically many independent copies of a state—as for instance in the case of an ideal gas—the possible transformations become reversible and are fully characterized by the free energy. In ...


[Phys. Rev. Lett. 122, 200601] Published Fri May 24, 2019

Eva, Benjamin (2019) Principles of Indifference. [Preprint]
ROVELLI, Carlo (2018) Space and Time in Loop Quantum Gravity. In: UNSPECIFIED.
Dewar, Neil and Eisenthal, Joshua (2019) A Raum with a View: Hermann Weyl and the Problem of Space. [Preprint]

Imagine someone came along and told you that they had an oracle, and that this oracle could reveal the deep secrets of the universe. While you might be intrigued, you’d have a hard time trusting it. You’d want some way to verify that what the oracle told you was true.

This is the crux of one of the central problems in computer science. Some problems are too hard to solve in any reasonable amount of time. But their solutions are easy to check. Given that, computer scientists want to know: How complicated can a problem be while still having a solution that can be verified?

Turns out, the answer is: Almost unimaginably complicated.

In a paper released in April, two computer scientists dramatically increased the number of problems that fall into the hard-to-solve-but-easy-to-verify category. They describe a method that makes it possible to check answers to problems of almost incomprehensible complexity. “It seems insane,” said Thomas Vidick, a computer scientist at the California Institute of Technology who wasn’t involved in the new work.

The research applies to quantum computers — computers that perform calculations according to the nonintuitive rules of quantum mechanics. Quantum computers barely exist now but have the potential to revolutionize computing in the future.

The new work essentially gives us leverage over that powerful oracle. Even if the oracle promises to tell you answers to problems that are far beyond your own ability to solve, there’s still a way to ensure the oracle is telling the truth.

Until the End of the Universe

When a problem is hard to solve but easy to verify, finding a solution takes a long time, but verifying that a given solution is correct does not.

For example, imagine someone hands you a graph — a collection of dots (vertices) connected by lines (edges). The person asks you if it’s possible to color the vertices of the graph using only three colors, such that no connected vertices have the same color.

This “three-color” problem is hard to solve. In general, the time it takes to find a three-coloring of a graph (or determine that none exists) increases exponentially as the size of the graph increases. If, say, finding a solution for a graph with 20 vertices takes $3^{20}$ nanoseconds — a few seconds total — a graph with 60 vertices would take on the order of $3^{60}$ nanoseconds, or about 100 times the age of the universe.
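For the record, the arithmetic behind those two figures (taking $1\,\mathrm{yr}\approx3.15\times10^{7}\,\mathrm{s}$ and an age of the universe of roughly $1.4\times10^{10}$ yr):

$$
3^{20}\,\mathrm{ns}\approx3.5\times10^{9}\,\mathrm{ns}\approx3.5\,\mathrm{s},
\qquad
3^{60}\,\mathrm{ns}\approx4.2\times10^{28}\,\mathrm{ns}\approx1.3\times10^{12}\,\mathrm{yr}\approx100\times\left(1.4\times10^{10}\,\mathrm{yr}\right).
$$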

But let’s say someone claims to have three-colored a graph. It wouldn’t take long to check whether their claim is true. You’d just go through the vertices one by one, examining their connections. As the graph gets bigger, the time it takes to do this increases slowly, in what’s called polynomial time. As a result, a computer doesn’t take much longer to check a three-coloring of a graph with 60 vertices than it does to check a graph with 20 vertices.
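A minimal sketch of such a polynomial-time check (hypothetical code, not from the paper or this article): the verifier only needs one pass over the edge list.

```python
def is_valid_three_coloring(edges, coloring):
    """Check a claimed 3-coloring in time proportional to the number of edges."""
    if any(c not in {0, 1, 2} for c in coloring.values()):
        return False                     # only three colors are allowed
    # No edge may connect two vertices of the same color.
    return all(coloring[u] != coloring[v] for u, v in edges)

# A 4-cycle graph: 0-1-2-3-0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_valid_three_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
print(is_valid_three_coloring(edges, {0: 0, 1: 0, 2: 1, 3: 2}))  # False: edge (0, 1)
```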

“It’s easy, given a proper three-coloring, to check that it works,” said John Wright, a physicist at the Massachusetts Institute of Technology who wrote the new paper along with Anand Natarajan of Caltech.

In the 1970s computer scientists defined a class of problems that are easy to verify, even if some are hard to solve. They called the class “NP,” for nondeterministic polynomial time. Since then, NP has been the most intensively studied class of problems in computer science. In particular, computer scientists would like to know how this class changes as you give the verifier new ways to check the truth of a solution.

The Right Questions

Prior to Natarajan and Wright’s work, verification power had increased in two big leaps.

To understand the first leap, imagine that you’re colorblind. Someone places two blocks on the table in front of you and asks whether the blocks are the same or different colors. This is an impossible task for you. Moreover, you can’t verify someone else’s solution.

But you’re allowed to interrogate this person, whom we’ll call the prover. Let’s say the prover tells you that the two blocks are different colors. You designate one block as “Block A” and the other as “Block B.” Then you place the blocks behind your back and randomly switch which hand holds which block. Then you reveal the blocks and ask the prover to identify Block A.

If the blocks are different colors, this couldn’t be a simpler quiz. The prover will know that Block A is, say, the red block and will correctly identify it every single time.

But if the blocks are actually the same color — meaning the prover erred in saying that they were different colors — the prover can only guess which block is which. Because of this, it will only be possible for the prover to identify Block A 50 percent of the time. By repeatedly probing the prover about the solution, you will be able to verify whether it’s correct.
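A toy simulation of this quiz (purely illustrative; the names and numbers are made up) shows how repetition drives a bluffing prover's survival probability down to $(1/2)^k$ after $k$ rounds.

```python
import random

rng = random.Random(0)

def interrogate(prover_can_see_colors, rounds=20):
    """Simulate the colorblind-verifier quiz described above.

    Each round the verifier hides Block A in a randomly chosen hand. A prover who
    really sees two different colors always points to the correct hand; a prover
    whose claim of a difference was false can only guess, and each guess fails
    with probability 1/2.
    """
    for _ in range(rounds):
        hand_with_a = rng.choice(["left", "right"])
        answer = hand_with_a if prover_can_see_colors else rng.choice(["left", "right"])
        if answer != hand_with_a:
            return "claim rejected"
    return "claim accepted"   # a false claim survives with probability (1/2)**rounds

print(interrogate(True))    # honest prover: always accepted
print(interrogate(False))   # bluffing prover: rejected with overwhelming probability
```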

“The verifier can send the prover questions,” Wright said, “and maybe at the end of the conversation the verifier can become more convinced.”

In 1985 a trio of computer scientists proved that such interactive proofs can be used to verify solutions to problems that are more complicated than the problems in NP. Their work created a new class of problems called IP, for “interactive polynomial” time. The same method used to verify the coloring of two blocks can be used to verify solutions to much more complicated questions.

The second major advance took place in the same decade. It follows the logic of a police investigation. If you have two suspects you believe committed a crime, you’re not going to question them together. Instead, you’ll interrogate them in separate rooms and check each person’s answers against the other’s. By questioning them separately, you’ll be able to reveal more of the truth than if you had only one suspect to interrogate.

“It’s impossible for them to form some sort of distributed, consistent story because they simply don’t know what answers the other is giving,” Wright said.

In 1988 four computer scientists proved that if you ask two computers to separately solve the same problem — and you interrogate them separately about their answers — you can verify a class of problems that’s even larger than IP: a class called MIP, for multi-prover interactive proofs.

With a multi-prover interactive approach, for example, it’s possible to verify three-colorings for a sequence of graphs that increase in size much faster than the graphs in NP. In NP, graph sizes increase at a linear rate — the number of vertices might grow from 1 to 2 to 3 to 4 and so on — so that the size of a graph is never hugely disproportionate to the amount of time needed to verify its three-coloring. But in MIP, the number of vertices in a graph grows exponentially — from $2^1$ to $2^2$ to $2^3$ to $2^4$ and so on.

As a result, the graphs are too big even to fit in the verifying computer’s memory, so it can’t check three-colorings by running through the list of vertices. But it’s still possible to verify a three-coloring by asking the two provers separate but related questions.

In MIP, the verifier has enough memory to run a program that allows it to determine whether two vertices in the graph are connected by an edge. The verifier can then ask each prover to state the color of one of the two connected vertices — and it can cross-reference the provers’ answers to make sure the three-coloring works.
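A heavily simplified classical sketch of that cross-referencing step (hypothetical code, not the actual MIP protocol; it ignores the commitment and consistency machinery a sound multi-prover protocol needs, and `edge_oracle`, `prover_a`, and `prover_b` are assumed interfaces):

```python
import random

rng = random.Random(1)

def edge_test(edge_oracle, prover_a, prover_b, num_vertices, rounds=1000):
    """Toy cross-check of a claimed three-coloring via two non-communicating provers.

    The verifier never stores the (huge) graph; it only needs edge_oracle(u, v),
    a small program deciding whether two vertex labels are connected, plus one
    color query to each prover per round.
    """
    for _ in range(rounds):
        u, v = rng.randrange(num_vertices), rng.randrange(num_vertices)
        if not edge_oracle(u, v):
            continue
        if rng.random() < 0.5:
            # Consistency test: both provers must report the same color for one vertex.
            if prover_a(u) != prover_b(u):
                return False
        else:
            # Edge test: the two endpoints of an edge must receive different colors.
            if prover_a(u) == prover_b(v):
                return False
    return True
```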

The expansion of hard-to-solve-but-easy-to-verify problems from NP to IP to MIP involved classical computers. Quantum computers work very differently. For decades it’s been unclear how they change the picture — do they make it harder or easier to verify solutions?

The new work by Natarajan and Wright provides the answer.

Quantum Cheats

Quantum computers perform calculations by manipulating quantum bits, or “qubits.” These have the strange property that they can be entangled with one another. When two qubits — or even large systems of qubits — are entangled, it means that their physical properties play off each other in a certain way.

In their new work, Natarajan and Wright consider a scenario involving two separate quantum computers that share entangled qubits.

This kind of setup would seem to work against verification. The power of a multi-prover interactive proof comes precisely from the fact that you can question two provers separately and cross-check their answers. If the provers’ answers are consistent, then it’s likely they’re correct. But two provers sharing an entangled state would seem to have more power to consistently assert incorrect answers.

And indeed, when the scenario of two entangled quantum computers was first put forward in 2003, computer scientists assumed entanglement would reduce verification power. “The obvious reaction of everyone, including me, is that now you’re giving more power to the provers,” Vidick said. “They can use entanglement to correlate their answers.”

Despite that initial pessimism, Vidick spent several years trying to prove the opposite. In 2012, he and Tsuyoshi Ito proved that it’s still possible to verify all the problems in MIP with entangled quantum computers.

Natarajan and Wright have now proved that the situation is even better than that: A wider class of problems can be verified with entanglement than without it. It’s possible to turn the connections between entangled quantum computers to the verifier’s advantage.

To see how, remember the procedure in MIP for verifying three-colorings of graphs whose sizes grow exponentially. The verifier doesn’t have enough memory to store the whole graph, but it does have enough memory to identify two connected vertices, and to ask the provers the colors of those vertices.

With the class of problems Natarajan and Wright consider — called NEEXP for nondeterministic doubly exponential time — the graph sizes grow even faster than they do in MIP. Graphs in NEEXP grow at a “doubly exponential” rate. Instead of increasing at a rate of powers of two — $2^1$, $2^2$, $2^3$, $2^4$ and so on — the number of vertices in the graph increases at a rate of powers of powers of two — $2^{2^1}, 2^{2^2}, 2^{2^3}, 2^{2^4}$ and so on. As a result, the graphs quickly become so big that the verifier can’t even identify a single pair of connected vertices.

“To label a vertex would take $2^n$ bits, which is exponentially more bits than the verifier has in its working memory,” Natarajan said.

But Natarajan and Wright prove that it’s possible to verify a three-coloring of a doubly-exponential-size graph even without being able to identify which vertices to ask the provers about. This is because you can make the provers come up with the questions themselves.

The idea of asking computers to interrogate their own solutions sounds, to computer scientists, as advisable as asking suspects in a crime to interrogate themselves — surely a foolish proposition. Except Natarajan and Wright prove that it’s not. The reason is entanglement.

“Entangled states are a shared resource,” Wright said. “Our entire protocol is figuring out how to use this shared resource to generate connected questions.”

If the quantum computers are entangled, then their choices of vertices will be correlated, producing just the right set of questions to verify a three-coloring.

At the same time, the verifier doesn’t want the two quantum computers to be so intertwined that their answers to those questions are correlated (which would be the equivalent of two suspects in a crime coordinating their false alibis). Another strange quantum feature handles this concern. In quantum mechanics, the uncertainty principle prevents us from knowing a particle’s position and momentum simultaneously — if you measure one property, you destroy information about the other. The uncertainty principle strictly limits what you can know about any two “complementary” properties of a quantum system.
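The quantitative statement behind that limit is the Robertson form of the uncertainty relation (standard textbook result, quoted here only for reference):

$$
\Delta A\,\Delta B\;\ge\;\frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|,
$$

so measuring one of two non-commuting observables sharply necessarily randomizes the other.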

Natarajan and Wright take advantage of this in their work. To compute the color of a vertex, they have the two quantum computers make complementary measurements. Each computer computes the color of its own vertex, and in doing so, it destroys any information about the other’s vertex. In other words, entanglement allows the computers to generate correlated questions, but the uncertainty principle prevents them from colluding when answering them.

“You have to force the provers to forget, and that’s the main thing they do in their paper,” Vidick said. “They force the prover to erase information by making a measurement.”

Their work has almost existential implications. Before this new paper, there was a much lower limit on the amount of knowledge we could possess with complete confidence. If we were presented with an answer to a problem in NEEXP, we’d have no choice but to take it on faith. But Natarajan and Wright have burst past that limit, making it possible to verify answers to a far more expansive universe of computational problems.

And now that they have, it’s unclear where the limit of verification power lies.

“It could go much further,” said Lance Fortnow, a computer scientist at the Georgia Institute of Technology. “They leave open the possibility that you could take another step.”




Author(s): Marie Ioannou, Jonatan Bohr Brask, and Nicolas Brunner

Quantum theory allows for randomness generation in a device-independent setting, where no detailed description of the experimental device is required. Here we derive a general upper bound on the amount of randomness that can be certified in such a setting. Our bound applies to any black-box scenario...


[Phys. Rev. A 99, 052338] Published Thu May 23, 2019

Author(s): Askery Canabarro, Samuraí Brito, and Rafael Chaves

The ability to witness nonlocal correlations lies at the core of foundational aspects of quantum mechanics and its application in the processing of information. Commonly, this is achieved via the violation of Bell inequalities. Unfortunately, however, their systematic derivation quickly becomes unfe...


[Phys. Rev. Lett. 122, 200401] Published Wed May 22, 2019

Author(s): Dmitry A. Abanin, Ehud Altman, Immanuel Bloch, and Maksym Serbyn

The route of a physical system toward equilibrium and thermalization has been the subject of discussion and controversy since the time of Boltzmann. This Colloquium reviews the recent progress in understanding many-body localization, a phase of matter in which quantum mechanics and disorder conspire to prohibit thermalization altogether. Many new phenomena emerge in lieu of conventional statistical mechanics and may be observed in systems of ultracold atoms, superconducting qubits, and certain quantum materials.


[Rev. Mod. Phys. 91, 021001] Published Wed May 22, 2019

Oldofredi, Andrea (2019) Is Quantum Mechanics Self-Interpreting? [Preprint]
Woodward, James (2019) Causal Judgment: What Can Philosophy Learn from Experiment? What Can It Contribute to Experiment. [Preprint]
Broka, Chris (2019) The Quantum Mechanics of Two Interacting Realities. [Preprint]

Publication date: Available online 17 May 2019

Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics

Author(s): Adam Koberinski

Abstract

In this paper I will focus on the case of the discovery of parity nonconservation in weak interactions in the period spanning 1947–1957, and the lessons this episode provides for successful theory construction in HEP. I aim to (a) summarize the history into a coherent story for philosophers of science, and (b) use the history as a case study for the epistemological evolution of the understanding of weak interactions in HEP. I conclude with some philosophical lessons regarding theory construction in physics.

All most people hear about is quantum computing, but that's hardly the whole story

-- Read more on ScientificAmerican.com



Author(s): Daniel Harlow and Hirosi Ooguri

Insights from the AdS/CFT correspondence provide a glimpse of what global kinematical properties of viable quantum theories of gravity might be.


[Phys. Rev. Lett. 122, 191601] Published Fri May 17, 2019

Authors: Marco Piva

We point out the idea that, at small scales, gravity can be described by the standard degrees of freedom of general relativity, plus a scalar particle and a degree of freedom of a new type: the fakeon. This possibility has fundamental implications for understanding the gravitational force at the quantum level, as well as phenomenological consequences in the corresponding classical theory.

Authors: Robert L. Navin

This paper posits the existence of, and finds a candidate for, a variable change that allows quantum mechanics to be interpreted as quantum geometry. The Bohr model of the hydrogen atom is thought of in terms of an indeterministic electron position and a deterministic metric, and the motivation for this paper is to change variables so as to obtain a deterministic position and momentum for the electron and nucleus but an indeterministic (quantum) metric that reproduces the physics of the Bohr model. This mapping is achieved by allowing the metric in the Hamiltonian to be different from the metric in the space-time distance element, representing the two metrics with vierbeins, and assuming they are canonically conjugate variables. Effectively, the usual Schr\"odinger space-time variables are re-interpreted as four of the potentially sixteen parameters of the metric tensor vierbein in the distance element, while the metric tensor vierbein in the Hamiltonian is an operator expressible as first-order derivatives in these variables, or vice versa. I then argue that this reproduces observed quantum physics at the sub-atomic level by demonstrating that the energy spectrum of electron orbitals is exactly the same as in the usual relativistic Bohr model of the hydrogen atom in a certain limit. Next, by introducing a single dimensionless running coupling that appears in the same place as, but in addition to, Planck's constant in the commutator definition, I argue that this allows massive objects, but not massless ones, to couple to the physical space-time geometry, regardless of the value of the coupling. This claim is based on a fit to the Schwarzschild metric under a few simple assumptions, which yields an effective theory of how the quantum geometries at nearby space-time points couple to one another. This demonstrates that the coupling constant is related to Newton's gravitational constant.

Authors: Juerg Froehlich

To begin with, some of the conundrums concerning Quantum Mechanics and its interpretation(s) are recalled. Subsequently, a sketch of the "ETH-Approach to Quantum Mechanics" is presented. This approach yields a logically coherent quantum theory of "events" featured by physical systems and of direct or projective measurements of physical quantities, without the need to invoke "observers". It enables one to determine the stochastic time evolution of states of physical systems. We also briefly comment on the quantum theory of indirect or weak measurements, which is much easier to understand and more highly developed than the theory of direct (projective) measurements. A relativistic form of the ETH-Approach will be presented in a separate paper.

Authors: Arkady Bolotin

A common way of stating the no-cloning theorem -- one of the distinguishing characteristics of quantum theory -- is that one cannot make a copy of an arbitrary unknown quantum state. Even though this theorem is an important part of the ongoing discussion of the nature of a quantum state, the role of the theorem in the logical-algebraic approach to quantum theory has not yet been systematically studied. According to the standard point of view (which is in line with the logical tradition), quantum cloning amounts to two classical rules of inference, namely, monotonicity and idempotency of entailment. One can then conclude that the whole of quantum theory should be described through a logic wherein these rules do not hold, which is linear logic. However, in accordance with a supervaluational semantics (that allows one to retain all the theorems of classical logic while admitting `truth-value gaps'), quantum cloning necessitates the permanent loss of the truth values of experimental quantum propositions, which violates the unalterability of the past. The present paper demonstrates this.

Authors: Inge S. Helland

A conceptual variable is any variable defined by a person or by a group of persons. Such variables may be inaccessible, meaning that they cannot be measured with arbitrary accuracy on the physical system under consideration at any given time. An example may be the spin vector of a particle; another example may be the vector (position, momentum). In this paper, a space of inaccessible conceptual variables is defined, and group actions are defined on this space. Accessible functions are then defined on the same space. Assuming this structure, the basic Hilbert space structure of quantum theory is derived: Operators on a Hilbert space corresponding to the accessible variables are introduced; when these operators have a discrete spectrum, a natural model reduction implies a new model in which the values of the accessible variables are the eigenvalues of the operator. The principle behind this model reduction demands that a group action may also be defined on the accessible variables; this is possible if the corresponding functions are permissible, a term that is precisely defined. The following recent principle from statistics is assumed: every model reduction should be to an orbit or to a set of orbits of the group. From this derivation, a new interpretation of quantum theory is briefly discussed: I argue that a state vector may be interpreted as connected to a focused question posed to nature together with a definite answer to this question. Further discussion of these topics is provided in a recent book published by the author of this paper.

Authors: Inge S. Helland

The interpretation of quantum mechanics has been discussed since this theme first was brought up by Einstein and Bohr. This article describes a proposal for a new foundation of quantum theory, partly drawing upon ideas from statistical inference theory. The approach can be said to have an intuitive basis: The quantum states of a physical system are under certain conditions in one-to-one correspondence with the following: 1. Focus on a concrete question to nature and then 2. Give a definite answer to this question. This foundation implies an epistemic interpretation, depending upon the observer, but the objective world is restored when all observers agree on their observations on some variables. The article contains a survey of parts of the author's books on epistemic processes, which give more details about the theory. At the same time, the article extends some of the discussion in the books, and in places makes it more precise.

Authors: Dayou Yang, Andrey Grankin, Lukas M. Sieberer, Denis V. Vasilyev, Peter Zoller

An ideal quantum measurement collapses the wave function of a quantum system to an eigenstate of the measured observable, with the corresponding eigenvalue determining the measurement outcome. For a quantum non-demolition (QND) observable, i.e., one that commutes with the Hamiltonian generating the system's time evolution, repeated measurements yield the same result, corresponding to measurements with minimal disturbance. This concept applies universally to single quantum particles as well as to complex many-body systems. However, while QND measurements of systems with few degrees of freedom have been achieved in seminal quantum optics experiments, it remains an open challenge to devise a QND measurement of a complex many-body observable. Here, we describe how a QND measurement of the Hamiltonian of an interacting many-body system can be implemented in a trapped-ion analog quantum simulator. Through a single-shot measurement, the many-body system is prepared in a narrow energy band of (highly excited) energy eigenstates, and potentially even a single eigenstate. Our QND scheme, which can be carried over to other platforms of quantum simulation, provides a novel framework for investigating experimentally fundamental aspects of equilibrium and non-equilibrium statistical physics, including the eigenstate thermalization hypothesis (ETH) and quantum fluctuation relations.
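For reference, the QND property invoked above can be stated in one line (standard quantum mechanics, not specific to this paper): if the measured observable commutes with the Hamiltonian, it is a constant of the motion, so the post-measurement eigenstate persists and repeated measurements return the same eigenvalue,

$$
[\hat{O},\hat{H}]=0\;\Longrightarrow\;\hat{O}(t)=e^{i\hat{H}t/\hbar}\,\hat{O}\,e^{-i\hat{H}t/\hbar}=\hat{O}.
$$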

Publication date: Available online 11 May 2019

Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics

Author(s): Katsuaki Higashi

Abstract

According to a conventional view, there exists no common cause model of quantum correlations satisfying locality requirements. Indeed, Bell's inequality is derived from some locality requirements and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argued that in the derivation of the inequality, the existence of a common common-cause for multiple correlations is implicitly assumed and that the assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlation is a common cause for each correlation. However, Graßhoff et al. showed that when there are three pairs of perfectly correlated events and a common cause of each correlation exists, we cannot construct a common cause model that is consistent with quantum mechanical prediction and also meets several locality requirements. In this paper, first, as a consequence of the fact shown by Graßhoff et al., we will confirm that there exists no local common cause model when a two-particle system is in any maximally entangled state. After that, based on Hardy's famous argument, we will prove that there exists no local common cause model when a two-particle system is in any non-maximally entangled state. Therefore, it will be concluded that for any entangled state, there exists no local common cause model. It will be revealed that the non-existence of a common cause model satisfying locality is not limited to a particular state like the singlet state.
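For orientation (the paper itself relies on perfect correlations and a Hardy-type argument rather than on this inequality), the CHSH form of Bell's constraint on local common cause models reads

$$
\left|E(a,b)+E(a,b')+E(a',b)-E(a',b')\right|\le 2,
$$

whereas quantum mechanics predicts, and experiments confirm, values up to $2\sqrt{2}$ for maximally entangled states.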

Publication date: Available online 14 May 2019

Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics

Author(s): Jonathan F. Schonfeld

Abstract

I argue that the marquee characteristics of the quantum-mechanical double-slit experiment (point detection, random distribution, Born rule) can be explained using Schroedinger's equation alone, if one takes into account that, for any atom in a detector, there is a small but nonzero gap between its excitation energy and the excitation energies of all other relevant atoms in the detector (isolated-levels assumption). To illustrate the point I introduce a toy model of a detector. The form of the model follows common practice in quantum optics and cavity QED. Each detector atom can be resonantly excited by the incoming particle, and then emit a detection signature (e.g., bright flash of light) or dissipate its energy thermally. Different atoms have slightly different resonant energies per the isolated-levels assumption, and the projectile preferentially excites the atom with the closest energy match. The toy model permits one easily to estimate the probability that any atom is resonantly excited, and also that a detection signature is produced before being overtaken by thermal dissipation. The end-to-end detection probability is the product of these two probabilities, and is proportional to the absolute-square of the incoming wavefunction at the atom in question, i.e. the Born rule. I consider how closely a published neutron interference experiment conforms to the picture developed here; I show how this paper's analysis steers clear of creating a scenario with local hidden variables; I show how the analysis steers clear of the irreversibility implicit in the projection postulate; and I discuss possible experimental tests of this paper's ideas. Hopefully, this is a significant step toward realizing the program of solving the measurement problem within unitary quantum mechanics envisioned by Landsman, among others.

Ackermann, Matthias (2019) A Comparison of Two Presentations of Quantum Mechanics - Everett's Relative-State and Rovelli's Relational Quantum Mechanics. [Preprint]
Chall, Cristin and King, Martin and Mättig, Peter and Stöltzner, Michael (2019) From a Boson to the Standard Model Higgs: A Case Study in Confirmation and Model Dynamics. Synthese. ISSN 0039-7857
De Haro, Sebastian and Butterfield, Jeremy (2018) On symmetry and duality. [Preprint]
Schneider, Mike D. (2018) What’s the problem with the cosmological constant? [Preprint]
Schroeren, David (2019) Symmetry Fundamentalism. [Preprint]
Kiessling, Michael K.-H. (2019) The influence of gravity on the Boltzmann entropy of a closed universe. [Preprint]

Author(s): Eyuri Wakakuwa, Akihito Soeda, and Mio Murao

We prove a trade-off relation between the entanglement cost and classical communication round complexity of a protocol in implementing a class of two-qubit unitary gates by two distant parties, a key subroutine in distributed quantum information processing. The task is analyzed in an information the...


[Phys. Rev. Lett. 122, 190502] Published Thu May 16, 2019

Nature, Published online: 15 May 2019; doi:10.1038/s41586-019-1196-1

An array of superconducting qubits in an open one-dimensional waveguide is precisely controlled to create an artificial quantum cavity–atom system that reaches the strong-coupling regime without substantial decoherence.

As one of the most famous physicists of the 20th century, Richard Feynman was known for a lot. Early in his career, he contributed to the development of the first atomic bomb as a group leader of the Manhattan Project. Hans Bethe, the scientific leader of the project who won a Nobel Prize in Physics in 1967 (two years after Feynman did), has been quoted on what set his protégé apart: “There are two types of genius. Ordinary geniuses do great things, but they leave you room to believe that you could do the same if only you worked hard enough. Then there are magicians, and you can have no idea how they do it. Feynman was a magician.”

In his 1993 biography Genius, James Gleick called Feynman “brash,” “ebullient” and “the most brilliant, iconoclastic and influential physicist of modern times.” Feynman captured the popular imagination when he played the bongo drums and sang about orange juice. He was a fun-loving, charismatic practical joker who toured America on long road trips. His colleague Freeman Dyson described him as “half genius and half buffoon.” At times, his oxygen-sucking arrogance rubbed some the wrong way, and his performative sexism looks very different to modern eyes. Feynman will also be remembered for his teaching: The lectures he delivered to Caltech freshmen and sophomores in 1962 set the gold standard in physics instruction and, when later published as a three-volume set, sold millions of copies worldwide.

What most people outside of the physics community are likely to be least familiar with is the work that counts as Feynman’s crowning scientific achievement. With physicists in the late 1940s struggling to reformulate a relativistic quantum theory describing the interactions of electrically charged particles, Feynman conjured up some Nobel Prize-winning magic. He introduced a visual method to simplify the seemingly impossible calculations needed to describe basic particle interactions. As Gleick put it in Genius, “He took the half-made conceptions of waves and particles in the 1940s and shaped them into tools that ordinary physicists could use and understand.” Through the work of Feynman, Dyson, Julian Schwinger and Sin-Itiro Tomonaga, a new and improved theory of quantum electrodynamics was born.

Feynman’s lines and squiggles, which became known as Feynman diagrams, have since “revolutionized nearly every aspect of theoretical physics,” wrote the historian of science David Kaiser in 2005. “In the same way that computer-enabled computation might today be said to be enabling a genomic revolution, Feynman diagrams helped to transform the way physicists saw the world, and their place in it.”

To learn more about Feynman diagrams and how they’ve changed the way physicists work, watch our new In Theory video:

Decades later, as Natalie Wolchover reported in 2013, “it became apparent that Feynman’s apparatus was a Rube Goldberg machine.” Even the collision of two subatomic particles called gluons to produce four less energetic gluons, an event that happens billions of times a second during collisions at the Large Hadron Collider, she wrote, “involves 220 diagrams, which collectively contribute thousands of terms to the calculation of the scattering amplitude.” Now, a group of physicists and mathematicians is studying a geometric object called an “amplituhedron” that has the potential to further simplify calculations of particle interactions.

Meanwhile, other physicists hope that emerging connections between Feynman diagrams and number theory can help identify patterns in the values generated from more complicated diagrams. As Kevin Hartnett reported in 2016, understanding these patterns could make particle calculations much simpler, but like the amplituhedron approach, this is still a work in progress.

“Feynman diagrams remain a treasured asset in physics because they often provide good approximations to reality,” wrote the Nobel Prize-winning physicist Frank Wilczek three years ago. “They help us bring our powers of visual imagination to bear on worlds we can’t actually see.”

If you liked the fifth and final episode from season two of Quanta’s In Theory video series, you may also enjoy our previous videos on universality, quantum gravity, emergence and turbulence.




Author(s): F. Laloë

We discuss a model of spontaneous collapse of the quantum state that does not require adding any stochastic processes to the standard dynamics. The additional ingredient with respect to the wave function is a position in the configuration space which drives the collapse in a completely deterministic...


[Phys. Rev. A 99, 052111] Published Tue May 14, 2019

Dieks, Dennis (2019) Quantum Reality, Perspectivalism and Covariance. [Preprint]
Henriksson, Andreas (2019) On the ergodic theorem and information loss in statistical mechanics. [Preprint]
Martens, Niels C.M. (2019) Machian Comparativism about Mass. The British Journal for the Philosophy of Science. ISSN 1464-3537
Henriksson, Andreas (2019) On the Gibbs-Liouville theorem in classical mechanics. [Preprint]
Palacios, Patricia (2019) Phase Transitions: A Challenge for Intertheoretic Reduction? [Preprint]
Kastner, Ruth (2019) The "Delayed Choice Quantum Eraser" Neither Erases Nor Delays. [Preprint]

Author(s): Alexander R. H. Smith

We generalize a quantum communication protocol introduced by Bartlett et al. [New J. Phys. 11, 063013 (2009)], in which two parties communicating do not share a classical reference frame, to the case where changes of their reference frames form a one-dimensional noncompact Lie group. Alice sends to ...


[Phys. Rev. A 99, 052315] Published Fri May 10, 2019

Author(s): P.-P. Crépin, C. Christen, R. Guérout, V. V. Nesvizhevsky, A.Yu. Voronin, and S. Reynaud

We propose to use quantum interferences to improve the accuracy of the measurement of the free-fall acceleration $\bar{g}$ of antihydrogen in the gravitational behavior of antihydrogen at rest (GBAR) experiment. This method uses most antiatoms prepared in the experiment and it is simple in its principle, a...


[Phys. Rev. A 99, 042119] Published Mon Apr 22, 2019

Author(s): Florian Fröwis, Matteo Fadel, Philipp Treutlein, Nicolas Gisin, and Nicolas Brunner

The quantum Fisher information (QFI) of certain multipartite entangled quantum states is larger than what is reachable by separable states, providing a metrological advantage. Are these nonclassical correlations strong enough to potentially violate a Bell inequality? Here, we present evidence from t...


[Phys. Rev. A 99, 040101(R)] Published Wed Apr 17, 2019

Author(s): Krzysztof Ptaszyński and Massimiliano Esposito

We report two results complementing the second law of thermodynamics for Markovian open quantum systems coupled to multiple reservoirs with different temperatures and chemical potentials. First, we derive a nonequilibrium free energy inequality providing an upper bound for a maximum power output, wh...


[Phys. Rev. Lett. 122, 150603] Published Tue Apr 16, 2019

Author(s): Andrea Crespi, Francesco V. Pepe, Paolo Facchi, Fabio Sciarrino, Paolo Mataloni, Hiromichi Nakazato, Saverio Pascazio, and Roberto Osellame

The decay of an unstable system is usually described by an exponential law. Quantum mechanics predicts strong deviations of the survival probability from the exponential: Indeed, the decay is initially quadratic, while at very large times it follows a power law, with superimposed oscillations. The l...


[Phys. Rev. Lett. 122, 130401] Published Wed Apr 03, 2019

Volume 5, Issue 2, pages 80-97

Aurélien Drezet

Dr. Aurélien Drezet was born in Metz, France, in 1975. He received his Ph.D. degree in experimental physics from the University Joseph Fourier in Grenoble in 2002, where he worked on nano-optics and near-field scanning optical microscopy. After six years as a post-doc in Austria and France, he is currently head of the Nano-Optics and Forces team at Institut Néel in Grenoble (part of the French national scientific research center, CNRS, and associated with the University Joseph Fourier in Grenoble). His activities range from experimental and theoretical plasmonics to near-field scanning optical microscopy in both the classical and quantum regimes. He is also strongly involved in scientific work and discussions concerning quantum foundations in general and Bohmian mechanics in particular.

This is an analysis of the recently published article "Quantum theory cannot consistently describe the use of itself" by D. Frauchiger and R. Renner [1]. Here I decipher the paradox and analyze it from the point of view of the de Broglie-Bohm hidden-variable theory (i.e., Bohmian mechanics). I also analyze the problem from the perspective of the Copenhagen interpretation (i.e., the Bohrian interpretation) and show that both views are self-consistent and do not lead to any contradiction with a `single-world' description of quantum theory.

Full Text Download (655k)

Volume 5, Issue 2, pages 69-79

Mohammed Sanduk

Mohammed Sanduk is an Iraqi-born British physicist. He was educated at the University of Baghdad and the University of Manchester. Before his undergraduate study, he published a book in particle physics entitled "Mesons". Sanduk has worked in industry and academia; his last post in Iraq was head of the Laser and Opto-electronics Engineering department at Nahrain University in Baghdad. Owing to his interest in the philosophy of science, he was a member of the academic staff of Pontifical Babel College for Philosophy. Sanduk now works in the department of chemical and process engineering at the University of Surrey. He is interested in the transport of charged particles, magnetohydrodynamics, and renewable energy technology. In addition, Sanduk is interested in the foundations of quantum mechanics and the philosophy of science and technology.

The imaginary i in the formulation of quantum mechanics is accepted within the axioms of the theory, and thus there is no need to explain its origin. Since 2012, in a project outside quantum mechanics, there has been an attempt to complexify a real function and build an analogy for relativistic quantum mechanics. In that theoretical attempt, a partial observation technique is proposed as one of the reasons behind the appearance of the imaginary i. The present article throws light on that attempt at complexification and tries to explain the physical logic behind the complex phase factor. This physical process of partial observation acts as a process of physicalization of a virtual model. According to the positive results of the analogy, the imaginary i appearing in the quantum mechanics formulation may likewise be related to a case of partial observation.

Full Text Download (621k)

Volume 5, Issue 2, pages 51-68

Daniel Shanahan

Dan Shanahan is an independent researcher with a passion for foundational issues in quantum theory and relativity. Born in Perth, Western Australia, he studied physics at the Universities of NSW and Sydney.

Effects associated in quantum mechanics with a divisible probability wave are explained as physically real consequences of the equal but opposite reaction of the apparatus as a particle is measured. Taking as illustration a Mach-Zehnder interferometer operating by refraction, it is shown that this reaction must comprise a fluctuation in the reradiation field of complementary effect to the changes occurring in the photon as it is projected into one or other path. The evolution of this fluctuation through the experiment will explain the alternative states of the particle discerned in self interference, while the maintenance of equilibrium in the face of such fluctuations becomes the source of the Born probabilities. In this scheme, the probability wave is a mathematical artifact, epistemic rather than ontic, and akin in this respect to the simplifying constructions of geometrical optics.

Full Text Download (361k)

Volume 5, Issue 2, pages 16-50

Edward J. Gillis

Ed Gillis received his B. A. in Philosophy from the University of Michigan, and his Ph.D in Physics from the University of Colorado for research on the relationship between quantum nonlocality and relativity. He has authored several papers on quantum foundations, dealing, in particular, with connections between wave function collapse and elementary processes, how these connections might lead to an explanation of the no-superluminal-signaling principle in fundamental physical terms, and possible tests for collapse. He has also worked as an engineer on the development of sensor systems and control algorithms based on the information provided by those systems.

The assumption that wave function collapse is a real occurrence has very interesting consequences – both experimental and theoretical. Besides predicting observable deviations from linear evolution, it implies that these deviations must originate in nondeterministic effects at the elementary level in order to prevent superluminal signaling, as demonstrated by Gisin. This lack of determinism implies that information cannot be instantiated in a reproducible form in isolated microsystems (as illustrated by the No-cloning theorem). By stipulating that information is a reproducible and referential property of physical systems, one can formulate the no-signaling principle in strictly physical terms as a prohibition of the acquisition of information about spacelike-separated occurrences. This formulation provides a new perspective on the relationship between relativity and spacetime structure, and it imposes tight constraints on the way in which collapse effects are induced. These constraints indicate that wave function collapse results from (presumably small) nondeterministic deviations from linear evolution associated with nonlocally entangling interactions. This hypothesis can be formalized in a stochastic collapse equation and used to assess the feasibility of testing for collapse effects.
Full Text Download (336k)