The Bunkum Mystification of Quantum Mechanics by Non-Physicists

LilWabbit

Senior Member
In quantum mechanics, there is a common philosophical (epistemological) confusion whereby a mathematical model is mistaken for a physical observation or entity. Due to this common philosophical confusion even amongst professional physicists, QM is frequently mystified especially by non-physicists.

The ‘strangeness’ of the probabilistic concepts of QM merely owes to physicists misconstruing them as actual physical entities, whereas to mathematicians there is nothing even remotely strange about statistical probabilities or probability amplitudes. These ‘strange’ notions are further erroneously used by non-physicists to account for various ‘spiritual’ claims. By saying this I am not implying that spiritual claims are automatically untrue. I am merely highlighting the bunkum in misinterpreting quantum physics as metaphysics.

The quantum-mechanical statement that ‘measurement/observation causes a wave function collapse’ (reinforced by the Quantum Zeno effect), while mathematically unproblematic, is physically nonsensical due to the aforementioned fundamental philosophical confusion.

The Quantum Zeno effect, like the double-slit experiment, is just one of many examples of the entirely non-mystical fact that an essentially statistical probabilistic concept (i.e. ’a wave function’, which in QM is often mislabelled ’observation’) is by definition not directly observable. Hence, unsurprisingly, actual physical measurement/observation always shows something more definite and different from an abstract mathematical bundle of probabilities (i.e. wave function).

This, in turn, is a function of the current limited mathematical language used in QM – comprising simplistic notions such as ‘wave’, ‘particle’, ‘momentum’ and ‘position’ – to account for the sophistication of actual physical reality at the quantum scale. It is less a function of quantum-scale reality being necessarily dependent on, or affected by, our mere awareness of it (i.e. phenomenalism) or our observation of it (i.e. the observer effect). Due to these linguistic constraints affecting the setups of experiments, a measurement of the ‘wave’-like character of a quantum entity produces results that cannot be expected from its ‘particle’ nature, and vice versa (the double-slit experiment). Similarly, any accurate measurement of the ‘position’ of a particle loses precision in measuring its ‘momentum’, and vice versa (Heisenberg’s uncertainty principle).

As weird as these effects may seem to some, they tell more about the inadequacy of our current concepts in quantum physics than about the quantum reality they are employed to describe. Ironically, both materialist neuroscientists and non-materialist consciousness gurus unscientifically read into these strange paradoxes produced by current linguistic limitations in QM. They read into them to justify their mutually contradictory ideological projects: to demonstrate that consciousness has a physical base, or to show that it is non-physical in nature. Both projects are bunk.

Thoughts?
 
There is no unique "quantum physics", there are many interpretations. The choice of which interpretation of quantum mechanics you wish to adopt is *definitionally* metaphysics. And pretty much everything you've written will necessarily be interpreted differently in the different interpretations.
 
In quantum mechanics, there is a common philosophical (epistemological) confusion whereby a mathematical model is mistaken for a physical observation or entity.
Could you please cite some sources for this? (You claim it's "common", should be easy.) I have a hard time imagining that a physicist would confuse their theoretical model with an experimental observation.

And the existence of a physical "entity" such as gravity is metaphysically interesting anyway: the concept of gravity exists in our minds (if we've been educated to it), and it seems to describe physical reality, but does "gravity" as an entity have a physical existence? I doubt it.

I also admit I don't understand the quantum Zeno paradox; it seems like an application of the Gambler's Fallacy to me:
Article:
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the incorrect belief that if a particular event occurs more frequently than normal during the past it is less likely to happen in the future (or vice versa), when it has otherwise been established that the probability of such events does not depend on what has happened in the past. Such events, having the quality of historical independence, are referred to as statistically independent.
 
The problem with discussing quantum mechanics is that you can't make any definitive, non-interpretational statements beyond how the Schrödinger equation predicts a system will evolve. I've always thought the "blind men and the elephant" is a great analogy here. One blind man says, "Clearly there is confusion between the measurements of this thing and what the thing actually is," while another one says, "Clearly our measurements of the thing are the only honest statements we can make about the thing." They are both convinced they are right, but it's thoroughly unclear who is right, or maybe both of them are in their own way.
 
Coincidentally, at the moment I have the Professor Dave Explains YouTube video(s) "Quantum Mysticism is Stupid" playing in the background. Highly recommended.
 
to mathematicians there is nothing even remotely strange about statistical probabilities or probability amplitudes

If I remember rightly, quantum mechanics makes some use of negative probability, which is strange even to mathematicians.
 
There is no unique "quantum physics", there are many interpretations.

Very true and very relevant. In quantum physics there are relatively few observations compared to the verbiage of unproven theories and speculation to explain them. It's this verbiage that gets all too often cited as scientific fact by dilettantes, and further speculated upon as an explanation for phenomena way outside the field of QM. The latter is largely bunk.

The choice of which interpretation of quantum mechanics you wish to adopt is *definitionally* metaphysics.

The 'choice of interpretation'? Maybe, since it's a higher cognitive process. To be definitionally metaphysical is to be above and beyond physical laws and frameworks of existence. The entities studied in quantum physics are not. Philosophically, metaphysical entities, states and relations must in the least be extra-spatiotemporal (while they may determine spatiotemporality), i.e. not locatable within spacetime, in order to be metaphysical. Quantum physical entities are locatable. They're just difficult to measure (i.e. to locate accurately by measurement) due to the evident fact that quantum-scale phenomena are largely inaccessible to current technology in much the same way as cosmological singularities are well-nigh impossible to (currently) directly measure.

Bosons, fermions, photons, electrons, atoms, molecular structures, biological growth, proteins and sensory processes (in other words, any phenomenon from physics through chemistry to higher biological functions) possess certain crucial properties in common that have warranted their description as physical entities or processes. They're spatiotemporal (where time is also understood as a form of spatiality) and obey the laws of physics, including the laws of thermodynamics. Even quantum-scale phenomena are subject to entropy in the form of radioactive decay. Etc. Etc.
 
Could you please cite some sources for this? (You claim it's "common", should be easy.) I have a hard time imagining that a physicist would confuse their theoretical model with an experimental observation.

As explained in the OP, the common QM notion of wave function collapse in itself demonstrates the said confusion amongst professional physicists.

Article:
The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem.


To recap parts of the OP using an external source:

Article:
But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed.


And the existence of a physical "entity" such as gravity is metaphysically interesting anyway: the concept of gravity exists in our minds (if we've been educated to it), and it seems to describe physical reality, but does "gravity" as an entity have a physical existence? I doubt it.

Here's an experiment. Jump off a balcony from the tenth floor. If you feel great hesitancy to do so, then your actual 'doubt' about gravity as 'physical reality' is nowhere near as serious as your theoretical thought-process makes it seem as part of an academic discussion.
 
There is no unique "quantum physics", there are many interpretations. The choice of which interpretation of quantum mechanics you wish to adopt is *definitionally* metaphysics. And pretty much everything you've written will necessarily be interpreted differently in the different interpretations.
Well, there exists one unique quantum physics, and it and its consequences are understood and agreed upon by all physicists. It's a handful of axioms describing how to ask and answer questions about experimental observations, and these questions have sharply defined answers (even if these answers often take a probabilistic character). All "interpretations" of quantum mechanics agree on these basic facts, and where they disagree they would probably be better described as different theories if not for the fact that -- by design! -- most of these alternatives are alleged to not be experimentally distinguishable. That said, progress in the foundations of quantum mechanics has answered many questions that were previously attached, and by some still are, to a choice of "interpretation". For example, the idea of "counterfactual definiteness" -- do results of experiments 'exist' when you're not looking -- has been answered decisively in the negative by the Kochen-Specker theorem: any theory in which experimental results exist before you look must disagree with quantum mechanics. Bell type experiments establish that any theory in which collapse is a physical process that happens outside the context of a measurement must be in conflict with locality (and nonlocal physics quickly devolves into grandfather paradoxes).

One of these concepts which was once thought to be a controversial "matter of interpretation", and which to the minds of many still is, is the interpretation of the wavefunction as described in the original post. It's become very clear that quantum objects follow a sort of generalization of probability theory, one in which the relevant quantities don't add up to one the way probabilities do, but rather their squared absolute values do. Just like probabilities in the Bayesian sense, the quantum state can be thought of in terms of one's knowledge about the experimental preparation, and what possible experimental outcomes are consistent with that knowledge. The collapse is nothing more than the update of that knowledge, though of course with different rules. In classical probability theory your mere knowledge of some quantity doesn't affect the (classical) statistical distribution of future observations. In quantum mechanics it does.

All valid interpretations must agree on this, since it is pretty much demanded by the mathematics. Some interpretations might try to explain why it is that this is the right way to think about the quantum state in order to connect it to experiment, and there are varied attempts to find classical statistical ensembles that follow the quantum rules (with no clear successes thus far). But they all agree that, at a minimum, the quantum state and its collapse can be thought of as information and its update. This is only not the case with e.g. explicit collapse models such as GRW theory and Penrose's pseudoscientific Orch-OR. But those models are, and are intended to be, in disagreement with quantum mechanics. They are hoping that quantum mechanics might approximate reality in a similar sense that Newtonian mechanics approximates relativity at sufficiently low speeds, not to make sense of the mathematics as it stands without modification.
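To make the "information and its update" point concrete, here is a minimal Python/numpy sketch; the particular two-level state and measurement basis below are arbitrary choices for illustration, not anything canonical:

```python
import numpy as np

# A qubit state: two complex amplitudes whose squared magnitudes sum to 1.
psi = np.array([np.sqrt(0.3), np.sqrt(0.7) * np.exp(1j * 0.4)])
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)

# Born rule: probabilities of the two outcomes in the chosen measurement basis.
probs = np.abs(psi) ** 2
print("outcome probabilities:", probs)          # [0.3, 0.7]

# "Collapse" as an update: once outcome 1 is recorded, the state used for any
# *future* predictions is the renormalised projection onto that outcome.
outcome = 1
projector = np.zeros((2, 2))
projector[outcome, outcome] = 1.0
psi_updated = projector @ psi
psi_updated /= np.linalg.norm(psi_updated)
print("updated state:", psi_updated)            # all weight now on outcome 1
```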
 
This, in turn, is a function of the current limited mathematical language used in QM – comprising simplistic notions such as ‘wave’, ‘particle’, ‘momentum’ and ‘position’ – to account for the sophistication of actual physical reality at the quantum scale. It is less a function of quantum-scale reality being necessarily dependent on, or affected by, our mere awareness of it (i.e. phenomenalism) or our observation of it (i.e. the observer effect). Due to these linguistic constraints affecting the setups of experiments, a measurement of the ‘wave’-like character of a quantum entity produces results that cannot be expected from its ‘particle’ nature, and vice versa (the double-slit experiment). Similarly, any accurate measurement of the ‘position’ of a particle loses precision in measuring its ‘momentum’, and vice versa (Heisenberg’s uncertainty principle).
I would suggest that vague words such as "wave-particle duality" are seriously overrepresented in scientific outreach in comparison with actual physical practice. A physicist doesn't say the electron is behaving as a wave in this instance; he writes down the quantum state that encodes his knowledge of the experimental preparation, and from there he can compute the probability that a detector sensitive to interactions with electrons within a certain region will fire. We may call the mathematical object used to make this prediction the "position operator", but really we could call it anything. Mathematically, the concepts are extremely sharp: the various observations we can make are described by self-adjoint (sometimes called "Hermitian") operators acting on the Hilbert space that describes the system of interest.
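For the curious, a small sketch of what "observables are self-adjoint operators" cashes out as in practice; the specific 2x2 operator and state below are arbitrary illustrative choices:

```python
import numpy as np

# An arbitrary self-adjoint ("Hermitian") operator on a two-dimensional system.
A = np.array([[1.0, 0.5 - 0.5j],
              [0.5 + 0.5j, -1.0]])
assert np.allclose(A, A.conj().T)   # self-adjoint: equal to its conjugate transpose

# Its eigenvalues are the only values a measurement of this observable can
# return, and they are guaranteed to be real.
eigvals, eigvecs = np.linalg.eigh(A)
print("possible outcomes:", eigvals)

# For a given state, the Born rule gives the probability of each outcome as the
# squared magnitude of the overlap with the corresponding eigenvector.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
probs = np.abs(eigvecs.conj().T @ psi) ** 2
print("outcome probabilities:", probs, "sum:", probs.sum())   # sums to 1
```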

An aside is that some of the features of quantum mechanics that are considered by some to be psychologically uncomfortable can be and have been verified in a model-independent manner. For example, the Bell inequalities say nothing whatsoever about quantum mechanics: they merely outline the possible correlation between measurements of classically correlated spins, regardless of how one arranges the spins beforehand. If the result is found to violate the inequalities, all classical local theories have been refuted in one fell swoop. The same is true of the Kochen-Specker theorem: properly worded, it describes a more complicated set of classical expectations, which nature violates. All "counterfactually definite" theories are thus refuted in one fell swoop. That's why these results are so powerful. Any discomfort with these ideas, whether or not justified, is not just a matter of linguistics.
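A numerical illustration of the CHSH form of the Bell inequalities, using only textbook ingredients (the classical bound from pre-assigned +/-1 outcomes, and the standard singlet correlation E(a,b) = -cos(a-b) at the usual angles); a sketch, not a derivation:

```python
import numpy as np
from itertools import product

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# 1) Classical bound: for any pre-assigned outcomes A, A', B, B' in {-1, +1},
#    |A*B - A*B' + A'*B + A'*B'| never exceeds 2.
classical_max = max(abs(A*B - A*Bp + Ap*B + Ap*Bp)
                    for A, Ap, B, Bp in product([-1, 1], repeat=4))
print("classical (local, definite-value) bound:", classical_max)   # 2

# 2) Quantum prediction for the spin singlet: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print("quantum prediction:", S)   # about -2.828, i.e. |S| = 2*sqrt(2) > 2
```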

As weird as these effects may seem to some, they tell more about the inadequacy of our current concepts in quantum physics than about the quantum reality they are employed to describe. Ironically, both materialist neuroscientists and non-materialist consciousness gurus unscientifically read into these strange paradoxes produced by current linguistic limitations in QM. They read into them to justify their mutually contradictory ideological projects: to demonstrate that consciousness has a physical base, or to show that it is non-physical in nature. Both projects are bunk.
To me, understanding consciousness at a level beyond that where one writes a few equations, draws a couple pictures, and then declares without evidence "consciousness is just X", would be a prerequisite for getting anywhere. Explaining something you don't understand in terms of something else you don't understand is just stamp-collecting your confusion.
 
but does "gravity" as an entity have a physical existence? I doubt it.
Physics questions phrased in terms like "does X exist" are often bad questions, because they devolve into interminable arguments about "what does it mean for X to exist". What we can assert sharply is that experiment shows a tendency for massive objects to attract one another (it's not just mass that matters, but you get the idea). We call that tendency gravity. In that sense, gravity "exists". I don't know of any other sense in which the question can be made meaningful.
I also admit I don't understand the quantum Zeno paradox; it seems like an application of the Gambler's Fallacy to me:
Article:
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the incorrect belief that if a particular event occurs more frequently than normal during the past it is less likely to happen in the future (or vice versa), when it has otherwise been established that the probability of such events does not depend on what has happened in the past. Such events, having the quality of historical independence, are referred to as statistically independent.
That's not it. The quantum Zeno effect is about a specific and very important difference between classical physics and quantum physics: The fact that making an observation of a system necessarily changes the outcomes of future observations of that system. If you measure the position of, say, an electron, at the exact instant of measurement you know precisely where it is. Shortly after the measurement you know with extremely high probability that the electron is still in the vicinity, because the wavefunction -- the statistical tool that describes its future positions -- evolves under the Schrödinger equation in a manner analogous to diffusion (mathematically it is diffusion with an imaginary coefficient). If you make a second measurement after a comparably short amount of time, you again know precisely where it is, and can repeat the argument. The observations are not statistically independent: that's the key point.
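A small simulation sketch of the quantum Zeno effect under idealised assumptions (a two-level system driven by a constant Hamiltonian, instantaneous projective measurements):

```python
import numpy as np
from scipy.linalg import expm

# Two-level system driven by H = (Omega/2) * sigma_x: left alone, it flips
# completely from |0> to |1> over the time T = pi / Omega (a Rabi "pi-pulse").
Omega = 1.0
T = np.pi / Omega
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Omega * sigma_x

def survival_probability(n_measurements):
    """Probability of still being found in |0> after n ideal, instantaneous
    projective measurements spaced evenly across the interval T."""
    dt = T / n_measurements
    U = expm(-1j * H * dt)                  # unitary evolution between measurements
    p_stay_per_step = np.abs(U[0, 0]) ** 2  # chance of again finding |0> at each check
    return p_stay_per_step ** n_measurements

for n in [1, 2, 10, 100, 1000]:
    print(n, survival_probability(n))
# n = 1 gives ~0 (the flip completes); as n grows the survival probability
# approaches 1: frequent observation "freezes" the evolution.
```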
 
All "counterfactually definite" theories are thus refuted in one fell swoop.

But not the fact that actual measurements remain definite. Which brings us back to the epistemological confusion described in the OP. Quantum mechanics is not the only empirical scientific discipline with probabilistic theories and models. Evolutionary biology, economics, sociology and political science (whilst we may debate on how 'scientific' various practices of the latter three really are) involve probabilistic models as well. So do intelligence-gathering disciplines within various military or civil intelligence agencies. Probabilistic models are not unique to QM, nor the theoretical "discomfort" associated with such models by some. That's another myth.

That's why these results are so powerful. Any discomfort with these ideas, whether or not justified, is not just a matter of linguistics.

It seems deterministically-inclined theoreticians (black-and-white engineer-types) have always been uncomfortable with uncertainty or indeterminacy irrespective of scientific discipline. And hence they dislike probabilism, despite the success of the latter in predicting certain behaviours of certain phenomena. For instance, the stochastic character of quantum fluctuation is definitely not only a linguistic issue. But it does create a situation where mathematical probabilistic models to account for such fluctuations are easily mistaken for classical physical theories corresponding to reality in a more straightforward manner. Statistical models, probability densities, should not even be expected to correspond with reality like classical logical statements with binary truth values of 1 or 0.

To me, understanding consciousness at a level beyond that where one writes a few equations, draws a couple pictures, and then declares without evidence "consciousness is just X", would be a prerequisite for getting anywhere. Explaining something you don't understand in terms of something else you don't understand is just stamp-collecting your confusion.

Plus it perpetrates what's in philosophy billed the definist fallacy (G.E. Moore). In neuroscience and the philosophy of mind, identity theorists (such as Daniel Dennett) appear to stumble upon a version of a definist fallacy; ie. any one thing can be claimed identical with any other thing if no explanation is required. If the identity of the two phenomena -- say mental processes and brain processes, or brain processes and quantum-mechanical processes -- is left open to explanation (the open question argument), we can always justifiably ask whether such an identity between the two classes of phenomena is in fact the case. If such an open question argument is deemed intellectually invalid, then one could validly state, as a brute fact which none may question, that anything x is identical to any other thing y. Obviously not a very scientific approach. It's not even acceptable in analytical philosophy.

There are very few truisms in philosophical argumentation but if we were to label anything as a methodical truism in philosophical discussion, it would be the duty of those who propound an argument to offer some credible explanation. To claim two different concepts as synonymous cries for explanation.
 
As explained in the OP, the common QM notion of wave function collapse in itself demonstrates the said confusion amongst professional physicists.

Article:
The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem.


To recap parts of the OP using an external source:

Article:
But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed.
How are model and observation being confused here?
There's a statistical model of a system, an observation is made, the model is then updated. Where is the confusion between model and observation that you claim in the OP?
Here's an experiment. Jump off a balcony from the tenth floor. If you feel great hesitancy to do so, then your actual 'doubt' about gravity as 'physical reality' is nowhere near as serious as your theoretical thought-process makes it seem as part of an academic discussion.
This is why I hesitate to discuss with you at all.
I clearly stated that gravity describes reality. So even if gravity does not have existence, its invention as a concept useful in describing reality would still keep me from jumping off that balcony. Your answer does nothing to distinguish between gravity-as-a-model and gravity-as-a-physical-entity. I might conclude that you are confused about the difference.

Throw a stick to a dog, the dog knows the stick must come down, does the dog know about gravity?

markus says that gravity is simply the fall-down-i-ness of the stick; it exists as much as the shape of the stick exists, and gravity-the-formula simply models that in general.

@markus I don't understand how that is un-classical; if I was observing a molecule under Brownian motion, I could say the same thing.
 
There's a statistical model of a system, an observation is made, the model is then updated. Where is the confusion between model and observation that you claim in the OP?

The confusion is with those (not all) physicists who still naively regard a probabilistic model (a wave function) as a physical entity which then suddenly collapses when measurement is made.

I clearly stated that gravity describes reality. So even if gravity does not have existence, its invention as a concept useful in describing reality would still keep me from jumping off that balcony.

Markus seems to have already addressed your doubt about the existence of gravity. Anyway, if you reject the hypothesis that objects falling consistently in a certain way and direction is just an amazing coincidence (which most rational persons do), then you are effectively, by the logical rule of contraposition, positing the existence of an invisible physical force or some other similar invisible physical cause, commonly called gravity, which predicts and explains such consistent behaviour across the universe. The theory of gravity goes far beyond merely describing such behaviours. It predicts them, and is pretty broadly considered to be a scientific truth, an actual entity/force/law/you name it but not just a concept, that quite well approximates reality.

However, the wave function is something quite different from the theory of gravity. It's a statistical model describing likely behaviours of an object. It is not a scientific theory in the same sense as gravity. It's more of a useful concept in the same way you think gravity is.

markus says that gravity is simply the fall-down-i-ness of the stick; it exists as much as the shape of the stick exists, and gravity-the-formula simply models that in general.

Not just that. He quite correctly wrote that experiments in physics indicate that massive objects attract one another. This is not a mere observational description of the 'fall-down-i-ness' of objects. It represents an inductive leap beyond mere physical observation. It's a scientific theory positing an invisible force of sorts which predicts these observations. Gravity is not a 'law' of physics, i.e. a mere universal generalization of consistent behaviours.

And yet, even the laws of physics are not simply descriptions of repeated observations of consistent behaviours. They are essentially universal generalizations (UG, an inference rule in predicate logic) from thousands of observations, amassed throughout the history of physics, that have repeatedly and consistently displayed certain conditionalities ('if x, then y' properties) under widely differing contexts. Over the course of time, owing to their seeming inviolability, they have become validated as 'laws'. They are quite real 'entities'. Not just concepts offering us a purely subjective description.

Take the following formulation of the Law of Conservation of Energy as an example:

In a closed system the total energy of the system is conserved.

This law is a logical inference from thousands upon thousands of observations whereby, invariably, 'the more closed the system, the more it conserves energy'. It governs objects/systems at the quantum scale as well as the cosmological scale. If we assume, as seems reasonable, that the universe has a finite number of systems (albeit an enormous one), then every new observation of energy conservation being conditional on a system's level of openness further confirms the Law of Conservation of Energy. Every single positive instance pushes the probability of the law applying to future instances closer to 1 (100 %). This conclusion can be made even before applying Bayesian models that add further credence to the law.
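One standard toy model of this kind of inductive confirmation is Laplace's rule of succession. It is only an illustration (a uniform prior over the "success rate" and independent trials are assumed), not a claim about how physicists actually validate laws:

```python
# Laplace's rule of succession: with a uniform prior on the underlying rate,
# after k confirmations and no exceptions, the posterior probability that the
# next instance also conforms is (k + 1) / (k + 2). Each positive instance
# pushes the estimate toward 1 without ever reaching it.
for k in [0, 1, 10, 100, 10_000, 1_000_000]:
    print(f"{k:>9} confirmations -> next-instance probability {(k + 1) / (k + 2):.8f}")
```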

Throw a stick to a dog, the dog knows the stick must come down, does the dog know about gravity?

In some ways, yes. Even if it doesn't understand the theoretical concept.
 
If I remember rightly, quantum mechanics makes some use of negative probability, which is strange even to mathematicians.

If you're thinking of what I think you're thinking of, then those values are not just negative, but complex, and are called "amplitudes". The probabilities that are relevant are the squares of the magnitudes of those complex numbers. More bone dry stuff can be found here: https://en.wikipedia.org/wiki/Probability_amplitude
 
But not the fact that actual measurements remain definite. Which brings us back to the epistemological confusion described in the OP. Quantum mechanics is not the only empirical scientific discipline with probabilistic theories and models. Evolutionary biology, economics, sociology and political science (whilst we may debate on how 'scientific' various practices of the latter three really are) involve probabilistic models as well. So do intelligence-gathering disciplines within various military or civil intelligence agencies. Probabilistic models are not unique to QM, nor the theoretical "discomfort" associated with such models by some. That's another myth.

It seems deterministically-inclined theoreticians (black-and-white engineer-types) have always been uncomfortable with uncertainty or indeterminacy irrespective of scientific discipline. And hence they dislike probabilism, despite the success of the latter in predicting certain behaviours of certain phenomena. For instance, the stochastic character of quantum fluctuation is definitely not only a linguistic issue. But it does create a situation where mathematical probabilistic models to account for such fluctuations are easily mistaken for classical physical theories corresponding to reality in a more straightforward manner. Statistical models, probability densities, should not even be expected to correspond with reality like classical logical statements with binary truth values of 1 or 0.
I do see general discomfort with the notion of probability sometimes, no doubt about that. But there is something qualitatively novel about quantum mechanics that some, who otherwise have no trouble reasoning probabilistically, have trouble accepting. Take a shuffled deck of cards: I may not know what card I will draw, but I know that it is some card. There's a clear and unambiguous fact about what card it is; I'm merely ignorant of it. The same goes for other less trivial applications of classical probability. That is not so in quantum mechanics: the question of where an electron "is" isn't even meaningful outside the context of a measurement. The electron doesn't have a location unless you look for it. That rejection of a perfect classical description of reality is what so many have trouble accepting.
Plus it perpetrates what's in philosophy billed the definist fallacy (G.E. Moore). In neuroscience and the philosophy of mind, identity theorists (such as Daniel Dennett) appear to stumble upon a version of a definist fallacy; ie. any one thing can be claimed identical with any other thing if no explanation is required. If the identity of the two phenomena -- say mental processes and brain processes, or brain processes and quantum-mechanical processes -- is left open to explanation (the open question argument), we can always justifiably ask whether such an identity between the two classes of phenomena is in fact the case. If such an open question argument is deemed intellectually invalid, then one could validly state, as a brute fact which none may question, that anything x is identical to any other thing y. Obviously not a very scientific approach. It's not even acceptable in analytical philosophy.

There are very few truisms in philosophical argumentation but if we were to label anything as a methodical truism in philosophical discussion, it would be the duty of those who propound an argument to offer some credible explanation. To claim two different concepts as synonymous cries for explanation.
Interesting, I see that sort of fallacy all the time. I didn't know it had a name.
 
@markus I don't understand how that is un-classical; if I was observing a molecule under Brownian motion, I could say the same thing.
A dust particle under Brownian motion has a well-defined position at all times, and you can stare at it forever and it won't make any difference. You'll plot a jagged looking trajectory and that's it. It's true that the statistical ensemble of dust particles will spread according to a (classical) diffusion equation, but the observation of the particle at a given instant has no effect on measurements of position at future instants (though of course it can change your predictions of those measurements). If you could perfectly anticipate all the collisions the particle will experience you could predict its motion for all time, without relying on a probability density function that spreads through diffusion. The update rule for quantum mechanics is different: if you measure the electron position at instant t, the measurement of position at instant t+1 will be different than if you hadn't performed that measurement. It's not just your prediction that changes, the actual result of the measurement changes too. That's why staring at a dust particle doesn't lock it in place, but staring at an electron does.
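A toy sketch of the contrast in update rules, under simple assumptions (an ideal projective measurement inserted halfway through a qubit rotation; the rotation angle is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

# Drive a qubit from |0> toward |1> in two half-steps, and compare the final
# statistics with and without an ideal projective measurement inserted between
# the half-steps. (For a classical random walk, an analogous intermediate
# "recording" leaves the final distribution untouched, by the law of total
# probability; here it does not.)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
theta = np.pi                                     # total rotation angle (arbitrary)
U_half = expm(-1j * (theta / 2) * sigma_x / 2)    # half of the rotation
psi0 = np.array([1, 0], dtype=complex)

# No intermediate measurement: amplitudes for the intermediate paths interfere.
p1_no_meas = np.abs((U_half @ U_half @ psi0)[1]) ** 2

# With an intermediate measurement: probabilities, not amplitudes, are summed.
psi_mid = U_half @ psi0
p_mid = np.abs(psi_mid) ** 2                      # outcome probabilities halfway
basis = np.eye(2, dtype=complex)
p1_with_meas = sum(p_mid[k] * np.abs((U_half @ basis[:, k])[1]) ** 2 for k in range(2))

print("P(end in |1>) without mid-measurement:", p1_no_meas)    # 1.0 for theta = pi
print("P(end in |1>) with    mid-measurement:", p1_with_meas)  # 0.5 for theta = pi
```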
 
If you're thinking of what I think you're thinking of, then those values are not just negative, but complex, and are called "amplitudes". The probabilities that are relevant are the squares of the magnitudes of those complex numbers. More bone dry stuff can be found here: https://en.wikipedia.org/wiki/Probability_amplitude
That's of course completely correct but there exist formulations of quantum mechanics that deal directly with probabilities, which may be negative. One important example is the Wigner formulation. Feynman also floated similar ideas. The negative numbers allow the probabilities to destructively interfere just like amplitudes do, and that does the right thing and you get the same answer as the usual formalism.
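For concreteness, a small sketch of that negativity using the standard closed-form Wigner function of harmonic-oscillator Fock states (hbar = m = omega = 1 conventions assumed):

```python
import numpy as np
from scipy.special import eval_laguerre

# Wigner quasi-probability of oscillator Fock states (hbar = m = omega = 1):
#   W_n(x, p) = ((-1)^n / pi) * exp(-(x^2 + p^2)) * L_n(2 * (x^2 + p^2))
def wigner_fock(n, x, p):
    r2 = x**2 + p**2
    return ((-1) ** n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2 * r2)

x = p = np.linspace(-4, 4, 401)
X, P = np.meshgrid(x, p)

for n in [0, 1]:
    W = wigner_fock(n, X, P)
    dA = (x[1] - x[0]) ** 2
    print(f"n={n}: integrates to {W.sum() * dA:.3f}, minimum value {W.min():.3f}")
# n = 0 is non-negative everywhere; n = 1 integrates to 1 like a probability
# density but dips to about -1/pi (roughly -0.318) at the origin, the advertised
# "negative probability".
```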
 
The confusion is with those (not all) physicists who still naively regard a probabilistic model (a wave function) as a physical entity which then suddenly collapses when measurement is made.
I have asked you to cite a physicist who put that in writing.
The wikipedia quotes you provided are not confused about the distinction between model and reality at all.
Markus seems to have already addressed your doubt about the existence of gravity. Anyway, if you reject the hypothesis that objects falling consistently in a certain way and direction is just an amazing coincidence (which most rational persons do), then you are effectively, by the logical rule of contraposition, positing the existence of an invisible physical force or some other similar invisible physical cause, commonly called gravity, which predicts and explains such consistent behaviour across the universe. The theory of gravity goes far beyond merely describing such behaviours. It predicts them, and is pretty broadly considered to be a scientific truth, an actual entity/force/law/you name it but not just a concept, that quite well approximates reality.
However, the wave function is something quite different from the theory of gravity. It's a statistical model describing likely behaviours of an object. It is not a scientific theory in the same sense as gravity. It's more of a useful concept in the same way you think gravity is.
The wave function "predicts and explains" the behaviour of the particle. I don't understand the distinction between that description of reality and the way that the theory of gravity describes it.

You seem to think that quantum mechanics (and specifically wave functions) are not a generalized inference from thousands of observations?
And yet, even the laws of physics are not simply descriptions of repeated observations of consistent behaviours. They are essentially universal generalizations (UG, an inference rule in predicate logic) from thousands of observations, amassed throughout the history of physics, that have repeatedly and consistently displayed certain conditionalities ('if x, then y' properties) under widely differing contexts. Over the course of time, owing to their seeming inviolability, they have become validated as 'laws'. They are quite real 'entities'. Not just concepts offering us a purely subjective description.
I understand this paragraph to say that a description (a model) becomes a "real entity" over the course of time. How much time and how many observations does it take? Because it sounds as if, with this reasoning, Newtonian gravity (a "law" in your parlance) would have become a physical entity, and then suddenly stopped being a physical entity when Einstein came along?

You yourself seem to be the one person who is confused about the distinction between model and reality.
 
Take a shuffled deck of cards: I may not know what card I will draw, but I know that it is some card. There's a clear and unambiguous fact about what card it is; I'm merely ignorant of it. The same goes for other less trivial applications of classical probability. That is not so in quantum mechanics: the question of where an electron "is" isn't even meaningful outside the context of a measurement. The electron doesn't have a location unless you look for it.

That's assuming that the wave function directly describes a physical entity called 'electron' which suddenly collapses, upon measurement, into a definite locatable particle also called 'electron'. However, there is no scientific necessity to assume such thing, or in fact anything beyond the wave function being merely a statistical model describing the probabilities for the possible future behaviours of the electron based on earlier measurements. A model that is merely being updated by new measurements.

Philosophically, there's no mystery involved in a statistical mathematical concept having no physicality, including no locality. It's, by definition, a bundle of alternative behaviours, some of them even mutually contradictory. What we rather seem to be witnessing in the statement 'measurement results in the collapse of the wave function' is an unwarranted claim of identity between an abstract statistical statement (a wave function) and a definite observational statement (the measured object). Two very different statements are sloppily employed for the same referent 'electron', despite the former being concerned with possible behaviours of the electron and the latter with its actual observed behaviour. That is to say, these statements evidently have two very different referents. And yet, we are expected to accept the identity of their referent as a brute fact, perpetrating the very definist fallacy we just discussed.

Stating this in no wise implies that the wave function is purely conceptual without telling us anything about the actual quantum world. It tells us, at the very least, that there's observable indeterminism in the way quantum entities behave while these stochastic behaviours also fall within a finite set of predictable categories (probability amplitudes). A sort of macro-determinism in fact. In other words, probabilistic mathematical models describing quantum-scale behaviours are a function of the actual stochasticity intrinsic to quantum-scale behaviours, just like probabilistic mathematical models in evolutionary biology are employed to account for natural selection acting within populations at random variance. They attempt to describe the mechanism of natural selection which is macro-deterministically oriented towards more complex organisms while operating stochastically in and of itself.
 
A dust particle under Brownian motion has a well-defined position at all times, and you can stare at it forever and it won't make any difference. You'll plot a jagged looking trajectory and that's it. It's true that the statistical ensemble of dust particles will spread according to a (classical) diffusion equation, but the observation of the particle at a given instant has no effect on measurements of position at future instants (though of course it can change your predictions of those measurements). If you could perfectly anticipate all the collisions the particle will experience you could predict its motion for all time, without relying on a probability density function that spreads through diffusion.
We can't "perfectly anticipate" a nonlinear dynamic ("chaotic") system because we can never measure its initial state with sufficient accuracy. It's practically and theoretically impossible.

Therefore, the accuracy of any prediction of Brownian motion depends on the accuracy of our measurement of its initial state, the time elapsed since that measurement, and the speed of the motion (aka the temperature and pressure of the system).

Consider a longer time elapsed (involving very many collisions), and your expected location of the particle no longer depends on where you found it last (the initial measurement); consider a shorter time elapsed (involving a few expected collisions), and it does, even if you can't pin it down exactly: it's probably still in the vicinity, though it could (improbably) have travelled straight out without hitting anything. Shorten the time elapsed even further, and you can trace it fairly accurately with repeated measurements.
The update rule for quantum mechanics is different: if you measure the electron position at instant t, the measurement of position at instant t+1 will be different than if you hadn't performed that measurement. It's not just your prediction that changes, the actual result of the measurement changes too. That's why staring at a dust particle doesn't lock it in place, but staring at an electron does.
I don't understand your claim that the result of the measurement changes.
Is there further reading you can point me to?

I understand that in the double-slit experiment, the interference pattern goes away once you install photon detectors on the slits; but I've always thought that this happens regardless of whether a conscious being observes that output or not, simply because the detector has to change the phase of the photon to be able to detect it at all.

So "staring at the electron" also sets up an interaction that can influence its trajectory, because in order to get information, you need energy as a carrier, and therefore you can't have that information without influencing the system; and therefore the predicted outcome for "no observation" is just what the electron would do, wheras "with observation" it is also what the electron would do, plus what the observation interaction does to it.

I understand your claim to say that there is a difference beyond that, is that correct? and if so, then I don't understand what it is.
 
I have asked you to cite a physicist who put that in writing.

The confusion of mathematical models with actually measured physical entities is so common in QM, and so inherent in the notion of 'wavefunction collapse', that it's better to cite a physicist, such as Matthew Saul Leifer, describing whole schools of QM interpretation (epistemic interpretations) dedicated to eliminating such a confusion:

Article:
2.4 The collapse of the wavefunction

The measurement problem is not so much resolved by ψ-epistemic interpretations as it is dissolved by them. It is revealed as a pseudo-problem that we were wrong to have placed so much emphasis on in the first place. This is because the measurement problem is only well-posed if we have already established that the quantum state is ontic, i.e. that it is a direct representation of reality. Only then does a superposition of dead and alive cats necessarily represent a distinct physical state of affairs from a definitely alive or definitely dead cat. On the other hand, if the quantum state only represents what we know about reality then the cat may perfectly well be definitely dead or alive before we look, and the fact that we describe it by a superposition may simply reflect the fact that we do not know which possibility has occurred yet.


The second wikipedia quote I provided in my earlier response to you is a similar clarification offered in the article for resolving the self-same confusion. The confusion arises from a well-known dilemma in quantum mechanics described in the first quote. We are also discussing it in more detail with Markus on this thread.

The wave function "predicts and explains" the behaviour of the particle. I don't understand the distinction between that description of reality and the way that the theory of gravity describes it.

There's a subtle difference that becomes clear when you look at it carefully. Firstly, the wave function does not offer a definite prediction but a bundle of probabilities. The theory of gravity offers very specific and definite predictions from the theory, confirmed by specific and definite observations. The wave function does not describe any invisible variable or force causing quantum-scale behaviours like the theory of gravity explains the causes of gravitational observations (massive objects attract one another). In other words, the latter goes beyond mere description of observed behaviours and attempts to explain their underlying causes. Which is what scientific theories essentially do.

You seem to think that quantum mechanics (and specifically wave functions) are not a generalized inference from thousands of observations?

No. Rather, that's precisely what quantum mechanics appears to be. Generalized mathematical inferences from thousands of observations, which are often mistaken for a direct representation of reality.

It seems to me you actually agree on this point. Perhaps you are just not seeing how the theory of gravity is any different from these QM models. It's a separate but interesting discussion. Maybe you would like to start a thread.
 
What a fascinating thread!

For what it's worth, this is what I think:
  • The big difference between classical and quantum physics is in how they treat probabilities: real numbers in classical, complex numbers in quantum (where you need to take the modulus of a+ib to get a 'physical' real probability). This is the root source of all the weirdness (*).
  • Or, to say it in a different way, it's the 'i' in Schrödinger's equation which creates all the trouble (**).
  • The wavefunction has no physical reality, it's not possible to build a device which directly measures a wavefunction. This is not strange: energy is another thing which has no physical reality (you cannot build a device which directly measures energy, you can only calculate it from physical observables).
  • There is no 'measurement problem', there is just a problem of interactions (or degrees of freedom): a free, non-interacting electron can be anywhere, but the more interactions the more the electron gets pinned, so to speak, and to make a 'measurement' one needs to interact. Needless to say, there is no need for a (conscious) observer, an electron is measured as well when it smashes into an atom as it is when measured by a physicist.
  • I subscribe to Feynman: 'shut up and calculate'. Once one gets the correct results, meaning the calculations are confirmed by experiments, it does not matter and it does not even make sense to speculate about an underlying metaphysics (of course if the calculations show discrepancies then the theory needs to be revised). But we have a problem: the calculations are so difficult that this is easier said than done. Nothing strange here either, though: the Navier-Stokes equation (motion of fluids) is completely classical (and I remember it's quite easy to write it down starting from basic principles, not that I could do that today xD), but nonetheless it's one of the deepest unsolved mathematical mysteries. I guess that, if we could write down the Hamiltonian (a mathematical operator which describes the evolution of a quantum system) of a quantum state interacting with something macroscopic, and then solve the corresponding Schrödinger equation, we would find that the quantum system 'collapses' to a classical one (if we don't find that, there would be something wrong in quantum mechanics!). A toy numerical sketch of this last point follows after the footnotes below.

(*) I once stumbled upon a website which treated quantum mechanics as a probability theory, a very interesting read. Unfortunately I lost the link :(.

(**) Silly example, but I hope it gets the idea across. This is the plot of y=x^2 (x is a real number, from Google):
Pretty nice and straightforward 2D plot.

While this is the plot of y=z^2 (z is a complex number, from WolframAlpha):
Ugh, now it's in 4 dimensions (two different 3D plots) and it's much more complicated. Complex numbers are a mess by themselves.
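Regarding the last bullet point above (interaction with something macroscopic making the superposition 'collapse' to a classical alternative), here is a toy decoherence sketch. It leans on the standard textbook result for this kind of model, namely that each environmental "record" multiplies the qubit's interference terms by a fixed overlap factor; the numbers are arbitrary illustrative choices:

```python
import numpy as np

# Toy decoherence model: a qubit in superposition interacts with N environment
# qubits, each of which keeps a partial "record" of the qubit's state. Tracing
# the environment out, the interference terms (off-diagonal elements of the
# qubit's density matrix) shrink as c**N, while the outcome probabilities
# (diagonal) are untouched, so the state looks ever more like a classical mixture.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # initial superposition
c = np.cos(0.3)   # overlap left in each environment qubit by one interaction

for N in [0, 1, 5, 20, 100]:
    rho = np.array([[abs(alpha) ** 2,              alpha * np.conj(beta) * c**N],
                    [np.conj(alpha) * beta * c**N, abs(beta) ** 2            ]])
    print(f"N={N:>3}  populations {np.real(np.diag(rho)).round(3)}  "
          f"coherence |rho01| = {abs(rho[0, 1]):.6f}")
# The coherence decays exponentially with the number of environmental records,
# which is one interpretation-neutral way to see why macroscopic interactions
# make superpositions behave like ordinary classical alternatives.
```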
 
Side note:
What we can assert sharply is that experiment shows a tendency for massive objects to attract one another (it's not just mass that matters, but you get the idea). We call that tendency gravity.
markus says that gravity is simply the fall-down-i-ness of the stick; it exists as much as the shape of the stick exists, and gravity-the-formula simply models that in general.
Not just that. He quite correctly wrote that experiments in physics indicate that massive objects attract one another. This is not a mere observational description of the 'fall-down-i-ness' of objects. It represents an inductive leap beyond mere physical observation. It's a scientific theory positing an invisible force of sorts which predicts these observations. Gravity is not a 'law' of physics, i.e. a mere universal generalization of consistent behaviours.
Both markus and I have avoided calling gravity a force. You do, and you also claim it's not a mere law.

Newton himself wrote that it's only a law (via https://plato.stanford.edu/entries/newton-philosophy/ ), to him, gravity is the description of the phenomenon, but not its cause, which he doesn't know:
Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial, I have left open to the consideration of my readers. (Newton 2004: 102–3)
And that actually tallies with relativity theory, which models gravity as a property of spacetime, and not a force.
 
The confusion of mathematical models with actually measured physical entities is so common in QM, and so inherent in the notion of 'wavefunction collapse', that it's better to cite a physicist, such as Matthew Saul Leifer, describing whole schools of QM interpretation (epistemic interpretations) dedicated to eliminating such a confusion:
Ok, so you just flat out refuse to support the claim, even though you've now labeled it "common" again. Gotcha. (I'm not surprised, btw.)
There's a subtle difference that becomes clear when you look at it carefully. Firstly, the wave function does not offer a definite prediction but a bundle of probabilities. The theory of gravity offers very specific and definite predictions from the theory, confirmed by specific and definite observations.
I feel that's overstated.
Article:
In physics and classical mechanics, the three-body problem is the problem of taking the initial positions and velocities (or momenta) of three point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation.[1] The three-body problem is a special case of the n-body problem. Unlike two-body problems, no general closed-form solution exists,[1] as the resulting dynamical system is chaotic for most initial conditions, and numerical methods are generally required.

At some point, gravity stops being deterministic, at least in practice.
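To illustrate the practical side of that, here is a rough numerical sketch: softened planar three-body gravity, a fixed-step RK4 integrator, masses and layout loosely modelled on the classic Pythagorean configuration. All parameters are arbitrary choices, and the exact output depends on step size and softening; the point is only that a tiny difference in the initial data typically grows enormously.

```python
import numpy as np

# Integrate the same (planar, softened) three-body gravitational system twice,
# with initial positions differing by one part in a billion, and watch the two
# runs separate. G = 1 units throughout; softening keeps close encounters
# numerically tame for this illustration.
G = 1.0
EPS = 0.05
masses = np.array([3.0, 4.0, 5.0])

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (np.dot(r, r) + EPS**2) ** 1.5
    return acc

def rk4_step(pos, vel, dt):
    def deriv(p, v):
        return v, accelerations(p)
    k1p, k1v = deriv(pos, vel)
    k2p, k2v = deriv(pos + 0.5 * dt * k1p, vel + 0.5 * dt * k1v)
    k3p, k3v = deriv(pos + 0.5 * dt * k2p, vel + 0.5 * dt * k2v)
    k4p, k4v = deriv(pos + dt * k3p, vel + dt * k3v)
    pos = pos + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    vel = vel + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos, vel

# Bodies at rest at the corners of a 3-4-5 triangle (Pythagorean-style setup).
start = np.array([[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]])
pos_a, vel_a = start.copy(), np.zeros((3, 2))
pos_b, vel_b = start.copy(), np.zeros((3, 2))
pos_b[0, 0] += 1e-9     # one-part-in-a-billion nudge to a single coordinate

dt, t_end = 0.002, 20.0
for step in range(int(t_end / dt)):
    pos_a, vel_a = rk4_step(pos_a, vel_a, dt)
    pos_b, vel_b = rk4_step(pos_b, vel_b, dt)
    if (step + 1) % 2000 == 0:
        sep = np.linalg.norm(pos_a - pos_b)
        print(f"t = {(step + 1) * dt:5.1f}   separation between runs = {sep:.3e}")
# The separation typically grows by many orders of magnitude after a few close
# encounters; the deterministic equations remain exact, but prediction in
# practice is limited by how precisely the initial state is known.
```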

The wave function does not describe any invisible variable or force causing quantum-scale behaviours like the theory of gravity explains the causes of gravitational observations (massive objects attract one another). In other words, the latter goes beyond mere description of observed behaviours and attempts to explain their underlying causes. Which is what scientific theories essentially do.
I already explained in my side note (backed up by a quote from Sir Isaac Newton himself) that the theory of gravity did not explain an underlying cause.
Your personal difficulty seems to be making the leap from a deterministic concept of reality to a nondeterministic one. You simply reject any nondeterministic description as not being real, while you elevate deterministic descriptions to the status of being "real entities".

No. Rather, that's precisely what quantum mechanics appears to be. Generalized mathematical inferences from thousands of observations, which are often mistaken for a direct representation of reality.
But that's what you used as argument to show that gravity is real!
Either gravity is just a model, or quantum mechanics is real; or you have a contradiction.
 
Side note:



Both markus and I have avoided calling gravity a force. You do, and you also claim it's not a mere law.

Newton himself wrote that it's only a law (via https://plato.stanford.edu/entries/newton-philosophy/ ), to him, gravity is the description of the phenomenon, but not its cause, which he doesn't know:
Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial, I have left open to the consideration of my readers. (Newton 2004: 102–3)
And that actually tallies with relativity theory, which models gravity as a property of spacetime, and not a force.

A footnote to your sidenote:

Article:
However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them.
 
Ok, so you just flat out refuse to support the claim, even though you've now labeled it "common" again. Gotcha. (I'm not surprised, btw.)

I feel that's overstated.
Article:
In physics and classical mechanics, the three-body problem is the problem of taking the initial positions and velocities (or momenta) of three point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation.[1] The three-body problem is a special case of the n-body problem. Unlike two-body problems, no general closed-form solution exists,[1] as the resulting dynamical system is chaotic for most initial conditions, and numerical methods are generally required.

At some point, gravity stops being deterministic, at least in practice.


I already explained in my side note (backed up by a quote from Sir Isaac Newton himself) that the theory of gravity did not explain an underlying cause.
Your personal difficulty seems to be making the leap from a deterministic concept of reality to a nondeterministic one. You simply reject any nondeterministic description as not being real, while you elevate deterministic descriptions to the status of being "real entities".


But that's what you used as argument to show that gravity is real!
Either gravity is just a model, or quantum mechanics is real; or you have a contradiction.

Many misreadings and misunderstandings of what was written.
 
What a fascinating thread!

For what it's worth, this is what I think:
  • The big difference between classical and quantum physics is in how they treat probabilities: real numbers in classical, complex numbers in quantum (where you need to take the modulus of a+ib to get a 'physical' real probability). This is the root source of all the weirdness (*).
  • Or, to say it in a different way, it's the 'i' in Schrödinger's equation which creates all the trouble (**).

I like the way the 'problem' with the wave function as well as the Schrödinger equation is encapsulated in this somewhat recent Scientific American article:

Article:
The wave function has embedded within it an imaginary number. That’s an appropriate label, because an imaginary number consists of the square root of a negative number, which by definition does not exist. Although it gives you the answer you want, the wave function doesn’t correspond to anything in the real world. It works, but no one knows why. The same can be said of the Schrödinger equation.

Maybe we should look at the Schrödinger equation not as a discovery but as an invention, an arbitrary, contingent, historical accident, as much so as the Greek and Arabic symbols with which we represent functions and numbers. After all, physicists arrived at the Schrödinger equation and other canonical quantum formulas only haltingly, after many false steps.


Ugh, now it's in 4 dimensions (two different 3D plots) and it's much more complicated. Complex numbers are a mess by themselves.

Which takes us back to the inadequacy of the mathematical models currently used in QM, and the valuable effort by mathematical physicists and mathematicians to diversify the math used in physics. One of these recent efforts, making use of geometry, has yielded the beautiful amplituhedron, which challenges both the condition of unitarity in the Schrödinger equation and the principle of locality.

Article:
The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression.


 
What a fascinating thread!

For what it's worth, this is what I think:
  • The big difference between classical and quantum physics is in how they treat probabilities: real numbers in classical, complex numbers in quantum (where you need to take the squared modulus of a+ib to get a 'physical' real probability). This is the root source of all the weirdness (*).
  • Or, to say it in a different way, it's the 'i' in Schrödinger's equation which creates all the trouble (**).
Classical physics also uses complex numbers. The solution of the harmonic oscillator, the most basic problem in physics, uses complex numbers.

The solution to the classical wave equation involves complex numbers.

Electromagnetic waves are described with complex numbers, and you square(*) the values to relate to other real quantities like energy or power, in the same way you square(*) quantum complex coefficients to obtain the probabilities.

The Schrödinger equation is, strictly speaking, a diffusion equation, but the extra 'i' "converts" it into a wave equation, whose solutions are wave functions.

But there is nothing special about complex numbers and QM. Complex numbers are just another mathematical tool to describe physics, either classical or quantum.


(*) Not really "square", strictly speaking, but it's the same operation in both cases.
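For what it's worth, a minimal Python sketch of that parallel (illustration only, with an arbitrary amplitude and frequency): the classical oscillator or field amplitude is handled as a complex exponential whose real part is the physical displacement, and intensity comes from the same modulus-squared operation that is applied to quantum amplitudes.

Code:
# Illustration only: complex numbers doing routine classical work.
# A harmonic oscillator x'' = -w^2 x is solved by A * exp(i*w*t); the physical
# displacement is the real part. A classical field amplitude gives intensity
# proportional to |A|^2 -- the same operation used on quantum amplitudes.
import numpy as np

w = 2.0                             # angular frequency (arbitrary)
A = 1.0 * np.exp(1j * 0.3)          # complex amplitude encoding magnitude and phase
t = np.linspace(0.0, 5.0, 6)

displacement = (A * np.exp(1j * w * t)).real   # classical, observable quantity
intensity = abs(A) ** 2                        # classical "modulus squared"

print("displacement at sample times:", np.round(displacement, 3))
print("intensity |A|^2:", round(intensity, 3))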
 
That's assuming that the wave function directly describes a physical entity called 'electron' which suddenly collapses, upon measurement, into a definite locatable particle also called 'electron'. However, there is no scientific necessity to assume such a thing, or in fact anything beyond the wave function being merely a statistical model describing the probabilities for the possible future behaviours of the electron based on earlier measurements. A model that is merely being updated by new measurements.
It's the other way around: when you toss a baseball and, knowing its initial position and velocity, you get to make statements about where it is even when you're not looking, you're making use of a framework of classical physics in which objects have well-defined positions and velocities at all times, and there's (seemingly) no reason to assume that framework conspires to change if you're not looking. In quantum mechanics, there is nothing whatsoever that allows you to make that sort of inference, however heuristic: what is meant by the position of the electron are measurements of electron position. And we know from a variety of theoretical and experimental results (Bell's inequality, Kochen and Specker's theorem, Leggett inequalities, GHZ states, etc.) that nothing so naive as an actual mote-like object with an actual well-defined position can work. Those are rejected in a model-independent manner.

So, put another way, it's not that an unjustified assumption leads people to say that the electron doesn't have a location unless you look for it: it's that quantum mechanics forces us to abandon a previously unchecked, reasonable-sounding but ultimately wrong assumption that object properties are just as well defined and meaningful when they're not being measured.
Philosophically, there's no mystery involved in a statistical mathematical concept having no physicality, including no locality. It's, by definition, a bundle of alternative behaviours, some of them even mutually contradictory. What we rather seem to be witnessing in the statement 'measurement results in the collapse of the wave function' is an unwarranted claim of identity between an abstract statistical statement (a wave function) and a definite observational statement (the measured object). Two very different statements are sloppily employed for the same referent 'electron', despite the former being concerned with possible behaviours of the electron and the latter with its actual observed behaviour. That is to say, these statements evidently have two very different referents. And yet, we are expected to accept the identity of their referent as a brute fact, perpetrating the very definist fallacy we just discussed.
To address this specifically, say there is some object that is ultimately responsible for what we call "the electron", and the properties of this object are distributed according to some classical ensemble with well-defined properties at all times. Because the above theorems and experiments are model-independent, it's still guaranteed that whatever that object is, it doesn't have anything that could correspond to an "electron position" that's well-defined at all times (and revealed by measurement). It's guaranteed that the measurement result is generated during the measurement process itself, because any measurement independent "plan" would (per Kochen and Specker) give results that are in disagreement with quantum mechanics. Moral: even if quantum mechanics is ultimately falsified, whatever replaces it is guaranteed to be at least as weird.
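For anyone who wants to see the model-independent flavour of those results without the full formalism, here is a minimal Python sketch in CHSH form (an illustration only, assuming the textbook singlet-state correlation E(a,b) = -cos(a-b)): brute-forcing every deterministic local "plan" never exceeds |S| = 2, while the quantum correlation reaches 2√2.

Code:
# Illustration only: CHSH version of Bell's result. Every deterministic local
# "plan" fixes outcomes (+/-1) for both settings on each side; brute force
# shows |S| <= 2 for all of them. The singlet-state correlation
# E(a, b) = -cos(a - b) reaches 2*sqrt(2) at the standard optimal angles.
import itertools, math

def S(E):
    """CHSH combination for a correlation function E(a_setting, b_setting)."""
    return E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)

best_local = max(
    abs(S(lambda a, b, A=A, B=B: A[a] * B[b]))
    for A in itertools.product([-1, 1], repeat=2)   # Alice's preassigned outcomes
    for B in itertools.product([-1, 1], repeat=2)   # Bob's preassigned outcomes
)

angles_a = [0.0, math.pi / 2]
angles_b = [math.pi / 4, 3 * math.pi / 4]
quantum = abs(S(lambda a, b: -math.cos(angles_a[a] - angles_b[b])))

print("max |S| over local deterministic plans:", best_local)    # 2
print("quantum |S| at the optimal angles:", round(quantum, 3))  # ~2.828

(Shared randomness doesn't help a local model, since it just averages over deterministic plans.)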
 
We can't "perfectly anticipate" a nonlinear dynamic ("chaotic") system because we can never measure its initial state with sufficient accuracy. It's practically and theoretically impossible.
It's of course impossible in practice, but certainly not in the context of a thought experiment such as this. But it doesn't matter too much: the main takeaway here is that you never make a mistake if you assume that the Brownian dust particle has a well-defined position at all times. With electrons that assumption leads to crassly incorrect predictions.
Consider a longer time elapsed (involving very many collisions), and your expected location of the particle no longer depends on where you found it last (the initial measurement); consider a shorter time elapsed (involving a few expected collisions), and it does, even if you can't pin it down exactly: it's probably still in the vicinity, though it could (improbably) have travelled straight out without hitting anything. Shorten the time elapsed even further, and you can trace it fairly accurately with repeated measurements.
Yes, but it does move; the only thing that gets reset upon measurement of the Brownian mote are your predictions. With an electron, future measurements get reset as well.
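A minimal Python sketch of the classical side of that contrast (illustration only, a one-dimensional random walk standing in for the Brownian mote): the particle has a definite position at every step, a "measurement" merely reads it off, and only our predictive spread regrows afterwards.

Code:
# Illustration only: 1-D random walk as a stand-in for the Brownian mote.
# A definite position exists at every step; "measuring" it at step 500 resets
# our prediction, not the particle, and the predictive spread regrows ~ sqrt(n).
import random

random.seed(0)
position, trajectory = 0.0, [0.0]
for _ in range(1000):
    position += random.gauss(0.0, 1.0)   # one collision-driven kick per step
    trajectory.append(position)          # the position is always well defined

measured = trajectory[500]               # a classical measurement: just look

for n in (1, 10, 100):
    print(f"{n:>3} steps after the measurement: expect it near {measured:+.1f}, "
          f"spread ~ {n ** 0.5:.1f}")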
I don't understand your claim that the result of the measurement changes.
Is there further reading you can point me to?
That's the (in)famous collapse of the wavefunction. You can do an experiment yourself with polarizing filters: put two filters at 90 degrees with one another, no light goes through. Put a third filter at a 45 degree angle with both, and now some light goes through. That can be understood classically without much difficulty, of course, but the quantum perspective is illustrative: each filter corresponds to a measurement of photon polarization, and every photon that passes will be polarized along the direction of the filter. It turns out that this much is completely general: if you make a measurement, and get a result, an identical measurement immediately following the first must give the exact same result. It's one of the postulates of quantum mechanics, albeit the most controversial.
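If you want to play with the numbers, here is a minimal Python sketch of those idealized filters (an illustration only, assuming the light entering the first filter is already polarized along it): each filter passes a photon with probability cos² of the angle to its current polarization and, if it passes, resets that polarization to the filter axis.

Code:
# Illustration only: ideal polarizing filters as successive measurements.
# Pass probability is cos^2(angle between current polarization and the filter);
# a photon that passes comes out polarized along the filter's axis.
import math

def transmitted_fraction(filter_angles_deg, input_angle_deg=0.0):
    """Fraction of photons, initially polarized at input_angle_deg, that
    survive the whole sequence of ideal filters."""
    fraction, polarization = 1.0, math.radians(input_angle_deg)
    for deg in filter_angles_deg:
        axis = math.radians(deg)
        fraction *= math.cos(axis - polarization) ** 2   # pass probability
        polarization = axis                              # "collapse" onto the axis
    return fraction

print(round(transmitted_fraction([0, 90]), 6))      # 0.0  -- crossed filters block all
print(round(transmitted_fraction([0, 45, 90]), 6))  # 0.25 -- the 45° filter lets light through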

A textbook that is unusually clear about these points (in my opinion) is Ballentine's, particularly chapter 9.

Scott Aaronson's series "Quantum Computing since Democritus" is fantastic reading in its entirety, but in this context I especially recommend this page:

https://www.scottaaronson.com/democritus/lec11.html

The relevant section is titled "Invariance under Time-Slicings".
I understand that in the double-slit experiment, the interference pattern goes away once you install photon detectors on the slits; but I've always thought that this happens regardless of whether a conscious being observes that output or not, simply because the detector has to change the phase of the photon to be able to detect it at all.
Note I never said anything about a "conscious being". To my knowledge quantum mechanics doesn't say anything useful or meaningful about consciousness. It does tell you, as in a creature or artificial intelligence sophisticated enough to perform measurements, what results you should expect. And it turns out that what you know (or, more properly, what you could possibly know, given the information available to you) does affect future results.
So "staring at the electron" also sets up an interaction that can influence its trajectory, because in order to get information, you need energy as a carrier, and therefore you can't have that information without influencing the system; and therefore the predicted outcome for "no observation" is just what the electron would do, wheras "with observation" it is also what the electron would do, plus what the observation interaction does to it.

I understand your claim to say that there is a difference beyond that, is that correct? and if so, then I don't understand what it is.
So, if I'm understanding you correctly, your proposal is that the photon is some object akin to a classical wave, and that running it through a detector introduces some uncontrolled phase error that shifts the interference around in such a way that no pattern emerges after many trials? It's an appealing, plausible, reasonable sounding suggestion, but it doesn't work: one clear example of why it doesn't work is given by the delayed-choice quantum eraser experiment.

In this (widely misunderstood) experiment, you start with the two slits and single photons as usual, only after the photon goes through the slits it enters a nonlinear crystal that splits it into an entangled pair. One half of the pair (termed the "signal" photon) gets taken to a movable detector, which takes the role of the screen and records detection events (which will evince, or not, an interference pattern). The other half (termed the "idler" photon) goes through a set of mirrors and splitters, eventually hitting one of a set of 4 detectors. If it hits detectors 1 or 2, it could've come from either slit. If it hits detectors 3 or 4, it could only have come from slit B or A respectively. So an idler detection event at one set of detectors gives which-slit information (which can be used to reason about what the idler photons will do), and a detection event at the other set does not.

Important: the path taken by the idler photons is much longer than that taken by the signal photon, so the signal photon is detected first, then the idler.

You let the experiment run, accumulating detection events at both signal and idler detectors. You can now tally up these events based on what detector the idler photons hit, and what you find is that, with the cases where idler gives you "which slit" information you don't get an interference pattern with the signal photons, but if you tally up the events where you don't know where the idler came from -- when it hits detectors 1 or 2 -- you do get an interference pattern! The detection of the idler photon can't possibly have disturbed the phase of the signal photon in any way -- it's a different particle, far away, and it's already been detected!

What happens is something more subtle. With any detection event, by definition, the detector must get entangled with the detectee. "Entanglement" means that you no longer get to ignore the detector if you want a complete statistical description of what will happen, and in particular, if you want to see interference effects, you have to put the entire detector in a coherent superposition. This is de facto impossible with macroscopic objects, so what we see with the ordinary two-slit experiment is the utter destruction of the interference pattern upon detection. What the delayed-choice quantum eraser experiment does is mess up the coherence of the photon in a controlled way so that this can be (partly) undone and the interference pattern recovered.
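To see how the tallying works out numerically, here is a minimal Python sketch (an illustration only, using idealized textbook forms for the conditional distributions rather than anything from the actual paper): signal events coincident with D1 show fringes, those coincident with D2 show the complementary anti-fringes, the which-slit detectors D3/D4 give no fringes, and the unconditioned total shows no interference at all.

Code:
# Illustration only: idealized conditional signal distributions in the
# delayed-choice eraser. D1 events show fringes, D2 the complementary
# anti-fringes, D3/D4 (which-slit known) none. The sum over all idler
# outcomes carries no interference pattern.
import math

def conditional_intensity(x, idler_detector):
    """Relative signal-detection rate at screen position x (arbitrary units)."""
    envelope = math.exp(-x ** 2 / 8.0)                    # smooth diffraction envelope
    if idler_detector == "D1":
        return 0.25 * envelope * (1 + math.cos(3 * x))    # fringes
    if idler_detector == "D2":
        return 0.25 * envelope * (1 - math.cos(3 * x))    # anti-fringes
    return 0.25 * envelope                                # D3 / D4: flat

for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    parts = {d: conditional_intensity(x, d) for d in ("D1", "D2", "D3", "D4")}
    print(f"x={x:3.1f}  D1={parts['D1']:.3f}  D2={parts['D2']:.3f}  "
          f"total={sum(parts.values()):.3f}")

The fringes and anti-fringes cancel in the total, which is why nothing changes on the signal side until you sort the events by idler outcome after the fact.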
 
Which takes us back to the inadequacy of the mathematical models currently used in QM, and the valuable effort by mathematical physicists and mathematicians to diversify the math used in physics. One of these recent efforts, making use of geometry, has yielded the beautiful amplituhedron, which challenges both the condition of unitarity in Schrödinger's equation and the principle of locality.
It's worth noting that the amplituhedron is an attempt to prove locality and unitarity arise from more fundamental principles, not to reject them. Breaking unitarity is disastrous for quantum mechanics, since it would mean probabilities no longer sum to 1, and breaking locality is the sort of thing that lets you go back in time to murder your ancestors.
 
It's the other way around: when you toss a baseball and, knowing its initial position and velocity, you get to make statements about where it is even when you're not looking, you're making use of a framework of classical physics in which objects have well-defined positions and velocities at all times, and there's (seemingly) no reason to assume that framework conspires to change if you're not looking. In quantum mechanics, there is nothing whatsoever that allows you to make that sort of inference, however heuristic

It is true that the behaviour of a 'classical' non-stochastic system or entity can be predicted in a more deterministic fashion from our knowledge of the initial conditions. It is true such inferences do not work with a system or entity that behaves, at least partially, indeterministically (or stochastically). Hence the employment of probabilistic models. These models describe a set of possibilities for the future behaviour of an electron, and assign these possibilities various probabilities. Measurement provides us information on the actual behaviour of these systems, falling into one of the possibilities outlined by the probability amplitude. There is nothing particularly mysterious about this.

What is 'weird' and indeed fascinating is the stochastic character of quantum-scale behaviours, which impels us to use probabilistic models instead of deterministic ones. Confusion arises, giving rise to quite a bit of mystification of QM, when these probabilistic models are treated as direct descriptions of actual physical behaviours instead of matrices of possible behaviours.

So, put another way, it's not that an unjustified assumption leads people to say that the electron doesn't have a location unless you look for it: it's that quantum mechanics forces us to abandon a previously unchecked, reasonable-sounding but ultimately wrong assumption that object properties are just as well defined and meaningful when they're not being measured.

Again, this is merely a function of the actual stochasticity within the system rather than a 'strange' problem of measurement. There is no way, other than by probabilistic mathematical models, to describe indeterministically behaving systems (which, to repeat, are not unique to QM). Such models, by default, cannot assign definite object properties outside measurements. Again, no real mystery here.

To address this specifically, say there is some object that is ultimately responsible for what we call "the electron", and the properties of this object are distributed according to some classical ensemble with well-defined properties at all times. Because the above theorems and experiments are model-independent, it's still guaranteed that whatever that object is, it doesn't have anything that could correspond to an "electron position" that's well-defined at all times (and revealed by measurement). It's guaranteed that the measurement result is generated during the measurement process itself, because any measurement independent "plan" would (per Kochen and Specker) give results that are in disagreement with quantum mechanics. Moral: even if quantum mechanics is ultimately falsified, whatever replaces it is guaranteed to be at least as weird.

A good analysis which does not really contradict (at least to my understanding) what I have written. In other words, the 'weirdness' of QM is more a function of the stochastic behaviour and technically difficult measurability of quantum-scale phenomena, rather than the probabilistic math used to describe them. There's a lot of math that's weird or weirder.

However, what adds an unnecessary additional layer of weirdness is mistaking the probabilistic math for actual physical behaviours despite the math being merely a description of possible behaviours. The statement 'wave function collapse' embodies such a mistake, at least seemingly, by referring to a single physical entity that suddenly transforms (a complex non-definite wave function collapsing into definite behaviour). It conflates two qualitatively very different statements into a single physical entity despite the 'wave function' merely describing a set of possible behaviours of the electron and the 'collapse by measurement' describing its actual behaviour. Yes, both terms concern the 'electron', but in an entirely different way.

To put it in philosophical terms, the former (wave function) describes at least a part of the ontological space within which electrons exist and behave, while the latter (collapse at measurement) describes the actual behaviour within that space. In the least, it is misleading to string these two very different 'entities' into one simple statement 'wave function collapse'.
 
It's worth noting that the amplituhedron is an attempt to prove locality and unitarity arise from more fundamental principles, not to reject them. Breaking unitarity is disastrous for quantum mechanics, since it would mean probabilities no longer sum to 1, and breaking locality is the sort of thing that lets you go back in time to murder your ancestors.

A worthy clarification. As to the part in italics: Such attempts, when successful, usually result in the replacement of inadequate (as opposed to totally incorrect) terms by more fundamental principles. So they still qualify as a 'challenge' to 'locality' and 'unitarity' as they are currently defined.
 
It's of course impossible in practice, but certainly not in the context of a thought experiment such as this. But it doesn't matter too much: the main takeaway here is that you never make a mistake if you assume that the Brownian dust particle has a well-defined position at all times. With electrons that assumption leads to crassly incorrect predictions.
I understand that as equivalent to the idea that a photon has to pass through both slits at once to interfere with itself in the double-slit experiment? So even though we inject a whole photon into the setup, and retrieve a whole photon out of it at the detector, the idea that the photon has remained whole on the journey between these points is a fallacy.

Thank you for the references; I plan to look at them later.

I'm thinking my way through https://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser .

What it seems to say is that if we eliminate the "idle" part of the setup (from prism PS onward), we'll see a "signal" diffraction pattern with no obvious interference at D0; but that adding the detection circuit allows us to pick two subsets of "signal" photons at D0 that do exhibit an interference pattern.
I think this happens because the photons interfere with themselves at Beam Splitter C, and that beam splitter sorts the "idle" photons in the same way that the upper "signal" path of the experiment sorts their "entangled" partner copies into a diffraction pattern.

Another way to put my understanding is that the photons have a property (similar to polarization) that determines where they'll land at D1; if you could select all photons with 22°, they'd all land in the exact same spots, and these spots would be slightly offset to the spots where the 23° photons land, etc. You wouldn't see this pattern when you have light where this property is equally distributed across all photons; but if you introduce a filter that would, say, transmit t=22° and 202° fully, and t=102° and 292° not at all, and the others something in-between (e.g. with probability p=cos(t-22°)^2), then you'd see a diffraction pattern. The angle of the filter wouldn't even matter!
So if the Beam Splitter C acts as that kind of filter, then you'd see this outcome; but it only works like that when there's blue/red interference.

Thinking about this has convinced me of something I wasn't convinced of before, and that is that the photon has to be on the red and blue beams both. Would it be ok to call that a superposition?

It's also clear that somehow you can retrieve the energy of a photon at D3 or D4, but not at both. And that's a little weird as well, but it's kinda like if you have X amount of water in a bucket with multiple taps, and you take that water out via one of the taps, you can't take it out from another tap any more -- except with energy instead of water, and Thermodynamics. And that's really hard to wrap my head around, that this "single outflow" can happen along with the superposition, since I'd expect a photon detected at blue detector D3 to also have to traverse the red part of the idle setup because of the superposition, but there's never any energy being removed on that side. [Ah well, maybe it'll become clearer once I peruse the other references you provided me with!]

What I'm not convinced of is that the choices I make in the "idle" part of the setup influence what happens in the "signal" part of the setup, or vice versa.
[And that, too, may become clearer by looking at other experiments.]
 
Pity you gave up on pursuing the epistemological question of what gravity really is, and how we know that.
The author of this article is indeed confused about the distinction between reality and model, but he's a writer and not a physicist by education. (Tip: mathematics deals with abstract structures, nothing in mathematics is real, but if you identify a structure in reality, you can apply the corresponding maths to it.)
I've understood the amplituhedron discovery to show that you can have two different (mathematical) models for the same physical phenomenon that both "work" in terms of predictive power, even though one model has more explanatory power than the other. (There's another analogy to Newton's vs. Einstein's gravity!)
 
Pity you gave up on pursuing the epistemological question of what gravity really is, and how we know that.

The author of this article is indeed confused about the distinction between reality and model, but he's a writer and not a physicist by education. (Tip: mathematics deals with abstract structures, nothing in mathematics is real, but if you identify a structure in reality, you can apply the corresponding maths to it.)

I've understood the amplituhedron discovery to show that you can have two different (mathematical) models for the same physical phenomenon that both "work" in terms of predictive power, even though one model has more explanatory power than the other. (There's another analogy to Newton's vs. Einstein's gravity!)

No, dear @Mendel. You gave up trying to understand what was written to you, and the dispassionate non-adversarial tenor in which it was written, far earlier. Debating with your own caricatures of your interlocutor's statements is a pattern I noticed a while back. It happens especially when you are evidently out of your depth while failing to acknowledge it. It's also evident in your responses to Markus, minus the antagonism. If, however, you wish to properly discuss what gravity is, I already welcomed you to start a new thread.

Otherwise, you're just trolling.
 
Based partly on the preceding discussion, I am offering here a summary of three (3) main factors that have contributed to the seeming ‘strangeness’ of QM.

These factors are easily misunderstood and have thereby contributed to later mystifications of QM by various authors. These authors range from solid QM physicists, such as John Stewart Bell with his later hypothesis of 'super-determinism' to account for the Bell inequalities he himself had earlier articulated, to known pseudo-scientific authors such as Deepak Chopra.

Three factors contributing to the impression of the unique strangeness of QM:

(1) Genuine, model-independent, strangeness of actual quantum-scale observations, such as nonlocality (cf. Bell's inequalities), complementarity (cf. Heisenberg's uncertainty principle), randomness and indeterminism (cf. radioactive decay, vacuum fluctuation) and wave-particle duality (cf. double-slit experiment). These observations, alone, do not render QM notably stranger than, say, the strange and mind-boggling observations pointing to singularities in cosmology and astronomy (including black holes and the cosmological events preceding the Planck Epoch). However, the propensity probability, or macro-determinism, of these observations when taken together – that is, the simultaneity of microscale random processes and deterministically predictable and measurable result-types – is both fascinating and baffling (see the sketch at the end of this post).

(2) The dominance, in QM more than in any other field of physics, of a positivistic interpretive tradition tracing all the way back to Niels Bohr and Werner Heisenberg.

Article:
Quantum mechanics (QM) was developed over the first quarter of the 20th Century, when scientists were enthralled by a new philosophy known as Positivism, whose foundations were based on the assumption that material objects exist only when measured by humans – this central assumption conflates epistemology (knowledge) with ontology (existence).


The positivist tradition partly owes to the great difficulty of explaining, by means of 'classical' physical theories and mathematical models, the underlying physical reality that could satisfactorily account for these strange observations (factor #1 above). According to the positivist tradition (also called the 'Copenhagen interpretation'), QM does not, nor should it even attempt to, describe physical reality as is, but only as it appears to the observer. The theory of quantum mechanics is merely an instrument of prediction and organization of the observable phenomena, not a description of reality.

As Niels Bohr himself wrote back in 1948 (which was tersely echoed in the Scientific American article cited earlier):

Article:
“The entire formalism is to be considered as a tool for deriving predictions of definite or statistical character, as regards information obtainable under experimental conditions described in classical terms and specified by means of parameters entering into the algebraic or differential equations of which the matrices or the wave-functions, respectively, are solutions. These symbols themselves, as is indicated already by the use of imaginary numbers, are not susceptible to pictorial interpretation; and even derived real functions like densities and currents are only to be regarded as expressing the probabilities for the occurrence of individual events observable under well-defined experimental conditions.”


As the article demonstrates, Niels Bohr was actively engaged with, and influenced by, the Vienna Circle, which, in turn, was committed to the program of logical positivism. To interpret realistically these instrumentalist QM models, which were not even seriously entertained as direct descriptions of physical reality, therefore results in further confusion and invites unnecessary mystification.

(3) Misleading terminology, partly owing to the positivistic interpretive tradition (factor #2 above), in statements such as 'observation causes a wave function collapse'. This terminology is misleading whether it is interpreted positivistically (not referring to actual reality) or realistically (referring to actual reality). The conflation problem was discussed earlier. Such statements add an unnecessary layer of weirdness by mistaking the probabilistic math defining the concept of 'a wave function' for a reference to the same conceptual or real entity as is described by the 'definite actual observable properties' at measurement. This despite the fact that the former is a description of 'possible' behaviours of an electron and the latter a measurement of 'actually' observed behaviours. These two statements therefore have two quite different referents, which are conflated into one entity by the term 'collapse' without any clear justification.

Recent mathematical discoveries such as the amplituhedron may offer an avenue for a realistic non-positivistic quantum theory, describing non-local hidden variables underlying QM in terms of geometry. Similar models could account for both the probabilistic and deterministic character of QM observations without appeal to imaginary numbers (Schrödinger's equation) or stumbling on locality and unitarity, while still employing a mathematical language that could unify macroscopic and microscopic physics through quantum field theory.
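As a footnote to factor (1), here is a minimal Python sketch of the 'macro-determinism' point (an illustration only, with an arbitrary mean lifetime): each simulated decay time is random, yet the surviving fraction of a large ensemble tracks the deterministic exponential law very closely.

Code:
# Illustration only: microscale randomness, macroscale regularity.
# Individual decay times are drawn at random, yet the surviving fraction of a
# large ensemble follows the deterministic law N(t)/N0 = exp(-t/tau).
import math, random

random.seed(1)
tau = 10.0                                    # mean lifetime (arbitrary units)
N0 = 100_000
decay_times = [random.expovariate(1.0 / tau) for _ in range(N0)]

for t in (5, 10, 20, 40):
    surviving = sum(1 for d in decay_times if d > t) / N0
    print(f"t = {t:>2}: simulated fraction {surviving:.4f}   exp(-t/tau) = {math.exp(-t / tau):.4f}")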
 
Measurement provides us information on the actual behaviour of these systems, falling into one of the possibilities outlined by the probability amplitude. There is nothing particularly mysterious about this.
The problem with the italicized portion is that we don't know to what extent a notion of "actual behavior" may be meaningful, if at all. An enormous amount of ink has been spilled on the question of "elements of reality" (terminology commonly attributed to Einstein), which today might be called "hidden variables", with the eventual result of the vast majority of physicists regarding the problem as hopeless and working on something else. The mystery is not in interpreting the quantum state as a recipe for calculating probabilities of experimental outcomes; that's straightforward. The mystery, for those who see a mystery, is that nobody's been able to come up with a classical statistical ensemble that's described by the quantum rules, and not for lack of trying.
Again, this is merely a function of the actual stochasticity within the system rather than a 'strange' problem of measurement. There is no way, other than by probabilistic mathematical models, to describe indeterministically behaving systems (which, to repeat, are not unique to QM). Such models, by default, cannot assign definite object properties outside measurements. Again, no real mystery here.
I'm not sure what you mean here. Take the example of the classical Brownian particle: it has a well-defined position at all times.
A good analysis which does not really contradict (at least to my understanding) what I have written. In other words, the 'weirdness' of QM is more a function of the stochastic behaviour and technically difficult measurability of quantum-scale phenomena, rather than the probabilistic math used to describe them. There's a lot of math that's weird or weirder.
Right, I'm not disagreeing with the overall point that it's important to avoid mistaking the model for the physical system, and that many routinely do, and that doing so is a big source of confusion. But at the same time it's not "just" classical stochastic behavior and the fact that measuring small things is hard. There's something new here. Now, that something may turn out to be just stochastics in some deeper sense, but nobody's been able to find a way to make that connection, and the corpses of failed classical models are strewn about everywhere, impaled with no-go theorems.
However, what adds an unnecessary additional layer of weirdness is mistaking the probabilistic math for actual physical behaviours despite the math being merely a description of possible behaviours. The statement 'wave function collapse' embodies such a mistake, at least seemingly, by referring to a single physical entity that suddenly transforms (a complex non-definite wave function collapsing into definite behaviour). It conflates two qualitatively very different statements into a single physical entity despite the 'wave function' merely describing a set of possible behaviours of the electron and the 'collapse by measurement' describing its actual behaviour. Yes, both terms concern the 'electron', but in an entirely different way.

To put it in philosophical terms, the former (wave function) describes at least a part of the ontological space within which electrons exist and behave, while the latter (collapse at measurement) describes the actual behaviour within that space. In the least, it is misleading to string these two very different 'entities' into one simple statement 'wave function collapse'.
Personally, I wouldn't even say anything about "actual" behaviors because we don't really know what lies underneath quantum mechanics, if anything. We know that when we make what's conventionally called a measurement we get a result and that making the same measurement immediately afterwards gives an identical result. Those discrete nuggets of information are all we get. It's not clear (at least to me) that notions such as "measurement" or even "dynamics" make much sense at whatever level a hypothetical classical ensemble operates, if one even exists at all.
A worthy clarification. As to the part in italics: Such attempts, when successful, usually result in the replacement of inadequate (as opposed to totally incorrect) terms by more fundamental principles. So they still qualify as a 'challenge' to 'locality' and 'unitarity' as they are currently defined.
Not necessarily: for example, the path integral didn't replace the operator formalism, it supplemented it. The Heisenberg picture coexists with the Schrödinger picture (and the interaction picture!). In physics, typically the deeper you go, the more totally equivalent ways there are of looking at the exact same thing. That's the case with the amplituhedron. It seems computationally advantageous to calculate scattering amplitudes by just computing the volume of some polytope instead of adding up Feynman diagrams and keeping careful track of the many cancellations that come with N=4 super Yang-Mills' highly supersymmetric structure, but that's assuming you want to calculate scattering amplitudes in the first place. There are other questions to be asked of a particle theory, and for other questions the perturbative diagrammatic approach may be more convenient.

The idea of "replacing" one picture for the other would only be unavoidable if the formalism were, say, fundamentally non-local and non-unitary but approached a local and unitary description in the low energy limit. But the amplituhedron is both local and unitary, only the locality and unitarity are not manifest, just as the Lorentz symmetry of Maxwell's equations is not obvious just by looking.
 
I understand that as equivalent to the idea that a photon has to pass through both slits at once to interfere with itself in the double-slit experiment? So even though we inject a whole photon into the setup, and retrieve a whole photon out of it at the detector, the idea that the photon has remained whole on the journey between these points is a fallacy.
It's that same idea, yes, but I try to avoid language such as "the photon has to pass through both slits at once" because whatever the photon is "really" doing is not the sort of question that quantum mechanics is equipped to answer. We can say that calculating the probability correctly requires considering all paths that a particle could take in between its origin and destination, with their associated interference effects.
Thank you for the references; I plan to look at them later.

I'm thinking my way through https://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser .

What it seems to say is that if we eliminate the "idle" part of the setup (from prism PS onward), we'll see a "signal" diffraction pattern with no obvious interference at D0; but that adding the detection circuit allows us to pick two subsets of "signal" photons at D0 that do exhibit an interference pattern.
I think this happens because the photons interfere with themselves at Beam Splitter C, and that beam splitter sorts the "idle" photons in the same way that the upper "signal" path of the experiment sorts their "entangled" partner copies into a diffraction pattern.
Just right -- the entanglement is manifested as a correlation between the measurement of position of the signal photon and the choice of which among D1...4 will fire.
Another way to put my understanding is that the photons have a property (similar to polarization) that determines where they'll land at D1; if you could select all photons with 22°, they'd all land in the exact same spots, and these spots would be slightly offset to the spots where the 23° photons land, etc. You wouldn't see this pattern when you have light where this property is equally distributed across all photons; but if you introduce a filter that would, say, transmit t=22° and 202° fully, and t=102° and 292° not at all, and the others something in-between (e.g. with probability p=cos(t-22°)^2), then you'd see a diffraction pattern. The angle of the filter wouldn't even matter!
So if the Beam Splitter C acts as that kind of filter, then you'd see this outcome; but it only works like that when there's blue/red interference.
Here's where the no-go theorems come in. Let's say the photon does indeed have a property like that, which you can use to come up with a 'plan' for what the measurements will yield given various initial conditions of that photon hidden variable, as well as variations on the experiment (measurements in different bases). By the Kochen-Specker theorem, I can now come up with a slight tweak to this experiment so that your plan will necessarily disagree with quantum mechanics. So it can't be a plan, and it can't be the interaction with the detectors.
Thinking about this has convinced me of something I wasn't convinced of before, and that is that the photon has to be on the red and blue beams both. Would it be ok to call that a superposition?
Yep, this is exactly a superposition.
It's also clear that somehow you can retrieve the energy of a photon at D3 or D4, but not at both. And that's a little weird as well, but it's kinda like if you have X amount of water in a bucket with multiple taps, and you take that water out via one of the taps, you can't take it out from another tap any more -- except with energy instead of water, and Thermodynamics. And that's really hard to wrap my head around, that this "single outflow" can happen along with the superposition, since I'd expect a photon detected at blue detector D3 to also have to traverse the red part of the idle setup because of the superposition, but there's never any energy being removed on that side. [Ah well, maybe it'll become clearer once I peruse the other references you provided me with!]
What you just described sounds a lot like what Scott Aaronson called the "flow theory", which did happen to be in that reference! It's probability flowing through those pipes, not energy, but you seem to have a similar intuition.
What I'm not convinced of is that the choices I make in the "idle" part of the setup influence what happens in the "signal" part of the setup, or vice versa.
[And that, too, may become clearer by looking at other experiments.]
Right, I think the researchers could've been clearer on this point, since it's widely misunderstood. Choice doesn't factor into it at all. The photons go where they go, and the detectors that fire, fire whether you want them to or not. All we can do is separate out the events after the fact and see the distribution of positions for the signal photon in those events where the idler hit D1, or respectively D2, D3, or D4. They chose; you're just analyzing their choices.
 