Wigner’s Dopes
Wigner’s Friend is a real dope for taking orthodox QM too seriously. On the other hand, one should take quantum mechanics seriously. The issue is, what is “QM”? Is it our model, or is it the (perhaps unknowable) way ℕ𝕒𝕥𝕦𝕣𝕖 works? The answer is it is our model, which is why you ought to take it seriously, but not too seriously. You do not really want to go around interpreting the model. The model is our interpretation of ℕ𝕒𝕥𝕦𝕣𝕖. There’s no need for further layers. Unless you are a professional, someone who gets paid just to think.
Essentia Fondulations
The latest grist for my grindstone comes from Renato Renner and the Essentia Foundation. Their videos are well produced and always worth a listen. But they will always frustrate me.
I wrote about the gedankenexperiment in Inconsistent Persistent, but let’s do a quickfire review.
Renner, Frauchiger, Brukner, Cavalcanti, et al. imagine a system of four computers programmed to “know” quantum mechanics. Experiments are performed, and measurements obtained. (Hey, I said it’d be quickfire.) The last pair of computers yield inconsistent predictions even though all the computers are doing nothing but faithfully applying quantum mechanics.
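If you like to see the arithmetic, the core of the clash is small enough to check on a laptop. Here is a minimal sketch, assuming the standard two-qubit “Hardy state” reduction of the protocol (the state and measurement bases below follow the usual presentation, not the computers’ full programs): the joint “ok, ok” outcome, the branch where the agents’ chained predictions contradict one another, occurs with probability 1/12.

```python
import numpy as np

# Two-qubit reduction of the Frauchiger-Renner setup (the "Hardy state"):
# a coin (h/t) in Alice's lab and a spin (down/up) in Bob's lab.
# |psi> = (|h,down> + |t,down> + |t,up>) / sqrt(3)
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dn, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = (np.kron(h, dn) + np.kron(t, dn) + np.kron(t, up)) / np.sqrt(3)

# The two "Wigners" measure whole labs in superposed bases:
ok_coin = (h - t) / np.sqrt(2)    # Wigner's "ok" outcome on Alice's lab
ok_spin = (dn - up) / np.sqrt(2)  # the assistant's "ok" outcome on Bob's lab

p_ok_ok = abs(np.kron(ok_coin, ok_spin) @ psi) ** 2
print(p_ok_ok)  # 0.0833... = 1/12, the "impossible" branch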
The claim is this proves orthodox QM is inconsistent.
I buy that argument.
Also, take note, we could perform the experiment! Would the universe disappear in a cloud of logic? No! We’d just get two conflicting computer printouts. Big deal. What does a bright hacker say when this happens? One of the computers had a bug in its software.
Only in this case all were consistently programmed. So was it a case of GIGO? No! All the computers were fed only objectively correct data.
Conclusion: all the software had a bug, the same bug — QM is not a consistent formal system.
There is a sniff of Gödel–Turing about this gedankenexperiment, but the result is novel and pretty interesting: it amounts to a kind of twisted Gödel argument. In the Gödel argument the result is that Peano arithmetic cannot be complete if it is consistent. Frauchiger–Renner have it the other way around for QM: if QM is complete then it is inconsistent, and so it cannot be both complete and consistent.
What do I mean? I mean that you can still use QM for “doing physics”, but not all the time for everything.
Escaping the Inconsistency
The more recent papers following from the Frauchiger–Renner Inconsistency have found various ways to recover from the paradox. So it was a fake paradox.
However, all such proposals will still violate at least one of the original three “reasonable assumptions” of Frauchiger–Renner:
(Q) — the universal validity of quantum theory (or, more specifically, that an agent can be certain that a given proposition holds whenever the quantum-mechanical Born rule assigns probability 1 to it).
(C) — demands consistency, in the sense that the different agents’ predictions are not contradictory.
(S) — the requirement that, from the viewpoint of an agent who carries out a particular measurement, this measurement has one single outcome.
Just like all other (re)interpretations of QM, how one goes about avoiding the Inconsistency (making it a proper noun) depends on aesthetic taste. But I think (I do not know for sure) all such escapes are purely logical loopholes; they are not motivations for any advance in physics theory. To be sure, a logical tidy-up can be a precursor to a theoretical physics advance. If we consider experimental clues as also part of the logical consistency requirements (as we do in physics††) then theoretical advances are almost always of this type — a logical inconsistency forces some advance.
††The logic that says theory should agree with experiment within respectable uncertainties.
Personally, I do not like such wriggling-out loopholes. What I want is a physically grounded reason why there is no inconsistency. Not just a logical loophole escape.
There is no Objective Reality?
At a point early on in the episode (here) Cavalcanti, I think, claims there is no way to transform between reference frames in QM. But I think this is ill-conceived. My comment:
@4:30 it is not true “there is no script” as in ‘no objective reality’. Reference frames are still valid; the issue posed by Frauchiger–Renner is one of consistently describing frames of reference. QM is an inconsistent description. In other words, the dude is projecting his metaphysical biases. What the Frauchiger–Renner–Wigner paradoxes show is only that orthodox QM is an inconsistent description. This is not surprising, since in orthodox QM the “measurement postulate” is ad hoc. It should never have been interpreted as anything but a stop-gap so that we can make predictions for nearly isolated systems. But no large system is even “nearly” isolated. Once you realize the isolation refers to not just any old interactions but entanglement-breaking and -forming interactions, then you realize the Wigner’s Friend gedankenexperiments are deeply ill-posed, and so the fact they give rise to an inconsistency is not shocking; it might even have been suspected.
There is no such inconsistency in the Feynman path integral formulation, since that is a pure spacetime formulation and only yields probabilities — no measurement postulate is needed. After a measurement process the physicist just does a Bayesian update. The metaphysician can always imagine all paths in the Sum Over Histories were “real”, hence a “collapse” has to be also imagined, but we have no need to, it is a non-classical statistical theory, not an ontological theory — q.v. the framework given by Jacob Barandes.
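To make that concrete, here is a toy two-path sum-over-histories, purely illustrative (the geometry, wavenumber, and detector window below are all made up): amplitudes are summed over paths, squared into Born probabilities, and a detection is handled by ordinary Bayesian conditioning. No collapse machinery appears anywhere.

```python
import numpy as np

# Toy sum-over-histories for two slits: one amplitude per path,
# with a path-length phase playing the role of action/hbar.
def amplitude(path_lengths, k):
    return sum(np.exp(1j * k * ell) for ell in path_lengths) / np.sqrt(len(path_lengths))

screen = np.linspace(-1.0, 1.0, 201)   # detector positions (arbitrary units)
d, L, k = 0.3, 5.0, 80.0               # slit separation, slit-screen distance, wavenumber

# Born-rule probabilities over the screen: interference fringes appear.
probs = np.array([
    abs(amplitude([np.hypot(L, x - d/2), np.hypot(L, x + d/2)], k)) ** 2
    for x in screen
])
probs /= probs.sum()

# "Measurement": a detector near x = 0.25 fires. The physicist's move is
# ordinary Bayesian conditioning on that fact, not a physical collapse.
posterior = np.where(np.abs(screen - 0.25) < 0.05, probs, 0.0)
posterior /= posterior.sum()
```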
I would say that the Feynman Path Integral formulation is also not a complete account of reality, but at least it is not inconsistent.
I had a burst of frustration and wrote more when Mr Essentia pulled out Schrödinger’s Cat from his PNG database.
@4:45 ever heard of quantum necromancy? Every sane person knows the Cat was always alive (apart from the fact the SPCA disabled the poison vial). Physics nerds should know too, but they’re nerds, so what can I tell you? Macroscopic objects are never in complete superposition, except w.r.t. our knowledge. The only superpositions we need to admit are real are the off-shell paths in the Sum Over Histories, but not all of them need be actual; if Nature samples them, then we have to account for them by imagining a fictional complete superposition. Only qubits can be entangled, and entanglement is the source of all superposition. A system, like a Cat, with heaps of entanglement entropy is always in a pretty definite state. Only its individual electrons and quarks etc., are a bit indeterminate for nanoseconds in stretches. It’s really f-ing stretching philosophy into absurdity to claim the whole Cat is macroscopically indeterminate because many of its parts are indeterminate. (It’s a similar brainworm to neoclassical economics — totally lazy and uncritical thought.)
This was only friggin’ 5 minutes in, so I am doomed to a day of frustration if I watch this thing. Time to check out All Blacks vs. France. Beauden Barrett is the only thing on Earth with a mass greater than 0.00000001 kg that is in coherent superposition, will he kick or will he pass? No open side flanker knows.
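Before I go, to put a number on the “heaps of entanglement entropy” point in that comment, here is a throwaway check (toy numpy, two qubits standing in for cat-parts): a qubit maximally entangled with a partner carries one full bit of entanglement entropy, while an unentangled qubit carries none. A Cat has something like Avogadro-many parts whose entanglement budgets are already spoken for, which is roughly why the whole beast is never cleanly superposed.

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of qubit A for a 2-qubit pure state."""
    m = psi.reshape(2, 2)    # amplitudes arranged as a 2x2 matrix
    rho_a = m @ m.conj().T   # reduced density matrix of qubit A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                           # (|00> + |11>)/sqrt(2)
product = np.kron(np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))   # |0> x |+>

print(entanglement_entropy(bell))     # 1.0 -> maximally entangled
print(entanglement_entropy(product))  # 0.0 -> definite, unentangled
```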
Black Hole Wigner
In the lecture by Renato Renner he provides a black hole information version of the extended Wigner’s Friend inconsistency theorem. It seems all OK. The set-up is that we have a black hole complementarity situation that gets uncomplemented when a third agent, Delores, checks first with Wigner outside, then goes and confuses Alice, who fell inside. Apparently the inconsistency can then be generated. I was half asleep during the lecture so was not entirely sure how complementarity does not still hold, but I guess Delores is going to tell Alice that Alice will certainly measure ‘up’ on a spin if and only if she has measured ‘down’.
That is to say, Delores will observe Alice reporting ‘up’ but only when in actual fact Alice herself measured ‘down’, or at least ‘not up’.
Since Delores is the nerd neoliberal physicist telling Alice what she saw, it is actually Delores who got confused. (Is she really a ‘Sharon’?) She tried to tell the truth, but got it wrong.
It seems like that’s the story. All good from what I can tell. QM as formulated or interpreted in terms of fictional Heisenberg Cuts is inconsistent. That’s what T4G would agree with.
Black Hole Boy: Do not try and cool the horizon; that’s impossible. Instead, only try to realize the truth.
Geo: What’s that?
Black Hole Boy: There is no horizon.
… “Then you’ll see that it is not the horizon that heats; it is only your coffee.”
However…
… at one point Renato starts using unphysical language, like, “to the in-falling observer the Hawking radiation does not exist.”
I had to write to object:
@3:04:00 the Hawking radiation is not “observer dependent”. The detection of the Hawking radiation is frame dependent. Big difference. By basic analysis the free-falling frame nerd can compute the fact they are crossing an horizon, and can therefore also infer there is Hawking radiation around them; they just cannot detect it relative to vacuum noise. YAIoAoEiNEoA.
Yet another instance of absence of evidence is not evidence of absence.
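To see why the detection is hopeless in practice, plug numbers into the standard Hawking temperature formula, T = ħc³/(8πGMk_B). Back-of-envelope only:

```python
import math

# Standard Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
k_B = 1.380649e-23      # J/K
M_sun = 1.989e30        # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))       # ~6e-8 K, tens of nanokelvin
print(hawking_temperature(10 * M_sun))  # ~6e-9 K, colder for bigger holes
# The CMB alone sits at ~2.7 K, so for any astrophysical black hole the
# radiation is buried many orders of magnitude below the ambient noise.
```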
I am so not in the mood for a long essay, so that’s it for today.
((I am so phreakin’ strongly and bitterly anti-Positivism.))
Wigner’s Dope Redux
Curt had Renato Renner here on the podcast. A nice episode. They discuss the Frauchiger–Renner No Go theorem from many different angles.
It only reinforced my take on it — which is that one indeed cannot apply QM willy-nilly. The first principle they offer up for sacrifice is
Q: QM is universal.
The other two were:
C: The weak generalized Copernican Consistency Principle — two agents describing the same event must agree (up to Lorentz transformation), not on the possibly epistemic ‘quantum state’ but just on when they are certain what the outcome of a measurement will be.
S: A measurement carried out by an agent has a definitive outcome from the viewpoint of the measuring agent.
The generalized Consistency Principle is a bit dicey still. Some folks at IBM (for one lot) have attacked it by claiming that we should not expect outcomes of measurements made in different computational bases to be in agreement. But I think they are idiots. Measurement outcomes should be basis independent. The probabilities need not be. Our choice of measurement basis should not affect reality. And the F–R paradox is framed, I believe, as an inconsistency in final measurement results, not probabilities.
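Here is the distinction I mean in five lines (toy numpy, my own illustration, not the IBM group’s argument): the state is one thing; the Born probabilities you compute from it depend on which basis you measure in; but a recorded outcome, once obtained, is just a fact that every agent can agree on.

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)  # the state |+>

z_basis = [np.array([1, 0]), np.array([0, 1])]     # |0>, |1>
x_basis = [plus, np.array([1, -1]) / np.sqrt(2)]   # |+>, |->

print([abs(b @ plus) ** 2 for b in z_basis])  # [0.5, 0.5]: a coin flip in Z
print([abs(b @ plus) ** 2 for b in x_basis])  # [1.0, 0.0]: certainty in X
# Same state, different basis, different probabilities. None of that
# licenses two agents to disagree about an already-recorded outcome.
```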
In any case, I am singling out the first principle as the one which needs to be discarded, because I think — on physical grounds — it is indeed false! It is not a good principle.
But as Renato explains, this is not a simple principle; it has subtleties, because there are competing views of what it means for QM to be universal. That alone makes it a sensitive point of failure, imho.
In T4G theory we would say there is no longer a quantum cobordism when the critical systems that need to be entangled for any interference at all to occur become unentangled. This is what happens when any of the Players in the Frauchiger–Renner Wigner’s Game makes a measurement. You cannot model the spacetime cobordism after a measurement has occurred with superposition and interference, precisely because you have lost the physical direct cause of these phenomena, namely entanglement.
Renato takes a fourth escape route these days (he mentions he changes his mind because he is never quite sure about what the gedankenexperiment really implies) — which is that he now thinks the gedankenexperiment is invalid because the near-enough real-world experiment would be impossible. Why? Because, he says, of the four agents presumed in the minimal F–R gedankenexperiment: Alice needs to have a larger system than Bob to make her measurements, Bob needs a larger system than Charlie, and Charlie needs a larger system than Wigner, but finally Wigner needs a larger system than Alice.
I have sympathy for this view, because it is the end result of the T4G escape, which is that QM is not truly universal. But I need to heavily caveat this! When I do, it comes across quite a lot closer to Renato’s view. It also reminded me a little of Scott Aaronson’s “Quantum Necromancy Hard Theorem” — the Cat is not alive & dead at the same time, since this would be tantamount to necromancy (raising the dead). More technically, Aaronson’s result says it does not matter.
Aaronson: “We prove that, if one had a quantum circuit to determine if a system was in an equal superposition of two orthogonal states (for example, the |Alive〉 and |Dead〉 states of Schrödinger’s cat), then with only a slightly larger circuit, one could also swap the two states (e.g., bring a dead cat back to life). In other words, observing interference between the |Alive〉 and |Dead〉 states is a ‘necromancy-hard’ problem, technologically infeasible in any world where death is permanent.”
It is a beautiful result I have always thought.
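The linear-algebra kernel of it fits in a few lines. A toy caricature of my own (with |Alive⟩ = |0⟩ and |Dead⟩ = |1⟩, nothing here is Aaronson’s actual circuit construction): the Hadamard that would let you observe interference between the two branches is, up to one phase flip, exactly the gate that swaps Alive and Dead.

```python
import numpy as np

# Caricature: |Alive> = |0>, |Dead> = |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # rotates into the interference basis
Z = np.diag([1, -1])                          # phase flip between the branches
X = np.array([[0, 1], [1, 0]])                # the "necromancy" swap: Alive <-> Dead

# Seeing interference means measuring in the |Alive> +/- |Dead> basis,
# i.e. applying H coherently to the whole cat. But then:
print(np.allclose(H @ Z @ H, X))  # True: H Z H = X
# A machine that can display the interference is one phase flip away from
# a machine that raises the dead.
```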
Note that the theorem does not really say the superposition is impossible, but just that we can never confirm the superposition exists. However, in Jacob Barandes’ view the superposition was never “real”, and was always an accounting tool. That is consistent with Aaronson’s theorem of course. You have to be wary though of experimentalists who routinely publish results showing superpositions of systems like entire atoms or molecules. What are they really showing? From what I can tell they are only showing the results of many ensembles of measurements.
When a fuzzy glow from Two Slits, for example, is detected, you have not duplicated matter! That’d violate conservation laws that really are sacrosanct. But it is possible for ER=EPR wormhole traversals to generate short-time correlations between the Two Slits. Over a longer time you can thus expect to see two fuzzy glows, from a single particle, near both slits. Necromancy of whole atoms is hard, but not too hard — a gazillion times easier than necromancy on a Cat, since proverbially no atom is actually “alive”.
Back to Renato Renner, and his new opinion that the gedankenexperiment is impossible. We might say, “Duh! No kidding!” But that’d be a bit too unfair. It is interesting to consider why, even with technology as advanced as cosmically possible, the gedankenexperiment is ill-formed.
In T4G:
When entanglement structure collapses (literally, the ER=EPR wormhole bridges break, reforming elsewhere) then strict QM no longer applies to whoever’s measurements get implicated; they must reason classically now!
Hence once Bob measures “Alice” then Alice too has to re-analyze the whole experiment classically, and so do Charlie and Wigner.
So Charlie never has to apply any QM superposition principle to analyze Alice’s system; Charlie would know Alice is evolving classically — at least minimally with respect to the spin she is measuring.
Note at no stage are we saying QM is inapplicable, all we need to note is that entanglement gets destroyed (or reformed elsewhere).
As long as the reformed entanglement structure does not put the Alice, Bob, Charlie, and Wigner systems back into Wigner-style entanglement (and hence back into superposition), then all is fine: classical experimental reasoning applies, and presumed superposition (or “many worlds” crutches) no longer applies to those particular measurements.
Renato’s argument about needing a circular chain of larger systems does not even get going in T4G.
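For concreteness, here is a cartoon of the T4G bookkeeping from the list above. This is pure illustration of my own (the agents, link names, and rules are invented for the sketch; nothing here simulates actual physics): entanglement is a set of links, a measurement breaks the links through the measured system, and anyone whose prediction would route through a broken link must reason classically, so the circular chain of bigger labs never gets started.

```python
# Cartoon bookkeeping, not physics: entanglement as a set of undirected links.
links = {frozenset({"Alice_lab", "spin"}), frozenset({"spin", "Bob_lab"})}

def measure(system):
    """A measurement breaks every entanglement link through `system`
    (in the T4G picture the entanglement reforms elsewhere, off-stage)."""
    links.difference_update({link for link in links if system in link})

def reason_classically_about(a, b):
    """No surviving link means no interference to account for."""
    return frozenset({a, b}) not in links

measure("spin")  # Bob measures the spin
print(reason_classically_about("Alice_lab", "spin"))  # True: Alice goes classical
print(reason_classically_about("Bob_lab", "spin"))    # True: so do Charlie and
                                                      # Wigner; no circular chain
```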
Another way to say this is that QM is still universal, but QM is not the theory of everything in superposition all the time. In this light Jacob Barandes’ Indivisible Stochastic QM is in agreement — the superpositions are fictions, mere accounting methods for tracking entanglement structure that we cannot perceive or measure, since attempts to probe ER=EPR wormholes tend to collapse them! Literal collapse now, not weird metaphysical Copenhagen collapse.
All these metaphysics issues about quantum mechanics really always seem to melt away when we realize gravity is the cause of quantum mechanics. You do not re-quantize gravity, it was already a quantum theory, but classical GR was ignoring the Planck scale wormhole topology.
Probability Interpretations of QM Do Not Cut It
Renato once was a Many Worlder, and then once was a QBist. He is now more agnostic, but still has intuitions about what Nature is trying to tell us. Disappointingly he still seems to think QM is somehow about subjective probability, and that we just currently do not have any idea for a mathematical measure theory for such terms. (Bayesianism is not adequate here; he explains this around 20 minutes in.)
I agree with Renato that there is only quantum mechanics and there is no quantum/classical divide. So I do not believe in “collapse” theories. But this is for subtler reasons than Renato gives. He just thinks it is ugly. Whereas in T4G theory we do have collapse of wormhole structure, which is nothing more than breaking of entanglement between particles. And since entanglement is monogamous, the breaking is only ever between two elementary particles. So bulk systems still remain partially entangled, pairwise, even when one pair loses entanglement. This is how a classical world seems to emerge out of the quantum. No such thing occurs; it is all always quantum, but sometimes with less entanglement internal to a system than at other times. When we make measurements we probe and disturb the systems and so break a lot of entanglement (which always reforms elsewhere).
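The monogamy claim can at least be sanity-checked numerically. A small sketch using the standard partial-transpose negativity test (toy three-qubit states only): in a Bell pair, A and B are maximally entangled and C is shut out entirely; in a GHZ state, where C shares the correlations, no pair is entangled at all.

```python
import numpy as np

def negativity(rho):
    """Entanglement negativity of a two-qubit state via partial transpose."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(pt)
    return float(-evals[evals < 0].sum())

def reduced_AB(psi):
    """Trace qubit C out of a three-qubit pure state."""
    m = psi.reshape(4, 2)
    return m @ m.conj().T

# Bell pair on A,B with C in a product state, vs. a three-way GHZ state.
bell_x_C = np.kron(np.array([1, 0, 0, 1]) / np.sqrt(2), np.array([1, 0]))
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

print(negativity(reduced_AB(bell_x_C)))  # 0.5: A-B maximally entangled, C shut out
print(negativity(reduced_AB(ghz)))       # 0.0: once C shares the correlations,
                                         # pairwise A-B entanglement is gone
```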
Once again I find myself somewhat sympathetic, but also frustrated. I can tell a better story. Once you appreciate the “wavefunction” ψ is a transformation instruction, not a model of physical reality per se, then it becomes easy to think about QM, especially if you take ER=EPR seriously.
The issue is that ER-bridges are not EPR-bridges. I’ve been recently writing about this. I identify three separate bridges in quantum theory.
EPR-bridge — the stochastic correlations.
ER-bridge — the physical wormhole topology, which makes spacetime deeply non-Minkowski, which is why EPR correlations can exist.
The CC-bridge — the algebraic “connexion” in complexified Clifford algebra. This is what I use in T4G theory to model the wormhole topology.
You can see they are all closely related. Since I cannot do any geometrodynamics modelling (way too difficult) the algebraic modelling suffices.
However, these subtleties are often lost on many physicists, especially the Many Worlders, QBists, operationalists, and field theorists in general. Field theorists have to think about entanglement in mystical terms, and are hard pressed to explain why entanglement is monogamous; they have no rhyme nor reason for this. Their “explanation” is just an appeal to the linear algebra of the Hilbert space. An EPR state simply can never be entangled with any other qubit. I, on the other hand, do have rhyme and reason: I can think in particle terms. It all then makes sense. At least it did yesterday.
To be clear, I can also appeal to the Hilbert space theorems, the difference is that T4G theory explains why the Hilbert space forms ψ are effective in describing stochastic processes in the first place!

