Szydagis' point 3: Interstellar travel is too hard

I view this as insufficiently abstract reasoning. If a hypothesis about advanced life forms can be expressed mathematically, then it will be one of provable, undecidable, or false *whether or not* there are advanced life forms to ponder over it. I don't see the need for the limitation that the system can only be examined from the perspective of something in the system. A simulation scenario would be an alternative to the mathematical one, too. Why shouldn't the ultrabeings give the same attention to the sim runs that produced no advanced life as to the ones that did produce it? That we're inside one of them pondering it too isn't necessary for that pondering to take place.

No, the anthropic principle is perfectly sound reasoning for anything that requires our very existence in order to observe. It can be used in any situation where the question is 'how come we are here?'

The anthropic principle can be ( and has been ) used to explain the 'fine tuning' of the universe. All those little tweaks to fundamental constants and parameters that seem 'designed' to create intelligent life. Of course, the real situation is that were the constants and parameters not what they are....we likely would not be here to ponder over it. That then becomes the basis for belief in a multiverse, in which we just 'happen' to be in a universe with the 'right' parameters.

And the same does apply to aliens. Your imagined 'simulation scenario' requires the aliens to exist in the first place to run the scenario ! Someone has to exist in order 'for the pondering to take place'.
 
It would seem to me that any probability calculation on complex life in our one known universe would have to postulate random genetic mutation by radiation trauma (a) as either infinite or finite in the possible configurations unfavourable for life it can generate (without even considering other causes of genetic mutation such as incomplete chemical processes), and (b) with a figure for the range of favourable configurations. For ideological rather than scientifically valid reasons people tend to have very strong opinions as to which way to postulate. Those prone to embrace 'design' postulate one way, whereas those allergic to it postulate another. Both claim to speak on behalf of established science citing this and that author and model, and yet, in terms of empirical science, we don't have the first damn clue.

(1) If there's theoretically an infinite number of ways for radiation trauma to impact genes unfavourably at random variance, and yet;

(2) If there's only a finite stretch of punctuated evolutionary periods during which these mutations must occur at a certain averaged frequency to cause more complex species;

(3) Then the probability for even a single organism to produce mutated offspring falling within a finite range of more complex favourable mutations is zero, let alone for their offspring to continue mutating favourably within mutant subpopulations into more complex and therefore less probable configurations.

But:

If it's a finite catalogue of unfavourable and favourable options which can be fully randomly exhausted (including the equivalents of human brains) by genetic mutation within say 600 million years anywhere in the universe, and/or otherwise bound towards complexification of life over time (for whatever reason), then complexification of life towards some manner of complex brains is likely or even inevitable on other similar planets given sufficient time.
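
A toy numerical contrast between those two branches (purely illustrative, my own made-up numbers, and assuming configurations can even be treated as discrete equiprobable draws, which is itself a big assumption): as the pool of unfavourable configurations grows without bound, the per-trial chance of a favourable one collapses towards zero, whereas with a finite pool enough trials make a favourable hit near-certain.

Code:
# Toy sketch, not a model of real genetics: a fixed 'favourable' catalogue
# versus an unfavourable catalogue that either grows without bound or stays finite.
favourable = 1_000

# Branches (1)-(3): let the unfavourable catalogue grow; the per-trial probability -> 0.
for unfavourable in (10**6, 10**12, 10**24, 10**48):
    p = favourable / (favourable + unfavourable)
    print(f"unfavourable ~1e{len(str(unfavourable)) - 1}: P(favourable per trial) ~ {p:.1e}")

# Finite-catalogue branch: with a fixed finite pool, repeated trials make a hit likely.
p = favourable / (favourable + 10**12)
trials = 10**10
print(f"P(at least one favourable hit in {trials:.0e} trials) ~ {1 - (1 - p)**trials:.2f}")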

So either we 'leave' our one-universe model and invoke the infinite-universes metatheory or something equivalent (i.e. whatever model postulates infinite experiments to exhaust infinite options) to preserve random variation that is infinite and unbiased by 'design' within the model, while factoring in our human existence as an amazing series of consequences that occurred successively on earth and nowhere else in 'our' universe.

Or else the catalogue of possible configurations in our model must be finite and (whilst still extensive) variously biased towards complex life, and thereby bound to produce human-equivalent level brains under random experimentation given sufficient time and earth-like conditions anywhere in the universe.

Scientifically, we simply don't know which one it is. We're just wildly speculating. Philosophically, however, it's a whole 'nother convo! ;)
 
Or else the catalogue of possible configurations in our model must be finite and (whilst still extensive) variously biased towards complex life, and thereby bound to produce human-equivalent level brains under random experimentation given sufficient time and earth-like conditions anywhere in the universe.
Would it also not be the same for ANY trait that favors survival, not just "big brains?" Or is "big brains" a somehow favored outcome? Not a criticism, just trying to understand your point fully.
 
Would it also not be the same for ANY trait that favors survival, not just "big brains?" Or is "big brains" a somehow favored outcome?

Since the type of consciousness that's capable of producing technologically ever-advancing civilizations is the case in point in this thread, my point was merely to highlight that the assumption of a finite set of potential genetic configurations in our universe, one that includes our brains (which we know are connected to civilization-generating consciousness), is necessary if our civilization is to be generated via a random process without invoking multiverse speculations. And if so, then our civilization would imply the generation of similar brains on other similar planets over similar stretches of time.

Yet, we have no scientific way of knowing whether such a set of potential genetic configurations in our universe is finite or infinite.
 
Kipping's actual position, which he has stated very regularly, is 'we simply don't know'.
That seems perfectly reasonable.

But the paper we're discussing isn't an analysis of the possibility of life starting early on other planets, and Kipping doesn't address the possibility of worlds where conditions suitable for abiogenesis, or, once life is established, stable biospheres, might persist for billions of years longer than they might on Earth.

External Quote:
If this - if the time that it took on Earth, for the one example we know about, is typical, this is typically how long it takes for a single cell to go all the way to something like us that can think about the universe,
-From the public video by David Kipping in Scaramanga's post, my emphasis.

Kipping takes the time that intelligence took to evolve since the start of life, and points out this is the one example we know about. (He does briefly discuss the possibility of a later start and shorter intervening time before intelligence, describing this idea as a hard sell.) He therefore takes the time that it took for intelligence to evolve on Earth- our one example- as a possible indicator of how long it might take intelligence to evolve elsewhere. Again, in the absence of other data a reasonable "rule of thumb".

But when it comes to an early start of life on Earth- "...for the one example we know about...", he doesn't use the same logic.
Why not?

I'm not sure there's any strong reason for our sort of intelligence to evolve at all, even in a richly diverse biosphere lasting for billions of years. We know it's possible, we don't know how often intelligence might arise if there are other habitable worlds.
I suspect intelligence is very rare. We have not seen any evidence of extraterrestrial intelligence so far.
Admittedly (@Scaramanga) I'm basically quibbling over Kipping's paper, not the incidence of ETI.
 
Kipping takes the time that intelligence took to evolve since the start of life, and points out this is the one example we know about. (He does briefly discuss the possibility of a later start and shorter intervening time before intelligence, describing this idea as a hard sell.) He therefore takes the time that it took for intelligence to evolve on Earth- our one example- as a possible indicator of how long it might take intelligence to evolve elsewhere. Again, in the absence of other data a reasonable "rule of thumb".

But when it comes to an early start of life on Earth- "...for the one example we know about...", he doesn't use the same logic.
Why not?


Because I think you have it round the wrong way....

Kipping is countering a specific argument. That argument is essentially ' life started early on Earth, so life must be easy to get going'. In other words, there are those who use the early start of life as a basis for arguing that life is easy.

Kipping is basically arguing that this is observer bias, because if life hadn't started early then we might not ever be around to ponder it. Thus the early start of life becomes a pre-requisite for beings to even ponder life starting early. There simply aren't any beings to ponder it on planets where life starts too late. It's actually an extremely good and clever application of the Anthropic Principle.
 
Yet, we have no scientific way of knowing whether such a set of potential genetic configurations in our universe is finite or infinite.
Since the number of atoms in any organism of any sub-planetary size is going to be finite, the number of configurations must be finite. True, it can be a whopping great number, but it's finite.
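
To put a rough (and purely illustrative) number on 'whopping great but finite': counting only linear sequences, a 100-residue protein drawn from the 20 standard amino acids already admits about 10^130 orderings, and a million-base genome vastly more, yet every such count is finite.

Code:
import math

# Toy counts only -- illustrating 'astronomically large but finite', not real biology.
protein_length = 100       # residues in a smallish protein
amino_acids = 20           # standard amino-acid alphabet
print(f"possible {protein_length}-residue sequences: ~1e{protein_length * math.log10(amino_acids):.0f}")

genome_length = 1_000_000  # bases, roughly bacterial scale (assumed round figure)
print(f"possible {genome_length}-base genomes: ~1e{genome_length * math.log10(4):.0f}")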
 
Since the number of atoms in any organism of any sub-planetary size is going to be finite, the number of configurations must be finite. True, it can be a whopping great number, but it's finite.

According to standard (non-quantized) relativity, infinity is a reality in the known universe at least in the form of a continuous parameter such as spacetime. For example, there's an infinity of points between any two points in spacetime where an object (such as an electron, atom or a protein) could be positioned, insofar as we assume spacetime to be continuous rather than 'grainy'.

If spacetime is continuous and if there are no other underlying constraints acting upon how entities (such as biological organisms) are configured, there's therefore an infinite number of ways they can be configured even if they consist of but a finite number and kind of constituent elements within a limited region of spacetime. Hence each individual atom would also be truly unique in its exact shape and position, even relative to just one other atom.

There's an interesting article on infinite combinatorics in biology:

Article:
Is it possible to apply infinite combinatorics and (infinite) set theory in theoretical biology? We do not know the answer yet but in this article we try to present some techniques from infinite combinatorics and set theory that have been used over the last decades in order to prove existence results and independence theorems in algebra and that might have the flexibility and generality to be also used in theoretical biology.

. . .

it is still not possible to explain or predict the spatial structure and thus above all the function of the encoded protein from a genetic sequence of bases alone. This is certainly one of the core problems of modern research in this field. Similarly, the very complex translation of the genetic information into proteins - called protein synthesis - is sufficiently understood from a biological point of view, however, its robustness against potential errors is still a miracle.


To recap, we don't know whether such potential configurations of atoms or proteins are infinite or not. And science has very few tools to establish it either way.
 
I don't like the anthropic principle! ;)

The trouble with the principle and multiverse theory is threefold:

(1) Mathematical/computational (the anthropic principle theorists are often not familiar with set theory): The cardinality of the continuum (the infinity of real numbers R) is infinitely greater than the cardinality of natural numbers (the infinity of N). As Cantor showed, there are different sizes of infinities.

The hypothetical infinite set of universes in any multiverse has the cardinality of N. Whereas the set of possible variations of parameters within these universes has the cardinality of R, due to an indefinite number of continuous variables that could be identified or imagined.

Let one sub-category of possible universes be such that each universe solely consists of geometric triangles of a fixed shape and size (without even broaching the theme of possible universes that are not mathematically/logically consistent, or which operate under different maths altogether). Since the possible sizes and shapes of geometric triangles vary continuously along R, and since the infinity of R is infinitely greater than the infinity of N universes in a multiverse, which includes our universe, the probability of our universe would be reduced to 0. One can only imagine the absurd cardinality of the super-R when we start adding further parameters and complexity (beyond triangles) to possible universes whereby each parameter varies along R.
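
For what it's worth, the measure-theoretic version of that zero-probability step (stated in general terms, not as an endorsement of the triangle-universe setup) is simply that any countable set has probability zero under a continuous distribution:

P(X \in C) = \sum_{c \in C} P(X = c) = \sum_{c \in C} 0 = 0 \quad \text{for any countable } C \subset \mathbb{R} \text{ and any continuous random variable } X.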

For any multiverse theorist to start limiting the number of theoretically possible universes from R to N, and to insist they must fall within an ad hoc constraint whereby these possible universes only vary by the different values given to a small set of known physical constants commonly used in the fine-tuning argument, is contrivance at best to fit the anthropic principle, and ideologically motivated reasoning at worst.

(2) Scientific: The infinite multiverse theory is unscientific and untestable as it yields no testable predictions and assumes way more than necessary to explain known physical phenomena, just as the God-hypothesis does.

(3) Paradoxical contrivance: The infinite multiverse theory is contrived to explain away theistic fine-tuning arguments (despite being a theist myself, I find fine-tuning arguments for God very problematic) rather than arising from any requirement of experimental physics to explain observable physical phenomena. And yet, in so doing, it too represents a fine-tuning (by us -- an intelligent creator of the theory) of its own kind to account for our universe where sentience could arise.

In short, it's 'sciency' nonsense rather than science -- replete with contradictions when studied more carefully. But the ostensible arguments advanced by the anthropic principle, to its credit, when examined without a critical depth of analysis, have a certain cleverness to them.
 
... The infinite multiverse theory is contrived to explain away theistic fine-tuning arguments (despite being a theist myself, I find fine-tuning arguments for God very problematic)...
I very much agree with you on 'fine tuning arguments' being problematic. Actually, I think there is no theistic 'fine tuning argument' at all. Indeed, when all the evidence is put in, avoiding apologetical cherry-picking, it turns into a rather strong argument for a-theism. 'Fine tuning' could be interesting for a new thread.
 
I don't like the anthropic principle!

Me either, but it is philosophically very sound....as long as it is only used in situations where there is a dependency loop such that our existence to observe a particular phenomenon depends on the very pre-requisite we are analysing.

The real issue is that it is very hard to precisely determine whether the anthropic principle applies. For example, Jupiter sized planets are actually quite rare...less than 10% of stars have them. So, when one asks why does our system have a Jupiter, it may be that there is such a correlation between Jupiter type planets and Earth type planets that we would not be here to debate the issue if there were no Jupiter.

But that is, in turn, precisely why I don't like the anthropic principle. Though it is based on sound logic, there is no real cut off point for it. It works well for certain high level observations...but one could apply it to almost any observation that requires an observer.

Incidentally, this is also precisely what lies behind the 'consciousness alters reality...collapses the wave function' interpretations of quantum mechanics. It is essentially the anthropic principle at work again.
 
(2) Scientific: The infinite multiverse theory is unscientific and untestable as it yields no testable predictions and assumes way more than necessary to explain known physical phenomena, just as the God-hypothesis does.

I've seen it claimed numerous times that inflation theory demands a multiverse. I'm pretty sure the originator of inflation theory, Alan Guth, has said so. However, this is often muddled with string theory and its 10^500 variants, as there is actually nothing in inflation theory that demands that the inflationary multiverse has different laws in each 'bubble'. Inflation would seem to just lead to an infinity of universes with the same laws.....which would not resolve 'fine tuning' at all.

Also, there is something rather perverse about using the laws of physics to argue that the laws of physics could be different ! It's on par with using 2 + 2 = 4 to argue that 2 + 2 could be 5, or 3, or 2317. There is a self-defeating loop in that.
 
Incidentally, this is also precisely what lies behind the 'consciousness alters reality...collapses the wave function' interpretations of quantum mechanics. It is essentially the anthropic principle at work again.

Um, I don't like that either! I don't get all the wavefunctions waiting for an observer before collapsing.

That said, I thought the Copenhagen Interpretation was the recipe for Danish pastries in English, so I have to concede that you (@Scaramanga) have a better understanding of these things.

(So annoying when that happens :))- And I've taken us off-topic, soz.
 
I've seen it claimed numerous times that inflation theory demands a multiverse. I'm pretty sure the originator of inflation theory, Alan Guth, has said so. However, this is often muddled with string theory and its 10^500 variants, as there is actually nothing in inflation theory that demands that the inflationary multiverse has different laws in each 'bubble'. Inflation would seem to just lead to an infinity of universes with the same laws.....which would not resolve 'fine tuning' at all.

Also, there is something rather perverse about using the laws of physics to argue that the laws of physics could be different ! It's on par with using 2 + 2 = 4 to argue that 2 + 2 could be 5, or 3, or 2317. There is a self-defeating loop in that.

There's also something deeply unscientific about (Guth) adding further layers of abstraction (multiverse) to an already speculative theory (inflation) that has yet to be empirically verified by observing its main 'smoking gun' prediction of long wavelength gravitational waves. Not sure Neil Turok's mirror universe or other proposed models are much better at this stage though. But the beauty of (honest and humble) science is that we know when we're still at the stage of utter ignorance about a given topic.
 
Um, I don't like that either! I don't get all the wavefunctions waiting for an observer before collapsing.

That said, I thought the Copenhagen Interpretation was the recipe for Danish pastries in English

Some of these poorly baked (har har) standard interpretations naively assume the quantum wavefunction (i.e. essentially the solution of a differential equation) to be a 'real object' which somehow 'collapses' (a notion employed in the so-called Copenhagen and Ghirardi-Rimini-Weber theories/interpretations of QM) into the definite positions and properties observed at measurement. Yet historically it was devised as a statistical construct to predict measurement outcomes -- a mere calculational algorithm which itself cannot be read literally as a description of the quantum reality. It 'may' describe a wave-like aspect of quantum reality but is utterly inadequate in explaining the particle-like observations in, say, the double-slit experiment and such. Conflating the two into one seamless entity and claiming it to be an indisputable brute fact (Bohr) without any further parameters accounting for the 'collapse' is unscientific and an unjustified theoretical leap of faith.

This mistake is partly analogous to the Ptolemaic calculational rules (circles on circles) which generated remarkably accurate predictions of planetary positions as observed from Earth, while not matching the real motions of the planets.

More precisely, historically the quantum wavefunction as we know it is a product of known mathematics (a wave equation playing a role analogous to Newton's second law) applied by Schrödinger to a (then) new context of quantum states, with absolute squares (the Born rule) taken to turn negative and imaginary amplitudes into probabilities. It was consciously developed to predict measurement outcomes, not to describe underlying quantum reality or its interaction with observers -- those are later literalistic confusions made by influential physicists who have had a great sociological impact on our (erroneous) thinking and false dichotomy between 'classical' and quantum physics.

Just as in classical physics, observations (measurements) have precise values in quantum physics too. The only difference is that the wavefunction is largely an 'instrumentalist' construct (a mathematical tool to predict measurements) rather than a 'realist' theory (i.e. an explanation of an underlying reality beyond observation -- the standard kind of scientific theory). Hence a naive literal reading of the wavefunction as 'reality' interprets these precise measurements as a 'mysterious' collapse of this imprecise 'thing' called the 'wavefunction'. Observer-dependency is just an artefact of such naive readings, where these collapses are claimed to be observer effects of various kinds (subjective, objective, pick your theory).
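
As a minimal sketch of that 'tool to predict measurements' reading (a toy two-state example of my own, not tied to any particular interpretation): the recipe takes complex amplitudes, squares their absolute values, and hands back outcome statistics.

Code:
import numpy as np

# Born-rule recipe as a bare calculational algorithm: amplitudes in, statistics out.
psi = np.array([1 + 1j, 2 - 1j], dtype=complex)   # unnormalised amplitudes (toy values)
psi = psi / np.linalg.norm(psi)                   # normalise so probabilities sum to 1

probs = np.abs(psi) ** 2                          # Born rule: |amplitude|^2
print("predicted outcome probabilities:", probs)  # ~[0.286, 0.714]

# 'Measurements' here are just samples drawn from that predicted distribution.
outcomes = np.random.choice(len(psi), size=10_000, p=probs)
print("simulated measurement frequencies:", np.bincount(outcomes) / len(outcomes))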

The stochastic character as well as the strange action-at-a-distance of certain quantum observations, together with the positivist/instrumentalist philosophy of science of the likes of Bohr and Heisenberg (in opposition to Einstein's realist -- a more standard -- philosophy of science), contributed to QM devolving into debates on observer effects etc.

Otto Neurath, a positivist philosopher, strongly influenced Bohr's thinking, which has since been uncritically parroted by QMcians under the fuzzy label of 'Copenhagen interpretation'. Under Bohr's influence the Copenhagenists claim the wavefunction is a complete description and its seemingly irrational collapse is an unresolvable mystery beyond our ken. Hence, the quantum physicists had better just "shut up and calculate."

Ever since Bohr's powerful sociological impact on quantum physics, attempts to really understand what's going on at the quantum level (including Bohmian mechanics) have even been frowned upon in many physics departments the world over at least as far as quantum mechanicians are concerned. This situation has however changed significantly for the better in recent decades.
 
There's also something deeply unscientific about (Guth) adding further layers of abstraction (multiverse) to an already speculative theory (inflation) that has yet to be empirically verified by observing its main 'smoking gun' prediction of long wavelength gravitational waves. Not sure Neil Turok's mirror universe or other proposed models are much better at this stage though. But the beauty of (honest and humble) science is that we know when we're still at the stage of utter ignorance about a given topic.

I agree, but at the same time I have a hard job accepting Sabine Hossenfelder's ( and others' ) stance of fine tuning just being a 'brute fact'. One can go too far with the empirical approach. The common 'brute fact' approach is that we can't talk about what other values constants 'might' have had as we have zero evidence that any 'other' could even exist and no empirical evidence of such.

I don't accept that dismissal. The reason being that, for example, nobody has ever dealt a pack of cards with all the cards in suit and number order. The odds are astronomical, yet the fact that nobody has ever empirically observed it happen doesn't mean we can't calculate and discuss the odds.
 
Um, I don't like that either! I don't get all the wavefunctions waiting for an observer before collapsing.

Strictly speaking the issue is not so much wave function collapse ( which may or may not even happen....pick your interpretation ) but a far more baffling issue called the Measurement Problem.

The Measurement Problem is an inability to define what actually constitutes a measurement. No such thing as a 'measurement' actually occurs in nature...which has no idea that anything is being 'measured'. It is a purely human concept. And that, rather than wave function collapse, is why one gets the associations with 'the observer'. Not even the 'Wigner's Friend' experiment, where one observer observes another observer, can resolve this conundrum.
 
The reason being that, for example, nobody has ever dealt a pack of cards with all the cards in suit and number order. The odds are astronomical, yet the fact that nobody has ever empirically observed it happen doesn't mean we can't calculate and discuss the odds.
I think perhaps your analogy is poorly chosen. Since there have been an enormous number of people playing a much, much greater number of card games, it's an unwarranted assumption that it's never happened. The odds of it happening in suit and numerical order are, of course, exactly the same as for any other random hand, and since with cards we are dealing with a small handful of defined values, the math is relatively simple.

You still can't get around the fact that we have no way to determine if there could have been other constants in the formation of the universe. We have what we have; there are no other universes to study.
 
Strictly speaking the issue is not so much wave function collapse ( which may or may not even happen....pick your interpretation ) but a far more baffling issue called the Measurement Problem.

The Measurement Problem is an inability to define what actually constitutes a measurement. No such thing as a 'measurement' actually occurs in nature...which has no idea that anything is being 'measured'. It is a purely human concept. And that, rather than wave function collapse, is why one gets the associations with 'the observer'. Not even the 'Wigner's Friend' experiment, where one observer observes another observer, can resolve this conundrum.

This is actually helpful as I have heard these arguments without the context this discussion is adding. Any chance we can split this off into its own thread "Quantum Mechanics for Debunkers" perhaps?
 
I think perhaps your analogy is poorly chosen. Since there have been an enormous number of people playing a much, much greater number of card games, it's an unwarranted assumption that it's never happened. The odds of it happening in suit and numerical order are, of course, exactly the same as for any other random hand, and since with cards we are dealing with a small handful of defined values, the math is relatively simple.

You still can't get around the fact that we have no way to determine if there could have been other constants in the formation of the universe. We have what we have; there are no other universes to study.

Yes, it's not the best analogy...but it gives a flavour of the point being made. The 'brute fact' argument is that we have no empirical evidence that things could be different. We equally have no empirical evidence for a pack of cards being dealt as described....for which I've just learned the odds are 1 in 10^68. You could deal a hand every second for the entire lifetime of the universe and you'd need a period of time roughly 10^50 times longer than the 13.8 billion years of the universe before you dealt such a hand !

So there is no way you can empirically prove...via actually dealing such a hand.... that such a hand can be dealt. You have to rely on statistical inference. And surely that is precisely what those who argue that 'fine tuning' does need explaining are doing.
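
A quick back-of-the-envelope check of that arithmetic (my own rough numbers, assuming one deal per second and taking the odds to be 1 in 52!):

Code:
import math

# Expected wait for a fully ordered deck, in multiples of the universe's age so far.
orderings = math.factorial(52)                    # ~8.07e67 possible deck orders
seconds_per_universe = 13.8e9 * 365.25 * 86400    # ~4.35e17 seconds in 13.8 billion years

print(f"distinct orderings of a 52-card deck: {orderings:.2e}")
print(f"expected wait at one deal per second: ~{orderings / seconds_per_universe:.1e} universe lifetimes")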
 
Any chance we can split this off into its own thread "Quantum Mechanics for Debunkers" perhaps?

The Measurement Problem is common to all 'interpretations' of quantum mechanics. It is ( rather than wave function collapse ) the real reason for Schrodinger's Cat. Whoever resolves the measurement problem will also resolve which interpretation of quantum mechanics is the 'correct' one.

However it is just one of numerous conundrums that arise with 'observers'. A similar ( possibly even structurally related in terms of thinking ) one is John Searle's 'Chinese Room'.....where the concept being struggled with is 'understanding'. With quantum mechanics the concept being struggled with is 'measurement'.

The entire arena is an Alice In Wonderland rabbit hole. One could certainly dispel a few mis-perceptions, but I suspect any quantum mechanics thread would just devolve into 'interpretation' tribalism.
 
Yes, it's not the best analogy...but it gives a flavour of the point being made. The 'brute fact' argument is that we have no empirical evidence that things could be different. We equally have no empirical evidence for a pack of cards being dealt as described....for which I've just learned the odds are 1 in 10^68. You could deal a hand every second for the entire lifetime of the universe and you'd need a period of time roughly 10^50 times longer than the 13.8 billion years of the universe before you dealt such a hand !

So there is no way you can empirically prove...via actually dealing such a hand.... that such a hand can be dealt. You have to rely on statistical inference. And surely that is precisely what those who argue that 'fine tuning' does need explaining are doing.
Well, technically, that's not true.
I could shuffle a deck and do it right now, if I was lucky. (I'm sure Richard Turner could do it easily...) And I might wait ten times as long as you asked me to, and that hand may not get dealt. Every hand anyone deals has that same tiny chance of getting dealt, yet one hand always does.
Luck is not a universal constant.
 
You could deal a hand every second for the entire lifetime of the universe and you'd need a period of time roughly 10^50 times longer than the 13.8 billion years of the universe before you dealt such a hand !
That's also a mis-statement of probabilities. Nothing says that if you dealt that many hands, you'd get one of each. You could get it on the first hand, or you could get a few dozen of them but none of some other hand with the same odds. ;)
 
That's also a mis-statement of probabilities. Nothing says that if you dealt that many hands, you'd get one of each. You could get it on the first hand, or you could get a few dozen of them but none of some other hand with the same odds.

Well, no, on average you'd get such a hand once every 10^68 seconds.

Of course you 'could' deal such a hand tomorrow. But then, the laws of the universe 'could' change tomorrow using the same reasoning....though we might not be around to make the empirical observation.
 
Going waaay off-topic, and not in any way an original observation, but in films, TV dramas etc. we routinely see the best poker hands dealt, the unexpected winning hole in one or home run, the lottery win, the hero archer hitting the centre of the bullseye- and then splitting that arrow with another.

I guess "improbables", particularly high-stakes improbables, are for some reason the stuff of entertainment.
Wonder if that affects our everyday estimates of the probability of rare events?

-Thinking about it, news and other factual programs often focus on stochastically rare events: The plane crash, freak storm, the act of violence in a pleasant neighbourhood (but not the frequent, similar violence in a favela or township), the unexpected sporting triumph or act of survival against the odds.

We see constant coverage of the unlikely. Mind you, I'd much rather watch a Mission Impossible or Sully than a British kitchen sink drama of the 1960s (at least 96.3% of the time).

(Edited to add: I like science fiction, but we won't see the SF film where there's a crash program to build Earth's first starship,
they get the best scientists, the beautiful young lady who wears vest tops and tailored cargo pants, and the rugged formerly-sidelined hero wrestling with his inner demons, and after 96 minutes they report back to the President,
"Nope. Had a go, can't do it at the moment.")
 
Going waaay off-topic, and not in any way an original observation, but in films, TV dramas etc. we routinely see the best poker hands dealt, the unexpected winning hole in one or home run, the lottery win, the hero archer hitting the centre of the bullseye- and then splitting that arrow with another.

I guess "improbables", particularly high-stakes improbables, are for some reason the stuff of entertainment.
Wonder if that affects our everyday estimates of the probability of rare events?

-Thinking about it, news and other factual programs often focus on stochastically rare events: The plane crash, freak storm, the act of violence in a pleasant neighbourhood (but not the frequent, similar violence in a favela or township), the unexpected sporting triumph or act of survival against the odds.

We see constant coverage of the unlikely. Mind you, I'd much rather watch a Mission Impossible or Sully than a British kitchen sink drama of the 1960s (at least 96.3% of the time).

The trouble is that all those 'hole in one' type examples can be explained away precisely by invoking how many millions of golf shots are made every day. But if there was only ever one golf shot in the entire history of the universe and it was a hole in one....that explanation would not work.
 
I agree, but at the same time I have a hard job accepting Sabine Hossenfelder's ( and others' ) stance of fine tuning just being a 'brute fact'. One can go too far with the empirical approach. The common 'brute fact' approach is that we can't talk about what other values constants 'might' have had as we have zero evidence that any 'other' could even exist and no empirical evidence of such.

I see your point and agree. And yet even if we accept this brute fact, it's a pretty amazing brute fact (for our sakes) and one which unwittingly flirts with 'design' arguments if she isn't in fact suggesting a chancy origin for the universe with only one cosmic roll of the dice. And this is not even broaching the matter of other known meta-laws in our known universe whereby physical reality somewhat amazingly observes neat laws of formal logic and mathematics rather than some totally chaotic 'systems' of configuring physical elements.
 
Strictly speaking the issue is not so much wave function collapse ( which may or may not even happen....pick your interpretation ) but a far more baffling issue called the Measurement Problem.

The Measurement Problem is an inability to define what actually constitutes a measurement. No such thing as a 'measurement' actually occurs in nature...which has no idea that anything is being 'measured'. It is a purely human concept. And that, rather than wave function collapse, is why one gets the associations with 'the observer'. Not even the 'Wigner's Friend' experiment, where one observer observes another observer, can resolve this conundrum.
Uh? I'm sorry to dissent, but nature is constantly 'measuring'. A 'measurement' is just an interaction of a quantum system with something else, strong enough to collapse the wavefunction and convert a fundamentally indeterminate result into a well-defined one. There's no need for an 'observer' at all (even an atom will do), and no philosophical problems I can see.

But I'd rather split a discussion on quantum mechanics to another thread (ditto for the 'fine tuning principle').
 
The Measurement Problem is common to all 'interpretations' of quantum mechanics.

Not so much to the pilot wave interpretation.

The measurement problem was initially treated as a 'problem' only because naive theorists treated the wavefunction as a real, purely wave-like physical state which somehow collapses/transforms at measurement into a totally different state -- an observed particle at a definite location in spacetime.

The pilot wave theory (and the like) treats the wavefunction also as a real wave which, however, guides particles along a certain probability distribution rather than turns into a particle -- much like sea waves guide water molecules. Particles that have both a position and momentum before the measurement, are observed at measurement as predicted by the pilot wave, and continue to exist at a definite location in spacetime after measurement.
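
For reference, the 'guiding' in the de Broglie-Bohm picture is usually written as a velocity field read off from the wavefunction (the textbook single-particle form, with \psi = R\,e^{iS/\hbar}):

\frac{d\mathbf{Q}}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\mathbf{Q}(t)} = \frac{\nabla S}{m}\Bigg|_{\mathbf{Q}(t)}

So the particle always has a definite position Q(t); the wavefunction never turns into it, it only steers it, and an ensemble of positions initially distributed as |\psi|^2 stays |\psi|^2-distributed (equivariance), which is how the usual quantum statistics are recovered.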
 
Of course you 'could' deal such a hand tomorrow. But then, the laws of the universe 'could' change tomorrow using the same reasoning....though we might not be around to make the empirical observation.
I think you're still not understanding probabilities. In your suggested billions of hands of cards, one of them is guaranteed to be the first one dealt out of the finite number of possibilities. It's just as possible that the first would be that perfect deal as it is that it would be any other particular order. Averages might tell you which to place a bet on, but they tell you nothing about individual occurrences. (An aside: I saw a video clip yesterday of a boy of about eight getting a hole in one the first time he ever hit a golf ball.)

But that has no comparison at all to "the laws of the universe", since we don't know (and don't know how we could determine) if they are, so to speak, chosen out of a finite deck of possibilities, or are what they are because there is no other version available, or maybe have interdependencies that would influence each other and bring them all into a stable combination, or whether we are the occupants of this universe but there were untold numbers of failed universes at another time or place. Apples and bananas.
 
Going waaay off-topic, and not in any way an original observation, but in films, TV dramas etc. we routinely see the best poker hands dealt, the unexpected winning hole in one or home run, the lottery win, the hero archer hitting the centre of the bullseye- and then splitting that arrow with another.

I guess "improbables", particularly high-stakes improbables, are for some reason the stuff of entertainment.
Wonder if that affects our everyday estimates of the probability of rare events?
Obviously you can't sell the movie rights to your golf game if nothing exciting happened—there's a bias in what gets made into movies.

"Scientists have calculated that the chances of something so patently absurd actually existing are millions to one.
But magicians have calculated that million-to-one chances crop up nine times out of ten."

Terry Pratchett, Mort
 
It's just as possible that the first would be that perfect deal as it is that it would be any other particular order.

Under the above-cited premise you lay out, all 'other particular orders' would logically be imperfect deals. The point of Scaramanga's analogy was the low likelihood of the perfect deal vs. imperfect deals, not the likelihood of the perfect deal vs. any other particular order.

Scaramanga's understanding of probabilities seems fine to me.

In any probability calculation, the key property to factor in for the fine-tuning advocates is functional purpose. A suit performs a function in a card game. 'Any other particular order' of cards doesn't. A pack of monkeys throwing bricks onto a pile that does nothing is easy and indeed the probable outcome. Throwing them in a way that forms a small building capable of performing the function of a house is a tad more difficult.
 
Uh? I'm sorry to dissent, but nature is constantly 'measuring'. A 'measurement' is just an interaction of a quantum system with something else, strong enough to collapse the wavefunction and convert a fundamentally indeterminate result into a well-defined one. There's no need for an 'observer' at all (even an atom will do), and no philosophical problems I can see.

No, its not that simple. Hence Schrodinger's Cat.

The Schrodinger's Cat issue is 'wave function of what ?' The radioactive particle emitting the neutron that releases the poison ? The radioactive particle AND the poison ? The particle AND the poison AND the cat ? All the prior AND the box the cat is in, AND the observer, AND the lab ?

These definitions are arbitrary, and decided by the observer. The 'wave function' is just a mathematical tool for defining which choice one made. There's zero evidence that wave functions actually 'exist' as physical entities. Nobody has ever seen a wave function.
 
I think you're still not understanding probabilities. In your suggested billions of hands of cards, one of them is guaranteed to be the first one dealt out of the finite number of possibilities. It's just as possible that the first would be that perfect deal as it is that it would be any other particular order. Averages might tell you which to place a bet on, but they tell you nothing about individual occurrences. (An aside: I saw a video clip yesterday of a boy of about eight getting a hole in one the first time he ever hit a golf ball.)

But that has no comparison at all to "the laws of the universe", since we don't know (and don't know how we could determine) if they are, so to speak, chosen out of a finite deck of possibilities, or are what they are because there is no other version available, or maybe have interdependencies that would influence each other and bring them all into a stable combination, or whether we are the occupants of this universe but there were untold numbers of failed universes at another time or place. Apples and bananas.

No....I'd hurl your critique right back. Of course one of the hands of cards has to be the first one and of course the first one could be the 1 in 10^68 one that has all suits in order. But...so what ? I might win the lottery tomorrow...but I'm far more likely not to win it. Likewise, you are vastly unlikely to pick the all suits in order deck of cards on your first attempt.

In fact you'd have to go, on average, a period of time around 10^50 times (a hundred trillion trillion trillion trillion times) longer than the entire lifetime of the universe so far to get that all suits in a row deck if you selected a deck every second. It's comparable with me selecting one atom in the entire universe and you guessing it correctly. Good luck getting it right on the first attempt without 'fine tuning' !

You overlook the fact that I'm disputing the 'brute fact' explanation of fine tuning....and the 'brute fact' explanation does not allow for multiverses. Therefore there are no other deals of the cards. You get ONE shuffle and that's it. The brute fact stance basically says this is the only universe we can measure and talk about....'deal with it'. I find this unscientific...as we could equally have said 'gravity is a brute fact....deal with it'. There has to be some reason things are the way they are....even if it is that they can't be any other way. It is absurd to argue that one cannot postulate that things might have been different, and thus deduce how unlikely what we have actually is.
 
The pilot wave theory (and the like) treats the wavefunction also as a real wave which, however, guides particles along a certain probability distribution rather than turns into a particle -- much like sea waves guide water molecules. Particles that have both a position and momentum before the measurement, are observed at measurement as predicted by the pilot wave, and continue to exist at a definite location in spacetime after measurement.

The pilot wave is just as un-detectable as the standard wave function. It's essentially a 'hidden variables' interpretation, and the problem with hidden variables is they are...hidden ! Pilot wave also has various issues to do with local realism. As the pilot wave theory is essentially a classical theory ( arguing that particles do have exact position and momentum ) then it has to have local realism. However it would appear that a number of recent experiments ( I think they were tests of 'Wigner's friend' hypothesis ) have cast doubt on local realism being correct....and if confirmed then we can completely rule out Pilot Wave Theory.
 