Ever had an idea that felt so natural when you thought it that it was only later you realized you’d never heard anyone else state it? Those ideas are particularly dangerous, as odds are you’re either ignorant of previous research or too wrapped up in your own view.

So I hope you don’t mind if I bounce something off everyone.

Frequentism as it is normally practised doesn’t have a coherent interpretation, but we can fix that. We only have one reality, by definition, and likewise that reality cannot contradict itself. As a consequence, there can only be one hypothesis out there which perfectly describes reality. All other hypotheses are either synonyms for that hypothesis, or disagree with reality in some way. The only way to put a hypothesis to the test is to check its long-term behaviour; if it does disagree with reality, it’ll out itself at some point. This process might take forever if left to chance, so it makes sense to analyse the hypothesis for places where it’s most likely to break with reality, and focus our efforts there.
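
To make that stance concrete, here’s a toy sketch in Python. The “all swans are white” hypothesis and the observations are invented purely for illustration, and a Bayesian counterpart appears a little further down.

```python
# Toy illustration of the falsificationist stance: a hypothesis stays on the
# table only until a single observation contradicts it. The hypothesis and
# the observations below are made up for illustration.

def survives(hypothesis, observations):
    """Return True only if no observation contradicts the hypothesis."""
    return all(hypothesis(obs) for obs in observations)

# Hypothesis: every swan we observe is white.
def all_swans_are_white(swan):
    return swan == "white"

print(survives(all_swans_are_white, ["white", "white", "white"]))  # True: not yet falsified
print(survives(all_swans_are_white, ["white", "white", "black"]))  # False: falsified outright
```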

This is falsificationism, a potent but problematic approach to epistemology.

In the Bayesian interpretation, the only thing that is certain is the Bayesian epistemology itself. Everything else carries a level of certainty somewhere between 100% and 0%, but never equal to those extremes. As there may be an infinite number of hypotheses out there, comparing all of them at the same time may be impossible; instead, pools of hypotheses are created and the rest are ignored. As bits of evidence come in, the certainty levels of those hypotheses may change, or they may not. There’s no guarantee that one hypothesis will emerge triumphant, either. The only assurance is that if the evidence favours a hypothesis, its certainty will increase relative to other hypotheses that aren’t favoured.
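
Here’s a minimal sketch of that kind of pooled updating. The coin hypotheses, the equal priors, and the flip sequence are all invented for illustration; the point is only that certainty shifts within the pool, relative to the other members, and never hits exactly 0% or 100%.

```python
# A minimal sketch of Bayesian updating over a finite pool of hypotheses.
# Each hypothesis claims the coin lands heads with a different probability.
pool = {"fair coin": 0.5, "biased 70% heads": 0.7, "biased 30% heads": 0.3}

# Start with no preference: equal prior certainty within the pool.
certainty = {name: 1.0 / len(pool) for name in pool}

def update(certainty, flip):
    """Shift certainty toward hypotheses that better predict one flip ('H' or 'T')."""
    likelihood = {name: (p if flip == "H" else 1.0 - p) for name, p in pool.items()}
    unnormalised = {name: certainty[name] * likelihood[name] for name in pool}
    total = sum(unnormalised.values())
    # Renormalise so certainties still sum to 1 across the pool; everything is
    # relative, and no hypothesis ever reaches exactly 0% or 100%.
    return {name: value / total for name, value in unnormalised.items()}

for flip in "HHTHHHTH":  # made-up evidence
    certainty = update(certainty, flip)

print(certainty)  # the 70%-heads hypothesis gains ground, but none are declared "false"
```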

The consequences are interesting. No hypothesis is ever considered “false”; the worst you can say is that it’s very unlikely compared to another. That same hypothesis may still be just as certain as others, or even more likely. You can also get into situations where, over one subset of evidence, one hypothesis beats out all comers, yet over another subset it ties with a rival or fares poorly. It’s all very wibbly-wobbly, especially when contrasted with the strict frequentist/falsificationist approach.

But in the end, it’s the way we operate. NASA rarely uses General Relativity; Newtonian Mechanics does the job, as within the domain NASA operates in the two theories give effectively identical results. This is a bizarre choice to make under falsification, as we’ve known GR is the better theory since the 1920s. Under the Bayesian interpretation, though, it’s completely natural.

It’s also more in line with how science evolves. We clearly did not abandon Newtonian Mechanics once the first falsification landed, nor should we have; NM is actually GR, or more accurately it’s what happens when you make some reasonable assumptions and simplify the math behind GR. Falsification sort of covers this, as it says you should replace a toppled theory with another that explains all the old observations plus the new ones, but note the implication that you’re either adding to an existing theory or toppling it entirely. The Bayesian approach, in contrast, implies the old theory is a subset or simplification of the new one. This is a better fit for how NM and GR connect, as it’s a lot easier to simplify GR into NM than to expand NM into GR, as evidenced by Special Relativity’s inability to handle gravity. You also see the same pattern repeat in, for instance, the phyletic gradualism vs. punctuated equilibrium debate, or how Darwinian evolution morphed into the Modern Synthesis.
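
To give a sense of the “simplify GR into NM” direction, here’s the standard weak-field, slow-motion sketch, written in LaTeX (sign conventions and the exact form of the approximations vary by textbook):

```latex
% Weak-field, slow-motion limit of General Relativity (standard textbook sketch).
% The metric is nearly flat, with the time-time component carrying a small
% Newtonian potential $\Phi$:
\[
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right), \qquad |\Phi| \ll c^2 .
\]
% For slowly moving bodies the geodesic equation collapses to Newton's law
% of motion under gravity:
\[
\frac{d^{2}\mathbf{x}}{dt^{2}} \approx -\nabla\Phi ,
\]
% and the Einstein field equations collapse to Poisson's equation:
\[
\nabla^{2}\Phi = 4\pi G \rho .
\]
% Going the other way -- guessing the full field equations from Poisson's
% equation alone -- is far harder, which is the asymmetry noted above.
```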

As strange as the Bayesian interpretation can get, it’s as solid as, or more solid than, any competitor I’m aware of.
