It’s a great summary, going into much more depth than most. I really like how Muller brought out a concrete example of publication bias, and found an example of p-hacking in a branch of science that’s usually resistant to it, physics.
But I’m not completely happy with it. Some of this comes from being a Bayesian fanboi who didn’t hear the topic mentioned, but Muller also takes a weird turn at the end. He argues that, as bad as the flaws in science may be, think of how much worse they are in all our other systems of learning about the world.
Slight problem: there are no other systems. Even “I feel it’s true” is based on an evidential claim, evaluated for plausibility against other competing hypotheses. The weighting procedure may be hopelessly skewed, but so too are p-values and the publication process.
Muller could have strengthened his point by bringing up an example, yet he did not. We’re left taking his word that science isn’t the sole methodology we have for exploring the world, and that those alternate methodologies aren’t as rigorous. Meanwhile, he explicitly points out that only a small fraction of “landmark cancer trials” could be replicated; this implies that cancer treatments, and by extension the well-being of millions of cancer patients, are being harmed by poor methodology in science. Even if you disagree with my assertion that all epistemologies are scientific in some fashion, it’s tough to find a counter-example that affects 40% of us and will kill a quarter of us.
My hope doesn’t come from a blind assurance that other methodologies are worse than science; it comes from the news that scientists have recognized the flaws in their trade and are working to correct them. To be fair to Muller, he’d probably agree.