Scientists can’t give you a good answer.
It’s not their fault, said Steven Goodman, co-director of METRICS. Even after spending his “entire career” thinking about p-values, he said he could tell me the definition, “but I cannot tell you what it means, and almost nobody can.” Scientists regularly get it wrong, and so do most textbooks, he said. When Goodman speaks to large audiences of scientists, he often presents correct and incorrect definitions of the p-value, and they “very confidently” raise their hand for the wrong answer. “Almost all of them think it gives some direct information about how likely they are to be wrong, and that’s definitely not what a p-value does,” Goodman said.
Think on that: we can give simple explanations of Evolution, General Relativity, and even Quantum Mechanics that aren’t perfect but capture the basic gist in terms people can relate to. Scientists rely far more on p-values than on any of those theories, yet not only do they struggle to nail the technical definition, they can’t even relate it to everyday experience.
Compare this with odds and Bayes Factors: your odds say how much more likely you think one hypothesis is than another, and a Bayes Factor says how much the evidence should shift those odds; multiply your prior odds by the Bayes Factor and you get updated odds that fold in both the data and your prior experience. You run into the same logic all the time in betting pools. They’re intuitive and easy to explain.
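That arithmetic is simple enough to fit in a few lines. Here is a sketch using a made-up coin-flipping example; the hypotheses, data, and prior odds below are all illustrative assumptions, not anything from the scientists quoted above:

```python
from math import comb

# Toy question (illustrative numbers): did 8 heads in 10 flips come
# from a fair coin (p = 0.5) or a biased one (p = 0.7)?
heads, flips = 8, 10

def binom_likelihood(p, k=heads, n=flips):
    """Probability of seeing k heads in n flips if heads has probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Bayes Factor: how many times better the biased-coin hypothesis
# predicts the observed data than the fair-coin hypothesis does.
bayes_factor = binom_likelihood(0.7) / binom_likelihood(0.5)

# Assumed prior odds of 1:4 against the biased coin.
prior_odds = 1 / 4

# Betting-style update: posterior odds = prior odds x Bayes Factor.
posterior_odds = prior_odds * bayes_factor

print(f"Bayes factor: {bayes_factor:.2f}")      # ~5.31
print(f"Posterior odds: {posterior_odds:.2f}")  # ~1.33
```

The data favor the biased coin about 5-to-1, but after weighing the skeptical prior the overall bet is only slightly better than even, which is exactly the kind of statement a bettor can act on.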
It speaks poorly of science in general that scientists insist on using a counter-intuitive metric that even they struggle to understand, while ignoring a simpler alternative.