We have enough info from the last installment to do a Bayesian hypothesis test. So let’s pluck out one of Bem’s experiments and run the numbers! The second one has the largest trial count, making it a tempting target: Table 2 shows that in 5,400 attempts, there were 2,790 successful guesses.

We’ll consider two hypotheses: this result was due to random chance, or it wasn’t. Seems straightforward enough.

[Image: The Bayes Factor, in a Bayesian Odds Ratio form.]

We’ll start with the random portion. As the subjects have only two choices, and the odds are constant over the duration of the test, this is best handled with the Binomial distribution. Easy.
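If you’d like to check the “random chance” likelihood yourself without Wolfram Alpha, here’s a quick Python sketch (my own code, not Bem’s). The `lgamma` trick keeps the enormous binomial coefficient in log space so nothing overflows:

```python
# Likelihood of exactly 2,790 successes in 5,400 trials under pure chance (p = 0.5),
# computed in log space to dodge the astronomically large binomial coefficient.
from math import exp, lgamma, log

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_coeff + k * log(p) + (n - k) * log(1 - p))

likelihood = binom_pmf(2790, 5400, 0.5)
print(likelihood, 1 / likelihood)   # roughly 0.00054, i.e. about 1 in 1850
```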

[Image: calculating the odds of the evidence given the “random chance” hypothesis. Answer: about 1/1850.]

That other value is kind of weird, though. See why? Maybe it’ll become obvious if I switch the “precognition exists” hypothesis for another: precognition exists with a success rate of 51.7%. Plugging more numbers into Wolfram Alpha gives:

[Image: calculating the odds of getting this value under the “precognition at a 51.7% success rate” hypothesis; it’s about 1/92.]

This hypothesis isn’t the same as the random one, so it must be part of the “not random” hypothesis. But by the same logic, so must the hypothesis that the success rate is 99%.

[Image: calculating the odds of the “precognition at a 99% success rate” hypothesis; it’s about 0.]

The “not random” hypothesis covers a range of values! That makes sense in hindsight; probabilities cover a range, and the opposite of a single value plucked from that range is the entire range minus that value.
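The same log-space likelihood trick handles both of those alternative success rates (again a sketch of my own, not anything from the paper):

```python
# Likelihood of 2,790 successes in 5,400 trials under two alternative success rates.
from math import exp, lgamma, log

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_coeff + k * log(p) + (n - k) * log(1 - p))

print(1 / binom_pmf(2790, 5400, 0.517))   # about 92
print(binom_pmf(2790, 5400, 0.99))        # underflows to 0.0: effectively impossible
```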

[Image: a kitten asking, “Dis mean moar mat, rite?”]

Yes, cute lil’ kitty, but not much more. We just need to figure out how to handle a range. Let’s pretend there are only ten possible values within it.

[Image: the probability of “precog” = adding up all portions of the probability space which don’t correspond to H_random.]

The weighting for the “not random” hypothesis would be the sum of nine of those values, each divided by the range it covered. As the number of divisions increases... holy crap, we’re doing calculus integrals.

[Image: summing all the slices of evidence for precognition = an integral.]

No really, integrating a formula is really just doing a fancy sum. Don’t let the “calculus is haaaard” people convince you otherwise.
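You can watch the fancy sum sneak up on the integral, too. This little Python sketch (mine, with a uniform grid of success rates standing in for the ten-value thought experiment; `gridded_average` is a name I made up) chops the range into finer and finer slices and averages the likelihood over them:

```python
# Average the likelihood of the evidence over ever-finer grids of possible
# success rates; the average settles toward the integral's value (~0.000185).
from math import exp, lgamma, log

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_coeff + k * log(p) + (n - k) * log(1 - p))

def gridded_average(divisions, k=2790, n=5400):
    """Chop (0, 1) into equal slices and average the likelihood at each midpoint."""
    width = 1.0 / divisions
    midpoints = (width * (i + 0.5) for i in range(divisions))
    return sum(binom_pmf(k, n, p) for p in midpoints) / divisions

for divisions in (10, 100, 10000):
    print(divisions, gridded_average(divisions))
```

Note how coarse grids badly underestimate the answer here: nearly all the likelihood is packed into a narrow spike around 51.7%, so a ten-slice grid mostly misses it.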

The numerator can be an integral too, but in this case the “random” hypothesis only asked us to sum a single point. Oh, and if you’re wondering why I didn’t exclude p = 0.5 from the denominator, that’s because a point is infinitely small and therefore drowned out by its infinite peers over the rest of the range. We can safely ignore it.

So all that remains is to integrate the Binomial distribution for 2,790 successes in 5,400 trials, plug in the range of possible values, and we’re set! There are even online tools to help with this.

[Image: Mathematica can’t handle it.]

Hmm. Maybe another one would work better?

[Image: Integral Online fails us.]

Dang. New plan: we’ll throw random numbers at the problem until it goes away. By plucking random parameters from the hypothesis, evaluating each in turn, and averaging the results together, we can brute-force our way toward a solution. The technical term for this is “simple Monte Carlo estimation,” and while it’s inefficient compared to other methods, it’s also extremely easy to code up and modify.

I get a further boost from the fact that a Bayes Factor is a fraction. Any calculations shared by both numerator and denominator cancel out, saving even more CPU cycles. You can run the program for yourself online, and duplicate what I got:

Success/Trials	Samples	Bayes Factor	1 / Bayes Factor
2790/5400	2000000	0.342207	2.922211
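For the curious, here’s roughly what that program looks like in Python (a stripped-down sketch of my own, not necessarily the linked code; the seed is arbitrary). Note how the shared binomial coefficient never gets computed at all:

```python
# Simple Monte Carlo estimate of the Bayes Factor: draw success rates uniformly
# at random, and for each one compute the likelihood of the evidence *relative
# to* the chance hypothesis. The binomial coefficient appears in both numerator
# and denominator, so it cancels and is never calculated.
import random
from math import exp, log

def bayes_factor(k, n, samples, rng=random.Random(2021)):
    log_chance = n * log(0.5)          # log-likelihood under p = 0.5 (coefficient dropped)
    total = 0.0
    for _ in range(samples):
        p = rng.random()
        if p <= 0.0:                   # log(0) would blow up; skip this (measure-zero) edge
            continue
        total += exp(k * log(p) + (n - k) * log(1 - p) - log_chance)
    return total / samples             # estimates P(data | precog) / P(data | chance)

bf = bayes_factor(2790, 5400, 2_000_000)
print(bf, 1 / bf)   # should land near 0.342 and 2.92, give or take Monte Carlo noise
```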

Wow, so precognition is actually less likely in light of this test, but the results aren’t that strong either. It was rather sporting of Bem to include a failed test in his paper!

As Table 2 reveals, the four analyses yielded comparable results, showing significant psi performance across the 150 sessions

Umm, Bem thinks these values show support for precognition? What is he

[Image: .... wut .... WHA .....]

[Image: Traditional and Bayesian hypothesis testing can lead to radically different results!]

[Image: I... I don’t understand.]

Don’t worry, there’s a perfectly good explanation. It’s ….

.. in the next post.
