
We have enough info from the last installment to do a Bayesian hypothesis test. So let’s pluck out one of Bem’s experiments and run the numbers! The second one has the largest trial count, making it a tempting target: Table 2 shows that in 5,400 attempts, there were 2,790 successful guesses.

We’ll consider two hypotheses: this result was due to random chance, or it wasn’t. Seems straightforward enough.

We’ll start with the random portion. As the subjects have only two choices, and the odds are constant over the duration of the test, this is best handled with the Binomial distribution. Easy.
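As a sanity check, here's a quick sketch (my own Python, not anything from the post; the counts come straight from Bem's Table 2) of the binomial probability of seeing exactly that many successes under pure chance:

```python
from fractions import Fraction
from math import comb

n, k = 5400, 2790  # trials and successful guesses, per Bem's Table 2

# Binomial pmf at p = 1/2, done with exact integer arithmetic:
# 0.5**5400 underflows to zero as a float, but a Fraction has no
# trouble, and converting the final ratio back to float is safe.
pmf_chance = float(Fraction(comb(n, k), 2**n))
print(pmf_chance)  # roughly 5.4e-4
```

Small on its own, but remember that *any* single count out of 5,400 is improbable; what matters is how this stacks up against the competing hypothesis.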

That other value is kind of weird, though. See why? Maybe it’ll become obvious if I swap the “precognition exists” hypothesis for another: precognition exists with a success rate of 51.7%. Plugging more numbers into Wolfram Alpha gives a different likelihood than before. This hypothesis isn’t the same as the random one, so it must be part of the “not random” hypothesis. But by the same logic, so must the hypothesis that the success rate is 99%.
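If you'd rather not lean on Wolfram Alpha, a few lines of Python reproduce the comparison (a sketch of my own; the 51.7% and 99% rates are the ones named above, and the log-space trick just avoids floating-point underflow at large trial counts):

```python
from math import exp, lgamma, log

n, k = 5400, 2790  # Bem's Table 2 again

def binom_pmf(p):
    # Binomial pmf computed in log space to dodge underflow.
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(1 - p))
    return exp(log_pmf)

ratio_517 = binom_pmf(0.517) / binom_pmf(0.5)
ratio_99 = binom_pmf(0.99) / binom_pmf(0.5)
print(ratio_517)  # the 51.7% hypothesis fits the data far better
print(ratio_99)   # the 99% hypothesis is hopeless, yet still "not random"
```

Both of those rates live inside the “not random” hypothesis, even though one fits the data beautifully and the other is absurd.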

The “not random” hypothesis covers a range of values! That makes sense in hindsight; probabilities cover a range, and the opposite of a single value plucked from that range is the entire range minus that value.

Yes, cute lil’ kitty, but not much more. We just need to figure out how to handle a range. Let’s pretend there are only ten possible values within it.

The weighting for the “not random” hypothesis would be the sum over nine of those values (all of them except p = 0.5), each multiplied by the width of the slice of the range it covers. Now keep increasing the number of divisions, letting the slices shrink toward zero width, and *holy crap we’re doing calculus integrals*.

No, really: integrating a formula is just a fancy sum. Don’t let the “calculus is haaaard” crowd convince you otherwise.
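To make that concrete, here's a sketch (again my own Python, not the post's) that chops the range of p into ever-finer slices and averages the likelihood over them. For a binomial likelihood under a flat prior, the true integral happens to have a tidy closed form, 1/(n + 1), so we can watch the sums converge on it:

```python
from math import exp, lgamma, log

n, k = 5400, 2790
log_coef = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def binom_pmf(p):
    return exp(log_coef + k * log(p) + (n - k) * log(1 - p))

# Midpoint Riemann sum: average the likelihood over equal slices of p.
# Coarse grids miss the narrow likelihood bump near p = 0.5167, so the
# early estimates are way off; finer grids settle down.
for slices in (10, 100, 1000, 10000):
    total = sum(binom_pmf((i + 0.5) / slices) for i in range(slices))
    print(slices, total / slices)

print(1 / (n + 1))  # the exact integral the sums are converging to
```

With ten slices the estimate is garbage, with ten thousand it's bang on: a fancy sum, nothing more.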

The numerator can be an integral too, but in this case the “random” hypothesis only asks us to evaluate a single point. Oh, and if you’re wondering why I didn’t exclude p = 0.5 from the denominator: a point is infinitely thin, so its contribution is drowned out by its infinite peers over the rest of the range. We can safely ignore it.

So all that remains is to integrate the Binomial distribution for 2,790 successes in 5,400 trials, plug in the range of possible values, and we’re set! There are even online tools to help with this.

Hmm. Maybe another one would work better?

Dang. New plan: we’ll throw random numbers at the problem until it goes away. By plucking random parameters from the hypothesis, evaluating each in turn, and averaging the results together, we can brute force toward a solution. The technical term for this is “simple Monte Carlo estimation,” and while it’s inefficient compared to other methods, it’s also extremely easy to code up and modify.
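Here's a minimal sketch of that idea (a Python stand-in of my own, not the post's actual program): draw rates uniformly at random, average the likelihood ratios, and out pops an estimate of the Bayes factor. Note that the binomial coefficient is the same in numerator and denominator, so it never needs to be computed at all:

```python
import random
from math import exp, log

n, k = 5400, 2790
random.seed(42)  # fixed seed so the run is repeatable

def log_like(p):
    # Binomial log-likelihood *without* the binomial coefficient;
    # it's common to every hypothesis, so it cancels in the ratio.
    return k * log(p) + (n - k) * log(1 - p)

log_chance = log_like(0.5)
samples = 200_000  # the post used 2,000,000; fewer already gets close

# Simple Monte Carlo: average the likelihood ratio over random rates
# drawn uniformly from the "not random" hypothesis's range.
total = sum(exp(log_like(random.random()) - log_chance)
            for _ in range(samples))
bayes_factor = total / samples
print(bayes_factor)      # hovers near the post's 0.342
print(1 / bayes_factor)  # odds in favour of "random chance"
```

Inefficient, sure, but it's a dozen lines of code and trivially easy to point at a different experiment.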

I get a further boost from the fact that a Bayes Factor is a fraction. Any calculations shared by both numerator and denominator cancel out, saving even more CPU cycles. You can run the program for yourself online, and duplicate what I got:

```
Success/Trials   Samples    Bayes Factor   1 / Bayes Factor
2790/5400        2000000    0.342207       2.922211
```

Wow, so precognition is actually *less* likely in light of this test, but the results aren’t that strong either. It was rather sporting of Bem to include a failed test in his paper!

> As Table 2 reveals, the four analyses yielded comparable results, showing significant psi performance across the 150 sessions.

Umm, Bem thinks these values show support for precognition? What is he …

Don’t worry, there’s a perfectly good explanation. It’s …

… in the next post.
