As I’ve said before, human beings are a bag of heuristics; building a perfect Bayesian learning machine is mind-bogglingly expensive, maybe even impossible, so instead we’ve evolved a variety of shortcuts that make us pseudo-Bayesian for a fraction of the price. A lot of the cognitive biases that skeptics love to trot out work perfectly fine as heuristics; but not only do they fail some of the time, the march of science and engineering is making those failure cases more common.

Example: We’ve developed a test that can pick out an airplane or airport terrorist 99.9% of the time; that’s its true positive rate. Further testing reveals it has a 1% false positive rate; that is, it incorrectly flags 1% of non-terrorists as terrorists poised to strike.

This looks like a sure-fire win, so let’s implement it at airports worldwide. Roughly eight million people fly per day, while there are about 23 terrorist attacks on planes or airports per year (we’ll ignore that they typically happen in conflict zones with few air travellers), killing one and a half people on average. So if the odds of any given passenger being a terrorist are roughly one in a hundred and twenty-five million, then if our test flags someone, the odds of that person actually being a terrorist poised to strike are…

… one in one and a quarter million, roughly. Our test would flag 79,999.999 faux terrorists per day, on average of course. If the test costs six minutes per passenger, that’s 91 person-years lost per day just administering it. If it takes two hours to double-check someone isn’t a terrorist, all those false alarms cost another 18 person-years per day. On the other hand, assuming every person those (fractional) terrorists killed had a hundred years of life left in them, our screening managed to save a mere 9 person-years per day.
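That arithmetic can be sanity-checked with a short script. The figures (eight million passengers a day, 23 attacks a year, 1.5 deaths per attack, a 99.9% true positive rate, a 1% false positive rate) are the rough estimates from the text; the variable names are my own:

```python
# Rough figures from the text.
passengers_per_day = 8_000_000
attacks_per_day = 23 / 365                            # ~0.063 attacks per day
p_terrorist = attacks_per_day / passengers_per_day    # ~1 in 127 million

tpr, fpr = 0.999, 0.01  # true and false positive rates

# Bayes' theorem: P(terrorist | positive test).
p_positive = tpr * p_terrorist + fpr * (1 - p_terrorist)
posterior = tpr * p_terrorist / p_positive
print(f"P(terrorist | flagged) is about 1 in {1 / posterior:,.0f}")

# False alarms per day, and the person-time the screening consumes.
false_alarms = fpr * (passengers_per_day - attacks_per_day)   # ~79,999.999
minutes_per_year = 60 * 24 * 365
screening_years = 6 * passengers_per_day / minutes_per_year   # ~91 person-years/day
followup_years = 120 * false_alarms / minutes_per_year        # ~18 person-years/day
saved_years = 100 * 1.5 * attacks_per_day                     # ~9.5 person-years/day
print(screening_years, followup_years, saved_years)
```

The tiny prior does all the damage: even a test that looks excellent in isolation yields a posterior of roughly one in 1.27 million.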

So what seemed like a bulletproof screening process is actually worthless. If you ignored the priors, though, and just focused on the false positive and false negative rates, you’d falsely conclude it was worthwhile. Admittedly, you wouldn’t be alone in this mistake.

Another pseudo-paradox. Say my shelf is stocked with one tin of delicious home-made cookies and four tins of store-bought but similar-looking biscuits I hand out to less-than-special guests; to anyone but me, the tins look identical. I spot you coming out of the kitchen with a cookie as I approach; horrified, but not wanting to give the game away, I ask you if the cookie has chocolate sprinkles.

You, however, are a bit of a cookie connoisseur; you’ve heard of my special tin, but you also heard me coming and only had time to grab one cookie from one tin without looking at anything else. My question about chocolate sprinkles gives away a lot more than I realize, as you know of a store-bought cookie brand, sold at my favorite grocery store, that looks identical in shape and size to this one. Three-quarters of them come with the same chocolate sprinkles you see on the cookie in your hand.

I relax a bit when I see the sprinkles, but quickly snatch the cookie from your hands and shoo you out of the kitchen. You hear the garburator flip on, then crunch away at the cookie. How confident can you be that you snagged a store-bought cookie?

If I’d done nothing after spotting the sprinkles, you could have been 100% confident the cookie was store-bought, as my relief would only make sense if my own batch were sprinkle-free. But because I made sure you couldn’t taste the cookie, I couldn’t have been 100% confident you had a store-bought one; some of my batch must have chocolate sprinkles. Without knowing what proportion of them have sprinkles, though, the question seems unanswerable.

Except it’s not. You can be at least 75% confident you had a store-bought cookie.

Not seeing why? Anywhere from one of my home-made cookies to all of them could have sprinkles. So by Bayes’ Theorem, my confidence in you having a store-bought cookie must have been between…

… 75% and 100%, if we make no assumptions about the proportion of sprinkles at all. Of course, I was automatically 80% confident that you held a store-bought cookie before I asked you that question, so if all my home-made cookies had sprinkles I’d actually be less confident of what you had. Therefore it’s likely that less than three-quarters of my home-made cookies had sprinkles; if exactly three-quarters of them did, I would have gained nothing by asking the question at all.
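A minimal sketch of that calculation, using exact fractions so nothing gets lost to rounding (the function name and the unknown sprinkle fraction p are my own labels):

```python
from fractions import Fraction

def p_store_given_sprinkles(p):
    """Posterior that the cookie is store-bought, given sprinkles.

    One home-made tin and four store-bought tins give a prior of 4/5;
    store-bought cookies carry sprinkles 3/4 of the time; p is the
    unknown fraction of home-made cookies with sprinkles, 0 < p <= 1.
    """
    prior_store = Fraction(4, 5)
    numerator = prior_store * Fraction(3, 4)            # 3/5
    return numerator / (numerator + (1 - prior_store) * p)

# Worst case, every home-made cookie sprinkled: exactly 75% confident.
print(p_store_given_sprinkles(Fraction(1)))

# As p shrinks toward 0, confidence climbs toward 100%.
print(p_store_given_sprinkles(Fraction(1, 100)))

# Break-even: at p = 3/4 the posterior equals the 80% prior,
# so asking the question would have gained nothing.
print(p_store_given_sprinkles(Fraction(3, 4)))
```

The posterior is monotone in p, which is why the unconstrained range still pins down a hard lower bound of 75%.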

The proportion of real positives to false ones shifts the posterior, or final, probability around, and even when some of the variables aren’t constrained, the final result can still be. It’s counter-intuitive, but only because we’re not perfect Bayesian learning machines.