[tw: sexual assault statistics]
This tactic has the form of:
We cannot know with certainty,
reality is myth.
This logical failure is often used by climate denialists:
We cannot tell the future with certainty,
climate change is a myth.
And with anti-vaxxers:
This Japanese study says MMR vaccine is “most unlikely” to be a “main” cause of Autism Spectrum Disorder (ASD), not that MMR didn’t cause ASD,
MMR vaccines cause autism.
There is no study that does not struggle with bias and uncertainty. For example, survey respondents are almost always self-selected. Short of forcing a population to participate in a study, which would violate the ethical standard of informed consent, this bias will exist in every single study based on survey data ever conducted. In some instances the bias is significant enough that the results should not be interpreted. But if every study containing this bias were thrown out, our best and only ways of gaining information about certain subjects would be gone.
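To see how the mechanism works, here is a minimal sketch with hypothetical numbers (the response rates below are invented for illustration, not drawn from any study): when people who have had an experience answer a survey more often than people who haven't, the prevalence measured among respondents exceeds the true prevalence.

```python
# Hypothetical illustration of self-selection bias inflating a survey
# estimate. Assume a true prevalence of 19%, that affected people
# respond 50% of the time, and unaffected people only 30% of the time.
# (All three numbers are made up for this sketch.)

def observed_prevalence(true_rate, response_affected, response_unaffected):
    """Expected prevalence among respondents under differential response."""
    responders_affected = true_rate * response_affected
    responders_unaffected = (1 - true_rate) * response_unaffected
    return responders_affected / (responders_affected + responders_unaffected)

true_rate = 0.19
inflated = observed_prevalence(true_rate, 0.50, 0.30)
print(f"true: {true_rate:.1%}, observed among respondents: {inflated:.1%}")
```

With these made-up response rates the survey would read roughly 28% instead of 19% — a real effect. But whether the effect is that large, negligible, or even reversed in any given survey is an empirical question, which is exactly the point.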
The tactic feeds on a natural discomfort with uncertainty. Thinking about information in terms of HOW GOOD IT IS instead of whether it’s GOOD or BAD is actually nearly impossible before certain stages of human cognitive development. So, this tactic works extremely well.
All you have to do is poke ONE SMALL HOLE in the information being presented and all of a sudden all that knowledge from all those studies is thrown into the waste-bin of “myth”, and anyone who tells you otherwise is a “fraud”.
Christina Hoff Sommers uses this tactic egregiously when discussing the prevalence of rape and sexual assault on college campuses, which she dismissed as “statistical hijinks” in a TIME magazine article, “5 Feminist Myths That Will Not Die”.
The one-in-five figure is based on the Campus Sexual Assault Study, commissioned by the National Institute of Justice and conducted from 2005 to 2007. Two prominent criminologists, Northeastern University’s James Alan Fox and Mount Holyoke College’s Richard Moran, have noted its weaknesses:
“The estimated 19% sexual assault rate among college women is based on a survey at two large four-year universities, which might not accurately reflect our nation’s colleges overall. In addition, the survey had a large non-response rate, with the clear possibility that those who had been victimized were more apt to have completed the questionnaire, resulting in an inflated prevalence figure.”
What is so insidious about this particular type of truthiness is that it doesn’t involve flat-out lying. It’s lying through implication. The whole article is framed as debunking MYTHS. The implication is that the self-selection bias and sampling bias inherent in the studies render the results useless or at least significantly suspect.
Absolutely no evidence is provided to support that implication. None. In fact, evidence to the contrary is provided. If the self-selection bias and sampling bias rendered the results invalid, similar studies would likely not support these findings. Yet, they do.
“Defenders of the one-in-five figure will reply that the finding has been replicated by other studies. But these studies suffer from some or all of the same flaws.”
Yet, she still gives no evidence that these flaws invalidate the finding. Instead of evidence, she simply suggests that women who have been assaulted are more likely to care about the subject matter and bother to take the survey. This might be true and it might inflate the numbers, but there is no evidence provided that this effect is significant. It is also possible that some women chose not to take the survey because doing so would bring to mind painful memories and they don’t want to think about it. There are many reasons people do not take surveys when asked.
Self-selection bias is not an unstudied phenomenon, and careful researchers do what they can to mitigate it. If a study exists showing that self-selection meaningfully skews surveys of sexual assault on college campuses, Sommers has not provided it. Instead, she declares the 1 in 5 statistic a myth – not just POSSIBLY inflated by a POSSIBLE self-selection bias – but a MYTH.
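One standard mitigation is post-stratification weighting: respondents in under-represented groups are weighted up so the sample matches known population shares (for a campus survey, enrollment records). A toy sketch with invented numbers — and note the honest caveat that weighting can only correct for characteristics you can observe, not for selection driven by the outcome itself:

```python
# Toy post-stratification example. All numbers are invented.
population_share = {"first_year": 0.25, "upper_class": 0.75}  # known (e.g. enrollment data)
sample_share     = {"first_year": 0.40, "upper_class": 0.60}  # who actually responded
group_rate       = {"first_year": 0.22, "upper_class": 0.17}  # prevalence within each group

# Unweighted estimate: each respondent counts equally, so over-represented
# groups pull the average toward their rate.
raw = sum(sample_share[g] * group_rate[g] for g in sample_share)

# Weighted estimate: each group's rate counts in proportion to its share
# of the actual population, not its share of the respondents.
weighted = sum(population_share[g] * group_rate[g] for g in population_share)

print(f"raw estimate: {raw:.1%}, weighted estimate: {weighted:.1%}")
```

In this toy case the correction is small (19.0% down to about 18.3%), which is typical of what careful reweighting does: it adjusts the estimate, it does not vaporize it.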
So, how to guard against this tactic?
First, use good stats:
- If you want to use a stat, find the study it is based on and read it.
- Don’t use statistics that you can’t cite to the primary source.
- Avoid using numbers that aren’t very well supported.
Second, provide information:
- Give the citation with the stat – every time.
- Describe the methods of the study and relevant information (like sample size).
- Be forthright and honest about the limitations of the study.
Then, let them talk:
- If challenged, concede that the study is not perfect.
- Ask them for any info that might change the impression of the study’s conclusion.
- Use their own numbers to make your arguments if possible. (It’s amusing.)
For example, let’s say someone starts screaming at you that the “1 in 5” stat has been debunked for this, that, or the next reason. You can ask them to provide information about the prevalence of sexual assault on college campuses, since apparently your numbers (provided by the U.S. Department of Justice based on a randomized sample with a response rate of over 40% and a sample size of 5,446) are complete shit. If they don’t have any numbers to provide, seriously, just go straight to the high end of reason to meet them halfway. If you don’t acknowledge their opinion as having some validity (however bullshit you think it is), you are a “fraud” because the talking-head told them so.
So – give in.
“So, if you’re right and our best estimate of 1 in 5 is inflated, what is the more accurate number? Is it 1 in 6? 1 in 7? Would that possibility significantly change how we might shape policy or otherwise respond to the problem? We can’t identify prevalence with complete certainty. That doesn’t mean we know nothing at all. We know that the prevalence is disturbingly high and responding to the problem should be a priority.”
Lastly, don’t get bogged down defending exact numbers. Numbers are never exact. Find common ground and move forward.