
Every once in a while, I get hit with this statistic:

… this BJS study Rape and Sexual Assault Victimization Among College-Age Females, 1995–2013 found the rate of rape and sexual assault among college-age women to be about 1-in-52 as opposed to 1-in-5.

I’ve always wondered where it came from, as the report itself never mentions “one in fifty-two,” instead putting the rate at 6.1 per 1,000, or about one in 164 (pg. 1, Highlights).

So, I did a little digging.

The trail starts with a local Washington, D.C. tabloid known as the Washington Examiner. Back on December 12th, 2014, a day after the aforementioned report came out, Ashe Schow wrote this:

A new survey from the Bureau of Justice Statistics debunks the oft-repeated claim that one in five women will be sexually assaulted while in college.

The survey found that between 1995 and 2013, an average of 6.1 for every 1,000 female students were raped or sexually assaulted each year. That’s about 0.61 percent annually, or (at most) 2.44 percent over the average four-year period (one in 41). That’s way smaller than 20 percent. That’s also virtually unchanged from 2005, the last time BJS put out this report, where the rate of rape among college women was 6 per 1,000.

So Schow simply multiplied the one-year stat by four to get a four-year stat: one in 41. This was picked up by Mark J. Perry of the American Enterprise Institute (emphasis in original).

3. What might be the most important statistic (and was not provided in the report and is not being reported by the media, except Ashe Schow at the Washington Examiner) is that the data provided by the NCVS show that only about 1 in 41 women were victims of rape or sexual assault (threatened, completed and attempted; and reported and unreported) while in college for four years during the entire period investigated from 1995 to 2013, based on this analysis:

6.1 women per 1,000 = “1 in 163.9 women” per year, and over four years attending college would then be = “1 in 41 women” while in college. 

Because the victimization rate has been trending downward, that same analysis using data from the last four years (2010 to 2013) reveals that 1 in 52.6 women have been sexually assaulted or raped in recent years.

So the “one in 52” stat isn’t from the original NCVS report, but was calculated from a subset of the original data, then multiplied by four to cover four years. Is it a valid statistic, though? And does it challenge the one-in-five number reported by the Campus Sexual Assault Study of 2007?
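Both figures fall out of the same one-line calculation. Here is a short Python sketch of the arithmetic as quoted above (the compounding comparison at the end is my own aside, not something either writer did):

```python
# Reproducing the arithmetic behind the quoted figures.
annual_rate = 6.1 / 1000              # 6.1 per 1,000 female students per year
print(1 / annual_rate)                # ~163.9 -> the report's "1 in 164" per year

# Schow and Perry's four-year figure: multiply the annual rate by four.
four_year = annual_rate * 4
print(1 / four_year)                  # ~41.0 -> "1 in 41"

# For comparison (my addition): compounding the annual rate instead of
# naively multiplying barely moves the result at rates this small.
compounded = 1 - (1 - annual_rate) ** 4
print(1 / compounded)                 # ~41.4

# The "1 in 52.6" variant implies this lower annual rate for 2010-2013:
print(1 / (52.6 * 4) * 1000)          # ~4.75 per 1,000 per year
```

Note that the multiplication itself is only a rough approximation: it treats four years of exposure as four independent chances and ignores repeat victimization, though at rates this low the error is small.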


It’s not clear whether Perry merely averaged the four yearly rates listed in the original report, or pulled the raw BJS dataset himself. If it was the former, the results are hopelessly skewed. The yearly numbers in the NCVS report are three-year rolling averages (pg. 3, Figure 2). So if you average the reported rates from 2010 to 2013, you’re actually taking a weighted average of the raw stats from 2009 to 2013 (2014 was only released this year). The raw numbers from 2011 and 2012 are worth three times as much in this average as those from 2009.
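To see that weighting concretely, here’s a small Python sketch. The window structure is my assumption (centred three-year windows, with the 2013 window truncated because raw 2014 data wasn’t yet available), not something the report spells out:

```python
from collections import Counter
from fractions import Fraction

# Assumed structure of the report's three-year rolling averages: the
# reported rate for year Y averages the raw rates for Y-1, Y, Y+1,
# with 2013's window truncated (no raw 2014 data yet).
windows = {
    2010: [2009, 2010, 2011],
    2011: [2010, 2011, 2012],
    2012: [2011, 2012, 2013],
    2013: [2012, 2013],       # truncated window
}

# Averaging the four reported rates gives each raw year this weight:
weights = Counter()
for years in windows.values():
    for y in years:
        weights[y] += Fraction(1, len(years)) / len(windows)

for y in sorted(weights):
    print(y, weights[y], float(weights[y]))
# Raw 2011 counts three times as much as raw 2009, so the "recent"
# average is a lopsided blend of 2009-2013, not a clean 2010-2013 figure.
```

Under these assumptions the weights come out to 1/12 for 2009 versus 3/12 for 2011, with 2012 and 2013 also overweighted by the truncated final window.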

But there’s a bigger problem. To quote myself: On the second page of that BJS study, there’s a two-page aside that begins with:

The NCVS is one of several surveys used to study rape and sexual assault in the general and college-age population. In addition to the NCVS, the National Intimate Partner and Sexual Violence Survey (NISVS) and the Campus Sexual Assault Study (CSA) [HJH: the source of the 1-in-5 stat] are two recent survey efforts used in research on rape and sexual assault. The three surveys differ in important ways in how rape and sexual assault questions are asked and victimization is measured. Across the three surveys, the measurement differences contribute, in part, to varying estimates of the prevalence (the number of unique persons in the population who experienced one or more victimizations in a given period) and incidence (the number of victimizations experienced by persons in the population during a given period) of rape and sexual assault victimization.

The authors then go on to detail the various ways this survey differs from the other two, such as whether or not the incidents are reported, the type of incidents included, whether or not the words “rape” and “sexual assault” were explicitly used, and how the data was gathered. The three surveys differ so greatly in methodology that their results cannot be compared, and the NCVS’s design was the most conservative of the three.

Eagle-eyed observers will notice the exact same information is repeated in Appendix 1 on page 14. The report authors were so worried people wouldn’t read the full report and would falsely compare non-comparable statistics that they copy-pasted the appendix to the front, surrounding the methodological differences with huge blinking lights.

End-quote. Want a second opinion? Fine.

[The Bureau of Justice Statistics] asked the National Research Council to investigate this issue and recommend best practices for measuring rape and sexual assault on their household surveys. Estimating the Incidence of Rape and Sexual Assault concludes that it is likely that the NCVS is undercounting rape and sexual assault. The most accurate counts of rape and sexual assault cannot be achieved without measuring them separately from other victimizations, the report says. It recommends that BJS develop a separate survey for measuring rape and sexual assault. The new survey should more precisely define ambiguous words such as “rape,” give more privacy to respondents, and take other steps that would improve the accuracy of responses.

Sadly, it looks like Ashe Schow either missed or ignored the huge blinking lights, and compared apples to oranges. On the plus side, few people take the number seriously, save a few MRA websites and Christina Hoff Sommers.