There are a lot more bold headlines where that came from. This study is big news; for years, people have claimed that cell phones are linked to cancer, but no one has been able to provide solid evidence of it.
On the contrary, we’ve been given good reason to be skeptical. Combine that with the lack of an uptick in cancer rates in epidemiological studies, and our priors should be quite low.
But let’s dig into the study itself. Fortunately for everyone, it was published on a pre-print archive, so the public has immediate access. Fortunately for skeptics, the reviewer comments were left intact too. Most of them are positive, but not all.
> As could be expected in a study following NTP protocols, the exposure levels for the rodents in this project exceed the limits for the wbSAR and psSAR defined in the IEEE Std C95.1-2005 safety standard for human exposure to mobile phone radiation. In the low dose exposure group the exposure level in the organs exceeds or is close to the localized SAR limit for the general public, except for a few low-water content tissues. More specifically, the psSAR over 1g in the human head, is limited by the safety standards to 2.4 W/kg for mice, and >1.3 W/kg for rats, hence similar to the limit. […]

> The proliferative lesions in the brain are more difficult to interpret because 1) their low incidence that was well within the historical control range, 2) lack of clear dose response; and 3) lack of statistical significance (except for the significant exposure-dependent trend for test article B…. However, the presence of malignant gliomas and/or foci of glial cell hyperplasia in 5 of 6 test article groups for both sexes vs none in controls of either sex is suggestive of a test article effect….I would consider the malignant gliomas as ‘Equivocal Evidence’ of carcinogenic activity.[…]

> Based on these inputs, the recommendations in Table 13 of the FDA guidance document, and a sample size of 90 rats in each group, I find very low power (< 5%, see Appendix 2). Even allowing for a risk ratio of 5.0 (a level that is clinically unlikely), the power for 2-sided alpha=0.005, k=3 and low lethality is only ~14% (see Appendix2). The low power implies that there is a high risk of false positive findings, especially since the epidemiological literature questions the purported association between cell phone exposure and cancer.
Follow the link to the NIH I provided above, and you’ll find:
> The National Institute of Environmental Health Sciences (NIEHS), which is part of the National Institutes of Health (NIH), is carrying out a large-scale study in rodents of exposure to radiofrequency energy (the type used in cell phones). This investigation is being conducted in highly specialized labs that can specify and control sources of radiation and measure their effects. Preliminary results from this study were released in May 2016 and are being reviewed by NCI experts.
So this study is quite different from most others you’ve heard about; while the others passed through the peer review process, this one is only partway through. We really shouldn’t be talking about it until the ink has fully dried.
But I’m bad at following advice.
Let’s start with a basic plausibility check. The rats were dosed with anywhere from 1.5 to 6 watts per kilogram (W/kg) of energy from a radio emission source. How does that compare to the dosage from an actual cell phone? Many cell phone batteries run at 3.7 volts and carry 3 amp-hours of charge. My phone’s specs say I can talk on it for about 20 hours straight, but let’s be charitable and say 4 is more reasonable. Cell phones radiate their energy in all directions, so at best half of what they transmit goes into you, and their antennas are roughly 50% efficient, so only about a quarter of the battery’s power output reaches you. Of what you absorb, let’s say 100% of it goes into your brain, and that your brain has a mass of 1 kg (1.3 kg is more likely, but again we’re being charitable). That’s enough to work out the dosage.
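Working those charitable assumptions through, as a quick sanity check (the battery figures and efficiency fractions are the ones stated above, nothing more):

```python
# Back-of-the-envelope cell phone dosage estimate, using only the
# assumptions from the text above.
battery_voltage = 3.7      # volts
battery_charge = 3.0       # amp-hours
talk_time = 4.0            # hours (the charitable figure)
antenna_efficiency = 0.5   # roughly half the drawn power is actually radiated
fraction_into_you = 0.5    # at best half the radiated energy enters your body
brain_mass = 1.0           # kg (charitable; 1.3 kg is more typical)

battery_energy = battery_voltage * battery_charge          # 11.1 watt-hours
avg_power = battery_energy / talk_time                     # ~2.78 W drawn from the battery
radiated = avg_power * antenna_efficiency                  # ~1.39 W actually emitted
dose = radiated * fraction_into_you / brain_mass           # W/kg absorbed by the brain
print(round(dose, 2))  # 0.69
```

Note how the two 50% factors combine: only a quarter of the battery’s average power output ends up in your head.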
The lowest dosage level they used is more than double what you’d encounter from a cell phone, even if I bend over backwards to pump up the radiation. In comparison, human beings emit roughly 90-120 watts of energy, two-thirds of it via infrared light. Our radiation has a shorter wavelength, so each photon carries a bigger punch even if the average power is the same. If we stuck you and a cell phone in a Faraday cage, so that no radio emissions could escape into the environment, then you’d absorb 1.40 W/kg from your phone, but a 100 gram cell phone would absorb 900-1200 W/kg from you! Get rid of the Faraday cage, assume the phone is perfectly flat against your body, and it still receives roughly 10-13 W/kg to your 0.69 W/kg.
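The Faraday-cage half of that comparison is just a division, but it’s worth spelling out since the asymmetry is so stark (the 90-120 W human output and 100 g phone mass are the figures from the text):

```python
# The reverse comparison: how much energy per kilogram a phone would
# absorb from a human in the Faraday-cage thought experiment, where
# all of the body's radiated output ends up in the phone.
body_output_low = 90.0    # watts, low end of typical human emission
body_output_high = 120.0  # watts, high end
phone_mass = 0.1          # kg (a 100 gram phone)

sar_low = body_output_low / phone_mass    # W/kg at the low end
sar_high = body_output_high / phone_mass  # W/kg at the high end
print(sar_low, sar_high)  # 900.0 1200.0
```

Mass is doing most of the work here: the same wattage spread over a 1 kg brain versus a 0.1 kg phone differs by a factor of ten before anything else is considered.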
The design of the study itself is fairly good, but there are two things I would have liked to see. First, the researchers had radio frequency monitors set up to ensure each rat got the expected dose, yet the control group is listed as having no exposure to RF at all. That’s impossible; radio waves bounce around us all the time, from natural sources like lightning to “artificial” ones like nearby computers. What was the actual RF dosage of the control group?
Second, there are way too many treatment groups. Two types of radio signal were broadcast, at three different power levels, to two sexes. That’s twelve different treatment groups, plus two control groups (male and female). On top of that, here are the variables the researchers looked at:
- Percent of pregnant females (“dams”) littering,
- Litter size,
- Sex distribution,
- Gestation weight,
- Weight during lactation,
- Litter weight,
- Adult weight,
- Survival rate,
- Incidence of malignant gliomas,
- Incidence of glial cell hyperplasia,
- Incidence of cardiac schwannomas,
- Incidence of schwann cell hyperplasia,
- Incidence of schwannomas outside the heart.
As one of the reviewers noted, it’s very easy to fish for correlations with so many variables in play and so many different ways to slice the data. If the typical study has a 30% chance of finding a false positive for a single variable, there’s a 31% chance that exactly two of the five tumor metrics will come up as false positives. The typical study has a false negative rate of about 45%, though, while one of the reviewers calculated this study’s power at under 5%, which translates to a false negative rate above 95%; while false positives and false negatives are only loosely related, it’s a safe bet that my back-of-the-envelope calculation above is an underestimate.
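That 31% figure is just the binomial probability of exactly two hits among the five tumor/hyperplasia metrics, under the assumed 30% per-variable false positive rate:

```python
from math import comb

p = 0.30  # assumed false positive rate for a single variable
n = 5     # the five tumor/hyperplasia incidence metrics
k = 2     # exactly two of them coming up as false positives

# Binomial probability: C(n, k) * p^k * (1 - p)^(n - k)
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 2))  # 0.31
```

With thirteen measured variables rather than five, the chance of at least one spurious hit somewhere is far higher still.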
There’s no sign the researchers tried to compensate for the extra variables, though, and in fact they admit some of the treatment groups had better outcomes:
> At the end of the 2-year study, survival was lower in the control group of males than in all groups of male rats exposed to GSM-modulated RFR. Survival was also slightly lower in control females than in females exposed to 1.5 or 6 W/kg GSM-modulated RFR. In rats exposed to CDMA-modulated RFR, survival was higher in all groups of exposed males and in the 6 W/kg females compared to controls.
As Rebecca Watson pointed out, this study could easily have been headlined “Cell phones extend your life” instead. Survival is also a complicating factor: cancer risk rises with age, so a group that died younger, on average, would have less time to develop cancer than one that lived longer. I don’t see any indication that the researchers compensated for that, and so far I haven’t found the lifespans of each group presented in the paper, so it may be impossible for other researchers to correct for that effect.
In the next part, I’ll start digging into the numbers.
Wyde, Michael, Mark Cesta, Chad Blystone, Susan Elmore, Paul Foster, Michelle Hooth, Grace Kissling, et al. “Report of Partial Findings from the National Toxicology Program Carcinogenesis Studies of Cell Phone Radiofrequency Radiation in Hsd: Sprague Dawley® SD Rats (Whole Body Exposure).” bioRxiv, May 26, 2016, 55699. doi:10.1101/055699.

Hubbard, R., and R. M. Lindsay. “Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing.” Theory & Psychology 18, no. 1 (February 1, 2008): 69–88. doi:10.1177/0959354307086923.

Colquhoun, David. “An Investigation of the False Discovery Rate and the Misinterpretation of P-Values.” Royal Society Open Science 1, no. 3 (November 1, 2014): 140216. doi:10.1098/rsos.140216.

Jennions, Michael D., and Anders Pape Møller. “A Survey of the Statistical Power of Research in Behavioral Ecology and Animal Behavior.” Behavioral Ecology 14, no. 3 (2003): 438–445.

Sedlmeier, Peter, and Gerd Gigerenzer. “Do Studies of Statistical Power Have an Effect on the Power of Studies?” Psychological Bulletin 105, no. 2 (1989): 309.