How a Marijuana Study Can Poke Holes in Your Brain

Thanks to the bungling of a study that grabbed headlines last week, we've gotten caught in a weed-brain feedback loop.

Casual marijuana use may be associated with abnormalities in developing brains. It might even cause some of those changes. But thanks to the bungling of a study that grabbed headlines last week, we aren’t much closer to knowing.

The study, “Cannabis Use is Quantitatively Associated with Nucleus Accumbens and Amygdala Abnormalities in Young Adult Recreational Users,” was widely reported as being the "first study that has found these brain abnormalities in casual users," as I put it in Motherboard's coverage. (I also noted that "casual" is a pretty vague adjective, as we'll see in a moment.)

The study’s lead author, Hans Breiter, used the study as a reason to say we should reconsider legalizing marijuana and shouldn’t allow “anybody under age 30 to use pot.” Anne Blood, another researcher, said that the study "could indicate that the experience with marijuana alters brain organization and may produce changes in function and behavior." (I’ve reached out to the scientists and press officers involved in the study, and will update when I hear back.)

“People think a little recreational use shouldn’t cause a problem, if someone is doing OK with work or school,” Breiter said in a Northwestern release regarding the research. “Our data directly says this is not the case.”

Except, as has since been pointed out, it doesn’t.

The main problems with the study, according to those who have pushed back, are its small sample size (20 college students), its loose definition of “casual” (the average study participant smoked 11 joints a week), the fact that most of the weed smokers were also drinkers, and the fact that each participant got only one MRI scan, which rules out before-and-after comparisons.

Lior Pachter, a computational biologist at the University of California, Berkeley, has written what is likely the most thorough review of the paper, concluding by calling it "quite possibly the worst paper I've read all year." While Pachter lays out a host of concerns with the statistical analysis behind the paper's conclusions, he says the most egregious problem isn't the study itself but the researchers' comments after the fact.

"I do agree that the summation could be better written (as well as the title, abstract, and other parts of the paper making unsupported claims), but what really upsets me are the bold statements that Breiter has made in the press," Pachter wrote. "I think that asking that scientists represent their data honestly in press releases and media interviews is a pretty low bar."

Meanwhile, at Medpage Today, John Gever wrote that "the study team's actual paper stuck fairly close to their data, concluding that the users showed 'structural abnormalities.'" However, the conclusion later pushed—that casual marijuana use is correlated with bad consequences—can't be proven "without before-and-after MRI scans showing brain structure changes in users that differ from nonusers and documentation of functional impairments associated with those changes," Gever wrote.

I found Pachter's post through Jacob Sullum, who wrote a piece for Reason called “Study of Pot Smokers’ Brains Shows that MRIs Cause Bad Science Reporting.” He raises some good points about the echo chamber in science reporting, and we've said much the same thing about previous studies.

But Sullum also misses the mark a bit. MRIs and studies like this don’t cause bad science reporting; misleading sources cause bad stories. And while correlation-is-not-causation is a cardinal rule of science (one that was missed here), so is bad data in, bad data out.

Scientists, journals, and their press teams have learned how to game the system, and that’s what happened here. In the marijuana study’s case, Northwestern University, Massachusetts General, and the Society for Neuroscience all failed miserably. Each of their press releases quotes researchers who oversold the study’s findings.

Yes, some studies smell bad from a mile away, and rewriting press releases is the ultimate sin in science journalism. So we read papers, and rely on the researchers themselves to explain what’s going on in plain English. For the most part, it works well. Who better to comment on research than the person who conducted it?

Journalists aren't absolved of blame. They of course need to do a better job of actually reporting these things out, especially when the topic is controversial (and political). In a perfect world, one-source stories would be avoided; in this case, it would have helped to get comment from someone who both knows the subject and has read and understood the paper.

Unfortunately, this isn't a perfect world. Most journalists get studies on embargo, so we have time to read the paper and talk to the scientists behind the study. Unless it's a blockbuster report, finding a third-party source that intimately knows the research—meaning one that has early access to the paper, and time to read it—is rarer than most would expect.

Instead, you focus only on work that is already peer-reviewed, and therefore would presumably stand up to basic scrutiny. Your own analysis, with a call to the researchers, is the best you're able to do in a lot of cases. And when a big study like this comes out, you either write about it right away, or you wait for others to screw up and then you debunk them.

The more experience you have, the better your feel for which journals and institutions are trustworthy. This was a study in the Journal of Neuroscience (published by the Society for Neuroscience, which is good: check), funded by the National Institutes of Health (check), done by researchers at Harvard (check). On its face, the study checks out.

But even the best journals put out bad papers. The Lancet published the infamous vaccines-are-linked-to-autism study. Science retracted a paper that suggested ecstasy causes holes in your brain after it was discovered that the researchers had actually administered meth, not ecstasy. There’s nothing to suggest that this Journal of Neuroscience study was a screw-up on the order of either of those, but it’s the same deal: journalists are in no position to corroborate this stuff, and have to put a lot of trust in the people who put out the study.

“The buck stops with journal editors,” John Bohannon, a biologist and science journalist who works at the magazine side of the journal Science (and has done a large study of the science publishing industry), told me. “They need to make sure that the article doesn’t oversell its conclusions. And [they] also [need] to be careful not to hype it in the press release. But after publication, the authors are beyond the control of editors and often oversell their story when talking to the press.”

A little of both happened here. Yes, it was a juicy story ripe to be oversold. Yes, my story and dozens of others should have focused on assessing the methods of the study, and not the words of the researchers. And yes, correlation is not causation, even when the authors say it is.