


New Research Offers a Bleak Perspective on Algorithmic Medicine

RAND analysts find that, so far, computerized clinical decision support (CDS) systems mostly don't work.
Image: Intel Free Press/Flickr

It seems natural enough. Take a bunch of clinical diagnostics—seemingly cold hard data like genetics, symptoms, family history, and patient history—punch it all into some system that compares it against a database and returns a suggested treatment path based on what has and hasn't worked for similar patients dealing with similar illnesses in similar stages. Evidence, in other words.
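To make that concrete, here is a minimal, purely illustrative sketch in Python of the kind of evidence lookup described above. It is not drawn from any real CDS product or from the RAND study; names like `EvidenceRecord` and `suggest_treatments` are hypothetical, and a real system's matching logic would be far more involved.

```python
# Toy "evidence lookup" in the spirit described above. All names and fields
# are hypothetical, invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class EvidenceRecord:
    condition: str                                        # diagnosis the evidence applies to
    stage: str                                            # disease stage it covers
    required_findings: set = field(default_factory=set)   # symptoms/history needed for a match
    treatment: str = ""                                    # treatment path that worked for similar patients
    success_rate: float = 0.0                              # observed outcome rate in similar cases

def suggest_treatments(condition, stage, findings, evidence_db):
    """Return treatment paths whose evidence matches this patient, best outcomes first."""
    matches = [
        rec for rec in evidence_db
        if rec.condition == condition
        and rec.stage == stage
        and rec.required_findings <= set(findings)   # all required findings present
    ]
    return sorted(matches, key=lambda rec: rec.success_rate, reverse=True)

# Toy database and query
db = [
    EvidenceRecord("condition_x", "early", {"finding_a", "finding_b"}, "therapy_1", 0.72),
    EvidenceRecord("condition_x", "early", {"finding_a"}, "therapy_2", 0.58),
]
for rec in suggest_treatments("condition_x", "early", {"finding_a", "finding_b"}, db):
    print(rec.treatment, rec.success_rate)
```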

This is the gist of computerized clinical decision support (CDS) systems, which, under 2014's Protecting Access to Medicare Act, are mandated to be in place in the United States by 2017, at least within a limited scope. Specifically, CDS systems must be used—or at least queried—for decisions related to expensive advanced diagnostic imaging orders for Medicare patients. Time spent in a PET scanner is money, and the government is very particular about money, or at least about the appearance of money poorly spent.


There's a pretty big catch: CDS systems as currently realized don't really work. That's according to a study in the current issue of JAMA. The report, courtesy of researchers at the RAND Corporation led by policy analyst Peter Hussey, examined 117,000 diagnostic imaging orders from 3,300 different physicians and found that the systems failed to return relevant guidance as much as two-thirds of the time.

Brutal.

"The CDS systems did not identify relevant appropriateness criteria for 63.3 percent of orders during the baseline period and for 66.5 percent during the intervention period," Hussey and his group report.

Fortunately, it's not so much that CDS systems are kicking out wrong or dangerous recommendations as that they aren't coming up with any recommendations at all. As Hussey tells IEEE Spectrum, it's a failure of both the algorithms and the databases those algorithms query, neither of which is currently robust enough to handle the task of doling out critical health-care advice.

"There are lots of different kinds of patients with different problems, and the criteria just haven't been created for some of those," Hussey says. "In other cases, it's likely that the criteria were out there but the CDS tools couldn't find them. These seem like solvable problems, but we need to get working on this pretty quickly because this is going to be mandatory in a couple of years."

Research on CDS systems overall is a mixed bag. A 2011 meta-analysis focusing on CDS deployment and the risk of patient death—covering 16 trials, 37,395 patients, and 2,282 deaths—found no real improvement with the systems in place. A 2005 meta-analysis, meanwhile, found a 64 percent improvement in patient outcomes with the systems, depending on how, and how well, they were integrated. That integration is key.

There are a lot of reasons behind the limited success of CDS systems, including the incompleteness of patient information available through still-limited electronic health records. One interesting barrier comes in the form of alert overload: IRL medical personnel become deluged with automated warnings and flags, creating a sort of warning fatigue and an unfavorable signal-to-noise ratio. It's an environment that practically begs for CDS recommendations to be ignored.
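One common mitigation, sketched below purely for illustration, is to suppress low-severity, non-actionable alerts so clinicians mostly see the ones that matter. The severity scale, threshold, and alert fields here are hypothetical and not drawn from any real CDS product.

```python
# Illustrative only: filter alerts by severity to improve signal-to-noise.
# The severity levels and threshold are hypothetical.
ALERT_THRESHOLD = 2  # 0 = informational, 1 = caution, 2 = serious, 3 = critical

def alerts_to_show(alerts, threshold=ALERT_THRESHOLD):
    """Keep only alerts at or above the severity threshold, most severe first."""
    kept = [a for a in alerts if a["severity"] >= threshold]
    return sorted(kept, key=lambda a: a["severity"], reverse=True)

raw_alerts = [
    {"severity": 0, "text": "Result available"},
    {"severity": 1, "text": "Mild drug-food interaction"},
    {"severity": 3, "text": "Contraindicated drug combination"},
]
for a in alerts_to_show(raw_alerts):
    print(a["severity"], a["text"])   # only the severity-3 alert is shown
```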

In any case, Hussey's suggestion is that the systems, including those recently mandated by the US Congress, should be implemented incrementally rather than all at once. The long-term success of algorithmic medicine most likely depends on it.