Machine learning could improve medicine by analyzing data to sharpen diagnoses and target cures, but technological, bureaucratic, and regulatory obstacles have slowed progress.
Artificial intelligence is spreading into health care, often as software capable of learning from large amounts of data and making predictions to guide care or help patients. | Seth Wenig/AP Photo
By Ben Leonard and Ruth Reader
08/15/2022 03:55 PM EDT
Investors see health care’s future as inextricably linked with artificial intelligence. That’s obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021, according to a Rock Health investment analysis commissioned by POLITICO.
And no wonder, considering the bold predictions technologists have made. At a conference in 2016, Geoffrey Hinton, the British cognitive psychologist and “godfather” of AI, said radiologists would soon go the way of typesetters and bank tellers: “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”
But more than five years since Hinton’s forecast, radiologists are still training to read image scans. Rather than a replacement for doctors, health system administrators now see AI as a tool clinicians will use to improve everything from diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role.
“Companies come in promising the world and often don’t deliver,” said Bob Wachter, head of the department of medicine at the University of California, San Francisco. “When I look for examples of … true AI and machine learning that’s really making a difference, they’re pretty few and far between. It’s pretty underwhelming.”
Administrators say algorithms — the software that processes data — from outside companies don’t always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.
But it’s slow going. Research based on job postings shows health care lagging every industry except construction in adopting AI.
The Food and Drug Administration has taken steps to develop a model for evaluating AI, but the effort is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology’s detrimental aspects, such as biases that threaten to exacerbate health care inequities.
“Sometimes there’s an assumption that AI is working, and it’s just a matter of adopting it, which is not necessarily true,” said Florenta Teodoridis, a professor at the University of Southern California’s business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.
Despite the obstacles, the tech industry is still enthusiastic about AI’s potential to transform health care.
“The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026,” Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.
If he’s right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.
“I see us moving as a medical community to a better understanding of what it can and cannot do,” said Lara Jehi, chief research information officer for the Cleveland Clinic. “It is not going to replace radiologists, and it shouldn’t replace radiologists.”
Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.
An algorithm can do the job in an hour, said John D. Halamka, president of Mayo Clinic Platform: “We’ve taken 80 percent of the human effort out of it.” The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.
NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook’s Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI’s ability to downsize the amount of data doctors need to review.
Covid has accelerated AI’s development. Throughout the pandemic, health providers and researchers shared disease data and anonymized patient records to crowdsource treatments.
Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.
“The amount of knowledge that’s been obtained and the amount of progress has just been really exciting,” said Peter Lee, corporate vice president of research and incubations at Microsoft.
There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.
“I’m calling it our care traffic control,” said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, deaths from sepsis have been declining.
The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.
Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But that’s not as easy for smaller players.
Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.
Over the last year, an independent study of a widely used sepsis detection algorithm from electronic health records giant Epic showed poor results in real-world settings, suggesting that where and how hospitals used the AI mattered.
This quandary has led top health systems to build out their own engineering teams and develop AI in-house.
That could create complications down the road. Unless health systems sell their technology, it’s unlikely to undergo the kind of vetting that commercial software would. That could allow flaws to go unfixed longer than they otherwise might. It’s not just that health systems are implementing AI while no one’s looking; it’s also that the stakeholders in artificial intelligence, across health care, technology and government, haven’t agreed on standards.
A lack of quality data, which gives algorithms material to work with, is another significant barrier to rolling out the technology in health care settings.
Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient’s medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.
“We have pieces and parts, but not the whole,” said Aneesh Chopra, who served as the government’s chief technology officer under former President Barack Obama and is now president of data company CareJourney.
While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.
Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.
Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.
That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.
But working with health care data is also more difficult than in other sectors because the data is highly individualized.
“We found that even internally across our different locations and sites, these models don’t have a uniform performance,” said Jehi of the Cleveland Clinic.
And the stakes are high if things go wrong. “The number of paths that patients can take are very different than the number of paths that I can take when I’m on Amazon trying to order a product,” Wachter said.
Health experts also worry that algorithms could amplify bias and health care disparities.
For example, a 2019 study found that a hospital algorithm pushed white patients toward programs aiming to provide better care more often than it pushed Black patients, even after controlling for how sick the patients were.
Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of “good machine learning practices,” oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.
The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.
Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.
But in-house AI isn’t subject to FDA oversight. Bakul Patel, former head of the FDA’s Center for Devices and Radiological Health and now Google’s senior director for global digital health strategy and regulatory affairs, said the FDA is thinking about how it might regulate noncommercial artificial intelligence inside health systems, but he added that there’s no “easy answer.”
The FDA has to thread the needle, he said, taking enough action to mitigate flaws in algorithms without stifling AI’s potential.
Some argue that public-private standards for AI would help advance the technology. Groups such as the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.
But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.