Analysis | By Scott Mace
September 29, 2022
In the second of a two-part interview, Dennis Chornenky, Optum's senior vice president and chief AI officer, looks to finance and institutional review boards for models that can steer AI toward maturity.
Editor’s note: This is the second of a two-part interview. The first part was posted on Wednesday, Sept. 28.
In early 2022, Dennis Chornenky, MPH, became chief AI officer and senior vice president at Optum Health, a subsidiary of UnitedHealth Group. In part 1 of his conversation with HealthLeaders, posted on Wednesday, he addressed the role of AI in creating data-driven insights to prevent disease and personalize care, how to apply governance frameworks to AI, and his prior role serving Presidents Trump and Biden in crafting national strategies on AI and telehealth.
HealthLeaders: Where are we on the maturity cycle of AI? Where are the limits of AI, particularly with an eye toward healthcare?
Chornenky: Maturity in the AI space is certainly an evolving concept. The further out something is, the less defined it becomes, and the more variability you’re going to run into. My view is that we’re early in the maturity stages, in terms of development potential.
Dennis Chornenky, MPH, senior vice president and chief AI officer at Optum Health. Photo courtesy Optum Health.
Some of the work that I'm privileged to be advancing, which creates a governance and portfolio management model for our organization, is quite innovative and, I would say, ahead of the curve. I'm doing that by leveraging a combination of experiences and emerging frameworks.
With regard to managing a portfolio of AI investments, we can pull in some insights from the financial sector. We can look at institutional investment policy statements, for example, and approaches to outlining which risks you may be likely to encounter as a portfolio manager. What are you doing to mitigate those risks? What’s the strategic goal of the portfolio? What level of risk are you willing to accept to achieve your target return? These kinds of established approaches can be helpful to keep in mind.
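To make the investment-policy-statement analogy concrete, here is a minimal, hypothetical Python sketch of how an AI initiative might be screened against an explicit risk tolerance and target return. All of the names, fields, and thresholds are illustrative assumptions, not Optum's actual portfolio model.

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    expected_value: float  # projected benefit, on whatever scale the org uses
    risk_score: float      # composite of clinical, legal, and bias risk, 0-1

@dataclass
class PortfolioPolicy:
    # Mirrors an investment policy statement: stated risk tolerance and
    # a strategic return target the portfolio is meant to achieve.
    risk_tolerance: float  # maximum acceptable risk_score
    target_return: float   # minimum expected_value worth pursuing

    def screen(self, initiatives: list[AIInitiative]) -> list[AIInitiative]:
        """Keep only initiatives inside the policy's risk/return envelope."""
        return [
            i for i in initiatives
            if i.risk_score <= self.risk_tolerance
            and i.expected_value >= self.target_return
        ]

policy = PortfolioPolicy(risk_tolerance=0.4, target_return=1.5)
candidates = [
    AIInitiative("readmission-risk model", expected_value=2.0, risk_score=0.3),
    AIInitiative("autonomous triage bot", expected_value=3.0, risk_score=0.8),
]
print([i.name for i in policy.screen(candidates)])  # ['readmission-risk model']
```

The point of the sketch is simply that the portfolio manager's questions become explicit parameters rather than case-by-case judgment calls.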
I also draw on my epidemiology experience with institutional review boards, which are designed to ensure ethics and safety in clinical trials. There is an emerging recognition in healthcare that we can do something very similar for the AI space, so we’re seeing the emergence of “AI review boards,” loosely modeled on institutional review boards, that can screen AI projects, models, and applications for various types of risks in operational and clinical environments. We want to make sure that we have consistent and reliable processes that help ensure our models are safe, ethical, and as fair as they can be for any particular use case.
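The IRB analogy can be sketched the same way: an AI review board gates deployment on affirmative attestations, much as an institutional review board gates a clinical trial. The checklist items below are hypothetical examples of the safety, ethics, and fairness checks described above, not an actual review protocol.

```python
# Illustrative "AI review board" gate, loosely modeled on IRB screening.
REVIEW_CHECKLIST = {
    "safety": "Model failure modes documented and bounded for this use case",
    "ethics": "Intended use reviewed; no prohibited applications",
    "fairness": "Performance audited across relevant patient subgroups",
    "oversight": "A named clinician owns downstream decisions from model output",
}

def review(model_name: str, attestations: dict[str, bool]) -> bool:
    """Approve a model only if every checklist item is affirmatively attested."""
    missing = [item for item in REVIEW_CHECKLIST if not attestations.get(item)]
    if missing:
        print(f"{model_name}: returned for revision; unmet items: {missing}")
        return False
    print(f"{model_name}: approved for deployment")
    return True

review("sepsis-alert-v2", {"safety": True, "ethics": True,
                           "fairness": False, "oversight": True})
```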
As for the limits of AI, we can frame this as the trade-off between performance on precise, clearly defined tasks and the ability to perform a broad range of tasks and solve a broad range of cognitive problems creatively. Machines excel at doing repetitive tasks with precision, and humans excel at navigating dynamic environments and solving new problems. What business leaders should really be thinking about in this context is how to drive the kind of digital transformation in their enterprise that optimizes the collaboration of humans and machines in a way that amplifies the strengths of both and minimizes the limitations of each.
HL: The fact that you have this title of chief AI officer at Optum suggests that the day is not far off where AI becomes part of the standard of care. Is that something you ever hear discussed?
Chornenky: It's an interesting question. For me, this conversation started in telehealth, where the question was whether telehealth could be the standard of care. Many years ago there were concerns about the risks of doing telehealth: a health system providing care delivery through telehealth might be exposing itself to legal action if some harm came from offering virtual care. The other side of that argument was that a day might come when health systems that don't provide telehealth as a modality are the ones exposing themselves to lawsuits, because they're the ones providing less access to care, or restricting care, by forcing people to only get care in person, potentially disadvantaging certain populations.
It’s the same question with AI. Does using AI introduce legal risks, or will there come a day when it’s expected and not using AI is what introduces legal risks? Because if you’re not using more advanced technologies, you might be relying on imperfect and more variable human decision making in diagnosis and care delivery, without the use of more precise machine recommendations. We’re pretty far away from something like that, maybe decades, because of the very limited maturity of the use of these technologies and the lack of scale. And in some ways, we may never entirely get there.
There’s always going to be some sort of downstream human decision making. It’s unlikely that we’re going to get to a point where machines will completely control all aspects of the diagnostic or care delivery process. There’s always a requirement, a natural need, to have humans make the broader decisions around whatever machine insights may be derived from data, and the kinds of actions we may want to take as a consequence. That said, over time we will certainly see a growing automation of tasks and more care decisions being informed by data-driven insights.
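That division of labor is the familiar human-in-the-loop pattern: the machine produces an insight, and a human makes the downstream decision. A minimal sketch, with purely illustrative thresholds and labels:

```python
def machine_insight(patient_features: dict) -> float:
    """Stand-in for a model scoring, say, risk of deterioration (0-1)."""
    return 0.72  # placeholder score for illustration

def clinician_decision(score: float) -> str:
    """The human retains the broader decision about what action to take."""
    if score >= 0.7:
        return "flag for clinician review"  # machine informs, never orders
    return "continue routine monitoring"

print(clinician_decision(machine_insight({"age": 64, "heart_rate": 110})))
```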
HL: Before the pandemic, Vinod Khosla was going on and on about how we don’t need doctors. The pandemic changed that dialogue.
Chornenky: The pandemic forced a recognition of the importance of investing in technology. But I don't think it did much to support Khosla's case, which, if it can be described as doctors inevitably being replaced by computers one day, is a poor formulation. There are many tasks currently performed by physicians and medical technicians that will indeed be replaced with AI, i.e., computational methods and machines that simulate or surpass human cognitive capabilities. But the overall role of the physician is much too broad and requires the kind of big-picture thinking, creativity, and bedside manner that machines are unlikely to achieve in the coming centuries, if ever.
Scott Mace is a contributing writer for HealthLeaders.
Insights from the financial sector's portfolio management practices are helpful as healthcare organizations seek to govern and maximize their use of AI.
Machines won't fully control decision-making in healthcare, but AI-powered automation of tasks and data-informed care decisions will continue to grow.
The pandemic undercut forecasts of AI-powered healthcare without human physicians, whose role is too broad to be replaced, perhaps ever.