AI has been used in healthcare for decades now. Some say they want more regulation

AYESHA RASCOE, HOST:

Artificial intelligence is used in everything from creating funny images on the Internet to organizing office records. And increasingly, it's also an integral component of health care. But some patient advocacy groups are worried. They want more regulation of AI in the health care industry. Last month, experts from the U.S. Food and Drug Administration agreed in an article they wrote for JAMA, the Journal of the American Medical Association. To tell us more, we're joined by the senior author of the paper, FDA Commissioner Dr. Robert Califf. Thank you for being here.

ROBERT CALIFF: Ayesha, it's great to be with you.

RASCOE: You write that over the past 30 years, the FDA has authorized nearly 1,000 medical devices that use artificial intelligence. Can you give us an idea of what these devices do?

CALIFF: You know, I think of it in terms of categories. There's definitely a category where most of the action has been, and that's imaging - a CT scan or an X-ray that you want to make sense of. The radiologist looks at it as a human being, and there's limited processing capability that we all have. A computer never gets tired, doesn't need a cup of coffee and can analyze the data on a broad scale.

But the much, much larger area that has implications for all of us is what's called decision support. And that is, either a consumer looking on the Internet or your doctor or a nurse dealing with a patient. You're inputting information. That information is processed, and it gives you a recommendation.

RASCOE: How does the FDA right now evaluate AI-enabled devices? Can you sketch out the process just generally for a layperson?

CALIFF: There's actually a set of laws that governs this. Take an AI algorithm that's embedded in a high-risk device - let's say a cardiac defibrillator. That's a really good example. If your heart suddenly stops beating, you want a shock to get your heart started again. You want to make sure that algorithm is right. That is very carefully regulated, and you've got to do clinical studies, typically in people, that show that the algorithm actually really works. Then you have administrative uses of AI. Those aren't regulated at all. That's administration and finance.

And in the middle, we have this vast area of decision support that I described. And that ranges from, I'm working out, and I want to know what my heart rate is, to I've got heart failure. I could die. I want to make sure when I work out, I've got careful control of the parameters. And that would be more regulated, so there's a spectrum.

Bottom line on all this is that this field will be so big, we couldn't possibly hire enough people at the FDA to regulate it all on our own. And so it's very important that we get the clinical community - the health systems, the professions, the people that you see for your health care - to be very involved in developing what we would call an ecosystem of regulation where these algorithms can be constantly evaluated.

RASCOE: So there are some advocates who seem to think that the products that help with, say, decision support - they should also go through clinical trials. Do you agree with that? Do you think that they should go through clinical trials?

CALIFF: Well, you know, in addition to being a cardiologist, I was a clinical trialist. That's what I did for a living. So I've never seen a clinical trial I didn't like. And of course, the more clinical trials we can do on these algorithms, the better. But I'd also point out there's an element of this which is different. You know, when you produce a drug or a traditional device, it's the same thing for the rest of its existence. Here, the decision support, the AI algorithms are changing every day. And so the real key here is making sure they're safe at the beginning and then monitoring them. So I think the consumer advocacy groups are onto something here, and we'll have to work out exactly where to draw a line in the premarket phase for these algorithms, what it takes for them to get on the market versus what happens afterwards.

RASCOE: I think one concern that people will have - and one example that's come up - is that if an AI product is using billing records to predict someone's future health care needs, the AI may falsely presume that low-income people have fewer needs than wealthier people. But in reality, it's just that they have received fewer services because they don't have the resources to get them. Do your review procedures take things like that into account, or is that something that the FDA would have any insight on?

CALIFF: Well, you're asking a really important question there. And yes, we have a lot of insight into this. And you're completely right in the example that you gave, that if you have an AI algorithm, it's only as good as the data that goes in. But we're hyperaware of what you said, which is bias in the formulation of an algorithm leads to biased output and can make our inequities even worse than they already are, and they're pretty bad right now.

RASCOE: What do you say to those people who are concerned that, you know, a computer may be deciding whether they have some condition or not?

CALIFF: I think one of the most important things is that, for the most part, for important decisions, AI should inform the clinician with whom you're working, not make the decision for that person. The decision should be made by a human being who is interacting with you and can understand dimensions of you that are not necessarily easily picked up by a computer.

On the other hand, I'd also point out that your doctor or nurse, as I mentioned at the beginning of this conversation, could be tired, might have had a spousal fight in the morning before going to work, could be overwhelmed by other issues going on. A computer doesn't get tired, and it's not distracted by other things in the environment. In my 40 years of working on computer-human interfaces, the combination is always better as long as the rules of the road are right.

RASCOE: That's FDA Commissioner Dr. Robert Califf. Thank you so much for talking with me.

CALIFF: It's a real pleasure, and thanks for giving me a chance to go over this.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
