Harvey: Welcome to the GPT Podcast. I'm Dr. Harvey Castro, here with my co-host Brooks, and today we're continuing our AI in Healthcare series.

Brooks: Great to be here, Dr. C. What's the topic for today?

Harvey: We have quite a hot topic today, Brooks. Have you heard about artificial intelligence making serious healthcare decisions?

Brooks: Sounds like something out of a sci-fi novel. How does this work?

Harvey: That's an excellent question, Brooks. For years, important choices about who got medical coverage were made in back offices at health insurance companies. But now, allegedly, certain life-altering decisions are being made by artificial intelligence algorithms, believe it or not.

Brooks: Interesting, but it seems like this would raise some significant concerns too, yeah?

Harvey: You're spot on. In fact, two families recently sued UnitedHealth Group, contending that the insurance company used artificial intelligence to deny or shorten rehabilitation stays for two elderly men in the months before they passed away.

Brooks: Wow, that doesn't sound right. How does AI factor into these decisions?

Harvey: The lawsuit alleges that UnitedHealth's artificial intelligence system makes "rigid and unrealistic" determinations about what it takes for patients to recover from serious illnesses, denying them care in skilled nursing and rehab centers that, according to the lawsuit, should be covered under Medicare Advantage plans. Essentially, this AI technology is accused of overriding doctors' recommendations for patient care.

Brooks: Sounds alarming. I mean, shouldn't these assessments be done by medical professionals rather than an AI?

Harvey: Exactly! That's the central grievance the families are raising.
They believe the insurance company is denying care to elderly patients who aren't likely to protest, even though, the families argue, the AI does a poor job of evaluating people's needs. They also allege the AI program has an astonishingly high error rate.

Brooks: High error rate? What exactly does that mean?

Harvey: According to court documents, more than 90% of patient claim denials that were appealed got overturned, either through internal appeals or by a federal administrative law judge. Yet very few patients choose to fight the algorithm's determinations: only about 0.2% go through the appeals process at all. That points to a serious gap in the AI's decision-making.

Brooks: So, are these families claiming that the AI is doing more harm than good?

Harvey: The lawsuit suggests, quite strongly, that it is. One example involves a 91-year-old man who fell at home and broke his leg and ankle. His doctor recommended physical therapy, but after only three weeks the insurer terminated his coverage. His family had to foot a bill of around $12,000 to $14,000 per month for about a year of therapy at the facility.

Brooks: That's a lot for a family to carry. Is this part of a larger pattern in the insurance industry?

Harvey: It seems so. Attorneys arguing the case say high denial rates are part of the insurance company's strategy. And this issue isn't isolated to UnitedHealth, either. Back in July, a law firm filed a case against CIGNA Healthcare alleging it used AI to automate claims rejections.

Brooks: Isn't there some sort of legal framework to regulate the use of AI here?

Harvey: You'd hope so, but as of now the area is still a gray zone. AI litigation is a growing body of law, and there's been increasing legal attention on the use of emerging technology in several fields, including healthcare.

Brooks: You mentioned earlier that the AI is overriding the recommendations of medical clinicians.
Is this being challenged legally?

Harvey: Indeed it is. The families suing UnitedHealth, and similar cases, highlight the importance of keeping a human in the loop. Even though AI can manage vast amounts of data and be genuinely useful, it lacks the nuance and common-sense judgment that a human clinician provides.

Brooks: Interesting. Are any long-term impacts of this AI-led decision-making emerging?

Harvey: Well, the effects are already visible, and quite worrisome. According to a report from the Center for Medicare Advocacy, AI programs often made coverage decisions that were more limiting than what standard Medicare would have allowed. In other words, the AI was more restrictive in its coverage decisions than human reviewers would have been.

Brooks: That's a lot to digest, Dr. C. It's fascinating but also a bit unsettling. How does the industry anticipate handling these concerns with AI?

Harvey: That's the million-dollar question, and there's no easy answer. AI is becoming increasingly integrated into our lives, but it's crucial to critically evaluate its implications, especially in sensitive areas like healthcare. The bigger question is how to steer AI's use responsibly, whether through legal frameworks or ethical guidelines, without letting bottom-line business interests override patient care.

Brooks: Seems like we have a lot more to explore in this field, yes?

Harvey: Absolutely. Thanks for this wonderful discussion today.

Brooks: As always, Dr. C, it's been a pleasure. Until next time.

Harvey: Yes, until next time. Take care, everyone!