Harvey: Welcome to the GPT Podcast. I'm Doctor Harvey Castro, or Dr. C if that's easier, here with my co-host Brooks. Today, we're continuing our AI in Healthcare series. So Brooks, how are you feeling about this topic?

Brooks: Well, Dr. C, I have to admit, it's quite fascinating. I've heard some chatter about it, but I'm keen to get into the nitty-gritty details.

Harvey: That's great to hear, Brooks. Let's dive right in. Gary Gensler, the chair of the SEC, recently stated that AI could trigger a, and I quote, "nearly unavoidable" financial crash if regulators don't quickly establish control.

Brooks: That sounds concerning. Can you expand on why Gensler believes this?

Harvey: That's an excellent question. According to Gensler, the lack of diversity in the AI models companies use could play a significant role. While open-source AI models exist, most entities today rely on a small number of tools developed by a handful of companies, like OpenAI's ChatGPT. This concentration could end up posing a threat to U.S. financial stability.

Brooks: Hmm, intriguing... Could you explain more about this issue of 'diversity' in AI models?

Harvey: Absolutely. Gensler calls this a horizontal issue: many institutions rely on the same base model or data aggregator. If something goes wrong with that model or aggregator, it could have catastrophic effects on the financial market.

Brooks: I see... But aren't there already regulations in place for such scenarios?

Harvey: Well, this is the tricky part. Most of our regulation is written to address individual institutions, but systemic AI risk requires broader solutions. It's a challenging financial stability issue to tackle.

Brooks: Has anything like this ever happened before, Dr. C?

Harvey: In fact, it has. You might remember the 2010 "flash crash," when the stock market suddenly plummeted, briefly wiping out roughly a trillion dollars in value, and then rapidly rebounded.
This sent regulators and market participants scrambling for answers. Investigators concluded that high-frequency trading algorithms had contributed to the crash with a cascade of rapid trades.

Brooks: Wow! I didn't know that. What about the companies that provide these AI services?

Harvey: Great point. As Gensler notes, the tech companies that build these models and the cloud computing companies that offer AI services aren't subject to the strict regulations of Wall Street. That adds an extra layer of complexity.

Brooks: Definitely sounds like a lot to chew on. Last question, Dr. C: are there any steps in place to avoid such a situation?

Harvey: That's an excellent question. U.S. regulators have slowly begun launching initiatives to regulate artificial intelligence, due in part to concerns about potential monopolies and anticompetitive effects. The SEC has proposed a rule requiring firms to "address conflicts of interest" arising from their use of predictive analytics.

Brooks: Well, this has been enlightening, Dr. C. I look forward to diving deeper into these issues in our future discussions.

Harvey: I'm glad you found it informative, and I'm sure our listeners did too. That concludes our episode for today. Thanks for tuning in, and we look forward to keeping the conversation going on the next edition of the GPT Podcast.