Opinion | Digital Doctors Are Coming. Regulators Need to Catch Up.

Baloescu is an emergency physician, assistant professor, and AI researcher.

Microsoft’s new artificial intelligence (AI)-powered healthcare suite promises to “shape a healthier future” through advances in everything from medical imaging to nursing workflows, painting a rosy picture of better patient care. Meanwhile, major health systems and medical schools from Yale to Harvard to the University of Michigan are exploring or rolling out AI initiatives to enhance care delivery and improve access.

Yet, as we stand at this technological crossroads, it’s worth asking whether our enthusiasm for AI in healthcare might be outpacing our ability to navigate its potential pitfalls.

As a physician, I’ve seen both the benefits and limitations of AI-assisted triage. In my emergency department, we use AI to prioritize patients based on admission likelihood. While it helps with patient flow, it can miss complex cases, like an elderly patient on blood thinners with a head injury. For now, medical staff maintain significant oversight, meticulously double-checking AI-generated recommendations.

Most published AI research in medicine is still in its infancy, focusing on simple validation rather than large-scale, real-world implementation. But with 950 AI medical devices authorized by the FDA as of August 2024, AI’s influence on critical medical decisions, from diagnosis to treatment planning, appears poised to grow substantially.

The FDA currently approves AI medical tools as devices rather than drugs. This distinction matters because it shapes how thoroughly these AI tools are evaluated and monitored before they’re used in patient care. Medical devices, including AI tools, often undergo a different and sometimes less extensive approval process compared to drugs, which may leave gaps in our understanding of how well they work in real-world healthcare settings — or how they work at all.

Many AI systems are “black boxes,” meaning their decision-making is hard to understand. Like a hypothetical AI that has only ever seen you in a red dress and concludes that “red” defines you, healthcare AI can fixate on misleading patterns, producing results that seem correct but rest on faulty reasoning, which makes it harder for doctors to spot errors.

In addition, most AI models learn to identify patterns and make predictions from large datasets, but their accuracy depends on the quality of the data. For example, a 2018 study found that an AI tool for detecting skin cancer performed poorly on darker skin tones because it was mostly trained on lighter-skinned patients. Medical data can also reflect historical biases — if women are underdiagnosed for heart disease, for example, AI might misjudge their risk.

To ensure AI in healthcare is safe and fair, we need stronger rules and oversight. The FDA should require ongoing reporting on AI performance in real-world settings, not just during the initial approval process. Currently, manufacturers must report serious incidents, but the FDA is still developing a regulatory framework for AI devices that balances safety with the evolving nature of the technology.

Developers should also make AI more transparent by providing tools for clinicians and regulators to understand how AI makes its recommendations. Healthcare institutions must track AI performance in actual clinical use to spot issues that may not show up in testing. A new HHS rule holds healthcare organizations responsible for making “reasonable efforts” to identify and lower risks of discrimination in AI tools they use. That’s a good start, but smaller hospitals will need support, and everyone needs clearer guidelines on what “reasonable efforts” means.

A public database of approved AI medical devices that shows their efficacy and reports any problems is key to building trust and ensuring accountability. Like the FDA’s Adverse Event Reporting System for medications, an AI reporting database would provide transparency for AI in healthcare. While the FDA lists AI tools through its Digital Health Center of Excellence, this resource is incomplete. A dedicated AI database would offer comprehensive, real-world insights into device performance, protecting patient care.

Implementing these changes will require additional resources. For instance, the FDA commissioner recently suggested that the agency may need to double its workforce to effectively manage the increased oversight responsibilities associated with new regulations. Funding could come from various sources, including congressional budget allocations, small fees on AI-enabled devices, and contributions from AI companies to a shared regulatory fund.

AI promises to revolutionize healthcare, much like the steam engine once launched the Industrial Revolution. But we aren’t there yet. To lean into the analogy, we haven’t yet built the factories that will simultaneously increase productivity and create miserable working conditions and poor health outcomes. Now is the time to act to promote the former while preventing the latter through oversight and regulation.

Patients and concerned citizens should educate themselves about AI in healthcare. Doctors should inform patients about how it’s used in their care. We should continue to closely monitor AI-generated diagnoses and treatment recommendations, and report any concerns to the medical facility or, if the tool is FDA-approved, to the FDA.

By staying engaged, we’re not just protecting ourselves — we’re helping shape a healthcare system where AI is used responsibly. Our active participation can promote better rules and safeguards, ensuring AI advances within medicine in a way that’s safe, fair, and beneficial for all.

Cristiana Baloescu, MD, is an emergency physician and assistant professor in the Department of Emergency Medicine at Yale University School of Medicine in New Haven. She conducts artificial intelligence research on ultrasound devices, and is a Public Voices fellow of Yale and The OpEd Project.

Disclosures

Baloescu receives research funding from Philips Healthcare and Caption Health (now part of GE Healthcare) to support the development of AI applications for point-of-care ultrasound.
