Dresden Researchers Show Regulatory Shortcomings, Offer Solutions for Safe Implementation of AI in Medicine

Insider Brief

  • A new study funded by the Else Kröner Fresenius Foundation warns that current U.S. and European medical device regulations are not prepared for the rise of autonomous AI agents in healthcare, according to researchers at the Else Kröner Fresenius Center for Digital Health at TU Dresden.
  • Published in Nature Medicine, the study calls for regulatory reforms including voluntary alternative pathways and adaptive oversight frameworks to ensure patient safety as AI agents increasingly manage clinical workflows without continuous human supervision.
  • Researchers recommend treating advanced AI agents more like medical professionals in the long term, granting autonomy only after demonstrating safe, consistent performance in clinical settings.

Researchers at the Else Kröner Fresenius Center (EKFZ) for Digital Health at TU Dresden warn that current medical device regulations in the U.S. and Europe are not prepared for autonomous AI agents in healthcare.

The study, funded by the Else Kröner Fresenius Foundation and published in Nature Medicine, highlights the need for regulatory reforms to ensure patient safety as AI agents advance, according to the researchers.

Unlike earlier AI tools focused on single tasks, new autonomous AI agents can manage entire clinical workflows, integrating external databases and computational tools under the control of large language models (LLMs). These systems can analyze medical images, manage patient data, and guide clinical decisions without continuous human oversight, raising concerns over accountability and risk management.

“We are seeing a fundamental shift in how AI tools can be implemented in medicine,” noted Jakob N. Kather, Professor of Clinical Artificial Intelligence at the EKFZ for Digital Health at TU Dresden and oncologist at the Dresden University Hospital. “Unlike earlier systems, AI agents are capable of managing complex clinical workflows autonomously. This opens up great opportunities for medicine – but also raises entirely new questions around safety, accountability, and regulation that we need to address.”

Researchers say existing regulations were designed for static, narrowly defined technologies that do not evolve after approval. However, AI agents are adaptable and capable of autonomous decision-making, presenting challenges to static regulatory frameworks, the study notes.

The researchers propose several reforms. In the short term, they suggest expanding enforcement discretion policies and classifying certain AI systems as non-medical devices to ease immediate adoption hurdles. Medium-term solutions include developing voluntary alternative pathways (VAPs) and adaptive regulatory frameworks that allow dynamic oversight based on real-world performance data. Long-term, they propose regulating AI agents similarly to medical professionals, granting autonomy only after demonstrating safe, consistent performance through structured training.

The study’s methods involved a review of existing regulatory pathways and analysis of AI agents’ technical characteristics, highlighting gaps between current frameworks and emerging technologies. While regulatory sandboxes offer flexibility for early testing, the authors argue they are insufficient for broad deployment due to resource limitations.

The authors caution that without substantial reform, the meaningful adoption of autonomous AI agents in healthcare may remain stalled. Collaboration among regulators, healthcare providers, and developers is essential to design flexible, safety-focused frameworks that accommodate the unique features of AI agents.

“Realizing the full potential of AI agents in healthcare will require bold and forward-thinking reforms,” said Stephen Gilbert, Professor of Medical Device Regulatory Science at the EKFZ for Digital Health at TU Dresden and last author of the paper. “Regulators must start preparing now to ensure patient safety and provide clear requirements to enable safe innovation.”

Greg Bock

Greg Bock is an award-winning investigative journalist with more than 25 years of experience in print, digital, and broadcast news. His reporting has spanned crime, politics, business, and technology, earning multiple Keystone Awards and honors from the Pennsylvania Association of Broadcasters. Through the Associated Press and Nexstar Media Group, his coverage has reached audiences across the United States.
