Exploring International Regulations Pertaining to AI Technology in Medical Devices
In the rapidly evolving landscape of healthcare, AI-enabled medical devices, including software as a medical device (SaMD), are making significant strides in administrative tasks and patient care. However, the unique nature of these devices presents regulatory challenges that global authorities are actively addressing.
The Food and Drug Administration (FDA) has been at the forefront of these developments, approving the first AI-enabled medical device in 1995: an automated interactive gynaecological instrument for analysing Papanicolaou (Pap) cervical smears. Despite its accuracy in detecting cervical cancer, the device, known as PAPNET, was not cost-effective enough for widespread adoption.
Today, 97% of AI-related approvals for such devices have occurred in the last 10 years. Most applicants for AI-enabled devices must demonstrate that their device is substantially equivalent to an already FDA-authorized device (a predicate device), while devices without a predicate are granted approval through De Novo classification.
When assessing the safety and effectiveness of algorithms within an AI-enabled SaMD, the FDA considers factors including data quality, robustness, and clinical performance. Quality system and post-market requirements, including adverse event reporting, also apply to AI-enabled devices.
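To make the "clinical performance" factor concrete, the sketch below computes standard diagnostic-accuracy measures on a held-out test set. It is a minimal illustration only: the function name, data, and metric set are hypothetical and are not drawn from any FDA submission requirement or template.

```python
# Hypothetical sketch: summarising the clinical performance of a binary
# diagnostic algorithm on a held-out test set. The data and metric set
# are illustrative only, not regulatory criteria.

def performance_summary(y_true, y_pred):
    """Return sensitivity, specificity, and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "accuracy": (tp + tn) / len(y_true) if y_true else None,
    }

# Example: ground-truth labels vs. algorithm output on a small test set.
print(performance_summary([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```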
However, current regulatory pathways may be stifling further adoption of AI within medical devices. The majority of AI-enabled devices in the US reach the market via the 510(k) pathway; this streamlined clearance process, while beneficial for innovation, has raised concerns about the rigour of its safety and efficacy assessments.
One of the key challenges in regulating AI-enabled SaMD is algorithm adaptability. Unlike traditional devices, AI/ML SaMD often changes after initial market approval, complicating premarket validation and ongoing oversight. The FDA has introduced the concept of Predetermined Change Control Plans (PCCPs) to anticipate and manage algorithm changes without requiring a new approval each time.
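Conceptually, a PCCP pre-specifies which modifications are permitted and how they must be verified before release. The sketch below is a hedged illustration of that idea only; the class, thresholds, and field names are invented for the example and do not come from FDA guidance.

```python
# Hypothetical sketch of a PCCP-style gate: a retrained model version is
# released only if it stays within performance bounds pre-specified in the
# plan. All thresholds and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    min_sensitivity: float        # floor agreed in the pre-specified plan
    min_specificity: float
    max_drop_vs_baseline: float   # permitted regression against the cleared model

def change_is_acceptable(plan: ChangeControlPlan,
                         baseline: dict, candidate: dict) -> bool:
    """Check a candidate model's test metrics against the pre-specified plan."""
    if candidate["sensitivity"] < plan.min_sensitivity:
        return False
    if candidate["specificity"] < plan.min_specificity:
        return False
    # Reject changes that regress too far from the cleared baseline model.
    drop = baseline["sensitivity"] - candidate["sensitivity"]
    return drop <= plan.max_drop_vs_baseline

plan = ChangeControlPlan(min_sensitivity=0.90, min_specificity=0.85,
                         max_drop_vs_baseline=0.02)
baseline = {"sensitivity": 0.94, "specificity": 0.91}
candidate = {"sensitivity": 0.93, "specificity": 0.90}
print(change_is_acceptable(plan, baseline, candidate))  # True for these numbers
```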
Another significant challenge is the fragmented and complex regulatory landscape. Multiple regulatory systems and overlapping requirements pose barriers, particularly in regions like the EU. For example, AI medical devices in the EU must undergo conformity assessment under both the AI Act and the Medical Device Regulation, creating a significant compliance burden.
Regulators worldwide are responding to these challenges by integrating iterative oversight, harmonising requirements, emphasising risk and transparency, and facilitating innovation while maintaining patient safety and trust. The FDA's lifecycle-based, agile regulatory approach, the EU's AI Act and Medical Device Regulations, the UK MHRA's reliance on international standards, and broader stakeholder engagement and cross-jurisdiction coordination all form part of this global response.
In January 2025, the FDA issued draft guidance entitled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations", which aligns with prior guidance documents and proposes lifecycle management considerations and specific recommendations to support marketing submissions for AI-enabled medical devices.
As of October 2024, 22 low-to-moderate-risk devices had received approval via the De Novo pathway, and by March 2025 over 1,000 AI-enabled medical devices had been approved by the FDA. Some clinical fields have been quicker to adopt AI largely because AI is well suited to pattern-recognition tasks such as image and waveform analysis. Over 700 of the FDA-authorized AI medical devices fall within radiology, followed by around 100 in cardiology and just over 30 in neurology.
In conclusion, the regulatory landscape for AI-enabled medical devices is evolving to meet the challenges posed by these innovative technologies. By balancing innovation, safety, and transparency, regulators worldwide are ensuring that AI-enabled medical devices can deliver on their promise of improving patient care while maintaining trust and safety.