Exploring worldwide guidelines and standards for AI-enabled medical devices
Artificial intelligence (AI) has found a natural niche in medical imaging and radiology devices because of its strength in pattern recognition through image and waveform analysis [1]. This capability has driven the emergence of AI-enabled Software as a Medical Device (SaMD) and is reshaping how such products are developed and regulated.
The Food and Drug Administration (FDA) in the United States has been at the forefront of regulating these innovative devices. When assessing the safety and effectiveness of algorithms within an AI-enabled SaMD, the FDA considers factors such as data quality, robustness, and clinical performance [2].
The first AI-enabled medical device, an automated interactive gynaecological instrument for analysing Papanicolaou (Pap) cervical smears, was approved by the FDA in 1995 [3]. Since then, only four AI-enabled devices have been authorized through premarket approval, the most rigorous pathway, reserved for high-risk devices.
However, the regulatory challenges for AI-enabled SaMD globally are significant. Managing the unique adaptability of AI algorithms after approval, ensuring ongoing safety through post-market surveillance, navigating evolving and fragmented regulations, and addressing dual compliance requirements, especially in regions like the EU, pose formidable hurdles [1].
Advances in this area include the emergence of clearer regulatory frameworks, the development of regulatory sandboxes for AI testing, lifecycle management guidance, and successful precedents of AI medical device approvals that demonstrate viable regulatory pathways [5].
One such advance is the FDA's emphasis on predetermined change control plans (PCCPs) for AI/ML-based SaMD, which allow pre-specified algorithm changes to be made without full new approvals unless the associated risks change substantially [4]. This lifecycle approach supports iterative improvement while maintaining safety.
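To make the PCCP idea concrete, the sketch below shows one way a manufacturer might encode pre-specified modifications and check whether a proposed algorithm update stays within them. This is a minimal, hypothetical illustration: the class, field names, and performance bounds are assumptions for exposition, not structures or thresholds defined by the FDA.

```python
from dataclasses import dataclass


@dataclass
class PrespecifiedChange:
    """One modification pre-authorized in a PCCP (hypothetical structure)."""
    name: str                     # e.g. "periodic retraining on new imaging data"
    max_sensitivity_drop: float   # largest tolerated decrease vs. the cleared baseline
    max_specificity_drop: float
    intended_use_unchanged: bool  # changes that alter intended use fall outside a PCCP


def within_pccp(change: PrespecifiedChange,
                sensitivity_delta: float,
                specificity_delta: float) -> bool:
    """Return True if a proposed update falls inside the pre-specified bounds.

    Under the PCCP concept, an update outside these bounds would require a
    new regulatory submission rather than an internal change record.
    """
    return (
        change.intended_use_unchanged
        and sensitivity_delta >= -change.max_sensitivity_drop
        and specificity_delta >= -change.max_specificity_drop
    )


# Example: a retraining run that slightly improves sensitivity and loses
# half a point of specificity stays within a one-point tolerance.
retrain = PrespecifiedChange("periodic retraining", 0.01, 0.01, True)
print(within_pccp(retrain, sensitivity_delta=0.004, specificity_delta=-0.005))  # True
```

The point of such a structure is that the acceptable envelope of change is agreed in advance, so routine updates can be verified against it rather than triggering a fresh review each time.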
The UK's Medicines and Healthcare products Regulatory Agency (MHRA) has implemented an "AI airlock" regulatory sandbox in collaboration with the National Health Service (NHS) to test AI-enabled medical device regulatory solutions experimentally before wider deployment, supporting innovation with monitored oversight [1].
In Europe, the EU AI Act, in force since August 2024 with phased implementation through 2027, categorizes medical AI as high-risk, requiring dual conformity assessments under both the AI Act and Medical Device Regulation (MDR). Manufacturers must engage trained Notified Bodies for AI-specific evaluations [3].
Navigating shifting regulations while anticipating the commercial prospects of new AI-enabled devices is crucial for maximizing AI's potential. The first AI device approved in 1995, PAPNET, was shown to be more accurate at diagnosing cervical cancer than human pathologists, but it was not sufficiently cost-effective for widespread adoption [3].
As of October 2024, 22 low- to moderate-risk AI-enabled devices had received marketing authorization via the De Novo pathway [6]. If adaptive AI is deployed within SaMD for clinical applications, developers, engineers, and regulators must carefully consider what data the algorithm will have access to for continued learning [7].
Over 700 of the FDA-authorized AI medical devices fall within radiology, followed by approximately 100 in cardiology and just over 30 in neurology [8]. The learning component of AI, which allows it to continually learn from new data and improve its accuracy, needs critical consideration when implemented within a medical device [7].
If an AI-enabled device makes specific recommendations around a diagnosis or treatment, it falls under FDA regulations [9]. AI software tools intended to assist with administrative tasks such as scheduling, inventory, or financial management are exempt from FDA regulations [9].
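As a rough illustration of that distinction only, the function below treats a product's stated intended use as the deciding factor. The categories and keyword matching are assumptions made for exposition; real determinations rest on the full intended use and FDA guidance, not string matching.

```python
def fda_device_status(intended_use: str) -> str:
    """Hypothetical illustration of the regulated-vs-exempt distinction above."""
    clinical_terms = ("diagnos", "treatment recommendation", "detect disease")
    administrative_terms = ("scheduling", "inventory", "financial management")

    use = intended_use.lower()
    if any(term in use for term in clinical_terms):
        return "FDA-regulated medical device"
    if any(term in use for term in administrative_terms):
        return "exempt from FDA device regulation"
    return "needs case-by-case assessment"


print(fda_device_status("Detect disease in chest CT and recommend follow-up"))
print(fda_device_status("Clinic appointment scheduling assistant"))
```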
By March 2025, over 1,000 AI-enabled medical devices had been approved by the FDA, with 97% of these approvals occurring in the last 10 years [10]. Once the regulations are more clearly defined, we may see a new generation of commercially successful AI-enabled devices in applications and fields that are currently untapped.