Your AI-powered clinical decision support software may or may not require FDA clearance. The line between exempt CDS and a regulated medical device is thinner than most programs assume — and the FDA's final guidance on Clinical Decision Support software has now made the criteria explicit and binding. For QA/RA professionals and medical device directors, this is the framework you will be working within from here forward.
The FDA confirmed it during the Town Hall on the final CDS guidance: the four criteria from Section 3060 of the 21st Century Cures Act govern all clinical decision support software. That includes large language models. No separate framework. No special pathway. No carve-outs based on the underlying technology.
The four criteria that determine whether a CDS function is exempt or a regulated device, paraphrasing Section 520(o)(1)(E) of the FD&C Act:

1. The function is not intended to acquire, process, or analyze a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system.
2. It is intended to display, analyze, or print medical information about a patient or other medical information, such as peer-reviewed clinical studies or clinical practice guidelines.
3. It is intended to support or provide recommendations to a health care professional (HCP) about prevention, diagnosis, or treatment of a disease or condition.
4. It is intended to enable the HCP to independently review the basis for those recommendations, so that the HCP does not rely primarily on them to make a clinical decision about an individual patient.
Meet all four criteria and your function is exempt. Fail on any one — in practice, Criterion 4 is where most programs get caught — and you are in device territory, with everything that implies for your submission strategy and quality system.
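The all-four-or-nothing logic can be sketched in code. This is a minimal illustration only, assuming a simple per-function record: the class and field names are this sketch's own invention, and the real determination is a documented, contextual judgment, not a boolean checklist.

```python
# Illustrative sketch of the Section 520(o)(1)(E) exemption logic.
# Field names are this example's shorthand, not FDA terminology.
from dataclasses import dataclass

@dataclass
class CriteriaAssessment:
    """Yes/no outcome for each criterion, assessed per software function."""
    c1_no_image_ivd_signal: bool        # not acquiring/processing images or signals
    c2_displays_medical_info: bool      # displays/analyzes medical information
    c3_supports_hcp_recommendation: bool  # supports/provides recommendations to an HCP
    c4_hcp_can_independently_review: bool  # HCP can review the basis; no primary reliance

    def is_exempt(self) -> bool:
        # Exempt only if ALL four criteria are met; failing
        # any one puts the function in device territory.
        return all((
            self.c1_no_image_ivd_signal,
            self.c2_displays_medical_info,
            self.c3_supports_hcp_recommendation,
            self.c4_hcp_can_independently_review,
        ))

# A time-critical risk score relied on primarily by the HCP fails
# Criterion 4, so it is not exempt even though it meets criteria 1-3.
risk_score = CriteriaAssessment(True, True, True, False)
print(risk_score.is_exempt())  # False
```

The structure makes the asymmetry obvious: there is no partial credit, and no single criterion can rescue a failure on another.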
Criterion 4 dominated the Q&A at the FDA Town Hall, and for good reason. It is the criterion that most frequently tips a CDS function from exempt software into a regulated medical device, and it is the one that requires the most nuanced analysis from your team.
The critical point the FDA made explicit: clinical context alone does not determine device status. Operating in an ICU or acute care setting does not automatically trigger regulation. What matters is whether the HCP is primarily relying on the software's recommendation in a time-sensitive decision — without a meaningful opportunity to independently review the basis before acting.
The FDA provided a concrete example from page 11 of the guidance: a cardiovascular risk assessment tool generating a 24-hour risk score. That specific tool fails Criterion 4 — not because all cardiovascular risk scoring is excluded from the exemption, but because the time-sensitive nature of the clinical decision that follows from that particular output removes the HCP's ability to independently verify before acting. The failure is contextual. It is not a blanket rule about cardiovascular software.
The practical implication for your regulatory program: the Criterion 4 analysis has to be grounded in documented clinical workflow, not in care setting. For each CDS function, record the time constraints on the HCP's decision and the degree to which the software output is the primary input to that decision.
A recurring question at the Town Hall: do alarms and alerts get a pass under Section 520(o)(1)?
They do not. The FDA directed teams back to Criterion 4 as the evaluation framework for alert and alarm functions. If an alarm is driving time-critical HCP decisions primarily based on the software's output, it is subject to the same four-criteria analysis as any other CDS function. The Section 520(o)(1) exemption does not create a separate category for alerts.
Specific product areas also featured prominently in the Town Hall discussion, and the pattern of the answers was consistent with the rest of the session: classification is decided function by function against the four criteria, not product category by product category.
For software functions where the Criterion 4 analysis is genuinely ambiguous — a reasonable outcome given the contextual nature of the standard — the FDA specifically mentioned the 513(g) classification request process during the Town Hall.
A 513(g) submission produces a written FDA determination on the regulatory classification of your software function. This is not a workaround or an admission of uncertainty; it is the intended mechanism for resolving ambiguous cases. For QA/RA teams, a formal determination also serves as defensible documentation for internal governance, notified body reviews, or M&A due diligence.
If your program includes multiple CDS components, a systematic mapping exercise — each function assessed against all four criteria, with Criterion 4 analysis documented — is a proportionate first step. The 513(g) path is most valuable when that analysis yields a genuinely uncertain result, not as a substitute for doing the mapping work internally.
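One way to structure that mapping exercise is sketched below. The record layout, function names, and rationales are hypothetical illustrations, not examples drawn from the guidance; the substance of each assessment remains your team's documented judgment.

```python
# Hypothetical portfolio-mapping sketch: one record per CDS function,
# with the Criterion 4 rationale captured as free text so the analysis
# is auditable. All entries below are invented for illustration.
CRITERIA = ("C1", "C2", "C3", "C4")

def classify(record: dict) -> str:
    """Exempt only if all four criteria are met; any failure is device territory."""
    failed = [c for c in CRITERIA if not record[c]]
    if not failed:
        return "exempt"
    return f"device territory (fails {', '.join(failed)})"

portfolio = [
    # name, C1..C4, and the documented Criterion 4 workflow evidence
    {"name": "reference lookup with cited guidelines",
     "C1": True, "C2": True, "C3": True, "C4": True,
     "C4_rationale": "HCP reviews the cited basis before acting; no time pressure"},
    {"name": "24h cardiovascular risk score",
     "C1": True, "C2": True, "C3": True, "C4": False,
     "C4_rationale": "time-sensitive decision; score is the primary input, "
                     "no meaningful opportunity for independent review"},
]

for rec in portfolio:
    print(f"{rec['name']}: {classify(rec)}  [C4: {rec['C4_rationale']}]")
```

Functions whose Criterion 4 row is genuinely uncertain after this exercise are the candidates for a 513(g) request; the free-text rationale field is what turns a checklist into defensible documentation.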
The FDA's explicit confirmation that LLMs fall under the same four-criteria framework settles an open question that had been circulating in some product organizations. There is no separate AI/ML CDS pathway, and there is no indication one is being developed.
The practical question for your program is no longer whether this guidance applies to your AI — it does. The question is which of your software's CDS functions fails Criterion 4, and what that means for your regulatory strategy.
For SaMD programs already under FDA oversight, the final guidance clarifies existing criteria rather than introducing new obligations. For teams still determining whether their AI-enabled clinical software requires a submission, the four-criteria framework is now the definitive starting point. Build your Criterion 4 analysis around documented clinical workflow, time constraints on HCP decision-making, and the degree to which the software output is the primary input to that decision — not the care setting, not the technology stack.
The FDA's final CDS guidance establishes a technology-neutral, criteria-based framework with no exceptions for AI or LLMs. Criterion 4 — the time-critical HCP decision test — is where most regulatory risk concentrates, and it demands contextual, evidence-backed analysis. Map your software functions against all four criteria now. Document your Criterion 4 analysis with workflow evidence. And when classification is genuinely uncertain, use the 513(g) process for what it was designed for: a definitive answer before you need one.