| Study type | Purpose | Typical design | Comparator | Statistical powering | Regulatory role | Plain-language summary |
|---|---|---|---|---|---|---|
| Bench / Analytical / Preclinical | Verify design inputs, analytical performance, and safety margins before human use. | Laboratory testing, benchtop simulation, animal studies; for IVDs: precision, limit of detection, interference (see the LoD sketch after the table). | No concurrent clinical comparator. | Not applicable (non-clinical). | Foundational evidence in technical documentation and performance evaluation. | Non-human evidence demonstrating the product performs safely and as intended under controlled conditions. |
| Human Factors / Usability Validation | Validate that intended users can use the device safely and effectively for critical tasks. | Summative testing with intended users and environments; focuses on use-related risk mitigation. | No; success criteria are task-based. | Scenario-driven and coverage-based rather than powered for clinical endpoints. | Supports instructions for use and risk controls; complementary to clinical evidence. | A structured assessment of real-world use to minimise use-related hazards and confirm safe operation. |
| Early Feasibility / First-in-Human | Initial clinical safety, device handling, and design iteration. | Small number of participants, few sites, protocol flexibility to enable iterative changes. | Often none; may include descriptive comparison to standard practice. | Not powered for effectiveness; focused on safety and feasibility signals. | Exploratory; informs subsequent feasibility or pivotal design. | The first cautious step in humans to verify that the concept is safe enough to refine further. |
| Feasibility / Pilot | Refine endpoints, event rates, operational logistics, and effect-size assumptions. | Prospective; may include controls; protocol learning for pivotal planning. | Sometimes; may instead benchmark against objective criteria. | Generally not powered for confirmatory conclusions. | De-risking step before confirmatory evaluation. | A rehearsal study that clarifies how to run the confirmatory trial and what to measure. |
| Pivotal (Confirmatory) | Provide primary evidence of safety and effectiveness/performance for approval and labelling. | Pre-registered protocol and Statistical Analysis Plan (SAP); bias control (randomisation, blinding, independent adjudication); multi-centre. | Yes (active control/standard of care) or a justified benchmark such as an Objective Performance Criterion (OPC). | Statistically powered to test pre-specified primary endpoints and margins (see the sample-size sketch after the table). | Core evidence for initial authorisation and claims. | A definitive study designed to answer whether benefits outweigh risks for the intended use. |
| Pragmatic / Registry-based Randomised Controlled Trial | Assess effectiveness in routine care using registries or electronic records. | Broad inclusion; randomisation embedded in clinical workflow or registry. | Yes; randomised comparison. | Powered; often large sample sizes via existing data systems. | May be pivotal or supportive depending on the question and data quality. | A real-world randomised trial leveraging existing infrastructure to answer comparative questions. |
| Observational (Cohort, Case-Control, Registry, Real-World Evidence) | Characterise utilisation, outcomes, and risks without assigned interventions. | Non-interventional; uses methods to reduce confounding (e.g., propensity scores; see the weighting sketch after the table). | Analytical comparators or benchmarks are possible; no randomisation. | Varies; can be very large. | Supportive; can be primary when randomised trials are infeasible and bias is well controlled. | Evidence derived from routine-care data to understand real-world performance and safety. |
| Post-Market (Post-Market Clinical Follow-up (PMCF) / Post-Market Performance Follow-up (PMPF)) | Confirm performance in wider and longer use; detect rare or late risks; refine labelling. | Prospective registries, targeted studies, surveillance, or real-world data. | Usually not required; may include benchmarks. | Varies by risk and objective. | Lifecycle obligation, especially in the European Union. | Evidence collected after market entry to ensure continued safety and performance. |
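
For the analytical-performance entries in the Bench / Analytical / Preclinical row, the sketch below shows one common parametric way to estimate a limit of blank (LoB) and limit of detection (LoD) from replicate measurements (LoB = mean of blanks + 1.645 × SD of blanks; LoD = LoB + 1.645 × SD of a low-level sample). The replicate values and function names are hypothetical; a real IVD study would follow its own pre-specified protocol rather than this simplified calculation.

```python
import numpy as np

def limit_of_blank(blank_measurements, z=1.645):
    """Parametric LoB: mean of blank replicates plus z times their SD."""
    blanks = np.asarray(blank_measurements, dtype=float)
    return blanks.mean() + z * blanks.std(ddof=1)

def limit_of_detection(blank_measurements, low_sample_measurements, z=1.645):
    """Parametric LoD: LoB plus z times the SD of a low-concentration sample."""
    lob = limit_of_blank(blank_measurements, z)
    low = np.asarray(low_sample_measurements, dtype=float)
    return lob + z * low.std(ddof=1)

# Hypothetical replicate readings (arbitrary signal units).
blanks = [0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 1.0, 0.9]
low_sample = [2.1, 2.4, 1.9, 2.3, 2.0, 2.2, 2.5, 2.1]

print(f"LoB ≈ {limit_of_blank(blanks):.2f}")
print(f"LoD ≈ {limit_of_detection(blanks, low_sample):.2f}")
```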
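
For the Pivotal (Confirmatory) row, the sketch below illustrates the kind of pre-specified sample-size calculation that sits behind "statistically powered to test primary endpoints": a standard normal-approximation formula for comparing two independent proportions. The event rates, alpha, and power are hypothetical placeholders, not values from any particular trial.

```python
import math
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided comparison of two
    independent proportions, using the normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    delta = abs(p_control - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical assumptions: 12% primary-event rate on the control arm, 6%
# expected on the investigational arm, two-sided alpha of 0.05, 80% power.
print(n_per_arm(0.12, 0.06))  # about 354 participants per arm
```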
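
For the Observational row, the sketch below shows one way propensity scores are often used to reduce confounding: fit a treatment-assignment model, derive inverse-probability-of-treatment weights, and compare weighted outcomes. The data are synthetic, and the use of scikit-learn's LogisticRegression is an illustrative choice rather than a prescribed method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic routine-care data: two baseline covariates, non-random treatment
# assignment that depends on them (the source of confounding), and an outcome
# with a true treatment effect of -0.3.
n = 5000
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (age - 65) + 0.8 * severity))))
outcome = 0.02 * (age - 65) + 0.5 * severity - 0.3 * treated + rng.normal(0, 1, n)

# Step 1: estimate propensity scores (probability of treatment given covariates).
X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: inverse-probability-of-treatment weights.
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted difference in mean outcomes as the adjusted effect estimate.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
iptw = (np.average(outcome[treated == 1], weights=weights[treated == 1])
        - np.average(outcome[treated == 0], weights=weights[treated == 0]))
print(f"Naive difference: {naive:.2f}  IPTW-adjusted difference: {iptw:.2f}")
```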