Clinical Study Types — Quick Comparison

Click any row to open a full accordion with: Hallmarks · Device-specific nuances (including Software in a Medical Device (SiMD), Software as a Medical Device (SaMD), Medical Device Software (MDSW)) · In Vitro Diagnostic (IVD) nuances · When required (and when not) · Practical comparison · Quick checklist.

Bench / Analytical / Preclinical
  • Primary purpose: Verify design inputs, analytical performance, and safety margins before human use.
  • Typical design traits: Laboratory testing, benchtop simulation, animal studies; for IVDs: precision, limit of detection, interference.
  • Comparator: No concurrent clinical comparator.
  • Sample size & statistical power: Not applicable (non-clinical).
  • Regulatory role: Foundational evidence in technical documentation and performance evaluation.
  • In plain terms: Non-human evidence demonstrating the product performs safely and as intended under controlled conditions.

Human Factors / Usability Validation
  • Primary purpose: Validate that intended users can use the device safely and effectively for critical tasks.
  • Typical design traits: Summative testing with intended users and environments; focuses on use-related risk mitigation.
  • Comparator: No; success criteria are task-based.
  • Sample size & statistical power: Scenario-driven and coverage-based rather than powered for clinical endpoints.
  • Regulatory role: Supports instructions for use and risk controls; complementary to clinical evidence.
  • In plain terms: A structured assessment of real-world use to minimise use-related hazards and confirm safe operation.

Early Feasibility / First-in-Human
  • Primary purpose: Initial clinical safety, device handling, and design iteration.
  • Typical design traits: Small number of participants, few sites, protocol flexibility to enable iterative changes.
  • Comparator: Often none; may include descriptive comparison to standard practice.
  • Sample size & statistical power: Not powered for effectiveness; focused on safety and feasibility signals.
  • Regulatory role: Exploratory; informs subsequent feasibility or pivotal design.
  • In plain terms: The first cautious step in humans to verify that the concept is safe enough to refine further.

Feasibility / Pilot
  • Primary purpose: Refine endpoints, event rates, operational logistics, and effect size assumptions.
  • Typical design traits: Prospective; may include controls; protocol learning for pivotal planning.
  • Comparator: Sometimes; or benchmark against objective criteria.
  • Sample size & statistical power: Generally not powered for confirmatory conclusions.
  • Regulatory role: De-risking step before confirmatory evaluation.
  • In plain terms: A rehearsal study that clarifies how to run the confirmatory trial and what to measure.

Pivotal (Confirmatory)
  • Primary purpose: Provide primary evidence of safety and effectiveness/performance for approval and labelling.
  • Typical design traits: Pre-registered protocol and Statistical Analysis Plan (SAP); bias control (randomisation/blinding/independent adjudication); multi-centre.
  • Comparator: Yes (active control/standard of care) or justified benchmark such as an Objective Performance Criterion (OPC).
  • Sample size & statistical power: Statistically powered to test pre-specified primary endpoints and margins (see the sketch after this comparison).
  • Regulatory role: Core evidence for initial authorisation and claims.
  • In plain terms: A definitive study designed to answer whether benefits outweigh risks for the intended use.

Pragmatic / Registry-based Randomised Controlled Trial
  • Primary purpose: Assess effectiveness in routine care using registries or electronic records.
  • Typical design traits: Broad inclusion; randomisation embedded in clinical workflow or registry.
  • Comparator: Yes – randomised comparison.
  • Sample size & statistical power: Powered; often large sample sizes via existing data systems.
  • Regulatory role: May be pivotal or supportive depending on the question and data quality.
  • In plain terms: A real-world randomised trial leveraging existing infrastructure to answer comparative questions.

Observational (Cohort, Case-Control, Registry, Real-World Evidence)
  • Primary purpose: Characterise utilisation, outcomes, and risks without assigned interventions.
  • Typical design traits: Non-interventional; uses methods to reduce confounding (e.g., propensity scores).
  • Comparator: Analytical comparators or benchmarks are possible; no randomisation.
  • Sample size & statistical power: Varies; can be very large.
  • Regulatory role: Supportive; can be primary when randomised trials are infeasible and bias is well-controlled.
  • In plain terms: Evidence derived from routine care data to understand real-world performance and safety.

Post-Market (Post-Market Clinical Follow-up (PMCF) / Post-Market Performance Follow-up (PMPF))
  • Primary purpose: Confirm performance in wider/longer use; detect rare or late risks; refine labelling.
  • Typical design traits: Prospective registries, targeted studies, surveillance, or real-world data.
  • Comparator: Usually not required; may include benchmarks.
  • Sample size & statistical power: Varies by risk and objective.
  • Regulatory role: Lifecycle obligation, especially in the European Union.
  • In plain terms: Evidence collected after market entry to ensure continued safety and performance.
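To make the "statistically powered" point for pivotal studies more concrete, the sketch below estimates the sample size for a single-arm study tested against an Objective Performance Criterion, using the standard normal-approximation formula for a one-sided, one-sample proportion test. It is a minimal illustration under hypothetical assumptions (92% expected success, an 85% OPC, 2.5% one-sided alpha, 80% power); a real Statistical Analysis Plan would justify the test, margins, and any attrition inflation.

```python
# Minimal sketch: sample size for a single-arm pivotal study tested against
# an Objective Performance Criterion (OPC), using the normal approximation
# for a one-sided, one-sample proportion test.
# All planning numbers below are hypothetical assumptions, not recommendations.
import math
from statistics import NormalDist

def n_vs_opc(p_expected: float, p_opc: float, alpha: float = 0.025, power: float = 0.80) -> int:
    """Approximate evaluable sample size to show the true success rate
    p_expected exceeds the performance goal p_opc (one-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # critical value for the one-sided alpha
    z_beta = NormalDist().inv_cdf(power)       # quantile matching the target power
    numerator = (z_alpha * math.sqrt(p_opc * (1 - p_opc))
                 + z_beta * math.sqrt(p_expected * (1 - p_expected))) ** 2
    return math.ceil(numerator / (p_expected - p_opc) ** 2)

# Hypothetical example: 92% expected success vs an 85% OPC,
# one-sided alpha of 2.5%, 80% power -> about 176 evaluable participants,
# before inflating for attrition.
print(n_vs_opc(0.92, 0.85))
```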

Tip: Click a row to reveal the full reference-backed guidance for that study type.

Bench / Analytical / Preclinical
Hallmarks of this study type
  • Laboratory and simulation testing that establishes design performance, safety margins, and failure modes before any exposure to human participants.
  • For In Vitro Diagnostics, analytical performance typically includes precision/repeatability, limit of detection, linearity, and interference studies as part of the performance evaluation framework (a minimal calculation sketch follows this list).
  • Animal studies may be used when needed to demonstrate biological safety or functional performance prior to clinical use.
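As one concrete illustration of the analytical metrics above, the sketch below estimates a limit of blank (LoB) and limit of detection (LoD) from replicate measurements using the classical parametric approach (LoB = mean of blanks + 1.645 × SD of blanks; LoD = LoB + 1.645 × SD of a low-level sample). The replicate values are purely hypothetical, and a real performance evaluation would follow the chosen protocol (for example a CLSI EP17-style design) across far more replicates, lots, and days.

```python
# Minimal sketch: parametric limit of blank (LoB) and limit of detection (LoD)
# from replicate measurements. All values are hypothetical; a real study
# would use many more replicates across reagent lots, days, and instruments.
from statistics import mean, stdev

blank_replicates = [0.02, 0.03, 0.01, 0.04, 0.02, 0.03, 0.02, 0.05]       # hypothetical blank signals
low_sample_replicates = [0.11, 0.09, 0.13, 0.10, 0.12, 0.08, 0.11, 0.10]  # hypothetical low-level sample

# Classical parametric estimates (95th percentile via the 1.645 z-factor).
lob = mean(blank_replicates) + 1.645 * stdev(blank_replicates)
lod = lob + 1.645 * stdev(low_sample_replicates)

print(f"LoB = {lob:.3f}, LoD = {lod:.3f} (same units as the measurements)")
```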
Device-specific nuances (including Software in a Medical Device (SiMD), Software as a Medical Device (SaMD), Medical Device Software (MDSW))
  • For physical devices, bench testing spans mechanical, electrical, electromagnetic compatibility, environmental stress, and biological safety.
  • For software in a medical device and standalone software as a medical device, “bench” work includes verification and validation, cybersecurity testing, and simulated-data validation in line with clinical evaluation requirements for software.
In Vitro Diagnostic (IVD)-specific nuances
  • Analytical performance is a distinct pillar alongside scientific validity and clinical performance; analytical studies precede or run in parallel with clinical performance studies.
When this study is (and is not) required
  • Always required to support technical documentation and risk management prior to human exposure.
  • Not sufficient alone to support clinical claims; must be complemented by clinical evidence when claims rely on clinical performance.
Practical comparison: bench/analytical vs other study types

Compared to feasibility or pivotal clinical investigations, bench/analytical studies are faster, fully controllable, and critical to de-risk design—but they cannot answer questions about patient outcomes or user interaction.

Quick checklist
  • Do analytical and bench tests map to risk controls and intended use?
  • Are worst-case configurations tested?
  • For software, are verification/validation and cybersecurity tests traceable to requirements (see the sketch after this checklist)?
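For the traceability point above, here is a minimal sketch of how a requirements-to-tests mapping can be checked automatically. The requirement and test identifiers are invented for illustration; real projects usually maintain this mapping in their lifecycle or test management tooling rather than in ad-hoc scripts.

```python
# Minimal sketch: flag requirements (including cybersecurity requirements)
# that have no linked verification test. All identifiers are hypothetical.
requirements = {"REQ-001", "REQ-002", "REQ-SEC-001", "REQ-SEC-002"}

# Mapping from test case to the requirements it verifies (hypothetical).
test_to_requirements = {
    "TC-010": {"REQ-001"},
    "TC-011": {"REQ-002", "REQ-SEC-001"},
}

covered = set().union(*test_to_requirements.values())
uncovered = sorted(requirements - covered)

print("Requirements without a linked test:", uncovered)  # ['REQ-SEC-002']
```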
Human Factors / Usability Validation
Hallmarks of a human factors validation
  • Summative (validation) testing with intended users, use environments, and representative tasks to confirm safe and effective use of the device (a tabulation sketch follows this list).
  • Focus on prevention of use-related hazards and confirmation that mitigations and labelling are adequate.
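To illustrate how summative results are typically tabulated, the sketch below summarises hypothetical critical-task outcomes: per-task success rates plus the observed use errors that would each need a residual-risk assessment. Task names, counts, and errors are invented; acceptance in a real validation rests on analysing every use error and close call against the use-related risk analysis, not on hitting a numeric threshold.

```python
# Minimal sketch: summarising summative usability results per critical task.
# Task names, counts, and observed use errors are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    participants: int
    successes: int
    use_errors: list[str] = field(default_factory=list)

results = {
    "Set infusion rate": TaskResult(15, 14, ["Decimal point missed once"]),
    "Respond to occlusion alarm": TaskResult(15, 15),
    "Replace battery": TaskResult(15, 13, ["Cover not latched", "Wrong orientation"]),
}

for task, r in results.items():
    rate = r.successes / r.participants
    print(f"{task}: {r.successes}/{r.participants} ({rate:.0%}) success; "
          f"{len(r.use_errors)} use error(s) to assess: {r.use_errors}")
```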
Device-specific nuances (incl. SiMD, SaMD, MDSW)
  • Software interfaces require validation with intended users, including error messaging, alarms, cybersecurity-related tasks (e.g., login, updates), and critical workflows.
In Vitro Diagnostic (IVD)-specific nuances
  • Sample handling, workflow steps, and interpretation (e.g., reader displays) are part of critical tasks; training and labelling validation are expected.
When this is (and is not) required
  • Expected for devices with meaningful human-device interaction; complements, but does not replace, clinical evidence.
Practical comparison

Unlike pivotal trials that answer clinical outcome questions, human factors validation answers whether the user can operate the device safely and effectively under real-world conditions.

Quick checklist
  • Are intended users, environments, and critical tasks correctly identified?
  • Do scenarios reflect realistic training and labelling?
  • Are residual use-related risks acceptable?
Early Feasibility / First-in-Human
Hallmarks
  • Initial clinical exposure to collect proof-of-principle and early safety data; allows design iteration under Investigational Device Exemption (IDE) oversight.
  • Limited participants and sites; heightened monitoring and governance.
Device-specific nuances (incl. SiMD, SaMD, MDSW)
  • Iterative updates to hardware or software can be prospectively managed via IDE supplements and change control during Early Feasibility Studies.
IVD-specific nuances
  • First-in-human clinical use is uncommon for IVDs; most "feasibility" work consists of analytical studies and small clinical pilot evaluations conducted prior to pivotal clinical performance studies.