
International MedTech Safety Conference (IMSC26)

Boston, MA, USA
2-5 June 2026
Workshop 1
How to Demonstrate Safety to Regulators: Turning Risk Files into a Clear, Credible Safety Story
Description: Risk management documentation can run to hundreds of pages or thousands of table rows; instead of persuading reviewers, it often overwhelms them. It isn't the volume that convinces; it's the clarity, completeness, and quality of the argument. Regulators want a logically structured, evidence-backed explanation that your device is safe and effective for its intended use.
This workshop shows how to transform the ISO 14971 risk-management file into a concise safety story using the principles of the safety assurance case method. We'll introduce the core elements, then demonstrate the method on a real-world case study showing how risk management outputs can be woven into a review-ready narrative. We'll unpack what reviewers actually scan for, including logical structure, completeness, traceability, proportionality, and context with rationale, and build an illustrative safety case live.
Attendees leave with a first-principles understanding, reusable templates, and review checklists aligned to FDA expectations and reviewer practice, plus a short research-program case study on how the method guides emerging technologies and produces regulator-ready documentation even as regulatory requirements evolve. This workshop is designed for teams that want to reduce back-and-forth, right-size documentation, increase effectiveness, and shorten review timelines.
Who should attend: RA/QA professionals, risk managers, systems and software engineers, human-factors and clinical specialists, and innovators.
Learning objectives:
- Explain the first-principles foundations of safety.
- Convert risk management outputs into a safety story.
- Make risk-management information comprehensive, convincing, and proportionate, without unnecessary volume.
Workshop 2 (Attendance Limited to 25)
Using ChatGPT to Analyze Device Recalls and Adverse Events
Description: Recalls are an unfortunate reality in MedTech. But every recall starts as a faint signal buried in post-market data. The challenge is detecting it before patients are harmed.
Databases like FDA MAUDE contain millions of reports, yet most analyses rely only on structured fields such as Device Problem or Patient Problem. The real insight often hides in the narrative: what the clinician observed, what the device did, and what went wrong.
Large Language Models (LLMs) such as ChatGPT can analyze this text at scale, revealing hidden patterns traditional tools miss. Using the WATCHMAN TruSeal recall as a case study, this four-hour workshop shows how QA/RA and risk professionals can combine expert judgment with ChatGPT Plus (GPT-4 Turbo) to detect weak signals, trace root causes, and convert narrative data into actionable safety intelligence.
Who should attend: Ideal for QA/RA leaders, post-market surveillance and vigilance teams, and risk or CAPA professionals seeking practical ways to analyze unstructured data. Also valuable for clinicians, engineers, and educators exploring how AI can strengthen recall analysis and risk assurance under FDA’s QMSR framework.
Learning objectives:
- Frame analytical problems and develop an analysis plan
- Prepare and validate data for ChatGPT use
- Identify hidden patterns and insights
- Link adverse-event narratives to recalls
- Gain hands-on practice with reproducible workflows
Prerequisites:
- Active ChatGPT Plus subscription
- Basic Excel or CSV skills (no programming required)
Significance & Take-Home Value:
Participants will learn to use ChatGPT not as automation but as a thinking partner that enhances analytical depth, reproducibility, and cross-functional learning. The workflow transfers immediately to other devices, CAPA reviews, and PMS programs, supporting FDA's QMSR focus on proactive, risk-based assurance and data-driven decision-making.

