AI Regulatory and Policy Landscape
Where AI for life sciences sits today in Ireland and across the EU, UK, US and beyond.
Learning objectives
- Describe the EU AI Act's risk-based classification system (unacceptable, high, limited, minimal) and identify where typical medtech AI use cases fall.
- Identify Ireland's National Competent Authorities relevant to AI in life sciences (HPRA, DPC, others as applicable), describe their respective remits, and explain how EU AI regulation is enforced locally through these bodies.
- Compare the regulatory postures of the EU, UK MHRA, US FDA, Health Canada and TGA with respect to AI in medtech.
- Describe the FDA's evolving approach to AI/ML-enabled medical devices – Predetermined Change Control Plans, Good Machine Learning Practice, Computer Software Assurance – and what it implies for AI use inside medtech organisations.
- Position the IMDRF Software as a Medical Device (SaMD) framework relative to the AI tools and harnesses participants are likely to encounter.
- Articulate the "risk-reward" posture regulators are taking – encouraging adoption while requiring proportionate controls – and explain why blanket positions (whether outright prohibition or unrestricted use) fall outside that posture.
- Recognise the data-protection landscape (GDPR, HIPAA where relevant, ePrivacy) as a governing constraint on every AI use case touching personal or patient data.