The EU AI Act is in force, with risk-based classification of AI use cases and corresponding obligations for providers and deployers. Article 4 specifically requires organisations deploying AI systems to ensure their staff have AI literacy appropriate to their role.
The industry is now moving. At pace.
The regulators are active. Competitors have started. But others in the industry have not yet figured out how to embed AI safely inside their operating model. The consensus is that the next 18 months will set the pattern for the rest of the decade.
The EU AI Act is in force. MDR and IVDR have evolved. The FDA has shipped guidance on AI/ML medical devices and Computer Software Assurance. The IMDRF has published its SaMD framework. And regulators aren't only writing the rules – they're using AI themselves, with the FDA and EMA both deploying AI tools for internal review and regulatory search.
AI tools are already in use – much of that use ad hoc, and much of it outside any sanctioned policy.
Doing nothing at the corporate level is no longer a neutral position. Adopting via shadow AI is not a safe strategy. The only defensible posture is for it to be designed deliberately.
The medtech AI moment.
We have spent two years training thousands of senior knowledge workers to use AI well across regulated industries. The pattern is consistent: the technology is moving faster than most organisations can absorb it, and medtech is feeling that pressure at least as acutely as any other regulated sector.
Medtech organisations sit at the intersection of clinical evidence, manufacturing rigour, post-market surveillance and a regulatory landscape that is still being rewritten. The questions that matter for an AI tool – what data goes through it, what controls sit around it, what records survive an inspection – are harder here than they are almost anywhere else. And the consequences of getting them wrong are harder too: a hallucinated citation in a regulatory submission is not a minor inconvenience.
What that produces, in most organisations we work with, is hesitation. Pilots that do not scale. Tools approved for some uses but not for the ones that would actually move the dial. AI literacy concentrated in a few enthusiasts and absent everywhere else. Shadow AI filling the gap.
The organisations that work with us are the ones that have decided the cost of that hesitation has become higher than the cost of empowering their teams.
What the regulators are signalling.
The regulatory landscape is the single biggest reason most medtech leaders pause before committing to an AI programme. It is also, paradoxically, the single biggest reason others have decided to act.
Because regulators are not asking medtech organisations to avoid AI. They are asking them to use AI in a way that can be defended.
European medical device and in vitro diagnostic regulations have evolved alongside the AI Act. AI used in design, manufacturing, post-market surveillance or quality decisions sits inside Article 10's manufacturer obligations. Notified Bodies are increasingly asking how AI is governed inside the QMS.
The FDA has shipped its Predetermined Change Control Plan framework, its Good Machine Learning Practice guidance, and the Computer Software Assurance approach. Risk-based, fit-for-purpose validation rather than blanket testing – but the controls have to be in place and documented.
The IMDRF's SaMD framework is the international harmonisation framework that increasingly anchors how regulators think about software-driven devices and the AI inside them.
Data protection. GDPR in the EU. HIPAA where US patient data is involved. ePrivacy. Cross-border transfer constraints under Schrems II. None of this is new, but every AI use case has to be assessed against it.
The signal across all of these: regulators are encouraging adoption while requiring proportionate controls. The risk-reward posture is explicit. Blanket positions – either prohibition or unrestricted use – sit outside that posture.
Why blanket prohibition fails.
On paper, the cleanest response to AI risk in a regulated medtech organisation is to prohibit it.
Our experience suggests this does not work. Productivity is lost, and shadow AI fills the gap: employees use personal accounts on personal devices to do work that organisational policy forbids – producing a worse compliance posture than if the work had been done inside sanctioned tools.
Just as importantly, prohibition is a strategic choice with a half-life. The competitive pressure does not pause while an organisation makes up its mind. Within twelve to eighteen months, every organisation will be hiring from a talent pool that has already trained itself on these tools. Organisations that have not adopted will be competing against peers with AI embedded in their operating model.
Why undisciplined adoption fails.
The opposite response is to enable AI broadly and let teams figure it out. This produces faster early movement and looks decisive for up to six months.
In every organisation we have worked with, the momentum dies unless it is backed by dedicated workplace training. Providing information is not enough; shared, hands-on experience is what creates the focal point that turns interest into adoption.
The 'let them loose' approach also produces failure modes specific to medtech. These are not theoretical; they are the everyday failure modes of organisations that adopted AI without a frame.
Where Brightbeam stands.
Our position is straightforward. AI adoption in medtech has to be deliberate, risk-based and embedded. Not performative and not a free-for-all.
That means a shared, common curriculum that treats the regulatory frame as foundational rather than optional. A delivery model that uses the participant's own work as the worked examples, not generic ones. A governance posture that fits AI use into the existing QMS rather than building a parallel structure. A measurement framework that tracks the four things that matter – Activity, Quality, Value and Risk – agreed with leadership before the work begins.
We deliver this for medtech specifically because even within life sciences, medtech has its own reality. Your needs are different, the proof points are different, the regulatory tempo is different and the operating models are different. Our curriculum reflects that.
The rest of this site is the detail of how that's achieved.
Two paths forward.
Read the curriculum. It is reproduced in full. Foundations, Applied Practice, Organisational Implementation. Twenty sub-modules. Every learning objective.
Read the curriculum →

Read Our Approach. Plan, Educate, Facilitate. The four sprints. The three-phase loop. Mixed cohorts. Sector specificity.

Read Our Approach →

If you want to talk about whether this fits your organisation, the contact is in the footer.