Every question we get asked. Organised by who is asking.
The FAQ is the workhorse of this site. It exists so that procurement, IT, quality, L&D and the executive sponsor can each find their answers without reading everything else – and so decision-makers and participants can use it as an internal reference when they are explaining the programme to others.
Pick the section that matches your role.
Each answer is self-contained.
If a question you need answered is missing, the contact in the footer will get you to a real person.
The leadership-level questions. What this programme is, what it produces, what it costs in time and capacity, what it returns.
What does this programme actually produce?
Three things, in order. Trained individuals who can use AI confidently and defensibly inside regulated medtech work. Embedded organisational practice – policy, governance, measurement and a community that keeps the work going after we leave. Measurable change in the work itself, tracked across four dimensions (Activity, Quality, Value, Risk) agreed before the engagement starts.
How long does it take to see results?
Activity changes show up in weeks. Quality changes – work being done faster, with fewer rework cycles – typically show up by the end of Sprint 1, six to ten weeks in. Value changes that hold up in a board paper take longer, usually by the close of Sprint 2. Embedded organisational change is a multi-quarter outcome.
What's the investment?
The commercial framing is modular. A single-sprint engagement is the typical entry point, with subsequent sprints commissioned at the close of each one based on outcomes evidence. Specific investment depends on cohort size, sprint scope and the level of bespoke worked-example development required. Detailed pricing is provided by proposal – request one through the contact in the footer.
Who else has done this?
Brightbeam has worked with organisations across biopharma, medtech, advanced manufacturing and financial services. Named clients on file are listed on the About Brightbeam page. Named-client conversations and references are arranged through the contact in the footer.
What's distinctive about Brightbeam vs alternatives?
Three things. Sector specificity – we build worked examples from your own medtech context, not generic cases. Methodology heritage – the curriculum is built on Cognitive Task Analysis and operationalised into a delivery model (Plan / Educate / Facilitate) that has been refined across hundreds of cohorts. Embedded posture – we are not building a course you consume and forget. We are building organisational capability designed to compound after we leave.
What are the risks of doing this?
Two main ones. The first is delivery risk – that the programme runs but does not produce measurable change. We mitigate this with the four-dimension measurement framework agreed at the leadership workshop, a baseline survey at the start and comparative impact reporting at the close of each sprint. The second is opportunity cost – your people invest time in this rather than in other work. We mitigate this by using their actual work as the worked examples, so the time spent in the programme produces real outputs they would have had to produce anyway.
What are the risks of not doing this?
Three main ones. Regulatory exposure – the EU AI Act Article 4 literacy obligation is in force and most organisations cannot yet defend their position on it. Productivity decay – competitors moving faster on AI accumulate compounding advantage. Shadow IT – without sanctioned AI use, employees use personal accounts on personal devices to do work that organisational policy forbids, producing a worse compliance posture than no AI use at all.
How do you measure success?
Activity, Quality, Value, Risk. Agreed at the leadership workshop. Tracked through delivery. Reported in a comparative impact report at the close of each sprint and at programme completion. The four-dimension framework is described in detail on the How We Deliver page.
How does this fit with our existing AI initiatives?
Embed is designed to complement build work, not duplicate it. If you are building AI use cases internally or with another partner, Embed teaches your people the language, mental models and craft they need to commission, evaluate and operate those use cases. The two work in concert – Embed makes build projects faster and cheaper because staff already understand what AI can and cannot do.
What's the time commitment for our people?
A typical sprint involves around four to six hours per participant per week across the sprint duration – two to three hours of live session, plus homework time tied to their real work. The capstone project at the close of each sprint is typically a half-day to a full day of effort spread across two to four weeks. We design around participant workload deliberately and adjust the cadence to fit operational reality.
What level of leadership engagement is needed?
Real but bounded. The leadership workshop at the start (half a day to a day). Active sponsorship throughout, with brief check-ins between sprints. Visible attendance at capstone presentations. Endorsement of any policy or governance changes the programme produces. The programme does not run itself, but it does not require constant leadership attention either.
Can we start small?
Yes. Most engagements begin with Sprint 1 and a single cohort. The decision to commit to subsequent sprints is taken based on Sprint 1 outcomes. The Plan phase produces an honest assessment of fit before the first session runs.
What happens after the formal programme ends?
The internal community of practice continues. Updated curriculum materials remain accessible. Coaching calls extend by arrangement. Most importantly, the cascading discipline taught in Module 3 means trained champions continue developing capability inside the organisation after our formal involvement ends.
Will this satisfy our regulators?
The curriculum is designed to help your organisation meet its regulatory obligations – particularly the EU AI Act Article 4 literacy obligation. We do not claim the programme constitutes regulatory compliance in itself. Meeting your regulatory obligations remains your organisation's responsibility. The Brightbeam Certificate of Completion documents the competencies acquired and is widely accepted as evidence of AI literacy training. See About Brightbeam for our regulatory posture in detail.
How do we know it's working?
The comparative impact report at the close of each sprint is the answer. Baseline established at the start of the programme, change tracked across the four dimensions during delivery, results reported transparently at the end. If something is not working, the report says so – we treat measurement as a discipline, not as marketing.
The internal-selling questions. How to build the case, run a pilot, keep momentum, make this stick.
How do I build the internal case for this?
Start with a real workflow problem your organisation already cares about – quality reporting overhead, audit-finding rework, complaint backlog, regulatory drafting cycles. Frame the case around that workflow and the four-dimension measurement framework (Activity, Quality, Value, Risk) the programme will produce evidence against. Brightbeam can supply a leadership briefing pack to support this – request it through the contact in the footer.
What proof points can I show my exec team?
The full curriculum on this site is one. Brightbeam's track record across regulated industries (About Brightbeam) is another. Anonymised case studies in the Resources section as they land. Named-client references arranged through the contact. The Why Embed, Why Now page is designed specifically to be forwarded to a sceptical exec.
How do I run a pilot?
Most engagements start with a single Sprint 1 cohort – that is the pilot. The Plan phase before Sprint 1 includes a leadership workshop, baseline survey and function-by-function deep-dives that produce an honest assessment of fit before the first session runs. The decision to commit to Sprint 2 is taken at the close of Sprint 1 based on outcomes evidence. There is no all-or-nothing commitment up front.
How do I get buy-in from quality and regulatory?
Use the Quality and Regulatory section of this FAQ. It is designed to answer the questions they will ask. The curriculum's positioning on the QMS, audit posture, regulatory monitoring, PHI handling and Article 4 literacy obligation typically answers most concerns. Where they have a specific question we have not addressed, surface it through the contact and we will respond.
How do I get buy-in from IT?
Use the IT and Information Security section of this FAQ. Most IT concerns centre on tooling, data handling and integration – and the programme is designed to operate entirely inside the organisation's existing approved environment. We do not require new tool deployments to deliver.
Who should be in the first cohort?
Mix across functions and seniority. The Embed approach is designed for mixed cohorts and the outcomes are stronger when the regulatory lead learns alongside the quality engineer, when the clinical affairs manager sits next to the manufacturing scientist. Avoid a single-function or all-senior cohort – both produce weaker results than a mixed one.
What support do I get from Brightbeam during cascading?
The community of practice. Updated curriculum materials. Coaching calls between sprints to discuss internal cascading challenges. Champion-specific briefings when the curriculum updates materially. The Module 3 content is specifically designed to teach the cascading discipline, so you are not on your own.
How do I measure impact in a way leadership will accept?
The four-dimension framework. Agreed at the leadership workshop, baselined at programme start, tracked through delivery, reported in the comparative impact report at the close of each sprint. The framework is designed to produce numbers that hold up in a board paper without overclaiming.
What if the first sprint underperforms?
The comparative impact report says so. We do not gloss over weak results. The conversation between Sprint 1 and Sprint 2 is the moment to decide whether to continue, what to change, or whether the engagement is the right fit. Brightbeam treats that conversation seriously – extending an engagement that is not producing value benefits no-one.
How do I sequence sprints?
The default sequence is Core Skills → Vertical Skills → Strategic Change → Embedding. Most organisations work through that sequence over twelve to eighteen months, though pace varies. Some organisations stop after Sprint 2 and continue independently. Some go to Sprint 4. The decision is taken sprint by sprint.
Can I customise the curriculum to our specific workflows?
Yes – that is the Plan phase's job. The sub-module structure stays consistent across engagements, but worked examples, reference material and depth of coverage are tailored to your organisation. Sprint 2 in particular is designed around your specific role-by-role workflows.
How do I keep momentum between sprints?
The community of practice. Coaching calls. Capstone presentations. Visible champion-led activity inside the organisation between formal sprint dates. Most successful programmes have a between-sprint rhythm – even a short monthly internal session keeps the work alive.
What if our regulatory environment changes mid-programme?
The curriculum is monitored continuously and refreshed within thirty days of any material change to the EU AI Act, MDR, IVDR, FDA AI/ML guidance, MHRA software guidance, IMDRF positions or HPRA-specific developments. Champions and active participants receive a briefing note when material updates are made.
How do I report on this to the board?
The comparative impact report is designed for it. Use the four-dimension framework as the structure. Pair it with one or two participant stories that bring the data to life. We can produce a board-ready summary at the close of each sprint as part of the engagement.
Where this fits inside the existing learning architecture. Cohort design. Cascading. Training records.
How does this fit into our existing learning architecture?
The programme is designed to integrate with your training matrix and competency requirements rather than sit alongside them as a parallel track. The Brightbeam Certificate of Completion is documented in a way that allows the participant's organisation to credit it against existing AI literacy, GxP or regulatory training requirements. We work with your L&D team during the Plan phase to map this in advance.
Will it count toward GxP or regulatory training records?
The certificate documents the competencies acquired. Whether it counts toward specific GxP, regulatory or competency records is a decision your quality and L&D teams make – different organisations integrate it differently. We provide the documentation; you decide how to apply it inside your training matrix.
What credit or certification do participants receive?
A Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the medtech-specific context in which they were delivered. The certificate is not a regulatory qualification and does not substitute for any formal accreditation your QMS requires. Within those bounds, it is widely accepted as evidence of AI literacy training under the EU AI Act Article 4 obligation.
How do you cascade beyond the formal cohort?
Module 3 teaches the cascading discipline directly. Champions and trained participants run internal sessions, demonstrations and pair learning with colleagues using the materials and patterns the programme provides. The community of practice supports them with updated content. Most organisations see significant secondary capability built in the six to twelve months after the formal programme ends.
How do you handle different starting maturities in the same cohort?
The baseline survey in the Plan phase identifies the spread. Mixed cohorts handle the variance better than single-maturity cohorts because more advanced participants help less advanced ones learn faster, and less advanced participants ask the foundational questions that benefit everyone. Where the spread is genuinely too wide, we can split the cohort or run parallel cohorts at different paces.
What's the minimum viable cohort size?
Around eight participants is the lower bound for the cohort dynamics to work. Below that, the mixed-perspective benefit drops off and the programme economics tighten.
What's the maximum?
Around twenty-five participants per cohort is the upper bound. Beyond that, the live-session dynamics suffer and the small-group coaching becomes difficult to deliver well. Larger organisational rollouts typically run multiple cohorts in parallel rather than oversize a single one.
Can we co-deliver with our internal L&D?
Yes – this is increasingly common. Internal L&D involvement strengthens the cascading dimension because the learning function is already aligned with what is being delivered. The Plan phase is where co-delivery arrangements are agreed.
How do you handle remote vs in-person?
Both work. Remote delivery via Teams or equivalent is the default for most engagements. In-person delivery is offered where the cohort is geographically clustered and the value of in-room work justifies the cost. Hybrid is workable but requires more facilitator effort to maintain cohort dynamics.
What about participants who join part-way through?
We discourage it for cohort dynamics reasons but accommodate where unavoidable. Late joiners get accelerated pre-work covering the sessions they missed and a one-to-one onboarding call. Joining after the third session of a sprint usually means deferring to the next cohort.
How do you assess learning?
Through the homework reviewed by facilitators, the capstone project at the close of each sprint, and the comparative impact reporting that tracks Activity and Quality changes in the work itself. We do not run formal exams – the assessment is the work the participant produces during the programme.
What happens to participants who don't keep up?
Coaching support increases. Where the issue is workload rather than capability, we can adjust the participant's homework expectations. Where the issue is capability or engagement, the conversation goes back through the L&D and champion structure – Brightbeam does not unilaterally make participation decisions.
The compliance, audit and regulatory posture questions. PHI. QMS integration. Audit-readiness.
How do you handle PHI?
Default position: PHI does not enter AI tools used in delivery. Worked examples use anonymised or synthetic data drawn from sector-appropriate sources. Where a specific worked example would benefit from real data, the data handling decision goes through the Plan phase and requires explicit validation that the AI environment is approved for the data class involved.
How do you handle controlled documents?
Controlled documents stay inside your eDMS. Where the programme works with controlled-document patterns (SOPs, work instructions, technical files), it works with derivatives or anonymised versions, not the controlled originals. The boundary between the AI workspace and the controlled record is taught explicitly in Modules 2 and 3.
How does this work alongside our QMS?
The curriculum is designed to fit AI use into the existing QMS rather than to create a parallel structure. AP1 in Module 2 covers this directly – change control, document control, training records, supplier qualification, periodic review. The aim is that AI use becomes a normal part of the QMS rather than a special exception.
What's the audit posture?
The curriculum teaches inspection-ready evidence as a discipline. For any AI activity that touches a regulated record or decision, participants learn to maintain evidence of tool, user, input, output, review and decision. The curriculum itself is designed to survive scrutiny – participants who work through it can defend what they did, why and under what controls.
How do you stay current with regulation?
Monitored continuously across the EU AI Act, MDR, IVDR, FDA AI/ML guidance, MHRA software guidance, IMDRF positions and HPRA-specific developments. The regulatory context inside the curriculum is refreshed within thirty days of any material change. Active participants and champions receive briefing notes when updates land.
Will this satisfy Notified Body scrutiny?
The curriculum is designed to support an organisation's ability to demonstrate AI governance to a Notified Body, FDA, MHRA or MDSAP auditor. We do not claim it constitutes Notified Body acceptance in itself. The Notified Body assesses your QMS and your specific AI use, not Brightbeam's curriculum. What the curriculum produces is documented, defensible practice that holds up in that conversation.
How do you handle the EU AI Act Article 4 literacy obligation?
The programme is designed to produce documented evidence of AI literacy appropriate to participants' roles. The Brightbeam Certificate of Completion records the competencies acquired. Whether this constitutes Article 4 compliance for your specific organisation is a decision your regulatory and legal teams take based on your AI deployment posture – but the programme is designed to provide the evidence basis for that decision.
What about Computer Software Assurance?
CSA thinking is taught directly in AP1: Compliance & Regulation. Risk-based, fit-for-purpose verification rather than blanket validation for its own sake. Participants learn to apply CSA reasoning to AI tools and AI-generated outputs, not just to traditional software systems.
Are the AI tools you use validated?
The tools used in delivery are the tools your organisation has already approved or is prepared to approve. We do not bring validated AI tools – we use yours. Where the engagement requires a specific tool that is not yet approved, the validation conversation goes through your existing IT and quality processes, not around them.
What happens if there's an AI-related incident?
The curriculum teaches incident handling directly in Module 3 – escalation paths for incidents like data leak, hallucinated output in a regulated record or unauthorised tool use, with the recovery procedures that follow. During delivery, any incident involving Brightbeam material or facilitation is documented and managed through the engagement's contractual incident-handling provisions.
How do you handle PHI in worked examples?
Same answer as the first PHI question above – the default position is no PHI in worked examples. Synthetic or anonymised data is the standard. Where real data is required, validation goes through the Plan phase.
How does this interact with 21 CFR Part 11?
AP1 in Module 2 covers Part 11 and EU Annex 11 expectations directly – attribution, dated entries, reason for change, integrity, limited access. Participants learn to apply these expectations to AI-touched records. The curriculum is designed to produce practice that holds up under Part 11 scrutiny.
What's your regulatory monitoring approach?
Three layers. First, the programme team reviews tool and feature updates each quarter. Second, regulatory developments are monitored continuously and the curriculum refreshed within thirty days of any material change. Third, champions and participants receive briefing notes when material updates land. Detail in the Content Maintenance section of the How We Deliver page.
Does the curriculum cover MDR / IVDR specifically?
Yes – across multiple modules. The regulatory frame in F1 covers MDR/IVDR positions on AI. AP1 covers ISO 13485 integration and Notified Body posture. The agentic modules (AP7–AP9) cover the MDR Article 10 implications of AI use in design, manufacturing, post-market or quality decisions about a medical device.
How do you handle GxP boundaries?
AP1 covers this directly. Participants learn to recognise where AI use crosses GxP boundaries (GMP, GLP, GCP, GDP) and apply the specific controls each Good Practice regime requires. The curriculum treats GxP as foundational, not as an afterthought.
Tooling, data handling, security posture. Integration with existing systems.
What AI tools do you use?
The tools your organisation has already approved or is prepared to approve. The curriculum is designed to be platform-agnostic – RBSF, EG, the agentic patterns and the compliance discipline transfer across Claude, Microsoft Copilot, Gemini and other major harnesses. We do not require new tool deployments to deliver.
What data goes where?
Default position: client data stays inside client environments. Worked examples use anonymised or synthetic data unless a specific exception is approved through the Plan phase. Where Brightbeam needs access to client material for worked-example preparation, it is governed by the engagement's data processing agreement.
Do you require new tool deployments?
No – the default is to work with the AI tools your organisation already has. Where a new tool is genuinely required for an engagement (rare), the deployment goes through your existing IT approval process, not around it.
What's your security posture?
Documented in detail in the security and data handling pack available through the contact in the footer. Summary: data minimisation, no PHI by default, anonymised worked examples, controlled-environment handling for client-confidential material, GDPR and equivalent compliance for cross-border data.
How do you handle cross-border data transfer?
GDPR adequacy, Schrems II and equivalent considerations are assessed at the Plan stage based on the cohort's geography and the tools involved. Where the engagement spans jurisdictions, tool selection and data handling are adjusted accordingly. The decisions made are documented in the engagement's data handling record.
What's your DPA position?
A standard data processing agreement template is available through the contact in the footer. We can also work from the client's preferred DPA. Most engagements settle the DPA position before the Plan phase begins.
How do you handle integration with our existing systems?
The curriculum is designed to teach participants to use AI alongside existing systems (eDMS, PLM, QMS, CTMS, EDC, RIM) rather than to integrate AI into them. Where an integration is required for a specific worked example, it goes through the client's existing IT integration process.
What about our information security policies?
The programme operates inside the client's information security policies. Where client policy is silent on AI use, the engagement helps surface and address those gaps as part of Module 3. We do not work around information security policy – we work with it.
What if a participant breaches AI policy during training?
Standard incident handling applies. The breach is documented, escalated through the engagement's incident-handling provisions and addressed through the client's existing disciplinary or compliance structures. Brightbeam does not handle the consequences of policy breach – that is the client's responsibility – but we do report and document.
Do you use our enterprise tenants or your own?
Where an enterprise tenant is available and approved for the engagement, we use it. Where one is not, the Plan phase makes a documented decision about how worked examples will be handled. Default is the client's environment.
What logging and monitoring do you put in place?
For Brightbeam-facilitated activity, standard usage logging through whatever tools the engagement runs on. Where the client has additional monitoring requirements, they apply. The engagement's logging and monitoring posture is agreed at Plan stage.
What's your incident response process?
Documented in the security and data handling pack. Summary: incidents are notified to the client within agreed timeframes, root cause analysis is run jointly, corrective actions are documented, and any pattern is reflected in updated curriculum or process.
What being on the programme actually involves.
What do I have to do?
Attend the live sessions, complete the homework tied to your real work, contribute to the cohort discussion and produce a capstone project at the close of each sprint. The work is real work – most of what you produce is something you would have had to produce anyway.
How much time will it take?
Around four to six hours per week across each sprint – two to three hours of live session, plus homework time. The capstone at the close of each sprint is typically a half-day to a full day spread across the final two to four weeks.
What tools will I learn?
The major AI platforms – Claude, Microsoft Copilot, ChatGPT, Gemini and the harnesses around them. The specific tools used depend on what your organisation has approved. The curriculum teaches the underlying patterns (RBSF, EG, agentic discipline) that transfer across all of them, not the specifics of any single tool.
Will I need to download anything?
Usually not – the tools used are typically web-based or already installed inside your organisation. Where an installation is required for a specific session, you will be told in advance and supported through it.
What if I'm new to AI?
The Foundations module is designed for exactly this. The mixed-cohort approach means you will be alongside colleagues at different starting points, including some at your level. The pre-work will get you to a confident starting position before session one.
What if I'm already advanced?
The Plan-phase baseline survey identifies your starting point. The mixed-cohort approach gives you a role helping less experienced colleagues, which deepens your own understanding. Where you are genuinely beyond the Sprint 1 content, we can accelerate or pull you forward into Sprint 2 content earlier.
What happens if I miss a session?
Session recordings (where the cohort consents) are available afterwards. Catch-up homework is provided. Missing one session is normal and recoverable. Missing several signals a workload conversation that is best had with your manager and the champion.
What support do I get during?
Live facilitator support during sessions. Small-group coaching between sessions. Asynchronous question support through the cohort's working channel. The community of practice for cross-cohort questions.
What support do I get after?
Continued access to the community of practice. Updated curriculum materials as they refresh. Coaching call availability for sustained challenges. The cascading materials in Module 3 to support you taking the work into the rest of your team.
Will this help my career?
Indirectly. The programme is designed to build organisational capability, not personal certifications. That said, participants regularly report that the AI fluency they develop becomes a meaningful career asset in regulated industries where AI literacy is increasingly expected. The Brightbeam Certificate documents what you have learned.
Do I get a certificate?
Yes – a Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the medtech-specific context. See How We Deliver for the certificate's positioning.
What about confidential work – how is it handled?
The default position is that PHI does not enter AI tools used in the programme. Confidential work stays inside the controls your organisation already operates. Where a worked example needs real data, the handling is agreed at Plan stage with explicit validation that the environment is appropriate for the data class involved.
The cross-cutting commercial questions. Pricing. Engagement models. Renewal.
How is the programme priced?
Modular. Each sprint is commissioned at a fixed scope and fee based on cohort size, depth and bespoke worked-example development. Specific pricing is provided by proposal.
What's the typical engagement size?
Most engagements begin with one cohort of twelve to twenty-five participants in Sprint 1. Multi-cohort, multi-sprint engagements are common. Larger organisations frequently run multiple cohorts in parallel.
Can we run multiple cohorts in parallel?
Yes – this is the typical pattern for larger organisations. Parallel cohorts allow horizontal coverage of different functions or geographies inside the same sprint window.
What's the minimum commitment?
A single sprint. Most engagements start with Sprint 1 only. Sprint 2 and beyond are commissioned independently based on outcomes evidence.
How is payment structured?
Standard staged payment against milestone delivery – typically Plan completion, mid-sprint, sprint close. Specific terms are in the MSA.
Can we extend or modify mid-programme?
Yes, by mutual agreement. Mid-sprint scope changes are uncommon but possible where the engagement reveals a need that was not visible at the start.
What if we need to pause?
Pauses between sprints are normal and accommodated without penalty. Mid-sprint pauses are possible but disrupt cohort dynamics – we will discuss alternatives where requested.
Are there volume discounts?
The pricing structure scales with engagement size in the way you would expect. Specific terms are set out in each proposal.
How do you handle multi-year engagements?
Multi-year engagements are typically structured as a master agreement with annual or sprint-by-sprint commercial commitments. This protects both parties from over-commitment.
What's included and what's extra?
The sprint fee includes all live delivery, coaching, materials, the digital resource pack and the comparative impact report. Bespoke worked-example development against your specific content is included up to a typical scope; significant content development beyond that is sometimes priced separately. Specific scope is detailed in each SOW.
If your question is not answered here, the contact in the footer is a real person who reads their email. We would rather hear from you than have you guess.
If there is a question we should have answered, tell us. The FAQ is maintained based on what actually comes up in sales conversations.
Ask James Harte →