What it actually looks like, week to week.
The substance of the programme is in the Curriculum. The methodology is in Our Approach. This section covers the practical layer in between – what a sprint feels like for a participant, how they are supported across the arc, how the curriculum stays current and how outcomes are measured.
A sprint runs over six to fourteen weeks, depending on the cadence agreed with the client.
Each sprint combines live training sessions, homework tied to the participant's real work, small-group coaching and a capstone project.
Pre-work is provided before each sprint, a parking lot runs during sessions, and a digital resource pack and community of practice carry capability forward afterwards.
Content is reviewed quarterly against tool changes, regulatory changes and trainer feedback.
Outcomes are measured against four agreed dimensions – Activity, Quality, Value, Risk – from the Plan phase onwards.
A sprint is the unit of delivery. Each one compounds on the last.
Cadence. Most sprints run over six to fourteen weeks. The exact rhythm is agreed with the client – some organisations prefer compressed delivery (two sessions a week), others prefer an extended cadence (one session a fortnight) that lets capability bed in alongside live work. We have done both. The decision depends on participant workload, the criticality of the work participants are doing concurrently, and the depth of capability the organisation wants to reach in the sprint. We find one session per week optimal, but operational realities often dictate a different cadence.
Live, facilitated sessions.
Virtual or onsite. Each session covers a specific capability area, follows the three-phase loop – teach the concept, apply it to the participant's own work, then reflect and challenge across the cohort – and produces something the participant takes back into their work that week. Typically two to three hours, with a break.
Homework. Applied to real tasks.
Between sessions, participants apply their new skills to real tasks from their daily work. Homework is reviewed by the facilitator and used as material for the next session.
Coaching. Small-group, interleaved.
Smaller groups, less formal, more depth on individual challenges and shared practice where appropriate. This is where most of the personalised capability development happens.
Capstone. Three-plus skills chained.
Each sprint closes with a capstone project – a piece of work that requires participants to chain three or more skills from the sprint together, applied to a real problem they own. Capstones are evaluated, and a feedback report is returned to each participant.
Compounds into the next.
This shape is consistent across sprints. The content varies – Sprint 1 covers horizontal core skills, Sprint 2 moves to vertical, role-specific work inside real workflows, Sprint 3 addresses strategic redesign and Sprint 4 tackles full operating-model integration.
Five stages. Consistent shape. The content shifts sprint by sprint; the rhythm doesn't.
Support runs across the programme's full arc – not just when each session is live.
Every sprint is wrapped by a larger support structure. Pre-work sets the table. A parking lot runs during sessions. Resource packs and a community of practice hold capability in place after. Between-sprint coaching threads each sprint into the next.
Pre-work that earns its keep.
Participants receive pre-work tailored to their organisation's AI tooling and the specific use cases being covered. It establishes the baseline mental model and surfaces the specific tools, workflows and constraints each participant brings into the sprint.
By session one, every participant arrives with relevant context and a personal task to work on from the start.
The parking lot.
Facilitators operate a standing list for regulatory or organisation-specific questions that fall outside the session's scope. These are documented and returned to participants with guidance on where to seek authoritative answers within their compliance structures.
Resource pack · community.
Participants receive a digital resource pack containing the worked examples, prompt templates, validation framework references and skills used during the sprint.
They also gain access to the Brightbeam online community of practice – a moderated space where they can share experiences, ask questions and access updated materials.
Follow-up coaching.
Follow-up coaching sessions can be arranged to discuss progress, troubleshoot challenges encountered during internal cascading and tune the next sprint's content to emerging needs. The between-sprint conversation is where the next sprint's worked examples often get refined to reflect what the organisation has actually been doing.
The curriculum is a live document.
AI tools and regulatory frameworks evolve rapidly; a curriculum developed one year and delivered the next risks subtracting value rather than adding it. Brightbeam's maintenance approach has three components, anchored to a quarterly review cycle.
Tool and feature updates.
Each quarter the programme team reviews updates across the major AI platforms – Claude, Copilot, ChatGPT, Gemini and the harnesses each provides – and adjusts worked examples, demonstrations and skills accordingly.
New capabilities relevant to medtech workflows are incorporated. Deprecated features are removed.
Regulatory monitoring.
The EU AI Act implementation timeline, MDR and IVDR updates, FDA AI/ML guidance, MHRA software guidance, IMDRF positions and HPRA-specific developments are monitored continuously.
The regulatory context embedded in the curriculum is refreshed within thirty days of any material change. Version-controlled materials ensure participants and facilitators always work from current content.
Champion communication.
When the curriculum or worked examples are materially updated during the programme, internal champions and active participants receive a short briefing note explaining what changed and why – so cascading training inside the client organisation reflects current content. The briefing cadence runs alongside the programme; once the engagement closes, the client owns the cascade and can choose to extend the maintenance relationship if they want continued updates.
The maintenance discipline is the practical reason the curriculum has stayed defensible across a fast-moving two years. It is built into the cost of running the programme, not added on as a premium.
Four dimensions, equal weight. One number flatters; four tell the truth.
Brightbeam measures programme outcomes across four dimensions, agreed with the client at the leadership workshop and tracked throughout delivery. The four-dimension structure is deliberate – a single flattering number is easy to produce and easy to mistake for progress; the four together resist that.
Activity. Who is using AI, how often and with which tools.
Captured through usage logs and participant journals.
Activity on its own does not prove value, but the absence of activity is the earliest signal that adoption is stalling.
Quality. The measurable impact on the work itself. Rework cycles reduced. Turnaround times improved. Error rates tracked against the baselines established in the Plan phase.
This is where the work itself starts to look different.
Value. Hours freed. Volume increased. Cost avoided. Tied directly to the KPIs agreed at the leadership workshop.
This is the dimension that gets quoted in board papers and underpins the case for continuing into subsequent sprints.
Risk. Incidents, near-misses and compliance findings related to AI use. Tracked to ensure adoption is safe as well as productive.
This is the dimension that protects the organisation from the failure modes that follow undisciplined adoption.
Measurement is built into every sprint so leaders can see the flywheel spinning and decide where to invest next. A comparative impact report is produced at the close of each sprint and at programme completion.
A Brightbeam Certificate of Completion. Documented evidence, not accreditation.
Participants who complete the programme receive a Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the medtech-specific context in which they were delivered.
These certificates document the competencies acquired – including the regulatory awareness component – and can be used by the participant's organisation as evidence of AI literacy training. This is increasingly relevant given the AI literacy obligations now in force under Article 4 of the EU AI Act, which requires organisations deploying AI systems to ensure staff have sufficient AI literacy appropriate to their role.
The certificate is not a regulatory qualification and is not a substitute for any formal accreditation the client's quality system requires. It is documented evidence of a known curriculum delivered to a known standard.
The substance of the programme is in the Curriculum. The methodology is in Our Approach. The practical detail of how it gets delivered is here. The outcomes detail – what previous cohorts have actually achieved – is in Outcomes.
If you have a specific delivery question that is not answered above, the FAQ has a How We Deliver section that goes deeper, organised by audience.
Talk to James Harte →