Big-4 Licensing Pitch Deck Outline — RAIDT
Audience: Partner / Director leading the AI assurance, Trusted AI, or Responsible AI practice at KPMG, Deloitte, PwC, EY (or BDO/Mazars/Grant Thornton tier).
Length: 14 slides + appendix
Tone: commercial, regulator-aware, not academic. Lead with revenue, prove with rigour.
Goal of meeting: secure a paid pilot engagement (£80–150k, 12 weeks, one anchor client) that converts into an annual licence (£150k–£500k/yr) for the methodology, training, and certification.
Each slide below shows: Title / Headline / Body content / Visual / Speaker note. Convert each block to a single PowerPoint slide.
Slide 1 — Title
Title: RAIDT
Headline: The audit-grade evidence framework for generative AI assurance
Body: Run-level evidence. Five governance pillars. Mapped to EU AI Act, ISO/IEC 42001, NIST AI RMF.
Visual: Clean RAIDT lockup; small Portsmouth crest in corner; "Confidential — for [Firm Name]" footer
Speaker note: Today I want to show you why your AI assurance practice is exposed without a defensible run-level methodology, and how licensing RAIDT closes that gap.
Slide 2 — The opportunity in front of you
Headline: AI assurance is becoming a £[X]bn UK practice — but you don't have a defensible methodology yet
Body bullets:
- EU AI Act enforcement, ISO 42001 certification demand, and FCA/PRA/MHRA scrutiny have created mandatory evidence requirements
- Big-4 AI assurance engagements have grown but typically rely on adapted SOC 2 / IT GRC playbooks, not AI-specific evidence frameworks
- The first firm with a defensible, peer-reviewed methodology will set the market standard
Visual: Market growth chart with explicit sources (CDEI / techUK / Gartner)
Speaker note: Frame the commercial stakes before showing the solution. Numbers must be defensible — confirm before pitching.
Slide 3 — The methodology gap
Headline: Your audit teams are evidencing AI use with the wrong unit of analysis
Body:
- Most current methodologies govern at the model level (model cards, datasheets) or policy level (principles, AI ethics statements)
- When a regulator, claimant, or court asks "what happened in this specific AI-assisted decision?", model cards and policies cannot answer it
- The unit that matters is the run — one configured execution of a GenAI system on a specific task, with a specific prompt, context, output, and human reviewer
Visual: Three-column diagram: Model-level / Policy-level / Run-level — with shaded gap above "run-level"
Speaker note: This is the wedge. Make sure the partner sees that their current toolkit cannot evidence a run, only a model or a policy.
Slide 4 — What RAIDT is
Headline: A peer-reviewed evidence framework for run-level governance
Body:
- Run-level evidence pack — bounded record of one GenAI run: prompt, model configuration, retrieval context, parameters, safeguards, output, human review, final use
- 5-pillar scoring profile — Responsibility, Auditability, Interpretability, Dependability, Traceability — each scored 1–5
- Standards mappings — EU AI Act, ISO/IEC 42001, NIST AI RMF + GenAI Profile
- Sector playbooks — healthcare, finance, law, public sector, R&D, supply chain, more
Visual: Two-artefact diagram — evidence pack inputs flowing into scoring profile output
Speaker note: Keep it concrete. RAIDT is methodology, not software. That's exactly what an audit firm needs.
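For the partner who asks what an evidence pack actually looks like, a minimal sketch follows. The field names mirror the slide's list (prompt, model configuration, retrieval context, parameters, safeguards, output, human review, final use) but are illustrative assumptions, not the published RAIDT schema (see appendix A2 for the real one):

```python
# Illustrative sketch of a run-level evidence pack as a plain record.
# Field names and values are assumptions for demonstration only.
import json

evidence_pack = {
    "run_id": "run-0001",                       # one configured execution
    "prompt": "Summarise the attached claim file.",
    "model_configuration": {"model": "example-model", "version": "2024-06"},
    "retrieval_context": ["doc-17", "doc-42"],  # sources supplied to the run
    "parameters": {"temperature": 0.2, "max_tokens": 1024},
    "safeguards": ["pii-filter", "output-moderation"],
    "output": "Claim summary text ...",
    "human_review": {"reviewer": "J. Smith", "decision": "approved"},
    "final_use": "Included in client report",
}

# A bounded record: every required field must be present before filing.
REQUIRED = {"run_id", "prompt", "model_configuration", "retrieval_context",
            "parameters", "safeguards", "output", "human_review", "final_use"}
assert REQUIRED <= evidence_pack.keys()

# Serialise for archival alongside the engagement file.
archived = json.dumps(evidence_pack, indent=2)
```

The point for the room: each pack is a self-contained, serialisable record of one run, which is exactly the unit a regulator or court will ask about.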
Slide 5 — Why peer-reviewed academic foundations matter
Headline: When a methodology is challenged in court or by a regulator, you need a defensible foundation
Body:
- Trilogy of peer-reviewed papers: Foundations, Empirical Validation, Interoperable Governance
- Design-science methodology synthesising Information Systems governance, accountability theory, auditing literature, algorithmic-auditing research
- University of Portsmouth — supervisors with established track records in OR, IS governance, and decision analysis
- Roadmap toward BSI / ISO standards-track engagement
Visual: Three paper covers + "peer-reviewed" stamp
Speaker note: This is the moat. A consulting methodology built in-house cannot be cited in a regulator dispute. RAIDT can.
Slide 6 — Where RAIDT fits in your engagement
Headline: RAIDT slots into every phase of an AI assurance engagement
Body:
| Phase | RAIDT contribution |
|---|---|
| Pre-engagement scoping | Inventory of in-scope GenAI use cases mapped to RAIDT applicability profile |
| Readiness assessment | Run a sample of evidence packs; score governance readiness 1–5 across 5 pillars |
| Audit fieldwork | Structured evidence-pack capture for material runs; controls testing against scoring rubric |
| Reporting | Audit report with RAIDT scoring profile; gap remediation plan |
| Continuous assurance | Hosted scoring engine for ongoing capture; quarterly recertification |
Visual: Engagement-phase swimlane with RAIDT modules overlaid
Speaker note: Do not let them think this is a one-off product. Show it slots into the entire revenue cycle.
Slide 7 — Sector playbooks
Headline: Ready-made playbooks for the verticals where you sell most engagements
Body:
- Healthcare — clinical decision support, medical imaging, RAG over EHR
- Financial services — credit explanation, fraud, claims, advice
- Legal — contract review, e-discovery, compliance triage
- Public sector — case-working AI, eligibility decisions, policy drafting
- Cybersecurity — incident triage, threat intel summarisation
- R&D / Scientific — literature synthesis, hypothesis generation
- (Plus: education, environment, crisis, supply chain, creative, planning, ageing societies)
Visual: Sector grid with maturity icons
Speaker note: This means you can sell RAIDT-on-Day-1 in any vertical your client lives in.
Slide 8 — Standards alignment
Headline: One framework, three regulatory regimes
Body:
- EU AI Act — Articles on transparency, human oversight, logging, risk management → covered by evidence pack + scoring
- ISO/IEC 42001 — AI management system controls → covered by RAIDT scoring profile
- NIST AI RMF + GenAI Profile — Govern / Map / Measure / Manage → covered by run-level evidence and continuous capture
- Crosswalks published in the Interoperable Governance paper
Visual: Three-column crosswalk with RAIDT pillars in centre
Speaker note: Your clients sell into all three regimes simultaneously. RAIDT means one methodology for all three audits.
Slide 9 — Commercial structure for [Firm Name]
Headline: Three licensing options tailored to your practice scale
Body:
| Option | Term | Includes | Indicative annual fee |
|---|---|---|---|
| A. Non-exclusive practice licence | 3 yrs | Methodology, scoring rubric, sector playbooks, training for up to 50 staff, annual update | £150k–£250k |
| B. Exclusive Big-4 licence (1 of 4) | 5 yrs | All of A + first-look on new playbooks + co-authored standards engagement + named Centre of Excellence | £400k–£600k |
| C. Pilot pack | 12 wks | Anchor-client engagement, methodology toolkit, scoping & training | £80k–£150k fixed |
Visual: Three pricing pillars with feature checklist
Speaker note: Anchor on Option C — pilot — to lower the decision threshold. Options A and B are the upgrade path after the pilot proves out.
Slide 10 — What you get on Day 1
Headline: A turn-key methodology, ready to deploy
Body:
- Methodology handbook (~150 pages, branded for [Firm Name])
- Run-level evidence-pack templates (Word, Excel, JSON schema)
- Scoring rubric and rater training (online + 2-day in-person)
- Sector playbooks for your priority verticals
- Standards crosswalk packs (EU AI Act, ISO 42001, NIST AI RMF)
- Reference scoring engine (open-source connector + recommended hosted scoring path)
- Quarterly methodology updates as standards evolve
Visual: Stack of asset thumbnails
Speaker note: Make this feel like a product, not a research project.
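If the rater training comes up, the five-pillar scoring profile can be sketched in a few lines. Pillar names are from Slide 4; the validation and the minimum-pillar "floor" summary shown here are illustrative assumptions, not the published rubric's aggregation rule:

```python
# Illustrative sketch of a RAIDT scoring profile: five pillars, each
# scored 1-5 by a trained rater. The floor/weakest-pillar summary is an
# assumption for demonstration, not the published aggregation rule.

PILLARS = ("Responsibility", "Auditability", "Interpretability",
           "Dependability", "Traceability")

def score_profile(scores: dict) -> dict:
    """Validate per-pillar scores and return a simple readiness summary."""
    missing = [p for p in PILLARS if p not in scores]
    if missing:
        raise ValueError(f"unscored pillars: {missing}")
    for pillar, s in scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{pillar}: score {s} outside 1-5")
    return {
        "profile": {p: scores[p] for p in PILLARS},
        "weakest_pillar": min(PILLARS, key=scores.get),
        "floor": min(scores.values()),  # readiness capped by the weakest pillar
    }

result = score_profile({"Responsibility": 4, "Auditability": 3,
                        "Interpretability": 4, "Dependability": 5,
                        "Traceability": 2})
```

A floor-style summary is a natural talking point in the room: a client with four strong pillars and weak Traceability is still not audit-ready, which is exactly the gap the remediation plan targets.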
Slide 11 — Pilot proposal
Headline: A 12-week pilot to prove value with one anchor client
Body:
- Week 1–2 — pilot scoping; client selection from your portfolio; engagement-letter alignment
- Week 3–6 — RAIDT readiness assessment on chosen GenAI use cases; evidence-pack instrumentation
- Week 7–10 — RAIDT audit fieldwork; scoring; gap analysis
- Week 11–12 — co-authored client deliverable; case-study write-up; conversion to annual licence
- Pilot fee: £80k–£150k fixed
- Outcome: a delivered audit report co-branded with [Firm Name] + RAIDT, plus an internal capability ready for scale
Visual: Timeline with milestones and deliverables
Speaker note: This is the slide they will be thinking about. Lower their barrier — fixed fee, fixed timeline, defined outcome.
Slide 12 — Risk and protection
Headline: What stops a competitor from copying this?
Body:
- Methodology IP held by University of Portsmouth, licensed to [Firm Name]
- Trade-mark protection on RAIDT and RAIT brands
- Standards-track engagement in progress with BSI / ISO — the firm that licences first earns Centre of Excellence positioning
- Continuous research updates keep the methodology ahead as EU AI Act, ISO 42001, and NIST evolve
- Exclusivity options available for one Big-4 firm under Option B
Visual: Shield diagram with four protection layers
Speaker note: Exclusivity is the carrot. Use it sparingly — the right buyer will ask about it.
Slide 13 — Why now, why us
Headline: First mover sets the audit standard
Body:
- The first Big-4 firm to publish a defensible AI assurance methodology will dominate the next 5 years of this market
- RAIDT is the only peer-reviewed run-level evidence framework available for licence in the UK today
- University of Portsmouth research team brings academic credibility, ongoing research updates, and standards engagement
- Pilot can start within 8 weeks of letter of intent
Visual: Timeline showing the next 12 months of regulatory milestones with RAIDT pilot dates overlaid
Speaker note: Create urgency without being pushy. The regulatory timeline is doing the work.
Slide 14 — The ask
Headline: Three things from this meeting
Body:
- Confirmation of fit with your AI assurance practice strategy
- Identification of one anchor client for the 12-week pilot
- A second meeting with your Trusted AI / Audit Quality leadership in the next 21 days
Visual: Three-step icon set
Speaker note: End on a clear ask. Don't leave them wondering what you want.
Appendix slides (use only if asked)
A1. RAIDT 5-pillar scoring rubric — full
A2. Run-level evidence pack — schema and example
A3. EU AI Act crosswalk — full
A4. ISO/IEC 42001 crosswalk — full
A5. NIST AI RMF crosswalk — full
A6. Sample sector playbook (healthcare or finance — pick one matching the firm's strongest practice)
A7. Worked example — synthetic credit-explanation case from the Configured Runs paper
A8. Research team bios
A9. Publication list and venues
A10. Frequently asked questions
A11. Contracting principles and IP terms outline
A12. Reference clients and advisors (as they confirm)
Speaker brief
Before the meeting:
- Spend 30 minutes on the firm's most recent AI/assurance thought-leadership and partner LinkedIn — name two specifics in the meeting
- Confirm what the firm is currently selling in AI assurance — read their service brochure
- Identify two recent regulator/EU/ISO developments you can quote without notes
- Walk in with a printed methodology handbook table-of-contents — physical artefacts close deals faster
In the meeting:
- Slides 1–4 in 8 minutes — set up the gap
- Slide 5 — pause; let the academic credibility land
- Slides 6–8 — show fit with their world; ask: "does this match the gap in your current methodology?"
- Slides 9–11 — commercial structure and pilot
- Slide 14 — the ask; sit back and listen
After the meeting:
- Same-day thank-you email with attached one-page brief and methodology handbook table-of-contents
- Calendar a 21-day follow-up meeting before leaving the room
- Add to CRM with stage, signal, and next action