ICURe Explore Application — RAIDT
Programme: Innovate UK ICURe Explore (12 weeks, ~£35k)
Purpose: structured customer discovery to validate commercial routes for RAIDT before committing to a spin-out, licence, or further grant
Lead Researcher (proposed): Mohammad Ali Akeel
Host Institution: University of Portsmouth, School of Organisations, Systems and People
Academic Supervisor: Prof. Mark Xu
This skeleton mirrors the typical Innovate UK ICURe Explore application sections. Treat each section as a draft to refine; bracketed [ ] items mark facts and figures to confirm before submission.
1. Executive Summary (200 words)
RAIDT (Responsibility, Auditability, Interpretability, Dependability, Traceability) is a peer-reviewed run-level evidence framework for governing generative AI in organisations, developed through PhD research at the University of Portsmouth. It captures bounded records of individual GenAI runs and scores governance readiness on a 1–5 scale, with explicit mappings to the EU AI Act, ISO/IEC 42001, and the NIST AI RMF.
The compliance burden on UK organisations using generative AI is shifting from policy commitments to evidence. Regulators, auditors, and procuring authorities increasingly demand artefacts that show what happened in a specific AI-assisted decision and how it was governed. Existing tooling (model cards, principles frameworks, generic risk registers) does not provide this run-level evidence.
This Explore project will conduct 100 structured customer-discovery interviews across four candidate market segments — audit firms, regulated-sector compliance leaders, AI assurance vendors, and public-sector procurement — to test which value proposition, customer, and price point are most likely to convert into paying engagements.
Outputs will be (a) a validated commercial hypothesis, (b) a target customer profile and pricing model, and (c) a route-to-market recommendation (spin-out, exclusive licence, open-core, or KTP-led pathway) to inform the subsequent ICURe Discover/Mentor stage.
2. Research Base & Technology Description
2.1 What is RAIDT
- A run-level evidence framework specifying two linked artefacts:
  - Run-level evidence pack — bounded record of one configured GenAI run (prompt, model deployment, retrieval context, parameters, safeguards, output, human review, final use)
  - 5-pillar scoring profile — observable assessment of governance readiness on Responsibility, Auditability, Interpretability, Dependability, Traceability (1–5)
- Methodology: design science research synthesised from Information Systems governance, accountability and auditing traditions, and algorithmic auditing literature
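The two linked artefacts above can be illustrated as simple data structures. This is a minimal sketch only: the field names, types, and validation rules are assumptions for illustration, not the published RAIDT schema.

```python
# Illustrative sketch of RAIDT's two linked artefacts.
# All field names here are hypothetical; the published schema may differ.

from dataclasses import dataclass


@dataclass
class EvidencePack:
    """Bounded record of one configured GenAI run."""
    run_id: str
    prompt: str
    model_deployment: str          # e.g. provider / model / version endpoint
    retrieval_context: list[str]   # RAG sources consulted, if any
    parameters: dict               # temperature, max tokens, etc.
    safeguards: list[str]          # filters and guardrails applied
    output: str
    human_review: str              # reviewer decision and rationale
    final_use: str                 # how the output was actually used


PILLARS = ("Responsibility", "Auditability", "Interpretability",
           "Dependability", "Traceability")


def scoring_profile(scores: dict[str, int]) -> dict[str, int]:
    """Validate a 5-pillar governance-readiness profile (1-5 per pillar)."""
    if set(scores) != set(PILLARS):
        raise ValueError("exactly one score per pillar required")
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("scores must be on the 1-5 scale")
    return scores
```

A pack and its profile together form the run-level evidence object the framework scores; the sketch simply makes the pairing concrete.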
2.2 Underpinning research outputs
- Paper 08 — RAIDT Foundations: run-level evidence for governing generative AI [Status: under review at peer-reviewed journal — confirm target]
- Paper 09 — Measuring governance readiness in GenAI: empirical validation using influence methods (RAG, PEFT/LoRA, RLHF/DPO) as governance interventions
- Paper 10 — Interoperable governance pathways across the EU AI Act, ISO/IEC 42001, and NIST AI RMF
- Configured Runs manuscript — the configured run as a run-level evidence object for accountable GenAI
- Sector playbooks in active development across healthcare, finance, education, environment, crisis, supply chain, cybersecurity, public policy, law, R&D, creative industries, planning, ageing societies
2.3 IP position
- Methodology and scoring rubric: copyright, University of Portsmouth (researcher-authored)
- Schema and reference scoring engine: planned as open-source connector + paid hosted scoring engine (open-core)
- No filed patents to date; trade-mark and design-rights potential for the RAIDT product mark and dashboard
- Tech transfer engagement: [confirm with Portsmouth Research & Innovation Services]
3. Market Opportunity
3.1 Why now
- EU AI Act enforcement timetable for high-risk AI systems creates immediate compliance demand for organisations operating in the EU and for UK firms exporting to it
- ISO/IEC 42001 AI management system certification market is opening; certification bodies need defensible audit methodologies
- UK AI Opportunities Action Plan and the Sovereign AI agenda explicitly call for AI assurance infrastructure
- FCA, PRA, ICO, MHRA, SRA are issuing GenAI-specific guidance that asks for evidence, not commitments
- The audit / assurance market for AI in the UK is projected to grow from [£X estimate] to [£Y estimate] by [year] — confirm with [source: techUK / DSIT AI Assurance Roadmap / CDEI report]
3.2 Candidate market segments (to validate during Explore)
- Audit & assurance firms — Big 4 + mid-tier; need defensible methodology to monetise AI audit engagements
- Regulated-sector AI users — banks, insurers, NHS Trusts, law firms, defence suppliers; need evidence packs to satisfy regulators
- AI assurance vendors / RegTech — need academic-backed methodology IP to differentiate
- Public-sector procurement — DSIT, GDS, Crown Commercial Service; need a procurement standard for AI suppliers
3.3 Initial commercial hypotheses (to test)
- H1: Audit firms will license RAIDT methodology at £100k–£500k/yr per firm
- H2: Regulated organisations will pay £25k–£100k/yr for hosted RAIDT tracker SaaS
- H3: AI assurance vendors will integrate RAIDT under licence at £20k–£100k/yr per vendor
- H4: Public sector will adopt RAIDT as a procurement requirement, creating downstream certification revenue from suppliers
- H5: The most defensible commercial structure is open-core (free schema and connector, paid scoring engine and audit reports)
4. Why This Team
- Mohammad Ali Akeel — RAIDT framework lead; PhD researcher with [N] years in [data architecture / financial services / Microsoft Fabric — confirm]; CV under separate cover
- Prof. Mark Xu — academic supervisor; lead of CORDA at Portsmouth; expertise in operational research and decision systems; led the RAID Lab proposal
- Dr Muhammad Awais Shakir Goraya — co-supervisor; expertise in Information Systems governance
- Dr Salem Chakhar — co-supervisor; expertise in multi-criteria decision analysis and AI evaluation
- Industry advisors — to be confirmed during Explore: [target 2–3 advisors from audit, regulated industry, public sector]
- Portsmouth Research & Innovation Services — tech transfer support and IP management
5. Customer Discovery Plan (12 weeks, 100 interviews)
5.1 Targets per segment
| Segment | Target interviews | Examples |
|---|---|---|
| Audit firms (partners + senior managers) | 20 | Big 4 + BDO, Mazars, Grant Thornton, RSM, Crowe |
| Regulated-sector compliance/risk leads | 35 | Tier-1 banks, challenger banks, NHS Trusts (5–8), insurers, law firms, defence suppliers |
| AI assurance vendors / RegTech | 15 | UK-based + UK-presence vendors |
| Public sector AI buyers | 15 | DSIT, GDS, NHS Digital, Cabinet Office, defence procurement |
| Standards/regulator stakeholders | 10 | BSI, ICO, FCA, MHRA, SRA, AISI |
| Adjacent (academia/think-tanks) | 5 | Ada Lovelace, Alan Turing Institute, RAi UK leadership |
5.2 Interview structure
- 30-minute semi-structured interviews
- Same 8-question protocol across segments, with segment-specific probes
- Recorded with consent, transcribed, coded against hypotheses H1–H5
- Outputs logged in shared CRM/sheet with stage progression
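The coding step above can be sketched in a few lines. The segment labels and code values below are illustrative assumptions, not the project's actual CRM fields; the point is only that each transcript yields codes that are tallied against H1–H5 for the weekly synthesis.

```python
# Minimal sketch of coding interview records against hypotheses H1-H5.
# Segment names and "supports" codes are hypothetical examples.
from collections import Counter

interviews = [
    {"segment": "audit_firm",       "supports": ["H1", "H5"]},
    {"segment": "regulated_user",   "supports": ["H2"]},
    {"segment": "assurance_vendor", "supports": ["H3", "H5"]},
]


def hypothesis_tally(records: list[dict]) -> dict[str, int]:
    """Count supporting mentions per hypothesis for weekly synthesis."""
    tally = Counter()
    for record in records:
        tally.update(record["supports"])
    return dict(tally)
```

Running the tally weekly makes the pivot logic in Section 8 operational: a hypothesis that accumulates no support by mid-programme triggers reallocation of the remaining interviews.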
5.3 Core questions
- How does your organisation currently evidence governance of GenAI use?
- Where does that evidence break down under regulatory or audit scrutiny?
- What would a credible run-level evidence framework look like for you?
- Who in your organisation owns this problem and controls the budget?
- What would you pay (or charge) for a methodology / tooling that solves it?
- What proof or pilots would you need to commit?
- What concerns would block adoption?
- Who else in this space should we be talking to?
5.4 Geographies
- Primary: UK
- Secondary: EU (Ireland, Germany, Netherlands), given EU AI Act extraterritorial scope
- Tertiary opportunistic: US, UAE (if target stakeholders surface)
5.5 Milestones
- Weeks 1–2 — outreach setup, advisory board confirmed, first 20 meetings booked
- Weeks 3–8 — 80 interviews conducted, weekly hypothesis-update synthesis
- Weeks 9–10 — final 20 interviews focused on highest-converting segment; pricing tests
- Weeks 11–12 — synthesis, route-to-market recommendation, Discover stage proposal
6. Expected Outputs at End of Explore
- Validated customer profile — name of segment, role, buying triggers, willingness to pay
- Commercial hypothesis — chosen route (spin-out / licence / open-core / KTP-led) with evidence
- Pricing model — tested ranges for the chosen route
- Term sheet draft — for the chosen route (e.g., licence terms, equity split, founder package)
- Discover/Mentor stage application — to advance to next ICURe stage
- Public deliverable — a short market report on UK GenAI assurance demand (builds reputation, generates inbound)
7. Beyond Explore — Pathway to Commercialisation
- ICURe Discover (12 weeks, ~£75k) — pilot the chosen route with 1–2 anchor customers
- ICURe Mentor (3–6 months) — incorporate the company OR finalise the licence; close pre-seed if going startup route
- Parallel grant pipeline — Innovate UK Smart, BridgeAI, RAi UK, KTP partnerships, Sovereign AI larger programme
- Standards engagement — submit RAIDT for inclusion in BSI / ISO working groups in parallel
8. Risks and Mitigations
| Risk | Likelihood | Mitigation |
|---|---|---|
| Customer-discovery interviews uncover no clear willingness-to-pay | Medium | Pre-scoped four segments; pivot logic built into weekly hypothesis updates |
| Crowded market in AI assurance tooling makes differentiation hard | Medium | RAIDT's wedge is run-level evidence plus standards mapping rather than model-level metrics; interviews will test this differentiation |
| University IP terms slow commercialisation | Medium | Engage Portsmouth Tech Transfer in Week 1; consider non-exclusive licence if equity path is slow |
| Regulator priorities shift (e.g., UK AI Bill delayed) | Low–Medium | Methodology is regulation-agnostic and maps to multiple frameworks |
| Solo founder bandwidth | Medium | Advisory board + supervisors + Portsmouth innovation team support |
9. Budget Sketch (~£35k Explore envelope)
| Item | Estimate |
|---|---|
| Researcher stipend / buy-out (12 weeks) | £15,000 |
| Travel — UK + 1–2 EU trips | £4,500 |
| Conference / event attendance (RAi UK, techUK, Big-4 partner roundtables) | £2,500 |
| Subsistence | £1,500 |
| Software / CRM / transcription | £1,500 |
| Advisory board honoraria (3 advisors × £500) | £1,500 |
| Market reports & data subscriptions | £2,000 |
| IP / legal advice (Portsmouth-supported) | £2,500 |
| Outputs production (market report, brand identity) | £2,000 |
| Contingency | £2,000 |
| Total | £35,000 |
(Refine against the live ICURe Explore budget cap and eligible costs.)