ICURe Explore Application — RAIDT

Programme: Innovate UK ICURe Explore (12 weeks, ~£35k)
Purpose: structured customer discovery to validate commercial routes for RAIDT before committing to a spin-out, licence, or further grant
Lead Researcher (proposed): Mohammad Ali Akeel
Host Institution: University of Portsmouth, School of Organisations, Systems and People
Academic Supervisor: Prof. Mark Xu

This skeleton mirrors the standard Innovate UK ICURe Explore application sections. Treat each block as a draft to refine; bracketed [ ] items need facts and figures you'll confirm before submission.


1. Executive Summary (200 words)

RAIDT (Responsibility, Auditability, Interpretability, Dependability, Traceability) is a peer-reviewed run-level evidence framework for governing generative AI in organisations, developed through PhD research at the University of Portsmouth. It captures bounded records of individual GenAI runs and scores governance readiness on a 1–5 scale, with explicit mappings to the EU AI Act, ISO/IEC 42001, and the NIST AI RMF.

The compliance burden on UK organisations using generative AI is shifting from policy commitments to evidence. Regulators, auditors, and procuring authorities increasingly demand artefacts that show what happened in a specific AI-assisted decision and how it was governed. Existing tooling (model cards, principles frameworks, generic risk registers) does not provide this run-level evidence.

This Explore project will conduct 100 structured customer-discovery interviews across four candidate market segments — audit firms, regulated-sector compliance leaders, AI assurance vendors, and public-sector procurement — to test which value proposition, customer, and price point are most likely to convert into paying engagements.

Outputs will be (a) a validated commercial hypothesis, (b) a target customer profile and pricing model, and (c) a route-to-market recommendation (spin-out, exclusive licence, open-core, or KTP-led pathway) to inform the subsequent ICURe Discover/Mentor stage.


2. Research Base & Technology Description

2.1 What is RAIDT?

2.2 Underpinning research outputs

2.3 IP position


3. Market Opportunity

3.1 Why now

3.2 Candidate market segments (to validate during Explore)

  1. Audit & assurance firms — Big 4 + mid-tier; need defensible methodology to monetise AI audit engagements
  2. Regulated-sector AI users — banks, insurers, NHS Trusts, law firms, defence suppliers; need evidence packs to satisfy regulators
  3. AI assurance vendors / RegTech — need academic-backed methodology IP to differentiate
  4. Public-sector procurement — DSIT, GDS, Crown Commercial Service; need a procurement standard for AI suppliers

3.3 Initial commercial hypotheses (to test)


4. Why This Team


5. Customer Discovery Plan (12 weeks, 100 interviews)

5.1 Targets per segment

| Segment | Target interviews | Examples |
| --- | --- | --- |
| Audit firms (partners + senior managers) | 20 | Big 4 + BDO, Mazars, Grant Thornton, RSM, Crowe |
| Regulated-sector compliance/risk leads | 35 | Tier-1 banks, challenger banks, NHS Trusts (5–8), insurers, law firms, defence suppliers |
| AI assurance vendors / RegTech | 15 | UK-based + UK-presence vendors |
| Public sector AI buyers | 15 | DSIT, GDS, NHS Digital, Cabinet Office, defence procurement |
| Standards/regulator stakeholders | 10 | BSI, ICO, FCA, MHRA, SRA, AISI |
| Adjacent (academia/think-tanks) | 5 | Ada Lovelace, Alan Turing Institute, RAi UK leadership |

5.2 Interview structure

5.3 Core questions

  1. How does your organisation currently evidence governance of GenAI use?
  2. Where does that evidence break down under regulatory or audit scrutiny?
  3. What would a credible run-level evidence framework look like for you?
  4. Who in your organisation owns this problem and controls the budget?
  5. What would you pay (or charge) for a methodology / tooling that solves it?
  6. What proof or pilots would you need to commit?
  7. What concerns would block adoption?
  8. Who else in this space should we be talking to?

5.4 Geographies

5.5 Milestones


6. Expected Outputs at End of Explore

  1. Validated customer profile — target segment, role, buying triggers, willingness to pay
  2. Commercial hypothesis — chosen route (spin-out / licence / open-core / KTP-led) with evidence
  3. Pricing model — tested ranges for the chosen route
  4. Term sheet draft — for the chosen route (e.g., licence terms, equity split, founder package)
  5. Discover/Mentor stage application — to advance to next ICURe stage
  6. Public deliverable — a short market report on UK GenAI assurance demand (builds reputation, generates inbound)

7. Beyond Explore — Pathway to Commercialisation


8. Risks and Mitigations

| Risk | Likelihood | Mitigation |
| --- | --- | --- |
| Customer-discovery interviews uncover no clear willingness to pay | Medium | Four segments pre-scoped; pivot logic built into weekly hypothesis updates |
| Crowded market in AI assurance tooling makes differentiation hard | Medium | RAIDT's wedge is run-level evidence + standards mapping, not metrics; interviews will validate |
| University IP terms slow commercialisation | Medium | Engage Portsmouth Tech Transfer in Week 1; consider non-exclusive licence if equity path is slow |
| Regulator priorities shift (e.g., UK AI Bill delayed) | Low–Medium | Methodology is regulation-agnostic and maps to multiple frameworks |
| Solo founder bandwidth | Medium | Advisory board + supervisors + Portsmouth innovation team support |

9. Budget Sketch (~£35k Explore envelope)

| Item | Estimate |
| --- | --- |
| Researcher stipend / buy-out (12 weeks) | £15,000 |
| Travel — UK + 1–2 EU trips | £4,500 |
| Conference / event attendance (RAi UK, techUK, Big-4 partner roundtables) | £2,500 |
| Subsistence | £1,500 |
| Software / CRM / transcription | £1,500 |
| Advisory board honoraria (3 advisors × £500) | £1,500 |
| Market reports & data subscriptions | £2,000 |
| IP / legal advice (Portsmouth-supported) | £2,500 |
| Outputs production (market report, brand identity) | £2,000 |
| Contingency | £2,000 |
| Total | £35,000 |

(Refine against the live ICURe Explore budget cap and eligible costs.)


10. Submission Checklist
