RAIDT — Run-Level Evidence Framework for Generative AI
Responsibility · Auditability · Interpretability · Dependability · Traceability
A peer-reviewed governance framework for individual generative AI runs in organisations. It captures bounded evidence packs and scores governance readiness across five pillars, mapping the results to the EU AI Act, ISO/IEC 42001, and the NIST AI RMF.
Lead: Mohammad Ali Akeel · School of Organisations, Systems and People · University of Portsmouth
In one paragraph
The compliance burden on UK organisations using generative AI is shifting from policy commitments to evidence. Regulators, auditors, and procuring authorities increasingly demand artefacts that show what happened in a specific AI-assisted decision and how it was governed. Existing tooling (model cards, principles, generic risk registers) governs at the model level or the policy level — not at the level where contestability and accountability live: the run. RAIDT specifies two linked artefacts that close that gap: a run-level evidence pack (the bounded record of one configured GenAI run) and a 5-pillar scoring profile that translates the pack into an observable assessment of governance readiness, mapped to international standards.
What RAIDT produces
- Run-level evidence pack — bounded record of one configured GenAI run: prompt, model deployment, retrieval context, parameters, safeguards, output, human review, final use.
- 5-pillar scoring profile — Responsibility, Auditability, Interpretability, Dependability, Traceability — each scored 1–5.
Mapped to: EU AI Act, ISO/IEC 42001 (AI management systems), NIST AI RMF + GenAI Profile.
Sector playbooks: healthcare, finance, education, environment, crisis, supply chain, cybersecurity, public policy, law, R&D, creative industries, planning, ageing societies.
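The two artefacts above lend themselves to a simple data-structure sketch. The Python below is illustrative only: the field names (`model_deployment`, `retrieval_context`, `human_review`, etc.) and the validation logic are assumptions for exposition, not RAIDT's published schema; only the pack's listed contents, the five pillar names, and the 1–5 scale come from the framework description.

```python
from dataclasses import dataclass

# Illustrative sketch of one run-level evidence pack.
# Field names are assumptions; RAIDT's actual schema may differ.
@dataclass
class EvidencePack:
    prompt: str                   # the prompt as submitted
    model_deployment: str         # model name, version, endpoint
    retrieval_context: list[str]  # documents retrieved for the run
    parameters: dict              # temperature, max tokens, etc.
    safeguards: list[str]         # filters / guardrails applied
    output: str                   # the generated output
    human_review: str             # reviewer decision and notes
    final_use: str                # how the output was ultimately used

# The five pillars, each scored 1-5 per the framework description.
PILLARS = ("Responsibility", "Auditability", "Interpretability",
           "Dependability", "Traceability")

@dataclass
class ScoringProfile:
    scores: dict[str, int]

    def __post_init__(self):
        # Require every pillar to carry a score in the stated 1-5 range.
        for pillar in PILLARS:
            s = self.scores.get(pillar)
            if s is None or not 1 <= s <= 5:
                raise ValueError(f"{pillar} needs a score in 1-5, got {s}")

# Example: a complete, valid scoring profile.
profile = ScoringProfile(scores={
    "Responsibility": 4, "Auditability": 5, "Interpretability": 3,
    "Dependability": 4, "Traceability": 5,
})
```

A profile missing a pillar, or scoring one outside 1–5, raises a `ValueError`, which mirrors the framework's requirement that all five pillars be assessed for every run.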
Underpinning research
A trilogy of peer-reviewed papers and a Configured Runs manuscript:
- Foundations — RAIDT as run-level governance evidence framework (design science methodology)
- Empirical Validation — measuring governance readiness using influence methods (RAG, PEFT/LoRA, RLHF/DPO)
- Interoperable Governance — policy pathways across EU AI Act, ISO/IEC 42001, NIST AI RMF
- Configured Runs — the configured run as a run-level evidence object for accountable GenAI
Browse the commercialisation programme
Plan and strategy
Deliverables (drafts)
- 01 — KTP Outreach Pack (templates and target list)
- 02 — ICURe Explore Application (Innovate UK skeleton)
- 03 — Big-4 Licensing Pitch Deck (slide-by-slide outline)
- 04 — UK Trade-Mark Applications (filing pack)
- 05 — IP Strategy Memo (for Tech Transfer)
Funding & partnerships
- Funding opportunities — index (UK public, EU, foundations, loans, accelerators)
- Partnerships — index (Big-4, vendors, academic, standards, networks, sector)
Reference
Standards engagement
Active engagement with: BSI ART/1 (UK national AI committee — mirror of ISO/IEC JTC 1/SC 42), AI Standards Hub, AISI external research, NIST GenAI Profile working groups.
Team
- Mohammad Ali Akeel (PhD researcher, framework lead) — University of Portsmouth
- Prof. Mark Xu (lead supervisor) — School of Organisations, Systems and People
- Dr Awais Shakir Goraya (co-supervisor)
- Dr Salem Chakhar (co-supervisor)
Contact
Trade marks: RAIDT™ and RAIT™ pending registration with UKIPO. © University of Portsmouth / Mohammad Ali Akeel, 2026.