RAIDT — Run-Level Evidence Framework for Generative AI

Responsibility · Auditability · Interpretability · Dependability · Traceability

A peer-reviewed governance framework for individual generative AI runs in organisations. It captures a bounded evidence pack for each run and scores governance readiness across five pillars, mapped to the EU AI Act, ISO/IEC 42001, and the NIST AI RMF.

Lead: Mohammad Ali Akeel · School of Organisations, Systems and People · University of Portsmouth


In one paragraph

The compliance burden on UK organisations using generative AI is shifting from policy commitments to evidence. Regulators, auditors, and procuring authorities increasingly demand artefacts that show what happened in a specific AI-assisted decision and how it was governed. Existing tooling (model cards, principles, generic risk registers) governs at the model level or the policy level — not at the level where contestability and accountability live: the run. RAIDT specifies two linked artefacts that close that gap: a run-level evidence pack (the bounded record of one configured GenAI run) and a 5-pillar scoring profile that translates the pack into an observable assessment of governance readiness, mapped to international standards.

What RAIDT produces

  1. Run-level evidence pack — bounded record of one configured GenAI run: prompt, model deployment, retrieval context, parameters, safeguards, output, human review, final use.
  2. 5-pillar scoring profile — Responsibility, Auditability, Interpretability, Dependability, Traceability — each scored 1–5.

Mapped to: EU AI Act, ISO/IEC 42001 (AI management systems), NIST AI RMF + GenAI Profile.

Sector playbooks: healthcare, finance, education, environment, crisis, supply chain, cybersecurity, public policy, law, R&D, creative industries, planning, ageing societies.
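The two artefacts above can be sketched as simple data structures. This is a minimal illustration only, not the published RAIDT schema: all field and pillar names below are taken from the descriptions in this page, and the class and method names are assumptions.

```python
from dataclasses import dataclass, field

# The five RAIDT pillars, each scored 1-5 (names from this page).
PILLARS = ("responsibility", "auditability", "interpretability",
           "dependability", "traceability")


@dataclass
class EvidencePack:
    """Bounded record of one configured GenAI run (illustrative fields)."""
    prompt: str
    model_deployment: str
    retrieval_context: list[str]
    parameters: dict
    safeguards: list[str]
    output: str
    human_review: str
    final_use: str


@dataclass
class ScoringProfile:
    """5-pillar governance-readiness profile for one evidence pack."""
    scores: dict[str, int] = field(default_factory=dict)

    def set_score(self, pillar: str, score: int) -> None:
        # Reject unknown pillars and out-of-range scores.
        if pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {pillar}")
        if not 1 <= score <= 5:
            raise ValueError("pillar scores run 1-5")
        self.scores[pillar] = score

    def complete(self) -> bool:
        # A profile is complete once every pillar has been scored.
        return all(p in self.scores for p in PILLARS)
```

A profile would be filled in during assessment of a single run's evidence pack, with `complete()` gating whether the run can be reported as fully scored.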

Underpinning research

A trilogy of peer-reviewed papers and a Configured Runs manuscript:

  1. Foundations — RAIDT as run-level governance evidence framework (design science methodology)
  2. Empirical Validation — measuring governance readiness using influence methods (RAG, PEFT/LoRA, RLHF/DPO)
  3. Interoperable Governance — policy pathways across EU AI Act, ISO/IEC 42001, NIST AI RMF
  4. Configured Runs — the configured run as a run-level evidence object for accountable GenAI

Browse the commercialisation programme:

  - Plan and strategy
  - Deliverables (drafts)
  - Funding & partnerships
  - Reference

Standards engagement

Active engagement with: BSI ART/1 (UK national AI committee — mirror of ISO/IEC JTC 1/SC 42), AI Standards Hub, AISI external research, NIST GenAI Profile working groups.

Team

Contact

mohammad.akeel@myport.ac.uk


Trade marks: RAIDT™ and RAIT™ pending registration with UKIPO. © University of Portsmouth / Mohammad Ali Akeel, 2026.
