Governance Artefact · v1.0 · Discover Phase · Gate 1 Pre-Qualification

AI Opportunity & Problem Qualification Assessment

A two-phase, 12-dimension structured assessment for qualifying AI initiatives before architecture decisions are made. Designed for teams who need to prove the problem is real before committing to a solution.

What this assessment does

Gate the decision before spending the budget.

Most AI compliance failures are seeded before a single line of code is written. Teams select solutions before validating problems, skip data governance audits, and start building before anyone has defined what success — or failure — looks like.

This assessment is the SpanForge Gate 1 instrument. It structures the conversation that should happen before design begins: a scored, evidence-backed qualification of whether an AI initiative is technically viable, commercially justified, and compliant by design. Scores above 1 require documented evidence. Disagreements are recorded as governance signal, not averaged away.

Scoring scale — 0 to 3

  • 0 — Not Present: no evidence required; state the reason for absence
  • 1 — Minimal / Early Stage: describe the current state; no formal evidence required
  • 2 — Developing / Partial: evidence required; document what exists, what is missing, and the source
  • 3 — Strong / Well-Established: evidence required; provide specific, verifiable proof with a named source and date
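
The evidence rules above are mechanical enough to check automatically. Here is a minimal Python sketch of that validation, assuming a simple per-dimension record with score, evidence, and reason fields; the record shape is illustrative, not part of the artefact.

```python
# Evidence rules from the 0-3 scale above, encoded as a validator.
# The record shape (score, evidence, reason) is an illustrative assumption.

def validate_entry(dimension: str, score: int,
                   evidence: str | None = None,
                   reason: str | None = None) -> list[str]:
    """Return a list of problems found with one scored dimension."""
    if score not in (0, 1, 2, 3):
        return [f"{dimension}: score must be 0-3, got {score}"]
    problems = []
    if score == 0 and not reason:
        problems.append(f"{dimension}: score 0 needs a stated reason for absence")
    if score >= 2 and not evidence:
        problems.append(f"{dimension}: score {score} needs documented evidence")
    return problems

# Example: a 3 with no evidence attached is flagged immediately.
print(validate_entry("D2 Data Availability", 3))
# -> ['D2 Data Availability: score 3 needs documented evidence']
```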

Phase 1 · Qualify

Is this worth solving with AI?

7 dimensions · Gate threshold: score ≥ 45 to advance to Phase 2 · ~85 minutes

Minimum score to proceed: 45. A worked example of the gate computation follows the dimension cards below.
D0 · Problem Statement Quality
Weight 15% · 10 min · Blocker

Evaluates whether the initiative is built on a clear, agreed, and bounded problem definition. The most common cause of Gate 1 failure is defining the wrong problem; it is cheaper to catch that here than at Gate 3.

D1 · Automation Potential
Weight 15% · 10 min

Assesses whether the problem is structurally suited to AI. Repetition, rule-legibility, data density, and blast-radius tolerance are scored before any architecture decisions are made.

D2 · Data Availability
Weight 15% · 10 min

Confirms whether the data required to train, fine-tune, or run the AI system exists, is accessible, and meets volume requirements. Scores above 1 require documented evidence.

D3 · Data Risk & Fitness for AI
Weight 10% · 10 min

Assesses data quality, labelling, recency, and governance fitness. Data that is available is not necessarily safe or suitable to train on; this dimension separates those two questions.

D4 · Business Value & ROI
Weight 15% · 15 min

Validates that a credible, measurable business case exists. Vague productivity benefits and CEO announcements are scored at 0. Evidence requires a named KPI baseline and a quantified delta.

D5 · Regulatory & Ethical Risk
Weight 15% · 10 min · Blocker

Maps applicable regulatory obligations: EU AI Act risk classification, GDPR data processing requirements, sector-specific constraints. Compliance and legal must score this dimension.

D6 · Human Oversight
Weight 15% · 10 min · Blocker

Confirms that human control checkpoints are defined, that an accountable owner exists, and that the system cannot operate without appropriate oversight mechanisms in place.
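
How the weights and times above combine into a gate decision is defined in the artefact itself. As a rough illustration, here is a Python sketch that assumes the gate score is the weighted sum of 0-3 scores normalized to a 0-100 range, and that a Blocker dimension scored 0 fails the gate regardless of the total. Both assumptions are ours, not the artefact's rules; check the facilitator notes for the authoritative formula.

```python
# Phase 1 gate computation - a sketch, not the artefact's official formula.
# Assumed: gate score = sum(weight * score / 3), i.e. on a 0-100 scale, and
# a Blocker dimension scored 0 fails the gate outright.

WEIGHTS = {  # percentage weights from the dimension cards above
    "D0": 15, "D1": 15, "D2": 15, "D3": 10, "D4": 15, "D5": 15, "D6": 15,
}
BLOCKERS = {"D0", "D5", "D6"}  # dimensions tagged Blocker above
GATE_THRESHOLD = 45

def phase1_gate(scores: dict[str, int]) -> tuple[float, bool]:
    """Return (gate score on a 0-100 scale, whether Phase 2 unlocks)."""
    total = sum(WEIGHTS[d] * scores[d] / 3 for d in WEIGHTS)
    blocked = any(scores[d] == 0 for d in BLOCKERS)
    return total, total >= GATE_THRESHOLD and not blocked

score, unlocked = phase1_gate(
    {"D0": 2, "D1": 2, "D2": 1, "D3": 1, "D4": 2, "D5": 2, "D6": 2}
)
print(f"gate score {score:.1f} -> Phase 2 {'unlocked' if unlocked else 'locked'}")
# -> gate score 58.3 -> Phase 2 unlocked
```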

Phase 2 · Readiness

Are you ready to build it?

5 dimensions · Conditional on Phase 1 score ≥ 45 · ~35 minutes

Phase 1 score ≥ 45 required to unlock
D7 · Technical Infrastructure Readiness

Confirms that the deployment environment, compute, integration points, and MLOps toolchain can support production AI.

D8 · Skills & Team Capacity

Assesses whether the team has the AI/ML engineering, domain expertise, and governance capacity to deliver and own the system.

D9 · Organisational Change Readiness

Evaluates whether affected business units are prepared to change workflows, own AI outputs, and absorb operational responsibility.

D10 · Security & Access Control

Reviews credential management, prompt injection exposure, PII handling posture, and model access governance before design begins.

D11 · Cost & Budget Confidence

Validates that infrastructure cost estimates, token budgets, and ongoing operational costs are understood and have named budget accountability.

Who needs this

Required attendees & roles

SpanForge non-negotiable: a named Gate Authority with budget accountability must be identified before this assessment begins. Without one, any gate decision produced is advisory only and cannot be enforced.

  • Assessment Lead — Mandatory, full session. Facilitates the session and documents scores and evidence.
  • Gate Authority / Executive Sponsor — Mandatory, full session. Non-negotiable precondition; makes binding gate decisions.
  • Business Domain Expert — Mandatory, full session. Scores automation potential, human oversight, and the value hypothesis.
  • AI / Technical Lead — Mandatory, full session. Scores the data, infrastructure, and skills dimensions.
  • Compliance / Legal — Highly recommended for D4 & D5. Scores the regulatory and T.R.U.S.T.™ dimensions.
  • Data Steward / Owner — Recommended for D2 & D3. Confirms data access, quality, and governance status.

How to run it

Three ways to run this assessment

Single session

4–6 people · 90–120 min

First-time assessment; all stakeholders available simultaneously

Two sessions

4–6 people · 2 × 60 min

Phase 1 first; Phase 2 after data and governance review

Async + review

4–6 people · 2–3 days

Participants pre-score individually; Lead consolidates; 45-min alignment review
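
The consolidation step in the async format is where the rule from the introduction, that disagreements are recorded as governance signal rather than averaged away, does its work. Here is a sketch of one way a Lead might flag divergent pre-scores, assuming each participant submits a 0-3 score per dimension; the 2-point divergence threshold is an illustrative choice, not the artefact's rule.

```python
# Async pre-score consolidation: flag disagreement instead of averaging it
# away. The 2-point spread threshold is an illustrative assumption.

def consolidate(pre_scores: dict[str, dict[str, int]]) -> dict[str, dict]:
    """pre_scores maps participant -> {dimension: score}. Returns, per
    dimension, the submitted votes and whether alignment discussion is needed."""
    dimensions = {d for scores in pre_scores.values() for d in scores}
    report = {}
    for dim in sorted(dimensions):
        votes = [s[dim] for s in pre_scores.values() if dim in s]
        report[dim] = {
            "votes": votes,
            "needs_alignment": max(votes) - min(votes) >= 2,
        }
    return report

report = consolidate({
    "domain_expert": {"D0": 3, "D4": 1},
    "tech_lead":     {"D0": 1, "D4": 2},
})
for dim, entry in report.items():
    print(dim, entry)
# D0 shows a 3-vs-1 split: recorded as a governance signal for the
# 45-min alignment review, not collapsed into an average of 2.
```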

Free download

Get the assessment artefact

Available as a PDF for print and review, and as an editable DOCX for running live sessions and recording scored evidence directly in the document. No sign-in required.

SpanForge Governance Artefact
AI Opportunity & Problem Qualification Assessment
Version 1.0 · Discover Phase
  • Phase 1: 7 qualification dimensions
  • Phase 2: 5 readiness dimensions
  • Gate threshold: Phase 1 score ≥ 45
  • Scoring scale: 0–3 with evidence rules
  • Session guide & facilitator notes
  • Role requirements & time budgets