AI Opportunity & Problem
Qualification Assessment
A two-phase, 12-dimension structured assessment for qualifying AI initiatives before architecture decisions are made. Designed for teams who need to prove the problem is real before committing to a solution.
Gate the decision before spending the budget.
Most AI compliance failures are seeded before a single line of code is written. Teams select solutions before validating problems, skip data governance audits, and start building before anyone has defined what success — or failure — looks like.
This assessment is the SpanForge Gate 1 instrument. It structures the conversation that should happen before design begins: a scored, evidence-backed qualification of whether an AI initiative is technically viable, commercially justified, and compliant by design. Scores above 1 require documented evidence. Disagreements are recorded as a governance signal, not averaged away.
0: No evidence required — state the reason for absence
1: Describe current state; no formal evidence required
2: Evidence required — document what exists, what is missing, and the source
3: Evidence required — provide specific, verifiable proof with a named source and date
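The evidence rules on the 0–3 scale can be expressed as a small validation sketch. This is a minimal illustration, not part of the assessment itself; the `DimensionScore` structure and field names are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DimensionScore:
    dimension: str
    score: int                            # 0-3 scale
    evidence: Optional[str] = None        # required for scores above 1
    absence_reason: Optional[str] = None  # required at score 0

def validate(entry: DimensionScore) -> list[str]:
    """Return the list of scoring-rule violations for one scored dimension."""
    problems = []
    if entry.score not in range(4):
        problems.append("score must be on the 0-3 scale")
    if entry.score == 0 and not entry.absence_reason:
        problems.append("score 0 must state the reason evidence is absent")
    if entry.score > 1 and not entry.evidence:
        problems.append("scores above 1 require documented evidence")
    return problems
```

For example, a Data Availability score of 2 with no evidence attached would be flagged, while the same score with a named, dated source passes.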
Is this worth solving with AI?
7 dimensions · Gate threshold: score ≥ 45 to advance to Phase 2 · ~85 minutes
Problem Statement Quality
Evaluates whether the initiative is built on a clear, agreed, and bounded problem definition. The most common cause of Gate 1 failure is a mis-defined problem — it is cheaper to find that out here than at Gate 3.
Automation Potential
Assesses whether the problem is structurally suited to AI. Repetition, rule-legibility, data density, and blast-radius tolerance are scored before any architecture decisions are made.
Data Availability
Confirms whether the data required to train, fine-tune, or run the AI system exists, is accessible, and meets volume requirements. Scores above 1 require documented evidence.
Data Risk & Fitness for AI
Assesses data quality, labelling, recency, and governance fitness. Good data availability does not mean the data is safe or suitable to train on — this dimension separates those two questions.
Business Value & ROI
Validates that a credible, measurable business case exists. Vague productivity benefits and CEO announcements are scored at 0. Evidence requires a named KPI baseline and a quantified delta.
Regulatory & Ethical Risk
Maps applicable regulatory obligations: EU AI Act risk classification, GDPR data processing requirements, sector-specific constraints. Compliance and legal must score this dimension.
Human Oversight
Confirms that human control checkpoints are defined, that an accountable owner exists, and that the system cannot operate without appropriate oversight mechanisms in place.
Are you ready to build it?
5 dimensions · Conditional on Phase 1 score ≥ 45 · ~35 minutes
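The gate rule can be sketched as a simple check. How the seven Phase 1 dimensions roll up into a 45-point total (presumably via sub-question scoring within each dimension) is not specified here, so the per-dimension totals are treated as assumed inputs to the sketch.

```python
PHASE1_GATE_THRESHOLD = 45  # Phase 1 score required to unlock Phase 2

def gate_decision(dimension_totals: dict[str, int]) -> tuple[int, bool]:
    """Sum per-dimension Phase 1 totals and test them against the gate.

    Returns the overall score and whether Phase 2 is unlocked.
    """
    total = sum(dimension_totals.values())
    return total, total >= PHASE1_GATE_THRESHOLD
```

A Phase 1 total at or above 45 unlocks the five Phase 2 readiness dimensions; anything below it stops the initiative at the gate.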
Technical Infrastructure Readiness
Confirms that the deployment environment, compute, integration points, and MLOps toolchain can support production AI.
Skills & Team Capacity
Assesses whether the team has the AI/ML engineering, domain expertise, and governance capacity to deliver and own the system.
Organisational Change Readiness
Evaluates whether affected business units are prepared to change workflows, own AI outputs, and absorb operational responsibility.
Security & Access Control
Reviews credential management, prompt injection exposure, PII handling posture, and model access governance before design begins.
Cost & Budget Confidence
Validates that infrastructure cost estimates, token budgets, and ongoing operational costs are understood and have named budget accountability.
Required attendees & roles
SpanForge non-negotiable: a named Gate Authority with budget accountability must be identified before this assessment begins. Without one, any gate decision produced is advisory only and cannot be enforced.
Three ways to run this assessment
Single session
First-time assessment; all stakeholders available simultaneously
Two sessions
Phase 1 first; Phase 2 after data and governance review
Async + review
Participants pre-score individually; the Lead consolidates; 45-minute alignment review
Get the assessment artefact
Available as a PDF for print and review, and as an editable DOCX for running live sessions and recording scored evidence directly in the document. No sign-in required.
- Phase 1: 7 qualification dimensions
- Phase 2: 5 readiness dimensions
- Gate threshold: Phase 1 score ≥ 45
- Scoring scale: 0–3 with evidence rules
- Session guide & facilitator notes
- Role requirements & time budgets