Before deploying AI, you need to know if your organization can actually support it. 73% of enterprise AI projects fail — not because of model quality, but because of missing infrastructure, bad data, and organizational unreadiness.
## The 5-Pillar AI Readiness Framework
| Pillar | Weight | What It Measures |
|---|---|---|
| Data Maturity | 30% | Quality, accessibility, governance of your data |
| Infrastructure | 20% | Compute, storage, MLOps tooling |
| Talent & Skills | 20% | Engineering and data science capability |
| Governance | 15% | Ethics, compliance, risk management |
| Culture | 15% | Leadership support, change management |
## Step 1: Score Data Maturity (30%)

### 1.1 Data Quality Audit
```python
import pandas as pd

def score_data_quality(df: pd.DataFrame) -> dict:
    """Score a dataset's quality on key dimensions (each 0-100)."""
    total_cells = df.size
    null_cells = df.isnull().sum().sum()
    scores = {
        "completeness": round((1 - null_cells / total_cells) * 100, 1),
        "uniqueness": round(df.drop_duplicates().shape[0] / df.shape[0] * 100, 1),
        "consistency": _check_consistency(df),
        "freshness": _check_freshness(df),
    }
    # Overall is the plain average of the four dimension scores
    scores["overall"] = round(sum(scores.values()) / len(scores), 1)
    return scores

def _check_consistency(df):
    """Check for format consistency in string columns."""
    issues = 0
    for col in df.select_dtypes(include="object").columns:
        # Mixed-case duplicates ("Acme" vs "acme") signal inconsistent entry
        if df[col].str.lower().nunique() < df[col].nunique():
            issues += 1
    return max(0, 100 - issues * 10)

def _check_freshness(df):
    """Check timestamp columns for data freshness."""
    date_cols = df.select_dtypes(include="datetime64").columns
    if len(date_cols) == 0:
        return 50  # No timestamps to evaluate
    latest = df[date_cols].max().max()
    days_old = (pd.Timestamp.now() - latest).days
    if days_old < 1:
        return 100
    if days_old < 7:
        return 85
    if days_old < 30:
        return 65
    return 40
```
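As a quick sanity check, the completeness and uniqueness formulas above can be run on a toy table (the column names and values here are hypothetical, just for illustration):

```python
import pandas as pd

# Hypothetical sample: 4 rows, one missing email, one fully duplicated row
df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", None, "b@x.com"],
    "plan": ["free", "pro", "pro", "pro"],
})

# Same formulas as score_data_quality's completeness and uniqueness
completeness = round((1 - df.isnull().sum().sum() / df.size) * 100, 1)
uniqueness = round(df.drop_duplicates().shape[0] / df.shape[0] * 100, 1)
print(completeness, uniqueness)  # 87.5 75.0
```

One missing cell out of eight gives 87.5% completeness; the duplicated row drops uniqueness to 75%.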
### 1.2 Data Accessibility Checklist
| Question | Score |
|---|---|
| Can analysts query production data without DBA involvement? | /10 |
| Is there a central data catalog (e.g., DataHub, Collibra)? | /10 |
| Are datasets documented with schema definitions? | /10 |
| Is there a self-service data access request process? | /10 |
| Can you join data across 3+ source systems? | /10 |
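One way to fold the checklist into a 0-100 input for the Data Maturity pillar is to scale the raw total; the answers below are hypothetical:

```python
# Hypothetical answers to the five accessibility questions, each scored 0-10
answers = {
    "query_without_dba": 7,
    "data_catalog": 4,
    "schema_docs": 6,
    "self_service_access": 5,
    "cross_system_joins": 8,
}

# Scale the 0-50 raw total to a 0-100 accessibility score
accessibility_score = sum(answers.values()) / (10 * len(answers)) * 100
print(accessibility_score)  # 60.0
```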
## Step 2: Score Infrastructure Readiness (20%)

### 2.1 Compute Assessment
```bash
# Check GPU availability
nvidia-smi --query-gpu=name,memory.total,driver_version \
  --format=csv,noheader 2>/dev/null || echo "No GPU detected"

# Check available RAM
free -h | head -2

# Check Docker availability
docker --version 2>/dev/null || echo "Docker not installed"

# Check Kubernetes
kubectl cluster-info 2>/dev/null || echo "No Kubernetes cluster"
```
### 2.2 Infrastructure Scoring
| Capability | Level 1 (Basic) | Level 2 (Ready) | Level 3 (Advanced) |
|---|---|---|---|
| Compute | Shared VMs | Dedicated GPU instances | Auto-scaling GPU clusters |
| Storage | Local/NAS | Cloud object storage | Lakehouse with governance |
| MLOps | Manual scripts | MLflow / Weights & Biases | Full Kubeflow / SageMaker |
| Monitoring | Basic logs | APM + custom metrics | AI-specific observability |
| Networking | Public internet | VPN/Private endpoints | Zero-trust architecture |
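The maturity table can be turned into a pillar score by assigning points per level and averaging across capabilities. The point mapping and the self-assessed levels below are assumptions, not part of the framework:

```python
# Assumed points per maturity level — adjust to taste
LEVEL_POINTS = {1: 33, 2: 67, 3: 100}

# Hypothetical self-assessment across the five capabilities
levels = {"compute": 2, "storage": 2, "mlops": 1, "monitoring": 2, "networking": 1}

infrastructure_score = sum(LEVEL_POINTS[lvl] for lvl in levels.values()) / len(levels)
print(round(infrastructure_score, 1))  # 53.4
```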
## Step 3: Score Talent & Skills (20%)

### Skills Matrix
| Skill Area | Minimum for AI Readiness | Assessment Method |
|---|---|---|
| Data Engineering | 2+ engineers who can build ETL pipelines | Review recent pipeline work |
| ML/Data Science | 1+ scientist who can train & evaluate models | Technical interview |
| MLOps/DevOps | 1+ engineer who can containerize & deploy | Deploy a test model |
| Data Literacy | Managers can interpret model outputs | Run a decision exercise |
| AI Ethics | Someone owns responsible AI policy | Review policy document |
```python
# Simple skills gap calculator
skills = {
    "data_engineering": {"current": 2, "needed": 3},
    "ml_data_science": {"current": 1, "needed": 2},
    "mlops": {"current": 0, "needed": 1},
    "data_literacy": {"current": 60, "needed": 80},  # % of managers, not headcount
    "ai_ethics": {"current": 0, "needed": 1},
}

for skill, counts in skills.items():
    gap = counts["needed"] - counts["current"]
    status = "✅ Met" if gap <= 0 else f"⚠️ Gap: {gap}"
    print(f"  {skill}: {status}")
```
## Step 4: Score Governance Readiness (15%)

### Governance Checklist
## Step 5: Score Culture & Leadership (15%)

### Culture Assessment
| Signal | Points |
|---|---|
| C-suite sponsor for AI initiatives | +20 |
| Dedicated AI budget (not borrowed from IT) | +20 |
| Cross-functional AI steering committee | +15 |
| Pilot projects completed (even if small) | +15 |
| Data-driven decision-making culture | +15 |
| Willingness to fail and iterate | +15 |
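Because the signal points sum to 100, the culture score is simply the total of the signals that apply. The yes/no answers below are hypothetical:

```python
# (applies?, points) for each culture signal from the table above
signals = {
    "c_suite_sponsor": (True, 20),
    "dedicated_budget": (False, 20),
    "steering_committee": (True, 15),
    "pilots_completed": (True, 15),
    "data_driven_decisions": (False, 15),
    "fail_and_iterate": (True, 15),
}

culture_score = sum(points for applies, points in signals.values() if applies)
print(culture_score)  # 65
```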
## Step 6: Calculate Your Overall Score
```python
def calculate_ai_readiness(scores: dict) -> dict:
    weights = {
        "data_maturity": 0.30,
        "infrastructure": 0.20,
        "talent_skills": 0.20,
        "governance": 0.15,
        "culture": 0.15,
    }
    weighted_score = sum(
        scores[pillar] * weights[pillar]
        for pillar in weights
    )
    tier = (
        "🟢 AI-Ready" if weighted_score >= 75 else
        "🟡 Foundation Building" if weighted_score >= 50 else
        "🔴 Not Ready — Build Foundations First"
    )
    return {
        "overall_score": round(weighted_score, 1),
        "tier": tier,
        "pillar_scores": scores,
        "recommendation": _get_recommendation(scores),
    }

def _get_recommendation(scores):
    weakest = min(scores, key=scores.get)
    return f"Priority: Strengthen '{weakest}' (score: {scores[weakest]})"
```
## Interpretation Guide
| Score Range | Tier | Action |
|---|---|---|
| 75-100 | AI-Ready | Proceed with production pilots |
| 50-74 | Foundation Building | Address gaps, run contained experiments |
| 25-49 | Early Stage | Invest in data + skills before AI |
| 0-24 | Not Ready | Focus on digital transformation basics |
## Readiness Assessment Checklist
:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. Try the free AI Readiness Assessment Tool or get a Premium AI Readiness Report.
:::