Phase 0 — AI Project Preconditions | Ciph Lab Self-Audit
Pre-Project Preconditions

0.0 The Groundwork
Before the AI Project Begins.

Eight preconditions that determine whether an organization has authorized AI use — or merely authorized authentication.

How This Fits the Ciph Lab Framework

Phase 0 is a project-level pre-flight check. It asks whether a specific AI deployment has the groundwork in place to proceed — replacing the legacy pattern of AI decisions spread across ad-hoc task forces and emails with a structured, measurable baseline. Phase 0 items map directly to the G (Governance) and V (Verification) columns of the GRACI™ matrix: who governs which AI tools and integrations are permitted, and who verifies AI output before it becomes a decision.

It is distinct from the AI Intelligence Score™ (Tier 0), which measures organizational maturity across governance, operations, and alignment. Complete this first. Then take the AI Intelligence Score™ to understand your broader posture.
What it is
An eight-section self-audit for organizations evaluating, deploying, or expanding AI tools. Completed before vendor selection, procurement, or rollout.
Who completes it
The VP of Intelligence Resources™ — or, in organizations where the function has not yet been formalized, the person currently operating as its proxy — in coordination with Legal, Risk, IT, and affected business stakeholders.
What it produces
A Phase 0 Completeness Score — a measurable signal of whether your AI program is ready to begin, or whether you are about to deploy basic AI at enterprise cost.
How to Use This Instrument

For each of the twenty-four items below, mark whether the precondition is complete, partial (underway, undocumented, or contested), or missing. Be honest. An accurate "missing" is more useful than an optimistic "complete."

Privacy: Your answers stay in your browser. Nothing is sent, saved, or captured by Ciph Lab. When you close this tab, your score clears — screenshot or save anything you want to keep.
Complete: 2 pts · Partial: 1 pt · Missing: 0 pts
0.1

Security Rules of Engagement

§ 1 of 8

Until Security writes this down, the organization has authorized how people log in — not what they can do once inside.

Approved data categories are documented: what the AI may process, by classification level
Approved vs. prohibited use cases are named: the G (Governance) column for this project — specific use cases, specific verdicts, not a blanket policy
Logging, monitoring, and audit trail requirements are defined: where logs go, who reads them, how long they're kept
0.2

Approved Data Scope

§ 2 of 8

Each category of data — public, internal, confidential, restricted, regulated — receives its own explicit verdict. Bulk approval is not approval.

PII / PHI / regulated data handling rules are explicit: named classifications, named restrictions
Data residency and retention protocols are documented: where data lives, how long it lives there
Prompt and output handling rules are published: what cannot be pasted in, what cannot be stored, what cannot leave
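The "bulk approval is not approval" rule above can be expressed as a simple check: every classification carries its own explicit verdict, and a missing or unrecognized verdict fails the scope. This is a hypothetical sketch — the category names come from the section, but the verdict values and function are illustrative, not part of the Ciph Lab framework.

```python
# Recognized verdicts a security team might record per data category.
# These values are illustrative assumptions.
APPROVED_VERDICTS = {"approved", "approved_with_controls", "prohibited"}

data_scope = {
    "public": "approved",
    "internal": "approved",
    "confidential": "approved_with_controls",
    "restricted": "prohibited",
    "regulated": None,  # no verdict recorded yet — scope is incomplete
}

def unresolved_categories(scope: dict) -> list:
    """Return categories lacking an explicit, recognized verdict.

    Bulk approval is not approval: an absent or unknown entry fails.
    """
    return [cat for cat, verdict in scope.items()
            if verdict not in APPROVED_VERDICTS]

print(unresolved_categories(data_scope))  # → ['regulated']
```

Note that "prohibited" counts as a resolved verdict: the point is that each category has been decided, not that each category is permitted.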
0.3

Identity & Access Baseline

§ 3 of 8

SSO is the floor, not the ceiling. Most organizations stop here and mistake authentication for governance.

SSO / Okta provisioning is configured: the minimum, not the completion
Role-based access control maps to job function: not a flat access tier
Named AI tool admin with documented escalation path: if no one can answer "who controls this?", this item is missing
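The distinction between role-based access and a flat tier can be made concrete. In this hypothetical sketch (role names and capabilities are illustrative, not prescribed by the framework), each job function gets only the AI capabilities it needs, rather than every authenticated user receiving the same grant:

```python
# Illustrative role-to-capability map — function-scoped, not a flat tier
# where every SSO login receives identical access.
ROLE_CAPABILITIES = {
    "analyst":  {"chat", "summarize_internal"},
    "engineer": {"chat", "summarize_internal", "code_assist"},
    "ai_admin": {"chat", "summarize_internal", "code_assist",
                 "manage_integrations", "read_audit_logs"},
}

def can(role: str, capability: str) -> bool:
    """Check a capability against the role's function-scoped grant.

    Unknown roles get nothing — deny by default.
    """
    return capability in ROLE_CAPABILITIES.get(role, set())

print(can("analyst", "manage_integrations"))  # → False
print(can("ai_admin", "manage_integrations"))  # → True
```

Authentication (SSO) answers who is inside; a map like this answers what they may do once inside — which is the gap this section audits.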
0.4

Vendor & Third-Party Risk

§ 4 of 8

AI-specific vendor diligence extends beyond standard SaaS TPRA. Training data terms and subprocessor chains are where the novel exposure lives.

DPA / BAA executed; SOC 2 or ISO 27001 verified: the baseline enterprise vendor review
Training data terms reviewed and documented: does the vendor train on your data? Can you opt out?
IP indemnification and subprocessor chain mapped: especially for models that route to other models
0.5

Integration Scope Approval

§ 5 of 8

Each integration is its own project. Slack, Drive, Email, CRM — each requires independent security review, independent admin sign-off, and independent coordination.

Each approved integration is named individually: "everything" is not an integration scope
Source app admins are identified and engaged: Slack admin, Drive admin, CRM admin — each known by name
The VP of Intelligence Resources™ (or proxy) holds the G for this project: in GRACI™ terms, the named Governance owner with authority to clear the project or hold it. Where the VP IR function hasn't been formalized, one named executive must hold G explicitly — not inferred, not defaulted.
0.6

Audit, Compliance & Regulatory

§ 6 of 8

Public company obligations, sector rules, and jurisdictional exposure — mapped before deployment, not during audit.

Applicable regimes are mapped: SOX, HIPAA, GDPR, DSA, state privacy law — by name and by use case
Audit log architecture is defined: location, retention, access controls — answerable to an auditor
Regulatory change monitoring is assigned: AI rules are moving; someone owns watching them
0.7

Operational Ownership & Change

§ 7 of 8

What happens after Day One. New use cases, new roles, new incidents — the governance that keeps Phase 0 alive.

Process exists to approve new use cases post-launch: governance extends past the launch date
Change management covers ROE and data scope expansion: scope grows; the rules must grow with it
Incident response for AI-specific events is defined: prompt injection, data leaks, hallucinated output that was acted upon
0.8

Verification & Human Oversight

§ 8 of 8

AI-generated output is not a decision until a human verifies it. Who checks the output, on what cadence, with what authority — is as foundational as who approves the input.

A named verifier (V) reviews AI output before action: the explicit V column owner for this project's AI output — role-based, not ad hoc review
Human-in-the-loop (HITL) thresholds are defined: which decisions require human sign-off; which do not
Output verification is logged as part of the audit trail: who verified, when, what they changed — captured and retrievable
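The "who verified, when, what they changed" record above has a natural minimal shape. This is a hypothetical sketch — field names and the logging function are illustrative assumptions, not a Ciph Lab specification:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative verification record: the retrievable "who, when, what changed"
# entry section 0.8 asks for. Immutable once written, like an audit entry.
@dataclass(frozen=True)
class VerificationRecord:
    output_id: str       # which AI output was reviewed
    verifier_role: str   # role-based, not an ad hoc individual
    verified_at: str     # ISO 8601 timestamp, UTC
    changes_made: str    # summary of edits, or "none"
    approved: bool       # did the output clear review?

def log_verification(output_id, verifier_role, changes_made, approved):
    """Capture a verification record for the audit trail."""
    return VerificationRecord(
        output_id=output_id,
        verifier_role=verifier_role,
        verified_at=datetime.now(timezone.utc).isoformat(),
        changes_made=changes_made,
        approved=approved,
    )

record = log_verification("draft-1042", "Claims Reviewer",
                          "corrected two figures", True)
print(record.approved)  # → True
```

Whatever the storage backend, the point is that verification leaves a record with the same retention and access controls as the rest of the audit trail.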
Phase 0 Completeness Score

Scored out of 48. Readiness bands: SSO-Only · Partial · Emerging · Ready.
Each "Complete" scores 2 points. Each "Partial" scores 1. Each "Missing" scores 0. Your total reflects how much of Phase 0 is actually in place versus how much is assumed, improvised, or deferred.
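The scoring above can be sketched directly: 24 items, 2 points per "Complete", 1 per "Partial", 0 per "Missing", for a maximum of 48. The band thresholds below are illustrative assumptions — Ciph Lab does not publish cutoffs in this instrument.

```python
# Point values as stated by the instrument.
POINTS = {"complete": 2, "partial": 1, "missing": 0}

def phase0_score(answers: list) -> int:
    """Sum points across the 24 precondition items."""
    assert len(answers) == 24, "Phase 0 has exactly 24 items"
    return sum(POINTS[a] for a in answers)

def band(score: int) -> str:
    """Map a score to a readiness band. Thresholds are assumed."""
    if score >= 40:
        return "Ready"
    if score >= 28:
        return "Emerging"
    if score >= 14:
        return "Partial"
    return "SSO-Only"

# Example: 10 complete, 8 partial, 6 missing.
answers = ["complete"] * 10 + ["partial"] * 8 + ["missing"] * 6
score = phase0_score(answers)
print(score, band(score))  # → 28 Emerging
```

The band names come from the score panel above; where the real boundaries fall is a question for the organization's own risk tolerance.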
Next Step: Take the AI Intelligence Score™ (Tier 0). Phase 0 is project-level. Tier 0 measures organizational maturity across governance, operations, and alignment — and determines your eligibility for the Tier 1 Strategic Diagnostic.
Take Tier 0 Assessment →