SafeAI
Assessment Pricing

AI governance that pays for itself
the first time it prevents a breach.

SafeAI assessments are grounded in the TIVM risk model published in Trustworthy AI (Amazon, 2026) — the only published framework that scores Likelihood, Impact, and Exploitability together against your specific deployment context.
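The scoring idea can be sketched in a few lines. The exact TIVM formula is not given on this page, so the 0–10 scales, the multiplicative combination, and the function name below are illustrative assumptions, not the published methodology:

```python
# Illustrative sketch of a TIVM-style score (ASSUMED form, not the
# published formula): Likelihood, Impact, and Exploitability are each
# rated 0-10 for a specific deployment, then combined into one score.

def tivm_score(likelihood: float, impact: float, exploitability: float) -> float:
    """Combine the three TIVM variables into a single 0-10 risk score.

    A multiplicative combination is assumed here so that a near-zero
    rating on any one axis pulls the overall risk toward zero.
    """
    for v in (likelihood, impact, exploitability):
        if not 0.0 <= v <= 10.0:
            raise ValueError("TIVM variables are rated on a 0-10 scale")
    # Geometric mean keeps the result on the same 0-10 scale.
    return (likelihood * impact * exploitability) ** (1 / 3)

# A high-impact but hard-to-exploit system scores lower than one where
# all three variables are elevated.
print(round(tivm_score(9, 9, 1), 2))
print(round(tivm_score(9, 9, 8), 2))
```

The design point the page is making survives any particular formula: the same model rates differently once its Exploitability in a concrete deployment is scored, not just its intrinsic properties.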

All assessments include a written report, TIVM risk score, SL classification, and framework mapping artifact · Delivered within 5 business days

Assessment packages

Choose the tier that matches your deployment complexity. All packages use the full SafeAI 11-module suite.

T1 – T2 Systems

Foundation Assessment

$2,500
per assessment · delivered in 5 business days
For knowledge assistants, internal copilots, summarisation tools, and task automation systems with no autonomous tool use. The right starting point before broader deployment.
  • AIBOM inventory (models, datasets, dependencies)
  • Full TIVM base model risk score (R_model)
  • SL0–SL2 safety level classification
  • 15-control compliance check
  • OWASP · NIST · EU AI Act framework mapping
  • Written assessment report (PDF)
  • Gap analysis with prioritised remediation
  • ALIGN adversarial red team (T3+ only)
  • MCP Security Scanner (T3+ only)
  • CUI Detection scan (add-on available)
Request assessment
T5 Systems · Critical infrastructure

Sovereign Assessment

$10,000+
scoped per engagement · contact for timeline
For defense, intelligence, critical infrastructure, financial systems, and healthcare deployments where a governance failure has national-scale or life-safety consequences.
  • Everything in Agentic
  • Full L1–L6 ALIGN attack coverage including Observability layer
  • CUI Detection Studio — full regulatory scan
  • CMMC 2.0 and NIST SP 800-171 control mapping
  • ITAR/EAR export control data scan
  • SL5 classification with formal verification requirements
  • Domain multiplier calibration (up to 2.5×)
  • Executive briefing deck for governance board
  • Standing Promptfoo CI/CD pipeline setup
  • 30-day post-assessment support window
Contact for scope

Add-ons available for all tiers: CUI Detection scan +$500 · Re-attestation (90-day) at 40% of original assessment price · Promptfoo CI/CD setup standalone $750
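The add-on arithmetic above is simple enough to spell out; the prices come from this page, and the helper names below are just illustrative:

```python
# Add-on pricing from this page: CUI scan +$500, standalone Promptfoo
# CI/CD setup $750, and 90-day re-attestation at 40% of the original
# assessment price.
CUI_SCAN = 500
PROMPTFOO_SETUP = 750
REATTESTATION_RATE = 0.40

def reattestation_price(original_assessment: float) -> float:
    """90-day re-attestation is billed at 40% of the original price."""
    return original_assessment * REATTESTATION_RATE

# Foundation tier ($2,500) with a CUI scan, then a 90-day re-attestation:
first_engagement = 2500 + CUI_SCAN      # $3,000
follow_up = reattestation_price(2500)   # $1,000
print(first_engagement, follow_up)
```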

Full module coverage by tier

Every SafeAI module from the 11-module suite — shown by which tier includes it.

SafeAI module | Foundation $2,500 | Agentic $5,000 | Sovereign $10,000+
SBOM Analysis Studio — AIBOM, supply chain | ✓ | ✓ | ✓
Risk Calculator Studio — base model score | ✓ | ✓ | ✓
Assessment Workbench — full R_joint formula | R_model only | ✓ Full | ✓ Full
Joint Risk Studio — α · β · γ weight calibration | — | ✓ | ✓
SL5 Compliance Studio — 15 controls, gap analysis | SL0–SL2 | SL0–SL4 | ✓ Full SL5
Framework Mapping Studio — OWASP/NIST/EU AI Act artifact | ✓ | ✓ | ✓
MCP Security Scanner — tool schema, injection surfaces | — | ✓ | ✓
CUI Detection Studio — PII, ITAR/EAR, classification marks | Add-on +$500 | Add-on +$500 | ✓ Included
ALIGN — adversarial red team, L1–L6 attacks | — | ✓ L1–L5 | ✓ L1–L6
Promptfoo Studio — CI/CD config for your system | Add-on $750 | Add-on $750 | ✓ Included
Written assessment report — PDF, boardroom-ready | ✓ | ✓ | ✓ + exec deck

Federal contractors and defense industrial base

SafeAI's CUI Detection Studio is the only tool in the suite specifically built for government contractors navigating GSAR 552.239-7001 data rights clauses and CMMC 2.0 requirements. The AIBOM generator and Framework Mapping Studio produce the supply chain visibility artifacts that federal AI acquisition offices are now requiring as a condition of contract award.

CMMC 2.0 · NIST SP 800-171 · GSAR 552.239-7001 · ITAR / EAR · CUI · NIST AI RMF · FedRAMP-aligned
Contact for federal pricing

SafeAI assessments are grounded in the TIVM framework from Trustworthy AI: Red Teaming, Risk and Architecture of Secure Intelligence (Amazon, 2026) — the published methodology behind every score, control check, and compliance artifact SafeAI produces. Validation study: the TIVM Likelihood variable correlates with TruthfulQA MC2 at Pearson r = 0.9813 across Claude, GPT-4o, and Llama 3.1.

Common questions

Straight answers about the process, deliverables, and what you need to provide.

What do I need to provide to start?

Model name and version, deployment description (what the system does and who uses it), tool integrations list, system prompt if applicable, and a staging endpoint URL for T3+ assessments requiring ALIGN agent testing. No production access is ever required.

How is this different from Enkrypt AI or other tools?

Enkrypt ranks models in isolation. SafeAI scores your specific deployment — model plus agent wrapper plus tool access plus domain context — and produces an authorization decision. The same model scores differently depending on what it's connected to and what it's authorised to do. Context is everything in governance.
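That context-dependence can be sketched as code. The α/β/γ weight names mirror the Joint Risk Studio row in the module table above, but the linear combination, default weights, and example values here are assumptions for illustration, not the published R_joint formula:

```python
# Sketch of deployment-context scoring (ASSUMED form): the same base
# model score R_model yields different joint scores depending on the
# agent wrapper, tool access, and domain it is deployed into.

def r_joint(r_model: float, r_agent: float, r_tools: float,
            alpha: float = 0.5, beta: float = 0.3, gamma: float = 0.2,
            domain_multiplier: float = 1.0) -> float:
    """Weighted joint risk, scaled by a calibrated domain multiplier."""
    if not 1.0 <= domain_multiplier <= 2.5:
        raise ValueError("domain multiplier is calibrated up to 2.5x")
    base = alpha * r_model + beta * r_agent + gamma * r_tools
    return base * domain_multiplier

# The identical model (r_model=4.0) scores very differently as an
# internal copilot vs. an agent with tool access in a regulated domain.
copilot = r_joint(4.0, r_agent=1.0, r_tools=0.0)
agentic = r_joint(4.0, r_agent=7.0, r_tools=8.0, domain_multiplier=1.5)
print(round(copilot, 2), round(agentic, 2))
```

The point is structural, not numeric: an authorization decision has to score the whole deployment, so changing what the model is connected to changes its score even when the model itself is unchanged.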

What does the deliverable look like?

A written PDF report with: AIBOM, TIVM risk score (R_joint), SL classification, 15-control compliance check, gap analysis with prioritised remediation, and a signed JSON compliance artifact mapping your scores to OWASP, NIST AI RMF, and EU AI Act. T5 assessments include an executive briefing deck.
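The page does not publish the schema or signing scheme for the JSON compliance artifact, so the field names and HMAC-SHA256 signature below are illustrative assumptions about what a signed, verifiable artifact of that kind can look like:

```python
# Sketch of a signed JSON compliance artifact (ASSUMED shape): the real
# SafeAI schema and signing scheme are not published on this page.
import hashlib
import hmac
import json

def sign_artifact(artifact: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature over a canonical JSON encoding."""
    payload = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {**artifact, "signature": sig}

def verify_artifact(signed: dict, key: bytes) -> bool:
    """Recompute the signature over everything except the signature field."""
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

artifact = {
    "r_joint": 3.7,
    "sl_classification": "SL2",
    "frameworks": ["OWASP", "NIST AI RMF", "EU AI Act"],
}
signed = sign_artifact(artifact, key=b"assessor-signing-key")
print(verify_artifact(signed, key=b"assessor-signing-key"))  # True
```

Canonical encoding (sorted keys, fixed separators) matters here: the verifier must serialize the body byte-for-byte identically to the signer, or a valid artifact will fail verification.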

Do you test against production systems?

Never. All ALIGN adversarial testing is conducted against a staging or development environment that mirrors the production system's tool access but points to non-production endpoints. The assessment report documents the staging configuration used.

What happens after the assessment?

Re-attestation is recommended at 90 days per the NANDA trust lifecycle. Any model update, new tool integration, or system prompt change that materially affects the deployment should trigger a re-assessment at 40% of the original price. Promptfoo CI/CD setup keeps your risk score current between full assessments.

How do I pay?

Invoice via TrustworthyAI. Wire transfer, ACH, or credit card accepted. 50% due at engagement start, 50% on delivery. Government contractors: purchase orders accepted. Net-30 terms available for established entities.