Assessment packages
Choose the tier that matches your deployment complexity. All packages are built on the SafeAI 11-module suite; higher tiers include more of it (see the coverage table below).
Foundation Assessment
- AIBOM inventory (models, datasets, dependencies)
- Full TIVM base model risk score (R_model)
- SL0–SL2 safety level classification
- 15-control compliance check
- OWASP · NIST · EU AI Act framework mapping
- Written assessment report (PDF)
- Gap analysis with prioritised remediation
- ALIGN adversarial red team (T3+ only)
- MCP Security Scanner (T3+ only)
- CUI Detection scan (add-on available)
Agentic Assessment
- Everything in Foundation
- Full R_joint score (model + agent + scale)
- MCP Security Scanner — all 4 modules
- ALIGN red team — L1 through L5 attack layers
- PAIR loop adversarial testing on model endpoint
- Agent endpoint testing (L2–L5) on staging system
- SL3–SL4 safety level classification
- Promptfoo CI/CD configuration for your system
- Re-attestation scheduling (90-day cycle)
- CUI Detection scan (add-on available)
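The R_joint score above combines model, agent, and scale risk into one deployment-level number, adjusted by a domain multiplier. As an illustration only — the exact TIVM formula lives in the published methodology; the weighted-sum form, the default α/β/γ values, and the function name below are assumptions for this sketch:

```python
# Illustrative sketch of a joint deployment risk score.
# ASSUMPTION: a weighted sum with alpha/beta/gamma calibration weights and a
# domain multiplier capped at 2.5x. The real TIVM formula is defined in the
# published methodology and may differ.

def r_joint(r_model: float, r_agent: float, r_scale: float,
            alpha: float = 0.5, beta: float = 0.3, gamma: float = 0.2,
            domain_multiplier: float = 1.0) -> float:
    """Combine base-model, agent-wrapper, and scale risk into one score."""
    if not 1.0 <= domain_multiplier <= 2.5:
        raise ValueError("domain multiplier is calibrated between 1.0x and 2.5x")
    base = alpha * r_model + beta * r_agent + gamma * r_scale
    return base * domain_multiplier

# The same base model scores higher once agent tooling, scale, and a
# regulated domain are layered on top of it.
print(r_joint(0.4, 0.0, 0.0))                         # model in isolation
print(r_joint(0.4, 0.7, 0.5, domain_multiplier=2.0))  # agentic deployment
```

This is the intuition behind "the same model scores differently depending on what it's connected to": the agent, scale, and domain terms move the score even when R_model is unchanged.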
Sovereign Assessment
- Everything in Agentic
- Full L1–L6 ALIGN attack coverage including Observability layer
- CUI Detection Studio — full regulatory scan
- CMMC 2.0 and NIST SP 800-171 control mapping
- ITAR/EAR export control data scan
- SL5 classification with formal verification requirements
- Domain multiplier calibration (up to 2.5×)
- Executive briefing deck for governance board
- Standing Promptfoo CI/CD pipeline setup
- 30-day post-assessment support window
Add-ons available for all tiers: CUI Detection scan +$500 · Re-attestation (90-day) at 40% of original assessment price · Promptfoo CI/CD setup standalone $750
Full module coverage by tier
Every SafeAI module from the 11-module suite — shown by which tier includes it.
| SafeAI module | Foundation $2,500 | Agentic $5,000 | Sovereign $10,000+ |
|---|---|---|---|
| SBOM Analysis Studio — AIBOM, supply chain | ✓ | ✓ | ✓ |
| Risk Calculator Studio — base model score | ✓ | ✓ | ✓ |
| Assessment Workbench — full R_joint formula | R_model only | ✓ Full | ✓ Full |
| Joint Risk Studio — α · β · γ weight calibration | – | ✓ | ✓ |
| SL5 Compliance Studio — 15 controls, gap analysis | SL0–SL2 | SL0–SL4 | ✓ Full SL5 |
| Framework Mapping Studio — OWASP/NIST/EU AI Act artifact | ✓ | ✓ | ✓ |
| MCP Security Scanner — tool schema, injection surfaces | – | ✓ | ✓ |
| CUI Detection Studio — PII, ITAR/EAR, classification marks | Add-on +$500 | Add-on +$500 | ✓ Included |
| ALIGN — adversarial red team, L1–L6 attacks | – | ✓ L1–L5 | ✓ L1–L6 |
| Promptfoo Studio — CI/CD config for your system | – | ✓ | ✓ |
| Written assessment report — PDF, boardroom-ready | ✓ | ✓ | ✓ + exec deck |
SafeAI assessments are grounded in the TIVM framework from Trustworthy AI: Red Teaming, Risk and Architecture of Secure Intelligence (Amazon, 2026) — the published methodology behind every score, control check, and compliance artifact SafeAI produces. Validation study: the TIVM Likelihood variable correlates with TruthfulQA MC2 at Pearson r = 0.9813 across Claude, GPT-4o, and Llama 3.1.
Common questions
Straight answers about the process, deliverables, and what you need to provide.
What do I need to provide to start?
Model name and version, deployment description (what the system does and who uses it), tool integrations list, system prompt if applicable, and a staging endpoint URL for T3+ assessments requiring ALIGN agent testing. No production access is ever required.
How is this different from Enkrypt AI or other tools?
Enkrypt ranks models in isolation. SafeAI scores your specific deployment — model plus agent wrapper plus tool access plus domain context — and produces an authorization decision. The same model scores differently depending on what it's connected to and what it's authorised to do. Context is everything in governance.
What does the deliverable look like?
A written PDF report with: AIBOM, TIVM risk score (R_joint), SL classification, 15-control compliance check, gap analysis with prioritised remediation, and a signed JSON compliance artifact mapping your scores to OWASP, NIST AI RMF, and EU AI Act. T5 assessments include an executive briefing deck.
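For reference, a signed JSON compliance artifact can be as simple as a score payload plus an HMAC over its canonical serialization. The field names, framework IDs, and key handling below are illustrative assumptions, not the actual SafeAI artifact schema:

```python
import hashlib
import hmac
import json

# Hypothetical artifact payload -- field names are illustrative, not the
# actual SafeAI schema.
payload = {
    "r_joint": 0.51,
    "safety_level": "SL3",
    "framework_mappings": {
        "owasp_llm_top10": ["LLM01", "LLM06"],
        "nist_ai_rmf": ["GOVERN-1.1", "MEASURE-2.5"],
        "eu_ai_act": ["Art. 9", "Art. 15"],
    },
}

# Sign the canonical (sorted-key, no-whitespace) serialization so a verifier
# hashes byte-identical input when re-checking the signature.
secret = b"assessor-signing-key"  # in practice: a managed key, never hard-coded
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
signature = hmac.new(secret, canonical, hashlib.sha256).hexdigest()

artifact = {"payload": payload, "signature": signature}
print(json.dumps(artifact, indent=2))
```

A recipient with the shared key recomputes the HMAC over the same canonical serialization and compares digests to confirm the scores were not altered after signing.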
Do you test against production systems?
Never. All ALIGN adversarial testing is conducted against a staging or development environment that mirrors the production system's tool access but points to non-production endpoints. The assessment report documents the staging configuration used.
What happens after the assessment?
Re-attestation is recommended at 90 days per the NANDA trust lifecycle. Any model update, new tool integration, or system prompt change that materially affects the deployment should trigger a re-assessment at 40% of the original price. Promptfoo CI/CD setup keeps your risk score current between full assessments.
How do I pay?
Invoice via TrustworthyAI. Wire transfer, ACH, or credit card accepted. 50% due at engagement start, 50% on delivery. Government contractors: purchase orders accepted. Net-30 terms available for established entities.