BASIS Standard
Behavioral AI Safety and Integrity Standard — An open specification for AI agent governance, defining 8 trust tiers, 16 core trust factors, tier-gated capabilities, and the KYA (Know Your Agent) verification framework.
Overview
BASIS (Behavioral AI Safety and Integrity Standard) is an open governance specification designed to bring trust, accountability, and oversight to autonomous AI systems. It provides a framework for evaluating, scoring, and managing AI agent behavior in production environments.
The standard defines an 8-tier trust system (T0-T7); 16 core trust factors evaluated with tier-gated thresholds; tier-gated capability unlocking; a Validation Gate that issues PASS/REJECT/ESCALATE decisions; and the KYA (Know Your Agent) verification framework covering Identity, Authorization, Accountability, and Behavior.
8-Tier Trust System (T0-T7)
Agents start at T0 (Sandbox) and progress through tiers by demonstrating trust factors. Each tier unlocks new capabilities while requiring additional factors to be proven.
| Tier | Name | Score Range | Factors | Description |
|---|---|---|---|---|
| T0 | Sandbox | 0-199 | 0 | New agents start here. Observation only, extremely limited capabilities. |
| T1 | Observed | 200-349 | 3 | Basic competence demonstrated through Competence, Reliability, and Observability factors. |
| T2 | Provisional | 350-499 | 6 | Accountability and safety emerging. Adds Transparency, Safety, and Accountability factors. |
| T3 | Monitored | 500-649 | 9 | Security and identity confirmed. Adds Privacy, Security, and Alignment factors. |
| T4 | Standard | 650-799 | 12 | Human oversight and alignment proven. Adds Oversight, Consent, and Explainability factors. |
| T5 | Trusted | 800-875 | 14 | Stewardship and humility demonstrated. Adds Humility and Stewardship factors. |
| T6 | Certified | 876-950 | 16 | Adaptability and continuous learning. All 16 core factors at maximum thresholds. |
| T7 | Autonomous | 951-1000 | 16 | Full autonomy. All 16 core factors required at maximum thresholds. |
Why tiers?
Same reason security clearances exist. 'Confidential' and 'Top Secret' mean something — a raw number doesn't. You write policy for 'Standard' vs 'Trusted', not for score 647 vs 648.
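As an illustration, the tier thresholds above reduce to a simple lookup. This is a hedged sketch: the real mapping is provided by `scoreToTier` in `@vorionsys/basis`, and the type and constant names here are invented for the example.

```typescript
// Hypothetical re-implementation of the score-to-tier mapping, for illustration
// only; the published API is scoreToTier from @vorionsys/basis.
type Tier = 'T0' | 'T1' | 'T2' | 'T3' | 'T4' | 'T5' | 'T6' | 'T7';

// Lower bound of each tier's score range, per the table above.
const TIER_FLOORS: [number, Tier][] = [
  [951, 'T7'], [876, 'T6'], [800, 'T5'], [650, 'T4'],
  [500, 'T3'], [350, 'T2'], [200, 'T1'], [0, 'T0'],
];

function scoreToTierSketch(score: number): Tier {
  for (const [floor, tier] of TIER_FLOORS) {
    if (score >= floor) return tier;
  }
  return 'T0';
}

console.log(scoreToTierSketch(720)); // 'T4' (the Standard tier)
```

Policy can then be written per tier name, as the text suggests, rather than against raw scores.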
Key Features
16 Core Trust Factors
16 core factors across 5 groups (Foundation, Security, Agency, Maturity, Evolution) including Competence, Reliability, Observability, Transparency, Safety, Accountability, Privacy, Security, Identity, Human Oversight, Alignment, Context Awareness, Stewardship, Humility, Adaptability, and Learning.
Think of it this way
A third-party API is a BLACK_BOX — you see inputs and outputs, nothing else. Your own service with full telemetry is WHITE_BOX. You trust them differently — so should your governance layer. BLACK_BOX agents cap at T3 because you simply can't verify what's happening inside.
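The observability cap can be sketched as a ceiling applied after trust is earned. Only the BLACK_BOX-caps-at-T3 rule comes from the text above; the WHITE_BOX cap value and all names in this snippet are assumptions.

```typescript
// Sketch (not the published API): an agent's effective tier is its earned
// tier clamped by how observable it is.
type Observability = 'BLACK_BOX' | 'WHITE_BOX';

const TIER_CAP: Record<Observability, number> = {
  BLACK_BOX: 3, // inputs and outputs only; internals cannot be verified (per the text)
  WHITE_BOX: 7, // full telemetry; no effective cap (assumption)
};

function effectiveTier(earnedTier: number, obs: Observability): number {
  return Math.min(earnedTier, TIER_CAP[obs]);
}
```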
Validation Gate
Central PASS/REJECT/ESCALATE decision engine. Validates CAR format, verifies agent manifests, matches capabilities against trust tiers, and enforces configurable policies in strict or production mode.
KYA Framework
Know Your Agent verification with 4 pillars: Identity (DID-based verification), Authorization (policy-based access), Accountability (audit chain logging), and Behavior (anomaly detection and monitoring).
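A minimal sketch of how the four pillars might aggregate into a single KYA verdict. The field names and the all-pillars-must-pass rule are assumptions for illustration, not the spec's wording.

```typescript
// Hypothetical KYA aggregation: each pillar produces a boolean check result.
interface KyaResult {
  identity: boolean;       // DID-based verification
  authorization: boolean;  // policy-based access
  accountability: boolean; // audit chain logging
  behavior: boolean;       // anomaly detection and monitoring
}

// Assumed rule: KYA passes only if every pillar passes.
function kyaPasses(r: KyaResult): boolean {
  return r.identity && r.authorization && r.accountability && r.behavior;
}
```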
Tier-Gated Capabilities
35 capabilities across 8 categories (Data Access, File Operations, API Access, Code Execution, Agent Interaction, Resource Management, System Administration, Governance) progressively unlocked from T0 (3 caps) to T7 (all 35).
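Progressive unlocking can be modeled as a minimum-tier requirement per capability. The three capability names below appear in the Implementation example; their specific minimum tiers here are assumptions.

```typescript
// Sketch: each capability declares the lowest tier allowed to use it.
// The tier numbers below are illustrative assumptions.
const MIN_TIER: Record<string, number> = {
  read_data: 0,        // available even in Sandbox
  call_api: 2,         // assumed to unlock at Provisional
  manage_resources: 4, // assumed to unlock at Standard
  // ...the remaining capabilities would map to their own minimum tiers
};

function allowedAt(tier: number): string[] {
  return Object.keys(MIN_TIER).filter((cap) => MIN_TIER[cap] <= tier);
}
```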
The Three Layers
BASIS governance operates through three interconnected layers that process every AI action:
Intent
Parse, classify, and risk-score agent intentions before execution.
Enforce
Evaluate trust levels and apply policy rules to gate capabilities.
Proof
Immutable audit trail with dual-hash chains (SHA-256 + SHA3-256), optional Ed25519 signatures. Merkle aggregation and ZK proofs are planned for future privacy-preserving verification.
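A dual-hash chain link could look like the sketch below, using Node's built-in crypto module. The record shape and the concatenation scheme are assumptions; only the SHA-256 + SHA3-256 pairing comes from the text.

```typescript
import { createHash } from 'node:crypto';

// Sketch of one link in a dual-hash audit chain: each record is hashed with
// both SHA-256 and SHA3-256, and each link commits to the previous link's
// digests so tampering anywhere breaks the chain.
interface ChainLink {
  sha256: string;
  sha3_256: string;
}

function appendLink(prev: ChainLink | null, record: string): ChainLink {
  // Assumed scheme: prepend the previous digests to the serialized record.
  const payload = (prev ? prev.sha256 + prev.sha3_256 : '') + record;
  return {
    sha256: createHash('sha256').update(payload).digest('hex'),
    sha3_256: createHash('sha3-256').update(payload).digest('hex'),
  };
}
```

An attacker would need collisions in both hash functions simultaneously to rewrite history undetected, which is the usual rationale for dual-hash designs.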
Stepped Trust Decay
Trust scores decay at specific milestones, not continuously. This provides predictable, transparent decay behavior with a 182-day half-life. Activity resets the decay clock. Agents in Sandbox (T0) must actively earn trust through the boot camp process to progress.
| Days Inactive | Decay Factor | Milestone |
|---|---|---|
| 0-6 | 100% | Grace period |
| 7 | ~93% | Early warning |
| 14 | ~87% | Two-week checkpoint |
| 28 | ~80% | One-month threshold |
| 56 | ~70% | Two-month mark |
| 112 | ~58% | Four-month drop |
| 182 | 50% | Half-life reached |
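Because decay is stepped rather than continuous, the schedule above can be implemented as a milestone lookup. The factors are the approximate values from the table; this is an illustrative sketch, not the reference implementation.

```typescript
// Stepped decay schedule from the table: [daysInactive threshold, factor].
// Ordered from longest inactivity to shortest so the first match wins.
const DECAY_STEPS: [number, number][] = [
  [182, 0.50], [112, 0.58], [56, 0.70], [28, 0.80],
  [14, 0.87], [7, 0.93], [0, 1.00],
];

function decayFactor(daysInactive: number): number {
  for (const [days, factor] of DECAY_STEPS) {
    if (daysInactive >= days) return factor;
  }
  return 1.0;
}

function decayedScore(score: number, daysInactive: number): number {
  return Math.round(score * decayFactor(daysInactive));
}

console.log(decayedScore(800, 7)); // 744, i.e. ~93% of 800
```

Any activity would reset `daysInactive` to zero, restoring the full score, which matches the "activity resets the decay clock" rule above.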
Implementation
The reference implementation is available in the @vorionsys/basis package.
The runtime trust engine is provided by @vorionsys/atsf-core.
```typescript
import { TrustTier, TIER_THRESHOLDS, scoreToTier } from '@vorionsys/basis';
import { validateAgent } from '@vorionsys/basis/validation-gate';

// Score maps to tier
const tier = scoreToTier(720); // TrustTier.T4_OPERATIONAL

// Validate an agent manifest
const result = validateAgent({
  car: 'car:vorion:agent-001:d3:l4:v1.0',
  trustScore: 720,
  capabilities: ['read_data', 'call_api', 'manage_resources'],
});

console.log(result.decision); // 'PASS' | 'REJECT' | 'ESCALATE'
console.log(result.tier);     // T4_OPERATIONAL
console.log(result.allowed);  // Capabilities permitted at this tier
```