Safety Documentation

We applied aerospace safety methodology to AI agent governance. Every claim has a hazard analysis behind it. Every gap is documented. Come find what we missed.

STPA Hazard Analysis

Systems-Theoretic Process Analysis (Leveson & Thomas, 2018) applied to both single-node and distributed 100k-agent deployment architectures.

16 Hazards · 61 Unsafe Control Actions · 19 Safety Constraints

Includes 7 human-operator UCAs. Safety-constraint traceability matrix with implementation status. Full Mermaid control-structure diagram.

Read the full STPA analysis →

Verifiable Governance Security Model

G1. Capability Integrity: An agent cannot execute outside its declared envelope without detection.
G2. Behavioral Verifiability: Every governance decision is cryptographically signed and verifiable by third parties.
G3. Tamper Evidence: Any tampering produces detectable proof divergence.

14 enumerated threats. 3 conjectures (pending formal TLA+/Coq verification). Operational definitions for all trust tiers, T0-T7. Governance of the governance system itself.
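To illustrate the tamper-evidence goal (G3): one standard way to make "any tampering produces detectable proof divergence" concrete is a hash-chained decision log, where editing any earlier entry invalidates every later hash. This is a minimal stdlib-only sketch, not the project's actual proof format; the `append_entry`/`verify_chain` names and the genesis sentinel are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # hypothetical sentinel hash for the first entry


def append_entry(log: list, decision: dict) -> None:
    """Append a governance decision, chaining it to the previous entry's hash."""
    prev = log[-1]["entry_hash"] if log else GENESIS
    payload = json.dumps(decision, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"decision": decision, "prev_hash": prev, "entry_hash": entry_hash})


def verify_chain(log: list) -> bool:
    """Recompute every link; a modified earlier entry diverges from all later hashes."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["decision"], sort_keys=True)
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

In a full design each entry would also carry a signature (per G2), so third parties can verify both integrity and origin; the hash chain alone only demonstrates the divergence property.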

Read the security model →

Key Management Lifecycle

6 key types. Rotation schedules. CRL-based revocation with Bloom-filter lookups. Automated compromise response in under 5 seconds. Designed for 100,000 concurrent agent keypairs.
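The Bloom-filter lookup path can be sketched as follows: a probabilistic set-membership filter answers "definitely not revoked" in constant time, and only a "maybe revoked" hit needs to consult the authoritative CRL. This is an illustrative sketch under assumed parameters, not the spec's implementation; the `RevocationFilter` class, its sizing, and the key-ID format are hypothetical.

```python
import hashlib


class RevocationFilter:
    """Minimal Bloom filter for fast key-revocation pre-checks.

    A miss means the key is definitely not revoked; a hit means
    "maybe revoked" and must fall back to the authoritative CRL.
    """

    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key_id: str):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key_id}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key_id: str) -> None:
        for pos in self._positions(key_id):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_be_revoked(self, key_id: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key_id))
```

At 100,000 keypairs the filter would be sized so the false-positive rate (and hence CRL fallback traffic) stays small; the values above are placeholders, not the spec's tuning.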

Read the key management spec →

Threat Model

7 threat categories. 7 residual risks. SVD fingerprinting attack surface analysis. We published the attack vectors. If you can break it, we want to know.

Read the threat model →

Methodology

All safety analysis follows the STPA Handbook (Leveson & Thomas, 2018). We call our security claims "conjectures," not "theorems," because we have not yet completed formal machine-checked proofs. We intend to verify them in TLA+ or Coq.

The safety documentation is self-authored. We are actively seeking external STPA experts and cryptographic auditors to review our work. If you have the expertise, we want to hear from you.

All Safety Documents