Safety Documentation
We applied aerospace safety methodology to AI agent governance. Every claim has a hazard analysis behind it. Every gap is documented. Come find what we missed.
STPA Hazard Analysis
Systems-Theoretic Process Analysis (Leveson & Thomas, 2018) applied to both single-node and distributed 100k-agent deployment architectures.
Includes 7 human operator unsafe control actions (UCAs). Safety constraint traceability matrix with implementation status. Full Mermaid control structure diagram.
Verifiable Governance Security Model
14 enumerated threats. 3 conjectures (pending formal TLA+/Coq verification). Operational definition of all trust tiers T0-T7. Governance of the governance system.
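The trust tiers T0-T7 form an ordered scale. As a minimal sketch of how such an ordering might be enforced in code (the tier names come from the source; the `meets_tier` check and all semantics here are illustrative assumptions, not the spec's operational definitions):

```python
from enum import IntEnum

class TrustTier(IntEnum):
    """Hypothetical encoding of the T0-T7 trust tiers.
    The operational definitions live in the security model;
    this only captures that tiers are totally ordered."""
    T0 = 0
    T1 = 1
    T2 = 2
    T3 = 3
    T4 = 4
    T5 = 5
    T6 = 6
    T7 = 7

def meets_tier(agent: TrustTier, required: TrustTier) -> bool:
    # An agent satisfies a requirement iff its tier is at least as high.
    return agent >= required
```

Using `IntEnum` makes tier comparisons explicit and total, so an authorization check like `meets_tier(TrustTier.T3, TrustTier.T5)` cannot silently compare incomparable values.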
Key Management Lifecycle
6 key types. Rotation schedules. CRL-based revocation with bloom filter lookups. Automated compromise response in <5 seconds. Designed for 100,000 concurrent agent keypairs.
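The CRL-plus-bloom-filter design above pairs a probabilistic fast path with an authoritative slow path: a Bloom filter answers "definitely not revoked" in O(1), and only possible hits fall through to the full CRL. A minimal sketch of that lookup pattern (class and parameter names are illustrative, not the project's implementation):

```python
import hashlib

class RevocationBloom:
    """Bloom filter over revoked key IDs.

    False negatives are impossible: if maybe_revoked() returns False,
    the key is definitely not in the filter. False positives are
    possible, so a True result must be confirmed against the full CRL.
    """

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key_id: str):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key_id}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key_id: str) -> None:
        for p in self._positions(key_id):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_revoked(self, key_id: str) -> bool:
        # False => definitely valid; True => consult the authoritative CRL.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key_id))
```

At 100,000 keypairs a filter of this size stays sparse, which is what keeps the common "key is valid" check cheap enough for tight revocation-response budgets.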
Read the key management spec →
Threat Model
7 threat categories. 7 residual risks. SVD fingerprinting attack surface analysis. We published the attack vectors. If you can break it, we want to know.
Read the threat model →
Methodology
All safety analysis follows the STPA Handbook (Leveson & Thomas, 2018). We call our security claims "conjectures," not "theorems," because we have not yet completed formal machine-checked proofs; we intend to verify them in TLA+ or Coq.
The safety documentation is self-authored. We are actively seeking external STPA experts and cryptographic auditors to review our work. If you have the expertise, we want to hear from you.