Overview
This guide provides a phased approach to adopting BASIS in existing AI agent systems. Each phase builds on the previous, allowing gradual integration with minimal disruption.
Five-Phase Roadmap
Phase 1: Assessment
Inventory current agent capabilities, identify governance gaps, and define trust requirements; a starting-point capability inventory is sketched after this list.
- Catalog all agent actions and capabilities
- Map current permissions to the BASIS capability taxonomy
- Identify high-risk operations requiring governance
- Define initial trust tier requirements
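The mapping exercise can begin as a plain lookup table from legacy action names to BASIS capability strings. This is a minimal sketch; the action names and all capability strings except comm:external/email (which appears in the REST example below) are hypothetical.

# Hypothetical inventory: map each legacy agent action to a BASIS
# capability string and a risk level used to stage enforcement later.
CAPABILITY_MAP = {
    "send_email":    {"capability": "comm:external/email", "risk": "high"},
    "read_calendar": {"capability": "data:internal/calendar", "risk": "low"},
    "update_crm":    {"capability": "data:external/crm", "risk": "medium"},
}

def capability_for(action: str) -> str:
    """Look up the BASIS capability required by a legacy action name."""
    entry = CAPABILITY_MAP.get(action)
    if entry is None:
        raise KeyError(f"unmapped action {action!r}: add it to the inventory")
    return entry["capability"]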
Phase 2: Audit-Only
Deploy BASIS in observation mode, logging all decisions without enforcement; a minimal audit-only wrapper is sketched after this list.
- Deploy the INTENT and PROOF layers
- Log all agent actions without blocking
- Establish baseline metrics and patterns
- Validate risk classification accuracy
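In audit-only mode, only the INTENT and PROOF layers are exercised and nothing gates execution. This sketch reuses the BasisClient calls shown in the integration examples below; the log_action wrapper itself and the None placeholder for the decision are assumptions, not part of the BASIS API.

from basis import BasisClient

basis = BasisClient(api_key="...")

def log_action(agent_id, action, capabilities, execute):
    # Parse the action into a structured intent, but never block on it.
    intent = basis.intent.parse(
        entity_id=agent_id,
        action=action,
        capabilities=capabilities,
    )
    result = execute()  # always runs: audit-only, no enforcement
    # Record the outcome to build baseline metrics; None stands in for
    # the enforcement decision that audit mode never makes (assumption).
    basis.proof.record(intent, None, result)
    return result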
Phase 3: Shadow Mode
Run the ENFORCE layer in parallel and compare its decisions to actual outcomes; a comparison wrapper is sketched after this list.
- Deploy the ENFORCE layer in shadow mode
- Compare governance decisions to actual behavior
- Tune policies based on false positives and negatives
- Train teams on escalation procedures
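In shadow mode the ENFORCE layer is consulted but its verdict is only recorded, never applied. A sketch, reusing the client from the previous phase; how mismatches are logged is an assumption:

import logging

def shadow_evaluate(intent, execute):
    decision = basis.enforce.evaluate(intent)  # evaluated, not enforced
    result = execute()  # the action proceeds regardless of the verdict
    if decision.result != "ALLOW":
        # A DENY or ESCALATE on an action that ran without incident is a
        # candidate false positive; feed these into policy tuning.
        logging.info("shadow mismatch: %s -> %s", intent, decision.result)
    basis.proof.record(intent, decision, result)
    return result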
Phase 4: Gradual Enforcement
Enable enforcement for low-risk operations and expand progressively; a per-capability rollout gate is sketched after this list.
- Enable enforcement for the lowest-risk capabilities first
- Monitor for unexpected blocks or escalations
- Gradually expand to medium- and high-risk operations
- Implement the CHAIN layer if required
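One way to stage the rollout is an explicit allowlist of capabilities under real enforcement, widened phase by phase while everything else stays in shadow. The ENFORCED_CAPABILITIES set and the helper below are illustrative, not part of the BASIS API.

# Capabilities currently under real enforcement; everything else stays
# in shadow mode. Widen this set as confidence grows.
ENFORCED_CAPABILITIES = {"data:internal/calendar"}  # hypothetical low-risk capability

def enforce_or_shadow(intent, capability, execute):
    decision = basis.enforce.evaluate(intent)
    if capability in ENFORCED_CAPABILITIES:
        if decision.result == "DENY":
            raise PermissionError(decision.reason)
        if decision.result == "ESCALATE":
            decision = basis.escalate.wait(decision)
    # Shadowed capabilities fall through and execute unconditionally.
    result = execute()
    basis.proof.record(intent, decision, result)
    return result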
Phase 5: Full Enforcement
Complete governance coverage with continuous optimization; a simple deny-rate alert is sketched after this list.
- Enable enforcement for all operations
- Establish ongoing monitoring and alerting
- Conduct regular policy reviews
- Maintain compliance certifications
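Ongoing monitoring can start with something as simple as watching the rolling deny rate, since a sudden spike often signals a policy regression rather than agent misbehavior. The window size and threshold here are illustrative.

import logging
from collections import deque

WINDOW = 500            # recent decisions to consider (illustrative)
DENY_ALERT_RATE = 0.05  # alert when more than 5% of them are denials

_recent = deque(maxlen=WINDOW)

def track_decision(decision):
    _recent.append(decision.result == "DENY")
    if len(_recent) == WINDOW and sum(_recent) / WINDOW > DENY_ALERT_RATE:
        # Wire this into your alerting system of choice.
        logging.warning("deny rate above threshold: review recent policy changes")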
Integration Patterns
LangChain Integration
from langchain.tools import BaseTool
from basis import BasisClient

basis = BasisClient(api_key="...")

class GovernedTool(BaseTool):
    # agent_id, required_capabilities, and _execute are expected to be
    # defined by the concrete tool subclass.
    def _run(self, query: str) -> str:
        # Submit to INTENT layer
        intent = basis.intent.parse(
            entity_id=self.agent_id,
            action=query,
            capabilities=self.required_capabilities,
        )
        # Get governance decision
        decision = basis.enforce.evaluate(intent)
        if decision.result == "DENY":
            raise PermissionError(decision.reason)
        if decision.result == "ESCALATE":
            decision = basis.escalate.wait(decision)
        # Execute with proof logging
        result = self._execute(query)
        basis.proof.record(intent, decision, result)
        return result

REST API Integration
// Before action execution
const intent = await fetch('https://api.cognigate.dev/v1/intent', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ...',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    entity_id: 'ent_agent_001',
    raw_input: 'Send email to [email protected]',
    capabilities_required: ['comm:external/email']
  })
}).then(r => r.json());

// Evaluate the intent against governance policy
const decision = await fetch('https://api.cognigate.dev/v1/enforce', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ...',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ intent_id: intent.intent_id })
}).then(r => r.json());

if (decision.decision === 'ALLOW') {
  // Execute action
  await sendEmail(...);
  // Log proof
  await fetch('https://api.cognigate.dev/v1/proof', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ...',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      intent_id: intent.intent_id,
      decision: decision.decision,
      outcome: 'success'
    })
  });
}

Common Challenges
Challenge: Legacy agents without structured actions
Solution: Start with INTENT layer to parse and structure existing agent outputs before enforcement.
Challenge: High volume of false positives during shadow mode
Solution: Use the shadow period to tune risk classification. Start with conservative classifications, then adjust based on observed patterns; see the sketch below.
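A useful first metric is the per-capability false-positive rate computed from shadow-mode logs. The record shape (capability, decision, outcome) is an assumption about how you store shadow results.

from collections import defaultdict

def false_positive_rates(shadow_log):
    """shadow_log: iterable of (capability, decision, outcome) tuples."""
    denies = defaultdict(int)
    bad_denies = defaultdict(int)
    for capability, decision, outcome in shadow_log:
        if decision == "DENY":
            denies[capability] += 1
            if outcome == "success":  # blocked an action that was in fact benign
                bad_denies[capability] += 1
    return {cap: bad_denies[cap] / denies[cap] for cap in denies}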
Challenge: Resistance to escalation workflow changes
Solution: Begin with high-risk operations only. Demonstrate value through audit trail and compliance reporting.
Challenge: Performance concerns with synchronous governance
Solution: Use asynchronous proof logging and cache repeat capability checks; the ENFORCE call typically adds under 50 ms. Both techniques are sketched below.
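A minimal sketch of both mitigations, reusing the BasisClient calls shown earlier; the background queue and the cache (which, as written, never expires entries) are deliberate simplifications.

import queue
import threading

_proof_queue = queue.Queue()

def _proof_worker():
    # Drain proof records in the background so the agent's hot path
    # never waits on the PROOF layer.
    while True:
        intent, decision, result = _proof_queue.get()
        basis.proof.record(intent, decision, result)
        _proof_queue.task_done()

threading.Thread(target=_proof_worker, daemon=True).start()

_decision_cache = {}

def cached_evaluate(intent, cache_key):
    # Reuse decisions for identical (entity, capability) pairs.
    # A production cache needs expiry so policy changes take effect.
    if cache_key not in _decision_cache:
        _decision_cache[cache_key] = basis.enforce.evaluate(intent)
    return _decision_cache[cache_key]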
For the complete migration guide including detailed checklists, rollback procedures, and case studies, see the full document on GitHub.