Lightweight SDK that makes agentic AI safer and more auditable. Before executing any tool call or side-effect action, the agent must propose a structured JSON contract declaring:
- explicit intent
- impact classification (read / write / money / irreversible / privacy / etc.)
- provenance sources with trust levels (untrusted → trusted via verification)
- claims backed by verifiable evidence (SHA-256 file hashes or Ed25519 signatures)
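A contract of this shape might look like the sketch below. The field names are illustrative assumptions, not the SDK's actual schema; only the evidence-hashing step (SHA-256 over the raw bytes) follows directly from the description above:

```python
import hashlib
import json

# Hypothetical evidence bytes; in practice this would be a real file's contents.
evidence_bytes = b"rows: 1204\ntotal: 9150.00\n"
evidence_hash = hashlib.sha256(evidence_bytes).hexdigest()

# Illustrative contract shape (field names are assumptions, not the SDK's schema).
contract = {
    "intent": "Send monthly summary to the finance mailbox",
    "impact": ["write", "privacy"],  # impact classification of the action
    "provenance": [
        {"source": "crm_export.csv", "trust": "trusted", "verified_by": "sha256"},
    ],
    "claims": [
        {
            "statement": "summary derived from crm_export.csv",
            "evidence": {"type": "sha256", "digest": evidence_hash},
        },
    ],
}
print(json.dumps(contract, indent=2))
```

Because the digest is computed over the exact evidence bytes, any later tampering with the file makes the claim unverifiable.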
The verifier checks everything locally and fails closed if provenance doesn't match or a high-impact action lacks trusted evidence.
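The fail-closed rule can be sketched as: reject unless every claim's hash matches and, for high-impact actions, all provenance is trusted. The function and field names here are assumptions for illustration, not the SDK's API:

```python
import hashlib

# Impact classes treated as high-impact for this sketch (an assumption).
HIGH_IMPACT = {"write", "money", "irreversible", "privacy"}

def verify(contract: dict, evidence_store: dict) -> bool:
    """Fail closed: return False unless every check passes explicitly."""
    # 1. Every claim must carry a known evidence type with a matching digest.
    for claim in contract.get("claims", []):
        ev = claim.get("evidence", {})
        if ev.get("type") != "sha256":
            return False  # unknown evidence type -> reject
        raw = evidence_store.get(claim.get("statement"))
        if raw is None or hashlib.sha256(raw).hexdigest() != ev.get("digest"):
            return False  # missing or mismatched evidence -> reject
    # 2. High-impact actions additionally require fully trusted provenance.
    if HIGH_IMPACT.intersection(contract.get("impact", [])):
        sources = contract.get("provenance", [])
        if not sources or any(s.get("trust") != "trusted" for s in sources):
            return False  # untrusted or absent provenance -> reject
    return True
```

Note the structure: there is no "allow" branch other than the final `return True`, so any unrecognized or missing piece of evidence defaults to rejection.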
It integrates easily as middleware (a LangGraph tool-node wrapper) or via MCP for enterprise tool guarding.
3.85K lines of code · 0 current contributors · 1 day since last commit · 1 user on Open Hub