ORIGIN
The Same Impulse, Translated Into Code
My parents held consciousness-raising circles for farmworkers in Nicaragua — Freirean method: knowledge as a tool for liberation, not a credential for gatekeeping. The people who most needed access to the legal system had the least access to the language of it.
The Chorus starts from the same place. Legal contracts determine what people owe, what they're owed, and what happens when things go wrong. The people who most need to understand them are often the least resourced to do so. AI can change that — but only if it's built to be trustworthy, not just capable.
THE PROBLEM
Where "Mostly Right" Is Not Enough
AI systems fail differently than humans do. They fail confidently, at scale, without flagging that something went wrong. In high-stakes domains — legal analysis, compliance review, healthcare — a missed clause or misattributed obligation doesn't compound gradually. It arrives all at once, usually at the worst possible time.
Most AI contract tools optimize for throughput. The Chorus optimizes for trustworthiness. These are different design goals, and they produce fundamentally different systems.
DESIGN PRINCIPLE
Governance + Reasoning, Working in Layers
DETERMINISTIC STATE MACHINE
Governance layer. The building inspector, not the builder. It doesn't generate analysis — it checks whether analysis is sound enough to proceed.
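A minimal sketch of what a deterministic gate can look like. All names, states, and soundness checks here are illustrative assumptions, not The Chorus's actual implementation; the point is that the same inputs always produce the same next state, with no generation involved.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    INTAKE = auto()
    ANALYZING = auto()
    VERIFIED = auto()
    NEEDS_REVIEW = auto()

@dataclass
class Analysis:
    clauses_found: int
    clauses_expected: int
    citations_resolved: bool

# Legal transitions are fixed up front -- the inspector's rulebook.
TRANSITIONS = {
    State.INTAKE: {State.ANALYZING},
    State.ANALYZING: {State.VERIFIED, State.NEEDS_REVIEW},
}

def gate(state: State, analysis: Analysis) -> State:
    """Deterministic check: identical inputs always yield the same verdict."""
    if state is not State.ANALYZING:
        raise ValueError(f"gate called in {state}")
    sound = (analysis.clauses_found == analysis.clauses_expected
             and analysis.citations_resolved)
    nxt = State.VERIFIED if sound else State.NEEDS_REVIEW
    assert nxt in TRANSITIONS[state]  # the machine can only move along known edges
    return nxt
```

Because the gate is rule-based rather than generative, its verdicts are reproducible and auditable after the fact.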
LLM REASONING
The analytical layer. Probabilistic, generative, capable of reading between the lines. Powerful — and requires governance to be trustworthy.
BEHAVIORAL CONTRACTS
Scope definition for each agent. What it does, what it doesn't, how it handles failure. Contracts create predictability at scale.
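One way a behavioral contract might be expressed in code. The agent name, task lists, and failure policy below are hypothetical examples, not the product's real schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BehavioralContract:
    agent: str
    does: tuple          # in-scope tasks
    does_not: tuple      # explicitly out of scope
    on_failure: str      # a named behavior, never a silent retry

# Hypothetical agent: extracts clauses, never gives legal advice.
clause_extractor = BehavioralContract(
    agent="ClauseExtractor",
    does=("identify obligations", "attribute parties"),
    does_not=("give legal advice", "negotiate terms"),
    on_failure="escalate_to_review",
)

def in_scope(contract: BehavioralContract, task: str) -> bool:
    """A task runs only if the contract explicitly permits it."""
    return task in contract.does and task not in contract.does_not
```

Making the scope a data structure rather than a prompt convention is what lets predictability hold at scale: any orchestrator can check it mechanically.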
STANCE ARCHITECTURE
Replaces numeric confidence scores. Named operational states with behavioral contracts — so the system knows how to behave, not just how confident to appear.
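The difference from a numeric score can be sketched in a few lines. The stance names and selection rules here are illustrative assumptions; what matters is that each state maps to a behavior, not a decimal:

```python
from enum import Enum

class Stance(Enum):
    """Named operational states; each carries a behavioral contract."""
    ASSERT = "assert"    # findings verified, proceed
    QUALIFY = "qualify"  # present findings with stated limits
    DEFER = "defer"      # halt and route to a human reviewer

BEHAVIOR = {
    Stance.ASSERT: "emit result with audit trail",
    Stance.QUALIFY: "emit result plus plain-language caveats",
    Stance.DEFER: "hand off to human review",
}

def stance_for(verified: bool, gaps: int) -> Stance:
    # A stance is chosen by rule, so the same evidence always produces
    # the same behavior -- unlike a bare 0.87 that leaves the caller guessing.
    if verified and gaps == 0:
        return Stance.ASSERT
    if gaps <= 2:
        return Stance.QUALIFY
    return Stance.DEFER
```

A downstream consumer never interprets a number; it dispatches on the stance.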
OUTCOMES
What This Architecture Makes Possible
AUDITABLE WORKFLOWS
Every decision is traceable to the state, the agent, and the verification gate that cleared it.
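An audit record with exactly those three fields might look like the following. The event schema is a hypothetical sketch, not the production log format:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class AuditEvent:
    state: str     # workflow state when the decision was made
    agent: str     # which agent acted
    gate: str      # which verification gate cleared it
    decision: str
    ts: float

log: list[AuditEvent] = []

def record(state: str, agent: str, gate: str, decision: str) -> str:
    """Append an event and return it as JSON, so every run is replayable."""
    evt = AuditEvent(state, agent, gate, decision, time.time())
    log.append(evt)
    return json.dumps(asdict(evt))
```

Because every decision passes through `record`, the trail reconstructs itself: no decision exists without a state, an agent, and a gate attached.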
GRACEFUL FAILURE
Errors surface as named patterns, not silent failures. The system knows what kind of trouble it's in.
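"Named patterns" can be as simple as a failure taxonomy that every error is classified into. The pattern names below are invented for illustration:

```python
class NamedFailure(Exception):
    """Base class: every failure carries a pattern name, never just a trace."""
    pattern = "unclassified"

class MissingClause(NamedFailure):
    pattern = "missing_clause"

class UnattributedObligation(NamedFailure):
    pattern = "unattributed_obligation"

def classify(exc: Exception) -> str:
    # The system reports what kind of trouble it is in;
    # anything outside the taxonomy is surfaced as unclassified, not swallowed.
    return exc.pattern if isinstance(exc, NamedFailure) else "unclassified"
```

A named pattern is actionable: it can trigger a specific remedy, while a silent failure can only be discovered later, usually at the worst possible time.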
COMPOUND LEARNING
A dedicated Memory agent builds procedural knowledge across runs. The system improves with use.
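In skeletal form, such an agent is a persistent map from failure patterns to remedies that survives between runs. The storage path and schema here are assumptions for illustration only:

```python
import json
from pathlib import Path

class MemoryAgent:
    """Minimal sketch: procedural knowledge that accumulates across runs."""

    def __init__(self, path: str = "chorus_memory.json"):
        self.path = Path(path)
        # Later runs start from whatever earlier runs learned.
        self.notes = (json.loads(self.path.read_text())
                      if self.path.exists() else {})

    def learn(self, pattern: str, remedy: str) -> None:
        self.notes[pattern] = remedy

    def recall(self, pattern: str):
        return self.notes.get(pattern)

    def save(self) -> None:
        self.path.write_text(json.dumps(self.notes))
```

Each run that names a failure and records a remedy makes the next run start further ahead, which is what "improves with use" means operationally.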
CALIBRATED TRUST
Users understand what the system can and cannot reliably do — because the system tells them, in plain language.