Audicta
Audit-grade decision provenance for regulated AI.
Audicta produces immutable, content-addressed records of how an AI system reached its decisions — at decision time. Months later, when an audit asks "what was the model thinking?", the record reproduces exactly. Independently. Verifiably.
Read the architecture

What an audit-grade decision record proves
01 Specialist reasoning, captured as it happened
Multiple specialist agents collaborate on each decision, each citing the published criteria they're reasoning against. The reasoning isn't logged after the fact — it is the substrate of the decision.
02 An immutable record, written before the score
Every decision is a versioned JSON document with a SHA-256 content hash. Tamper with a single byte and the hash breaks. The record self-attests to its own integrity.
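A minimal sketch of the content-hashing idea, assuming a canonical JSON serialization; the field names are illustrative, not Audicta's actual schema:

```python
import hashlib
import json

def content_hash(record: dict) -> str:
    # Canonical serialization: sorted keys, fixed separators, so the
    # same record always produces the same bytes (and the same hash).
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"version": "1.0", "decision": "approve", "reasoning": "meets criterion 4.2"}
h1 = content_hash(record)

# Change a single character anywhere in the record...
tampered = dict(record, reasoning="meets criterion 4.3")
h2 = content_hash(tampered)

assert h1 != h2  # ...and the hash no longer matches
```

The canonicalization step matters: without a fixed key order and separator style, two byte-different serializations of the same logical record would hash differently.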
03 An evaluator that cannot see the case
A type-checked architectural property, not a policy claim: the function that scores reasoning quality literally cannot accept the case data, the patient record, the criteria rules, or the agents' code. It only sees the decision record.
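One way to picture this isolation property, sketched in Python with hypothetical names: the evaluator's signature accepts only a frozen record type, so case data, patient records, criteria rules, and agent code simply are not representable as inputs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    # The only data the evaluator can ever see.
    record_hash: str
    agent_version_hash: str
    reasoning: str

def score_record(record: DecisionRecord) -> float:
    # The function body can read only fields of the record; there is no
    # parameter through which the underlying case could arrive.
    # Placeholder scoring logic for illustration only.
    return min(1.0, len(record.reasoning) / 100)
```

A type checker enforces this statically: a call site that tries to pass a patient record or the agent's source fails before the code ever runs, which is what makes it an architectural property rather than a policy claim.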
04 Two independent LLMs, convergent scores
A local open-weight model and a cloud frontier model each score the record. Their convergence — small divergence across evaluation dimensions — is the audit-defensibility signal. Two independently-trained models reading the same record and reaching the same scores means the record itself is unambiguous.
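A hedged sketch of the convergence check, with illustrative dimension names and an assumed divergence threshold:

```python
# Per-dimension scores from two independently trained models reading
# the same decision record. Values and dimensions are illustrative.
local_scores = {"traceability": 0.92, "criteria_citation": 0.88, "completeness": 0.90}
cloud_scores = {"traceability": 0.90, "criteria_citation": 0.89, "completeness": 0.91}

# Divergence per dimension, and convergence under an assumed threshold.
divergence = {d: abs(local_scores[d] - cloud_scores[d]) for d in local_scores}
converged = max(divergence.values()) <= 0.05

assert converged  # low divergence everywhere: the record reads unambiguously
```

If any dimension diverged sharply, that would signal the record itself is ambiguous on that dimension, which is exactly the failure an audit would probe.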
05 Reproducible months later
Run a case, record the score. Modify the agent weeks later. Replay the original record — the score reproduces exactly, because the record captured the agent as it was, not as it is.
What "audit-grade" actually means
The wrong question for a regulated-industry audience is "is this AI clinically, financially, or legally accurate?" That is not the question Audicta answers.
Audicta's claim is narrower and harder: regardless of the underlying agent's accuracy, the decision record is contemporaneous, immutable, and independently audit-grade.
A regulator asking three years from now "what was the AI thinking when it made this decision?" can read the record, verify its hash, replay it through an isolated evaluator, and reach the same scores the original audit did.
Whatever decision was made — made defensible.
That property is what regulated industries need before they can deploy AI where decisions can be challenged.
Reproducible across time
The reproduction harness is the system's core property.
Generate a record today. Tomorrow, modify the agent's prompt or skill file. Run the original record through the evaluator: same score. Run a new case through the modified agent: a different agent_version_hash in the new record, different reasoning content — but the original record is untouched, and still reproduces.
- The record captures the agent as it was at decision time.
- The evaluator scores the record, not the agent.
- Versions of the agent can change; the record does not.
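The replay property above can be sketched as follows, under assumed names: because the evaluator is a pure function of the record, re-scoring the stored record yields the original score even after the agent changes.

```python
# Deterministic placeholder evaluator: a pure function of the record.
def score(record: dict) -> float:
    return round(len(record["reasoning"]) / 100, 2)

original_record = {
    "agent_version_hash": "abc123",  # hypothetical hash of the agent at decision time
    "reasoning": "cites criterion 7(b) of the published policy",
}
score_at_decision_time = score(original_record)

# Weeks later the agent's prompt changes; new cases produce records with a
# new agent_version_hash. The original record is untouched, so its replay
# reproduces the original score exactly.
score_on_replay = score(original_record)
assert score_on_replay == score_at_decision_time
```

The determinism requirement is the key design choice: any nondeterminism in the evaluator (sampling temperature, wall-clock inputs) would break exact reproduction.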
Three years from now, when an auditor asks "what was the decision basis on April 25, 2026?" the record from that day produces the same answer it did then.
See the schema and isolation proof