AI systems are already integrated into KYC refresh cycles, sanctions screening, document verification, and NAV reconciliation at scale. The regulatory conversation has focused on whether to allow them; the operational conversation has focused on how to audit them. This report takes the operational conversation seriously. It surveys six production deployments at regulated financial institutions, extracts a common attestation pattern, and proposes a framework in which every agentic action is accompanied by a cryptographically signed claim that names the model, the inputs, the policy mandate it executed under, and the human who authorised the mandate. We argue that this framing preserves human accountability while capturing most of the efficiency gains of agentic automation.
1. Across the six deployments surveyed, the dominant audit failure is not a wrong model decision; it is the inability to reconstruct, months later, which inputs the model saw and under which policy mandate it acted.
2. A signed attestation tuple of (model-id, input-hash, mandate-id, timestamp, operator-signer) resolves the reconstruction problem without requiring deterministic reproducibility of the model itself.
3. ERC-8004 identity, when combined with the AP2 mandate pattern, is sufficient to express the accountability chain; no new standard is strictly required for production deployments.
4. The binding constraint in practice is not cryptography but governance: who is authorised to issue which mandates, and what expiry and scope rules apply. Institutions that treat this as an access-control problem rather than a cryptographic one achieve both stronger audit trails and better operational velocity.
1. The audit problem
Every regulated financial institution we interviewed for this report runs AI systems in production. Every one of them could tell us, in abstract terms, what those systems are doing. Only two could tell us, for a specific action six months in the past, what inputs the model had seen at the moment it made the decision, which policy mandate it was operating under, and which human had authorised that mandate.
This is not a failure of sophistication. It is a failure of operational plumbing. The systems produce logs; the logs are stored; the logs are, in principle, inspectable. But the logs were not designed for third-party audit reconstruction, and they routinely lack the specific fields that an auditor needs — particularly the mandate-ID that links the action back to a signed human authorisation.
2. The attestation tuple
We propose that every agentic action be accompanied by an attestation: a signed tuple containing the model identifier, a hash of the inputs the model saw, the mandate-ID under which the action was taken, a timestamp, and the identity of the operator who signed the mandate.
The attestation does not claim that the model's decision was correct. It claims that a specific model, given specific inputs, produced a specific output under a specific authorisation. This is a weaker claim than reproducibility but a stronger claim than most production systems currently make, and it is sufficient for audit reconstruction.
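As a concrete sketch, the tuple can be assembled at action time from the raw inputs. The canonicalisation (sorted-key JSON) and hash function (SHA-256) below are illustrative assumptions; the report does not prescribe specific primitives:

```python
import hashlib
import json
import time

def attest(model_id: str, inputs: dict, mandate_id: str, operator_signer: str) -> dict:
    """Build the attestation tuple for one agentic action.

    Inputs are hashed over a canonical encoding (sorted-key, compact JSON)
    so the same inputs always yield the same digest regardless of key order.
    """
    canonical = json.dumps(inputs, sort_keys=True, separators=(",", ":")).encode()
    return {
        "model_id": model_id,
        "input_hash": hashlib.sha256(canonical).hexdigest(),
        "mandate_id": mandate_id,
        "timestamp": int(time.time()),
        "operator_signer": operator_signer,
    }
```

Note that the tuple carries only a hash of the inputs: the auditor who holds the original input record can verify it matches, but the attestation itself discloses nothing about the data.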
Crucially, the attestation is produced by the agent at the moment of action, signed with its ERC-8004 identity, and anchored in the tenant's audit chain. It is not reconstructed from logs after the fact. This inversion is the central design move of the framework.
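Signing at the moment of action can be sketched as follows. HMAC-SHA256 stands in here for the agent's ERC-8004 key-pair signature (in practice an asymmetric scheme such as secp256k1); the structural point is that the signature covers the entire tuple and is produced before anything is logged, not reconstructed from logs afterwards:

```python
import hashlib
import hmac
import json

def _payload(attestation: dict) -> bytes:
    # Canonical encoding so signer and verifier serialise identically.
    return json.dumps(attestation, sort_keys=True, separators=(",", ":")).encode()

def sign_attestation(attestation: dict, agent_key: bytes) -> str:
    """Produce the agent's signature over the whole attestation tuple."""
    return hmac.new(agent_key, _payload(attestation), hashlib.sha256).hexdigest()

def verify_attestation(attestation: dict, signature: str, agent_key: bytes) -> bool:
    """An auditor recomputes the signature; any tampered field fails."""
    expected = hmac.new(agent_key, _payload(attestation), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Because the mandate-ID is inside the signed payload, an attestation cannot later be re-bound to a different authorisation without invalidating the signature.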
3. Governance is the binding constraint
The technical mechanics of attestation are straightforward. Every institution we studied had the cryptographic primitives in place within weeks of deciding to deploy them. What took months — in some cases years — was the governance work: deciding who could issue which mandates, how mandates expired and renewed, how mandate scope was expressed, and how mandates interacted with existing delegation-of-authority policies.
This is an access-control problem, not a cryptographic one. The institutions that framed it as an access-control problem — using existing identity and access management patterns — achieved production readiness in months. The institutions that framed it as a novel cryptographic problem took years.
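The access-control framing can be made concrete with a small sketch: mandate issuance is gated by an IAM-style role table, and the agent checks scope and expiry at action time. The role names, action names, and `Mandate` shape are hypothetical:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Mandate:
    mandate_id: str
    issuer: str          # the human operator who signed the mandate
    scope: frozenset     # the actions this mandate permits
    expires_at: float    # unix expiry time

# Hypothetical role table: what each operator role may delegate to an agent.
ISSUER_ROLES = {
    "ops-lead": frozenset({"kyc.refresh", "sanctions.screen"}),
    "analyst": frozenset({"kyc.refresh"}),
}

def can_issue(issuer_role: str, scope: frozenset) -> bool:
    # An operator may only delegate actions their own role already permits:
    # an ordinary IAM subset check, not a cryptographic operation.
    return scope <= ISSUER_ROLES.get(issuer_role, frozenset())

def authorises(mandate: Mandate, action: str, now: Optional[float] = None) -> bool:
    # At action time the agent checks scope and expiry, much like a token check.
    now = time.time() if now is None else now
    return action in mandate.scope and now < mandate.expires_at
```

Everything here maps onto existing identity-and-access-management machinery, which is the point: the institutions that reused these patterns shipped in months.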
4. Implications for supervisors
Supervisors currently ask regulated institutions whether they can explain their AI systems. The more tractable question is whether they can reconstruct what their AI systems did. The attestation framework makes the second question answerable by construction.
We recommend that supervisory guidance move in this direction. Requiring mandate-signed, identity-bound attestations on every material agentic action is a proportionate ask that most institutions can meet, and it yields an audit trail that is meaningfully more useful than the current practice of retrospective log inspection.
This report will be available as a signed PDF. The Association publishes PDFs of all research on release; each carries a cryptographic signature anchored to a Swiss qualified electronic-signature provider to ensure provenance.
Boli Association. (2026). Cryptographic attestation frameworks for AI-assisted compliance. Boli Association Research Report No. BA-2026-02. Zurich.