How it works

Governed actions become checkable events

Atested sits in front of governed AI operations. Every action is evaluated against the same logic a well-informed engineer would apply, and both the evaluation and the decision are signed so they can be proven later.

The governance flow

The core flow is simple. An AI tool routes an action through Atested. Atested evaluates it against verifiable conditions — the scope, evidence, and constraints that any well-informed engineer would require. If the conditions are met, the action proceeds and the decision is recorded. If not, the action is denied and that denial is recorded too.

1. Action request arrives

A tool submits a governed action through the MCP surface with whatever proof, context, or constraints the governance logic requires.

2. Conditions are evaluated

Atested resolves the decision deterministically where it can — is the evidence present, is the scope valid, are the preconditions met. Where the evidence, scope, or constraints can't determine the answer logically, Atested marks the decision as requiring judgment. The system doesn't guess — it escalates honestly.

3. ALLOW or DENY is issued

ALLOW means the evidence was sufficient, the scope was valid, and the constraints were satisfied. DENY means something verifiable was missing.

4. Signed record is written

Every decision is recorded into a signed immutable chain so it can be checked later without relying on runtime assertions alone.
decision-chain.jsonl — ALLOW
{
  "tool": "fs_write",
  "capability_class": "FS_WRITE",
  "policy_decision": "ALLOW",
  "timestamp_utc": "2026-03-30T13:12:00Z",
  "operator_intent": "update README",
  "organization_id": "acme-engineering",
  "record_hash": "sha256:0a1b2c3d4e5f...",
  "prev_record_hash": "sha256:f0e1d2c3b4a5...",
  "signature": "ed25519:8f9c2ab1..."
}
The record itself is part of the product surface. The point is not only to decide correctly in the moment, but to preserve checkable evidence of what happened.
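
Because each record carries its own hash and the hash of its predecessor, the chain can be re-checked offline. A minimal sketch of that structural check in Python, assuming sorted-key JSON as the canonical form with the `record_hash` and `signature` fields excluded from hashing (the real canonicalization scheme is not specified here):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Hash every field except the hash and signature themselves,
    # using sorted-key JSON as an assumed canonical form.
    body = {k: v for k, v in record.items() if k not in ("record_hash", "signature")}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    prev = None
    for rec in records:
        # Each record must hash to its own claimed record_hash ...
        if rec.get("record_hash") != record_hash(rec):
            return False
        # ... and must point at the hash of the record before it.
        if prev is not None and rec.get("prev_record_hash") != prev["record_hash"]:
            return False
        prev = rec
    return True
```

Signature verification over each record hash (the sample records carry ed25519 signatures) would layer on top of this structural check.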

What ALLOW and DENY actually mean

Atested does not just log actions after the fact. It evaluates governed actions before they proceed. ALLOW means the action met the verifiable conditions — the evidence was sufficient, the scope was valid, the constraints were satisfied. DENY means something was missing, and the record shows exactly what.

ALLOW — deterministic

Conditions met, action proceeds

A coding agent writes inside an approved project path using an allowed tool and the required execution context. The path resolves to a real location, the agent is authenticated, the scope is valid. Every condition is verifiable — no opinion required.

DENY — deterministic

Conditions not met, action stopped

An agent tries to write outside the approved workspace, delete a production config, or modify the CI pipeline without authority. Something verifiable is missing — the record shows exactly what.

ALLOW — judgment

Approval granted by an engineer

A battle-tested deployment script can't produce the evidence Atested requires — it's opaque to evaluation. An engineer grants a scoped approval pinned to the exact content hash. The script runs. The engineer's judgment is on the record alongside the deterministic decisions.

DENY — needs judgment

No approval in place, action blocked

The same opaque script runs without an approval. Atested can't evaluate it deterministically and no engineer has vouched for it. The action is blocked, and the record shows it was flagged as requiring judgment — not that it failed a condition.
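
The four outcomes above reduce to one tri-state evaluation: deterministic ALLOW, deterministic DENY, or honest escalation to judgment, with judgment resolved only by a scoped approval pinned to a content hash. A hypothetical sketch of that shape (the field names `authenticated`, `path`, `opaque`, and `content_hash` are illustrative, not Atested's actual schema):

```python
import os

def evaluate(action: dict, workspace: str, approvals: set[str]) -> str:
    """Return ALLOW, DENY, or NEEDS_JUDGMENT for a governed action."""
    if not action.get("authenticated"):
        return "DENY"  # a verifiable condition failed
    path = action.get("path")
    if path is not None:
        # Deterministic scope check: the resolved path must sit inside the workspace.
        root = os.path.realpath(workspace)
        if not os.path.realpath(path).startswith(root + os.sep):
            return "DENY"
    if action.get("opaque"):
        # Evidence can't decide this one; only an approval pinned to the
        # exact content hash can stand in for the missing evidence.
        if action.get("content_hash") in approvals:
            return "ALLOW"
        return "NEEDS_JUDGMENT"  # escalate honestly rather than guess
    return "ALLOW"
```

The key design point mirrors the scenarios above: an opaque artifact is never guessed at, it is either vouched for or flagged as requiring judgment.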

Governance transparency

Atested governs every action routed through it. AI tools also have native capabilities outside governance — that's a structural reality. Atested makes the boundary visible: governed operations get full signed records, ungoverned operations are counted through observation hooks. You see exactly how much of your AI activity is actually under governance.

transparency-summary.json — 72% governed
{
  "governed_operations": 1842,
  "observed_native_operations": 716,
  "transparency_ratio": "72%",
  "observation_mode": "hook-reported",
  "manager_view": "governed vs observed"
}
The goal is not pretending you start at 100 percent. The goal is knowing where you are, seeing what remains outside governance, and improving coverage deliberately.
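
The ratio itself is plain arithmetic over the two counters. A sketch (the function name is assumed, not part of Atested's API):

```python
def transparency_ratio(governed: int, observed_native: int) -> str:
    """Share of total observed AI activity that went through governance."""
    total = governed + observed_native
    return f"{round(100 * governed / total)}%" if total else "n/a"

transparency_ratio(1842, 716)  # the sample summary's counts -> "72%"
```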

Why this works

Deterministic governance doesn't depend on someone's opinion

Most governance systems rely on people deciding whether each action was appropriate. That doesn't scale, and it's inconsistent. Atested replaces opinion with verification: either the evidence is present or it isn't, either the scope is valid or it isn't. The result is governance you can check independently — not governance you have to trust someone's judgment about.

A real example: the deployment script

Your team has a deployment script that's been running for years. Battle-tested, reliable — but opaque to Atested's evaluation. It keeps getting DENY because it can't produce the evidence Atested requires.

You know this code is safe. You wrote it, you maintain it, it runs every day. So you grant a scoped approval: pinned to the exact content hash, the deployment context, and the current governance version. The script runs. If someone changes one line, the hash changes, and the approval expires automatically. Your judgment is on the record — signed alongside the deterministic decisions.

Approvals are explicit and revocable

Approvals aren't blanket exceptions. They're pinned to a specific artifact at a specific hash. Change the artifact and the approval is gone. Grant or revoke — both are recorded in the governance chain. The system is honest about what it can prove and what required your judgment.
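
Hash pinning is what makes approvals self-expiring. A sketch of the mechanism, assuming SHA-256 over the artifact's bytes (the class and method names are illustrative):

```python
import hashlib

def artifact_hash(content: bytes) -> str:
    return "sha256:" + hashlib.sha256(content).hexdigest()

class ApprovalStore:
    """Approvals pinned to exact content hashes; any edit invalidates them."""

    def __init__(self) -> None:
        self._granted: set[str] = set()

    def grant(self, content: bytes) -> str:
        h = artifact_hash(content)
        self._granted.add(h)  # in the real system the grant is also signed and chained
        return h

    def revoke(self, pinned_hash: str) -> None:
        self._granted.discard(pinned_hash)

    def is_approved(self, content: bytes) -> bool:
        # Recomputed from the current bytes: change one line and the lookup misses.
        return artifact_hash(content) in self._granted
```

Grant against the script as it exists today; the moment its bytes change, `is_approved` recomputes a different hash and the approval no longer applies.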

Your dashboard

What you see

The Atested Dashboard gives your team a live view of governance activity, decisions, approvals, audit results, and system health — all backed by the same signed chain the governance engine produces.

Atested Dashboard — Governance overview (demo — sample data)

Tabs: Overview · Activity · Approvals · Audit · Reports · Health

This dashboard shows governance activity for your organization. Every governed action produces a signed record in the decision chain. The metrics below reflect the current state of the governance surface.

Actions Denied Before Execution: 11 (4 today)
Chain Events: 47
Chain Integrity: Healthy
Active Approvals: 2
Unique Users: 3
ALLOW / DENY: 36 / 11
Transparency: 72% (47 governed / 18 ungoverned observed)
Users
bearer:e1f2a3b4 21 actions
bearer:c8d9e0f1 16 actions
bearer:a2b3c4d5 10 actions
Active Approvals
Artifact | Hash | Granted | Status
deploy-script.sh | sha256:4f2a8b1c… | Mar 28, 2:14 PM | Active
ci-runner.py | sha256:8b1ce3d7… | Mar 29, 9:45 AM | Active
Recent Activity
Time | Event Type | Tool | User | Decision | Intent
Mar 30, 1:42 PM | Governed Action | fs_write | bearer:e1f2a3b4 | ALLOW | update deployment config
Mar 30, 1:38 PM | Governed Action | fs_write | bearer:c8d9e0f1 | DENY | edit /etc/hosts
Mar 30, 1:35 PM | Governed Action | fs_read | bearer:e1f2a3b4 | ALLOW | read project README
Mar 30, 1:31 PM | Governed Action | msg_send | bearer:a2b3c4d5 | ALLOW | notify team channel
Mar 30, 1:28 PM | Governed Action | fs_delete | bearer:c8d9e0f1 | DENY | remove .env.production
Mar 30, 1:24 PM | Governed Action | fs_write | bearer:e1f2a3b4 | ALLOW | update test fixtures
Mar 30, 1:19 PM | Governed Action | capabilities_execute | bearer:a2b3c4d5 | ALLOW | run lint check
Mar 30, 1:15 PM | Governed Action | fs_write | bearer:c8d9e0f1 | DENY | write outside workspace
Mar 30, 1:11 PM | Governed Action | fs_read | bearer:a2b3c4d5 | ALLOW | inspect build output
Mar 30, 1:08 PM | Governed Action | fs_write | bearer:e1f2a3b4 | DENY | modify CI pipeline
Mar 30, 1:04 PM | Governed Action | governance_status | bearer:e1f2a3b4 | ALLOW | check governance status
Mar 30, 12:58 PM | Governed Action | fs_list | bearer:c8d9e0f1 | ALLOW | list project directory
System Health
Overall status: Healthy — 0 alerts
Chain records: 47 — verified, no breaks
DENY rate: 23.4% (11 of 47) — within normal range
Storage: chain 12.4 KB · stability log 1.8 KB · records 84.2 KB

FAQ

Common questions

Does my team need to define governance rules?

No. Atested's governance logic reflects the conditions any well-informed engineer would apply. You configure scope and constraints for your environment, but the evaluation logic is built in. Most decisions are resolved deterministically without any rules to write.

What does "well-informed engineer" actually mean?

If an AI agent requests to write a file, a well-informed engineer would check: is the target path within the project workspace? Is the agent authenticated? Has the action been requested through a governed channel? These aren't matters of opinion — they're verifiable conditions with definitive answers. Atested checks them the same way, every time.

These are the same checks any careful engineer already performs manually. Atested just performs them consistently, at scale, and signs the result.

Configure governance through the dashboard

The dashboard Configuration tab lets you view and edit the capability registry — the file that defines what every governed tool is allowed to do. No more manual JSON editing.

View mode is available to everyone. You can see every governed tool's allowed directories, constraint flags, and hard caps — plus the registry integrity hash that proves the configuration hasn't been tampered with.

Edit mode requires your license key. Add or remove directories, toggle constraints, and adjust caps. Every change goes through the governed reload process — schema-validated, hash-recorded, and integrity-protected.

Trial users can view the full configuration and make limited changes (up to 3 directories, basic toggles). Full editing requires an active license.

Configuration — View Mode
Registry Integrity
  Hash:     sha256:a4e122f7...
  Status:   OK
  Tools:    14

FS_WRITE  HIGH
  Directories:  project/, gov_runtime/
  Deny hidden:  yes
  Deny overwrite: yes
  Deny executable: yes

FS_READ  MEDIUM
  Directories:  project/, gov_runtime/
  Max bytes:    65536
  Deny hidden:  yes

FS_DELETE  HIGH
  Directories:  project/, gov_runtime/
  Recursive:    no

  [Unlock Editing]  requires license key
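
A registry entry like the FS_WRITE block above can be enforced with a few deterministic checks. A hypothetical sketch, with field names mirroring the view-mode labels and an `exists` flag passed in to keep the check pure (none of this is Atested's actual schema):

```python
import posixpath

REGISTRY = {
    "FS_WRITE": {
        "directories": ["project/", "gov_runtime/"],
        "deny_hidden": True,
        "deny_overwrite": True,
    },
}

def check_fs_write(path: str, exists: bool, registry: dict = REGISTRY) -> str:
    caps = registry["FS_WRITE"]
    norm = posixpath.normpath(path)
    roots = [d.rstrip("/") for d in caps["directories"]]
    # Scope: the normalized path must sit under an allowed directory.
    if not any(norm == r or norm.startswith(r + "/") for r in roots):
        return "DENY"
    # Constraint flag: hidden files and directories are denied.
    if caps["deny_hidden"] and any(part.startswith(".") for part in norm.split("/")):
        return "DENY"
    # Constraint flag: existing files may not be overwritten.
    if caps["deny_overwrite"] and exists:
        return "DENY"
    return "ALLOW"
```

Note that `normpath` collapses `..`, so a path like `project/../etc/passwd` normalizes out of scope and is denied by the directory check.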

Prevention is the primary value

An AI agent overwrites a production config. Deletes the wrong file. Sends a message to the wrong channel. Everyone asks the same question: "why wasn't this prevented?" That's the real pain: not reconstructing what happened after the fact, but the avoidable errors AI introduces into live operations.

Atested is the guardrail. It stops many bad actions before they land — wrong path, missing authority, insufficient evidence. The audit trail is a side benefit. Prevent first. Prove second.