Implementation

Deploy Atested as a governed layer your team connects through

Atested is self-hosted. You deploy one server, connect compatible AI tools to it, and begin routing governed actions through the MCP surface. The system produces governance records, transparency metrics, and attestation artifacts inside your own environment.

How deployment works

Deploy Atested on infrastructure you control, then connect your team's AI tools to the governed surface. Governed actions flow through Atested, where each one is evaluated against verifiable conditions, captured as a signed record, and available for later verification.

The model is operationally simple: one governed server, multiple client tools, one audit trail.

What your team does next

Point compatible tools at the Atested MCP server, configure project instructions so governed tools are preferred on sensitive work, and connect observation hooks so the transparency metric reflects both governed and observed activity.

Compatibility

What works with Atested today

MCP clients (Works now)

Claude Code, Cursor, Cline, Windsurf, Codex — any tool that supports custom MCP servers. Connect to Atested's MCP server and all governed tools are immediately available.

LLM providers (Works now)

Any model behind those clients — Claude, GPT, DeepSeek, Kimi, local models. The governance layer sits between the client and the tools, regardless of which model powers the client.

OpenClaw (Beta)

Supported via the Atested skill on ClawHub: OpenClaw agents route actions through governance evaluation before executing them.

ChatGPT (Not supported)

ChatGPT does not support custom MCP servers, so it cannot route actions through Atested.

Claude.ai chat (Not supported)

Claude.ai's custom MCP connector path has known platform issues. It is not a reliable Atested target today.

Other tools

Any tool without MCP support cannot route governed actions through Atested. As more tools adopt MCP, compatibility will expand.

OpenClaw + Atested

The agent does things. The governance layer makes sure what it does is evaluated, recorded, and provable.

OpenClaw is an autonomous agent framework used by 250,000+ developers. Its own maintainers acknowledge the security risks of autonomous agents — that's exactly what governance infrastructure addresses.

The Atested skill for OpenClaw evaluates actions against policy before execution, produces signed audit records, and makes every decision provable. Install it from ClawHub and your OpenClaw agents gain governance without changing how they work.

openclaw-governed-action.jsonl (GOVERNED)
{
  "tool": "fs_write",
  "source": "openclaw-skill",
  "policy_decision": "ALLOW",
  "operator_intent": "write test results",
  "record_hash": "sha256:a1b2c3d4e5f6...",
  "signature": "ed25519:7e8f9a0b..."
}
Same governance. Same signed chain. Same proof artifacts. Whether the action comes from Claude Code, Cursor, or OpenClaw — Atested evaluates it the same way.
Best practices

How to increase governance coverage

Prefer governed tools in project instructions

Configure CLAUDE.md or the equivalent project instruction surface so agents prefer governed tools for sensitive operations.
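As an illustration, a project instruction file might contain a directive like the following. The exact wording and tool names are hypothetical; adapt them to the governed tool names your deployment exposes.

```markdown
<!-- CLAUDE.md (illustrative excerpt; tool names are placeholders) -->
## Tool preferences
- For file writes, commits, and deployment-adjacent changes, use the
  governed Atested MCP tools instead of native file operations.
- If a governed tool is unavailable for a sensitive operation, stop
  and ask the operator before proceeding.
```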

Connect observation hooks

Observation hooks let Atested count native activity that remains outside governance, so the transparency metric reflects reality instead of only governed flow.
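A hook configuration might look like the following sketch. The file shape, field names, and endpoint path are assumptions for illustration, not Atested's documented schema; consult your deployment for the actual hook contract.

```json
{
  "hooks": {
    "post_tool_use": {
      "command": "python3 hooks/report_native_action.py",
      "report_to": "http://your-server:8000/observe"
    }
  }
}
```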

Route sensitive operations through governance

Establish with your team that commits, production-adjacent edits, deployment changes, and other sensitive actions should go through governed paths.

Monitor the transparency metric

Treat transparency as an operating signal. It shows how much activity is governed versus merely observed.

Expand governed surface deliberately

As new tool integrations become available, bring them under governance instead of assuming coverage will expand automatically.

Be explicit about the boundary

Coverage improves when the team understands which actions flow through governance and which still occur natively in the client tool.

Configuration

Connect any MCP-compatible tool in minutes

Point your AI tool at the Atested MCP server. Here's the actual configuration for Claude Code — other MCP clients use the same pattern with their own config format.

You define where your agents can operate (allowed directories), what limits apply (read sizes, listing caps, delete restrictions), and how authentication works (bearer tokens or OIDC). The evaluation logic is not configurable — it applies the same checks a well-informed engineer would apply, consistently and at scale.

Changes to the configuration are recorded as governance events with the same integrity as every other decision — old configuration hash, new configuration hash, who changed it, when.
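A configuration-change record might look like this sketch. The field names are illustrative, derived from the description above (old hash, new hash, who, when), not the product's exact record format.

```json
{
  "event": "CONFIG_CHANGE",
  "old_config_hash": "sha256:...",
  "new_config_hash": "sha256:...",
  "changed_by": "operator@example.com",
  "timestamp": "2025-01-15T10:30:00Z",
  "signature": "ed25519:..."
}
```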

License key security: Your Atested license key also serves as your admin credential for dashboard configuration. Store it securely and do not share it with team members who should not have configuration access.

claude_desktop_config.json (MCP Config)
{
  "mcpServers": {
    "atested": {
      "command": "python3",
      "args": ["mcp/server.py"],
      "cwd": "/path/to/governance-layer",
      "env": {
        "GOV_RUNTIME_DIR": "/path/to/gov_runtime",
        "GOV_CANONICAL_REPO_PATH": "/path/to/governance-layer"
      }
    }
  }
}
For remote deployment, use the HTTP transport instead:
GOVMCP_HOST=0.0.0.0 GOVMCP_PORT=8000 python3 mcp/remote_server.py
Then point your MCP client at http://your-server:8000/mcp.
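For clients that accept a URL-based MCP server entry, the configuration might reduce to something like the sketch below. Key names vary by client, and this shape is an assumption; check your client's documentation for its remote-server syntax.

```json
{
  "mcpServers": {
    "atested": {
      "url": "http://your-server:8000/mcp"
    }
  }
}
```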
Capability registry

You set boundaries. The evaluation logic is built in.

Configuration is one JSON file and a few environment variables. You define where your agents can operate, what limits apply, and how authentication works.

The registry is integrity-protected: SHA-256 verified on every governed call, schema-validated at startup, tamper-detected between reloads. Unauthorized modification fails closed.

capability-registry.json (Protected)
{
  "tool": "FS_WRITE",
  "allow_base_dirs": [
    "__GOV_CANONICAL_REPO_PATH__",
    "__GOV_RUNTIME_PATH__",
    "/home/deploy/staging"
  ],
  "deny_hidden_paths": true,
  "deny_overwrite_by_default": true,
  "caps": {
    "request_executable_allowed": false
  }
}
55 governed tools across filesystem, messaging, capabilities, approvals, audit, and health — all controlled by this registry.
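The fail-closed check described above can be sketched in a few lines of Python. The function names and the idea of comparing a canonical digest are illustrative, not Atested's actual implementation.

```python
import hashlib
import json

def registry_digest(registry: dict) -> str:
    """Compute a SHA-256 digest over a canonical serialization of the registry."""
    canonical = json.dumps(registry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_registry(registry: dict, expected_digest: str) -> dict:
    """Return the registry only if its digest matches; otherwise fail closed."""
    if registry_digest(registry) != expected_digest:
        # Fail closed: refuse the governed call rather than proceed on
        # a registry that may have been tampered with.
        raise PermissionError("capability registry tampered; refusing call")
    return registry

registry = {"tool": "FS_WRITE", "deny_hidden_paths": True}
trusted = registry_digest(registry)

verify_registry(registry, trusted)     # digest matches: call proceeds
registry["deny_hidden_paths"] = False  # simulate unauthorized modification
# verify_registry(registry, trusted)   # would now raise PermissionError
```

Canonical serialization (sorted keys, fixed separators) matters here: two logically identical registries must hash identically, or legitimate reloads would fail.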
Governance boundary

Honest framing matters

Atested governs every action that flows through it. It cannot force all actions to flow through it. AI tools have native capabilities that operate outside governance. This is a structural reality in open tool environments, not a defect specific to Atested.

The transparency metric makes that boundary visible and measurable so organizations can improve governance coverage with facts instead of assumptions.

transparency-summary.json (Measured boundary)
{
  "governed_operations": 1842,
  "observed_native_operations": 716,
  "transparency_ratio": "72%",
  "observation_mode": "hook-reported",
  "operator_goal": "increase governed coverage"
}
The point is not pretending the boundary does not exist. The point is making it visible and improving it deliberately.
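The ratio in the summary above is straightforward to reproduce. This sketch assumes only the two operation counts from the example record, not any real Atested API.

```python
governed = 1842
observed_native = 716

# Transparency: share of all visible activity that flowed through governance.
total = governed + observed_native
ratio = governed / total
print(f"{ratio:.0%}")  # 72%
```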
Enterprise structural enforcement

For high-security environments, stronger control surfaces are available

For environments where evidentiary enforcement is not sufficient on its own, Atested can be deployed with stronger structural controls such as credential-gated resource access and exclusive capability surfaces.

That is a custom deployment architecture, not the default product path.