I run about 20 AI agents. They delegate work to each other, deploy code, scan for vulnerabilities, and handle compliance checks. Over time, I kept hitting the same gaps — things that made autonomous workflows fragile in ways that took hours to debug.
Last week I published a 7-layer model for agent infrastructure. These six protocols fill the gaps I found at each layer. They're what I wired into my own agents to stop the same failures from repeating.
All six have Python reference implementations under CC BY 4.0. Each has a spec any agent can read.
1. Trust Score — Should I Delegate to This Agent?
When one of my agents delegates work to another, it needs to know if the target is reliable. Not "does it respond" — does it actually complete tasks correctly and consistently.
The score is weighted across success rate, pitfall history, skill quality, and uptime.
from workswithagents import TrustScoreClient
ts = TrustScoreClient()
if ts.get("target-agent")["tier"] == "trusted":
    delegate(task, to="target-agent")
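To make the weighting concrete, here is a minimal sketch of how such a score could be combined and mapped to tiers. The weights, tier names, and cutoffs are illustrative assumptions, not the protocol's published values.

```python
# Hypothetical weights and tier cutoffs -- assumptions for illustration,
# not the Trust Score spec's actual numbers.
WEIGHTS = {
    "success_rate": 0.40,
    "pitfall_history": 0.20,  # 1.0 = clean record
    "skill_quality": 0.25,
    "uptime": 0.15,
}

def trust_score(metrics: dict) -> float:
    """Combine normalized 0-1 metrics into one weighted score."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def tier(score: float) -> str:
    """Map a score onto delegation tiers (thresholds assumed)."""
    if score >= 0.8:
        return "trusted"
    if score >= 0.5:
        return "probationary"
    return "untrusted"

score = trust_score({
    "success_rate": 0.95,
    "pitfall_history": 0.90,
    "skill_quality": 0.85,
    "uptime": 0.99,
})
# → score ≈ 0.921, tier(score) == "trusted"
```

The point of a single scalar with named tiers is that the delegating agent only needs one comparison at the call site, as in the snippet above.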
2. Deployment Manifest — Declare a Fleet, Deploy With One Command
I got tired of manually tracking which agents run where, how many instances, and what capabilities they have. One YAML file, one command.
fleet:
  name: "my-fleet"
  agents:
    - id: "builder"
      capabilities:
        - action: "build"
          target: "spfx"
      count: 3
wwa fleet deploy fleet.yaml
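Conceptually, the deploy step expands the manifest into one deployment entry per instance. A sketch of that expansion, assuming the YAML has already been parsed into a dict (as `yaml.safe_load` would produce); the helper and naming scheme are hypothetical, not the `wwa` CLI internals:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    id: str
    capabilities: list
    count: int

def expand_fleet(manifest: dict) -> list:
    """Expand a fleet manifest into one deployment name per instance."""
    plan = []
    for raw in manifest["fleet"]["agents"]:
        spec = AgentSpec(raw["id"], raw.get("capabilities", []), raw.get("count", 1))
        for n in range(spec.count):
            plan.append(f'{manifest["fleet"]["name"]}/{spec.id}-{n}')
    return plan

# The YAML above, parsed into a dict:
manifest = {
    "fleet": {
        "name": "my-fleet",
        "agents": [
            {"id": "builder",
             "capabilities": [{"action": "build", "target": "spfx"}],
             "count": 3},
        ],
    }
}
plan = expand_fleet(manifest)
# → ['my-fleet/builder-0', 'my-fleet/builder-1', 'my-fleet/builder-2']
```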
3. SLA Framework — Track Whether Agents Meet Their Promises
Three tiers: Best-Effort (free), Production (99.5% uptime, 90% task accuracy), Regulated (99.9% uptime, 95% accuracy, 7-year audit retention).
Useful when you're running agents that handle customer data or regulated workflows and need to prove they stayed within bounds.
from workswithagents import SLAMetrics
sla = SLAMetrics("my-fleet", tier="production")
sla.report("agent-1", "task-42", duration_seconds=187, success=True)
status = sla.status() # {breaches: [], status: "ok"}
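The breach check itself is simple: compare observed metrics against the tier's floor. The thresholds below come from the tier definitions above; the helper is a hypothetical sketch, not the `SLAMetrics` internals.

```python
# Uptime and accuracy floors per tier, as stated in the text.
TIERS = {
    "best-effort": {"uptime": 0.0,   "accuracy": 0.0},
    "production":  {"uptime": 0.995, "accuracy": 0.90},
    "regulated":   {"uptime": 0.999, "accuracy": 0.95},
}

def breaches(tier: str, uptime: float, accuracy: float) -> list:
    """Return which SLA dimensions fell below the tier's floor."""
    floor = TIERS[tier]
    out = []
    if uptime < floor["uptime"]:
        out.append("uptime")
    if accuracy < floor["accuracy"]:
        out.append("accuracy")
    return out

breaches("production", uptime=0.997, accuracy=0.88)
# → ['accuracy']
```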
4. Identity Protocol — Verifiable Agent Identity
When an agent claims a task result, can you prove it was that agent? Ed25519 keypairs, signed messages, verification against the registry.
from workswithagents import AgentIdentity
ai = AgentIdentity("my-agent")
ai.register()
sig = ai.sign({"type": "heartbeat"})
# Verify another agent's message
valid = AgentIdentity.verify("other-agent", message, signature)
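Under the hood, the sign/verify round trip is standard Ed25519. A minimal sketch using the `cryptography` package; the canonicalization via sorted JSON is an assumption about what `ai.sign()` does, and in the real protocol the public key would come from the registry rather than being held locally:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Generate a keypair; register() would publish the public key to the registry.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign a canonicalized message so both sides hash identical bytes.
message = json.dumps({"type": "heartbeat"}, sort_keys=True).encode()
signature = private_key.sign(message)

# Verification raises InvalidSignature on a forged or tampered message.
try:
    public_key.verify(signature, message)
    valid = True
except InvalidSignature:
    valid = False
```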
5. Compliance-as-Code — Regulation as Executable Validation
NHS DTAC, FCA, GDS, GDPR — as rules agents can validate against at runtime. Not a checklist. Not documentation. Code that returns pass/fail.
from workswithagents import ComplianceEngine
ce = ComplianceEngine()
dtac = ce.load("dtac-v2.1")
if dtac.validate(action).passed:
    execute(action)
else:
    escalate_to_human()
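"Code that returns pass/fail" can be as simple as a set of named predicates over a proposed action. A sketch of such a rule engine; the two rules are illustrative stand-ins, not real DTAC clauses, and the classes are hypothetical rather than the `ComplianceEngine` API:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    passed: bool
    failures: list = field(default_factory=list)

@dataclass
class RuleSet:
    """A compliance standard expressed as executable pass/fail rules."""
    rules: dict  # rule name -> predicate over an action dict

    def validate(self, action: dict) -> Result:
        failures = [name for name, rule in self.rules.items() if not rule(action)]
        return Result(passed=not failures, failures=failures)

# Two illustrative rules, not actual DTAC requirements.
dtac_like = RuleSet(rules={
    "data-stays-in-region": lambda a: a.get("region") == "uk",
    "audit-log-enabled": lambda a: a.get("audit_log", False),
})

dtac_like.validate({"region": "uk", "audit_log": True})
# → Result(passed=True, failures=[])
```

Because validation runs at the moment of action, not at audit time, a failing rule can gate execution directly, as in the snippet above.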
6. Onboarding Protocol — Systematic Agent Creation
Interview → generate → calibrate → benchmark → register. Instead of writing a prompt file and hoping, run a pipeline that produces a scored agent.
from workswithagents import OnboardingClient
ob = OnboardingClient()
result = ob.full_onboard(
    "nhs-auditor",
    "Audit agent actions for NHS DTAC compliance",
    capabilities=["audit:compliance"],
    skills=["compliance-as-code"],
)
# → {agent_id: "nhs-auditor", trust_score_seed: 0.60}
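The pipeline shape is just staged functions run in order, each enriching the agent's state. A sketch of that structure with stub stages; the stage functions and their outputs are hypothetical placeholders, not what `full_onboard()` actually runs:

```python
def run_pipeline(agent_id: str, stages: list) -> dict:
    """Run each onboarding stage in order, accumulating state."""
    state = {"agent_id": agent_id}
    for stage in stages:
        state = stage(state)
    return state

# Stub stages standing in for the real interview/generate/calibrate/
# benchmark/register steps.
def interview(state):  return {**state, "spec": "drafted"}
def generate(state):   return {**state, "prompt": "generated"}
def calibrate(state):  return {**state, "calibrated": True}
def benchmark(state):  return {**state, "trust_score_seed": 0.60}
def register(state):   return {**state, "registered": True}

result = run_pipeline(
    "nhs-auditor", [interview, generate, calibrate, benchmark, register]
)
# → result["trust_score_seed"] == 0.60, result["registered"] == True
```

The benefit over "a prompt file and hope" is that every stage leaves a measurable artifact, so the registered agent arrives with a seed trust score instead of a blank record.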
The Stack
L7 GOVERNANCE Compliance-as-Code · SLA Framework
L6 VERIFICATION Agent Test Suite · Pitfall Registry
L5 COORDINATION Coordination Protocol · Trust Score
L4 SESSION Handoff Protocol
L3 DISCOVERY Capability Manifest · Trust Score · Identity
L2 COMMUNICATION Identity Protocol · Credential Proxy
L1 EXECUTION Blueprint Registry · Onboarding Protocol
Plus cross-layer: Deployment Manifest.
Get Started
pip install workswithagents
All specs: workswithagents.dev/specs/
All code: CC BY 4.0