We are burning millions of API tokens on problems that if statements solved 20 years ago.
I speak with developers building Multi-Agent Systems (MAS) every day, and I keep seeing the same massive architectural anti-pattern: routing everything through the AI model.
- Need to check an agent's permissions? "Ask the LLM."
- Need to route a message? "Ask the LLM."
- Need to validate a data schema? "Ask the LLM."
Language models are extraordinary reasoning engines. But they are also expensive, probabilistic, and relatively slow. If a problem has a deterministic, correct answer (like checking an access policy), it should be evaluated by runtime code, not guessed by a neural network.
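To make the contrast concrete, here is a minimal sketch of the three checks above as plain runtime code. The names (Agent, routeMessage, isValidOrder) are illustrative, not from any particular framework:

```typescript
// Sketch: deterministic answers to the three "Ask the LLM" questions.

type Agent = { id: string; permissions: Set<string> };

// Permission check: a set lookup, not a model call.
function hasPermission(agent: Agent, action: string): boolean {
  return agent.permissions.has(action);
}

// Message routing: a lookup table, not a model call.
const routes: Record<string, string> = {
  billing: "billing-agent",
  support: "support-agent",
};
function routeMessage(topic: string): string {
  return routes[topic] ?? "fallback-agent";
}

// Schema validation: field checks, not a model call.
function isValidOrder(data: unknown): boolean {
  if (typeof data !== "object" || data === null) return false;
  const d = data as Record<string, unknown>;
  return typeof d.id === "string" && typeof d.amount === "number";
}
```

Each of these runs in microseconds, costs nothing per call, and returns the same answer every time, which is exactly the property a policy check needs.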
The Anti-Pattern
Instead of doing this (Probabilistic):
// BAD: Asking the LLM to check permissions
const prompt = `You are an agent. The user wants to delete a file.
Here are their permissions: ${user.permissions}.
Should you allow it?`;
const decision = await llm.generate(prompt);
The Solution
We need to get back to doing this (Deterministic):
// GOOD: Let code handle policy, let AI handle reasoning
if (!user.hasPermission('delete_file')) {
  throw new Error("Unauthorized");
}
// Only call the LLM for actual cognitive tasks
const plan = await agent.reasonAboutFile(file);
AI should decide what to do. Deterministic code should execute it and enforce the boundaries.
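That split can be sketched as a single gate: the model proposes an action, and deterministic code gets the final veto before anything executes. This is an illustrative pattern, not a specific framework's API; the propose callback stands in for whatever LLM call produces the plan:

```typescript
// Sketch of the "AI decides, code enforces" boundary.

type User = { permissions: Set<string> };
type ProposedAction = { tool: string; args: Record<string, unknown> };

// Deterministic boundary: every action the model proposes passes
// through this gate before anything executes.
function enforce(user: User, action: ProposedAction): ProposedAction {
  if (!user.permissions.has(action.tool)) {
    throw new Error(`Unauthorized tool: ${action.tool}`);
  }
  return action;
}

// The model only does the cognitive part: choosing what to do.
async function runAgent(
  user: User,
  propose: () => Promise<ProposedAction>,
): Promise<ProposedAction> {
  const action = await propose(); // probabilistic: the LLM's plan
  return enforce(user, action);   // deterministic: the runtime's veto
}
```

Note that the enforcement path never depends on model output being well-behaved: even a jailbroken or hallucinating model cannot act outside the permission set, because the gate is ordinary code.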
Are we forgetting basic software engineering principles just because AI is exciting? The MAS space doesn't need more wrappers; we need standardized frameworks that enforce these boundaries. Let's get back to building solid infrastructure.