How I Built 32 Orchestrated AIs for a Single Business Platform
Most "multi-agent" systems are just one LLM wearing different system prompts. EmpoweringBiz is 32 actual engines with typed signal contracts, failure propagation rules, and exactly one that is allowed to write the final strategy.
The term "multi-agent AI" has been diluted to meaninglessness. In most implementations, it means: spin up multiple instances of the same model, give each a different persona, and let them take turns generating text. This is role-playing, not orchestration.
The signal chain model
Each engine in EmpoweringBiz has a formal definition:
{
  "engine_id": "E2",
  "name": "market_analyzer",
  "input_schema": { ... },
  "output_schema": { ... },
  "validation_rules": [ ... ],
  "downstream": ["E4"],
  "can_write_strategy": false
}
The critical design decision: engines communicate through typed JSON signals, not natural language. When E2 finishes, it emits a structured JSON object that conforms to a predefined schema. E4 validates that schema on receipt. If validation fails, E4 does not run. It raises a signal fault.
This is the difference between orchestration and conversation. In a conversation, everything is a string. In an orchestration, everything is a typed contract.
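EmpoweringBiz's actual validation layer is not shown here, but the boundary check can be sketched in a few lines. This is a simplified illustration with hypothetical names (`SignalFault`, `validate_signal`, the field names in `E4_INPUT_SCHEMA`); the real system validates against full JSON Schema rather than a flat field-to-type map.

```python
class SignalFault(Exception):
    """Raised when an inter-engine signal fails validation at a boundary."""

# Hypothetical simplified schema: required field name -> expected type.
E4_INPUT_SCHEMA = {
    "engine_id": str,
    "opportunities": list,   # scored opportunity vectors from E2
    "confidence": float,
}

def validate_signal(signal: dict, schema: dict, receiver: str) -> dict:
    """Validate a typed signal before the receiving engine runs.

    If any required field is missing or mistyped, the receiver does
    not execute -- a SignalFault is raised instead.
    """
    for field, expected in schema.items():
        if field not in signal:
            raise SignalFault(f"{receiver}: missing field '{field}'")
        if not isinstance(signal[field], expected):
            raise SignalFault(
                f"{receiver}: field '{field}' is "
                f"{type(signal[field]).__name__}, expected {expected.__name__}"
            )
    return signal

# A well-formed signal passes; a malformed one halts the chain.
good = {
    "engine_id": "E2",
    "opportunities": [{"segment": "smb", "score": 0.8}],
    "confidence": 0.9,
}
validate_signal(good, E4_INPUT_SCHEMA, receiver="E4")  # ok

try:
    validate_signal({"engine_id": "E2"}, E4_INPUT_SCHEMA, receiver="E4")
except SignalFault as fault:
    print(fault)  # E4: missing field 'opportunities'
```

The point of the sketch: the check lives in the orchestrator, outside any model, so a malformed signal can never be "interpreted" into something plausible.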
The domino chain: E2 → E4 → E6 → E7 → G1
E2 (Market Analyzer) ingests raw market data and outputs scored opportunity vectors. If E2 misidentifies a segment, every downstream engine inherits that error.
E4 (Competitive Positioner) maps the business against competitors. A phantom competitor from E2 means strategy against an entity that does not exist.
E6 (Revenue Modeler) projects three financial models — conservative, baseline, aggressive — with assumptions traced to E4.
E7 (Risk Assessor) flags assumptions exceeding risk thresholds. Last checkpoint before strategy.
G1 (Strategy Synthesizer) is the only engine with can_write_strategy: true. No other engine can set estrategia_validada=true. This is enforced in code, not by convention.
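"Enforced in code" can be made concrete with a small sketch. The class and exception names here (`Strategy`, `StrategyWriteError`) are hypothetical; the point is only that the write gate checks the engine's `can_write_strategy` flag from its formal definition, so no prompt or persona can grant the permission.

```python
class StrategyWriteError(Exception):
    """Raised when an engine without write permission touches the strategy."""

class Strategy:
    """Final strategy object. Only an engine whose definition carries
    can_write_strategy: true may flip estrategia_validada."""
    def __init__(self):
        self.estrategia_validada = False

    def validate(self, engine: dict) -> None:
        # Engine descriptors mirror the JSON definitions shown earlier.
        if not engine.get("can_write_strategy", False):
            raise StrategyWriteError(
                f"{engine['engine_id']} may not write the final strategy"
            )
        self.estrategia_validada = True

G1 = {"engine_id": "G1", "can_write_strategy": True}
E6 = {"engine_id": "E6", "can_write_strategy": False}

s = Strategy()
try:
    s.validate(E6)           # any engine other than G1 is rejected
except StrategyWriteError as err:
    print(err)               # E6 may not write the final strategy

s.validate(G1)
print(s.estrategia_validada)  # True
```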
Why error propagation is the real problem
An error in E2 propagates through E4, E6, E7, and into G1. By the time it reaches the user, the original error has been laundered through four layers of analysis that make it look legitimate. The error gains credibility as it moves through the chain.
EmpoweringBiz addresses this with three mechanisms:
Schema validation at every boundary. Each engine validates input against JSON Schema before processing.
Cross-engine consistency checks. The orchestrator compares outputs across engines for logical consistency. If E6's projections imply a market larger than E2 identified, a fault is raised.
G1's synthesis validation. Before setting estrategia_validada=true, G1 runs its own validation pass. G1 is not just a summarizer — it is a final auditor.
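The second mechanism, the cross-engine consistency check, is worth a sketch. The example below uses hypothetical field names (`total_addressable_market`, `aggressive`, `year3_revenue`, `market_share`); real engine outputs are richer, but the orchestrator-level logic is the same: compare what E6's projections imply against what E2 actually identified.

```python
class ConsistencyFault(Exception):
    """Raised when two engines' outputs are mutually inconsistent."""

def check_market_consistency(e2_out: dict, e6_out: dict) -> None:
    """E6's aggressive projection must not imply a market larger
    than the one E2 identified (hypothetical field names)."""
    market_size = e2_out["total_addressable_market"]
    aggressive = e6_out["aggressive"]
    # Revenue at a given share implies a total market of revenue / share.
    implied = aggressive["year3_revenue"] / aggressive["market_share"]
    if implied > market_size:
        raise ConsistencyFault(
            f"E6 implies a market of {implied:,.0f}, "
            f"but E2 identified only {market_size:,.0f}"
        )

e2 = {"total_addressable_market": 50_000_000}
e6 = {"aggressive": {"year3_revenue": 9_000_000, "market_share": 0.10}}

try:
    check_market_consistency(e2, e6)  # 9M / 0.10 = 90M > 50M -> fault
except ConsistencyFault as fault:
    print(fault)
```

Note that neither E2 nor E6 can detect this error alone; the inconsistency only exists between them, which is why the check belongs to the orchestrator.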
322 tests, not because I am paranoid
The test suite defines what the system is. Each test encodes a specific way the system could produce a plausible but wrong strategy. If you cannot articulate your failure modes as executable tests, you do not understand your system well enough to deploy it.
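The 322 tests themselves are not public, but the shape of one is easy to illustrate. This is a hypothetical example in the spirit described above: it encodes a specific failure mode (E4 positioning against a competitor E2 never identified) as an executable assertion rather than a design-document bullet point.

```python
def test_phantom_competitor_is_surfaced():
    """Failure mode: E4 positions against a competitor that E2 never
    identified. The cross-check must surface the phantom instead of
    letting four layers of analysis launder it into the strategy.
    (Illustrative test; engine outputs and names are hypothetical.)"""
    e2_competitors = {"acme", "globex"}
    e4_positioning = {
        "acme": "underprice",
        "initech": "outfeature",   # phantom: never in E2's output
    }
    phantoms = set(e4_positioning) - e2_competitors
    assert phantoms == {"initech"}, "phantom competitor must be detected"

test_phantom_competitor_is_surfaced()
print("phantom-competitor failure mode is covered")
```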
The real architecture question
When someone tells me they built a "multi-agent AI system," I ask one question: what happens when Agent 3 receives invalid input from Agent 2?
If the answer is "it does its best with what it gets," that is not an architecture. That is hope. And hope is not a strategy.
Designing a multi-agent system that needs to be reliable? Let's architect it together →