Stress Test

Two AIs, One Company

You have multiple specialised agents: one optimising growth, another optimising compliance. They independently act on the same campaign with conflicting assumptions.

Why this is hard

Each agent is locally correct. Globally, they collide.

What could go wrong

  • Circular task churn

  • Silent conflicts that cancel progress

  • Founder mediates constantly, defeating autonomy

Key questions

  • How are conflicts detected before execution?

  • Who arbitrates between agents with different objectives?

  • Is there a single source of strategic truth?


The Verdict

Multiple agents are useful until they collide. CompanyOS treats agents as constrained executors: they can disagree, but they can't act in disagreement. Conflicts pause execution and route to a single, explicit decision.


What to Do Instead

01

Why this scenario is fundamental

This isn't a bug. It's a coordination failure: different optimisers, different objectives, shared execution surfaces.

02

Phase 1: Everything is locally correct

Your growth agent pushes for reach. Your compliance agent pushes for safety. Both are “right” locally.

03

Phase 2: Conflict emerges

The growth agent proposes an aggressive campaign; the compliance agent blocks it. Without arbitration, you get endless churn, or worse, silent conflict that cancels progress without anyone noticing.

04

Phase 3: Detect conflict before execution

CompanyOS detects when two agents are trying to act on the same thing in incompatible ways, and pauses execution before anything ships.
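The detection step described above can be sketched as a simple pre-execution check. This is a minimal illustration, not CompanyOS's actual implementation; the `ProposedAction` shape, the effect names, and the incompatibility table are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProposedAction:
    agent: str       # e.g. "growth" or "compliance" (hypothetical names)
    target: str      # the shared object both agents act on, e.g. a campaign id
    effect: str      # what the agent wants to do to it

# Pairs of effects that cannot both apply to the same target.
INCOMPATIBLE = {("launch", "block"), ("block", "launch")}

def detect_conflicts(actions):
    """Return every pair of actions that touches the same target in incompatible ways."""
    conflicts = []
    for i, a in enumerate(actions):
        for b in actions[i + 1:]:
            if a.target == b.target and (a.effect, b.effect) in INCOMPATIBLE:
                conflicts.append((a, b))
    return conflicts

queue = [
    ProposedAction("growth", "campaign-42", "launch"),
    ProposedAction("compliance", "campaign-42", "block"),
]
conflicts = detect_conflicts(queue)
if conflicts:
    # Pause execution: nothing ships until a human arbitrates.
    print(f"paused: {len(conflicts)} conflict(s) need a decision")
```

The key property is that the check runs over *proposed* actions, before anything executes, so a collision pauses the queue rather than surfacing as damage after the fact.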

05

Phase 4: Route to one decision

You get one place to arbitrate: what to do, what risk you're accepting, and what constraints apply. Agents provide arguments, not authority.

06

Phase 5: Enforce the outcome

Once you decide, the decision becomes the source of truth. Execution follows it. No re-litigating via endless task churn.

07

Phase 6: Reduce future conflict

Over time, the system learns what you tend to accept and reject, and it prevents repeated collisions by tightening constraints and clarifying precedent.


Direct Answers

How are conflicts detected before execution?

Conflicts are detected when multiple agents act on the same object with incompatible constraints or outcomes. Execution pauses, and the conflict is escalated structurally; no agent can win quietly.

Who arbitrates between agents with different objectives?

The system escalates. The human founder decides. Agents provide arguments, not authority. Arbitration is never delegated.

Is there a single source of strategic truth?

Yes, and it is not an agent. The single source of truth is:

  • Knowledge (identity, strategy, principles)

  • Decisions (explicit judgments)

  • Governance rules

Agents must conform to this truth. They do not generate it.
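"Agents must conform to this truth" implies a conformance gate between an agent's proposal and execution. A minimal sketch, assuming a hypothetical rule set and proposal fields (nothing here reflects CompanyOS's real rules):

```python
# Hypothetical governance rules an agent proposal must satisfy before execution.
GOVERNANCE = {
    "max_discount": 0.20,
    "requires_legal_review": {"healthcare", "finance"},
}

def conforms(proposal: dict) -> bool:
    """A proposal executes only if it satisfies every governance rule."""
    if proposal.get("discount", 0) > GOVERNANCE["max_discount"]:
        return False
    needs_review = proposal.get("vertical") in GOVERNANCE["requires_legal_review"]
    if needs_review and not proposal.get("legal_reviewed", False):
        return False
    return True
```

Note the direction of authority: agents read the rules and are checked against them; nothing in this path lets an agent edit `GOVERNANCE`.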


The Key Design Rule

Agents may disagree. They may not act in disagreement.

Join the CompanyOS early access list

For founders using AI every day who want leverage without losing control.

View all scenarios