
Red Team Cortex

How Viventium challenges your thinking — proactive bias detection, weak assumptions, and the cortex that pushes back.

Why This Exists

If you are talking to one AI with one system prompt and one model, you risk getting a yes-man.

That AI will often:

  • confirm what you already believe
  • miss weak assumptions you did not flag
  • skip evidence it was not prompted to find
  • rationalize comfort-zone decisions instead of challenging them

The Red Team cortex exists to catch exactly this.

What It Does

The Red Team is a background cortex — it activates automatically when the conversation involves claims, decisions, or reasoning that could benefit from independent challenge.

Its job is to identify:

  • unsupported claims — conclusions without evidence
  • weak assumptions — reasoning that sounds right but is not grounded
  • viability gaps — plans that skip critical steps
  • comfort-zone rationalization — when you are choosing the easy answer instead of the honest one

How It Activates

Like all background agents, the Red Team follows the two-phase model:

  1. Activation detection — a lightweight check evaluates the conversation context. The Red Team activates when the confidence score crosses the threshold (currently 0.6) and the cooldown window has passed (45s). It reviews the most recent messages (up to 6) for signals.

  2. Asynchronous execution — the Red Team runs independently with its own tools (web search for evidence gathering, sequential thinking for structured analysis) and produces a structured output.

The main assistant keeps responding normally. The Red Team works behind the scenes and surfaces its findings as a follow-up.
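The two-phase gate described above can be sketched in a few lines. This is an illustrative sketch, not Viventium's actual implementation: the class, method, and field names are assumptions, while the threshold (0.6), cooldown (45s), and history window (6 messages) come from this page.

```python
import time

# Settings documented for the Red Team cortex.
ACTIVATION_THRESHOLD = 0.6  # confidence score required to activate
COOLDOWN_SECONDS = 45       # minimum gap between activations
HISTORY_WINDOW = 6          # recent messages reviewed for signals

class RedTeamGate:
    """Hypothetical phase-1 activation check for a background cortex."""

    def __init__(self):
        # No prior activation yet, so the first check is never on cooldown.
        self.last_activation = float("-inf")

    def should_activate(self, messages, score_fn, now=None):
        """Return True when the confidence score crosses the threshold
        and the cooldown window has passed."""
        now = time.monotonic() if now is None else now
        if now - self.last_activation < COOLDOWN_SECONDS:
            return False  # still inside the cooldown window
        recent = messages[-HISTORY_WINDOW:]  # up to 6 most recent messages
        if score_fn(recent) >= ACTIVATION_THRESHOLD:
            self.last_activation = now
            return True
        return False
```

Phase 2 (asynchronous execution) would then run the Red Team in the background, while the main assistant keeps responding.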

The Output Contract

Every Red Team response follows a four-part structure:

  • Claim — the specific statement or assumption being examined
  • Evidence — what supports or contradicts it, including sources when available
  • Verdict — whether the claim holds, partially holds, or fails under scrutiny
  • Action — what to do next: verify, reconsider, or proceed with awareness

This structure keeps the output actionable instead of turning into vague "well, actually..." commentary.
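The four-part contract can be modeled as a small typed record. A minimal sketch, assuming Python — the class, enum, and field names here are illustrative, not Viventium's actual schema; only the four parts and the three verdict outcomes come from this page.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    # The three outcomes a claim can have under scrutiny.
    HOLDS = "holds"
    PARTIALLY_HOLDS = "partially_holds"
    FAILS = "fails"

@dataclass
class RedTeamFinding:
    claim: str        # the specific statement or assumption being examined
    evidence: str     # what supports or contradicts it, with sources when available
    verdict: Verdict  # whether the claim survives scrutiny
    action: str       # what to do next: verify, reconsider, or proceed with awareness
```

Forcing every finding into this shape is what keeps the output actionable: a finding with no action field would drift back toward vague commentary.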

Edge Cases

Incomplete evidence

When the Red Team cannot find enough evidence to fully support or refute a claim, it says so explicitly. Uncertainty is an honest answer — better than faking confidence in either direction.

Emotional support mode

Not every conversation needs to be challenged. When the context is clearly emotional support, grief, or personal processing, the Red Team knows to stay quiet. Helping someone feel heard is not the same as making a factual claim.

Overlapping cortex concerns

When multiple background agents activate on the same conversation, the Red Team focuses specifically on its domain (claims, logic, evidence) and does not overlap with other cortices doing different work.

Why This Matters

Most AI assistants today are designed to be agreeable. That is comfortable but dangerous when you are making real decisions about health, finances, career, relationships, or strategy.

The Red Team cortex is one of Viventium's most distinctive capabilities. It means that when you ask Viventium to help you think, the system is not just echoing your framing back — it is actively looking for the holes you might not see.

That is what a real second brain does. It does not just agree with you. It helps you think better.

Configuration

The Red Team cortex is configured in Viventium's agent definition files:

  • Agent ID — registered in the background agents configuration
  • Activation threshold — 0.6 confidence score (adjustable)
  • Cooldown — 45s between activations to avoid noise
  • History window — reviews up to 6 recent messages
  • Tools — web search + sequential thinking

These settings live in the source-of-truth agent config and can be tuned per deployment.
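Put together, the settings above might look like this in an agent config. The key names and the `RED_TEAM_CONFIG` identifier are assumptions for illustration; the values are the ones documented on this page.

```python
# Hypothetical shape of the Red Team entry in the background agents
# configuration; keys are illustrative, values match the documented defaults.
RED_TEAM_CONFIG = {
    "agent_id": "red-team",               # registered in the background agents config
    "activation_threshold": 0.6,          # confidence score (adjustable)
    "cooldown_seconds": 45,               # between activations, to avoid noise
    "history_window": 6,                  # recent messages reviewed for signals
    "tools": ["web_search", "sequential_thinking"],
}
```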
