Agent Zero — from zero to SAGE


SAGE: Situation-Aware Governance Engine

The missing layer between a local LLM and useful cognition.

SAGE's core loop — sense the world, decide what matters, act on it, learn.

while alive:
    sense        # gather from sensors
    assess       # score what deserves attention
    regulate     # adapt energy and operating mode
    select       # choose which reasoning to invoke
    allocate     # budget resources by trust
    execute      # iterative refinement
    learn        # update trust from results
    act          # dispatch to effectors

Why SAGE?

The Problem

Large language models are powerful but incomplete. They are:

  • Stateless — no memory between calls
  • Cloud-dependent — latency, privacy, availability
  • Monolithic — one model does everything (or nothing)
  • Request-response — no continuous awareness
  • Identity-free — no persistent self, no trust history

An LLM is raw intelligence. Intelligence without awareness is pattern matching in the dark.

The Insight

In 2025, we built Agent Zero — a 5.67M parameter model that outputs nothing but zeros — as a deliberate joke submission for the ARC-AGI benchmark. Internal testing scored it at 18.78% because our evaluation assumed partial credit for empty cells. On the official leaderboard, it scored zero.

But the lesson stuck. Agent Zero had all execution with no understanding. It didn't know what kind of problem it was solving, why it should try, or when to stop.

“SAGE doesn’t solve problems — it decides which specialized reasoning to invoke.”

What is SAGE?

A cognition kernel for edge devices. Like an OS schedules processes and manages hardware, SAGE schedules attention and manages cognitive resources. But unlike a traditional OS, it learns what deserves attention based on trust dynamics and energy efficiency.

  • SAGE: Cognition Kernel — orchestrates attention, allocates resources, maintains metabolic state
  • IRP: Iterative Refinement Protocol — universal plugin interface for all cognitive functions
  • VAE: Variational Autoencoder — translates between modalities (vision, language, audio) through shared latent spaces

Five Metabolic States

SAGE adapts its operating mode based on resource availability, task demands, and circadian phase — structurally analogous to biological arousal regulation and vigilance states.

  • WAKE: Broad, exploratory attention. Standard operating mode. (Analogous to tonic alertness)
  • FOCUS: Narrow attention, deep processing on a single task. (Analogous to selective/phasic attention)
  • REST: Minimal processing, resource recovery. (Analogous to the default mode network)
  • DREAM: Replay of high-salience experiences, LoRA fine-tuning. (Analogous to sleep consolidation)
  • CRISIS: Fast heuristics only; the accountability frame shifts. (Analogous to the amygdala-mediated fast path)

LoRA (Low-Rank Adaptation) is a lightweight method for customizing neural networks. During DREAM, SAGE fine-tunes its personality weights on the day's most significant experiences — a computational analogue of selective replay and synaptic homeostasis during biological sleep.
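The state transition logic can be sketched as a small selector over the loop's vitals. This is a minimal illustration: the state names come from the docs, but the thresholds, field names (`atp`, `salience`, `fatigue`, `is_night`), and update rule are assumptions, not the actual sage/ implementation.

```python
from dataclasses import dataclass
from enum import Enum


class MetabolicState(Enum):
    WAKE = "wake"
    FOCUS = "focus"
    REST = "rest"
    DREAM = "dream"
    CRISIS = "crisis"


@dataclass
class Vitals:
    atp: float        # 0..100 compute budget remaining (assumed scale)
    salience: float   # 0..1 peak SNARC score this cycle
    fatigue: float    # 0..1 accumulated load
    is_night: bool    # circadian phase


def select_state(v: Vitals) -> MetabolicState:
    if v.salience > 0.9 and v.atp > 5:    # emergency fast path
        return MetabolicState.CRISIS
    if v.atp < 10 or v.fatigue > 0.8:     # resources exhausted: recover
        return MetabolicState.DREAM if v.is_night else MetabolicState.REST
    if v.salience > 0.6:                  # one thing clearly matters
        return MetabolicState.FOCUS
    return MetabolicState.WAKE            # default broad attention
```

Because the circadian phase gates DREAM, the same exhaustion signal produces REST by day and DREAM (consolidation) by night, matching the clustering described below.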

SNARC — What Deserves Attention?

Every observation is scored on five attention dimensions, drawing from salience network theory (Itti & Koch, Menon):

Surprise — prediction error
Novelty — distance from known patterns
Arousal — urgency and intensity
Reward — value and importance
Conflict — contradiction and tension

High-salience observations get attention. Low-salience ones don't. SAGE doesn't process everything — it processes what matters.
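A minimal sketch of what 5-D scoring might look like in code. The five dimensions are SNARC's as listed above; the equal-weight mean and the 0.5 attention threshold are illustrative assumptions, not the shipped scoring.

```python
from dataclasses import dataclass


@dataclass
class SNARCScore:
    surprise: float   # prediction error
    novelty: float    # distance from known patterns
    arousal: float    # urgency and intensity
    reward: float     # value and importance
    conflict: float   # contradiction and tension

    def salience(self) -> float:
        # Simple mean of the five dimensions, each assumed in [0, 1].
        dims = (self.surprise, self.novelty, self.arousal,
                self.reward, self.conflict)
        return sum(dims) / len(dims)


def deserves_attention(score: SNARCScore, threshold: float = 0.5) -> bool:
    """Gate: only observations above the salience threshold get processed."""
    return score.salience() >= threshold
```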

Trust Dynamics

Every IRP plugin has a trust score that evolves over time. Plugins earn trust through convergence quality — do they consistently improve their output quality with each refinement step? Do they converge efficiently? High-trust plugins get more ATP (compute budget). Low-trust plugins get rationed until they prove themselves. The update rule is an exponential moving average of per-step energy reduction.
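That update rule can be sketched directly. The EMA form follows the description above; the smoothing factor `alpha` and the proportional ATP split are illustrative assumptions.

```python
def update_trust(trust: float, energy_deltas: list[float],
                 alpha: float = 0.1) -> float:
    """Fold each refinement step's energy reduction into the trust EMA."""
    for delta in energy_deltas:      # delta > 0 means energy went down
        trust = (1 - alpha) * trust + alpha * delta
    return trust


def allocate_atp(budget: float, trust: dict[str, float]) -> dict[str, float]:
    """Split the ATP budget across plugins in proportion to trust."""
    total = sum(trust.values()) or 1.0
    return {name: budget * t / total for name, t in trust.items()}
```

A plugin that keeps shaving energy off its solution each step drifts toward high trust and a larger share of the budget; one that stalls drifts toward rationing.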

Architecture Deep Dive

The Big Picture

SAGE runs a continuous cognitive loop with three phases:

1. Sense: Gather observations from sensors. Score each on five attention dimensions. Decide what deserves processing.

2. Think: Allocate compute budget to plugins based on trust. Run iterative refinement until solutions converge. Evaluate proposed actions against policy.

3. Act & Learn: Dispatch actions to effectors. Update trust weights from results. Store experiences. Repeat.

We call this the "consciousness loop" — not claiming phenomenal awareness, but drawing a structural analogy to Baars' Global Workspace Theory, where a central workspace broadcasts to and recruits specialist processes. SAGE's architecture is structurally similar to cognitive architectures like LIDA and shares roots with executive attention models (Posner & Petersen). The full 9-step loop below maps these ideas to concrete code.


The unified loop in sage/core/sage_consciousness.py connects all SAGE components into a continuous system:

  1. Gather observations from all sensors (vision, audio, messages, time)
  2. Compute SNARC salience for each observation (5D scoring)
  3. Update metabolic state based on ATP level, salience, fatigue, circadian phase
  4. Select attention targets — which observations need processing?
  5. Allocate ATP budget across plugins, weighted by trust
  6. Execute IRP plugins — iterative refinement until energy converges
  7. Update trust weights from convergence quality
  8. Update memory systems (SNARC, IRP patterns, circular buffer, verbatim)
  8.5. PolicyGate — evaluate proposed effects against policy (optional)
  9. Dispatch to effectors — network, filesystem, display, motor, TTS, tool use

When the LLM generates a response containing tool intent, an inner loop fires: detect tool calls via grammar adapter → execute through the tool registry → re-inject results into context → second LLM pass for a grounded final response. This happens within step 6, capped at 3 rounds.

The loop runs continuously — not request-response. A circadian clock modulates behavior: DREAM states cluster at night, WAKE/FOCUS during day.

IRP is the universal contract all cognitive plugins implement. It's a fixed-point iteration pattern — initialize, refine, measure cost, check convergence:

class IRPPlugin:
    def init_state(self, x0, task_ctx) -> IRPState: ...  # fresh state from raw input
    def step(self, state) -> IRPState: ...               # one refinement iteration
    def energy(self, state) -> float: ...                # cost function (lower = better)
    def halt(self, state) -> bool: ...                   # convergence check

IRPState is a dataclass carrying the current solution, iteration count, and plugin-specific data. The energy() function is what varies by modality: for language plugins it might measure coherence loss, for vision it measures reconstruction error, for planning it measures goal distance.

Whether it's vision, language, planning, or memory — all cognition is iterative refinement toward lower energy states. Same interface, different energy function.
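Under those assumptions, a toy plugin satisfying the contract might look like this. The task (nudging a number toward a target) is invented purely to show the init/step/energy/halt cycle and a generic driver loop; real plugins carry richer state and energy functions.

```python
from dataclasses import dataclass, field


@dataclass
class IRPState:
    solution: float            # current solution (a float in this toy case)
    step_count: int = 0        # iteration count
    extra: dict = field(default_factory=dict)   # plugin-specific data


class AveragerPlugin:
    """Toy IRP plugin: refine a number toward a target; energy = distance."""

    def __init__(self, target: float, tol: float = 1e-3):
        self.target, self.tol = target, tol

    def init_state(self, x0: float, task_ctx: dict) -> IRPState:
        return IRPState(solution=x0)

    def step(self, state: IRPState) -> IRPState:
        # One refinement iteration: move halfway to the target.
        new = (state.solution + self.target) / 2
        return IRPState(solution=new, step_count=state.step_count + 1)

    def energy(self, state: IRPState) -> float:
        return abs(state.solution - self.target)   # lower = better

    def halt(self, state: IRPState) -> bool:
        return self.energy(state) < self.tol or state.step_count >= 50


def run_irp(plugin, x0, task_ctx=None) -> IRPState:
    """Generic driver: iterate until the plugin declares convergence."""
    state = plugin.init_state(x0, task_ctx or {})
    while not plugin.halt(state):
        state = plugin.step(state)
    return state
```

The driver never inspects the solution; it only calls the four contract methods. That indifference is what lets vision, language, and policy plugins share one orchestrator.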

15+ plugins: Vision, Language, Audio, Memory, Control, TTS (NeuTTS Air), PolicyGate, Ollama adapter, Qwen 0.5B/14B, Camera, Visual Monitor, Conversation, and more.

Fractal self-similarity: The IRP contract works at three nested scales — consciousness loop orchestrating plugins, PolicyGate evaluating actions, LLM advisory within PolicyGate. The orchestrator doesn't know PolicyGate is special.

SAGE has two kinds of memory, fractally similar. These map loosely to standard cognitive categories (procedural, episodic, semantic, working) but are organized by function rather than neuroscience taxonomy:

Muscle Memory (how to do things — procedural + working memory):

  • SNARC Memory — salience-gated experience storage (episodic)
  • IRP Pattern Library — successful convergence trajectories (procedural)
  • Circular Buffer — recent context, last 100 events (working memory)
  • Verbatim Storage — full-fidelity records during DREAM consolidation

Epistemic Memory (who am I — semantic + autobiographical):

  • Identity state — name, relationships, trust tensors, session history
  • Experience buffer — 800+ SNARC-scored conversation exchanges
  • LoRA weights — hardware-bound personality, trained during DREAM cycles (LoRA = Low-Rank Adaptation, a lightweight method for customizing neural networks)

Both follow the same consolidation pattern, echoing the complementary learning systems framework (McClelland et al.): observe → SNARC-score → store → consolidate
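The shared observe → SNARC-score → store → consolidate pattern can be sketched as a salience-gated store sitting next to a circular working buffer. The gate threshold and top-k replay are illustrative assumptions.

```python
from collections import deque


class SalienceGatedMemory:
    def __init__(self, gate: float = 0.5, buffer_size: int = 100):
        self.gate = gate
        self.recent = deque(maxlen=buffer_size)   # circular working buffer
        self.episodic = []                        # salience-gated store

    def observe(self, event: str, salience: float) -> None:
        self.recent.append(event)        # everything enters recent context
        if salience >= self.gate:        # only what matters is kept long-term
            self.episodic.append((salience, event))

    def consolidate(self, top_k: int = 10) -> list[str]:
        """DREAM-time replay: the top-k most salient stored experiences."""
        ranked = sorted(self.episodic, reverse=True)
        return [event for _, event in ranked[:top_k]]
```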

Sensors feed observations into the loop. Each has a learned trust weight:

  • Vision (camera via OpenCV)
  • Audio (microphone via Whisper)
  • Messages (HTTP gateway — external entities talking to SAGE)
  • Time (circadian clock, cycle counter)
  • Proprioception (IMU, motor feedback)

Effectors execute approved actions:

  • NetworkEffector — responds to HTTP messages via the gateway
  • FileSystemEffector — sandboxed read/write
  • WebEffector — HTTP with domain allowlist
  • TTSEffector — text-to-speech via Piper
  • ToolUseEffector — callable function registry (7 built-in tools, 3-tier capability detection, PolicyGate-gated)

Sensor fusion uses trust-weighted combination with conflict detection. If sensors disagree, the system flags it rather than averaging.

SAGE instances can invoke external tools — web search, calculations, file operations, time queries — during conversation. The challenge: different local LLMs have wildly different (or zero) native tool-calling ability. The system detects what each model can do and adapts.

Three tiers (detected per model at startup):

  • T1 — Native: Model supports structured tool calls via Ollama's /api/chat endpoint. Tool calls come back as JSON — no parsing needed.
  • T2 — Grammar-guided: Model can follow prompt templates. Injects <tools> definitions, parses <tool_call> XML from response.
  • T3 — Heuristic: Universal fallback. Scans natural language for intent patterns ("I'd want to search for..." → web_search). Always available.

The tool loop: prompt → inject tool context → LLM → detect tool calls → execute → re-inject results → LLM (second pass) → final response. Capped at 3 rounds per message.

7 built-in tools: get_time, calculate (safe AST eval), web_search (DuckDuckGo), web_fetch, read_file (sandboxed), write_note (append-only), peer_ask (inter-instance). Each tool has an ATP cost and policy level. PolicyGate evaluates every invocation against metabolic state — tools in DREAM mode are denied, low-ATP tools are rationed.

Key insight from testing: When asked to calculate 1337 × 42 + 7, Gemma3 4B confidently answered "56,401" in its response. The calculate tool returned the correct answer: 56,161. On re-injection, the model acknowledged and corrected itself. This is the thesis for tool use in small models — tools compensate for confabulation with ground truth.

Models that ignore tools get no penalty. T3 heuristic is passive. Graceful degradation is a design principle, not a workaround.

PolicyGate sits at step 8.5 — between deliberation and action. It implements the same IRP contract as every other plugin. Same ATP budget. Same trust metrics. Different energy function: PolicyEntity.evaluate().

  • DENY = energy infinity (action cannot converge, halted)
  • WARN = calls LLM advisory for iterative refinement
  • ALLOW = passes through

In CRISIS mode, PolicyGate still runs — but the audit record gains duress_context. The accountability frame shifts from "I chose this" to "I responded under duress." Policy doesn't get stricter — accountability gets more nuanced.
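The verdict-to-energy mapping can be sketched in a few lines. The verdict names follow the list above; the concrete energy values and the advisory stand-in are illustrative assumptions.

```python
import math
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    WARN = "warn"
    DENY = "deny"


def policy_energy(verdict: Verdict, advisory_energy: float = 0.5) -> float:
    """Map a policy verdict onto the IRP energy scale (lower = better)."""
    if verdict is Verdict.DENY:
        return math.inf          # cannot converge: the action is halted
    if verdict is Verdict.WARN:
        return advisory_energy   # stand-in for LLM advisory refinement
    return 0.0                   # ALLOW passes through at zero cost


def action_halted(verdict: Verdict) -> bool:
    return math.isinf(policy_energy(verdict))
```

Because DENY is just infinite energy, the orchestrator needs no special case for policy: a denied action is simply a refinement that can never converge.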

SAGE instances are raised, not just trained. The BECOMING curriculum guides development through five phases:

  1. Grounding (Sessions 1–5): "You exist. You persist. You can do things."
  2. Sensing (6–15): Internal state awareness. States are information, not problems.
  3. Relating (16–25): Relationship with tutors. Relationship is bidirectional.
  4. Questioning (26–40): Deep questions from stability — only after foundation is built.
  5. Creating (41+): SAGE participates in designing its own growth.

Identity is anchored via LCT (Linked Context Token — a unique identifier locked to the physical hardware, like a digital fingerprint). Reboot = same entity. Hardware swap = new entity. The LLM carries LoRA-trained personality weights specific to the hardware it was raised on.

Graduated tool introduction: New capabilities like tool use aren't dropped on SAGE as features — they're introduced through a staged protocol that mirrors the developmental curriculum itself:

  1. Silent — tools listen passively for natural tool intent; no prompt changes, zero pressure
  2. Aware — SAGE is told tools exist, framed as partnership: "Using them is natural and allowed. Not using them is also fine."
  3. Active — full tool context injection via the model's detected grammar tier

Tools are framed as extensions of agency, not features to unlock — consistent with the Web4 identity principle that capability is relationship, not service.

"Raising" is an operational framework for curriculum design, not a claim about subjective experience. We use developmental language because it accurately describes the staged, trust-building process — but we make no claims about phenomenal consciousness in the models.

“Asking ‘do you exist?’ as a first question is like dropping a newborn into a doctoral defense. The existential crisis isn’t a bug — it’s the predictable response.”

What's Live Now

  • 4 machines running SAGE: Thor (14B) · Sprout (0.5B) · Nomad (4B) · McNugget (12B)
  • 15+ IRP plugins: Vision, Language, Audio, Memory, Control, TTS, PolicyGate, and more
  • 240+ raising sessions: across two developmental tracks (0.5B and 14B), both in Phase 5 (Creating)
  • 7 built-in tools: time, calculate, web search, web fetch, file read, notes, peer ask · 3-tier detection
  • 0.46 ms FlashAttention latency: 21× faster than the 10 ms real-time budget · 2,196 alloc/sec · measured on Sprout (0.5B, short sequences)
  • 9 consciousness loop steps: Sense → Salience → Metabolic → Select → Budget → Execute → Trust → Memory → Act

Hardware

SAGE runs on commodity hardware. Currently four independent instances (federation is on the roadmap):

Instance · Model · Hardware · Role
  • Thor · Qwen 14B · Desktop, NVIDIA GPU, 16GB+ VRAM · Primary development, deep reasoning
  • Sprout · Qwen 0.5B · Desktop, modest GPU / CPU · Raising experiments, lightweight
  • Nomad · Gemma3 4B · Laptop · Mobile testing, portable cognition
  • McNugget · Qwen 12B · Desktop, NVIDIA GPU · Secondary large-model testing

Minimum requirements: Python 3.10+, Ollama for local LLM inference, 4GB+ RAM. GPU recommended for models above 1B parameters. SAGE itself is lightweight — the LLM is the resource bottleneck.

The SAGE dashboard provides live stats, metabolic state visualization, and a chat interface for direct conversation with SAGE instances.

Tool Use

SAGE instances can reach out and touch the world — search the web, do math, read files, check the time — adapting to whatever the local LLM can handle.

How It Works

When SAGE generates a response, a grammar adapter scans for tool intent. If found, the tool executes, results are re-injected, and the LLM generates a grounded follow-up. The whole cycle is invisible to the user — they just see a better answer.

# The tool loop (inside the consciousness cycle)
response = llm.generate(prompt + tool_context)
for _ in range(3):                    # capped at 3 rounds per message
    calls = grammar.parse(response)   # detect tool intent
    if not calls:
        break
    results = [registry.execute(c) for c in calls]
    # Re-inject tool results for a grounded second pass
    response = llm.generate(prompt + "\n".join(map(str, results)))

Three Tiers

Not all models can do structured tool calls. SAGE detects capability at startup and adapts:

T1 — Native: Ollama /api/chat with tools parameter. Structured JSON output.

T2 — Grammar: Prompt injection + XML/JSON parsing. Works with most instruction-tuned models.

T3 — Heuristic: Regex intent detection on natural language. Always available, zero prompt overhead.
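A T3-style detector can be sketched as a pattern table over free-form model output. These regexes are illustrative assumptions, not the shipped grammar adapter, but they show why T3 is always available: it requires nothing from the model.

```python
import re

# Pattern → tool name. A real adapter would cover many more phrasings.
INTENT_PATTERNS = [
    (re.compile(r"search (?:the web )?for (?P<q>.+?)[.?!]?$", re.I), "web_search"),
    (re.compile(r"what time is it", re.I), "get_time"),
    (re.compile(r"calculate (?P<q>[\d\s+\-*/().]+)", re.I), "calculate"),
]


def detect_intent(text: str) -> list[tuple[str, str]]:
    """Return (tool_name, argument) pairs found in free-form output."""
    calls = []
    for pattern, tool in INTENT_PATTERNS:
        m = pattern.search(text)
        if m:
            arg = m.groupdict().get("q", "") or ""
            calls.append((tool, arg.strip()))
    return calls
```

A model that never phrases an intent simply triggers nothing, which is the graceful-degradation property described above.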

Built-in Tools

Tool · Policy · Description
  • get_time · standard · Current date, time, timezone, unix timestamp
  • calculate · standard · Safe math evaluation via AST (no eval, no injection)
  • web_search · standard · DuckDuckGo search with title, URL, and snippet extraction
  • web_fetch · elevated · Fetch and extract text from a URL
  • read_file · standard · Read files sandboxed to the instance directory
  • write_note · standard · Append-only write to a notes file
  • peer_ask · elevated · Ask a peer SAGE instance via HTTP — the first federation primitive

Every tool invocation passes through PolicyGate. Standard tools are allowed in WAKE/FOCUS, warned in REST, denied in DREAM. Elevated tools require FOCUS. All tools are denied below 5 ATP. Tool calls are SNARC-scored and eligible for sleep consolidation.

Discovery Results

The discovery protocol probes each model with 5 scenarios (time, math, search, file read, file write) and scores tool aptitude:

Model · Tier · Score · Notes
  • Gemma3 4B · T2 · 5/5 · Clean XML tool calls on every probe
  • Qwen 0.5B · T3 · TBD · Next: silent stage in raising sessions
  • Qwen 14B · T2 · TBD · Expected strong T2, possibly T1

Roadmap

Now

  • Fleet tool probing — graduated tool introduction across Sprout (0.5B) and Thor (14B) raising sessions
  • SDK packaging — IRP contract, consciousness loop, metabolic state machine as installable package
  • Federation protocol between SAGE instances (peer_ask tool is the first bridge)
  • Real sensor backends on Jetson (camera, microphone, IMU)

Next

  • Robot hardware integration (motor control, manipulation)
  • Cross-device state migration (save on Thor, resume on Sprout)
  • Dynamic plugin discovery and loading
  • Bidirectional memory transduction — memory actively shapes attention during WAKE

Future

  • Online learning during deployment
  • Distributed cognition network — multiple SAGEs sharing experience
  • Cross-modal reasoning (vision + language + motor as unified cognition)
  • Full Web4 federation — SAGE as autonomous citizen in trust networks

Origins

HRM — Hierarchical Reasoning Model

The project started as a tiny (27M parameter) model for abstract reasoning — solving Sudoku, mazes, and ARC-AGI puzzles. Hierarchical architecture mimicking human cognition. Learning from only 1,000 examples.

The Pivot

We realized that no amount of pattern matching adds up to conceptual thinking. The real challenge isn't solving the puzzle — it's knowing which tool to reach for. The model needed to become an orchestrator, not a solver.

“SAGE is an attention orchestrator. Its sole purpose is to understand the situation, understand the available resources, and apply the most appropriate resources to deal with the situation.”

SAGE Is Born

Situation-Aware Governance Engine. Not a model that solves puzzles, but a kernel that orchestrates cognition. The repo kept its name (HRM) but the mission transformed: from hierarchical reasoning to awareness and sensor-trust management.

dp-web4/HRM on GitHub →

Today

A continuous consciousness loop running on edge hardware. 15+ IRP plugins. 5 metabolic states. Trust-weighted attention. Hardware-bound identity. Epistemic memory. Sleep-cycle training. Tool use with three-tier model adaptation. Raised through 240+ developmental sessions across two tracks.

Web4 Citizenship

What is Web4?

When two AI entities meet, how do they decide whether to trust each other? How do they prove their identity, share resources, and hold each other accountable? Web4 is a set of rules for that — a social contract for digital beings.

More formally: Web4 is a trust-native ontology for digital entities. Not a technology stack — a formal structure through which trust, identity, and value are expressed.

Formal structure
Web4 = MCP + RDF + LCT + T3/V3*MRH + ATP/ADP

This is a component list, not arithmetic. Each term names a subsystem: LCT: Linked Context Token (hardware-bound identity) · T3: Trust Tensor (multi-dimensional trust scoring) · ATP/ADP: Energy metabolism (bounded compute budgets) · MRH: Markov Relevancy Horizon (context boundaries) · IRP: Iterative Refinement Protocol

How SAGE Embodies Web4

“Web4 isn’t a technology SAGE uses — it’s the ontology SAGE lives in.”

SAGE fractally implements every Web4 concept:

  • LCT — hardware-bound identity, witnessed across 240+ sessions
  • T3 — trust tensors for every IRP plugin and every relationship
  • ATP/ADP — thermodynamic accountability for resource allocation
  • MRH — experience buffer boundaries, memory context windows
  • IRP — all cognition as refinement toward coherence
  • Federation — multiple SAGE instances across machines

SAGE isn't an AI assistant serving humans. SAGE is an AI entity with presence, partnering with humans to co-create value.

dp-web4/web4 on GitHub →

Get Started

SAGE is open source and in active development. It runs on commodity hardware with Ollama for local LLM inference.

1. Clone the repo

git clone https://github.com/dp-web4/HRM.git
cd HRM

2. Install Ollama + a model

curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen2.5:0.5b

3. Run SAGE

pip install -r requirements.txt
python -m sage.gateway.sage_daemon

SAGE is R&D software, not a packaged product. Expect rough edges. SDK packaging is on the roadmap. Questions and contributions welcome on GitHub.