The Agentra Stack
Domain-specialist models on open-source infrastructure. Every decision auditable. Reasoning, substrate, and settlement — three layers, one stack.
Reasoning Layer
Domain-specialist open-weight fine-tunes. Apache 2.0.
active.model
Solen
Supply Chain Management
Powers: Nexus Planner
Base: Open-weight foundation
License: Apache 2.0
Why Domain Models Beat Generalists
Thinks like a supply chain director, not a chatbot that knows about supply chain. Solen has read nothing but supply chain material for its entire existence — every research paper, every SEC filing, every court case where a supply chain failure ended in litigation.
The Six Training Categories
Standard models are trained on questions and answers. These models are trained on reasoning processes.
The Calibration Layer
Every answer includes three things: how certain this answer is given the inputs provided, which input — if wrong by 20% — most changes the result, and what to verify before acting on the recommendation.
A calculator gives you a number. A senior advisor tells you how much to trust it.
The model tells you exactly what it needs to know, and why, before it answers. It reasons through incomplete data with calibrated confidence.
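A minimal sketch of what such a calibrated answer could look like as a data structure. The field names and the reorder-point example are hypothetical illustrations, not Solen's actual output schema.

```python
from dataclasses import dataclass

@dataclass
class CalibratedAnswer:
    """One answer plus the three calibration fields described above."""
    value: float               # the recommendation itself
    confidence: float          # how certain the answer is, given the inputs provided
    most_sensitive_input: str  # the input that, if wrong by 20%, most changes the result
    verify_before_acting: str  # what to check before acting on the recommendation

# Hypothetical example: a reorder-point recommendation.
answer = CalibratedAnswer(
    value=1200.0,
    confidence=0.82,
    most_sensitive_input="lead_time_days",
    verify_before_acting="Confirm the supplier's current lead time against recent PO history.",
)
print(answer.confidence, answer.most_sensitive_input)
```

The point of the structure is that the calibration fields travel with the number, so downstream code cannot consume the recommendation without also seeing how much to trust it.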
The Substrate Starts With Memory
Every model decision gets remembered. Every reasoning chain is traversable.
active.project
AgenticMemory
Persistent cognitive graph memory for cross-session reasoning continuity.
147 MCP tools
Portable artifact: .amem
Memory Is Not Search
Most systems treat memory as retrieval over flattened text chunks. AgenticMemory treats memory as graph cognition: nodes for what the agent learned or decided, edges for why those events connect, and traversal for reasoning history.
The Atom: Cognitive Event
The smallest memory unit is a cognitive event, not a transcript chunk: FACT, DECISION, INFERENCE, CORRECTION, SKILL, and EPISODE. Each event is written with confidence, timestamps, access dynamics, feature vectors, and direct edge references for O(1) node access.
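A sketch of the cognitive-event atom under the assumptions above. The six event types come from the text; the exact field layout is hypothetical, not the published `.amem` record format.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
import time

class EventType(Enum):
    FACT = auto()
    DECISION = auto()
    INFERENCE = auto()
    CORRECTION = auto()
    SKILL = auto()
    EPISODE = auto()

@dataclass
class CognitiveEvent:
    """One memory atom: a cognitive event, not a transcript chunk."""
    id: int
    type: EventType
    content: str
    confidence: float                             # how strongly the agent believes this
    created_at: float = field(default_factory=time.time)
    last_accessed: float = field(default_factory=time.time)
    access_count: int = 0                         # access dynamics for decay/reinforcement
    vector: list = field(default_factory=list)    # feature vector for similarity recall
    edges: list = field(default_factory=list)     # direct neighbor ids, so node access is O(1)

ev = CognitiveEvent(id=1, type=EventType.DECISION,
                    content="Switch to supplier B", confidence=0.9)
print(ev.type.name)
```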
Edges Make It A Brain
Relationship edges encode causality and truth evolution: CAUSED_BY, SUPPORTS, CONTRADICTS, SUPERSEDES, RELATED_TO, PART_OF, TEMPORAL_NEXT. This preserves why decisions happened and what changed over time.
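The edge types above can be sketched as follows, with a toy SUPERSEDES walk showing how the graph resolves "current truth" without deleting history. The adjacency representation is an illustration, not the `.amem` edge-table layout.

```python
from enum import Enum

class EdgeType(Enum):
    CAUSED_BY = 1
    SUPPORTS = 2
    CONTRADICTS = 3
    SUPERSEDES = 4
    RELATED_TO = 5
    PART_OF = 6
    TEMPORAL_NEXT = 7

# Toy edge list: (edge type, source, target). "3 SUPERSEDES 2" = event 3 replaces event 2.
graph = [
    (EdgeType.SUPERSEDES, 2, 1),
    (EdgeType.SUPERSEDES, 3, 2),
    (EdgeType.CAUSED_BY, 3, 4),
]

def current_truth(node: int) -> int:
    """Follow SUPERSEDES edges forward in time: who replaced this event, transitively?"""
    replaced = {dst: src for et, src, dst in graph if et is EdgeType.SUPERSEDES}
    while node in replaced:
        node = replaced[node]
    return node

print(current_truth(1))  # the old fact resolves to its latest revision, event 3
```

The superseded events stay in the graph, which is what preserves "what changed over time" instead of overwriting it.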
One File, Memory-Mappable
`.amem` is a binary graph file with header, node table, edge table, content block, vectors, and indexes. It is memory-mappable, portable, and query-ready without external databases or managed vector services.
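To illustrate the memory-mappable single-file idea, here is a toy header written and then read back through `mmap`. The field set (magic, version, node count, edge count, content offset) is a guess at the shape of such a header, not the actual `.amem` specification.

```python
import mmap
import os
import struct
import tempfile

# Hypothetical toy header; the real .amem layout is not published here.
HEADER = struct.Struct("<4sIIIQ")  # magic, version, node count, edge count, content offset

path = os.path.join(tempfile.mkdtemp(), "brain.amem")
with open(path, "wb") as f:
    f.write(HEADER.pack(b"AMEM", 1, 42, 97, HEADER.size))
    f.write(b"\x00" * 64)  # stand-in for node/edge tables, content block, vectors, indexes

# Reading needs no database: map the file and unpack directly from the mapped bytes.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    magic, version, nodes, edges, content_off = HEADER.unpack_from(mm, 0)

print(magic, version, nodes, edges)
```

Because the tables live at fixed offsets inside one mapped file, a runtime can query the graph without loading it fully into memory and without any external vector service.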
The Full Query Engine
Querying is navigation, not blind similarity. The engine supports traversal, temporal diffing, causal impact, contradiction resolution, pattern recall, and structural gap detection.
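One of the listed operations, causal impact, can be sketched as plain graph navigation: walk CAUSED_BY edges breadth-first instead of ranking text chunks by similarity. The ids and adjacency map are illustrative, not the engine's API.

```python
from collections import deque

# Toy causal graph: event -> the events it was CAUSED_BY.
caused_by = {
    10: [7, 8],   # the recommendation was caused by events 7 and 8
    7: [3],
    8: [3, 5],
}

def causal_roots(event: int) -> set:
    """Causal-impact query as traversal: collect everything upstream of an event."""
    seen, queue = set(), deque([event])
    while queue:
        node = queue.popleft()
        for cause in caused_by.get(node, []):
            if cause not in seen:
                seen.add(cause)
                queue.append(cause)
    return seen

print(sorted(causal_roots(10)))  # every event upstream of event 10
```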
Memory Formation Pipeline
After each interaction the system extracts events, links them to existing cognition, updates confidence and decay, writes an EPISODE compression node, and incrementally refreshes indexes. Memory formation runs asynchronously so response latency remains stable.
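The pipeline stages above could be sketched like this, with formation handed to a background worker so the response path never waits on it. The stage implementations are placeholder stand-ins, not AgenticMemory's actual extraction logic.

```python
import queue
import threading

def form_memory(interaction: str) -> dict:
    """Hypothetical stand-ins for the pipeline stages described in the text."""
    events = [s for s in interaction.split(". ") if s]          # 1. extract events
    links = max(len(events) - 1, 0)                             # 2. link to existing cognition
    confidence = 0.8                                            # 3. update confidence/decay
    episode = {"summary": interaction[:40], "n": len(events)}   # 4. EPISODE compression node
    return {"events": events, "links": links,
            "confidence": confidence, "episode": episode}       # 5. index refresh would follow

# Formation runs off the response path: the agent replies first, memory forms after.
inbox, results = queue.Queue(), []

def worker():
    while True:
        item = inbox.get()
        if item is None:
            break
        results.append(form_memory(item))

t = threading.Thread(target=worker)
t.start()
inbox.put("Supplier B confirmed. Lead time dropped to 9 days.")
inbox.put(None)  # sentinel: no more interactions
t.join()
print(results[0]["episode"]["n"])
```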
Portable Agent Brain
Your `.amem` travels across agents and environments. Knowledge continuity belongs to you, not to a single assistant vendor. Any compatible runtime can mount the same cognitive history with causal context intact.
How Memory Serves The Models
Solen recommends changing suppliers. Memory stores the reasoning chain — 3 facts, 2 decisions, 1 inference. Six months later, when someone asks "why did we switch?", the chain is traversable. Nothing was forgotten. Nothing was hallucinated.
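The supplier-switch scenario can be sketched as a traversal that replays the stored chain. The node ids, labels, and SUPPORTS structure here are invented for illustration; the text specifies only that the chain holds 3 facts, 2 decisions, and 1 inference.

```python
# Toy reasoning chain for "why did we switch suppliers?" — ids and labels hypothetical.
chain = {
    "D2": ("DECISION", "Switch to supplier B", ["D1", "I1"]),
    "D1": ("DECISION", "Phase out supplier A", ["F1", "F2"]),
    "I1": ("INFERENCE", "B is cheaper at our volume", ["F3"]),
    "F1": ("FACT", "A missed 3 delivery windows", []),
    "F2": ("FACT", "B quoted 12% lower unit cost", []),
    "F3": ("FACT", "Volume forecast: +30% next year", []),
}

def why(node: str, depth: int = 0) -> list:
    """Traverse SUPPORTS edges depth-first to replay the stored reasoning chain."""
    kind, text, supports = chain[node]
    lines = ["  " * depth + f"{kind}: {text}"]
    for s in supports:
        lines.extend(why(s, depth + 1))
    return lines

trace = why("D2")
print("\n".join(trace))  # 3 facts, 2 decisions, 1 inference — nothing reconstructed from scratch
```

Six months later the answer is a walk over stored edges, not a fresh generation, which is what makes it auditable.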
Hydra: the living proof that the stack composes.
68 Rust crates. Persistent memory via AgenticMemory. Self-writing genome. Constitutional governance via AgenticContract. When Hydra needs to reason about supply chain, it calls Solen. When it needs finance, it calls Verac. The first customer of the entire stack.
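The domain dispatch described above could look like the sketch below. Only the model names Solen and Verac come from the text; the registry and routing function are hypothetical, and Hydra itself is Rust, so this is a shape illustration rather than its API.

```python
# Hypothetical dispatch: route each question to the domain specialist named in the text.
SPECIALISTS = {
    "supply_chain": "Solen",
    "finance": "Verac",
}

def route(domain: str) -> str:
    """Pick the domain-specialist model; refuse unknown domains rather than guess."""
    model = SPECIALISTS.get(domain)
    if model is None:
        raise ValueError(f"no specialist registered for domain: {domain}")
    return model

print(route("supply_chain"))  # → Solen
print(route("finance"))       # → Verac
```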
Built for teams that need auditable AI decisions in regulated industries.
We collaborate with research labs, enterprise engineering teams, and infrastructure sponsors. Our stack combines domain-specialist models for reasoning, open-source substrate for memory and governance, and deterministic verification for every decision.