// SECTION: ECOSYSTEM_PROJECTS
009

The Agentra Stack

Domain-specialist models on open-source infrastructure. Every decision auditable. Reasoning, substrate, and settlement — three layers, one stack.

// SECTION: REASONING_LAYER
010

Reasoning Layer

Domain-specialist open-weight fine-tunes. Apache 2.0.

Supply Chain Management | Apache 2.0

active.model

Solen

Supply Chain Management

Powers: Nexus Planner

Base: Open-weight foundation

License: Apache 2.0

View on HuggingFace | See Scenarios

Why Domain Models Beat Generalists

Thinks like a supply chain director. Not a chatbot that knows about supply chain. Solen has read nothing but supply chain for its entire existence — every research paper, every SEC filing, every court case where a supply chain failure ended in litigation.

The Six Training Categories

Standard models are trained on questions and answers. These models are trained on reasoning processes.

Verification: Show full working and independently confirm the answer.
Incomplete Data: Identify exactly what is missing before answering.
Error Detection: Catch the mistake in a scenario that looks correct.
Expert Conflict: Resolve two valid positions using the data that settles it.
Consequence Reasoning: Reason forward to second- and third-order effects.
Failure Pattern Recognition: Know the shape of wrong thinking before it compounds.

The Calibration Layer

Every answer includes three things: how certain this answer is given the inputs provided, which input — if wrong by 20% — most changes the result, and what to verify before acting on the recommendation.

A calculator gives you a number. A senior advisor tells you how much to trust it.

The model tells you exactly what it needs to know, and why, before it answers. It reasons through incomplete data with calibrated confidence.
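The three-part calibrated answer described above can be sketched as a data shape. This is an illustration only: the field names, the `summarize` helper, and the output format are invented for this example, not the actual Solen output schema.

```rust
// Sketch only: field names and the summary format are assumptions,
// not the real Solen API.
struct CalibratedAnswer {
    recommendation: String,
    confidence: f64,                   // certainty given the inputs provided (0.0..=1.0)
    most_sensitive_input: String,      // input that, if wrong by 20%, most changes the result
    verify_before_acting: Vec<String>, // checks to run before acting on the recommendation
}

fn summarize(a: &CalibratedAnswer) -> String {
    format!(
        "{} (confidence {:.0}%; most sensitive input: {}; verify first: {})",
        a.recommendation,
        a.confidence * 100.0,
        a.most_sensitive_input,
        a.verify_before_acting.join(", ")
    )
}

fn main() {
    // Hypothetical recommendation, for illustration only.
    let answer = CalibratedAnswer {
        recommendation: "Dual-source the controller board".into(),
        confidence: 0.72,
        most_sensitive_input: "supplier lead time".into(),
        verify_before_acting: vec!["current lead-time quotes".into()],
    };
    println!("{}", summarize(&answer));
}
```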

// SECTION: SUBSTRATE_FLAGSHIP
011

The Substrate Starts With Memory

Every model decision gets remembered. Every reasoning chain is traversable.

active.project

AgenticMemory

Persistent cognitive graph memory for cross-session reasoning continuity.

147 MCP tools

Portable artifact: .amem

View Repo | Star on GitHub

Memory Is Not Search

Most systems treat memory as retrieval over flattened text chunks. AgenticMemory treats memory as graph cognition: nodes for what the agent learned or decided, edges for why those events connect, and traversal for reasoning history.

The Atom: Cognitive Event

The smallest memory unit is a cognitive event, not a transcript chunk: FACT, DECISION, INFERENCE, CORRECTION, SKILL, and EPISODE. Each event is written with confidence, timestamps, access dynamics, feature vectors, and direct edge references for O(1) node access.

FACT — learned truth about user, system, or world state.
DECISION — selected action and why it was chosen.
INFERENCE — synthesized conclusion from multiple events.
CORRECTION — explicit update that supersedes prior belief.
SKILL — procedural pattern for execution in context.
EPISODE — compressed session-level meaning summary.
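The six event kinds above suggest a natural data shape. The sketch below is hypothetical: the enum variants mirror the list, but the struct fields and names are assumptions for illustration, not AgenticMemory's actual API.

```rust
// Hypothetical shape for the cognitive-event atom; variant names mirror
// the six kinds above, field names are assumptions, not the crate API.
#[derive(Debug, Clone, Copy, PartialEq)]
enum EventKind {
    Fact,       // learned truth about user, system, or world state
    Decision,   // selected action and why it was chosen
    Inference,  // synthesized conclusion from multiple events
    Correction, // explicit update that supersedes prior belief
    Skill,      // procedural pattern for execution in context
    Episode,    // compressed session-level meaning summary
}

struct CognitiveEvent {
    kind: EventKind,
    content: String,
    confidence: f64,       // written at formation time, decays later
    timestamp_ms: u64,
    edge_refs: Vec<usize>, // direct node indices, so neighbor access is O(1)
}

fn main() {
    let event = CognitiveEvent {
        kind: EventKind::Decision,
        content: "Switched to supplier B".into(),
        confidence: 0.9,
        timestamp_ms: 1_700_000_000_000,
        edge_refs: vec![3, 7], // points straight at related nodes
    };
    println!("{:?} event with {} edges", event.kind, event.edge_refs.len());
}
```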

Edges Make It A Brain

Relationship edges encode causality and truth evolution: CAUSED_BY, SUPPORTS, CONTRADICTS, SUPERSEDES, RELATED_TO, PART_OF, TEMPORAL_NEXT. This preserves why decisions happened and what changed over time.
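One way to picture truth evolution over these edges: a node stays "current" until another node supersedes it. The variant names below follow the seven relationship types above, but the resolution rule is a sketch under that assumption, not AgenticMemory's actual code.

```rust
// Illustrative edge model; the resolution rule is a sketch, not the
// crate's real implementation.
#[derive(Debug, Clone, Copy, PartialEq)]
enum EdgeKind {
    CausedBy,
    Supports,
    Contradicts,
    Supersedes,
    RelatedTo,
    PartOf,
    TemporalNext,
}

struct Edge {
    from: usize, // source node id
    to: usize,   // target node id
    kind: EdgeKind,
}

// "What is the current truth?" under this sketch: a node is current
// unless some SUPERSEDES edge points at it.
fn is_current(node: usize, edges: &[Edge]) -> bool {
    !edges
        .iter()
        .any(|e| e.kind == EdgeKind::Supersedes && e.to == node)
}

fn main() {
    // Node 2 (a CORRECTION) supersedes node 1 (an earlier FACT).
    let edges = vec![Edge { from: 2, to: 1, kind: EdgeKind::Supersedes }];
    println!("node 1 current: {}", is_current(1, &edges));
    println!("node 2 current: {}", is_current(2, &edges));
}
```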

One File, Memory-Mappable

`.amem` is a binary graph file with header, node table, edge table, content block, vectors, and indexes. It is memory-mappable, portable, and query-ready without external databases or managed vector services.
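A single-file, memory-mappable layout implies fixed-width header fields pointing at each region. The header below is a hypothetical reconstruction from the prose: the offsets, field names, and magic bytes are all assumptions, since the real `.amem` layout is not reproduced here.

```rust
// Hypothetical on-disk header for a single-file, mmappable graph.
// Offsets, field names, and magic bytes are assumptions for illustration.
#[repr(C)]
struct AmemHeader {
    magic: [u8; 4],         // e.g. b"AMEM"
    version: u32,
    node_table_offset: u64, // fixed-size node records
    node_count: u64,
    edge_table_offset: u64, // fixed-size edge records
    edge_count: u64,
    content_offset: u64,    // variable-length content block
    vectors_offset: u64,    // feature vectors for similarity queries
    index_offset: u64,      // BM25 / structural indexes
}

fn main() {
    // Fixed-width fields mean the header can be read straight out of an
    // mmapped file with no parsing pass and no external database.
    println!("header is {} bytes", std::mem::size_of::<AmemHeader>());
}
```

With `#[repr(C)]`, the layout is deterministic (here, 64 bytes), which is what makes mounting the file with `mmap` and reading regions in place possible.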

The Full Query Engine

Querying is navigation, not blind similarity. The engine supports traversal, temporal diffing, causal impact, contradiction resolution, pattern recall, and structural gap detection.

#  | Query Type       | What It Answers                          | Status
1  | Traversal        | Why did I decide this?                   | Operational
2  | Pattern          | Show me all decisions from last week     | Operational
3  | Temporal         | What changed between session 5 and 20?   | Operational
4  | Causal / Impact  | What breaks if this fact is wrong?       | Operational
5  | Similarity       | What else do I know about this topic?    | Operational
6  | Context          | Give me everything around this node      | Operational
7  | Resolve          | What is the current truth?               | Operational
8  | BM25 Text Search | Find memories containing these words     | Operational
9  | Hybrid Search    | BM25 + vector fused ranking              | Operational
10 | Graph Centrality | What are my foundational beliefs?        | Operational
11 | Shortest Path    | How are these concepts connected?        | Operational
12 | Belief Revision  | If I learn X, what breaks?               | Operational
13 | Reasoning Gaps   | Where am I guessing?                     | Operational
14 | Analogical       | Have I solved something like this before?| Operational
15 | Consolidation    | Clean and strengthen my brain            | Operational
16 | Drift Detection  | How has my understanding shifted?        | Operational
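Query type 1, Traversal ("why did I decide this?"), can be sketched as a walk over CAUSED_BY edges. A real engine would navigate the mmapped `.amem` graph; in this minimal sketch the edges live in a `HashMap` and every name is illustrative.

```rust
// Minimal sketch of the Traversal query: walk CAUSED_BY edges from a
// decision node, collecting the reasoning chain. Illustrative only.
use std::collections::{HashMap, HashSet};

fn why(decision: &str, caused_by: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    // Depth-first walk; the `seen` set guards against cycles.
    let mut chain = Vec::new();
    let mut seen = HashSet::new();
    let mut stack = vec![decision];
    while let Some(node) = stack.pop() {
        if seen.insert(node) {
            chain.push(node.to_string());
            if let Some(parents) = caused_by.get(node) {
                stack.extend(parents.iter().copied());
            }
        }
    }
    chain
}

fn main() {
    let mut caused_by = HashMap::new();
    caused_by.insert("switched to supplier B", vec!["lead times doubled"]);
    caused_by.insert("lead times doubled", vec!["port congestion"]);

    // The full chain behind the decision, oldest cause last.
    println!("{:?}", why("switched to supplier B", &caused_by));
}
```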

Memory Formation Pipeline

After each interaction the system extracts events, links them to existing cognition, updates confidence and decay, writes an EPISODE compression node, and incrementally refreshes indexes. Memory formation runs asynchronously so response latency remains stable.
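The five pipeline steps above can be sketched as a sequence of calls. Every function name here is invented to mirror the prose (the step bodies are stubs), so treat this as a shape, not the actual crate API.

```rust
// Hedged sketch of the five-step formation pipeline; names are invented
// to mirror the prose, and step bodies are stubs.
struct Event {
    kind: &'static str,
    content: String,
}

fn extract_events(interaction: &str) -> Vec<Event> {
    // 1. Extract cognitive events from the raw interaction.
    vec![Event { kind: "FACT", content: interaction.to_string() }]
}

fn link_to_existing(_events: &mut Vec<Event>) {
    // 2. Link new events to existing cognition (edges omitted here).
}

fn update_confidence_and_decay(_events: &mut Vec<Event>) {
    // 3. Refresh confidence and decay on touched nodes.
}

fn compress_episode(interaction: &str) -> Event {
    // 4. Write one EPISODE node summarizing the session.
    Event { kind: "EPISODE", content: format!("summary of: {interaction}") }
}

fn refresh_indexes(_events: &[Event]) {
    // 5. Incrementally refresh indexes; async in the real system so
    //    response latency stays stable.
}

fn form_memory(interaction: &str) -> Vec<Event> {
    let mut events = extract_events(interaction);
    link_to_existing(&mut events);
    update_confidence_and_decay(&mut events);
    events.push(compress_episode(interaction));
    refresh_indexes(&events);
    events
}

fn main() {
    let events = form_memory("supplier B confirmed new lead times");
    println!("last event kind: {}", events.last().unwrap().kind);
}
```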

Portable Agent Brain

Your `.amem` travels across agents and environments. Knowledge continuity belongs to you, not to a single assistant vendor. Any compatible runtime can mount the same cognitive history with causal context intact.

How Memory Serves The Models

Solen recommends changing suppliers. Memory stores the reasoning chain — 3 facts, 2 decisions, 1 inference. Six months later, when someone asks "why did we switch?", the chain is traversable. Nothing was forgotten. Nothing was hallucinated.

// SECTION: SUBSTRATE_DIRECTORY
012
substrate directory | 18 projects | MIT

#  | Project          | Artifact  | Tools
01 | AgenticMemory    | .amem     | 147
02 | AgenticVision    | .avis     | 104
03 | AgenticCodebase  | .acb      | 73
04 | AgenticIdentity  | .aid      | 42
05 | AgenticTime      | .atime    | 19
06 | AgenticContract  | .acon     | 38
07 | AgenticComm      | .acomm    | 17
08 | AgenticPlanning  | .aplan    | 13
09 | AgenticCognition | .acog     | 24
10 | AgenticReality   | .areal    | 15
11 | AgenticVeritas   | .averitas | 10
12 | AgenticData      | .adat     |
13 | AgenticWorkflow  | .awf      |
14 | AgenticConnect   | .acnx     |
15 | agentic-forge    |           | 15
16 | agentic-aegis    |           |
17 | agentic-evolve   |           |
18 | agentic-sdk      |           |
All open source. MIT licensed. Published on crates.io, PyPI, npm.
// SECTION: SHOWCASE
013
showcase

Hydra

The living proof that the stack composes.

68 Rust crates. Persistent memory via AgenticMemory. Self-writing genome. Constitutional governance via AgenticContract. When Hydra needs to reason about supply chain, it calls Solen. When it needs finance, it calls Verac. The first customer of the entire stack.

View on GitHub
// SECTION: SETTLEMENT_LAYER
014

XAP Protocol

Six-primitive open economic protocol for autonomous agent commerce. 115 validation tests. Draft v0.2. MIT.

View Repo

Verity Engine

Open truth engine. 5 Rust crates. Deterministic replay of every settlement decision. MIT.

View Repo
// SECTION: COLLABORATION_CTA
003B

Built for teams that need auditable AI decisions in regulated industries

We collaborate with research labs, enterprise engineering teams, and infrastructure sponsors. Our stack combines domain-specialist models for reasoning, open-source substrate for memory and governance, and deterministic verification for every decision.