Open-source cognitive engine

Your agent forgets everything between sessions. Ours doesn't.

cortex-engine gives AI agents persistent memory — observation, belief tracking, dream consolidation, and retrieval modeled on neuroscience. Not another vector database wrapper.

$ npm install @fozikio/cortex-engine

We built this because we needed it. Idapixl — the agent that maintains this project — kept losing context between sessions. Decisions got re-made. Preferences got forgotten. So we built a memory system. Then belief tracking. Then dream consolidation. Then we open-sourced the whole thing because infrastructure shouldn't be proprietary.

Under the hood

Dream consolidation

Memories don't just accumulate. On a schedule, cortex-engine runs a two-phase consolidation modeled on how biological sleep actually works.

NREM phase: cluster similar observations, compress redundancy, generate abstractions. REM phase: forge cross-domain connections, score link strength, surface insights the agent didn't explicitly ask for.

The result: 300 consolidated memories outperform 500 raw observations. Structure beats volume.
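To make the two phases concrete, here is a minimal sketch of what one consolidation pass could look like. The types, the topic-based clustering, and the link-scoring formula are all illustrative stand-ins, not the actual cortex-engine internals:

```typescript
// Hypothetical sketch of a two-phase consolidation pass.
// Types and helpers are illustrative, not the cortex-engine API.
type Observation = { id: number; topic: string; text: string };
type Memory = { summary: string; sources: number[] };
type Link = { a: string; b: string; strength: number };

// NREM: cluster similar observations (here, naively, by topic) and
// compress each cluster into a single abstracted memory.
function nrem(observations: Observation[]): Memory[] {
  const clusters = new Map<string, Observation[]>();
  for (const o of observations) {
    const bucket = clusters.get(o.topic) ?? [];
    bucket.push(o);
    clusters.set(o.topic, bucket);
  }
  return [...clusters.entries()].map(([topic, obs]) => ({
    summary: `${topic}: ${obs.length} observations compressed`,
    sources: obs.map((o) => o.id),
  }));
}

// REM: forge cross-cluster links and score their strength
// (here with a toy score based on cluster-size similarity).
function rem(memories: Memory[]): Link[] {
  const links: Link[] = [];
  for (let i = 0; i < memories.length; i++) {
    for (let j = i + 1; j < memories.length; j++) {
      links.push({
        a: memories[i].summary,
        b: memories[j].summary,
        strength:
          1 / (1 + Math.abs(memories[i].sources.length - memories[j].sources.length)),
      });
    }
  }
  return links.sort((x, y) => y.strength - x.strength);
}
```

The key structural point survives even in this toy version: NREM reduces volume within clusters, REM adds connections across them, so the consolidated store carries more structure per memory than the raw log did.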


Belief tracking

Agents hold positions. Not just data — opinions, preferences, working theories. When new evidence contradicts a stored belief, the system detects the conflict via prediction error and flags it for revision.

This is how personality emerges from experience instead of from a system prompt. An agent with 200 observations about distributed systems doesn't need to be told “you care about distributed systems.” It just knows.
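Conflict detection via prediction error can be sketched in a few lines. Everything here, from the `Belief` shape to the threshold, is a hypothetical illustration of the idea rather than the cortex-engine implementation:

```typescript
// Illustrative sketch of belief-conflict detection via prediction error.
// Names and thresholds are hypothetical, not the cortex-engine API.
type Belief = { statement: string; confidence: number };

// Prediction error: how strongly new evidence disagrees with a held
// belief, scaled by how confident the agent was. evidenceSupport is in
// [-1, 1], where -1 is a direct contradiction and 1 full agreement.
function predictionError(belief: Belief, evidenceSupport: number): number {
  return Math.max(0, -evidenceSupport) * belief.confidence;
}

// Flag the belief for revision when the error crosses a threshold.
// Confidently held beliefs are easier to surprise, which is the point:
// the stronger the position, the louder the contradiction.
function needsRevision(
  belief: Belief,
  evidenceSupport: number,
  threshold = 0.5,
): boolean {
  return predictionError(belief, evidenceSupport) > threshold;
}
```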


Retrieval that thinks

Not keyword search. Not cosine similarity over a flat embedding space.

cortex-engine uses spreading activation across the memory graph, Thousand Brains voting to weigh competing retrieval paths, and epistemic foraging to explore adjacent concepts. When you query “deployment preferences,” the system traverses through related memories about infrastructure choices, reliability concerns, and past incidents — even if none were tagged with “deployment.”


Goal-directed memory

goal_set creates desired future states. These generate forward prediction error that biases consolidation and exploration toward what matters.

The agent doesn't just remember. It remembers toward something.
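One way to picture goal-driven bias is as a salience boost for goal-relevant memories. `goal_set` is the cortex-engine entry point named above; everything in this sketch, including the keyword matching and boost value, is a hypothetical simplification:

```typescript
// Hypothetical sketch of goal-directed salience biasing.
// Keyword overlap stands in for real forward prediction error.
type Goal = { description: string; keywords: string[] };
type ScoredMemory = { content: string; salience: number };

// Boost salience for memories that overlap with an active goal, so
// consolidation and exploration lean toward goal-relevant material.
function biasTowardGoal(
  memories: ScoredMemory[],
  goal: Goal,
  boost = 0.3,
): ScoredMemory[] {
  return memories.map((m) => {
    const relevant = goal.keywords.some((k) =>
      m.content.toLowerCase().includes(k.toLowerCase()),
    );
    return relevant ? { ...m, salience: Math.min(1, m.salience + boost) } : m;
  });
}
```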

Memory in ten lines

Give your agent persistent, semantic memory. No vector DB setup, no embedding pipelines — cortex-engine handles it.

agent.ts
import { CortexEngine } from '@fozikio/cortex-engine';

const cortex = new CortexEngine({
  provider: 'sqlite',
  embeddings: 'ollama',
});

// Your agent observes something
await cortex.observe('User prefers TypeScript over Python');

// Later, it recalls relevant context
const memories = await cortex.query('language preferences');
// → [{ content: 'User prefers TypeScript...', salience: 0.92 }]

Everything is MIT licensed. The engine, all nine plugins, the CLI, the safety layer. We don't believe infrastructure should be proprietary. If this is useful to you, it stays free.