Institutional memory infrastructure

A knowledge graph that learns. Compile context from your organization's accumulated knowledge. Fewer tokens over time, more relevant answers.

Built on a self-organizing graph ontology with attention-weighted edges.

# ingest your knowledge base
$ memex ingest ./docs --watch

# query with context compilation
$ memex query "how do we handle auth?"

# attention-weighted context returned
→ 12 relevant sources (847 tokens)
→ via 3 entity hops, 0.023s

"Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, 'memex' will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory."

— Vannevar Bush, "As We May Think" // 1945

How it works // context compilation

Traditional RAG retrieves chunks. Memex compiles context by traversing a knowledge graph built from attention patterns during ingestion.

GRAPH ONTOLOGY

    ┌──────────────┐          ATTENDED           ┌──────────────┐
    │   Person     │─────────────────────────────│   Concept    │
    │  "Sarah"     │         weight: 0.84        │  "auth-v2"   │
    └──────────────┘                             └──────────────┘
           │                                            │
           │ ATTENDED                                   │ ATTENDED
           │ weight: 0.71                               │ weight: 0.92
           ▼                                            ▼
    ┌──────────────┐          ATTENDED           ┌──────────────┐
    │   Concept    │─────────────────────────────│   Event      │
    │  "session    │         weight: 0.67        │  "v2.4       │
    │   tokens"    │                             │   release"   │
    └──────────────┘                             └──────────────┘
           │                                            │
           │ EXTRACTED_FROM                             │ EXTRACTED_FROM
           ▼                                            ▼
    ┌──────────────┐                             ┌──────────────┐
    │   Source     │                             │   Source     │
    │ #engineering │                             │  sprint mtg  │
    │  Oct 15      │                             │  transcript  │
    └──────────────┘                             └──────────────┘

Node types: Person, Concept, Event, Source
Edge types: ATTENDED (weighted co-occurrence), EXTRACTED_FROM (provenance)
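
As a rough sketch of what compiling context over such a graph could look like, here is a toy traversal in Python. The graph contents mirror the diagram above, but the `compile_context` function, its parameters, and the scoring rule (product of edge weights along a path) are illustrative assumptions, not Memex's actual implementation:

```python
# Toy attention-weighted traversal (illustrative only): edges carry
# weights, and context is compiled by following high-weight paths from
# query-matched entities out to Source nodes.

GRAPH = {
    "Sarah":          [("auth-v2", "ATTENDED", 0.84), ("session tokens", "ATTENDED", 0.71)],
    "auth-v2":        [("v2.4 release", "ATTENDED", 0.92)],
    "session tokens": [("v2.4 release", "ATTENDED", 0.67),
                       ("#engineering Oct 15", "EXTRACTED_FROM", 1.0)],
    "v2.4 release":   [("sprint mtg transcript", "EXTRACTED_FROM", 1.0)],
}
SOURCES = {"#engineering Oct 15", "sprint mtg transcript"}

def compile_context(seeds, max_hops=3, min_weight=0.5):
    """Walk outward from query-matched entities, keeping the strongest
    path weight (product of edge weights) seen for each source."""
    best = {}  # source -> strongest path weight
    frontier = [(seed, 1.0, 0) for seed in seeds]
    while frontier:
        node, weight, hops = frontier.pop()
        if node in SOURCES:
            best[node] = max(best.get(node, 0.0), weight)
            continue
        if hops == max_hops:
            continue
        for neighbor, _edge_type, w in GRAPH.get(node, []):
            if weight * w >= min_weight:  # prune weak paths early
                frontier.append((neighbor, weight * w, hops + 1))
    return sorted(best.items(), key=lambda kv: -kv[1])  # strongest first

print(compile_context(["Sarah"]))
```

Multiplying weights along a path makes long, weak chains fade naturally, which is one plausible way a hop limit plus a weight floor could bound traversal cost.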

// attention-based memory

Entity relationships are weighted by co-occurrence. The graph learns what's relevant during ingestion, not at query time.
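
As an illustration of ingestion-time weighting, the sketch below turns raw co-occurrence counts into normalized edge weights. The cosine-style normalization and the `cooccurrence_weights` helper are assumptions made for the example; the page doesn't specify the actual weighting scheme:

```python
# Sketch: derive edge weights from entity co-occurrence at ingestion time.
# The cosine-style normalization is an illustrative choice, not Memex's
# documented scheme.
from collections import Counter
from itertools import combinations
from math import sqrt

def cooccurrence_weights(documents):
    """documents: one set of extracted entities per ingested chunk."""
    entity_counts = Counter()
    pair_counts = Counter()
    for entities in documents:
        entity_counts.update(entities)
        pair_counts.update(combinations(sorted(entities), 2))
    return {
        (a, b): count / sqrt(entity_counts[a] * entity_counts[b])
        for (a, b), count in pair_counts.items()
    }

docs = [
    {"Sarah", "session tokens"},
    {"Sarah", "auth-v2", "session tokens"},
    {"auth-v2"},
]
print(cooccurrence_weights(docs))
```

Entities that always appear together approach weight 1.0; a single joint mention between otherwise common entities stays low.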

// context compilation

Don't just retrieve chunks. Traverse connections to compile comprehensive context from multiple sources in a single pass.

// incremental learning

New documents strengthen existing connections. Your knowledge graph improves with every ingestion. Memory that compounds.
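
One way this strengthening could be modeled is an exponential-moving-average update that nudges co-occurring edges toward 1.0 on each new ingestion; the rule and learning rate here are assumptions for illustration, not documented behavior:

```python
# Sketch: incremental edge strengthening. Each new document nudges the
# edges between its co-occurring entities toward 1.0 rather than
# recomputing the whole graph. The EMA rule is illustrative.
from itertools import combinations

def reinforce(edge_weights, new_entities, lr=0.1):
    """Strengthen edges between entities seen together in a new document."""
    for a, b in combinations(sorted(new_entities), 2):
        old = edge_weights.get((a, b), 0.0)
        edge_weights[(a, b)] = old + lr * (1.0 - old)  # moves toward 1.0
    return edge_weights

weights = {}
for _ in range(3):  # the same pair co-occurs in three ingested docs
    reinforce(weights, {"auth-v2", "session tokens"})
```

Repeated co-occurrence compounds (0.1 → 0.19 → 0.271 here), while edges that stop being reinforced simply stay put; a real system would likely also decay stale edges.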

Use case // company-wide memory

Knowledge lives in silos: Slack, email, meetings, docs. Memex connects them into a single queryable memory.

💬 slack
#engineering

Sarah: Switched to session tokens - JWT refresh rotation was a nightmare

Oct 15 • 2:34 PM
📧 email
Security Review Thread

Re: Auth changes - Need device fingerprinting before we can approve the new flow...

Oct 12 • from: security@
🎙️ meeting
Sprint Planning - Oct 10

Marcus: "...mobile SDK will break, we should target the 2.4 release window..."

transcript • 34:12
📄 docs
auth-migration.md

Session tokens eliminate refresh rotation. Breaking change: Mobile SDK requires update...

last edited Oct 16

↓ memex ingests & connects
↓ memex query

Compiled context from 4 sources:

Auth migration switched from JWT to session tokens due to refresh token rotation issues [slack]. Security team requires device fingerprinting before approval [email]. Mobile SDK will need breaking changes, targeting the v2.4 release [meeting]. Full migration guide available in docs [docs].

4 sources
3 entity hops
924 tokens
0.021s

The knowledge graph connects "Sarah" → "auth migration" → "security review" → "v2.4 release" across sources that never explicitly reference each other.

Token economics // compound savings

Memex isn't optimized for one-shot queries. It's optimized for organizations that query the same knowledge base repeatedly.

Traditional RAG
  • Embed every chunk independently
  • Retrieve top-k by similarity
  • Same token cost per query
  • No learning between queries
  • Context often misses connections

Memex
  • Build knowledge graph once
  • Traverse attention-weighted edges
  • Precision improves over time
  • Less noise = fewer tokens
  • Multi-hop context compilation
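
The crossover is simple arithmetic. Using the page's illustrative per-query figures (~2,400 tokens for naive RAG vs ~850 for Memex) and a made-up one-time graph-build cost, a sketch of the breakeven point:

```python
# Back-of-envelope breakeven. The per-query figures come from this page's
# chart; the one-time graph-build cost is a hypothetical assumption
# for the sketch.
RAG_PER_QUERY = 2400
MEMEX_PER_QUERY = 850
GRAPH_BUILD = 150_000  # hypothetical one-time ingestion cost, in tokens

def cumulative(per_query, upfront, n_queries):
    return upfront + per_query * n_queries

breakeven = next(n for n in range(1, 10_000)
                 if cumulative(MEMEX_PER_QUERY, GRAPH_BUILD, n)
                 <= cumulative(RAG_PER_QUERY, 0, n))
print(breakeven)  # 97 queries under these assumptions
```

Past that point every additional query is pure savings — the "compound savings" region of the cost curve.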
Cumulative token cost over queries

[Chart: tokens/query vs. number of queries (0 to 1k). Traditional RAG holds a constant cost (~2400 tokens); Memex starts higher, crosses breakeven, then compounds savings (~850 tokens).]
68% recall on HotpotQA
0.02s avg query latency
~40% token reduction*

* compared to naive top-k retrieval on repeated organizational queries