Institutional memory
infrastructure
A knowledge graph that learns. Compile context from your organization's accumulated knowledge. Fewer tokens over time, more relevant answers.
Built on a self-organizing graph ontology with attention-weighted edges.
$ memex ingest ./docs --watch
# query with context compilation
$ memex query "how do we handle auth?"
# attention-weighted context returned
→ 12 relevant sources (847 tokens)
→ via 3 entity hops, 0.023s
"Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, 'memex' will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory."
Vannevar Bush, "As We May Think" (1945)
How it works // context compilation
Traditional RAG retrieves chunks. Memex compiles context by traversing a knowledge graph built from attention patterns during ingestion.
GRAPH ONTOLOGY

┌──────────────┐          ATTENDED           ┌──────────────┐
│ Person       │─────────────────────────────│ Concept      │
│ "Sarah"      │        weight: 0.84         │ "auth-v2"    │
└──────────────┘                             └──────────────┘
       │                                            │
       │ ATTENDED                                   │ ATTENDED
       │ weight: 0.71                               │ weight: 0.92
       ▼                                            ▼
┌──────────────┐          ATTENDED           ┌──────────────┐
│ Concept      │─────────────────────────────│ Event        │
│ "session     │        weight: 0.67         │ "v2.4        │
│  tokens"     │                             │  release"    │
└──────────────┘                             └──────────────┘
       │                                            │
       │ EXTRACTED_FROM                             │ EXTRACTED_FROM
       ▼                                            ▼
┌──────────────┐                             ┌──────────────┐
│ Source       │                             │ Source       │
│ #engineering │                             │ sprint mtg   │
│ Oct 15       │                             │ transcript   │
└──────────────┘                             └──────────────┘

Node types: Person, Concept, Event, Source
Edge types: ATTENDED (weighted co-occurrence), EXTRACTED_FROM (provenance)
Entity relationships are weighted by co-occurrence. The graph learns what's relevant during ingestion, not at query time.
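A minimal sketch of what co-occurrence weighting could look like. This is an illustration, not Memex's actual implementation: the class name, the exponential-moving-average update, and the decay constant are all assumptions made for the example.

```python
from collections import defaultdict
from itertools import combinations

class AttentionGraph:
    """Toy attention-weighted graph: edges strengthen on co-occurrence."""

    def __init__(self, decay: float = 0.9):
        self.weights = defaultdict(float)  # (entity_a, entity_b) -> weight
        self.decay = decay

    def ingest(self, entities: list[str]) -> None:
        # Every pair of entities extracted from the same source
        # reinforces the edge between them.
        for a, b in combinations(sorted(set(entities)), 2):
            # Exponential moving update: repeated co-occurrence
            # pushes the weight toward 1.0; unseen pairs stay at 0.
            self.weights[(a, b)] = self.decay * self.weights[(a, b)] + (1 - self.decay)

    def weight(self, a: str, b: str) -> float:
        return self.weights[tuple(sorted((a, b)))]

g = AttentionGraph()
for _ in range(3):                  # three sources mention Sarah + auth-v2
    g.ingest(["Sarah", "auth-v2", "session tokens"])
g.ingest(["auth-v2", "v2.4 release"])  # one source links auth-v2 + release
```

After these ingestions, the `Sarah ↔ auth-v2` edge outweighs `auth-v2 ↔ v2.4 release`, which is exactly the "learns during ingestion" behavior: no per-query work is needed to know which connections are strong.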
Don't just retrieve chunks. Traverse connections to compile comprehensive context from multiple sources in a single pass.
New documents strengthen existing connections. Your knowledge graph improves with every ingestion. Memory that compounds.
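The traversal step could be sketched as a weighted breadth-first search: start from the entities matched in the query, follow edges above a weight threshold for a few hops, and collect the sources each reached entity was extracted from. Again a hypothetical sketch, with made-up data shapes, not Memex's API.

```python
import heapq

def compile_context(graph, provenance, seeds, max_hops=3, min_weight=0.5):
    """Weighted BFS: expand from seed entities along strong edges,
    collecting the sources each reached entity was extracted from."""
    # graph: {entity: [(neighbor, weight), ...]}   (ATTENDED edges)
    # provenance: {entity: [source, ...]}          (EXTRACTED_FROM edges)
    frontier = [(0, seed) for seed in seeds]       # (hops_so_far, entity)
    heapq.heapify(frontier)
    seen = set(seeds)
    sources = []
    while frontier:
        hops, entity = heapq.heappop(frontier)
        sources.extend(provenance.get(entity, []))
        if hops == max_hops:
            continue
        for neighbor, weight in graph.get(entity, []):
            if weight >= min_weight and neighbor not in seen:
                seen.add(neighbor)
                heapq.heappush(frontier, (hops + 1, neighbor))
    return list(dict.fromkeys(sources))            # dedupe, keep order

graph = {
    "auth-v2": [("session tokens", 0.92), ("v2.4 release", 0.67)],
    "session tokens": [("auth-v2", 0.92)],
    "v2.4 release": [("auth-v2", 0.67)],
}
provenance = {
    "auth-v2": ["#engineering Oct 15"],
    "session tokens": ["#engineering Oct 15"],
    "v2.4 release": ["sprint mtg transcript"],
}
ctx = compile_context(graph, provenance, seeds=["auth-v2"])
# ctx spans both sources in a single pass, without a similarity search
```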
Use case // company-wide memory
Knowledge lives in silos - Slack, email, meetings, docs. Memex connects them into a single queryable memory.
Sarah: Switched to session tokens - JWT refresh rotation was a nightmare
Re: Auth changes - Need device fingerprinting before we can approve the new flow...
Marcus: "...mobile SDK will break, we should target the 2.4 release window..."
Session tokens eliminate refresh rotation. Breaking change: Mobile SDK requires update...
Auth migration switched from JWT to session tokens due to refresh token rotation issues [slack]. Security team requires device fingerprinting before approval [email]. Mobile SDK will need breaking changes, targeting v2.4 release [meeting]. Full migration guide available in docs [docs].
The knowledge graph connects "Sarah" → "auth migration" → "security review" → "v2.4 release" across sources that never explicitly reference each other.
Token economics // compound savings
Memex isn't optimized for one-shot queries. It's optimized for organizations that query the same knowledge base repeatedly.
Traditional RAG
- Embed every chunk independently
- Retrieve top-k by similarity
- Same token cost per query
- No learning between queries
- Context often misses connections

Memex
- Build knowledge graph once
- Traverse attention-weighted edges
- Precision improves over time
- Less noise = fewer tokens
- Multi-hop context compilation
* compared to naive top-k retrieval on repeated organizational queries
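To make the compound-savings claim concrete, here is a back-of-the-envelope comparison. Every number below is an illustrative assumption for the sake of the arithmetic, not a measured Memex benchmark.

```python
# --- naive top-k retrieval (assumed figures) ---
chunk_tokens = 400              # assumed average chunk size
top_k = 10                      # chunks returned per query
naive_cost = top_k * chunk_tokens          # 4000 tokens per query

# --- graph-compiled context (assumed figures) ---
graph_sources = 12              # pruned, attention-weighted sources
avg_snippet_tokens = 70         # only the relevant span of each source
graph_cost = graph_sources * avg_snippet_tokens  # 840 tokens per query

# --- savings at organizational query volume ---
queries_per_month = 5000
tokens_saved = (naive_cost - graph_cost) * queries_per_month
```

Under these assumptions, the per-query gap (4,000 vs 840 tokens) compounds to roughly 15.8M tokens saved per month; the point is that per-query precision, not per-query cleverness, is what pays off at scale.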