Open Source SDK for LLM Memory

Inkess Memory

Memory that thinks like you do.
A plugin-based memory SDK for LLM applications. Remote storage, local runtime, zero latency.

import { InkessMemory } from '@inkess/memory'
import { extraction } from '@inkess/plugin-extraction'
import { retrieval } from '@inkess/plugin-retrieval'

const memory = new InkessMemory({
  server: { apiKey: process.env.INKESS_KEY },
  llm: { provider: 'anthropic', apiKey: process.env.ANTHROPIC_KEY },
  plugins: [extraction(), retrieval()],
})

await memory.init() // Pull memories from server

// Wrap any LLM call with automatic memory
const chat = memory.wrap(myLLMCall)
const reply = await chat([{ role: 'user', content: 'Hi!' }])

Built for Developers

Everything you need to give your AI application persistent, intelligent memory.

🧠

Human-like Memory

Four-layer architecture mirrors how humans remember: working, short-term, long-term, and archive. Memories decay, consolidate, and strengthen with use.
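The decay-and-strengthen behavior can be sketched in a few lines. This is an illustrative model only, not the SDK's actual implementation — the type names, half-life, and thresholds below are assumptions:

```typescript
// Illustrative sketch of layered memory (NOT the SDK's internals).
type Layer = 'working' | 'short-term' | 'long-term' | 'archive';

interface MemoryItem {
  content: string;
  strength: number; // decays over time, grows with use
  layer: Layer;
}

// Exponential decay: strength halves every `halfLifeDays` (value assumed).
function decay(item: MemoryItem, elapsedDays: number, halfLifeDays = 7): MemoryItem {
  const strength = item.strength * Math.pow(0.5, elapsedDays / halfLifeDays);
  return { ...item, strength };
}

// Accessing a memory strengthens it; strong memories consolidate upward
// (promotion thresholds here are arbitrary, for illustration).
function access(item: MemoryItem): MemoryItem {
  const strength = Math.min(1, item.strength + 0.2);
  const layer: Layer =
    strength > 0.8 ? 'long-term' : strength > 0.4 ? 'short-term' : item.layer;
  return { ...item, strength, layer };
}
```

A week-old unused memory at half strength that gets accessed again climbs back toward short-term — the "strengthen with use" behavior described above.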

🔌

Plugin System

All memory intelligence is pluggable. Use official plugins for extraction, retrieval, decay, consolidation, and conflict detection — or build your own.
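A custom plugin might look like the sketch below. The real plugin interface isn't shown on this page, so the `MemoryPlugin` shape and the `onStore` hook are assumptions for illustration:

```typescript
// Hypothetical plugin shape -- the SDK's real interface may differ.
interface MemoryPlugin {
  name: string;
  // Called before a memory is stored; return the (possibly transformed)
  // memory, or null to drop it.
  onStore?(memory: { content: string }): { content: string } | null;
}

// Example: a plugin that drops memories shorter than a minimum length.
function minLengthFilter(min: number): MemoryPlugin {
  return {
    name: 'min-length-filter',
    onStore: (memory) => (memory.content.length >= min ? memory : null),
  };
}
```

In the quick-start above, such a plugin would sit alongside `extraction()` and `retrieval()` in the `plugins` array.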

⚡

Zero Latency

Git-like architecture: pull memories once, run everything locally. No network round-trip on retrieval. Works offline, syncs when connected.
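The pull-once / read-locally / sync-later pattern can be sketched as follows — a simplified stand-in for the SDK's cache and sync engine, with invented names:

```typescript
// Sketch of the Git-like local cache (illustrative, not SDK code).
class LocalMemoryCache {
  private items = new Map<string, string>();
  private pending: Array<{ id: string; content: string }> = [];

  // One-time pull: seed the cache from a remote snapshot.
  pull(snapshot: Record<string, string>): void {
    for (const [id, content] of Object.entries(snapshot)) this.items.set(id, content);
  }

  // Retrieval is a local map lookup -- no network round-trip.
  get(id: string): string | undefined {
    return this.items.get(id);
  }

  // Writes apply locally first and queue for the next sync (works offline).
  set(id: string, content: string): void {
    this.items.set(id, content);
    this.pending.push({ id, content });
  }

  // When connected, flush queued writes to the server.
  flush(push: (ops: Array<{ id: string; content: string }>) => void): void {
    push(this.pending);
    this.pending = [];
  }
}
```

Reads never touch the network; only `pull` and `flush` do, which is why retrieval latency is zero and offline use works.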

🔑

One Key, Everywhere

Switch devices, switch projects. Your memories follow you. One API key, one init() call — full context restored.

🤖

Bring Your Own LLM

Use your own API keys for Anthropic, OpenAI, or any OpenAI-compatible provider. Server has zero LLM cost.

📦

MCP Ready

Built-in MCP server adapter. Connect to Claude Desktop, Cursor, or any MCP-compatible client with zero code.
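For Claude Desktop, MCP servers are registered under the `mcpServers` key of its `claude_desktop_config.json`. The package name and env variable below are assumptions (the page doesn't name the MCP adapter package):

```json
{
  "mcpServers": {
    "inkess-memory": {
      "command": "npx",
      "args": ["@inkess/mcp-server"],
      "env": { "INKESS_KEY": "your-api-key" }
    }
  }
}
```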

SDK + Server

SDK runs locally for zero latency. Server handles storage and sync. Like Git for memory.

SDK — Local Runtime
Cache + Plugin Manager + Sync Engine + LLM Provider
Extraction
Retrieval
Decay
Consolidation
Conflict
Your Plugin
Server — Remote Storage
Auth + Sync API + File Storage (R2/S3) + Indexes (KV)

How We Compare

No vector database. No graph database. No server-side LLM cost.

|               | Mem0        | Zep            | Letta            | Inkess Memory       |
|---------------|-------------|----------------|------------------|---------------------|
| Storage       | Vector DB   | Graph + Vector | PostgreSQL       | File Storage        |
| LLM Cost      | Server-side | Server-side    | Server-side      | Client-side (yours) |
| Latency       | Network     | Network        | Low              | Zero                |
| Offline       |             |                |                  | ✓                   |
| Memory Model  | Flat vector | Temporal graph | OS memory blocks | Human-like layers   |
| Plugin System |             |                |                  | ✓                   |

From personal assistants to production chatbots

🧠

Long-term Personal Memory

Your AI remembers preferences, habits, and past conversations across sessions. Tell it once, remembered forever.

💻

Context-Aware Coding Assistant

AI remembers your tech stack, coding style, and project decisions. Always follows your conventions.

🎧

Cross-Session Support Bot

Bot remembers previous tickets, solutions, and customer preferences. No repeating yourself.

📦

MCP-Powered Memory

Add persistent memory to Claude Desktop, Cursor, or any MCP client. Zero code — just configure and go.

Start in 30 Seconds

Install the SDK and start building with persistent memory.

npm install @inkess/memory @inkess/plugin-extraction @inkess/plugin-retrieval

View on GitHub