OpenClaw is a 430K-line TypeScript project that turns any messaging platform into an interface for an autonomous AI agent. Instead of reading about it, explore the architecture below.
OpenClaw: The Full Topology
A hub-and-spoke architecture that composes familiar systems abstractions into an autonomous AI agent.
430K lines of TypeScript · 15+ channels · 5,705+ skills · 197K GitHub stars
- WhatsApp (Baileys): unofficial WhatsApp Web API via the Baileys library
- Telegram (grammY): Bot API via the grammY framework
- Discord (discord.js): rich embeds, slash commands, voice
- iMessage (native macOS): AppleScript bridge
- Slack (Bolt): workspace bot via the Slack Bolt SDK
- +10 more: Matrix, Gmail, Voice, SMS, and more
- Gateway (WebSocket + scheduler)
- Session Resolver (namespace isolation): isolates each conversation's state
- Context Assembler (prompt building): builds the prompt from AGENTS.md + SOUL.md + memory
- Streaming LLM (Claude, GPT, etc.): streaming inference with any provider
- Tool Executor (5,705+ skills): executes skills, web search, calendar, code
- State Persister (JSONL + SQLite): durable state in JSONL logs + SQLite
- Memory System (MEMORY.md + search): virtual memory via MEMORY.md, daily logs, embeddings
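A minimal sketch of how those stages compose, in TypeScript since that is the project's language. Every type and function body below is an illustrative stand-in, not OpenClaw's actual API; only the stage names come from the diagram.

```ts
type InboundMessage = { channel: string; chatId: string; text: string };
type Session = { id: string; history: string[] };

const sessions = new Map<string, Session>();

// Session Resolver: one isolated namespace per (channel, chat) pair.
function resolveSession(msg: InboundMessage): Session {
  const id = `${msg.channel}:${msg.chatId}`;
  let session = sessions.get(id);
  if (!session) {
    session = { id, history: [] };
    sessions.set(id, session);
  }
  return session;
}

// Context Assembler: fold durable files and session history into one prompt.
function assembleContext(session: Session, msg: InboundMessage): string {
  const agents = "...contents of AGENTS.md...";   // read from disk in the real system
  const soul = "...contents of SOUL.md...";
  const memory = "...relevant MEMORY.md pages...";
  return [agents, soul, memory, ...session.history, msg.text].join("\n");
}

// Streaming LLM: placeholder for a provider call (Claude, GPT, etc.).
async function streamLLM(prompt: string): Promise<string> {
  return `reply to ${prompt.length} chars of context`;
}

// Gateway: every inbound message is threaded through the same stages.
export async function handleInbound(msg: InboundMessage): Promise<string> {
  const session = resolveSession(msg);
  const prompt = assembleContext(session, msg);
  const reply = await streamLLM(prompt);
  session.history.push(msg.text, reply); // the State Persister would also write JSONL + SQLite
  return reply;
}
```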
Primitive 1: Autonomous Invocation
The agent doesn't wait for messages. It can wake itself via cron, webhooks, voice, heartbeats, or Pub/Sub triggers, each scoped to an isolated session; the trigger types are listed below, with a sketch after the list.
Trigger types:
- Cron schedules (daily summaries, check-ins)
- Webhooks (GitHub, Stripe, custom)
- Voice wake-word detection
- Heartbeat / keep-alive pings
- Gmail Pub/Sub (email-triggered actions)
- Session isolation per conversation
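A sketch of the idea, reusing handleInbound from the pipeline sketch above: every trigger is just a synthetic inbound message routed to its own session, so a heartbeat or webhook flows through the same stages as a chat message. The wakeAgent helper, the port, and the session naming are assumptions for illustration.

```ts
import { createServer } from "node:http";
// handleInbound is the pipeline sketch above, assumed saved as pipeline.ts.
import { handleInbound } from "./pipeline";

// A trigger becomes a synthetic inbound message scoped to its own session.
async function wakeAgent(trigger: string, payload: string): Promise<void> {
  await handleInbound({ channel: "trigger", chatId: trigger, text: payload });
}

// Heartbeat: a keep-alive ping every 15 minutes, no user message required.
setInterval(() => {
  void wakeAgent("heartbeat", "heartbeat: check pending tasks and reminders");
}, 15 * 60 * 1000);

// Webhook: an HTTP endpoint (GitHub, Stripe, custom) that wakes the agent.
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    void wakeAgent(`webhook:${req.url ?? "/"}`, body);
    res.end("ok");
  });
}).listen(8787);
```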
Primitive 2: Externalized Memory
The most distinctive primitive: long-term memory lives on disk, not in the context window. The agent pages knowledge in and out the way an OS manages virtual memory.
Virtual Memory for Cognition
LLM Context (cache, volatile)
The active working set: fast but limited. Everything here is lost when the conversation ends or the context fills up.
Context usage: 85% (170K tokens used of a 200K limit)
- System prompt (4.2K)
- Conversation history (95K)
- Tool results (48K)
- Memory pages (22.8K)
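As a quick sanity check on the figures above, the four slices sum to the reported total:

```ts
// The context budget above: the four slices sum to 170K of a 200K limit.
const slices = { system: 4_200, history: 95_000, tools: 48_000, memoryPages: 22_800 };
const used = Object.values(slices).reduce((sum, n) => sum + n, 0); // 170,000 tokens
console.log(used, `${(used / 200_000) * 100}%`);                   // 170000 "85%"
```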
Local Disk (source of truth, durable)
Persistent storage that survives across sessions, with effectively unlimited capacity: the ground truth for all agent knowledge.
- MEMORY.md (12 KB)
- memory/2026-02-16.md (3 KB)
- sessions.sqlite (8 MB)
- embeddings.db (24 MB)
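A sketch of the page-out path under this layout: durable notes are appended to MEMORY.md and to a dated daily log, so disk, not the context window, is the source of truth. The saveMemory helper name is an assumption, not OpenClaw's API.

```ts
import { appendFileSync, mkdirSync } from "node:fs";

// Page-out: durable notes go to MEMORY.md and to a dated daily log,
// matching the file layout above. The helper name is illustrative.
export function saveMemory(note: string): void {
  const stamp = new Date().toISOString();
  const day = stamp.slice(0, 10); // e.g. "2026-02-16"
  mkdirSync("memory", { recursive: true });
  appendFileSync("MEMORY.md", `- [${stamp}] ${note}\n`);
  appendFileSync(`memory/${day}.md`, `- ${note}\n`);
}

saveMemory("User prefers the daily summary before 9am.");
```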
Search & Retrieval (page-in mechanism)
Dual search paths find relevant memories and page them back into context when needed.
- BM25 keyword: exact term matching, fast
- Vector similarity: semantic matching, flexible
The two result lists are merged and re-ranked, and the top-K memory pages are paged back into context.
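One standard way to implement the merge & re-rank step is reciprocal rank fusion; whether OpenClaw uses RRF specifically is an assumption, but the sketch shows how two ranked lists become one.

```ts
// Merge the BM25 and vector result lists with reciprocal rank fusion (RRF).
// Inputs are document ids, best match first; k=60 is the conventional constant.
function mergeAndRerank(
  bm25: string[],
  vector: string[],
  k = 60,
  topK = 5,
): string[] {
  const scores = new Map<string, number>();
  for (const list of [bm25, vector]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .slice(0, topK)              // only the top-K pages go back into context
    .map(([id]) => id);
}

// A page that ranks well on both paths wins the fused ranking.
mergeAndRerank(
  ["MEMORY.md#preferences", "memory/2026-02-16.md#standup"],
  ["MEMORY.md#preferences", "memory/2026-02-10.md#travel"],
);
```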
/compact: Context Paging
1. Write durable notes from context to MEMORY.md
2. Summarize conversation history (compress)
3. Drop redundant tool outputs from context
4. Rebuild the context window with essential state only
Before: 170K tokens (85% capacity) → After: 50K tokens (25% capacity)
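A sketch of those four steps as a single pass over the context window, reusing the hypothetical saveMemory helper from the page-out sketch; the item shapes and the summarize placeholder are assumptions.

```ts
// saveMemory is the page-out sketch above, assumed saved as memory.ts.
import { saveMemory } from "./memory";

type ContextItem = { kind: "system" | "history" | "tool" | "memory"; text: string };

// Placeholder: the real system would ask the LLM for the summary.
function summarize(items: ContextItem[]): string {
  return `summary of ${items.length} items`;
}

// The four /compact steps as one pass over the context window.
function compact(ctx: ContextItem[]): ContextItem[] {
  const history = ctx.filter((item) => item.kind === "history");
  saveMemory(summarize(history)); // 1. write durable notes to MEMORY.md
  const compressed: ContextItem = { kind: "history", text: summarize(history) }; // 2. compress history
  // 3. tool outputs are simply not carried over; 4. rebuild with essential state only.
  return [
    ...ctx.filter((item) => item.kind === "system"),
    compressed,
    ...ctx.filter((item) => item.kind === "memory"),
  ];
}
```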
The entire system is an exercise in composition: message queues, schedulers, filesystems, and virtual memory are familiar operating-system abstractions, here recomposed into an autonomous AI agent.