News · April 12, 2026

ClawVM: Virtual Memory for Stateful LLM Agents at EuroMLSys 2026

A new paper with Mofasshara Rafique will appear at EuroMLSys 2026 (the Sixth European Workshop on Machine Learning and Systems, co-located with EuroSys) in Edinburgh, Scotland.

CLAWVM: Harness-Managed Virtual Memory for Stateful Tool-Using LLM Agents

Last year I wrote about the surprising elegance of OpenClaw and its two core abstractions. One observation stuck with me: OpenClaw's memory system is "virtual memory for cognition. RAM is limited, disk is large, paging decides what comes back." I meant it as an analogy. ClawVM makes it literal.

Stateful tool-using agents treat the context window as working memory and accumulate state across turns. Current harnesses manage this state with best-effort heuristics. When the context fills up, things break in predictable ways: compaction drops state, resets skip flushes, writebacks silently clobber newer data. We catalogued these failures from real OpenClaw issues and community reports. Anyone who has debugged a storage system will recognize the pattern: lost writes, stale reads, torn pages.

You do not need the model to manage its own memory well. You need the harness to enforce a contract. The model can request, read, and update pages, but the harness owns residency, eviction, and durability. This is the same separation that made virtual memory work fifty years ago: applications do not manage physical memory, the OS does. The harness already assembles prompts, mediates tools, and observes lifecycle events, so it is the natural place to put that contract. ClawVM does exactly that.
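To make the contract concrete, here is a minimal sketch of harness-owned paging under assumed semantics: the model is limited to requesting, reading, and updating pages, while the harness decides residency (an LRU budget standing in for the context window), performs writeback on eviction, and flushes on lifecycle events. The class and method names (`HarnessPager`, `request`, `update`, `flush`) are illustrative, not the actual ClawVM API.

```python
from collections import OrderedDict

class HarnessPager:
    """Hypothetical harness-owned pager: the model may request, read, and
    update pages; residency, eviction, and durability live here."""

    def __init__(self, max_resident=2):
        self.max_resident = max_resident   # "RAM": context budget, in pages
        self.resident = OrderedDict()      # page_id -> contents, in LRU order
        self.backing = {}                  # "disk": durable store

    def request(self, page_id):
        """Model asks for a page; the harness decides what gets evicted."""
        if page_id in self.resident:
            self.resident.move_to_end(page_id)        # LRU touch
        else:
            if len(self.resident) >= self.max_resident:
                victim, contents = self.resident.popitem(last=False)
                self.backing[victim] = contents       # write back before eviction: no lost writes
            self.resident[page_id] = self.backing.get(page_id, "")
        return self.resident[page_id]

    def update(self, page_id, contents):
        """Model updates a resident page; durability is the harness's job."""
        if page_id not in self.resident:
            raise KeyError(f"page {page_id!r} not resident; call request() first")
        self.resident[page_id] = contents

    def flush(self):
        """Lifecycle hook (e.g. session reset): persist every resident page,
        so resets cannot skip flushes."""
        for page_id, contents in self.resident.items():
            self.backing[page_id] = contents
```

The point of the sketch is the separation, not the policy: the model never sees `backing` or chooses a victim, so the failure modes above (dropped state, skipped flushes, clobbered writebacks) become bugs in one enforced code path rather than emergent behavior of heuristics.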
