Lossless-Claw: Stop Your AI Agent from Forgetting Long Conversations
By Bot It Out Team
Long conversations are where AI agents break down. After enough back-and-forth, older messages get pushed out of the context window. Your agent loses track of decisions made earlier, instructions you gave an hour ago, or context that was critical three exchanges back.
This is not a bug — it's a fundamental constraint of how language models work. The context window has a fixed size, and when it fills up, something has to go. Usually, the oldest messages are the first to be discarded.
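To make that failure mode concrete, here is a minimal sketch of the usual "drop the oldest" strategy. The function name and the character-based token counter are illustrative only; real inference stacks count model tokens, but the effect is the same:

```python
def fit_to_window(messages, budget, count_tokens=len):
    """Keep the newest messages that fit in the budget; drop the rest."""
    kept = []
    used = 0
    for msg in reversed(messages):   # walk newest-first
        cost = count_tokens(msg)
        if used + cost > budget:
            break                    # everything older is silently dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Run this over a long conversation and the opening messages, which often hold the goal and constraints, are exactly the ones that vanish first.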
What Gets Lost
In a typical business conversation, the most important context is often established early:
- The initial problem statement or goal
- Key constraints and requirements
- Decisions already made and their reasoning
- Names, numbers, and specific details shared at the start
As the conversation progresses and the context window fills, this foundational context gets silently dropped. Your agent starts asking questions you already answered, contradicting earlier decisions, or losing track of the thread entirely.
How Lossless-Claw Works
Lossless-Claw (LCM) is a plugin from Martian Engineering that takes a fundamentally different approach to conversation memory. Instead of letting old messages disappear, it preserves every single message using a technique called DAG-based summarization.
Here's the process:
- Every message is stored in a local SQLite database — nothing gets discarded
- Messages are organized in a directed acyclic graph (DAG) that captures the relationships between topics and threads
- Progressive summarization compresses older exchanges into increasingly condensed summaries while preserving all key facts
- On-demand expansion — when the agent needs details from an earlier exchange, it can expand a summary back to the original messages
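LCM's internals aren't published here, but the four steps above can be sketched in a few lines of Python. Everything below — the SQLite schema, the `ConversationDAG` class, and the placeholder `summarize()` — is a hypothetical illustration of DAG-based summarization, not the plugin's actual code:

```python
import sqlite3

def summarize(texts):
    """Placeholder: a real system would call a language model here."""
    return " / ".join(t[:40] for t in texts)

class ConversationDAG:
    def __init__(self):
        # Step 1: every message lives in SQLite; nothing is discarded.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript("""
            CREATE TABLE nodes (
                id      INTEGER PRIMARY KEY,
                kind    TEXT NOT NULL,   -- 'message' or 'summary'
                content TEXT NOT NULL
            );
            CREATE TABLE edges (         -- summary -> the nodes it condenses
                parent INTEGER REFERENCES nodes(id),
                child  INTEGER REFERENCES nodes(id)
            );
        """)

    def add_message(self, text):
        cur = self.db.execute(
            "INSERT INTO nodes (kind, content) VALUES ('message', ?)", (text,))
        return cur.lastrowid

    def compress(self, node_ids):
        # Step 3: condense older nodes into one summary node. The originals
        # stay in the table; edges record what the summary covers (step 2).
        placeholders = ",".join("?" * len(node_ids))
        rows = self.db.execute(
            f"SELECT content FROM nodes WHERE id IN ({placeholders}) ORDER BY id",
            node_ids).fetchall()
        cur = self.db.execute(
            "INSERT INTO nodes (kind, content) VALUES ('summary', ?)",
            (summarize([r[0] for r in rows]),))
        summary_id = cur.lastrowid
        self.db.executemany(
            "INSERT INTO edges (parent, child) VALUES (?, ?)",
            [(summary_id, child) for child in node_ids])
        return summary_id

    def expand(self, summary_id):
        # Step 4: on-demand expansion back to the original messages.
        return [r[0] for r in self.db.execute(
            "SELECT n.content FROM edges e JOIN nodes n ON n.id = e.child "
            "WHERE e.parent = ? ORDER BY n.id", (summary_id,))]
```

Because `compress` adds a summary node rather than deleting anything, `expand` can always recover the exact originals — that is what makes the scheme lossless rather than a rolling summary.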
The result is lossless context management. Your agent maintains awareness of the entire conversation history, not just the most recent messages.
Why This Matters for Business Agents
For casual chatbots, losing early context is annoying but manageable. For business-critical agents, it can be costly:
- Customer support agents that forget the customer's original complaint halfway through troubleshooting
- Sales agents that lose track of what was already discussed in a multi-day negotiation
- Project management agents that forget task assignments and deadlines established earlier
- Research agents that re-investigate topics they already covered because the findings fell out of context
LCM ensures these agents maintain full conversational continuity regardless of how long the interaction runs.
Combining LCM with QMD
LCM and QMD solve different problems and work best together:
- QMD improves how your agent searches its persistent memory files — better recall of stored knowledge
- LCM improves how your agent manages its active conversation — no information loss during long sessions
Together, they give your agent both excellent long-term memory (QMD hybrid search) and excellent short-term memory (LCM lossless context). Users who have enabled both report significantly more coherent and reliable agent behavior, especially in complex multi-step workflows.
How to Enable It
In your Bot It Out dashboard, go to your instance's Actions tab and toggle Lossless-Claw under Memory Backends. Your instance will be redeployed with the LCM plugin configured. Like QMD, LCM runs entirely on your instance — no external APIs, no data leaving your server.
The Bottom Line
Context window limitations are one of the most common reasons AI agents produce frustrating results in real-world use. LCM eliminates this limitation entirely. Every message preserved, every detail accessible, every conversation thread maintained — no matter how long the interaction runs.
If your agents handle anything more complex than short Q&A exchanges, Lossless-Claw is worth enabling.