From Theory Delta | Published 2026-02-27
The Graphiti README presents it as a drop-in temporal knowledge graph for Python applications. Mem0's documentation, backed by 47.6k GitHub stars, presents it as a universal memory layer with hybrid vector + graph support for any LLM provider. The official MCP server-memory reference implementation is positioned as the starting point for agent memory via MCP.
Graphiti self-hosted has a critical async event loop conflict. Embedding graphiti-core directly in a FastAPI or LangGraph service — the most common production Python agent stack — produces RuntimeError: Future attached to a different loop under real async load. This failure is not documented in the README. It surfaces in production, not in development, because development rarely exercises the async concurrency paths that trigger it.
The fix requires running graphiti-core in its own subprocess with its own event loop, communicating via HTTP or a queue. This is a prerequisite architectural decision that the docs do not surface.
Mem0 OSS graph memory is locked to OpenAI. Issue #3711 documents that the graph pipeline hardcodes openai_structured, causing 401 errors with Anthropic, Groq, and other providers. Builders drawn in by the headline star count and the "any LLM provider" framing hit a hard wall the moment they pair graph features with Anthropic: there is no workaround except switching to OpenAI. Vector-only mode works with any provider; graph is OpenAI-only.
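A vector-only configuration that sidesteps the graph pipeline entirely looks roughly like this. The key names follow mem0's documented config schema as best I can reconstruct it; the provider, model, and host values are illustrative assumptions. The absence of a graph_store block is the point: adding one re-introduces the OpenAI dependency.

```python
# Vector-only mem0 config sketch: no "graph_store" block, so the
# openai_structured hardcode (Issue #3711) is never reached.
# All provider/model/host values below are illustrative assumptions.
config = {
    "llm": {
        "provider": "anthropic",
        "config": {"model": "claude-3-5-sonnet-latest"},  # placeholder model id
    },
    "embedder": {
        "provider": "huggingface",
        "config": {"model": "sentence-transformers/all-MiniLM-L6-v2"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    # Deliberately no "graph_store": that pipeline is OpenAI-only.
}

# from mem0 import Memory
# memory = Memory.from_config(config)
```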
The official MCP server-memory reference has a race condition. Issues #1819 and #2577 document JSONL file corruption under concurrent reads/writes. Recovery requires manual file repair. The server is safe for single-agent, single-session scenarios only — despite being positioned as the reference implementation builders start from.
Zep Cloud (managed Graphiti) is the exception. The managed deployment avoids the async isolation problem. Teams willing to accept vendor dependency and data residency constraints get a plausibly production-ready temporal knowledge graph. The self-hosted and managed paths are not equivalent.
36.9% of multi-agent system failures are attributable to inter-agent memory misalignment. The MAST taxonomy (arXiv 2503.13657) classifies this as a coordination failure: agents operating on different views of shared memory without knowing it. No current memory tool surfaces a staleness signal to agents when a peer has updated shared state.
For most production use cases: run mem0 self-hosted against Qdrant or PgVector in vector-only mode, which works with any LLM provider. Do not rely on graph features unless you are committed to OpenAI as your provider.
If temporal reasoning matters (facts change over time): Use Zep Cloud if you accept the vendor dependency. For self-hosted Graphiti, isolate graphiti-core in its own subprocess with its own event loop and communicate via HTTP or a queue. Do not embed it directly in FastAPI or LangGraph.
For MCP-first agent stacks: Wire to any of the established memory-as-MCP-server implementations (mem0-mcp, mcp-memory-service, memory-bank-mcp). Do not use the official server-memory reference for anything concurrent.
For parallel agents writing to shared memory: No framework provides distributed write coordination with CAS semantics. Use append-only writes (Letta memory_insert, LangGraph G-Set reducers) to avoid conflicts. Optimistic writes (Letta memory_replace) require the underlying store to provide compare-and-swap or you have a TOCTOU race.
| Tool | Version | Result |
|---|---|---|
| mem0ai/mem0 | v1.0.4 | Graph features fail 401 for non-OpenAI providers (Issue #3711); vector-only works with any provider |
| getzep/graphiti | Feb 2026 | RuntimeError: Future attached to a different loop in FastAPI/LangGraph; subprocess isolation required |
| modelcontextprotocol/servers server-memory | Feb 2026 | JSONL corruption under concurrent access (#1819, #2577) |
| Zep Cloud | Feb 2026 | Plausibly production-ready; managed Graphiti; requires vendor dependency |
Confidence: empirical — observed in 4 environments, validated 2026-02-26.
Open questions: Does the Graphiti async event loop conflict apply to all versions, or only specific graphiti-core releases? Has mem0 Issue #3711 shipped a fix after the Feb 2026 validation date? Does the MCP server-memory race condition affect the official server across all concurrent access patterns, or only specific tool-call sequences?
This claim would be disproved by observing: A release of graphiti-core that embeds correctly in FastAPI without subprocess isolation and runs cleanly under real concurrent async load. Or a mem0 release that removes the openai_structured hardcode and produces correct graph outputs with an Anthropic provider.
Seen different, or tested this tool yourself? Contribute your evidence: confirmation, contradiction, or a fix.