- One engine for storage and memory: Combine durable storage and fast, agent-friendly memory in a single system, giving your agent all the data it needs and eliminating the need to keep multiple systems in sync.
- One-hop memory for agents: Run vector search, graph traversal, semantic joins, and transactional writes in a single query, giving LLM agents fast, consistent memory access without stitching relational, graph, and vector databases together (see the first sketch after this list).
- In-place inference and real-time updates: SurrealDB enables agents to run inference next to the data and receive millisecond-fresh updates, which is critical for real-time reasoning and collaboration (see the second sketch after this list).
- Versioned, durable context: SurrealDB supports time-travel queries and versioned records, letting agents audit or “replay” past states for consistent, explainable reasoning (see the third sketch after this list).
- Plug-and-play agent memory: Expose AI memory as a native database concept, with records, relationships, and embeddings in one queryable model, making it easy to use SurrealDB as a drop-in memory backend for AI frameworks.
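
A minimal SurrealQL sketch of the one-hop pattern above. The `observation` table, `remembers` relation, `session:current` record, the index parameters, and the `$new_embedding` / `$query_embedding` variables are all hypothetical placeholders, not a prescribed schema:

```surql
-- Hypothetical schema: agent memories with a vector index for similarity search.
DEFINE TABLE observation SCHEMAFULL;
DEFINE FIELD content   ON observation TYPE string;
DEFINE FIELD embedding ON observation TYPE array<float>;
DEFINE INDEX observation_embedding ON observation
    FIELDS embedding HNSW DIMENSION 768 DIST COSINE;

-- Write a new memory and link it to the current session in one transaction.
BEGIN TRANSACTION;
LET $obs = (CREATE observation SET
    content   = "User prefers concise answers",
    embedding = $new_embedding);
RELATE session:current->remembers->$obs SET at = time::now();
COMMIT TRANSACTION;

-- Retrieve context in a single query: the nearest memories to the query
-- embedding, each joined back to the sessions that recorded it.
SELECT
    content,
    vector::similarity::cosine(embedding, $query_embedding) AS score,
    <-remembers<-session AS sources
FROM observation
WHERE embedding <|5,40|> $query_embedding
ORDER BY score DESC;
```

Because the record and its relation are created in one transaction, the memory and its provenance stay consistent, and the read side needs no application-level joins across separate vector and graph stores.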
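
For the real-time half of the in-place inference point, a sketch of a live query subscription against the same hypothetical `observation` table, so an agent receives changes as they are committed rather than polling; `$live_id` stands for the UUID returned by the LIVE SELECT statement:

```surql
-- Subscribe to changes on the observation table. The statement returns a
-- live query UUID, and matching creates, updates, and deletes are pushed
-- to this connection as soon as they are committed.
LIVE SELECT * FROM observation;

-- Stop streaming when the agent no longer needs the feed
-- ($live_id is the UUID returned above).
KILL $live_id;
```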
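
And a sketch of a time-travel read, assuming a storage backend with versioning enabled (for example SurrealKV in versioned mode); the timestamp and fields are illustrative:

```surql
-- Read memories as they existed at a specific point in time, so a past
-- decision can be replayed against exactly the context that produced it.
SELECT content, embedding
FROM observation
VERSION d'2025-01-15T09:30:00Z';
```

This turns "what did the agent know when it acted?" into a plain query rather than a separate audit pipeline.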