Mem0 gained 20,000 stars on GitHub in 30 days. It's a new memory layer for LLMs that lets you directly add, update, and search memories in your models. It's crucial for AI systems that require persistent context, like customer support and personalized recommendations. Features: ▸ Multi-Level Memory ▸ Adaptive Personalization ▸ Developer-Friendly API ▸ Cross-Platform Consistency ↓ Are you technical? Check out https://AlphaSignal.ai to get a daily email of breakthrough models, repos, and papers in AI. Read by 180,000+ devs.
The number of stars on GitHub proves the demand. However, given the quality of research around AI episodic memory, it is unlikely that people will use this in the long term. I don't want to spoil the excitement, but I built something like this for the LLM projects I'm involved in quite a long time ago. It is too basic to stop here...
The best thing about mem0 is their codebase. It's very well maintained. Under the hood it works like this: 1. They have a SQLite database. 2. They store the memories in there. 3. They use function calling to decide when to retrieve or add a memory. The best part is how they have unified function calling for all LLM providers into one format.
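The three steps above can be sketched in a few lines. This is an illustrative toy, not Mem0's actual code: memories live in a SQLite table, and the LLM's function call, mocked here as a hand-written dict in the unified format the commenter mentions, decides whether to add or search.

```python
import sqlite3

# In-memory SQLite store for memories (step 1).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, user_id TEXT, text TEXT)")

def add_memory(user_id: str, text: str) -> None:
    # Step 2: persist a memory row.
    conn.execute("INSERT INTO memories (user_id, text) VALUES (?, ?)", (user_id, text))
    conn.commit()

def search_memory(user_id: str, query: str) -> list:
    # Naive keyword match; a real system would use embeddings.
    rows = conn.execute("SELECT text FROM memories WHERE user_id = ?", (user_id,))
    return [text for (text,) in rows if query.lower() in text.lower()]

def dispatch(tool_call: dict):
    # Step 3: route a provider-agnostic function call to the right handler.
    # In Mem0 this dict would come from the LLM; here it is hand-written.
    name, args = tool_call["name"], tool_call["arguments"]
    if name == "add_memory":
        return add_memory(**args)
    if name == "search_memory":
        return search_memory(**args)
    raise ValueError(f"unknown tool: {name}")

dispatch({"name": "add_memory",
          "arguments": {"user_id": "alice", "text": "Alice prefers dark mode"}})
hits = dispatch({"name": "search_memory",
                 "arguments": {"user_id": "alice", "query": "dark mode"}})
print(hits)  # → ['Alice prefers dark mode']
```

The point of the unified dict format is that the dispatcher never cares which LLM provider produced the call.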
This is really cool, looking forward to testing it out. Is there any functionality to create use-case-specific contexts from memory? Something that changes the context based on user intent. For example, working with a chatbot on code iteration might only need memory of the last couple of code blocks and a summary of changes, but an exploratory conversation might be better served by a full summary of the conversation.
Here is how we handle context and customization: https://mltblog.com/3WcTS9C
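The intent-dependent context the question above asks for could be sketched like this. Everything here (the `build_context` function, the intent labels, the turn format) is hypothetical and not part of Mem0's API:

```python
def build_context(intent: str, turns: list, summary: str) -> str:
    """Assemble a prompt context whose shape depends on user intent.

    turns: list of (kind, text) pairs, where kind is "code" or "chat".
    """
    if intent == "code_iteration":
        # Only the last two code blocks plus a summary of changes.
        recent_code = [text for kind, text in turns if kind == "code"][-2:]
        return "\n".join([summary] + recent_code)
    # Exploratory conversation: the summary plus every turn.
    return "\n".join([summary] + [text for _, text in turns])

turns = [
    ("chat", "Let's refactor the parser."),
    ("code", "def parse(s): return s.split()"),
    ("code", "def parse(s): return s.strip().split()"),
    ("chat", "Looks good."),
]
ctx = build_context("code_iteration", turns, "Summary: tightened parse().")
print(ctx)
```

For the code-iteration intent, the chat turns are dropped and only the two code blocks survive alongside the summary; any other intent falls through to the full-conversation branch.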
Mem0 enhances AI systems by providing a new memory layer for LLMs, allowing direct addition, updating, and searching of memories. This is crucial for AI systems requiring persistent context, such as customer support and personalized recommendations. Key features include multi-level memory, adaptive personalization, a developer-friendly API, and cross-platform consistency.
Lior S. Is there an efficient way to load memory with this package? Right now I simply clean and append all interactions to memory with a script when grafting; the data essentially becomes an LC prompt. I'm able to work with this pretty efficiently by using voice commands (or text queries), which enable agents to retrieve data from their knowledge and memory. One thing I do like about this is that the metadata appears to have categories assigned within it, which might be helpful down the line!
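One generic answer to the bulk-loading question above is to batch the inserts in a single transaction rather than appending interactions one at a time. A minimal sketch, assuming a SQLite-style store with an illustrative schema (table name, columns, and the category metadata column are assumptions, not Mem0's actual schema):

```python
import sqlite3

# Illustrative memory store; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (user_id TEXT, category TEXT, text TEXT)")

# Cleaned past interactions, each tagged with a metadata category.
interactions = [
    ("alice", "preferences", "Prefers concise answers"),
    ("alice", "projects", "Working on a CLI tool"),
    ("alice", "preferences", "Uses voice commands"),
]

# executemany runs all inserts in one batch/transaction, which is much
# faster than committing one interaction at a time.
conn.executemany("INSERT INTO memories VALUES (?, ?, ?)", interactions)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(count)  # → 3
```

The same batching idea applies whatever the backing store is; the category column is where assigned metadata like the commenter describes could live.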
Persistent context is important for AI because it allows systems to retain and recall past interactions, enhancing decision-making, contextual understanding, and personalization. This is crucial for applications like customer support and personalized recommendations, where maintaining a continuous and coherent user experience is essential.
Amazing innovation! Mem0's memory layer for LLMs is a game-changer in AI systems. The features are impressive and developer-friendly. Well done! Lior S.
Love this post. Everything I needed for this weekend. For more than a year I have thought that a memory implementation of this type would be great, and here it is! I think it has countless uses: customizing LLMs for personal use or for clients, and, most interestingly, the LLM having its own personal memory, perhaps with a pipeline that automates saving the LLM's memories. One more step toward AGI; little by little it is approaching human cognitive processing. It is still a long way off, but well on its way. Thank you so much for sharing this!
Covering the latest in AI R&D • ML-Engineer • MIT Lecturer • Building AlphaSignal, a newsletter read by 200,000+ AI engineers.
https://github.com/mem0ai/mem0