Teaching LLMs to understand users and content is key to building hyper-personalized AI systems, and it's some of the most exciting work we're doing at Spotify. World knowledge and reasoning capabilities (in this case, coming from a world-class frontier model like Llama) are a powerful starting point for AI products. An AI system that understands your user journey and can predict what information or actions you need -- that's magical. #spotify #llama #personalization #llms #generativeai
Using technology to better connect people with the audio they love has been a humbling and deeply meaningful journey for me. I'm very excited to share our recent publication on contextualized recommendations through personalized narratives using LLMs: https://lnkd.in/ekFNsDhJ Huge props to the incredible team of engineers, researchers, product managers, editors, curators, designers, and so, so many more who made this work possible. Authors: Praveen Chandar Marco De Nadai Sandeep Ghael Manizeh Khan Paul Bennett Tony Jebara Mounia Lalmas-Roelleke #LLMs #machinelearning #AI
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
Spotify's focus on user journey prediction through LLMs like Llama is fascinating, especially considering the potential for contextual bandit algorithms to optimize personalized recommendations in real-time. This raises the question: how do you envision incorporating reinforcement learning techniques into your LLM architecture to enable continuous adaptation and refinement of user models based on dynamic feedback loops?
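For readers unfamiliar with the contextual bandits this comment refers to, here is a minimal, illustrative LinUCB sketch in Python. It is not Spotify's system or anything described in the linked publication; the item count, context dimension, and the idea of using an LLM-derived user embedding as the context vector are assumptions made purely for demonstration.

```python
import numpy as np


class LinUCBRecommender:
    """Illustrative LinUCB contextual bandit: picks one of n_items
    given a user-context vector, then learns from observed feedback."""

    def __init__(self, n_items, context_dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        # Per-item ridge-regression state: A is d x d, b is length d
        self.A = [np.eye(context_dim) for _ in range(n_items)]
        self.b = [np.zeros(context_dim) for _ in range(n_items)]

    def recommend(self, context):
        """Return the item index with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                  # estimated item weights
            mean = theta @ context             # predicted engagement
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)  # uncertainty
            scores.append(mean + bonus)
        return int(np.argmax(scores))

    def update(self, item, context, reward):
        """Fold observed feedback (e.g. stream vs. skip) back into the model."""
        self.A[item] += np.outer(context, context)
        self.b[item] += reward * context


# Hypothetical usage: the context could be an LLM-derived user/session embedding.
bandit = LinUCBRecommender(n_items=5, context_dim=8)
ctx = np.random.rand(8)                    # stand-in for a user-journey embedding
choice = bandit.recommend(ctx)
bandit.update(choice, ctx, reward=1.0)     # 1.0 = positive feedback signal
```

The dynamic feedback loop the comment asks about is the recommend-then-update cycle above: each observed reward tightens the per-item confidence bounds, so the policy adapts continuously as user behavior shifts.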