Thomson Reuters is proud to have five papers with Foundational Research authors accepted at NeurIPS. Our Head of AI Research, Jonathan Richard Schwarz, is also hosting a workshop, Compositional Learning: Perspectives, Methods, and Paths Forward. Check out the list of presentations below. https://bit.ly/4iz89YH
📄 Online Adaptation of Language Models with a Memory of Amortized Contexts by Jihoon Tack, Jaehyung Kim, Eric Mitchell, Jinwoo Shin, Yee Whye Teh, Jonathan Richard Schwarz - Thu 12 Dec 7:30 p.m. EST
📄 CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training by David Brandfonbrener, Hanlin Zhang, Andreas Kirsch, Jonathan Richard Schwarz, Sham Kakade - Fri 13 Dec 7:30 p.m. EST
📄 UniTS: A Unified Multi-Task Time Series Model by Shanghua Gao, Teddy Koker, Owen Queen, Tom Hartvigsen, Theodoros Tsiligkaridis, Marinka Zitnik - Thu 12 Dec 2 p.m. EST
📄 Are Language Models Actually Useful for Time Series Forecasting? by Mingtian Tan, Mike Merrill, Vinayak Gupta, Tim Althoff, Tom Hartvigsen - Fri 13 Dec 7:30 p.m. EST
📄 BendVLM: Test-Time Debiasing of Vision-Language Embeddings by Walter Gerych, Haoran Zhang, Kimia Hamidieh, Eileen Pan, Maanas K. Sharma, Tom Hartvigsen, Marzyeh Ghassemi - Thu 12 Dec 7:30 p.m. EST
Our team is at booth #49 today, and we're excited to meet you. Please stop by to say hello and check out our open roles. https://tmsnrt.rs/40BYolT
#WorkingAtTR #NeurIPS #AIinCanada #AIresearch #MLresearch
-
𝐀𝐝𝐝𝐢𝐧𝐠 𝐜𝐨𝐧𝐭𝐞𝐱𝐭 𝐭𝐨 𝐮𝐬𝐞𝐫 𝐪𝐮𝐞𝐫𝐢𝐞𝐬 𝐢𝐬 𝐜𝐫𝐮𝐜𝐢𝐚𝐥 in our journey to minimize 𝐟𝐚𝐥𝐬𝐞 𝐩𝐨𝐬𝐢𝐭𝐢𝐯𝐞𝐬, 𝐰𝐡𝐢𝐜𝐡 𝐚𝐫𝐞 𝐧𝐨𝐰 𝐜𝐨𝐦𝐦𝐨𝐧𝐥𝐲 𝐫𝐞𝐟𝐞𝐫𝐫𝐞𝐝 𝐭𝐨 𝐚𝐬 "𝐡𝐚𝐥𝐥𝐮𝐜𝐢𝐧𝐚𝐭𝐢𝐨𝐧𝐬" 𝐢𝐧 𝐭𝐡𝐞 𝐰𝐨𝐫𝐥𝐝 𝐨𝐟 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈. 𝐓𝐡𝐞 𝐑𝐞𝐭𝐫𝐢𝐞𝐯𝐚𝐥 𝐀𝐮𝐠𝐦𝐞𝐧𝐭𝐞𝐝 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧 𝐭𝐞𝐜𝐡𝐧𝐢𝐪𝐮𝐞 directly addresses this need, marking a significant step forward in building Generative AI applications.
𝐈𝐧 2016, 𝐰𝐞 𝐞𝐦𝐛𝐚𝐫𝐤𝐞𝐝 𝐨𝐧 𝐚 𝐬𝐢𝐦𝐢𝐥𝐚𝐫 𝐞𝐱𝐩𝐥𝐨𝐫𝐚𝐭𝐢𝐨𝐧 𝐰𝐢𝐭𝐡𝐢𝐧 𝐭𝐡𝐞 𝐫𝐞𝐚𝐥𝐦 𝐨𝐟 𝐜𝐨𝐦𝐩𝐮𝐭𝐞𝐫 𝐯𝐢𝐬𝐢𝐨𝐧, 𝐚𝐬 𝐝𝐞𝐭𝐚𝐢𝐥𝐞𝐝 𝐢𝐧 𝐨𝐧𝐞 𝐨𝐟 𝐨𝐮𝐫 𝐫𝐞𝐬𝐞𝐚𝐫𝐜𝐡 𝐩𝐚𝐩𝐞𝐫𝐬 𝐩𝐮𝐛𝐥𝐢𝐬𝐡𝐞𝐝 𝐰𝐢𝐭𝐡 Hindawi Publishing. 𝐈𝐧 𝐭𝐡𝐢𝐬 𝐩𝐚𝐩𝐞𝐫, 𝐰𝐞 𝐝𝐞𝐥𝐯𝐞𝐝 𝐢𝐧𝐭𝐨 𝐭𝐡𝐞 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐜𝐞 𝐨𝐟 𝐜𝐨𝐧𝐭𝐞𝐱𝐭𝐮𝐚𝐥𝐢𝐳𝐢𝐧𝐠 𝐮𝐬𝐞𝐫 𝐪𝐮𝐞𝐫𝐢𝐞𝐬 𝐰𝐢𝐭𝐡 𝐢𝐦𝐚𝐠𝐞 𝐝𝐚𝐭𝐚 𝐟𝐨𝐫 𝐜𝐨𝐧𝐭𝐞𝐧𝐭-𝐛𝐚𝐬𝐞𝐝 𝐢𝐦𝐚𝐠𝐞 𝐫𝐞𝐭𝐫𝐢𝐞𝐯𝐚𝐥. Our approach involved classifying image queries by their content and directing them to the relevant class. By doing so, we narrowed the search scope, reducing false positives and improving precision. The approach is detailed in our research article, and a rough sketch of the idea appears below.
𝐅𝐨𝐫 𝐭𝐡𝐨𝐬𝐞 𝐢𝐧𝐭𝐞𝐫𝐞𝐬𝐭𝐞𝐝 𝐢𝐧 𝐞𝐱𝐩𝐥𝐨𝐫𝐢𝐧𝐠 𝐭𝐡𝐢𝐬 𝐟𝐮𝐫𝐭𝐡𝐞𝐫, 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐚𝐫𝐭𝐢𝐜𝐥𝐞 𝐢𝐬 𝐚𝐯𝐚𝐢𝐥𝐚𝐛𝐥𝐞 𝐭𝐡𝐫𝐨𝐮𝐠𝐡 𝐭𝐡𝐞 𝐟𝐨𝐥𝐥𝐨𝐰𝐢𝐧𝐠 𝐥𝐢𝐧𝐤. https://lnkd.in/dCG3BgBx
#artificialintelligence #computervision #generativeai #hallucination #precision #imagerecognition #informationtechnology #researchimpact #researchers #researchdevelopment
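A minimal sketch of class-scoped retrieval, under stated assumptions: `build_class_indexes` and `retrieve` are hypothetical helper names, and the cosine-similarity search is an illustrative stand-in for the paper's actual pipeline.

```python
import numpy as np

def build_class_indexes(features, labels):
    """Group gallery feature vectors (one per image) by class label,
    producing one small search index per class."""
    indexes = {}
    for vec, label in zip(features, labels):
        indexes.setdefault(label, []).append(vec)
    return {label: np.vstack(vecs) for label, vecs in indexes.items()}

def retrieve(query_vec, query_label, indexes, top_k=5):
    """Search only the index matching the query's predicted class:
    a smaller candidate pool means fewer chances for false positives."""
    gallery = indexes[query_label]
    # Cosine similarity between the query and the class-restricted gallery.
    sims = gallery @ query_vec / (
        np.linalg.norm(gallery, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return np.argsort(-sims)[:top_k]  # indices of the best in-class matches
```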
-
Multimodal foundation models are prone to hallucinations - generating outputs that contradict inputs or lack factual grounding. Our researchers at SCB10X/SCBX, Potsawee Manakul and Kunat Pipatanakul, in collaboration with University of Cambridge and Tsinghua University, proposed CrossCheckGPT, a reference-free ranking method whose scores correlate highly with those of reference-based methods. Because it is reference-free, CrossCheckGPT enables hallucination benchmarking on any target task, and with it we introduce the first audio-visual hallucination benchmark to the AI community. Check out our paper at https://lnkd.in/gijW6riU to learn more!
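A hedged sketch of the cross-model consistency idea behind reference-free ranking: score each model's output by its agreement with the other models' outputs, so no gold reference is needed. The `similarity` function, model names, and toy texts below are illustrative assumptions, not the CrossCheckGPT implementation.

```python
def cross_consistency_scores(outputs, similarity):
    """outputs: dict mapping model name -> generated text for one input.
    Scores each output by mean agreement with the other models' outputs;
    lower agreement suggests a higher chance of hallucination."""
    scores = {}
    for name, text in outputs.items():
        peers = [t for other, t in outputs.items() if other != name]
        scores[name] = sum(similarity(text, p) for p in peers) / len(peers)
    return scores

def jaccard(a, b):
    """Toy token-overlap similarity; a real system would use a stronger
    consistency measure (e.g., an NLI or embedding-based score)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

# The odd output out ("cats can fly") receives the lowest agreement score.
ranked = cross_consistency_scores(
    {"model_a": "the sky is blue", "model_b": "the sky is blue today",
     "model_c": "cats can fly"},
    jaccard,
)
```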
-
As Salam O Alaikum! A very interesting and informative article, "What is the future of Artificial Intelligence: Exploring the Future Trajectory of Artificial Intelligence", is available on paidforarticles.com: https://lnkd.in/egPQtYUG
"What is the future of Artificial Intelligence : Exploring the Future Trajectory of Artificial Intelligence "
paidforarticles.com
-
🚀 Just finished reading "Building LLMs for Production" by Louis-François Bouchard and Louie Peters, and it’s a game-changer for anyone in AI! 🤖 Here are the top insights I found invaluable:
1. Foundations of LLMs: A deep dive into the principles and evolution of large language models.
2. Prompt Engineering: Techniques to enhance performance and reliability with well-crafted prompts.
3. Retrieval-Augmented Generation (RAG): Combining retrieval-based methods with generative models for superior outputs.
4. Fine-Tuning: Best practices for adapting LLMs to specific tasks or domains.
5. Deployment Strategies: Key considerations for scaling, optimizing, and monitoring LLMs in production.
6. Hands-On Examples: Practical exercises and code snippets that bring theory to life.
7. Real-World Applications: Case studies showcasing the impact of LLMs across industries.
8. Expert Insights: Contributions from industry leaders validating these approaches.
Highly recommend this book for anyone looking to stay ahead in the AI field! 📚✨
#AI #MachineLearning #LLM #ArtificialIntelligence #TechReads #Innovation
-
Glad that we have 7 𝐩𝐚𝐩𝐞𝐫𝐬 accepted at 𝐍𝐞𝐮𝐫𝐈𝐏𝐒 2024, with our innovative work related to 𝐋𝐋𝐌, 𝐓𝐢𝐦𝐞 𝐒𝐞𝐫𝐢𝐞𝐬, 𝐅𝐨𝐫𝐞𝐜𝐚𝐬𝐭𝐢𝐧𝐠, 𝐀𝐧𝐨𝐦𝐚𝐥𝐲 𝐃𝐞𝐭𝐞𝐜𝐭𝐢𝐨𝐧, 𝐆𝐍𝐍, etc. Here's a quick list of our 7 #NeurIPS2024 papers with links:
[1] Time-FFM: Towards LM-Empowered Federated Foundation Model for Time Series Forecasting [arXiv]: https://lnkd.in/gsacXQjU
[2] Time-MMD: A New Multi-Domain Multimodal Dataset for Time Series Analysis [arXiv]: https://lnkd.in/gHeizCUs
[3] Task-oriented Time Series Imputation Evaluation via Generalized Representers [arXiv]: coming soon
[4] Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective [arXiv]: https://lnkd.in/gwGGDwiw
[5] Generative Semi-supervised Graph Anomaly Detection [arXiv]: https://lnkd.in/g2TGevtB
[6] CulturePark: Boosting Cross-cultural Understanding in Large Language Models [arXiv]: https://lnkd.in/gTFVDxea
[7] AutoSurvey: Large Language Models Can Automatically Write Surveys [arXiv]: https://lnkd.in/gr92PTQZ
Thanks to all the co-authors. We will share more details of the papers, as well as the corresponding open-source code, later.
#neurips #neurips2024 #ai #deeplearning #llm #gnn #timeseries #timeseriesanalysis #benchmarking #forecasting #anomalydetection
-
Anthropic Released Claude 3! Beats GPT-4 on Every Metric!!
It's still only March 2024, and the AI race is already heating up! Anthropic's Claude 3 model family represents a significant advancement in AI technology, featuring three models with increasing capability: Haiku, Sonnet, and Opus. Opus, the most advanced model, outperforms peer models on a variety of intelligence benchmarks, including expert knowledge, reasoning, mathematics, and more, and demonstrates near-human comprehension and fluency, especially on complex tasks. All three Claude 3 models show improved performance in analysis, forecasting, content creation, code generation, and multilingual conversation, including Spanish, Japanese, and French.
Quotes from Ben Blaiszik, a leader in AI and data infrastructure for science at the University of Chicago, Argonne National Laboratory, and Globus, on this model:
"Spent 2 hours this morning with Claude 3, and it's the most intensely I've been shocked yet. The Claude 3 Opus understanding of complex scientific topics is far ahead of GPT-4 on my self-made qualitative evals. I'd guess mid to advanced PhD level understanding of the topics I've tried. Will post some more detailed examples soon if time..."
"Right, but the thing that shocked me is that it was able to come up with the solution we found that took top tier chemists ~1 year to formulate through various in-lab failures. Claude did this in one shot - for 5 cents. So, it gets potentially much easier to find fruitful paths earlier."
Below is a comparison of the Claude 3 models to peer models on multiple capability benchmarks.
Singularity Is Near... https://lnkd.in/dAm9PBNW
#Anthropic #Claude3 #GPT4 #ArtificialIntelligence #Technology #Innovation #Opus #Haiku #Sonnet #Singularity
-
We finally published 🎉 Introducing ChunkRAG, a novel framework that fundamentally transforms how RAG systems process information. Our approach addresses a critical challenge in current RAG systems: their tendency to retrieve and process entire documents indiscriminately, leading to noise and imprecise outputs.
ChunkRAG introduces semantic chunking with LLM-driven filtering, breaking documents into coherent sections and evaluating each chunk's relevance before generation. The system uses a multi-stage process: initial semantic chunking, followed by LLM-based relevance scoring with self-reflection mechanisms (a rough sketch of this filtering step follows below).
The results are compelling. In experiments on the PopQA dataset, ChunkRAG achieved 64.9% accuracy. The impact is particularly evident in real-world applications: when asked about France's capital, a traditional RAG system might include tangential information about various French cities, whereas ChunkRAG delivers only the relevant chunks. This leap in accuracy is achieved through our hybrid retrieval system, combining semantic chunking, BM25 retrieval, and Cohere's reranking model to combat the "lost in the middle" problem.
Beyond benchmarks, this advancement opens new possibilities for enterprise knowledge systems, fact-checking applications, and complex decision support systems where precision is paramount.
Find the paper here: https://lnkd.in/gKnmxxYA
#AI #Innovation #RAG #Research #EnterpriseAI #MachineLearning
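A minimal sketch of the chunk-filtering idea, under stated assumptions: `embed` and `llm_relevance_score` are hypothetical stand-ins for an embedding model and an LLM scoring call, and the thresholds are illustrative values, not ChunkRAG's actual settings.

```python
import numpy as np

def semantic_chunks(sentences, embed, sim_threshold=0.75):
    """Merge adjacent sentences into a chunk while they stay semantically
    close; start a new chunk when cosine similarity drops below threshold."""
    chunks, current = [], [sentences[0]]  # assumes a non-empty sentence list
    prev_vec = embed(sentences[0])
    for sent in sentences[1:]:
        vec = embed(sent)
        sim = float(vec @ prev_vec /
                    (np.linalg.norm(vec) * np.linalg.norm(prev_vec) + 1e-9))
        if sim >= sim_threshold:
            current.append(sent)
        else:
            chunks.append(" ".join(current))
            current = [sent]
        prev_vec = vec
    chunks.append(" ".join(current))
    return chunks

def filter_chunks(query, chunks, llm_relevance_score, keep_above=0.5):
    """Score each chunk's relevance to the query with an LLM (0..1 here)
    and drop low-scoring chunks before they ever reach the generator."""
    return [c for c in chunks if llm_relevance_score(query, c) >= keep_above]
```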
-
Are you an #LLM fan who happens to be working with time series data? Add this paper to your reading list 📚
Large language models #LLMs show robust capabilities in #patternrecognition and reasoning over complex sequences of text tokens. Now, imagine these capabilities leveraged for #timeseries data 👀. This is exactly what this week's #openresearch paper uncovers!
𝑷𝒂𝒑𝒆𝒓 𝑻𝒊𝒕𝒍𝒆: Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
𝑨𝒖𝒕𝒉𝒐𝒓𝒔: Jin et al.
𝑷𝒖𝒃𝒍𝒊𝒄𝒂𝒕𝒊𝒐𝒏 𝑫𝒂𝒕𝒆: January 2024
Read it here 🔗 https://lnkd.in/dBhb3DqM 📘 Happy Reading!
TIME-LLM adapts LLMs for time series forecasting by reprogramming time series data into text prototypes that are more natural as LLM input. A key concept in TIME-LLM, Prompt-as-Prefix, enriches the contextual information and guides the transformation of the reprogrammed time series. The model's approach can be summarized as (see the sketch after this post):
✅ Input Embedding
✅ Patch Reprogramming
✅ Prompt-as-Prefix
✅ Output Projection
🤔💭 I'm keen to hear your views on how TIME-LLM might impact industry practices. Comment below to share your thoughts on the potential applications in your field. ♻️ #Share and contribute to spreading #AI knowledge.
#technology #innovation #simpleexplanation #phd #research #trendingnow #academia #explainai #weekendreading
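A hedged sketch of the patch-reprogramming step, not the authors' code: time-series patches are projected onto a small set of learned "text prototype" vectors so a frozen LLM can consume them, with prompt embeddings prepended as a prefix. All dimensions and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchReprogrammer(nn.Module):
    """Express time-series patches as mixtures of learned text prototypes
    (simplified; Time-LLM's actual cross-attention is more elaborate)."""
    def __init__(self, patch_len=16, d_llm=768, n_prototypes=100):
        super().__init__()
        self.patch_len = patch_len
        self.d_llm = d_llm
        self.patch_proj = nn.Linear(patch_len, d_llm)   # raw patch -> LLM width
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_llm))

    def forward(self, series):
        # series: (batch, length) -> non-overlapping patches (batch, n, patch_len)
        patches = series.unfold(-1, self.patch_len, self.patch_len)
        q = self.patch_proj(patches)                    # (batch, n, d_llm)
        # Soft-attend over prototypes so each patch becomes a blend of
        # anchors that live in the frozen LLM's embedding space.
        attn = torch.softmax(q @ self.prototypes.T / self.d_llm ** 0.5, dim=-1)
        return attn @ self.prototypes                   # (batch, n, d_llm)

# Prompt-as-Prefix: concatenate prompt token embeddings in front of the
# reprogrammed patches before feeding the frozen LLM backbone.
reprog = PatchReprogrammer()
patch_embeds = reprog(torch.randn(2, 96))               # toy batch of series
prompt_embeds = torch.randn(2, 10, 768)                 # stand-in prompt states
llm_input = torch.cat([prompt_embeds, patch_embeds], dim=1)
```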
-
Excited to share the sneak preview of an essay that Leslie Allison and I wrote for the journal 𝘊𝘳𝘪𝘵𝘪𝘤𝘢𝘭 𝘈𝘐! This short piece focuses on the current generation of generative AI/LLM products being positioned as useful (and even superior) tools for internet search, through a student information literacy and research pedagogy lens. Through examination of the corporate rhetoric around AI for internet search and a probe of contemporary LLM search tools, we consider the higher ed implications of generative AI search experiences. The piece will be included in 𝘊𝘳𝘪𝘵𝘪𝘤𝘢𝘭 𝘈𝘐 2.2, the second part of a two-part special issue offering interdisciplinary perspectives on large language models. Please do also check out the introduction to the special issue (which I'll link below!), which offers a compelling and comprehensive history and analysis of the generative AI paradigm.
Sneak preview: Leslie Allison and Tiffany Derewal's "Where Knowledge Begins? Generative Search, Information Literacy, and the Problem of Friction"
http://criticalai.org
-
With our Final User Day approaching (we still have some spots if you want to participate - https://lnkd.in/esxAgFtm), we look back at the blog output SELMA created over the last 3 years of the project. The blog on SELMA's homepage showcases consortium partners' contributions to artificial intelligence and human language technologies. But what can you read about there? You can read about #Biases in #AI and Spoken Languages - Learn like a Child (about #CurriculumLearningMethods), get to know about Infoboxes & Knowledge Graphs, see What we do with people, places, and organizations (about #NamedEntityRecognition), get A Yummy Piece of Cake (about #MachineLearningAlgorithms). We show Why (Counting) Diversity Matters (about #DiversityInMedia), How to satisfy data-hungry machine learning (about #SelfSupervisedLearning), and what to do if you Need Computing Resources. Take a Queue Token! (about #scaling), “listen” to Who Spoke When? (about #SpeakerDiarization), embark On the Path to the Responsible AI (about #TrustworthyAI), learn Why does AI need labeled data?, dive into Introducing LeBenchmark: A Comprehensive Framework for Evaluating Self-Supervised Learning, use The SELMA #OpenSource Platform, and finally get personal insights with My Year with plain X (about #UserExperience and Acceptance). Are you curious? Check it out here 👉 https://lnkd.in/ercqUY5Q
Final User Event
https://selma-project.eu