Li Yin’s Post

Li Yin

AdalFlow author | SylphAI founder | LLM&CV researcher | MetaAI

Classical ML is not going away. Even LLMs need it. I have seen data privacy researchers use classical models to detect whether LLMs leak personal information in each output, and startups use classical models to detect hallucinations, finding them more accurate, faster, and cheaper than LLM-based checks. Have you combined any classical models with your LLM applications? #artificialintelligence #machinelearning #llms
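For readers curious what such a classical check can look like, here is a minimal sketch of a hallucination detector: a plain logistic regression over cheap lexical-overlap features between an LLM answer and its source context. The features, toy data, and labels are illustrative, not the specific setups Li Yin mentions.

```python
# Minimal sketch: a classical hallucination detector over LLM outputs.
# Assumes you already have labeled (context, answer, label) triples;
# the feature choices and toy data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def overlap_features(context: str, answer: str, vec: TfidfVectorizer) -> np.ndarray:
    """Cheap lexical-overlap features between the source context and the LLM answer."""
    tfidf = vec.transform([context, answer])
    cos = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    ctx_tokens, ans_tokens = set(context.lower().split()), set(answer.lower().split())
    novel = len(ans_tokens - ctx_tokens) / max(len(ans_tokens), 1)  # share of answer tokens not grounded in context
    return np.array([cos, novel, len(ans_tokens)])

# toy labeled data: 1 = hallucinated, 0 = grounded
contexts = ["the invoice total was $42", "the invoice total was $42"]
answers  = ["the total is $42", "the total is $97 and was paid late"]
labels   = [0, 1]

vec = TfidfVectorizer().fit(contexts + answers)
X = np.vstack([overlap_features(c, a, vec) for c, a in zip(contexts, answers)])
clf = LogisticRegression().fit(X, labels)

# at inference time, score each LLM output before returning it to the user
print(clf.predict_proba(X)[:, 1])  # probability that each answer is hallucinated
```

A small classifier like this runs in microseconds per output, which is why teams find it cheaper and faster than asking a second LLM to judge the first.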

Dhruv Diddi

Get Solo Tech | Edge AI | SLMs | Computer Vision | XGBoost | Formerly at Google, Turo #OwnYourAI #PrivacyFirst #Offline

13h

For our clients, small language models that tool-call into XGBoost models are more effective than large language models ⚡️
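A minimal sketch of that pattern, assuming an OpenAI-style tool schema and a previously trained XGBoost model (the churn_risk.json path, feature names, and tool name are placeholders): the SLM only decides when to call the tool, and the numeric decision comes from the classical model.

```python
# Sketch of "SLM + tool-calling XGBoost": the small language model decides *when*
# to call the tool; the tabular prediction itself comes from XGBoost.
# Model path, feature names, and tool name are illustrative.
import json
import numpy as np
import xgboost as xgb

risk_model = xgb.XGBClassifier()
risk_model.load_model("churn_risk.json")  # assumed: a previously trained, saved model

def score_churn_risk(monthly_spend: float, tickets_last_90d: int, tenure_months: int) -> str:
    """Tool the SLM can call: returns a churn-risk probability from XGBoost."""
    features = np.array([[monthly_spend, tickets_last_90d, tenure_months]])
    prob = float(risk_model.predict_proba(features)[0, 1])
    return json.dumps({"churn_risk": round(prob, 3)})

# Tool definition handed to the SLM (OpenAI-style function schema).
CHURN_TOOL = {
    "name": "score_churn_risk",
    "description": "Estimate churn risk for a customer from tabular features.",
    "parameters": {
        "type": "object",
        "properties": {
            "monthly_spend": {"type": "number"},
            "tickets_last_90d": {"type": "integer"},
            "tenure_months": {"type": "integer"},
        },
        "required": ["monthly_spend", "tickets_last_90d", "tenure_months"],
    },
}

def dispatch(tool_call: dict) -> str:
    """Route the SLM's tool call to the classical model and return its JSON result."""
    if tool_call["name"] == "score_churn_risk":
        return score_churn_risk(**json.loads(tool_call["arguments"]))
    raise ValueError(f"unknown tool: {tool_call['name']}")
```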

Eduardo Ordax

🤖 Generative AI Lead @ AWS ☁️ (50k+) | Startup Advisor | Public Speaker

11h

Love the hallucination approach! Which technique are they using?

Annie Condon

Compassionate data science leader with big data energy

13h

This! It is super common, especially for projects that use LLMs for steps like text embeddings and similarity scores, and then train a traditional classification model on those similarity scores. LLMs have expanded the ways I'm able to use unstructured text data as input to a more traditional ML model.
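A small sketch of that pipeline, using sentence-transformers as a stand-in for whichever embedding model you prefer; the reference texts, tickets, and labels are toy examples.

```python
# Embed unstructured text, turn similarities into features, then train a plain
# classifier on top. The reference texts and labels are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Reference texts that define the classes we care about.
references = ["billing problem or refund request", "bug report or technical failure"]
ref_vecs = embedder.encode(references)

# Labeled training tickets (1 = billing, 0 = technical).
tickets = ["I was charged twice this month", "the app crashes when I upload a file",
           "please refund my last payment", "login fails with a 500 error"]
labels = [1, 0, 1, 0]

# Features = similarity of each ticket to each reference text.
X = cosine_similarity(embedder.encode(tickets), ref_vecs)
clf = LogisticRegression().fit(X, labels)

new = cosine_similarity(embedder.encode(["why did my invoice double?"]), ref_vecs)
print(clf.predict(new))  # expected: [1], routed as a billing issue
```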

Shubham Saboo

Building a community of 1M+ AI Developers | I share daily tips and tutorials on LLM, RAG and AI Agents

13h

The examples you mentioned about detecting data leakage and hallucinations are particularly interesting. Have you seen any specific benchmarks comparing the performance of classical ML versus LLM-based approaches for these security and validation tasks?

Dariusz Kacban

.NET Developer AI | C#, Azure AI, Azure OpenAI, Semantic Kernel, .NET

13h

Good question Li Yin. We combine LLMs with SLMs. What strategies do you suggest to detect hallucinations, or assess classification accuracy or groundedness made by LLMs on the fly?

Zakir Ullah

Machine Learning Engineer | Developing AI Solutions for Real-Time Data, Sensor Integration, and Robotics

13h

Hallucinations compound due to model drift, especially in the so-called "reasoning models." I am working on a small human-modeling project and am thinking of fine-tuning a sentiment analysis model to detect hallucinations.
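One way that fine-tuning could look, as a hedged sketch with Hugging Face Transformers: a small encoder fine-tuned as a binary hallucination classifier over (context, answer) pairs. The backbone, toy rows, and labels are illustrative.

```python
# Fine-tune a small transformer as a binary hallucination classifier.
# distilbert-base-uncased is just an example backbone; the two toy rows are illustrative.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# toy labeled pairs: 1 = hallucinated, 0 = grounded
data = Dataset.from_dict({
    "text": ["context: total is $42 [SEP] answer: total is $42",
             "context: total is $42 [SEP] answer: total is $97, paid late"],
    "label": [0, 1],
})

model_name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tok(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="halluc-detector", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```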

Marek Kulbacki, PhD

Entrepreneur & Solutionist | Principal Scientist | CEO, CSO, CTO | Engineering & Innovation Expert | Leading R&D at PJAIT & DIVEINAI

12h

Combining classical ML with LLMs has greatly enhanced our work, enabling efficient, context-driven insights across diverse fields. Classical ML techniques (e.g., SVM, PGM) pair with mathematical methods like PCA, DTW, and Kalman filtering for state estimation and trajectory prediction, while feature descriptors such as HOG, KLT, and Gabor filters enhance data preprocessing, structuring complex multimodal inputs (e.g., IMU, EMG, GRF, multispectral video). LLMs then add a layer of contextual, adaptive guidance, making this approach highly effective across applications like medical diagnostics, surveillance, forensic analysis, battlefield decision-making, sports motion analysis, and AR/VR.

Additionally, techniques like sparse coding, wavelet transforms, template matching, FFT, and CNNs (for deep feature extraction and pattern detection) enable rapid identification of patterns, while LLMs help interpret these outputs within broader contextual frameworks. Key market-fit factors (reliability, cost efficiency, and system explainability) simply demand this architectural blend, and strong cash flow supports iterative brainstorming and hands-on experience, driving continuous innovation. It's always a complex but necessary process.
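As one deliberately simplified slice of such a stack: a PCA + SVM classifier over IMU-style feature windows whose prediction is handed to an LLM for contextual interpretation. The synthetic data, class meanings, and prompt are invented for illustration.

```python
# Classical PCA + SVM over IMU-style features; the LLM only interprets the result.
# Synthetic data, class names, and prompt are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))               # e.g. windowed IMU features per sample
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # 0 = normal gait, 1 = anomalous gait (toy rule)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(probability=True))
clf.fit(X, y)

sample = X[:1]
prob = clf.predict_proba(sample)[0, 1]

# The classical model does the measurement; the LLM adds context around it.
prompt = (f"A motion-analysis classifier flagged this gait window as anomalous "
          f"with probability {prob:.2f}. Summarize what a clinician should check next.")
# llm_response = your_llm(prompt)   # whichever LLM client you use
```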

Mihai Anton

AI tech lead • Consulting on AI and data engineering

13h

I find it amusing when non-tech people ask me about LLMs, only to realize that in their view AI = LLMs. Even if they are the current hype, there's so much more to AI and ML than language models. To your question, I've heard more and more people coupling LLMs with highly specialized vision/forecasting/probability models to build niche solutions.

Amin Boulouma

Elite Software Engineer @ Boulouma.com | Transforming Software Engineers into Top 1% Performers | Top LinkedIn Programming Voice | Author | Mentor

10h

I’m glad you agree! One example is in healthcare, where classical ML models are combined with LLMs to process medical records. Classical models help ensure data privacy by flagging sensitive information, while LLMs provide context and insights from vast datasets. Have you encountered any challenges when integrating both models into real-world applications?
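A minimal sketch of that gating pattern: a cheap regex-plus-Naive-Bayes check flags records likely to contain sensitive information, and only cleared records are passed to the LLM. The patterns, toy training data, and threshold are illustrative, not a compliance-grade PII detector.

```python
# Flag records that likely contain sensitive information before anything reaches an LLM.
# Patterns and toy training data are illustrative only.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b",          # US SSN-like
                r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"]    # email address

train_texts = ["patient reports mild headache", "DOB 1987, phone 555-0132, lives at ...",
               "follow-up scheduled next week", "contact john.doe@example.com for records"]
train_labels = [0, 1, 0, 1]                        # 1 = contains sensitive info

vec = CountVectorizer().fit(train_texts)
clf = MultinomialNB().fit(vec.transform(train_texts), train_labels)

def safe_for_llm(record: str, threshold: float = 0.5) -> bool:
    """Return True only if neither the regexes nor the classifier flag the record."""
    if any(re.search(p, record) for p in PII_PATTERNS):
        return False
    return clf.predict_proba(vec.transform([record]))[0, 1] < threshold

records = ["follow-up summary for next week's visit", "SSN 123-45-6789 on file"]
to_llm = [r for r in records if safe_for_llm(r)]   # only cleared records reach the LLM
```

The main integration challenge in practice is deciding what to do with flagged records: drop them, redact them, or route them to a human reviewer.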

Aditya Patange

Indie Hacker | Product Engineer | Tech Prototyping | Consultant

3h

While LLMs bring versatility, certain tasks still favor traditional ML models. Narrow classification problems, where specialized ML models outperform LLMs in accuracy, efficiency, and cost, are a key example. Similarly, ML models excel at LLM routing—selecting the best model for a given prompt. As AI evolves, combining LLMs with classical ML will yield better value.
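A toy sketch of ML-based LLM routing: a TF-IDF plus logistic-regression classifier that decides whether a prompt can be served by a cheap model or needs a larger one. The training prompts, labels, and tiers are illustrative.

```python
# Tiny router: a classical classifier picks the model tier for each prompt.
# Training prompts, labels, and tiers are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prompts = ["translate 'hello' to French",
           "summarize this 40-page contract and flag risky clauses",
           "what is 2 + 2",
           "design a migration plan for our multi-region database"]
labels = ["small", "large", "small", "large"]   # which model tier handled each prompt well

router = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(prompts, labels)

def route(prompt: str) -> str:
    """Return the model tier the router believes is sufficient for this prompt."""
    return router.predict([prompt])[0]

print(route("translate 'goodbye' to German"))  # shares terms with the 'small' examples, so routes small
```

In production this is usually trained on logged prompt/outcome pairs, so routing quality improves as traffic accumulates.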
