Classical ML is not going away. Even LLMs need it. I have seen data privacy researchers use classical models to detect whether an LLM leaks personal information in its outputs, and startups use classical models to detect hallucinations, finding them more accurate, faster, and cheaper than using LLMs. Have you combined any classical models with your LLM applications? #artificialintelligence #machinelearning #llms
Love the hallucination approach! Which technique are they using?
This! It is super common, especially in projects that use LLMs for steps like text embeddings and similarity scoring, and then train a traditional classification model on those similarity scores. LLMs have expanded the ways I'm able to use unstructured text data as inputs to a more traditional ML model.
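A minimal sketch of that pattern, assuming sentence-transformers and scikit-learn are available; the model name, reference texts, and toy labels are all illustrative:

```python
# Minimal sketch: LLM embeddings -> similarity-score features -> classical classifier.
# Assumes sentence-transformers and scikit-learn; model name and toy data are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Reference texts: each input's similarity to these becomes its feature vector.
references = [
    "The refund was processed successfully.",
    "The customer reported a billing error.",
    "The account was closed at the user's request.",
]
ref_vecs = embedder.encode(references)

def similarity_features(texts):
    """Map each text to its cosine similarities against the reference set."""
    return cosine_similarity(embedder.encode(texts), ref_vecs)

# Toy training data: binary labels (1 = billing complaint).
train_texts = [
    "I was charged twice this month.",
    "Thanks, my refund arrived quickly.",
    "Please close my account immediately.",
    "Billing looks wrong on my last invoice.",
]
clf = LogisticRegression().fit(similarity_features(train_texts), [1, 0, 0, 1])

print(clf.predict(similarity_features(["Why was I billed again?"])))
```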
The examples you mentioned about detecting data leakage and hallucinations are particularly interesting. Have you seen any specific benchmarks comparing the performance of classical ML versus LLM-based approaches for these security and validation tasks?
Good question, Li Yin. We combine LLMs with SLMs. What strategies do you suggest for detecting hallucinations on the fly, or for assessing the classification accuracy or groundedness of LLM outputs?
Hallucinations compound due to model drift, especially in the so-called "reasoning models". I am working on a small human-modeling project, and I'm thinking of fine-tuning a sentiment analysis model to detect hallucinations.
Combining classical ML with LLMs has greatly enhanced our work, enabling efficient, context-driven insights across diverse fields. Classical ML techniques (e.g., SVM, PGM) pair with mathematical methods like PCA, DTW, and Kalman Filtering for state estimation and trajectory prediction. Feature descriptors such as HOG, KLT, and Gabor Filters enhance data preprocessing, structuring complex multimodal inputs (e.g., IMU, EMG, GRF, multispectral video). LLMs then add a layer of contextual, adaptive guidance, making this approach highly effective in applications like medical diagnostics, surveillance, forensic analysis, battlefield decision-making, sports motion analysis, and AR/VR.

Additionally, techniques like Sparse Coding, Wavelet Transform, Template Matching, FFT, and CNNs (for deep feature extraction and pattern detection) enable rapid identification of patterns, while LLMs help interpret these outputs within broader contextual frameworks.

Key market-fit factors (reliability, cost efficiency, and system explainability) simply demand this architectural blend. And strong cash flow supports iterative brainstorming and hands-on experimentation, driving continuous innovation. It’s always a complex but necessary process.
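A minimal sketch of this classical-front-end / LLM-back-end pattern, using PCA + SVM from scikit-learn; the synthetic data and the ask_llm stand-in are assumptions, not a real pipeline:

```python
# Minimal sketch: PCA compresses multimodal sensor features, an SVM classifies,
# and the structured result is handed to an LLM for contextual interpretation.
# The data is synthetic and ask_llm is a hypothetical stand-in for your LLM client.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))            # e.g., IMU/EMG feature windows
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels: normal vs. anomalous gait

pca = PCA(n_components=8).fit(X)
svm = SVC(probability=True).fit(pca.transform(X), y)

window = rng.normal(size=(1, 32))
prob = svm.predict_proba(pca.transform(window))[0, 1]

# The classical stage produces a compact, explainable signal; the LLM adds context.
prompt = (f"A gait-analysis SVM flagged this window as anomalous with "
          f"probability {prob:.2f}. Summarize plausible causes for a clinician.")
# ask_llm(prompt)  # hypothetical LLM call
print(prompt)
```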
I find it amusing when non-tech people ask me about LLMs, only to realize that in their view AI = LLMs. Even though they are the current hype, there's so much more to AI and ML than language models. To your question, I've heard of more and more people coupling LLMs with highly specialized vision/forecasting/probability models to build niche solutions.
I’m glad you agree! One example is in healthcare, where classical ML models are combined with LLMs to process medical records. Classical models help ensure data privacy by flagging sensitive information, while LLMs provide context and insights from vast datasets. Have you encountered any challenges when integrating both models into real-world applications?
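A minimal sketch of that flagging step, assuming scikit-learn; the snippets, labels, and classifier choice are toy illustrations, not a real privacy control:

```python
# Minimal sketch: a lightweight classical classifier flags potentially sensitive
# record snippets before anything is sent to an LLM. Toy data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "Patient SSN 123-45-6789 on file.",
    "Follow-up visit scheduled in two weeks.",
    "Contact at jane.doe@example.com for results.",
    "Blood pressure stable, continue medication.",
]
labels = [1, 0, 1, 0]  # 1 = contains sensitive identifiers

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(snippets, labels)

new_snippets = ["Lab results normal.", "Call 555-0100, ask for John Smith."]
safe = [s for s, flagged in zip(new_snippets, flagger.predict(new_snippets)) if not flagged]
# Only unflagged text would reach the LLM step (LLM call omitted here).
print(safe)
```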
While LLMs bring versatility, certain tasks still favor traditional ML models. Narrow classification problems, where specialized ML models outperform LLMs in accuracy, efficiency, and cost, are a key example. Similarly, ML models excel at LLM routing—selecting the best model for a given prompt. As AI evolves, combining LLMs with classical ML will yield better value.
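A minimal sketch of such a router, assuming scikit-learn; the prompts, tier labels, and model names are illustrative:

```python
# Minimal sketch of classical-ML LLM routing: a small classifier picks a model tier
# per prompt. Prompts, labels, and tier names are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

prompts = [
    "What's the capital of France?",
    "Translate 'hello' to Spanish.",
    "Prove that the sum of two even numbers is even.",
    "Design a sharded architecture for a 10M-user chat app.",
]
tiers = ["small", "small", "large", "large"]  # which model tier historically sufficed

router = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(prompts, tiers)

def route(prompt: str) -> str:
    """Return the model tier this prompt should be sent to."""
    return router.predict([prompt])[0]

# Toy model, so decisions are rough; a real router trains on logged outcomes.
print(route("What's 2 + 2?"))
print(route("Architect a distributed compiler."))
```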
For our clients, using small language models + tool-calling XGBoost is more effective than using large language models ⚡️
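A minimal sketch of that setup, assuming the xgboost package and a generic tool-calling interface; the feature schema, tool spec shape, and data are illustrative:

```python
# Minimal sketch: the language model handles conversation and delegates the actual
# prediction to an XGBoost tool. Features, tool wiring, and toy data are illustrative.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # e.g., tabular customer features
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # synthetic churn labels

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

def predict_churn(features: list[float]) -> dict:
    """Tool the SLM can call: returns churn probability for one customer row."""
    prob = float(model.predict_proba(np.array([features]))[0, 1])
    return {"churn_probability": round(prob, 3)}

# Tool spec in the rough shape most tool-calling APIs expect (exact format varies by vendor).
TOOLS = [{
    "name": "predict_churn",
    "description": "Predict churn probability from 4 numeric customer features.",
    "parameters": {"features": "list of 4 floats"},
}]

# The SLM would emit a tool call like this; here we invoke it directly.
print(predict_churn([0.5, -1.2, 0.3, 0.8]))
```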