Prasad Yalamanchi’s Post

The current GenAI scaling law is losing its charm! That is to say, models with ever more parameters, trained on ever larger data with ever more compute, do not necessarily improve as anticipated. More recently, the focus has shifted to improving prediction accuracy through additional computation (and additional cost) at inference time. This technique reshapes the scaling law while helping LLMs march toward higher intelligence. Notably, the data supporting inference benefits from being closed-domain, which improves accuracy and speeds convergence. Meanwhile, domain-specific knowledge graphs deployed today effectively guardrail LLMs toward accurate answers in an explainable, cost-efficient way. Inference-time compute is an interesting technique, and the ready comparison it invites may further boost interest in knowledge graphs! -- #knowledgegraph #ontology #genai #llm #nlp #semantictechnology #graph #semanticgraph #graphrag #w3c #rdf #sparql #owl #dl #deeplearning #textdistil #explainableai #explainability #governance
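To make the inference-time compute idea concrete, here is a minimal sketch of one popular variant, self-consistency: spend extra compute by sampling several candidate answers and keeping the majority vote. The generate_answer function is a hypothetical stand-in for any LLM call, not a specific API.

from collections import Counter

def generate_answer(question: str) -> str:
    # Hypothetical stand-in for an LLM sampling call (not a real API).
    raise NotImplementedError("plug in your model here")

def self_consistent_answer(question: str, n_samples: int = 8) -> str:
    # Extra compute at inference time: sample several candidate answers,
    # then return the most frequent one (majority vote).
    answers = [generate_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]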
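And a minimal sketch of the knowledge-graph guardrail idea: retrieve explicit facts with SPARQL and constrain the LLM to them, so every answer traces back to a triple. This assumes the rdflib library; the domain.ttl file and the predicate used are illustrative only.

from rdflib import Graph

# Load a domain-specific knowledge graph (hypothetical file).
g = Graph()
g.parse("domain.ttl", format="turtle")

# Retrieve grounded facts with SPARQL; each result row is an
# explainable piece of evidence.
query = """
    SELECT ?drug ?other WHERE {
        ?drug <http://example.org/interactsWith> ?other .
    }
"""
facts = [f"{row.drug} interacts with {row.other}" for row in g.query(query)]

# Pass only these verified facts to the LLM as context,
# guardrailing it to answers supported by the graph.
prompt = "Answer using ONLY these facts:\n" + "\n".join(facts)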

Christopher Adamo

Co-Founder & Managing Partner @ StrtupBoost

1w

Insightful take, Prasad! Balancing scale with precision is key, and your perspective on knowledge graphs offers a refreshing angle on enhancing AI accuracy. Keep sharing these thought-provoking insights!


