Wilson J. Tang’s Post

Wilson J. Tang

CEO / Chief Designer @ YumeBau Inc. - Spatial Game Studio+Lab, xKabam (yvr cofounder), xEA, xILM

TLDR: generative AI, LLMs, etc. are probably asymptotic, meaning that at some point more data yields only diminishing returns. https://lnkd.in/gJ4hCqtt

Has Generative AI Already Peaked? - Computerphile

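For concreteness, here is a minimal Python sketch of what "asymptotic" means in this claim: validation loss modeled as a power law decaying toward an irreducible floor. Every name and constant below is invented for illustration, not fitted to any real model.

    # Hypothetical scaling curve: power-law decay toward a floor.
    def loss(tokens, floor=1.7, scale=400.0, exponent=0.3):
        return floor + scale / tokens ** exponent

    for tokens in [1e9, 1e10, 1e11, 1e12, 1e13]:
        print(f"{tokens:.0e} tokens -> loss {loss(tokens):.3f}")

    # Prints roughly 2.50 -> 2.10 -> 1.90 -> 1.80 -> 1.75: each extra
    # 10x of data closes a shrinking fraction of the gap to the floor,
    # which is exactly the diminishing-returns picture.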

Sebastian Marino

CTO at MONUMENTAL Labs | Art, Media, Tech | Academy Award

7mo

I don't agree with this guy at all. It's very pop-sciency and he's fundamentally wrong. The latent space of an LLM is a differentiable function, and extracting more context from existing training data is feasible. To put it another way, it's definitely possible to learn from the training data and end up knowing more than the sum of its parts. It's a bit naive to expect that extrapolation isn't possible and that everything must be explicitly learned. At least that's my hot take.
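A toy illustration of the latent-space point above, using plain numpy with random vectors standing in for real embeddings: because the space is continuous and differentiable, points between two training examples are themselves valid latent codes, so a decoder can emit outputs that never appeared verbatim in the data.

    import numpy as np

    # Random stand-ins for the latent codes of two training examples.
    rng = np.random.default_rng(0)
    z_a, z_b = rng.normal(size=128), rng.normal(size=128)

    def slerp(z0, z1, t):
        # Spherical interpolation, a common latent-traversal technique.
        cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
        omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
        return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

    # Three in-between latent points the model never saw explicitly.
    for t in (0.25, 0.5, 0.75):
        print(t, np.round(np.linalg.norm(slerp(z_a, z_b, t)), 2))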

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

7mo

The concept of asymptotic limits in generative AI, particularly for Large Language Models (LLMs), suggests a critical juncture where additional data yields diminishing returns, mirroring historical trends in which exponential technological growth eventually plateaus. Given that, have researchers explored strategies beyond simply increasing data volume, such as refining model architectures or optimizing training methodologies? For instance, could attention be directed toward enhancing contextual understanding rather than sheer data quantity to achieve more efficient generative capabilities?
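One concrete reference point for that question is the compute-optimal scaling analysis of Hoffmann et al. (2022, "Chinchilla"), which models loss as a function of parameter count N and token count D rather than data volume alone. A rough Python sketch using their approximate fitted constants (ballpark figures, quoted from memory of the paper):

    # Chinchilla-style parametric loss: L(N, D) = E + A/N^a + B/D^b.
    E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(n_params, n_tokens):
        return E + A / n_params**a + B / n_tokens**b

    # Fixed 70B-parameter model, ever more data: loss creeps toward
    # the E + A/N^a asymptote, so data alone hits diminishing returns.
    for D in (1e11, 1e12, 1e13):
        print(f"D={D:.0e}: loss={loss(70e9, D):.3f}")

    # The same compute budget (C ~ 6*N*D) allocated differently:
    # balancing N and D beats simply piling more tokens onto one model.
    C = 6 * 70e9 * 1.4e12
    for N in (10e9, 70e9, 500e9):
        print(f"N={N:.0e}, D={C/(6*N):.0e}: loss={loss(N, C/(6*N)):.3f}")

Under this (rough) surface, re-balancing parameters against tokens at fixed compute shifts the loss by an amount comparable to an extra order of magnitude of data, which speaks to the "beyond sheer data quantity" question.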
