Cory M.’s Post


eLearning | Training Development | Content Development | Workflow Analysis

The fundamental problem with ChatGPT and other large language models (LLMs) is that they do not understand what words mean. They are very much like a young savant who can recite every word in all six volumes of The History of the Decline and Fall of the Roman Empire without comprehending any of the content. Without such comprehension, LLMs are not going to morph into artificial general intelligence (AGI): the ability to perform any intellectual task that human beings can do. Many AI enthusiasts, including Tesla's Elon Musk, Nvidia's Jensen Huang, and pioneering AI researcher Ben Goertzel, nonetheless claim that AGI is just a few years away. This cheerleading certainly helps raise funds (just ask OpenAI's Sam Altman) and sell computer chips (just ask Nvidia), but it is increasingly recognized that the breathless hype is another case of Silicon Valley's "fake-it-till-you-make-it" mentality.

Big Tech’s ‘fake-it-till-you-make-it’ hype is now infecting AI

marketwatch.com

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

6mo

You brought up a crucial point regarding the limitations of LLMs like ChatGPT, likening them to a savant reciting without comprehension. This analogy resonates with long-running debates on AGI and the complexities of true understanding. Given that, how might we bridge the gap between mere data processing and genuine comprehension in AI systems, especially amid the persistent hype, propagated by figures like Elon Musk and Jensen Huang, that AGI's arrival is imminent?


