Recently finished 'Co-Intelligence' by Ethan Mollick. This was a wild one... Highly recommend to anyone living on planet Earth lol. Here are my big takeaways:

1. In the search for free data to train AI models on, most AI companies have used amateur romance novels available online and the Enron emails made public by a court case. I can't wait to find out how this could bite us in the behind some day...
2. We DO NOT understand how LLMs like ChatGPT and Perplexity think. It's not like code with binary outcomes. We can't reverse engineer why they make things up sometimes and not other times.
3. LLMs are different from regular computers in that they excel at creative tasks like writing a song but often FAIL at things computers are usually good at, like... rudimentary math.
4. While the author was quick to hedge, he seemed pretty optimistic on the whole that LLMs will make life better overall, automating boring tasks few want to be doing anyway. For the record, I'm optimistic as well. Although caution is warranted!

#ai #LLMs
This was an insightful read. I read it at the beginning of the summer, and I have learned so much since then. GenAI output has to be verified for correctness and factual accuracy. I am also cautiously optimistic. AI governance is needed to protect the integrity and privacy of the data it collects and uses.
I authored this blog article, largely inspired by “Co-Intelligence.” Insightful and engaging book. Thank you, Ethan Mollick. https://www.learningideasconf.org/blog/ai-and-the-ragged-boundary-problem
Still in the process of reading it, this will definitely keep me motivated to finish it!
We don't yet understand how human brains work, yet we trust other people.
@john Moore LLMs are orchestrators. They either rely on their internal training for math, or they can call another program that will execute the math for them.
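A minimal sketch of that orchestration idea (all names here are hypothetical, not any specific vendor's API): the model doesn't do the arithmetic itself, it emits a structured tool call, and the host program runs a real calculator and returns the result.

```python
# Sketch of "LLM as orchestrator": the model delegates math to a tool.
# The message format and tool name are made up for illustration only.
import ast
import operator

# Safe arithmetic evaluator: only these binary operators are allowed.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculator(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def handle_model_output(message: dict) -> str:
    """Dispatch a (hypothetical) tool call emitted by the model."""
    if message.get("tool") == "calculator":
        return str(calculator(message["input"]))
    return message.get("text", "")

# Asked "what is 127 * 49?", the model delegates instead of guessing:
print(handle_model_output({"tool": "calculator", "input": "127 * 49"}))  # → 6223
```

The point of the sketch: the language model only has to produce the right structured request; exact arithmetic is handled by ordinary deterministic code, which is exactly what computers are already good at.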
What did you think about the end - the different scenarios for AI growth and improvement?
Very insightful read. I have recommended it numerous times on my platforms. Thank you for your wonderful rendition
How can I help?
But you have to ask, why would a Large Language Model be expected to be good at math?!? That makes zero sense. Language is the focus, not math 😂. Do we have any Large Math Models (LMMs) strictly for math?