Benjamin Blehm’s Post


Experienced Development Lead | Machine Learning & AI Enthusiast | Systems & Physics Background

Great series on neural nets, how they learn, and even GPTs and attention. Strongly recommended for anyone who likes clear, math-based explanations. It helped me understand that a GPT first tokenizes language and embeds each token as a vector in a high-dimensional space (thousands of dimensions). Meaning then lives in directions: in "big, juicy hamburger", the word "hamburger" points a certain way in that space, and attention lets "big" and "juicy" subtly rotate it, leaving a single vector that carries the meaning of the whole phrase. How the vector moves depends on the directions of "big" and "juicy" themselves. Link is to the 1st in the series, but s3e5 is the GPT one, which is great. https://lnkd.in/eb-tFKNJ
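To make the "pointing and nudging" idea concrete, here's a toy sketch in just 4 dimensions with made-up vectors (real models use thousands of learned dimensions, and attention is a weighted mixing step, not a simple addition) - purely to illustrate how context words can shift a token's direction:

```python
import math

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    return [c * x for x in v]

def cosine(u, v):
    # Cosine similarity: 1.0 means "pointing the same way".
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# Made-up 4-dim embeddings, purely illustrative.
hamburger = [1.0, 0.2, 0.0, 0.1]   # base "hamburger" direction
big       = [0.0, 1.0, 0.0, 0.0]   # "big" points along a size-ish axis
juicy     = [0.0, 0.0, 1.0, 0.0]   # "juicy" points along a texture-ish axis

# Very roughly, attention lets the adjectives nudge the noun's vector,
# so the final "hamburger" vector also carries some "big" and "juicy".
context_hamburger = add(add(hamburger, scale(0.3, big)), scale(0.3, juicy))

# The nudged vector still mostly points the "hamburger" way,
# but now leans measurably toward "big" and "juicy" as well.
print(cosine(context_hamburger, hamburger))
print(cosine(context_hamburger, big) > cosine(hamburger, big))
print(cosine(context_hamburger, juicy) > cosine(hamburger, juicy))
```

The single vector `context_hamburger` is the toy analogue of the video's point: one direction in space that encodes the whole phrase, shifted from plain "hamburger" by its modifiers.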

But what is a neural network? | Deep learning chapter 1


