It seems a lot of people want a primer on AI. So here's a really good series of videos by a strong maths fellow explaining things in a very simple, very clear way. https://lnkd.in/ezHh4dct 3Blue1Brown, in case you're not familiar, does a lot of really good deep dives on all kinds of math problems; transformers, neural networks, and deep learning are only one small part of the channel's content. It will also give you a clue as to why I say machines are thinking now. A lot of people have simplified their understanding of LLMs as "next token predictors," which masks what's going on there. Markov chains are next token predictors. Transformers do, yes, end by giving a list of tokens and the likelihood of each, but getting to that point requires a lot of sophisticated thought. To grok that, watch this video series and get a sense of the complexity that's hidden underneath transformers. This does not cover diffusion models, which are thinking machines in a different sense. If I find a good video series on those, I'll post it.
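To make that contrast concrete, here's a toy sketch (my own illustration, not from the videos) of a Markov chain as a literal next-token predictor: it only counts which token follows which, with no notion of meaning at all.

```python
import random
from collections import Counter, defaultdict

def train_markov(tokens):
    """Count, for each token, how often each other token follows it."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def next_token_distribution(counts, token):
    """Turn the raw follow-counts for a token into probabilities."""
    following = counts[token]
    total = sum(following.values())
    return {t: c / total for t, c in following.items()}

# tiny hypothetical corpus, just to show the mechanics
corpus = "the cat sat on the mat and the cat slept".split()
model = train_markov(corpus)
print(next_token_distribution(model, "the"))  # "cat" is twice as likely as "mat"
```

A transformer also produces a probability distribution over next tokens at the very end, but it builds that distribution by attending over the entire context, not by looking up a frequency table like this.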
Tim Ellis’ Post
More Relevant Posts
-
What if I told you Artificial Intelligence was just math? Well, Neural Networks, a fundamental component of AI, are somewhat like giant mathematical models. Now you're probably expecting a giant, incomprehensible expression with more Greek symbols and letters than numbers. Which is true, to an extent, but with a little work, it can be a lot simpler than that. If we look closer, and deeper, into Perceptrons, made of Layers, made of Neurons, made of simple mathematical transformations of a single input, we find that on an atomic level, the simple operations involving inputs, weights, biases and activations are humanly understandable. Of course, in the modern world of computing, abstraction can be both a blessing and a curse. One may turn to PyTorch for its tensors and nth-dimensional transformation methods, but if one truly wishes to break down the "black boxes" of Neural Nets, they must build them themselves. Step by step. From scratch. That being said, a good way to do this is with Andrej Karpathy's atomic-to-multicellular introduction to Neural Networks and Backpropagation: https://lnkd.in/ezAMbXXW
The spelled-out intro to neural networks and backpropagation: building micrograd
https://www.youtube.com/
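To show just how humanly understandable that atomic level is, here is a single neuron in plain Python (my own toy numbers, not Karpathy's): a weighted sum of the inputs, plus a bias, passed through an activation.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus a bias, then an activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(z)  # tanh squashes the result into (-1, 1)

# hypothetical numbers, chosen only to make the arithmetic easy to follow:
# z = 1.0*0.5 + (-2.0)*0.25 + 0.1 = 0.1, then tanh(0.1) ≈ 0.0997
out = neuron([1.0, -2.0], [0.5, 0.25], 0.1)
print(out)
```

Stack neurons into layers, layers into a network, and you have the whole "giant mathematical model"; nothing at any level is more exotic than this.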
-
A well-simplified, clearly explained, and complete video playlist for anyone interested in better understanding the mathematical fundamentals powering a lot of the recent AI models. It starts off with the basics of a Neural Network and finishes up talking about Transformer models & Attention mechanisms. https://lnkd.in/gVBzk6Wt
Neural networks
youtube.com
-
🧠 Demystifying Neural Networks: A Deep Dive into the "Black Box" 🔍 Ever wondered how neural networks actually work? I've just published a comprehensive guide on Medium that breaks down these powerful algorithms into easy-to-understand concepts. In this article, you'll discover:
• The basic structure of neural networks
• How they process data, step by step
• A simple, calculable example of a neural network in action
• Real-world applications and limitations
Whether you're a curious beginner or a seasoned pro looking to refresh your knowledge, this post offers valuable insights into the world of AI and machine learning. 👉 Read the full article here: https://lnkd.in/g2QNVPH2 #MachineLearning #ArtificialIntelligence #DataScience #NeuralNetworks Let's discuss: What's your experience with neural networks? How do you see them shaping the future of technology?
Neural Networks Explained: Inside the Black Box
link.medium.com
-
This interesting new tool may help us finally understand why neural networks make the decisions they do. It would be wonderful if the theory could catch up to the engineering behind these powerhouse ML systems.
Google DeepMind has a new way to look inside an AI’s “mind”
technologyreview.com
-
Thoroughly enjoyed learning about neural networks and exploring various models.
Mohideen Majeeth has successfully completed a project on Extraa Learn as a part of AIFL-International at Great Learning
olympus.mygreatlearning.com
-
I'm excited to share my latest Medium article: "Backpropagation Visuals: Neural Networks Don’t Have to Be a Black Box". In this post, I dive into how visualizing the backpropagation process can demystify neural networks and provide clearer insight into their inner workings.
Backpropagation Visuals: Neural Networks Don’t Have to Be a Black Box
link.medium.com
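The core idea is small enough to see whole. Here is a hypothetical one-weight "network" (my own example, not taken from the article): backpropagation is just the chain rule, and we can check the hand-derived gradient against a finite-difference estimate.

```python
def loss(w, x=2.0, y=1.0):
    """Squared error of a one-weight 'network': (w*x - y)**2."""
    return (w * x - y) ** 2

def analytic_grad(w, x=2.0, y=1.0):
    """Chain rule by hand: d/dw (w*x - y)**2 = 2*(w*x - y)*x."""
    return 2 * (w * x - y) * x

def numeric_grad(w, h=1e-6):
    """Finite-difference check: (loss(w+h) - loss(w-h)) / (2h)."""
    return (loss(w + h) - loss(w - h)) / (2 * h)

w = 0.3
print(analytic_grad(w), numeric_grad(w))  # the two estimates should agree
```

Backpropagation in a real network is this same chain-rule bookkeeping repeated across millions of weights, which is exactly why visualizing it on small examples works.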
-
Great series on neural nets, how they learn, and even GPTs and attention. Strongly recommend for anyone interested who likes clear, math-based explanations. It helped me understand that GPTs essentially take a tokenized language, turn the tokens/words into vectors in a high-dimensional space (GPT-3 uses 12,288 dimensions), and then map meaning directionally: in "big, juicy hamburger," the word "hamburger" points a certain direction in that space, and is then subtly rotated by "big" and "juicy," leaving a single vector that carries the whole meaning of "big, juicy hamburger." The final direction depends on the directions of "big" and "juicy." Link is to the 1st in the series, but s3e5 is the GPT one, which is great. https://lnkd.in/eb-tFKNJ
But what is a neural network? | Deep learning chapter 1
https://www.youtube.com/
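A deliberately tiny sketch of that intuition (hypothetical 3-dimensional vectors; real embeddings have thousands of components, and the adjustments the video describes are richer than the simple addition used here):

```python
def add_vectors(*vecs):
    """Sum vectors component-wise: a toy stand-in for combining meanings."""
    return [sum(components) for components in zip(*vecs)]

# made-up 3-dimensional 'embeddings', purely for illustration
hamburger = [0.9, 0.1, 0.0]   # the noun points in some base direction
big       = [0.0, 0.5, 0.0]   # each adjective nudges that direction
juicy     = [0.0, 0.0, 0.4]

phrase = add_vectors(hamburger, big, juicy)
print(phrase)  # one vector now carries 'big, juicy hamburger'
```

The point is that the phrase ends up as a single direction in the space, and that direction is a function of the directions of all three words.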
-
This video gives a great introduction to neural networks: https://lnkd.in/gjpckTYa
But what is a neural network? | Deep learning chapter 1
https://www.youtube.com/
-
Our soc med guy steps in front of the camera for the first time! For a primer on AI types. There's a lot of alphabet soup going around. LLMs, CNNs, GANs: if you're confused about what it all means, this is for you. Stay tuned for Part 2 :) Nerdy note: a lot was simplified to fit in a 1-minute video. An oversimplification is that image generation AIs like Midjourney can be called Convolutional Neural Networks. They use CNN models for part of their construction, but they are more commonly referred to as diffusion models, since diffusion is the final step of the process. Diffusion means gradually refining an image, starting from random noise. ◻️ ➡️ 🌇 We might do a deeper dive on diffusion later. Let us know if you want more formats like this.
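For the extra-nerdy, here's a toy sketch of that refinement loop (purely illustrative: a real diffusion model uses a trained neural network to predict and remove noise at each step, while this stand-in just nudges each pixel toward a known target image).

```python
import random

def toy_denoise(noisy, target, steps=50, rate=0.2):
    """Toy 'diffusion' loop: start from noise and nudge every pixel a
    little toward the target each step. In a real model, a trained
    network plays the role that the target plays here."""
    pixels = list(noisy)
    for _ in range(steps):
        pixels = [p + rate * (t - p) for p, t in zip(pixels, target)]
    return pixels

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(4)]  # ◻️ random start
image = [0.8, 0.2, 0.5, 0.9]                       # 🌇 hypothetical 4-pixel 'image'
result = toy_denoise(noise, image)
print(result)  # after 50 small steps, the noise has resolved into the image
```

The key property it shares with real diffusion: each individual step makes only a small change, and the picture emerges from many such steps applied to pure noise.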