Thoroughly enjoyed learning about neural networks and exploring various models.
-
What if I told you Artificial Intelligence was just math? Well, neural networks, a fundamental component of AI, are essentially giant mathematical models. Now you're probably expecting a giant, incomprehensible expression with more Greek symbols and letters than numbers. That is true, to an extent, but with a little work it can be a lot simpler than that. If we look closer, and deeper, into perceptrons, made of layers, made of neurons, made of simple mathematical transformations of their inputs, we find that at the atomic level the operations involving inputs, weights, biases and activations are humanly understandable. Of course, in the modern world of computing, abstraction can be both a blessing and a curse. One may turn to PyTorch for its tensors and n-dimensional transformation methods, but if one truly wishes to break open the "black boxes" of neural nets, one must build them oneself. Step by step. From scratch. A good way to do this is with Andrej Karpathy's atomic-to-multicellular introduction to Neural Networks and Backpropagation: https://lnkd.in/ezAMbXXW
The spelled-out intro to neural networks and backpropagation: building micrograd
https://www.youtube.com/
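To make that atomic level concrete, here is a minimal sketch of the single-neuron operation the post describes: inputs times weights, plus a bias, through an activation. The values are illustrative, not taken from the tutorial:

```python
import math

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed by a tanh activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # w . x + b
    return math.tanh(z)                           # nonlinearity

# Example: two inputs with hand-picked (illustrative) weights and bias
x = [2.0, 3.0]
w = [-1.5, 0.5]
b = 1.0
print(neuron(x, w, b))  # a single number in (-1, 1)
```

Everything larger, layers, perceptrons, whole networks, is this operation repeated and composed.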
-
The research paper from Microsoft Research on self-improving language models through post-training, using what they call "Direct Nash Optimization", is very promising; the overall approach is close to the kind of method that will become a reference in the sector. A similar approach that incorporates Bayesian neural networks could be combined with these kinds of algorithmic reinforcement learning techniques 👀 After all, the goal of any language model is to deal with the credence we give to a given outcome 😅 #DNO #MicrosoftResearchForum
-
Hello guys, I just started learning about Graph Convolutional Neural Networks in the context of anti-money laundering. I found this channel, and it gives a clear, easy-to-understand explanation of graph neural networks. I'm attaching the playlist link to this post. I hope it helps you. Thanks https://lnkd.in/g_YBwT-E
Graph Neural Networks - YouTube
youtube.com
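For a sense of what one graph-convolution layer actually computes, here is a minimal numpy sketch in the style of the common Kipf-and-Welling GCN formulation (the videos may present variants), with a toy three-node graph standing in for a transaction network; all names and values are illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolutional layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W   # aggregate neighbours
    return np.maximum(H_next, 0)                       # ReLU

# Toy transaction graph: 3 accounts, edges 0-1 and 1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)        # 4 features per account
W = np.random.randn(4, 2)        # project to 2 hidden features
print(gcn_layer(A, H, W).shape)  # (3, 2)
```

Each node's new features are a normalized mix of its own and its neighbours' features, which is why these models suit relational problems like money-laundering detection.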
-
Learnt from this tutorial by Andrej Karpathy; it covers the basics of deep learning and how neural networks are built and trained from the ground up. Tutorial: https://lnkd.in/gtZ2FH-S More details on my GitHub repo: https://lnkd.in/gScRdzZp
The spelled-out intro to neural networks and backpropagation: building micrograd
https://www.youtube.com/
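In the spirit of the micrograd engine the tutorial builds, here is a stripped-down scalar autograd sketch (a simplified illustration, not Karpathy's actual code): each Value remembers how it was computed so backward() can apply the chain rule:

```python
class Value:
    """A scalar that records its computation graph for backprop."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad         # d(a+b)/da = 1
            other.grad += out.grad        # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological order, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
loss = a * b + a        # loss = ab + a
loss.backward()
print(a.grad, b.grad)   # dL/da = b + 1 = -2.0, dL/db = a = 2.0
```

Note how a's gradient accumulates from both places it is used; that accumulation is exactly what backpropagation automates.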
-
It seems a lot of people want a primer on AI. So here's a really good series of videos by a strong maths fellow explaining things in a very simple, very clear way. https://lnkd.in/ezHh4dct 3Blue1Brown, in case you're not familiar, does a lot of really good deep dives on all kinds of math problems; transformers, neural networks, and deep learning are only one small part of the channel's content. It will also give you a clue as to why I say machines are thinking now. A lot of people have simplified their understanding of LLMs as "next token predictors", which masks what's going on there. Markov chains are next token predictors. Transformers do, yes, end by producing a list of tokens and the likelihood of each, but getting to that point requires a lot of sophisticated thought. To grok that, watch this video series and get a sense of the complexity hidden underneath transformers. This does not cover diffusion models, which are thinking machines in a different sense. If I find a good video series on those, I'll post it.
Neural networks
youtube.com
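To make the contrast concrete, here is a minimal bigram Markov chain, a true next-token predictor that sees only the single previous token; a transformer, by contrast, attends over the entire context before emitting its token probabilities. The toy corpus is illustrative:

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ran".split()

# A bigram Markov model: estimate P(next | current) by counting pairs
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def next_token(cur):
    """Sample the next token given only the single previous token."""
    options = counts[cur]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_token("the"))  # "cat" (2/3) or "mat" (1/3) -- no wider context used
```

The Markov chain's entire "understanding" is that count table; everything the videos explain about attention and embeddings is what a transformer does on top of this bare output interface.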
-
🌟 Curious about Neural Networks? Start Here! 🌟 If you've been wanting to dig into the world of neural networks but weren't sure where to start, I've created two videos 🎥 that break down the basics in a simple, easy-to-understand way. I've highlighted the key concepts and used clear, accessible language to make sure anyone can follow along. I'm confident that after watching, you'll have a strong grasp of neural networks. 🚀 Video 1: https://lnkd.in/gcEBdg6x Video 2: https://lnkd.in/g--jZn2V #NeuralNetworks #AI #MachineLearning #DeepLearning #TechEducation
Neural Networks for First-Timers: Simple and Clear
https://www.youtube.com/
-
This week, I focused on studying neural networks from the ground up. Building a neural network from scratch without using any libraries was an amazing experience. It made me realize how simple units can combine into a complex structure that can solve significant problems. Here's a summary of the learning process (a compact sketch of this loop follows after the video link below):
1. Initialize the neural network by assigning random values to weights and biases.
2. Forward pass: given an input, compute the predicted value and the loss.
3. Backpropagation: calculate the derivative of the loss with respect to all the weights and biases.
4. Update weights and biases: use the derivatives from the previous step to adjust the weights and biases so the loss decreases.
5. Repeat steps 2 to 4 until the loss is minimized.
Special thanks to Andrej Karpathy for his incredible video on neural networks. Source Code: https://lnkd.in/dEptQmZv Video Link: https://lnkd.in/dPN_dshw #ml #AI #datascience #neuralnetwork #deeplearning
The spelled-out intro to neural networks and backpropagation: building micrograd
https://www.youtube.com/
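As a rough illustration of steps 1 to 4, here is a compact numpy sketch for a single linear neuron on a toy regression task; it is a simplified stand-in, and the linked source code above is the authoritative version:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: learn y = 2x + 1
X = rng.uniform(-1, 1, size=(32, 1))
y = 2 * X + 1

# Step 1: initialize weights and bias randomly
w = rng.normal(size=(1, 1))
b = np.zeros(1)
lr = 0.1

for step in range(200):
    # Step 2: forward pass and loss (mean squared error)
    pred = X @ w + b
    loss = np.mean((pred - y) ** 2)

    # Step 3: backpropagation -- gradients of the loss w.r.t. w and b
    grad_pred = 2 * (pred - y) / len(X)
    grad_w = X.T @ grad_pred
    grad_b = grad_pred.sum(axis=0)

    # Step 4: update weights and bias to reduce the loss
    w -= lr * grad_w
    b -= lr * grad_b

print(w.item(), b.item(), loss)  # approaches 2, 1, and ~0
```

A full network repeats exactly this, only with many weights per layer and the chain rule carrying gradients back through each layer in turn.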
-
Machine learning models can be classified into two types: generative and discriminative models.

Generative models:
- These models learn P(X, Y), the joint probability distribution of the input features X and labels Y.
- Examples include generative adversarial networks (GANs), variational autoencoders (VAEs), and the decoder part of transformers.

Discriminative models:
- These models learn the conditional probability distribution P(Y|X), predicting labels Y given input features X.
- Examples include logistic regression, the encoder part of transformers, support vector machines (SVMs), and neural networks such as feedforward or convolutional neural networks (CNNs).
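A toy contrast of the two, assuming 1-D Gaussian data purely for illustration: the generative model estimates P(X, Y) = P(Y)P(X|Y) and classifies via Bayes' rule, while the discriminative logistic model fits P(Y|X) directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1), equal priors
x0 = rng.normal(-1, 1, 100)
x1 = rng.normal(+1, 1, 100)
X = np.concatenate([x0, x1])
Y = np.concatenate([np.zeros(100), np.ones(100)])

# Generative: model P(X, Y) = P(Y) P(X|Y) with Gaussians, classify via Bayes
mu0, mu1 = x0.mean(), x1.mean()
def p_y1_given_x_generative(x, var=1.0):
    like0 = np.exp(-(x - mu0) ** 2 / (2 * var))   # P(x | y=0)
    like1 = np.exp(-(x - mu1) ** 2 / (2 * var))   # P(x | y=1)
    return like1 / (like0 + like1)                # Bayes' rule, equal priors

# Discriminative: fit P(Y|X) directly with logistic regression (gradient ascent)
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * X + b)))            # sigmoid(wx + b)
    w += 0.1 * np.mean((Y - p) * X)               # log-likelihood gradient
    b += 0.1 * np.mean(Y - p)

# Both routes give a similar probability for the same query point
print(p_y1_given_x_generative(0.5), 1 / (1 + np.exp(-(w * 0.5 + b))))
```

The generative route can also sample new X (it models the inputs themselves); the discriminative route only answers "which label?", which is why GANs and VAEs sit on one side and logistic regression and SVMs on the other.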
-
In supervised learning, a model learns from labeled data, where input-output pairs are provided. The goal is to predict the output (target) for new, unseen inputs. Common algorithms include linear regression, decision trees, and neural networks.
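A minimal sketch of that setup using the simplest algorithm named above, linear regression: fit on labeled input-output pairs, then predict for an unseen input (data values are illustrative):

```python
import numpy as np

# Labeled training data: inputs X with known targets y (roughly y = 3x - 2)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-2.1, 1.2, 3.9, 7.1])

# Fit by least squares: minimize ||Xb @ theta - y||^2
Xb = np.hstack([X, np.ones((len(X), 1))])     # add intercept column
theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict the target for a new, unseen input
x_new = np.array([[4.0]])
print(np.hstack([x_new, [[1.0]]]) @ theta)    # ~ 10
```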