This week, I focused on studying neural networks from the ground up. Building a neural network from scratch, without using any libraries, was an amazing experience. It made me realize how simple units can combine into a complex structure that can solve significant problems. Here’s a summary of the learning process:
1. Initialize the neural network by assigning random values to the weights and biases.
2. Forward pass: given an input, compute the network’s predicted value and the loss.
3. Backpropagation: calculate the derivative of the loss with respect to every weight and bias.
4. Update weights and biases: use the derivatives from the previous step to adjust the weights and biases so the loss decreases.
5. Repeat steps 2 to 4 until the loss is minimized.
Special thanks to Andrej Karpathy for his incredible video on neural networks.
Source Code: https://lnkd.in/dEptQmZv
Video Link: https://lnkd.in/dPN_dshw
#ml #AI #datascience #neuralnetwork #deeplearning
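To make those five steps concrete, here is a minimal from-scratch sketch in plain Python/NumPy. It is not the post's linked source code, just an illustrative toy: a tiny two-layer network learning XOR, with made-up layer sizes and learning rate.

```
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize weights and biases randomly
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Step 2: forward pass -- prediction and loss
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y_hat = sigmoid(h @ W2 + b2)      # predictions
    loss = np.mean((y_hat - y) ** 2)  # mean squared error

    # Step 3: backpropagation -- dLoss/dParameter via the chain rule
    d_yhat = 2 * (y_hat - y) / len(X)
    d_z2 = d_yhat * y_hat * (1 - y_hat)       # sigmoid'(z) = s(z) * (1 - s(z))
    dW2 = h.T @ d_z2; db2 = d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)
    dW1 = X.T @ d_z1; db1 = d_z1.sum(axis=0)

    # Step 4: nudge every parameter against its gradient to reduce the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Step 5 is the loop itself: steps 2-4 repeat until the loss is small
print(loss, y_hat.round(3).ravel())
```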
Aniket Mishra’s Post
More Relevant Posts
-
Being new to AI/ML, I discovered it's impossible to get an idea of neural networks from 10-minute tutorials ❌

LUCKILY, during this learning process it is inevitable that you discover Andrej Karpathy and his resources. He is the exact manifestation of this famous quote:

"𝙄𝙛 𝙮𝙤𝙪 𝙘𝙖𝙣'𝙩 𝙚𝙭𝙥𝙡𝙖𝙞𝙣 𝙞𝙩 𝙩𝙤 𝙖 𝙨𝙞𝙭 𝙮𝙚𝙖𝙧 𝙤𝙡𝙙, 𝙮𝙤𝙪 𝙙𝙤𝙣'𝙩 𝙪𝙣𝙙𝙚𝙧𝙨𝙩𝙖𝙣𝙙 𝙞𝙩 𝙮𝙤𝙪𝙧𝙨𝙚𝙡𝙛."

Boy, can he explain it well. So if you are lost, confused, and keep putting off getting started with AI, check out this amazing video about neural networks. It's a roughly two-hour video that stands between having no idea and actually making sense of neural networks.

What other resources would you recommend for getting started with ML?

https://lnkd.in/dTdWmvPp

#techcareergrowth #ai #machinelearning #learning #neuralnetworks #coding

----------------------------
👍 Like | ♻️ Repost if you loved it.
➕ Follow me for daily content, Archil Sharashenidze
The spelled-out intro to neural networks and backpropagation: building micrograd
-
What is Forward Propagation in Neural Networks?

"Feedforward neural networks stand as foundational architectures in deep learning. Neural networks consist of an input layer, at least one hidden layer, and an output layer. Each node is connected to nodes in the preceding and succeeding layers with corresponding weights and thresholds." - GeeksforGeeks

Check out my Git repository for the easiest example of forward propagation: https://lnkd.in/gxT_ynrU

#MachineLearning #ML #NeuralNetwork #ForwardPropagation
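Since the post is about the mechanics, here is a minimal sketch of a forward pass through exactly that input -> hidden -> output structure. This is not the linked repository's code; the layer sizes and ReLU activation are assumptions for illustration.

```
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, params):
    """Propagate an input through input -> hidden -> output layers."""
    W1, b1, W2, b2 = params
    hidden = relu(x @ W1 + b1)  # weighted sum plus bias (the 'threshold'), then activation
    return hidden @ W2 + b2     # output layer, kept linear here

rng = np.random.default_rng(42)
params = (rng.normal(size=(3, 5)), np.zeros(5),  # input (3) -> hidden (5)
          rng.normal(size=(5, 2)), np.zeros(2))  # hidden (5) -> output (2)

print(forward(np.array([0.5, -1.0, 2.0]), params))
```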
-
I just finished Andrej Karpathy's incredible deep dive into neural networks and backpropagation: The spelled-out intro to neural networks and backpropagation: building micrograd 💡💻

If you're looking to truly grasp the magic behind machine learning, this video is a must-watch! To make the experience even better, I took some notes and published them on Medium. Check them out, grab a cup of coffee, and enjoy! ☕️

A gentle introduction: https://lnkd.in/gWQQ5G8A
It is automatic!: https://lnkd.in/gn4Ab3v5
Neural networks!: https://lnkd.in/g9eqfJ3M

Let me know what you think of Karpathy's explanation! #neuralnetworks #backpropagation #machinelearning #deeplearning #artificialintelligence
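For anyone deciding whether to commit the two hours: the "It is automatic!" part refers to automatic differentiation, the heart of micrograd. Here is a heavily condensed sketch of that core idea, a scalar Value that records how it was computed so gradients can flow backwards. It is my own simplification, not Karpathy's exact code.

```
import math

class Value:
    """A scalar that remembers its inputs, so the chain rule can run backwards."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._children = children

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad               # d(a+b)/da = 1
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t ** 2) * out.grad  # d tanh(x)/dx = 1 - tanh(x)^2
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# A single neuron, out = tanh(w*x + b), with gradients for free
x, w, b = Value(2.0), Value(-0.5), Value(0.1)
out = (w * x + b).tanh()
out.backward()
print(out.data, w.grad, x.grad)
```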
-
What if I told you Artificial Intelligence was just math? Well, Neural Networks, a fundamental component of AI, are somewhat like a giant mathematical model. Now you're probably expecting a giant, incomprehensible expression with more Greek symbols and letters than numbers. That's true, to an extent, but with a little work it can be a lot simpler.

If we look closer, and deeper, into Perceptrons, made of Layers, made of Neurons, made of simple mathematical transformations of a single input, we find that at the atomic level, the simple operations involving inputs, weights, biases and activations are humanly understandable.

Of course, in the modern world of computing, abstraction can be both a blessing and a curse. One may turn to PyTorch for its tensors and n-dimensional transformation methods, but whoever truly wishes to break down the "black boxes" of Neural Nets must build them themselves. Step by step. From scratch.

That being said, a good way to do this is with Andrej Karpathy's atomic-to-multicellular introduction to Neural Networks and Backpropagation: https://lnkd.in/ezAMbXXW
The spelled-out intro to neural networks and backpropagation: building micrograd
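The video builds everything up from exactly those atomic operations. As a taste, here is the arithmetic of a single neuron in plain Python, with made-up numbers; a sketch of the concept, not code from the video.

```
import math

def neuron(inputs, weights, bias):
    """One neuron: multiply, sum, shift by the bias, squash with an activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(z)  # the activation keeps the output in (-1, 1)

# Humanly understandable, one operation at a time
print(neuron(inputs=[1.0, -2.0, 3.0], weights=[0.5, 0.25, -0.1], bias=0.2))
```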
-
Excited to share my latest YouTube video on Convolutional Neural Networks (CNNs)! 🎥 Whether you're a fresher diving into the world of AI/ML or an experienced professional looking to expand your knowledge, this video is for you. Click the link below to watch and don't forget to like, comment, and subscribe for more insightful content! #ai #ml #tensorflow #convolutionalneuralnetworks https://lnkd.in/dGYEX-W5
What is CNN, and how does it work | Easy explanation of Convolutional Neural Network
-
🌟 Curious about Neural Networks? Start Here! 🌟 If you've been wanting to dig into the world of neural networks but weren't sure where to start, I've created two videos 🎥 that break down the basics in a simple, easy-to-understand way. I've highlighted the key concepts and used clear, accessible language to make sure anyone can follow along. I'm confident that after watching, you'll have a strong grasp of neural networks. 🚀 Video 1: https://lnkd.in/gcEBdg6x Video 2: https://lnkd.in/g--jZn2V #NeuralNetworks #AI #MachineLearning #DeepLearning #TechEducation
Neural Networks for First-Timers: Simple and Clear
-
Have you ever wondered what really happens in those hidden layers of a neural network? I was curious to decipher the "black box", so I visualized the inner workings of neural networks by plotting the activation values of the hidden layers. The patterns revealed were fascinating!

Using the classic MNIST dataset of handwritten digits, I was able to glimpse how images are encoded and transformed through each layer. The MNIST dataset contains thousands of tiny 28x28-pixel images of the digits 0-9. Even with such simple inputs, the features emerging in the hidden layers showed how neural networks automatically learn representations of data at increasing levels of complexity.

Source Code: https://lnkd.in/gGkSR-Mt

Feel free to share your perspectives! Together we can build on these insights.

#NeuralNetworks #DeepLearning #DataVisualization #MachineLearning #MNISTDataset #ANN #CNN #DataScience #PatternRecognition #HiddenLayers #ModelInterpretation #AIInsights #ImageProcessing #FeatureExtraction #DataRepresentation
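For anyone who wants to try the same experiment, here is one way to tap and plot hidden-layer activations. This is a sketch of the approach, not the linked source code; it assumes TensorFlow/Keras and matplotlib are available, and the layer sizes are arbitrary.

```
import matplotlib.pyplot as plt
from tensorflow import keras

# Load MNIST and flatten the 28x28 images into 784-long vectors
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# A small functional model, so intermediate layers are easy to tap
inputs = keras.Input(shape=(784,))
h1 = keras.layers.Dense(64, activation="relu", name="hidden1")(inputs)
h2 = keras.layers.Dense(32, activation="relu", name="hidden2")(h1)
outputs = keras.layers.Dense(10, activation="softmax")(h2)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

# A second model exposing the hidden activations for any input
tap = keras.Model(inputs, [h1, h2])
a1, a2 = tap.predict(x_train[:1], verbose=0)

# Plot each hidden layer's activation values as a heat strip
fig, axes = plt.subplots(2, 1, figsize=(8, 3))
for ax, acts, name in zip(axes, (a1, a2), ("hidden1", "hidden2")):
    ax.imshow(acts, aspect="auto", cmap="viridis")
    ax.set_ylabel(name)
    ax.set_yticks([])
axes[-1].set_xlabel("neuron index")
plt.show()
```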
-
Unlock the Power of Convolutional Neural Networks! 🚀 As a data enthusiast, I'm always eager to dive deeper into the mechanics of cutting-edge technologies. Recently, I stumbled upon an exceptional resource that demystifies Convolutional Neural Networks (CNNs) in a visually stunning way. If you're a fellow visual learner, you'll appreciate the clarity and precision of this explanation. Take your understanding of CNNs to the next level and discover the secrets behind their remarkable image recognition capabilities! A huge thank you to the creator of this resource for making complex concepts accessible and engaging! Link: https://lnkd.in/gbFsF5X7 #AI #MachineLearning #DataScienceCommunity
-
Followed along with this tutorial by Andrej Karpathy. It covers the basics of deep learning and how neural networks are built and trained from the ground up.

Tutorial - https://lnkd.in/gtZ2FH-S
More details on my GitHub repo - https://lnkd.in/gScRdzZp
The spelled-out intro to neural networks and backpropagation: building micrograd
-
🎥 Excited to share my latest video on the IBM Technology YouTube channel, where I explain the fundamentals of Recurrent Neural Networks (RNNs)! 🤖🧠 RNNs are a key building block in deep learning, especially when working with sequences like time-series data, language models, and more. Whether you’re new to neural networks or just looking to brush up, this video provides a high-level overview of how RNNs work and where they shine. 💡 Check it out and let me know your thoughts! 🔗 Watch here: https://lnkd.in/geq-fGpb #DeepLearning #RecurrentNeuralNetworks #RNN #AI #MachineLearning #IBM #TechExplained #NeuralNetworks #ArtificialIntelligence #TechEducation #IBMTech #ML
The Power of Recurrent Neural Networks (RNN)
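The video stays high-level, so as a complement, here is the recurrence at the heart of a vanilla RNN in a few lines of NumPy. The sizes and random sequence are made up; this is a sketch of the mechanism, not material from the video.

```
import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, b):
    """One RNN step: mix the new input with the previous hidden state."""
    return np.tanh(x_t @ Wxh + h_prev @ Whh + b)

rng = np.random.default_rng(7)
input_size, hidden_size = 4, 8
Wxh = rng.normal(scale=0.1, size=(input_size, hidden_size))
Whh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

# Feed a short made-up sequence; the hidden state carries context forward
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):  # 5 time steps
    h = rnn_step(x_t, h, Wxh, Whh, b)
print(h.round(3))
```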