Implemented 10171259_credit_risk_prediction_based_on_DL_and_SMOTE. Journal used for the implementation: "Proposal of a model for credit risk prediction based on deep learning methods and SMOTE techniques for imbalanced dataset," an IEEE paper published in 2021. The model was implemented using a neural network: https://lnkd.in/dGyxWJvt
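Since the repository itself isn't reproduced here, the following is a minimal sketch of the paper's general approach, assuming a tabular credit dataset with a binary default label: SMOTE (via imbalanced-learn) to rebalance the classes, followed by a small Keras network. The synthetic data, layer sizes, and hyperparameters are illustrative assumptions, not taken from the paper.

```
# Minimal sketch: SMOTE oversampling + a small dense network.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Hypothetical imbalanced credit dataset: 20 features, 5% defaults
X = np.random.rand(1000, 20)
y = np.array([0] * 950 + [1] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42)

# Oversample the minority class on the training split only
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_res, y_res, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```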
Pushkar Verma’s Post
More Relevant Posts
-
Recently, I’ve become interested in neural networks. Although I have studied them in past classes and used them in personal projects through machine learning libraries, I realized I didn’t really understand what was happening under the hood. So I built a neural network from scratch using only Python. This was a great learning experience since it forced me to understand each aspect of a neural network in depth. https://lnkd.in/gViTdP5B

The network I built has 4 layers (2 hidden layers) and is designed for multi-class classification on handwritten digits (0-9). I trained the network on the MNIST training dataset, which enabled it to achieve 96% accuracy. Additionally, in order to implement backpropagation I had to compute the gradients for all the weights and biases of the network. For those interested, those calculations can be seen here: https://lnkd.in/gzUKTCqd

Finally, as a practical exercise I created a demo website where you can draw a digit and have the model predict what you drew. Feel free to try it out here: https://lnkd.in/gXNRcRNU The website’s model doesn’t quite achieve 96% accuracy since the MNIST dataset only provides examples of centered, handwritten digits. That isn’t ideal for this use case, since users can draw anywhere in the frame and have to use a trackpad/mouse instead of a pencil/pen. To help manage this, I augmented the training data through various interpolations, which I discuss in more depth at the bottom of the README.

Overall, this was a great experience, and I’ve learned that building things from scratch is a great way to learn!
GitHub - anshvijay28/NeuralNetwork: Building a multi-layer Neural Network from scratch
github.com
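For readers who want the flavor of the from-scratch approach, here is a minimal sketch of one training step for a two-layer network in NumPy: forward pass, backpropagated gradients, and an SGD update. This is not the author's code; the layer sizes and learning rate are illustrative.

```
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network with MNIST-like shapes: 784 -> 64 -> 10
W1 = rng.normal(0, 0.01, (784, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.01, (64, 10));  b2 = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def step(X, Y, lr=0.1):  # Y is one-hot, shape (batch, 10)
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)   # ReLU hidden layer
    p = softmax(h @ W2 + b2)         # class probabilities
    # Backward pass (softmax + cross-entropy gradient is p - Y)
    dz2 = (p - Y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (h > 0)               # ReLU gradient
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    # SGD update, in place
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad
```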
-
I built a neural network using NumPy as a means of building foundational knowledge in deep learning. Surprisingly, understanding the ins and outs of one is more mathematically dense than programming one is. Check out my article below if you're curious: https://lnkd.in/eqre2tqU
A Neural Network with Pure NumPy
medium.com
-
Building a Convolutional Neural Network (CNN) from scratch involves several steps:

1. *Import necessary libraries*:
   - NumPy for numerical operations
   - TensorFlow or PyTorch for deep learning operations
2. *Define the CNN architecture*:
   - Specify the number of convolutional layers, pooling layers, and fully connected layers
   - Define the number of filters, kernel size, and activation functions for each layer
3. *Implement the convolutional layer*:
   - Define the convolution operation using NumPy/SciPy or the deep learning library
   - Apply activation functions (e.g., ReLU, sigmoid)
4. *Implement the pooling layer*:
   - Define the pooling operation (e.g., max pooling, average pooling)
5. *Implement the fully connected layer*:
   - Define the fully connected operation using NumPy or the deep learning library
   - Apply activation functions (e.g., ReLU, sigmoid)
6. *Implement the softmax output layer*:
   - Define the softmax operation for multi-class classification
7. *Compile the model*:
   - Specify the loss function, optimizer, and evaluation metrics
8. *Train the model*:
   - Provide training data and labels
   - Train the model using the optimizer and loss function
9. *Evaluate the model*:
   - Provide testing data and labels
   - Evaluate the model's performance using metrics (e.g., accuracy, precision, recall)

Here's a simple example using NumPy and SciPy. (NumPy has no built-in 2D convolution, ReLU, or optimizer classes, so those pieces are written by hand; SciPy supplies the convolution.)

```
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(0, x)

# Convolutional layer: convolve a single-channel image with one kernel
def conv2d(x, kernel, bias):
    return convolve2d(x, kernel, mode='same') + bias

# Pooling layer: non-overlapping max pooling (dims must divide evenly)
def max_pool2d(x, pool_size):
    h, w = x.shape
    return x.reshape(h // pool_size, pool_size,
                     w // pool_size, pool_size).max(axis=(1, 3))

# Fully connected layer
def fc(x, weights, bias):
    return x @ weights + bias

# Softmax output layer (numerically stabilized)
def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cnn_model(x, conv_params, fc_params):
    # Convolutional stages: conv -> ReLU -> pool
    for kernel, bias in conv_params:
        x = max_pool2d(relu(conv2d(x, kernel, bias)), pool_size=2)
    x = x.ravel()  # flatten for the fully connected stages
    # Fully connected stages, softmax on the final layer
    for i, (w, b) in enumerate(fc_params):
        x = fc(x, w, b)
        x = softmax(x) if i == len(fc_params) - 1 else relu(x)
    return x

# Loss function: cross-entropy against a one-hot label
def cross_entropy(y_pred, y_true):
    return -np.sum(y_true * np.log(y_pred + 1e-9))

# Training would require backpropagating through every layer above;
# a from-scratch gradient pass is omitted here for brevity:
# for epoch in range(10):
#     for x, y in train_data:
#         loss = cross_entropy(cnn_model(x, conv_params, fc_params), y)
#         ...compute gradients and update conv_params / fc_params...
```

Note that this is a simplified example; real-world CNNs require more complex architectures, regularization techniques, and optimization methods.
-
a good way to choose what model of machine learning you have to use
Here's a crisp summary of how to choose the right Machine Learning (ML) algorithm for your project in under 3 minutes and 29 seconds using Python:

Step 1: Define Your Problem (10 sec)
Identify the type of problem you're trying to solve: classification, regression, clustering, etc.

Step 2: Explore Your Data (30 sec)
Understand your dataset's features, size, and distribution to narrow down suitable algorithms.

Step 3: Choose an Algorithm (1 min 15 sec)
Select from popular Python libraries like Scikit-learn, TensorFlow, or PyTorch. Consider:
* Supervised Learning: Linear Regression, Decision Trees, Random Forest, SVM, KNN
* Unsupervised Learning: K-Means, Hierarchical Clustering, PCA
* Deep Learning: Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN)

Step 4: Evaluate and Refine (45 sec)
Use metrics like accuracy, precision, recall, and F1-score to evaluate your model's performance. Refine it by tuning hyperparameters, engineering features, or trying different algorithms. A quick cross-validation comparison, as in the sketch below, is often the fastest way to shortlist candidates.

That's it! By following these quick steps, you can choose the right ML algorithm for your project using Python. Please reshare with your #LinkedIn network to help others learn and grow! ✅ Follow Neelabh Pandey (2K+ Followers) to get more Data Science and ML learning material.
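As a concrete illustration of Step 4, here is a minimal sketch that shortlists a few Scikit-learn classifiers by cross-validated accuracy. The synthetic dataset and candidate models are illustrative assumptions, not part of the original post.

```
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a real labeled dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(random_state=42),
}

# Compare candidates with 5-fold cross-validated accuracy
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```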
-
Time series data has become increasingly important due to its prevalence in fields such as finance, economics, healthcare, and weather forecasting. Interested in time series forecasting using TensorFlow? Check out this tutorial covering the basics and building different models, including Convolutional and Recurrent Neural Networks (CNNs and RNNs). These models are adept at processing sequential data, making TensorFlow a valuable tool for predicting future trends from historical data across many fields. Whether you're new to TensorFlow or looking to expand your knowledge, this tutorial is a great resource. Grab a coffee and dive in! You can run the code in Google Colab and see how the different models work in real time here: https://lnkd.in/gycGP835. Follow the tutorial link to learn more: https://lnkd.in/gxixdWhe. #TensorFlow #TimeSeriesForecasting #NeuralNetworks #MachineLearning #DataScience
Time series forecasting | TensorFlow Core
tensorflow.org
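To give a feel for the tutorial's core idea, turning a series into (window, next value) training pairs and fitting a sequence model, here is a minimal sketch with a toy sine wave and a small LSTM. The window length, layer sizes, and data are illustrative assumptions, not code from the tutorial.

```
import numpy as np
import tensorflow as tf

# Toy univariate series; the tutorial uses real weather data
series = np.sin(np.arange(1000) * 0.1).astype("float32")

def make_windows(series, window, horizon=1):
    # Slice the series into (window, next-value) training pairs
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(series, window=24)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```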
-
📢 Article Alert! 📢 🚀 For Data Science & Machine Learning Enthusiasts! 🚀

Discover the relevance of custom dataset applications and hands-on CNN implementation, fostering adaptability and continuous learning in a dynamic field.

Title: Exploring Convolutional Neural Networks (CNN) on Custom Datasets: A Comprehensive Guide

Why CNN?
A: CNNs, short for Convolutional Neural Networks, are predominantly employed for image recognition tasks. Their efficacy lies in their ability to discern features within images while requiring significantly fewer parameters than traditional fully connected networks. By leveraging local connectivity and parameter sharing, CNNs excel at learning hierarchical representations of visual data.

Why Custom Data?
A: While existing datasets like MNIST are invaluable for learning, they often mask the real-world challenges of data acquisition and pre-processing. Custom datasets provide a practical playground for understanding the end-to-end process of handling real-world data. From sourcing and pre-processing images to labelling and model training, working with custom datasets offers invaluable insights for real-life projects.

Understanding CNN Layers: CNNs comprise various layers, each serving a distinct purpose. The convolutional layer, for instance, detects features within images by convolving learned filters over the input. These filters act as feature detectors, each specializing in capturing specific visual patterns, thus forming a collection of feature maps.

Activation Functions and Pooling: ReLU (Rectified Linear Unit) serves as the activation function in CNNs, introducing the non-linearity crucial for learning complex relationships within data. Max pooling, on the other hand, downsamples feature maps by selecting the maximum value from each pool, aiding invariant feature detection and reducing computational complexity.

Implementation with TFlearn: Utilizing TFlearn, a high-level deep learning library built atop TensorFlow, we demonstrate the implementation of a CNN on custom datasets. Through code snippets, we cover data acquisition, pre-processing, model construction, training, and evaluation.

🔗 Clone the GitHub repository: https://lnkd.in/e_qFndu9
🔗 Read the full article here: https://lnkd.in/emV97HNJ
🔗 Additional Resources:
TFlearn Documentation: http://tflearn.org/
Research Paper on Pooling Techniques: https://lnkd.in/e_5NjghW

If you have any queries or suggestions, feel free to leave them in the comments section. Happy coding!

#CNN #ConvolutionalNeuralNetwork #CustomDataset #MachineLearning #DataScience #TFlearn #TensorFlow #GoogleColab #ImageRecognition #DeepLearning #NeuralNetworks
CNN with custom dataset
medium.com
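Since the article's own snippets aren't reproduced here, the following is a minimal sketch of what a TFlearn CNN for a small custom dataset typically looks like. The input size, filter counts, and two-class output are illustrative assumptions, not the article's exact architecture.

```
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# Assumed 64x64 grayscale inputs and a two-class custom dataset
net = input_data(shape=[None, 64, 64, 1])
net = conv_2d(net, 32, 3, activation='relu')   # 32 filters, 3x3 kernels
net = max_pool_2d(net, 2)                      # 2x2 max pooling
net = conv_2d(net, 64, 3, activation='relu')
net = max_pool_2d(net, 2)
net = fully_connected(net, 128, activation='relu')
net = dropout(net, 0.5)                        # regularization
net = fully_connected(net, 2, activation='softmax')
net = regression(net, optimizer='adam', learning_rate=0.001,
                 loss='categorical_crossentropy')

model = tflearn.DNN(net)
# Assuming X_train (images) and y_train (one-hot labels) are prepared:
# model.fit(X_train, y_train, n_epoch=10, validation_set=0.1)
```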
-
I've just finished the first course of the Deep Learning Specialization. This is the next level up from the Machine Learning Specialization I took before.

What did this course teach me? It goes deeper into how to build machine learning models, especially neural networks and deep neural networks: the math behind forward and backward propagation, linear regression, logistic regression, linear algebra, statistics, calculus, and more. Now I know how to mimic what the TensorFlow and PyTorch frameworks do. The most difficult part was understanding backpropagation, which involves heavy calculus; the final programming assignment took me about 4 to 5 hours to finish.

Thank you to DeepLearning.AI and Coursera, and especially Prof. Andrew Ng, for providing this course. This time it isn't under Stanford University, but I found it far more difficult than the Machine Learning Specialization.

THIS IS NOT THE END: I still have 4 more courses to finish in the Deep Learning Specialization. After that, I'll put it into practice with some solo or team projects (I'm actually still looking for partners to build machine learning and deep learning algorithms together; I haven't found any yet, so if you're interested you can contact me through this platform). Then I'll go on to study Data Science and Data Engineering / Data Architecture in depth. The journey won't end soon; even if this learning road takes years to finish, I have no problem with that.

Notes: If building learning algorithms feels hard, especially backpropagation with its heavy calculus, don't worry: you will get used to it, and you'll eventually build functions and tools that automate that work. The hardest part of a machine learning project is gathering, engineering, and feeding appropriate data to the model. Maybe 1/6 of the time is spent building the algorithm, 1/2 on engineering the data, and the remaining 1/3 on testing and deploying.
Completion Certificate for Neural Networks and Deep Learning
coursera.org
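As a taste of the "math behind forward and backward propagation" the course covers, here is a minimal sketch of one gradient-descent step for logistic regression in NumPy. It is a generic illustration, not course material.

```
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_step(w, b, X, y, lr=0.1):
    m = len(y)
    # Forward propagation: predictions and cross-entropy cost
    a = sigmoid(X @ w + b)
    cost = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))
    # Backward propagation: gradients of the cost w.r.t. w and b
    dz = a - y
    dw = X.T @ dz / m
    db = dz.mean()
    # Gradient-descent update
    return w - lr * dw, b - lr * db, cost
```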
-
I just finished the first step of deep learning, which utilizes Keras and TensorFlow, two of the most powerful tools for this topic. Thrilled to see what I can do next!
Sukrit Prapaitrakool's Statement of Accomplishment | DataCamp
datacamp.com
-
Neural Network Model with Dropout

This example demonstrates how to build a neural network that includes dropout layers to prevent overfitting, ensuring the model generalizes well to unseen data.

```
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

# Hypothetical dataset dimensions
input_shape = (100,)  # 100 features
num_classes = 2       # Binary classification: normal (0) or anomaly (1)

# Building the neural network model
model = Sequential([
    Dense(128, activation='relu', input_shape=input_shape),
    Dropout(0.3),  # Dropout layer to prevent overfitting
    Dense(64, activation='relu'),
    Dropout(0.3),  # Another dropout layer for additional regularization
    Dense(32, activation='relu'),
    Dropout(0.3),  # Additional dropout layer
    Dense(num_classes, activation='softmax')  # Output layer for classification
])

model.compile(optimizer=Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Summary of the model architecture
model.summary()

# Assuming x_train, y_train, x_val, y_val are predefined datasets
# Example of fitting the model to the training data:
# model.fit(x_train, y_train, epochs=50, validation_data=(x_val, y_val))
```

Explanation of the Model Design

Dense Layers: The model starts with a dense layer of 128 neurons, followed by layers with decreasing numbers of neurons (64 and 32). This architecture allows the model to learn complex patterns in the data.

Dropout Layers: Dropout layers with a rate of 0.3 are placed after each dense layer except the output layer. This means that during training, 30% of the neurons' outputs are randomly ignored. This randomness helps prevent the model from becoming too dependent on any specific set of neurons and encourages it to learn more robust features.

Output Layer: The final dense layer uses the softmax activation function, suitable for this binary classification task. It outputs the probability distribution over the two classes (normal and anomaly).

Compilation: The model uses the Adam optimizer with a learning rate of 0.001, and it's trained to minimize the sparse categorical cross-entropy loss, which is appropriate for classification tasks with integer labels.

Training and Evaluation: The model would then be trained on a dataset (x_train, y_train) and validated on a separate validation dataset (x_val, y_val). Training adjusts the model weights to minimize the loss on the training data, while the validation data provides a check against overfitting.

Model Summary: The model.summary() call prints a summary of the model architecture, showing the layers, their types, output shapes, and parameter counts. This summary helps verify the model design before training.
-
Introduction to Deep Learning with Keras
Luis Alonso Copete Copete's Statement of Accomplishment | DataCamp
datacamp.com
Congratulations Pushkar!!