Deep Learning Lab With Output
Experiment 1: Build a Feedforward Neural Network to Classify the Fashion MNIST Dataset.
Program:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.utils import to_categorical
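# The model itself is missing from the original listing; the following is a
# minimal sketch matching the description below (layer sizes are assumptions).
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0            # normalize pixels to [0, 1]
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

model = Sequential([
    Flatten(input_shape=(28, 28)),      # flatten each 28x28 image into a 784-vector
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')     # 10 Fashion MNIST classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)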
This code builds a simple feedforward neural network with TensorFlow's Keras API.
The network consists of three fully connected (dense) layers. The Flatten layer
reshapes each 28x28 input image into a 1D vector, and the subsequent dense layers
process the flattened data. The final output layer has 10 units (one per Fashion
MNIST class) with softmax activation for multi-class classification.
Remember that this is a basic example, and you can further enhance the model by
experimenting with different architectures, activation functions, optimizers, regularization
techniques, and hyperparameters.
Experiment 2: Design Artificial Neural Networks for Identifying and
Classifying an Actor Using a Kaggle Dataset.
Designing an Artificial Neural Network (ANN) to identify and classify actors using a Kaggle
dataset involves several steps, including data preprocessing, model architecture design,
training, and evaluation. Here, I'll provide you with a general guide on how to approach this
task.
Assuming you have a Kaggle dataset of actor images labeled with their names, and you want
to build a classification model to identify and classify actors:
Dataset Preparation:
Download the actor dataset from Kaggle and unzip it if necessary.
Organize your dataset into train and test folders, where each actor's images are stored in
separate subfolders named after the actors.
Data Preprocessing:
Load and preprocess the images using libraries like TensorFlow or Keras.
Resize images to a consistent size (e.g., 224x224) to feed into the neural network.
Normalize pixel values to be between 0 and 1.
Data Augmentation:
Use data augmentation techniques to increase the diversity of your training data;
this can help improve the model's generalization.
Techniques include random rotation, shifting, flipping, and more (the
train_datagen in the program below applies several of these).
Build the Neural Network Model:
Choose a suitable pre-trained model as the base architecture. Common choices are VGG16,
ResNet, or Inception.
Customize the model's output layer to match the number of actor classes you want to classify.
Freeze the weights of the pre-trained layers to avoid overfitting on limited data; a model sketch after the program below illustrates this setup.
Program:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
# Paths to the organized dataset (illustrative; adjust to your folder layout)
train_data_dir = 'actor_dataset/train'
test_data_dir = 'actor_dataset/test'

# Data generators: augmentation for training, plain rescaling for testing
train_datagen = ImageDataGenerator(
    rescale=1.0/255.0,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True
)
test_datagen = ImageDataGenerator(rescale=1.0/255.0)

batch_size = 32
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical'
)
test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical'
)
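The listing above stops at the data generators; the model described earlier (a frozen VGG16 base with a custom classification head) is not shown. A minimal sketch, assuming the generators above; the head's layer sizes and epoch count are illustrative:

# Frozen VGG16 base with a custom head -- a sketch, not the original listing
num_classes = train_generator.num_classes  # one output unit per actor

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # freeze pre-trained weights to limit overfitting

model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_generator, epochs=10, validation_data=test_generator)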
Experiment 3: Design a Convolutional Neural Network (CNN) for Image Classification with Hyperparameter Tuning.
Data Preparation:
Load and preprocess your image dataset. You can use libraries like TensorFlow's
ImageDataGenerator for data augmentation and preprocessing.
Split your dataset into training, validation, and test sets.
Build the CNN Architecture:
Design the architecture of your CNN. A common architecture pattern is: Convolutional layers
→ Pooling layers → Fully connected (Dense) layers.
Experiment with the number of convolutional layers, filter sizes, pooling sizes, and the
number of units in dense layers.
Hyperparameter Tuning:
Define a set of hyperparameters to tune. These may include learning rate, batch size, number
of filters, filter sizes, dropout rates, and more.
Use techniques like grid search or random search to explore different combinations of
hyperparameters (a GridSearchCV sketch follows the program imports below).
Model Compilation:
Choose a suitable optimizer (e.g., Adam) and loss function (e.g., categorical crossentropy) for
your image classification task.
Training:
Train your model using the training data and validate it on the validation data.
Monitor metrics like accuracy and loss during training.
Evaluation:
Evaluate your model's performance on the test set to measure its generalization ability.
Program:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import cifar10
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
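The listing stops at the imports, which suggest a KerasClassifier wrapped in scikit-learn's GridSearchCV. A minimal sketch of how the pieces fit together, assuming CIFAR-10 and a small illustrative search grid (note: the tensorflow.keras.wrappers.scikit_learn wrapper is only available in older TensorFlow releases; newer code would use the scikeras package instead):

# Load and normalize CIFAR-10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model(learning_rate=0.001, dropout_rate=0.25):
    # A small Conv -> Pool -> Dense architecture, as described above
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dropout(dropout_rate),
        Dense(10, activation='softmax')
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Wrap the Keras model so scikit-learn can drive the search
clf = KerasClassifier(build_fn=build_model, epochs=5, batch_size=64, verbose=0)
param_grid = {'learning_rate': [0.001, 0.0001], 'dropout_rate': [0.25, 0.5]}
grid = GridSearchCV(clf, param_grid, cv=3)
grid_result = grid.fit(x_train, y_train)
print('Best params:', grid_result.best_params_, 'Best score:', grid_result.best_score_)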
Experiment 4: Design a Recurrent Neural Network (RNN) for Sequence Prediction.
Dataset Preparation:
Choose or create a dataset of sequential data for your prediction task. This could be time
series data, text, stock prices, etc.
Preprocess the data by converting it into a suitable format, such as numerical sequences or
text tokens.
Data Preprocessing:
Transform the sequential data into input-output pairs. For example, if you're predicting the
next element in a time series, create sliding windows of input sequences and corresponding
target values.
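For instance, a simple sliding-window helper (a sketch; the window length and series are illustrative):

import numpy as np

def make_windows(series, window):
    # Turn a 1D series into (input window, next value) pairs
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

series = np.arange(100, dtype='float32')
X, y = make_windows(series, window=10)   # X: (90, 10), y: (90,)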
Build the RNN Model:
Design the architecture of your RNN. Common RNN layers include SimpleRNN, LSTM
(Long Short-Term Memory), and GRU (Gated Recurrent Unit).
Experiment with the number of recurrent units, activation functions, and other
hyperparameters.
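Swapping the recurrent layer type is a one-line change (a sketch; the unit count and input shape are illustrative):

from tensorflow.keras.layers import SimpleRNN, LSTM, GRU

# Any of these can serve as the recurrent layer; LSTM and GRU cope better
# with long-range dependencies than SimpleRNN.
layer = SimpleRNN(32, input_shape=(10, 1))   # or LSTM(32, ...) / GRU(32, ...)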
Compile the Model:
Choose an appropriate loss function and optimizer for your prediction task.
Training:
Train your RNN model using the prepared input-output pairs.
Monitor training loss and validation loss to prevent overfitting.
Evaluation:
Evaluate your trained RNN model on a separate test dataset to measure its performance.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
from sklearn.model_selection import train_test_split
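# The data generation, model definition, and training steps are missing from
# the original listing; the following is a plausible reconstruction (the exact
# synthetic series is unknown, so the printed numbers will differ).
sequence_length = 10   # length of each input window (assumed)
input_dim = 1          # one feature per time step (assumed)

# Synthetic series turned into (window, next value) pairs
data = np.arange(890, dtype='float32')
X = np.array([data[i:i + sequence_length] for i in range(len(data) - sequence_length)])
y = data[sequence_length:]
X = X.reshape(-1, sequence_length, input_dim)

# 800 train / 80 test, with 20% of the training set held out for validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=80, random_state=42)

model = Sequential([
    SimpleRNN(32, input_shape=(sequence_length, input_dim)),
    Dense(1)   # single-value regression output
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=50, validation_split=0.2)

test_loss = model.evaluate(X_test, y_test)
print('Test loss:', test_loss)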
# Make predictions
sample_input = np.array([range(sequence_length)]).reshape(1, sequence_length, input_dim)
predicted_output = model.predict(sample_input)
print('Predicted output:', predicted_output)
OUTPUT:
Train on 640 samples, validate on 160 samples
Epoch 1/50
640/640 [==============================] - 1s 2ms/sample - loss: 19900286.4000 - val_loss: 23145038.4000
Epoch 2/50
640/640 [==============================] - 0s 110us/sample - loss: 18497845.6000 - val_loss: 21199304.8000
...
Epoch 50/50
640/640 [==============================] - 0s 93us/sample - loss: 3016.9841 - val_loss: 2549.8931
80/80 [==============================] - 0s 394us/sample - loss: 2013.9560
Test loss: 2013.9560302734375
Predicted output: [[444.25977]]
This example demonstrates a simple RNN for sequence prediction using synthetic data. You
can adapt this code to your own dataset and prediction task. Experiment with different RNN
architectures, hyperparameters, and preprocessing techniques to optimize the model's
performance.