Artificial Neural Networks - MiniProject


MINI PROJECT ON

ARTIFICIAL NEURAL NETWORKS


CONTENTS
INTRODUCTION
BIOLOGICAL NEURON MODEL
ARTIFICIAL NEURON MODEL
ARTIFICIAL NEURAL NETWORK
NEURAL NETWORK ARCHITECTURE
LEARNING
BACKPROPAGATION ALGORITHM
APPLICATIONS
ADVANTAGES
CONCLUSION
INTRODUCTION
An artificial neuron is a mathematical function conceived as a model of biological neurons. Artificial neurons are the constitutive units of an Artificial Neural Network.
Neural is an adjective for neuron, and network denotes a graph-like structure.
Artificial Neural Networks are also referred to as neural nets, artificial neural systems, parallel distributed processing systems, or connectionist systems.
BIOLOGICAL NEURON MODEL

Four parts of a typical nerve cell:

DENDRITES: accept the inputs.
SOMA: processes the inputs.
AXON: turns the processed inputs into outputs.
SYNAPSES: the electrochemical contacts between neurons.
ARTIFICIAL NEURON MODEL
Inputs to the network are represented by the mathematical symbols x1, ..., xn.
Each of these inputs is multiplied by a connection weight w1, ..., wn.
These products are summed (sum = w1x1 + ... + wnxn), fed through the transfer function f(.) to generate a result, and then output.

Fig: A single artificial neuron, computing f(w1x1 + ... + wnxn) from inputs x1..xn and weights w1..wn.
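To make the model above concrete, here is a minimal sketch of a single artificial neuron in Python (not part of the original slides); the input values, weights, and the choice of a simple step transfer function are illustrative assumptions.

# Minimal sketch of one artificial neuron: a weighted sum of the inputs
# fed through a transfer function f(.). The inputs, weights, and step
# function are illustrative values, not taken from the slides.

def step(net):
    # Simple threshold transfer function.
    return 1.0 if net >= 0.0 else 0.0

def neuron_output(inputs, weights, transfer=step):
    # Compute f(w1*x1 + ... + wn*xn) for one artificial neuron.
    net = sum(w * x for w, x in zip(weights, inputs))   # sum = w1*x1 + ... + wn*xn
    return transfer(net)

x = [0.5, -1.0, 0.25]          # inputs x1..xn
w = [0.8, 0.2, -0.4]           # connection weights w1..wn
print(neuron_output(x, w))     # output of the neuron: 1.0 for this example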
ANALOGY

Biological Terminology        Artificial Neural Network Terminology

Neuron                        Node / Unit / Cell / Neurode
Synapse                       Connection / Edge / Link
Synaptic efficiency           Connection strength / Weight
Firing frequency              Node output

ARTIFICIAL NEURAL NETWORK
Artificial Neural Networks (ANNs) are programs designed to solve problems by trying to mimic the structure and function of our nervous system.
Neural networks are based on simulated neurons, which are joined together in a variety of ways to form networks.
A neural network resembles the human brain in the following two ways:
* A neural network acquires knowledge through learning.
* A neural network's knowledge is stored within the interconnection strengths, known as synaptic weights.
ARTIFICIAL NEURAL NETWORK MODEL

Fig 1: Artificial neural network model. Inputs enter at the input layer, pass through the hidden layers, and produce values at the output layer; the connections between neurons (called weights) are adjusted by comparing the network's actual output with the desired output.
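As a hedged illustration of the layered model in Fig 1, the sketch below passes an input vector through one hidden layer and one output layer; the layer sizes, weight values, and sigmoid activation are assumptions made for this example, not values from the slides.

import math

# Sketch of a forward pass through a tiny layered network: input -> hidden -> output.
# Layer sizes and weight values are made up purely for illustration.

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer_forward(inputs, weight_matrix):
    # Each row of weight_matrix holds the incoming weights of one neuron in the layer.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weight_matrix]

# 2 inputs -> 3 hidden neurons -> 1 output neuron
hidden_weights = [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]]
output_weights = [[0.4, -0.7, 0.2]]

x = [1.0, 0.5]
hidden_layer = layer_forward(x, hidden_weights)
actual_output = layer_forward(hidden_layer, output_weights)
print(actual_output)   # during training this is compared with the desired output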
LEARNING
Neurons in an animal's brain are hard-wired. It is equally obvious that animals, especially higher-order animals, learn as they grow.
How does this learning occur?
What are possible mathematical models of learning?
In artificial neural networks, learning refers to the method of modifying
the weights of connections between the nodes of a specified network.
The learning ability of a neural network is determined by its
architecture and by the algorithmic method chosen for training.
SUPERVISED LEARNING

A teacher is available to indicate whether the system is performing correctly, or to indicate the amount of error in its performance. Here the teacher is a set of training data.
The training data consist of pairs of input and desired output values, traditionally represented as data vectors.
Supervised learning is also referred to as classification, for which a wide range of classifiers is available (multilayer perceptron, k-nearest neighbour, etc.).

UNSUPERVISED LEARNING

This is learning by doing.
In this approach no sample outputs are provided to the network against which it can measure its predictive performance for a given vector of inputs.
One common form of unsupervised learning is clustering, where we try to group data into different clusters by their similarity.
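A small sketch contrasting the two settings: supervised learning uses (input, desired output) pairs as the "teacher", while unsupervised learning groups the same inputs by similarity alone. The data points, labels, nearest-neighbour rule, and initial cluster centres below are illustrative assumptions, not examples from the slides.

# Supervised learning: the "teacher" is a set of (input vector, desired output) pairs.
labelled = [([0.0, 0.1], "A"), ([0.2, 0.0], "A"),
            ([0.9, 1.0], "B"), ([1.0, 0.8], "B")]

def dist2(p, q):
    # Squared Euclidean distance between two input vectors.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nearest_neighbour(x):
    # 1-nearest-neighbour classifier: predict the label of the closest training pair.
    return min(labelled, key=lambda pair: dist2(pair[0], x))[1]

print(nearest_neighbour([0.85, 0.9]))   # -> "B"

# Unsupervised learning: no labels; group the same inputs by similarity alone.
points = [p for p, _ in labelled]
centroids = [[0.1, 0.05], [0.95, 0.9]]   # assumed initial cluster centres

clusters = [min(range(len(centroids)), key=lambda c: dist2(centroids[c], p))
            for p in points]
print(clusters)   # e.g. [0, 0, 1, 1]: points grouped into clusters by similarity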
THE BACKPROPAGATION ALGORITHM

The backpropagation algorithm (Rumelhart and McClelland, 1986) is used in layered feed-forward Artificial Neural Networks.
Backpropagation is a supervised learning method for multi-layer feed-forward networks, based on the gradient-descent learning rule.
We provide the algorithm with examples of the inputs and outputs we want the network to compute, and the error (the difference between the actual and expected results) is calculated.
The idea of the backpropagation algorithm is to reduce this error until the Artificial Neural Network learns the training data.
The activation of each artificial neuron in a backpropagation network is a weighted sum of its inputs: netj = x1 wj1 + ... + xn wjn (the sum of the inputs xi multiplied by their respective weights wji).
The most common output function is the sigmoidal function: Oj = 1 / (1 + e^(-netj)).
Since the error is the difference between the actual and the desired output, the error depends on the weights, and we need to adjust the weights in order to minimize the error. We can define the error function for the output of each neuron as the squared difference between its actual and desired output.

Fig: Basic block of a backpropagation neural network (inputs x, weights v and w, output).
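The sketch below is a minimal, illustrative implementation of the idea described above: a small feed-forward network with one hidden layer, sigmoid activations, and weights adjusted by gradient descent on the squared error. The XOR training set, layer sizes, learning rate, and epoch count are assumptions chosen for the example, not values from the slides.

import math
import random

# Minimal backpropagation sketch: one hidden layer, sigmoid activations,
# squared-error loss, plain gradient descent. The XOR data, layer sizes,
# learning rate, and epoch count are illustrative assumptions.

random.seed(0)

N_IN, N_HIDDEN = 2, 3
# One weight per input plus a bias weight for every neuron.
hidden_w = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HIDDEN)]
output_w = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(x):
    xb = x + [1.0]                                   # inputs plus bias term
    hidden = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in hidden_w]
    hb = hidden + [1.0]
    out = sigmoid(sum(w * v for w, v in zip(output_w, hb)))
    return hidden, out

def train(data, epochs=5000, lr=0.5):
    for _ in range(epochs):
        for x, target in data:
            hidden, out = forward(x)
            hb = hidden + [1.0]
            xb = x + [1.0]
            # Output delta: derivative of the squared error times sigmoid'(net).
            delta_out = (out - target) * out * (1.0 - out)
            # Hidden deltas: error propagated back through the current output weights.
            delta_hidden = [delta_out * output_w[j] * hidden[j] * (1.0 - hidden[j])
                            for j in range(N_HIDDEN)]
            # Gradient-descent weight updates.
            for j in range(N_HIDDEN + 1):
                output_w[j] -= lr * delta_out * hb[j]
            for j in range(N_HIDDEN):
                for i in range(N_IN + 1):
                    hidden_w[j][i] -= lr * delta_hidden[j] * xb[i]

xor_data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
train(xor_data)
for x, target in xor_data:
    print(x, "->", round(forward(x)[1], 2), "(desired", target, ")")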
NEURAL NETWORK APPLICATIONS
ADVANTAGES
It involves human-like thinking.
They handle noisy or missing data.
They can work with a large number of variables or parameters.
They provide general solutions with good predictive accuracy.
The system has the property of continuous learning.
They deal with the non-linearity of the world in which we live.
CONCLUSION
