ARTIFICIAL NEURAL NETWORKS - MODULE III


Module-3.
Counter Propagation Networks: Kohonen layer - Training the Kohonen layer - Pre-initializing the weight vectors - Statistical properties - Training the Grossberg layer - Full counter propagation network - Applications

INTRODUCTION

Perceptron Training.
Back Propagation Networks.
Self-organizing Maps & Counter Propagation.
Unsupervised Training.

Self-organized clustering may be defined as a mapping through which an N-dimensional pattern space is mapped to a smaller number of points in an output space.

The mapping is achieved autonomously, without supervision; i.e., the clustering is done in a self-organized manner.

The term self-organizing refers to the ability to learn and organize information without being given the correct answer.

Self-organizing networks perform unsupervised learning.

Competitive Networks

When more than one neuron in the output layer fires, an additional structure is included in the network so that the net is forced to trigger only one neuron.
This mechanism is termed competition. When the competition is completed, only one neuron in the competing group will have a non-zero output.
The competition is based on the winner-takes-all policy.

Counter Propagation Networks.


Counter propagation is a combination of two well-known algorithms:
Kohonen's Self-Organizing Maps.
Grossberg's Outstar.
Counter propagation is a network with high representational power compared to a single-layer perceptron.
It also has a high speed of training.

KOHONEN SELF ORGANISING MAPS.


Kohonen self-organizing maps assume a topological structure among the clustering units. A Kohonen network uses Kohonen learning to adjust the weights, finally settling into a pattern.

Structure of a Kohonen network
There are m clustering units, arranged in a linear or two-dimensional array.
The inputs are arranged as n-tuples.
All inputs are given to all the neurons.
The weight vector of a clustering unit serves as an exemplar of the input patterns it represents.


The Kohonen network also follows the winner-takes-all policy.

The cluster unit whose weight vector matches the input pattern most closely is considered the winner.

The winner is usually decided based on the (squared) Euclidean distance:

D(j) = Σi (xi − wij)²

Figure: Kohonen's square-grid arrangement of clustering units.

Kohonen Training Algorithm

Step 1: Initialize the weights; set the learning rate α and the neighborhood parameters.
Step 2: While the stopping condition is false, do steps 3 to 5.
Step 3: For each input vector, calculate the squared Euclidean distance to every cluster unit:
D(j) = Σi (wij − xi)²
Step 4: Locate the winner (the unit with the smallest D(j)).
Step 5: Adjust the weights of the winner:
wij(new) = wij(old) + α(xi − wij(old))
A minimal one-pass sketch is given below.
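A minimal sketch of one training pass, assuming numpy; the function name kohonen_step and the array shapes are illustrative choices, not from the slides:

```python
# One winner-take-all Kohonen training step (no neighborhood update), as in Steps 3-5 above.
import numpy as np

def kohonen_step(W, x, alpha):
    """W: (m, n) weight matrix, one row per cluster unit; x: (n,) input; alpha: learning rate."""
    d = np.sum((W - x) ** 2, axis=1)   # squared Euclidean distance D(j) to every cluster unit
    j = int(np.argmin(d))              # Step 4: the winner has the smallest distance
    W[j] = W[j] + alpha * (x - W[j])   # Step 5: move only the winner's weights toward x
    return j, d
```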

Example (Refer Problem-1)

Consider a Kohonen net with two cluster units and five input units. The weight vectors for the cluster units are w1 = [1 0.8 0.6 0.4 0.2] and w2 = [1 0.5 1 0.5 1]. Use the square of the Euclidean distance to find the winning cluster unit for the input pattern x = [0.5 1 0.5 0.0 0.5]. Find the new weights for the winning neuron, assuming a learning rate of 0.2.
Answer: Unit 1: D(1) = 0.55
Unit 2: D(2) = 1.25
New weights for Unit 1 (winner) = [0.9 0.84 0.58 0.32 0.26]
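As a quick check, the worked example can be reproduced with the kohonen_step sketch above (values taken from the problem statement):

```python
# Reproduces the worked example: unit 1 wins and its weights move toward x.
import numpy as np

W = np.array([[1.0, 0.8, 0.6, 0.4, 0.2],
              [1.0, 0.5, 1.0, 0.5, 1.0]])
x = np.array([0.5, 1.0, 0.5, 0.0, 0.5])
j, d = kohonen_step(W, x, alpha=0.2)
print(d)     # [0.55 1.25]  -> unit 1 is the winner
print(W[j])  # approximately [0.9 0.84 0.58 0.32 0.26], matching the answer above
```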

Counter Propagation Network Structure


A counter propagation network consists of two layers:
Kohonen layer
Grossberg layer

Figure: counter propagation network structure — the inputs feed the Kohonen layer (weights w, units K1, K2, K3), whose outputs feed the Grossberg layer (weights v, units G1, G2, G3).

NORMAL OPERATION OF KOHONEN LAYER

NETj = Σi xi·wij
NET = X·W
The Kohonen neuron with the largest value of NET is considered the winner; its output is 1 and all other outputs are 0.

NORMAL OPERATION OF GROSSBERG LAYER
If k1, k2, ... are the Kohonen layer outputs, then the Grossberg layer net output is the weighted Kohonen layer output:

NETj = Σi ki·vij
Y = K·V
where V = Grossberg layer weight matrix,
K = Kohonen layer output vector,
Y = Grossberg layer output vector.
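An illustrative sketch of the full forward pass, assuming numpy; cpn_forward and the matrix shapes are assumptions for this example, not notation from the slides:

```python
# CPN forward pass: Kohonen winner-take-all followed by the Grossberg weighted output.
import numpy as np

def cpn_forward(x, W, V):
    """x: (n,) normalized input; W: (m, n) Kohonen weights; V: (m, p) Grossberg weights."""
    net = W @ x                      # NETj = sum_i xi * wij for each Kohonen unit
    k = np.zeros(W.shape[0])
    k[np.argmax(net)] = 1.0          # winner-take-all: only one Kohonen output is 1
    y = k @ V                        # Y = K V, i.e. the winner's row of the Grossberg weights
    return k, y
```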

Kohonen Training (Physical Interpretation)

Preprocessing of the input vectors

Need for preprocessing: the input vectors are normalized to unit length, so that all of them lie on the unit hypersphere and only their direction is compared against the weight vectors.
Xi' = Xi / sqrt(X1² + X2² + ... + Xn²)
Example: X = [3, 4]
X' = [3/5, 4/5]
A one-line normalization sketch is given below.
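A minimal normalization sketch, assuming numpy (the function name is illustrative):

```python
# Scale a vector to unit Euclidean length, e.g. [3, 4] -> [0.6, 0.8].
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)
```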

Figure: representation of input vectors before and after normalization — two-dimensional unit vectors on the unit circle (e.g., [3, 4] maps to [3/5, 4/5]).

Training process of Kohonen layer weights

Figure: geometric view of the weight update — Wnew lies on the line from Wold towards the input X, shifted by a fraction of (X − Wold).

Pre-initialization of weight vectors

All the weight vectors are to be set to initial values before training starts.
The initial values are selected randomly, and small values are chosen.
For a Kohonen layer, the initial training vectors should be normalized.
The weight vectors must end up equal to the normalized input vectors.
Pre-normalization will shorten the training process.

Problems with randomizing Kohonen layer weights

Random initialization distributes the weight vectors uniformly around the hypersphere.
Most of the input vectors, however, are grouped and concentrated in a relatively small area.
Hence most of the Kohonen neurons may be wasted, since their outputs stay at zero.
The remaining weight vectors may be too few in number to categorize the inputs into groups.
The most desirable solution is to distribute the weight vectors according to the density of the input vectors that must be separated, but this is impractical to implement directly.

Method-I (convex combination method)

Set every weight component to the same value 1/√n, where n is the number of components of the input vectors (and hence of the weight vectors). Each input component xi is replaced by

xi' = α·xi + (1 − α)·(1/√n)

where α is initially small and is gradually increased towards 1, so that the inputs move slowly from the common point 1/√n out to their true values, dragging the weight vectors with them. A short sketch follows.
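A sketch of the convex combination idea, assuming numpy; the function names and the schedule for alpha are illustrative assumptions:

```python
# Convex combination initialization: all weight vectors start at the common point 1/sqrt(n),
# and the inputs are blended toward that point, with alpha growing from ~0 to 1 during training.
import numpy as np

def convex_combination_init(m, n):
    """m weight vectors of n components, all set to the same value 1/sqrt(n)."""
    return np.full((m, n), 1.0 / np.sqrt(n))

def blend_input(x, alpha):
    """Blend the true input with the common point; alpha is increased toward 1 as training proceeds."""
    n = x.shape[0]
    return alpha * x + (1.0 - alpha) / np.sqrt(n)
```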

Assignment No. 2(A)
Explain the need for initialization of the weight matrix in the Kohonen layer.
What are the different methods used?

Statistical property of the trained network
A Kohonen network has a useful and interesting ability to extract the statistical properties of the input data set.
It was shown by Kohonen that the probability of a randomly selected input vector being closest to any given weight vector is 1/k, where k is the number of Kohonen neurons.

Training of the Grossberg layer

Training of the Grossberg layer is supervised training.
The input vector applied to it is the output of the Kohonen layer.
The output from the Grossberg layer is calculated and compared with the desired output.
The amount of weight adjustment is proportional to this difference:

vij(new) = vij(old) + β(yj − vij(old))·ki

where ki = output from the Kohonen layer (1 for the winning unit, 0 for all others),
yj = desired output component,
β = learning rate.
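A minimal sketch of one Grossberg training step, assuming numpy and a winner-take-all Kohonen output; the names and shapes are illustrative:

```python
# Supervised Grossberg update: only the winning Kohonen unit's outgoing weights are adjusted.
import numpy as np

def grossberg_step(V, k, y_desired, beta):
    """V: (m, p) Grossberg weights; k: (m,) Kohonen outputs (one-hot); y_desired: (p,) target."""
    j = int(np.argmax(k))                           # index of the winning Kohonen unit
    V[j] = V[j] + beta * (y_desired - V[j]) * k[j]  # vij(new) = vij(old) + beta*(yj - vij(old))*ki
    return V
```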

The unsupervised Kohonen layer produces outputs at indeterminate positions.
These are mapped into the desired outputs by the Grossberg layer.

CPN

Full Counter Propagation

Forward Counter Propagation

FULL COUNTER PROPAGATION NETWORK

Figure: full counter propagation network — an X-input layer (x1 ... xn) and a Y-input layer (y1 ... yn) both feed the cluster layer (z1 ... zn), which in turn drives an X-output layer (X1* ... Xn*) and a Y-output layer (Y1* ... Yn*).

The major aim of a full counter propagation network is to provide an efficient means of representing a large number of vector pairs X:Y by adaptively constructing a look-up table.

It produces an approximation of the mapping X:Y
with X alone,
with Y alone, or
with X and Y together.

It uses the winner-takes-all policy.
Vectors are normalized.
The learning algorithms for the Kohonen (cluster) layer are

vij(new) = vij(old) + α(xi − vij(old))    [weights from the X-input]
wij(new) = wij(old) + β(yi − wij(old))    [weights from the Y-input]

The learning algorithms for the Grossberg layer are

tij(new) = tij(old) + a(yj − tij(old))·ki    [weights toward the Y* output]
uij(new) = uij(old) + b(xj − uij(old))·ki    [weights toward the X* output]

A full counter propagation network can be trained to approximate the function y = 1/x (and, run in the reverse direction, its inverse x = 1/y); a toy sketch is given below.
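A toy sketch of that idea, assuming numpy; the number of cluster units, learning rates, and initialization are arbitrary choices for illustration, so the recalled value is only as accurate as the cluster granularity:

```python
# Full CPN trained on (x, y) pairs with y = 1/x; recall with x alone returns an approximate y.
import numpy as np

rng = np.random.default_rng(0)
m = 20                               # number of cluster units (assumed)
v = rng.uniform(0.1, 10.0, m)        # cluster weights on the X side (1-D inputs, so scalars)
w = 1.0 / v                          # cluster weights on the Y side, started on the curve
t = np.zeros(m)                      # outstar weights toward Y*
u = np.zeros(m)                      # outstar weights toward X*
alpha = beta = 0.3                   # cluster-layer learning rates
a = b = 0.1                          # Grossberg (outstar) learning rates

for _ in range(5000):
    x = rng.uniform(0.1, 10.0)
    y = 1.0 / x
    j = int(np.argmin((v - x) ** 2 + (w - y) ** 2))  # winner uses both halves of the pair
    v[j] += alpha * (x - v[j])
    w[j] += beta * (y - w[j])
    t[j] += a * (y - t[j])                           # supervised updates toward the targets
    u[j] += b * (x - u[j])

x_test = 2.0
j = int(np.argmin((v - x_test) ** 2))  # recall with X alone
print(t[j])                            # roughly 1/2.0 = 0.5, up to quantization error
```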

Applications of CPN

Vector Mapping
Data Compression
Image Compression

A CPN can be used to compress data before transmission.

The image to be transmitted is first divided into sub-images.
Each sub-image is further divided into pixels.
Each pixel represents either one (light) or zero (dark).
If a sub-image has n pixels, n bits are required to transmit it directly.
If some distortion can be tolerated, fewer bits are sufficient for transmission.
A CPN can perform this vector quantization.
Only one neuron of the Kohonen layer output becomes 1.
The Grossberg layer generates a short code for that neuron, and this code is transmitted.
At the receiving end an identical CPN accepts the binary code and produces the inverse function, reconstructing an approximation of the sub-image.
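A toy sketch of this vector-quantization idea, assuming numpy; the block size, codebook size, and codebook contents are illustrative (a real codebook would come from Kohonen training):

```python
# Each n-pixel block is replaced by the index of its closest codebook vector,
# so only log2(m) bits per block are transmitted instead of n.
import numpy as np

rng = np.random.default_rng(1)
n = 16                         # pixels per sub-image (a 4x4 binary block)
m = 8                          # Kohonen units, i.e. a 3-bit code per block
W = rng.random((m, n))         # stand-in codebook; in a CPN this is the trained Kohonen weights

def encode(block):
    """Transmitter: the code is the index of the winning Kohonen unit."""
    return int(np.argmin(np.sum((W - block) ** 2, axis=1)))

def decode(code):
    """Receiver: reconstruct the block from the shared codebook (the Grossberg/outstar role)."""
    return (W[code] > 0.5).astype(int)

block = rng.integers(0, 2, n)  # a random 16-pixel binary block
code = encode(block)           # 3 bits sent instead of 16
print(code, decode(code))
```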
