Back Prop Classify


Upload: sakshimitra

Post on 20-Jul-2016


DESCRIPTION

Algorithm for Hinton's code


Page 1: Back Prop Classify

BACKPROPCLASSIFY.M

Call makebatches.

I. CALCULATE THE TRAINING MISCLASSIFICATION ERROR

Initialize err_cr and counter to zero.

w1 - first layer weights (Vis-Hid)

w2 - second layer weights (Hid-Pen1)

w3 - third layer weights (Pen1-Pen2)

w_class - final layer weights (Pen2-Output)

1. For each batch of data:

1. First calculate the weight probabilities for each of the hidden layers and for the final output layer.

For Hidden layer 1 (Hid): weight probability wp1_jk = 1/(1 + e^(-(Σ_i data_ik * w1_ij + φ_j))), where data is the data in a single batch, w1 is the weight matrix of connections from the visible neurons to the neurons in the first hidden layer, and φ_j is the bias of hidden unit j.

For Hidden layer 2 (Pen1): weight probability wp2_jk = 1/(1 + e^(-(Σ_i wp1_ik * w2_ij + φ_j))). Similarly calculate wp3 for Hidden layer 3 (Pen2).

But the final output (Pen2 to the output neurons) is calculated as: targetout = exp(wp3 * w_class), the elementwise exponential of the matrix product, where w_class is the matrix containing the weights and biases of the Pen2-Output layer. target: the expected output.

For the matrix targetout, sum each row and divide the entries of that row by the row sum (this normalizes each row into a probability distribution).

2. Find the maximum value in each row, along with its index, for the matrices targetout and target, and store them as [I, J] and [I1, J1] respectively.


3. Now count the number of matching entries in J and J1 and add this number to the counter. Update err_cr as:

err_cr = err_cr - Σ(j=1..10) Σ(k=1..100) target_jk * log(targetout_jk)

2. Calculate train_err for that particular epoch (iteration) as:

train_err = numcases * numbatches - counter

where numcases is the size of each batch (i.e. 100), numbatches is the number of batches, and counter was obtained in the previous step.

3. Calculate train_cerror for that particular epoch as:

train_cerror = err_cr / numbatches
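The per-batch computation in section I can be sketched in Python/NumPy (a sketch, not Hinton's original MATLAB; the function names are illustrative, and biases are assumed to be folded into each weight matrix as a final row, with a column of ones appended to the activations, as in backpropclassify.m):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(data, w1, w2, w3, w_class):
    """Compute wp1, wp2, wp3 and the softmax output targetout for one batch.
    Biases are assumed stored as the last row of each weight matrix, so a
    column of ones is appended to the activations before every product."""
    ones = np.ones((data.shape[0], 1))
    wp1 = sigmoid(np.hstack([data, ones]) @ w1)           # Vis -> Hid
    wp2 = sigmoid(np.hstack([wp1, ones]) @ w2)            # Hid -> Pen1
    wp3 = sigmoid(np.hstack([wp2, ones]) @ w3)            # Pen1 -> Pen2
    targetout = np.exp(np.hstack([wp3, ones]) @ w_class)  # Pen2 -> Output
    targetout /= targetout.sum(axis=1, keepdims=True)     # row-wise softmax
    return wp1, wp2, wp3, targetout

def batch_stats(target, targetout, eps=1e-12):
    """Return (correct, cross-entropy) for one batch: the number of cases
    whose predicted class matches the target class, and the err_cr increment
    -sum(target * log(targetout)). eps guards log(0); the original omits it."""
    J = np.argmax(targetout, axis=1)   # predicted class per case
    J1 = np.argmax(target, axis=1)     # true class per case
    correct = int(np.sum(J == J1))
    ce = -np.sum(target * np.log(targetout + eps))
    return correct, ce
```

After accumulating counter and err_cr over all batches, train_err = numcases * numbatches - counter and train_cerror = err_cr / numbatches, as in steps 2 and 3 above.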

II. CALCULATE THE TEST MISCLASSIFICATION ERROR

Repeat the above steps with the training data replaced by the test data to calculate test_err and test_cerror. test_err and train_err represent the number of misclassified samples for a particular epoch (iteration).

III. COMBINE 10 MINIBATCHES INTO A LARGER BATCH
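Combining minibatches can be sketched as follows (a sketch; the array layout numcases × numdims × numbatches for batchdata is assumed from the surrounding code's conventions, and the helper name is hypothetical):

```python
import numpy as np

def combine_minibatches(batchdata, group, k=10):
    """Stack minibatches group*k .. group*k + k-1 of the 3-D array
    batchdata (numcases x numdims x numbatches) into one larger 2-D
    batch of k*numcases cases."""
    slices = [batchdata[:, :, group * k + i] for i in range(k)]
    return np.vstack(slices)
```

With 100-case minibatches, each larger batch therefore holds 1000 cases.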

IV. PERFORM 3 CONJUGATE GRADIENT LINE SEARCHES FOR EACH SUCH LARGER BATCH

1. If epoch (iteration) < 6, recalculate wp1, wp2 and wp3 and call minimize with CG_CLASSIFY_INIT and the elements of w_class as arguments. Reshape the obtained result into w_class.

2. Else call minimize with CG_CLASSIFY and the elements of w1, w2, w3 and w_class as arguments. Reshape the obtained result to recover w1, w2, w3 and w_class.

3. Save the weights and errors calculated in two separate files.
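minimize optimizes a single parameter vector, so the weight matrices are flattened before the call and reshaped back afterwards, as steps 1 and 2 describe. A minimal sketch of that round trip, with hypothetical helper names:

```python
import numpy as np

def flatten_weights(mats):
    """Concatenate weight matrices into one parameter vector for minimize."""
    return np.concatenate([m.ravel() for m in mats])

def unflatten_weights(vec, shapes):
    """Reshape the optimized vector back into matrices of the given shapes."""
    mats, pos = [], 0
    for r, c in shapes:
        mats.append(vec[pos:pos + r * c].reshape(r, c))
        pos += r * c
    return mats
```

In the early epochs only w_class is flattened and optimized (CG_CLASSIFY_INIT); later all of w1, w2, w3 and w_class are packed into the vector (CG_CLASSIFY).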
