Perceptron Working


PERCEPTRON INTRODUCTION

The perceptron is a type of artificial neural network invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the simplest kind of feed-forward neural network: a linear classifier.

Definition

The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

f(x) = 1 if w · x + b > 0, and f(x) = 0 otherwise,

where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value.

The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.
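As a concrete illustration of this decision rule, here is a minimal C++ sketch; the function and variable names are my own, not from the text:

```cpp
#include <cstddef>
#include <vector>

// Perceptron decision rule: f(x) = 1 if w . x + b > 0, else 0.
// Assumes 'weights' and 'inputs' have the same length.
int classify(const std::vector<double>& weights,
             const std::vector<double>& inputs,
             double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < weights.size(); ++i)
        sum += weights[i] * inputs[i];  // weighted sum w . x
    return sum > 0.0 ? 1 : 0;           // threshold at zero
}
```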


Learning algorithm

Below is an example of a learning algorithm for a single-layer (no hidden layer) perceptron. For multilayer perceptrons, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is non-linear and differentiable, although the algorithm below will work as well.

The learning algorithm we demonstrate is the same across all the output neurons; everything that follows therefore applies to a single neuron in isolation. We first define some variables:

y = f(z) denotes the output from the perceptron for an input vector z. b is the bias term, which in the example below we take to be 0. D = {(x_1, d_1), ..., (x_s, d_s)} is the training set of s samples, where:

o x_j is the n-dimensional input vector. o d_j is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

x_{j,i} is the value of the i-th node of the j-th training input vector.
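To make the notation concrete, here is a hedged C++ sketch of how the training set might be represented (the struct and field names are illustrative assumptions):

```cpp
#include <vector>

// One training pair: an n-dimensional input vector x_j and
// its desired output d_j (0 or 1).
struct Sample {
    std::vector<double> x;  // x[i] corresponds to x_{j,i} above
    int d;                  // desired output for this input
};

// The training set D of s samples.
using TrainingSet = std::vector<Sample>;
```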


To represent the weights:

w_i is the i-th value in the weight vector, to be multiplied by the value of the i-th input node.

An extra dimension, with index n + 1, can be added to all input vectors, with x_{j,n+1} = 1, in which case w_{n+1} replaces the bias term. To show the time-dependence of w, we use:

w_i(t) is the weight i at time t. α is the learning rate, where 0 < α ≤ 1.
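A minimal sketch of the extra-dimension trick just described, using the same C++ vector representation (the helper name is hypothetical):

```cpp
#include <vector>

// Append a constant input of 1 so that weight n+1 acts as the bias.
// After this, the decision rule is simply the sign of w . x'.
std::vector<double> augment(std::vector<double> x) {
    x.push_back(1.0);  // x_{j,n+1} = 1
    return x;
}
```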

Too high a learning rate makes the perceptron periodically oscillate around the solution. A possible enhancement is to use a decaying learning rate LR_n, starting with n = 1 and incrementing n by 1 when a loop in learning is found.
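The text does not give the exact schedule behind LR_n; one common choice, shown here purely as an assumption, is to divide the initial rate by n:

```cpp
// Hedged sketch: decay the learning rate each time the training error
// starts to loop instead of decreasing. The alpha/n schedule is an
// assumption; the text only says to increment n when a loop is found.
double learning_rate(double alpha, int n) {
    return alpha / static_cast<double>(n);  // n starts at 1
}
```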

The appropriate weights are applied to the inputs, and the resulting weighted sum is passed to a function that produces the output y.

Learning algorithm steps

1. Initialise weights and threshold. Note that weights may be initialised by setting each weight to 0 or to a small random value. In the example below, we choose the former.

2. For each sample j in our training set D, perform the following steps over the input x_j and desired output d_j:

2a. Calculate the actual output (with b = 0, as assumed above):

y_j(t) = f[w(t) · x_j] = f[w_1(t) x_{j,1} + w_2(t) x_{j,2} + ... + w_n(t) x_{j,n}]

2b. Adapt weights:

w_i(t + 1) = w_i(t) + α (d_j − y_j(t)) x_{j,i}, for all nodes 1 ≤ i ≤ n.

Step 2 is repeated until the iteration error is less than a user-specified error threshold γ, or a predetermined number of iterations has been completed. Note that the algorithm adapts the weights immediately after steps 2a and 2b are applied to a pair in the training set, rather than waiting until all pairs in the training set have undergone these steps.
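Putting steps 1, 2a, and 2b together, here is a self-contained C++ sketch of the single-neuron training loop. It assumes inputs have already been augmented with the constant 1 from the earlier trick (so no separate bias appears), and the names, the error measure, and the stopping parameters are all illustrative:

```cpp
#include <cstddef>
#include <vector>

// Step function f producing the binary output y (used in step 2a).
static int step(double z) { return z > 0.0 ? 1 : 0; }

// Train a single perceptron neuron. Weights start at 0 (step 1);
// step 2 repeats until the per-epoch misclassification count drops
// below 'threshold' or 'max_iterations' epochs have run. Weights are
// updated immediately after each sample, as the text notes.
std::vector<double> train(const std::vector<std::vector<double>>& xs,
                          const std::vector<int>& ds,
                          double alpha, double threshold,
                          int max_iterations) {
    std::vector<double> w(xs[0].size(), 0.0);  // step 1: all-zero weights
    for (int t = 0; t < max_iterations; ++t) {
        int errors = 0;
        for (std::size_t j = 0; j < xs.size(); ++j) {
            double sum = 0.0;
            for (std::size_t i = 0; i < w.size(); ++i)
                sum += w[i] * xs[j][i];                   // w(t) . x_j
            int y = step(sum);                            // step 2a
            for (std::size_t i = 0; i < w.size(); ++i)
                w[i] += alpha * (ds[j] - y) * xs[j][i];   // step 2b
            if (y != ds[j]) ++errors;
        }
        if (errors < threshold) break;  // iteration error small enough
    }
    return w;
}
```

With a zero error threshold this reproduces the classic perceptron rule, which, as noted earlier, terminates only if the training set is linearly separable.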


PERCEPTRON SOFTWARE WORKING

This perceptron software requires Visual C++ (version 6.0). First, the user creates a file, for which we have used file handling.

For the user's understanding, we have provided a complete menu at the beginning of our software.

The user provides the following:

o File name o Input neurons o Number of neurons o Target

It works with both binary and bipolar input.
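The text does not show how the software handles the two encodings; as an illustrative sketch, binary and bipolar step activations differ only in their output range:

```cpp
// Binary activation: outputs in {0, 1}.
int binary_step(double z) { return z > 0.0 ? 1 : 0; }

// Bipolar activation: outputs in {-1, +1}.
int bipolar_step(double z) { return z > 0.0 ? 1 : -1; }
```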

The maximum number of epochs it runs is 1000.

You can also view information about your file through one of the options in the menu.

The user first creates a file, then supplies the inputs and target, then trains the net, and finally tests the functionality of the neural net that was created.