IT Soft Computing


Index

S.No    List of Experiments                                                        Date    Sign

1    Study of Biological Neuron & Artificial Neural Networks.
2    Study of various activation functions & their Matlab implementations.
3    WAP in C++ & Matlab to implement Perceptron Training algorithm.
4    WAP in C++ & Matlab to implement Delta learning rule.
5    Write an algorithm for Adaline N/W with flowchart & Matlab program.
6    Write an algorithm for Madaline N/W with flowchart & Matlab program.
7    WAP in C++ & Matlab to implement Error Back Propagation Algorithm.
8    Study of Genetic Algorithm.
9    Study of Matlab neural network toolbox.
10   Study of Matlab Fuzzy logic toolbox.
11   Write a MATLAB program to implement Fuzzy Set operations.
12   Write a program to implement composition on Fuzzy and Crisp relations.
13   Write a program to find union, intersection and complement of fuzzy sets.
14   Write a MATLAB program for maximizing f(x) = x^2 using GA.

Remarks:

Object: 1 Study of Biological Neuron & write about Artificial Neural Networks.

Biological Neuron

Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were represented as models of biological networks, turned into conceptual components for circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron. By definition, neurons are the basic signalling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body.

The biological neuron has four main regions to its structure. The cell body, or soma, has two offshoots from it: the dendrites, and the axon, which ends in presynaptic terminals. The cell body is the heart of the cell; it contains the nucleolus and maintains protein synthesis. A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.

A single neuron usually has one axon, which expands off from a part of the cell body called the axon hillock. The main purpose of the axon is to conduct the electrical signals generated at the axon hillock down its length. These signals are called action potentials. The other end of the axon may split into several branches, which end in presynaptic terminals. The electrical signals (action potentials) that neurons use to convey information in the brain are all identical. The brain can determine which type of information is being received based on the path of the signal. The brain analyzes all patterns of signals sent, and from that information it interprets the type of information received.

The myelin is a fatty tissue that insulates the axon. The non-insulated parts of the axon are called Nodes of Ranvier. At these nodes, the signal travelling down the axon is regenerated. This ensures that the signal travels down the axon quickly and at a constant strength.

The synapse is the area of contact between two neurons. The neurons do not physically touch, because they are separated by a cleft; the electrical signals are sent through chemical interaction. The neuron sending the signal is called the presynaptic cell and the neuron receiving the signal is called the postsynaptic cell. The electrical signals are generated by the membrane potential, which is based on the differences in concentration of sodium and potassium ions inside and outside the cell membrane.

Biological neurons can be classified by their function or by the quantity of processes they carry. When classified by processes, they fall into three categories: unipolar neurons, bipolar neurons and multipolar neurons. Unipolar neurons have a single process; their dendrites and axon are located on the same stem. These neurons are found in invertebrates. Bipolar neurons have two processes; their dendrites and axon are two separated processes. Multipolar neurons are commonly found in mammals; some examples of these neurons are spinal motor neurons, pyramidal cells and Purkinje cells.

When biological neurons are classified by function they fall into three categories. The first group is sensory neurons, which provide all information for perception and motor coordination. The second group provides information to muscles and glands; these are called motor neurons. The last group, the interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, is usually found in the brain and connects different parts of it. The other group, called local interneurons, is only used in local circuits.

Artificial Neural Network

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform; a software implementation of a neural network can be made, with its own advantages and disadvantages.

Advantages:

A neural network can perform tasks that a linear program cannot.

When an element of the neural network fails, the network can continue without any problem because of its parallel nature.

A neural network learns and does not need to be reprogrammed.

It can be implemented in any application.

It can be implemented without any problem.

    Disadvantages:

The neural network needs training to operate.

The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.

It requires high processing time for large neural networks.

Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.

Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters. We will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.

Definitions and Style of Computation

An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is composed from the difference between the desired response and the system output. This error information is fed back to the system and adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable. It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if the data are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice.

This operating procedure should be contrasted with traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So, it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said 10 years ago that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and by their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.

Neural Network Topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:

Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.

Recurrent neural networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the change of the activation values of the output neurons is significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Classical examples of feed-forward neural networks are the Perceptron and the Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).

Training of Artificial Neural Networks

A neural network has to be configured such that the application of a set of inputs produces (either 'directly' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. We can categorise the learning situations into two distinct sorts. These are:

Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).

Unsupervised learning or Self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike in the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement learning: This type of learning may be considered an intermediate form of the above two types of learning. Here the learning machine does some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.

Object: 2 Study of various activation functions & their Matlab implementations.

Activation Functions

The activation function acts as a squashing function, such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by φ(.).

First, there is the threshold function, which takes on a value of 0 if the summed input is less than a certain threshold value v, and the value 1 if the summed input is greater than or equal to the threshold value.

Secondly, there is the piecewise-linear function. This function again can take on the values of 0 or 1, but it can also take on values in between, depending on the amplification factor in a certain region of linear operation.

Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of a sigmoid function is the hyperbolic tangent function.
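As a minimal Matlab sketch of these three functions (the range of net inputs v, the threshold value and the plotting commands below are assumptions added purely for illustration):

% sketch of the common activation functions over a range of net inputs v
v = -5:0.1:5;
theta = 0;                               % threshold value
y_threshold = double(v >= theta);        % threshold function: 0 below theta, 1 at or above it
y_piecewise = min(max(v + 0.5, 0), 1);   % piecewise-linear with unit gain around 0
y_logsig    = 1 ./ (1 + exp(-v));        % sigmoid (logistic), output in (0,1)
y_tansig    = tanh(v);                   % hyperbolic tangent, output in (-1,1)
plot(v, y_threshold, v, y_piecewise, v, y_logsig, v, y_tansig)
legend('threshold', 'piecewise-linear', 'logsig', 'tansig')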

The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units, then the different neural network topologies, and finally learning strategies as a basis for an adaptive system.

Object: 3 Explain Perceptron Training Algorithm

Perceptron

The perceptron is an algorithm for supervised classification of an input into one of two possible outputs. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input. The learning algorithm for perceptrons is an online algorithm, in that it processes elements in the training set one at a time.

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt [1]. In the context of artificial neural networks, the perceptron algorithm is also termed the single-layer perceptron, to distinguish it from the case of a multilayer perceptron, which is a more complicated neural network. As a linear classifier, the (single-layer) perceptron is the simplest kind of feed-forward neural network.

The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

    f(x) = 1 if w · x + b > 0, and 0 otherwise,

where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value.

The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.

Perceptron Training Algorithm

Below is an example of a learning algorithm for a (single-layer) perceptron. For multilayer perceptrons, where a hidden layer exists, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is nonlinear and differentiable, although the one below will work as well. When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation. We first define some variables:

y = f(z) denotes the output from the perceptron for an input vector z.

b is the bias term, which in the example below we take to be 0.

D = {(x1, d1), ..., (xs, ds)} is the training set of s samples, where:

    xj is the n-dimensional input vector.

    dj is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

    xj,i is the value of the ith node of the jth training input vector.

    xj,0 = 1.

To represent the weights: wi is the ith value in the weight vector, to be multiplied by the value of the ith input node.

An extra dimension, with index n+1, can be added to all input vectors, with xj,n+1 = 1, in which case wn+1 replaces the bias term. To show the time-dependence of w, we use:

    wi(t) is the weight i at time t.

    α is the learning rate, where 0 < α ≤ 1.

Too high a learning rate makes the perceptron periodically oscillate around the solution unless additional steps are taken.

The appropriate weights are applied to the inputs, and the resulting weighted sum is passed to a function that produces the output y.

Learning algorithm steps

1. Initialise the weights and the threshold. Note that weights may be initialised by setting each weight node to 0 or to a small random value. In the example below, we choose the former.

2. For each sample j in our training set D, perform the following steps over the input xj and desired output dj:

   2a. Calculate the actual output:
       yj(t) = f[ w(t) · xj ] = f[ w0(t) xj,0 + w1(t) xj,1 + ... + wn(t) xj,n ]

   2b. Adapt the weights:
       wi(t+1) = wi(t) + α (dj − yj(t)) xj,i , for all nodes 0 ≤ i ≤ n.
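A minimal Matlab sketch of these two steps follows (the AND data set, the learning rate and the epoch cap are assumptions chosen only for illustration):

% perceptron training: X holds one sample per row, d holds the desired 0/1 outputs
X = [0 0; 0 1; 1 0; 1 1];
d = [0; 0; 0; 1];                        % logical AND (linearly separable)
alpha = 0.1;                             % learning rate, 0 < alpha <= 1
w = zeros(1, size(X,2));                 % step 1: initialise weights to 0
b = 0;                                   % bias, taken as 0 initially
for epoch = 1:100                        % repeat step 2 until convergence
    errors = 0;
    for j = 1:size(X,1)
        y = double(w*X(j,:)' + b > 0);   % step 2a: actual output
        w = w + alpha*(d(j) - y)*X(j,:); % step 2b: adapt weights immediately
        b = b + alpha*(d(j) - y);
        errors = errors + abs(d(j) - y);
    end
    if errors == 0, break; end           % stop once every sample is classified correctly
end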

Step 2 is repeated until the iteration error is less than a user-specified error threshold, or a predetermined number of iterations has been completed. Note that the algorithm adapts the weights immediately after steps 2a and 2b are applied to a pair in the training set, rather than waiting until all pairs in the training set have undergone these steps.

Object: 4 Write about Delta learning rule

Delta Rule

The delta rule is a generalization of the perceptron training algorithm. It extends the technique to continuous inputs and outputs. In the perceptron training algorithm a term delta is introduced, which is the difference between the desired (or target) output T and the actual output A:

    delta = (T − A)

Here, if delta = 0, the output is correct and nothing is done.

If delta > 0, the output is incorrect and is 0, so add each input to its corresponding weight. If delta < 0, the output is incorrect and is 1, so subtract each input from its corresponding weight.
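A small Matlab sketch of the rule for a single linear unit (the bipolar data, learning rate and epoch count are illustrative assumptions):

% delta rule: adjust weights in proportion to delta = (T - A)
X = [1 1; 1 -1; -1 1; -1 -1];
T = [1; -1; -1; -1];
alpha = 0.1;                         % learning rate
w = zeros(size(X,2), 1);
b = 0;
for epoch = 1:50
    for j = 1:size(X,1)
        A = X(j,:)*w + b;            % actual (continuous) output
        delta = T(j) - A;            % difference between target and actual output
        w = w + alpha*delta*X(j,:)'; % add alpha*delta times each input to its weight
        b = b + alpha*delta;
    end
end
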
Object: 5 Write an algorithm for Adaline N/W with flowchart.

Adaline Network

The Adaline network training algorithm is as follows:

Step 0: Weights and bias are set to some random values, but not zero. Set the learning rate parameter.
Step 1: Perform Steps 2-6 when the stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar training pair s:t.
Step 3: Set activations for the input units, i = 1 to n:
        xi = si
Step 4: Calculate the net input to the output unit:
        yin = b + Σ xi wi
Step 5: Update the weights and bias for i = 1 to n:
        wi(new) = wi(old) + α (t − yin) xi
        b(new)  = b(old)  + α (t − yin)
Step 6: If the highest weight change that occurred during training is smaller than a specified tolerance, then stop the training process; else continue. This is the test for the stopping condition of the network.

Testing Algorithm:

Step 0: Initialize the weights.
Step 1: Perform Steps 2-4 for each bipolar input vector x.
Step 2: Set the activations of the input units to x.
Step 3: Calculate the net input to the output unit:
        yin = b + Σ xi wi
Step 4: Apply the activation function over the net input calculated:
        y = 1 if yin >= 0
        y = -1 if yin < 0
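A Matlab sketch of the training and testing steps above (the bipolar AND data, learning rate and tolerance are assumptions used only for illustration):

% Adaline training (Steps 0-6) followed by the testing pass
s = [1 1; 1 -1; -1 1; -1 -1];         % bipolar training inputs
t = [1; -1; -1; -1];                  % bipolar targets
alpha = 0.1;                          % learning rate
tol   = 0.01;                         % tolerance on the largest weight change
w = 0.1*rand(size(s,2),1);            % step 0: small random (non-zero) weights
b = 0.1;
maxChange = inf;
while maxChange > tol                 % step 1: loop until the stopping condition holds
    maxChange = 0;
    for i = 1:size(s,1)               % steps 2-5 for each bipolar pair s:t
        x   = s(i,:)';                % step 3: set activations
        yin = b + x'*w;               % step 4: net input
        dw  = alpha*(t(i) - yin)*x;   % step 5: weight and bias updates
        db  = alpha*(t(i) - yin);
        w = w + dw;  b = b + db;
        maxChange = max([maxChange; abs(dw); abs(db)]);
    end
end
y = 2*((b + s*w) >= 0) - 1            % testing: activation is 1 if yin >= 0, else -1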

Object: 6 Write an algorithm for Madaline N/W with flowchart & Matlab program.

Madaline Network

The Madaline (MRI) training algorithm is as follows:

Step 0: Initialize the weights. Also set the initial learning rate.
Step 1: When the stopping condition is false, perform Steps 2-3.
Step 2: For each bipolar training pair s:t, perform Steps 3-7.
Step 3: Activate the input layer units, for i = 1 to n:
        xi = si
Step 4: Calculate the net input to each hidden Adaline unit:
        zinj = bj + Σ(i=1 to n) xi wij ,   j = 1 to m
Step 5: Calculate the output of each hidden unit:
        zj = f(zinj)
Step 6: Find the output of the net:
        yin = b0 + Σ(j=1 to m) zj vj
        y = f(yin)
Step 7: Calculate the error and update the weights:
        1. If t = y, no weight updating is required.
        2. If t ≠ y and t = +1, update the weights on zj, the unit whose net input is closest to 0 (zero):
           bj(new)  = bj(old)  + α (1 − zinj)
           wij(new) = wij(old) + α (1 − zinj) xi
        3. If t ≠ y and t = −1, update the weights on all units zk whose net input is positive:
           wik(new) = wik(old) + α (−1 − zink) xi
           bk(new)  = bk(old)  + α (−1 − zink)
Step 8: Test for the stopping condition.
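A Matlab sketch of the MRI steps above (two hidden Adaline units on the XOR problem; the fixed output-unit weights, learning rate and epoch cap are assumptions for illustration):

% Madaline (MRI): only the hidden Adaline weights are trained
s = [1 1; 1 -1; -1 1; -1 -1];                % bipolar inputs
t = [-1; 1; 1; -1];                          % bipolar XOR targets
alpha = 0.5;
w  = 0.1*randn(2,2);  b  = 0.1*randn(1,2);   % hidden unit weights and biases
v  = [0.5 0.5];       b0 = 0.5;              % fixed OR-like output unit
for epoch = 1:100
    for p = 1:size(s,1)
        x   = s(p,:);
        zin = b + x*w;                       % step 4: net input to each hidden unit
        z   = 2*(zin >= 0) - 1;              % step 5: hidden unit outputs
        yin = b0 + z*v';                     % step 6: net output
        y   = 2*(yin >= 0) - 1;
        if t(p) ~= y                         % step 7: update only when the output is wrong
            if t(p) == 1
                [~, j] = min(abs(zin));                  % unit whose net input is closest to 0
                b(j)   = b(j)   + alpha*(1 - zin(j));
                w(:,j) = w(:,j) + alpha*(1 - zin(j))*x';
            else
                for k = find(zin > 0)                    % every unit with positive net input
                    b(k)   = b(k)   + alpha*(-1 - zin(k));
                    w(:,k) = w(:,k) + alpha*(-1 - zin(k))*x';
                end
            end
        end
    end
end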

Object: 7 Write a program to implement Error Back Propagation Algorithm

Algorithm for Error Backpropagation

Start with randomly chosen weights.
While the MSE is unsatisfactory and the computational bounds are not exceeded, do:
    For each input pattern and desired output vector dj:
        Compute the hidden node outputs xj(1);
        Compute the network output vector oj;
        Compute the error between oj and the desired output vector dj;
        Modify the weights between the hidden and output nodes:
            Δwk,j(2,1) = η (dk − ok) ok (1 − ok) xj(1)
        Modify the weights between the input and hidden nodes:
            Δwj,i(1,0) = η Σk [ (dk − ok) ok (1 − ok) wk,j(2,1) ] xj(1) (1 − xj(1)) xi(0)
    End for
End while.
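A compact Matlab sketch of these update rules for one hidden layer of sigmoid units (the XOR data, hidden-layer size and learning rate η below are assumptions for illustration):

% backpropagation with a single hidden layer; biases are folded into the weight matrices
X = [0 0; 0 1; 1 0; 1 1];            % input patterns, one per row
D = [0; 1; 1; 0];                    % desired outputs (XOR)
eta = 0.5;                           % learning rate
nh  = 4;                             % number of hidden nodes
W1 = 0.5*randn(size(X,2)+1, nh);     % input -> hidden weights (last row is the bias)
W2 = 0.5*randn(nh+1, 1);             % hidden -> output weights (last row is the bias)
sig = @(a) 1./(1 + exp(-a));         % sigmoid node function
for epoch = 1:5000
    for p = 1:size(X,1)
        x1 = sig([X(p,:) 1]*W1);                 % hidden node outputs x(1)
        o  = sig([x1 1]*W2);                     % network output o
        deltaO = (D(p) - o).*o.*(1 - o);         % (d - o) o (1 - o)
        deltaH = (deltaO*W2(1:nh)').*x1.*(1 - x1);
        W2 = W2 + eta*[x1 1]'*deltaO;            % modify hidden -> output weights
        W1 = W1 + eta*[X(p,:) 1]'*deltaH;        % modify input -> hidden weights
    end
end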

Program for Backpropagation Algorithm

Program Code:

#include ...
#include ...
#include ...
float x = 0, I = 0, j = 0, r = 0;
float hard(float I, float m, float w[20][20], float p[5][5], float t[5], float a)
{
    float n = 0, s = 0, e = 0, pt[50][50];
    for (int I = 0; I ...

    w[I+1][1] = w[1][2];
    x++;
    if (x > (m - 1)) {
        if (y == 0) {
            cout ...

    cin >> p[i][j];
    }
    cout ...
Object: 8 Study of Genetic Algorithm.

Genetic Algorithm

Professor John Holland in 1975 proposed an attractive class of computational models, called Genetic Algorithms (GA), that mimic the biological evolution process for solving problems in a wide domain. The mechanisms under GA have been analyzed and explained later by Goldberg, De Jong, Davis, Muehlenbein, Chakraborti, Fogel, Vose and many others. Genetic Algorithms have three major applications, namely intelligent search, optimization and machine learning. Currently, Genetic Algorithms are used along with neural networks and fuzzy logic for solving more complex problems. Because of their joint usage in many problems, these together are often referred to by a generic name: soft computing. A Genetic Algorithm operates through a simple cycle of stages:

i) Creation of a population of strings,
ii) Evaluation of each string,
iii) Selection of the best strings, and
iv) Genetic manipulation to create a new population of strings.

The cycle of a Genetic Algorithm is presented below.

Each cycle in a Genetic Algorithm produces a new generation of possible solutions for a given problem. In the first phase, an initial population, describing representatives of the potential solution, is created to initiate the search process. The elements of the population are encoded into bit strings, called chromosomes. The performance of the strings, often called fitness, is then evaluated with the help of some functions, representing the constraints of the problem. Depending on the fitness of the chromosomes, they are selected for a subsequent genetic manipulation process. It should be noted that the selection process is mainly responsible for assuring survival of the best-fit individuals. After selection of the population strings is over, the genetic manipulation process, consisting of two steps, is carried out. In the first step, the crossover operation that recombines the bits (genes) of each two selected strings (chromosomes) is executed. Various types of crossover operators are found in the literature; the single-point and two-point crossover operations are illustrated here. The crossover points of any two chromosomes are selected randomly. The second step in the genetic manipulation process is termed mutation, where the bits at one or more randomly selected positions of the chromosomes are altered. The mutation process helps to overcome trapping at local maxima. The offspring produced by the genetic manipulation process are the next population to be evaluated.

Fig.: Mutation of a chromosome at the 5th bit position.

Example: The Genetic Algorithm cycle is illustrated in this example for maximizing the function f(x) = x^2 in the interval 0 <= x <= 31. In this example the fitness function is f(x) itself. The larger the functional value, the better the fitness of the string. In this example, we start with 4 initial strings. The fitness values of the strings and the percentage fitness of the total are estimated in Table A. Since the fitness of the second string is large, we select 2 copies of the second string and one each of the first and fourth strings in the mating pool. The selection of the partners in the mating pool is also done randomly. Here, in Table B, we selected the partner of string 1 to be the 2nd string and the partner of the 4th string to be the 2nd string. The crossover points for the first-second and second-fourth strings have been selected after the 0th and 2nd bit positions respectively in Table B. The second generation of the population, without mutation in the first generation, is presented in Table C.

Table A

Table B:

Table C:

A schema (plural: schemata), hyperplane or similarity template is a genetic pattern with fixed values of 1 or 0 at some designated bit positions. For example, S = 01?1??1 is a 7-bit schema with fixed values at 4 bits and don't-care values, represented by ?, at the remaining 3 positions. Since 4 positions matter for this schema, we say that the schema contains 4 genes.

Deterministic Explanation of Holland's Observation

To explain Holland's observation in a deterministic manner, let us presume the following assumptions:

i) There are no recombinations or alterations to genes.
ii) Initially, a fraction f of the population possesses the schema S and those individuals reproduce at a fixed rate r.
iii) All other individuals lacking schema S reproduce at a rate s < r.

Then after t generations the fraction of the population possessing schema S is f r^t / [ f r^t + (1 − f) s^t ]. For small t and f, the above fraction reduces to f (r/s)^t, which means the population having the schema S increases exponentially at a rate (r/s). A stochastic proof of the above property will be presented shortly, vide a well-known theorem called the fundamental theorem of Genetic Algorithms.

Stochastic Explanation of Genetic Algorithms

For the presentation of the fundamental theorem of Genetic Algorithms, the following terminologies are defined in order.

Definition: The order of a schema H, denoted by O(H), is the number of fixed positions in the schema. For example, the order of the schema H = ?001?1? is 4, since it contains 4 fixed positions.

Definition: The defining length of a schema H, denoted by d(H), is the distance between the first and the last fixed positions in the schema. For example, the schema ?1?001 has a defining length d(H) = 4, while the d(H) of ???1?? is zero.

Definition: The schemata defined over L-bit strings may be geometrically interpreted as hyperplanes in an L-dimensional hyperspace (a binary vector space), with each L-bit string representing one corner point in an n-dimensional cube.

Object: 9 Study of Matlab neural network toolbox.

Matlab Neural Network Toolbox

The Matlab neural network toolbox provides a complete set of functions and a graphical user interface for the design, implementation, visualization, and simulation of neural networks. It supports the most commonly used supervised and unsupervised network architectures and a comprehensive set of training and learning functions.

KEY FEATURES

Graphical user interface (GUI) for creating, training, and simulating your neural networks.
Support for the most commonly used supervised and unsupervised network architectures.
A comprehensive set of training and learning functions.
A suite of Simulink blocks, as well as documentation and demonstrations of control system applications.
Automatic generation of Simulink models from neural network objects.
Routines for improving generalization.

GENERAL CREATION OF NETWORK

net = network
net = network(numInputs, numLayers, biasConnect, inputConnect, layerConnect, outputConnect, targetConnect)

Description

NETWORK creates new custom networks. It is used to create networks that are then customized by functions such as NEWP, NEWLIN, NEWFF, etc. NETWORK takes these optional arguments (shown with default values):

numInputs     - Number of inputs, 0.
numLayers     - Number of layers, 0.
biasConnect   - numLayers-by-1 Boolean vector, zeros.
inputConnect  - numLayers-by-numInputs Boolean matrix, zeros.
layerConnect  - numLayers-by-numLayers Boolean matrix, zeros.
outputConnect - 1-by-numLayers Boolean vector, zeros.
targetConnect - 1-by-numLayers Boolean vector, zeros.

and returns:

NET - New network with the given property values.

TRAIN AND ADAPT

1. Incremental training: updating the weights after the presentation of each single training sample.

2. Batch training: updating the weights after each presentation of the complete data set.

When using adapt, both incremental and batch training can be used. When using train, on the other hand, only batch training will be used, regardless of the format of the data. The big plus of train is that it gives you a lot more choice of training functions (gradient descent, gradient descent with momentum, Levenberg-Marquardt, etc.), which are implemented very efficiently.

The difference between train and adapt is the difference between passes and epochs. When using adapt, the property that determines how many times the complete training data set is used for training the network is called net.adaptParam.passes. Fair enough. But, when using train, the exact same property is now called net.trainParam.epochs.

>> net.trainFcn = 'traingdm'
>> net.trainParam.epochs = 1000
>> net.adaptFcn = 'adaptwb'
>> net.adaptParam.passes = 10
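For illustration, a small feed-forward network can be created and batch-trained with the classic (pre-R2010) toolbox syntax; the data P and T, the layer sizes and the training parameters below are assumptions for the example:

% fit a noisy sine with a 5-neuron hidden layer, trained by gradient descent with momentum
P = 0:0.1:2*pi;                          % inputs (1 x N)
T = sin(P) + 0.05*randn(size(P));        % targets (1 x N)
net = newff(minmax(P), [5 1], {'tansig','purelin'}, 'traingdm');
net.trainParam.epochs = 1000;            % the same property discussed above
net.trainParam.lr = 0.05;
net = train(net, P, T);                  % batch training
Y = sim(net, P);                         % simulate the trained network
plot(P, T, '.', P, Y, '-')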

TRAINING FUNCTIONS

There are several types of training functions:

1. Supported training functions,
2. Supported learning functions,
3. Transfer functions,
4. Transfer derivative functions,
5. Weight and bias initialization functions,
6. Weight derivative functions.

SUPPORTED TRAINING FUNCTIONS

trainb   - Batch training with weight and bias learning rules
trainbfg - BFGS quasi-Newton backpropagation
trainbr  - Bayesian regularization
trainc   - Cyclical order incremental update
traincgb - Powell-Beale conjugate gradient backpropagation
traincgf - Fletcher-Powell conjugate gradient backpropagation
traincgp - Polak-Ribiere conjugate gradient backpropagation
traingd  - Gradient descent backpropagation
traingda - Gradient descent with adaptive learning rate backpropagation
traingdm - Gradient descent with momentum backpropagation
traingdx - Gradient descent with momentum & adaptive learning rate backpropagation
trainlm  - Levenberg-Marquardt backpropagation
trainoss - One-step secant backpropagation
trainr   - Random order incremental update
trainrp  - Resilient backpropagation (Rprop)
trains   - Sequential order incremental update
trainscg - Scaled conjugate gradient backpropagation

SUPPORTED LEARNING FUNCTIONS

learncon - Conscience bias learning function
learngd  - Gradient descent weight/bias learning function
learngdm - Gradient descent with momentum weight/bias learning function
learnh   - Hebb weight learning function
learnhd  - Hebb with decay weight learning rule
learnis  - Instar weight learning function
learnk   - Kohonen weight learning function
learnlv1 - LVQ1 weight learning function
learnlv2 - LVQ2 weight learning function
learnos  - Outstar weight learning function
learnp   - Perceptron weight and bias learning function
learnpn  - Normalized perceptron weight and bias learning function
learnsom - Self-organizing map weight learning function
learnwh  - Widrow-Hoff weight and bias learning rule

TRANSFER FUNCTIONS

compet   - Competitive transfer function
hardlim  - Hard limit transfer function
hardlims - Symmetric hard limit transfer function
logsig   - Log sigmoid transfer function
poslin   - Positive linear transfer function
purelin  - Linear transfer function
radbas   - Radial basis transfer function
satlin   - Saturating linear transfer function
satlins  - Symmetric saturating linear transfer function
softmax  - Softmax transfer function
tansig   - Hyperbolic tangent sigmoid transfer function
tribas   - Triangular basis transfer function

TRANSFER DERIVATIVE FUNCTIONS

dhardlim - Hard limit transfer derivative function
dhardlms - Symmetric hard limit transfer derivative function
dlogsig  - Log sigmoid transfer derivative function
dposlin  - Positive linear transfer derivative function
dpurelin - Linear transfer derivative function
dradbas  - Radial basis transfer derivative function
dsatlin  - Saturating linear transfer derivative function
dsatlins - Symmetric saturating linear transfer derivative function
dtansig  - Hyperbolic tangent sigmoid transfer derivative function
dtribas  - Triangular basis transfer derivative function

WEIGHT AND BIAS INITIALIZATION FUNCTIONS

initcon  - Conscience bias initialization function
initzero - Zero weight/bias initialization function
midpoint - Midpoint weight initialization function
randnc   - Normalized column weight initialization function
randnr   - Normalized row weight initialization function
rands    - Symmetric random weight/bias initialization function

WEIGHT DERIVATIVE FUNCTIONS

ddotprod - Dot product weight derivative function

NEURAL NETWORK TOOLBOX GUI

1. The graphical user interface (GUI) is designed to be simple and user friendly. This tool lets you import potentially large and complex data sets.

2. The GUI also enables you to create, initialize, train, simulate, and manage the networks. It has the GUI Network/Data Manager window.

3. The window has its own work area, separate from the more familiar command-line workspace. Thus, when using the GUI, one might "export" the GUI results to the (command-line) workspace, and similarly "import" results from the command-line workspace to the GUI.

4. Once the Network/Data Manager is up and running, create a network, view it, train it, simulate it and export the final results to the workspace. Similarly, import data from the workspace for use in the GUI.

A graphical user interface can thus be used to:

1. Create networks,
2. Create data,
3. Train the networks,
4. Export the networks,
5. Export the data to the command-line workspace.

CONCLUSION

The presentation has given an overview of the Neural Network toolbox in MATLAB.

Object: 10 Study of Matlab Fuzzy logic toolbox.

Matlab Fuzzy Logic Toolbox

Fuzzy logic in Matlab can be dealt with very easily due to the Fuzzy Logic Toolbox. This provides a complete set of functions to design and implement various fuzzy logic processes. The major fuzzy logic operations include:

fuzzification, defuzzification, and fuzzy inference.

These are all performed by means of various functions and can even be implemented using the Graphical User Interface. The features are:

It provides tools to create and edit Fuzzy Inference Systems (FIS). It allows integrating fuzzy systems into simulations with SIMULINK.

It is possible to create standalone C programs that call on fuzzy systems built with MATLAB.

The Toolbox provides three categories of tools: command-line functions, graphical or interactive tools, and Simulink blocks.

COMMAND LINE FIS FUNCTIONS

addmf    - Add membership function to FIS
addrule  - Add rule to FIS
addvar   - Add variable to FIS
defuzz   - Defuzzify membership function
evalfis  - Perform fuzzy inference calculation
evalmf   - Generic membership function evaluation
gensurf  - Generate FIS output surface
getfis   - Get fuzzy system properties
mfstrtch - Stretch membership function
newfis   - Create new FIS
plotfis  - Display FIS input-output diagram
plotmf   - Display all membership functions for one variable
readfis  - Load FIS from disk
rmmf     - Remove membership function from FIS
rmvar    - Remove variable from FIS
setfis   - Set fuzzy system properties
showfis  - Display annotated FIS
showrule - Display FIS rules
writefis - Save FIS to disk

MEMBERSHIP FUNCTIONS

dsigmf   - Difference of two sigmoid membership functions
gauss2mf - Two-sided Gaussian curve membership function
gaussmf  - Gaussian curve membership function
gbellmf  - Generalized bell curve membership function
pimf     - Pi-shaped curve membership function
psigmf   - Product of two sigmoid membership functions
smf      - S-shaped curve membership function
sigmf    - Sigmoid curve membership function
trapmf   - Trapezoidal membership function
trimf    - Triangular membership function
zmf      - Z-shaped curve membership function

GRAPHICAL USER INTERFACE EDITORS (GUI TOOLS)

anfisedit   - ANFIS training and testing UI tool
findcluster - Clustering UI tool
fuzzy       - Basic FIS editor
mfedit      - Membership function editor
ruleedit    - Rule editor and parser
ruleview    - Rule viewer and fuzzy inference diagram
surfview    - Output surface viewer

FIS EDITOR (MAMDANI)

FIS EDITOR (SUGENO)

FIS MEMBERSHIP FUNCTION EDITOR

FIS RULE EDITOR

FIS RULE VIEWER

FIS SURFACE VIEWER
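As a usage illustration of the command-line functions listed above, a minimal Mamdani FIS can be built and evaluated as follows (the variable names, ranges, membership functions and rules are assumptions for the example, written in the classic command-line syntax):

% one-input, one-output Mamdani system evaluated at a crisp input
fis = newfis('tipper');                              % create a new (Mamdani) FIS
fis = addvar(fis, 'input', 'service', [0 10]);       % input variable and its range
fis = addmf(fis, 'input', 1, 'poor', 'gaussmf', [1.5 0]);
fis = addmf(fis, 'input', 1, 'good', 'gaussmf', [1.5 10]);
fis = addvar(fis, 'output', 'tip', [0 30]);          % output variable and its range
fis = addmf(fis, 'output', 1, 'cheap', 'trimf', [0 5 10]);
fis = addmf(fis, 'output', 1, 'generous', 'trimf', [20 25 30]);
ruleList = [1 1 1 1;                                 % if service is poor then tip is cheap
            2 2 1 1];                                % if service is good then tip is generous
fis = addrule(fis, ruleList);
plotmf(fis, 'input', 1)                              % display the input membership functions
tip = evalfis(5, fis)                                % perform the fuzzy inference calculation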

SIMULINK BLOCKS

Once a fuzzy system is created using the GUI tools or some other method, it can be directly embedded into SIMULINK using the Fuzzy Logic Controller block.

MEMBERSHIP SIMULINK BLOCKS

This Toolbox includes 11 built-in membership function types, built from several basic functions: piecewise linear functions (triangular and trapezoidal), the Gaussian distribution function (Gaussian curves and generalized bell), the sigmoid curve, and quadratic and cubic polynomial curves (Z, S, and Pi curves).

ADVANCED TECHNIQUES

anfis    - Training routine for Sugeno-type FIS (MEX only)
fcm      - Find clusters with fuzzy c-means clustering
genfis1  - Generate FIS matrix using generic method
genfis2  - Generate FIS matrix using subtractive clustering
subclust - Estimate cluster centers with subtractive clustering

CONCLUSION

The presentation has given an overview of the Fuzzy Logic toolbox in MATLAB.

Object: 11 Write a MATLAB program to implement Fuzzy Set operations & properties.

Program for fuzzy sets with properties and operations:

clear all
clc
disp('Fuzzy set with properties and operations')
a   = [0 1 0.5 0.4 0.6]
b   = [0 0.5 0.7 0.8 0.4]
c   = [0.3 0.9 0.2 0 1]
phi = [0 0 0 0 0]          % null (empty) fuzzy set
disp('Union of a and b')
au = max(a,b)
disp('Intersection of a and b')
iab = min(a,b)
disp('Union of b and a')

bu = max(b,a)
if (au == bu)
    disp('Commutative law is satisfied')
else
    disp('Commutative law is not satisfied')
end
disp('Union of b and c')
cu = max(b,c)
disp('a U (b U c)')
acu = max(a,cu)
disp('(a U b) U c')
auc = max(au,c)
if (acu == auc)
    disp('Associative law is satisfied')
else
    disp('Associative law is not satisfied')
end
disp('Intersection of b and c')
ibc = min(b,c)
disp('a U (b I c)')
dls = max(a,ibc)
disp('Union of a and c')
uac = max(a,c)
disp('(a U b) I (a U c)')
drs = min(au,uac)
if (dls == drs)
    disp('Distributive law is satisfied')
else
    disp('Distributive law is not satisfied')
end
disp('a U a')
idl = max(a,a)
if (idl == a)
    disp('Idempotency law is satisfied')
else
    disp('Idempotency law is not satisfied')
end
idtl = min(a,phi)          % intersection of a with the empty set, for the identity-law check
if (idtl == phi)
    disp('Identity law is satisfied')
else
    disp('Identity law is not satisfied')
end
disp('Complement of (a I b)')
for i = 1:5
    ciab(i) = 1 - iab(i);

end
ciab
disp('Complement of a')
for i = 1:5
    ca(i) = 1 - a(i);
end
ca
disp('Complement of b')
for i = 1:5
    cb(i) = 1 - b(i);
end
cb
disp('Complement of the complement of a')
cca = 1 - ca;              % double complement, for the involution-law check
if (a == cca)
    disp('Involution law is satisfied')
else
    disp('Involution law is not satisfied')
end

Object: 12 Write a program to implement composition on Fuzzy and Crisp relations.

Program for composition on Fuzzy and Crisp relations:

clear all
clc
disp('Composition on Crisp relation')
a = [0.2 0.6]
b = [0.3 0.5]
c = [0.6 0.7]
for i = 1:2
    r(i) = a(i)*b(i);
    s(i) = b(i)*c(i);
end

r
s
irs = min(r,s)
disp('Crisp composition of r and s using max-min composition')
crs = max(irs)
for i = 1:2
    prs(i) = r(i)*s(i);
end
prs
disp('Crisp composition of r and s using max-product composition')
mprs = max(prs)
disp('Fuzzy composition')
firs = min(r,s)
disp('Fuzzy composition of r and s using max-min composition')
frs = max(firs)
for i = 1:2
    fprs(i) = r(i)*s(i);
end
fprs
disp('Fuzzy composition of r and s using max-product composition')
fmprs = max(fprs)
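For relations given as matrices (rather than the vectors used above), max-min and max-product composition can be sketched as follows; the relation values R and S are illustrative assumptions:

% R relates X to Y, S relates Y to Z; T = R o S relates X to Z
R = [0.6 0.3; 0.2 0.9];              % fuzzy relation R(x,y)
S = [1.0 0.5 0.3; 0.8 0.4 0.7];      % fuzzy relation S(y,z)
T  = zeros(size(R,1), size(S,2));    % max-min composition
Tp = zeros(size(R,1), size(S,2));    % max-product composition
for i = 1:size(R,1)
    for k = 1:size(S,2)
        T(i,k)  = max(min(R(i,:), S(:,k)'));   % max over y of min(R(x,y), S(y,z))
        Tp(i,k) = max(R(i,:).*S(:,k)');        % max over y of R(x,y)*S(y,z)
    end
end
T
Tp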

Object: 13. Consider the following fuzzy sets:

A = [1 0.4 0.6 0.3]

B = [0.3 0.2 0.6 0.5]

Program to find union, intersection and complement of fuzzy sets:

% Enter the two Fuzzy sets
u = input('Enter the first fuzzy set A ')
v = input('Enter the second fuzzy set B ')
disp('Union of A and B')
w = max(u,v)
disp('Intersection of A and B')
p = min(u,v)
[m] = size(u)
disp('Complement of A')
q1 = ones(m) - u
[n] = size(v)
disp('Complement of B')
q2 = ones(n) - v

Output:

Enter the first fuzzy set A [1 0.4 0.6 0.3]
Enter the second fuzzy set B [0.3 0.2 0.6 0.5]

Union of A and B
w =
    1.0000    0.4000    0.6000    0.5000

Intersection of A and B
p =
    0.3000    0.2000    0.6000    0.3000

Complement of A
q1 =
         0    0.6000    0.4000    0.7000

Complement of B
q2 =
    0.7000    0.8000    0.4000    0.5000

Object: 15. Write a MATLAB program for maximizing f(x) = x^2 using GA, where x ranges from 0 to 31. Perform 5 iterations only.

Program for Genetic Algorithm to maximize the function f(x) = x^2:

clear all

clc
% x ranges from 0 to 31; 2 power 5 = 32, so five bits are enough to represent x in binary
N   = input('Enter no. of population in each iteration ')
Nit = input('Enter no. of iterations ')
% Generate the initial population
[oldchrom] = initbp(N,5)
% The population in binary is converted to integer
FieldD = [5 0 31 0 0 1 1]
for i = 1:Nit
    phen = bindecod(oldchrom, FieldD, 3)   % phen gives the integer value of the population
    % obtain fitness value
    sqx = phen.^2
    sumsqx = sum(sqx)
    avsqx = sumsqx/N
    hsqx = max(sqx)
    pselect = sqx./sumsqx
    sumpselect = sum(pselect)
    avpselect = sumpselect/N
    hpselect = max(pselect)
    % apply roulette wheel selection
    FitnV = sqx
    Nsel = 4
    newchrix = selrws(FitnV, Nsel)
    newchrom = oldchrom(newchrix, :)
    % perform crossover
    crossrate = 1
    newchromc = recsp(newchrom, crossrate)            % new population after crossover
    % perform mutation
    vlub = 0:31
    mutrate = 0.001
    newchromm = mutrandbin(newchromc, vlub, mutrate)  % new population after mutation
    disp('For iteration')
    i
    disp('Population')
    oldchrom
    disp('X')
    phen
    disp('f(X)')
    sqx
    oldchrom = newchromm
end
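Because the program above relies on GA-toolbox routines (initbp, bindecod, selrws, recsp, mutrandbin) that may not be installed, a self-contained Matlab sketch of the same cycle (5-bit strings, roulette-wheel selection, single-point crossover and bit-flip mutation; the population size and rates are assumptions) is:

% maximize f(x) = x^2 for 0 <= x <= 31 with a basic binary GA
N = 4; Nit = 5; pc = 1.0; pm = 0.001;          % population, iterations, crossover/mutation rates
pop = randi([0 1], N, 5);                      % random 5-bit initial population
for it = 1:Nit
    x   = pop*[16 8 4 2 1]';                   % decode binary chromosomes to integers
    fit = x.^2;                                % fitness f(x) = x^2
    p   = (fit + eps)/sum(fit + eps);          % selection probabilities (roulette wheel)
    idx = zeros(N,1);
    for i = 1:N
        idx(i) = find(rand <= cumsum(p), 1);
    end
    pop = pop(idx,:);                          % mating pool
    for i = 1:2:N-1                            % single-point crossover on consecutive pairs
        if rand < pc
            cp = randi(4);                     % crossover point after bit cp
            tmp                = pop(i,   cp+1:end);
            pop(i,   cp+1:end) = pop(i+1, cp+1:end);
            pop(i+1, cp+1:end) = tmp;
        end
    end
    pop = double(xor(pop, rand(size(pop)) < pm));   % bit-flip mutation
    fprintf('Iteration %d: best f(x) = %d\n', it, max(fit));
end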