
Page 1:

Neural Networks

AI – Week 21: Sub-symbolic AI One: Neural Networks

Lee McCluskey, room 3/10

Email: lee@hud.ac.uk

http://scom.hud.ac.uk/scomtlm/cha2555/

Page 2:

Aoccdrnig to rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

Neural Networks

Page 3:

Neural Networks

Up to now: Symbolic AI

• Knowledge Representation is explicit and composite: features of the representation (e.g. objects, relations, …) map to features of the world

• Processes are often based on heuristic search, matching, logical reasoning, and constraint handling

• Good for simulating "high-level cognitive" tasks such as reasoning, planning, problem solving, high-level learning, and language and text processing …

[Figure: block A on top of block B ("The World"), mapped to the symbolic representation OnTop(A,B) ("The Representation").]

Page 4:

Neural Networks

Up to now: Symbolic AI

Benefits:

• AI systems / knowledge bases can be engineered and maintained as in software engineering

• Behaviour can be predicted and explained, e.g. using logical reasoning

Problems:

• Reasoning tends to be "brittle": easily broken by incorrect or approximate data

• Not so good for simulating low-level (reactive) animal behaviour, where the inputs are noisy or incomplete

Page 5:

Neural Networks

• Neural Networks (NNs) are networks of neurons, for example as found in real (i.e. biological) brains. Artificial neurons are crude approximations of the neurons found in brains. They may be physical devices, or purely mathematical constructs.

• Artificial Neural Networks (ANNs) are networks of artificial neurons, and hence constitute crude approximations to parts of real brains. An ANN is, roughly, a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task.

BENEFITS:

• Massive parallelism makes them very efficient

• They can learn and generalize from training data, so there is no need for knowledge engineering or a complex understanding of the problem

• They are fault tolerant (equivalent to the "graceful degradation" found in biological systems) and noise tolerant, so they can cope with noisy, inaccurate inputs

Page 6:

Learning in Neural Networks

There are many forms of neural network. Most operate by passing neural 'activations' (processed firing states) through a network of connected neurons.

One of the most powerful features of neural networks is their ability to learn and generalize from a set of training data. They adapt the strengths/weights of the connections between neurons so that the final output activations are correct (e.g. catching a ball, or learning to balance).

We will consider:

1. Supervised learning (i.e. learning with a teacher)

2. Reinforcement learning (i.e. learning with limited feedback)

Page 7:

Neural Networks

BRAINS VS COMPUTERS

1. There are approximately 10 billion neurons in the human cortex, compared with tens of thousands of processors in the most powerful parallel computers.

2. Each biological neuron is connected to several thousand other neurons, similar to the connectivity in powerful parallel computers.

3. The lack of processing units can be compensated for by speed. The typical operating speed of biological neurons is measured in milliseconds (10⁻³ s), while a silicon chip can operate in nanoseconds (10⁻⁹ s).

4. The human brain is extremely energy efficient, using approximately 10⁻¹⁶ joules per operation per second, whereas the best computers today use around 10⁻⁶ joules per operation per second.

5. Brains have been evolving for tens of millions of years, while computers have been evolving for only a few decades.

“My Brain is a Learning Neural Network” Terminator 2

Page 8:

Very Very Simple Model of an Artificial Neuron (McCulloch and Pitts 1943)

• A set of synapses (i.e. connections) brings in activations (inputs) from other neurons.

• A processing unit sums the weighted inputs (each input multiplied by its connection weight), and then applies a transfer function with a "threshold value" to decide whether the neuron "fires".

• An output line transmits the result to other neurons (the output can be binary or continuous). If the weighted sum does not reach the threshold, the output is 0.
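To make this concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold neuron in Python; the function name and the example weights and threshold are illustrative assumptions, not values from the slides.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: weighted sum followed by a hard threshold."""
    # Sum each input multiplied by its connection weight
    total = sum(x * w for x, w in zip(inputs, weights))
    # Fire (output 1) only if the weighted sum reaches the threshold
    return 1 if total >= threshold else 0

# Illustrative use: two inputs with equal (assumed) weights
print(mcp_neuron([1, 0], weights=[0.6, 0.6], threshold=1.0))  # 0: sum 0.6 is below 1.0
print(mcp_neuron([1, 1], weights=[0.6, 0.6], threshold=1.0))  # 1: sum 1.2 reaches 1.0
```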

Page 9: [figure-only slide; no text content]

Page 10:

NNs: we don’t have to design them; they can learn their weights

Consider the simple Neuron Model:

1. Supply a set of values for the inputs (x1 … xn).

2. An output is produced and compared with the known target (correct/desired) output (like a "class" in learning from examples).

3. If the output generated by the network does not match the target output, the weights are adjusted.

4. The process is repeated from step 1 until the correct output is generated.

This is like supervised learning (learning from examples); a sketch of the loop follows.
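As an illustration of steps 1–4, here is a minimal sketch of a weight-update loop in Python. The learning rate, stopping condition, and the specific update rule (the standard perceptron learning rule) are assumptions for illustration; the slides do not commit to a particular rule.

```python
def train_perceptron(examples, n_inputs, rate=0.1, max_epochs=100):
    """Repeat steps 1-4: present inputs, compare with the target, adjust weights."""
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(max_epochs):
        all_correct = True
        for inputs, target in examples:
            # Steps 1-2: compute the output and compare it with the target
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total >= threshold else 0
            error = target - output
            if error != 0:
                all_correct = False
                # Step 3: adjust each weight in proportion to its input
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                threshold -= rate * error  # the threshold is adjusted too
        # Step 4: stop once every training example produces the correct output
        if all_correct:
            break
    return weights, threshold

# Illustrative use: learn Boolean AND from its truth table
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train_perceptron(examples, n_inputs=2))
```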

Page 11:

Real Example: Pattern Recognition on a Pixel Grid

Dimension: n = 5 × 8 = 40 inputs (one per pixel).

One output node indicates which of two classes the input belongs to.

What’s missing here?
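As a concrete illustration of this setup, here is a minimal sketch (with a made-up glyph) of flattening a 5 × 8 binary pixel grid into the n = 40 inputs of a single threshold output node; the weights shown are placeholders, since, as the question above hints, in practice they would have to be learned from labelled examples.

```python
# A 5 x 8 binary pixel grid (8 rows of 5 pixels) -- a made-up example glyph
grid = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
]

# Flatten the grid row by row into the n = 40 input values x1 ... x40
inputs = [pixel for row in grid for pixel in row]
assert len(inputs) == 40

# A single output node: weighted sum plus threshold gives one of two classes.
# These placeholder weights would have to be learned from labelled examples.
weights = [0.1] * 40
threshold = 2.0
total = sum(x * w for x, w in zip(inputs, weights))
print("class A" if total >= threshold else "class B")
```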

Page 12:

Simple Example: Boolean Functions

Learn a Boolean function from its truth table. [Slide figure not reproduced in the transcript.]
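The slide's figure is not reproduced here, but as a hedged illustration of the kind of Boolean function a single neuron can represent, here is a threshold neuron computing AND in Python; the weight and threshold values are hand-picked assumptions, and in practice they would be learned by the loop sketched earlier (Page 10).

```python
def threshold_neuron(x1, x2, w1, w2, threshold):
    """Single neuron: fires (outputs 1) if the weighted sum reaches the threshold."""
    return 1 if x1 * w1 + x2 * w2 >= threshold else 0

# Hand-picked (illustrative) weights that implement Boolean AND:
# only x1 = x2 = 1 pushes the sum (2.0) up to the threshold (1.5).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", threshold_neuron(x1, x2, w1=1.0, w2=1.0, threshold=1.5))
```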

Page 13:

Example viewed as a Decision Problem

[Figure: the inputs plotted against x1 and x2 axes, with a separating line (decision boundary) between the two classes.]

Page 14:

A one-layer neuron is not very powerful …!

Page 15:

XOR – Linearly Non-separable

[Figure: the four XOR input points plotted on x1 and x2 axes.]

Classes cannot be separated by a single decision boundary; a short derivation of why follows.
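To see why, here is a short worked derivation (not on the slides) using the firing rule from the next page: the neuron fires exactly when $w_1 x_1 + w_2 x_2 - \theta > 0$. XOR would require

$$
\begin{aligned}
(0,0) \mapsto 0: &\quad 0 - \theta \le 0 \;\Rightarrow\; \theta \ge 0 \\
(1,0) \mapsto 1: &\quad w_1 - \theta > 0 \;\Rightarrow\; w_1 > \theta \\
(0,1) \mapsto 1: &\quad w_2 - \theta > 0 \;\Rightarrow\; w_2 > \theta \\
(1,1) \mapsto 0: &\quad w_1 + w_2 - \theta \le 0 \;\Rightarrow\; w_1 + w_2 \le \theta
\end{aligned}
$$

Adding the middle two constraints gives $w_1 + w_2 > 2\theta \ge \theta$ (since $\theta \ge 0$), which contradicts the last constraint; hence no single-layer weights and threshold can realise XOR.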

Page 16:

Perceptrons

To determine whether the jth output node should fire, we calculate the value

$$\sum_{i=1}^{n} w_{ij}\, x_i - \theta_j$$

where $\theta_j$ is the threshold value of the jth node. If this value exceeds 0 the neuron will fire; otherwise it will not fire. Equivalently, the output of node j is $\operatorname{sgn}\!\left(\sum_{i=1}^{n} w_{ij}\, x_i - \theta_j\right)$.
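A minimal sketch of this firing rule in Python for several output nodes, assuming a weight matrix w[i][j] (input i to output node j) and one threshold theta[j] per node; all numeric values are illustrative.

```python
def fire(x, w, theta):
    """Output of node j: 1 if sum_i w[i][j] * x[i] - theta[j] exceeds 0, else 0."""
    n, m = len(x), len(theta)
    return [1 if sum(w[i][j] * x[i] for i in range(n)) - theta[j] > 0 else 0
            for j in range(m)]

# Two inputs, two output nodes (illustrative weights and thresholds)
x = [1, 0]
w = [[0.5, -0.2],   # weights from input 1 to output nodes 1 and 2
     [0.3,  0.8]]   # weights from input 2 to output nodes 1 and 2
theta = [0.4, 0.1]
print(fire(x, w, theta))  # [1, 0]: node 1 fires, node 2 does not
```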

Page 17:

Neural Networks

Conclusions

• The McCulloch-Pitts / perceptron neuron models are crude approximations to real neurons that perform a simple summation-and-threshold function on activation levels.

• NNs are particularly good at classification problems, where the weights are learned.

• Powerful NNs can be created using multiple layers (next term).

Next week – Reinforcement Learning