
Image Compression Using Neural Networks

Vishal Agrawal (Y6541)
Nandan Dubey (Y6279)

Overview

Introduction to neural networks
Back Propagated (BP) neural network
Image compression using BP neural network
Comparison with existing image compression techniques

What is a Neural Network?

An artificial neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships.
It can perform "intelligent" tasks similar to those performed by the human brain.

Neural Network Structure

A neural network is an interconnected group of neurons

A Simple Neural Network

Neural Network Structure

An Artificial Neuron

Activation Function

Depending upon the problem, a variety of activation functions is used:
Linear activation functions, such as the step function
Nonlinear activation functions, such as the sigmoid function

Typical Activation Functions

F(x) = 1 / (1 + e^(-k ∑ w_i x_i))
Shown for k = 0.5, 1, and 10

Using a nonlinear function which approximates a linear threshold allows a network to approximate nonlinear functions using only a small number of nodes.
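As a minimal illustration (a sketch in Python/NumPy, which the slides do not use; the sample input sums are arbitrary), the sigmoid can be evaluated at the three gains shown:

```python
import numpy as np

def sigmoid(s, k=1.0):
    """F(s) = 1 / (1 + e^(-k*s)), where s = sum of w_i * x_i."""
    return 1.0 / (1.0 + np.exp(-k * s))

s = np.linspace(-5, 5, 11)      # sample weighted input sums
for k in (0.5, 1.0, 10.0):      # the three gains from the slide
    print(f"k={k}:", np.round(sigmoid(s, k), 3))
```

Larger values of k make the sigmoid approach the step function, while small k keeps it nearly linear around zero.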

What can a Neural Net do?

Compute a known function
Approximate an unknown function
Pattern recognition
Signal processing

Learn to do any of the above

Learning Neural Networks

Learning/training a neural network means adjusting the weights of the connections so that the cost function is minimized.
Cost function:
Ĉ = (∑ (x_i – x_i')²) / N
where the x_i are the desired outputs and the x_i' are the outputs of the neural network.
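A one-line sketch of this cost in Python/NumPy (the array values are arbitrary examples):

```python
import numpy as np

def cost(desired, actual):
    """C = sum((x_i - x_i')^2) / N  -- the mean squared error."""
    desired, actual = np.asarray(desired), np.asarray(actual)
    return np.sum((desired - actual) ** 2) / desired.size

print(cost([1.0, 0.0, 1.0], [0.9, 0.2, 0.8]))  # 0.03
```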

Learning Neural Networks: Back Propagation

Main idea: distribute the error function across the hidden layers, corresponding to their effect on the output
Works on feed-forward networks

Back Propagation

Repeat:
Choose a training pair and copy it to the input layer
Cycle that pattern through the net
Calculate the error derivative between the output activation and the target output
Back propagate the summed product of the weights and errors in the output layer to calculate the error on the hidden units
Update the weights according to the error on that unit
Until the error is low or the net settles
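A minimal sketch of this loop, assuming Python/NumPy and a toy XOR training set (the network size, learning rate, and stopping threshold are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0,0],[0,1],[1,0],[1,1]], float)     # training pairs: inputs
T = np.array([[0],[1],[1],[0]], float)             # targets (XOR)

def f(s):  return 1.0 / (1.0 + np.exp(-s))         # sigmoid activation
def df(y): return y * (1.0 - y)                    # f'(h), written via output y

def add_bias(A):                                   # constant-1 input acts as bias
    return np.hstack([A, np.ones((len(A), 1))])

W1 = rng.normal(0, 1, (3, 4))                      # (2 inputs + bias) -> 4 hidden
W2 = rng.normal(0, 1, (5, 1))                      # (4 hidden + bias) -> 1 output
eta = 0.5

for epoch in range(20000):                         # "Repeat ..."
    Xb = add_bias(X)
    H  = f(Xb @ W1)                                # cycle pattern through the net
    Hb = add_bias(H)
    Y  = f(Hb @ W2)
    d2 = (T - Y) * df(Y)                           # error derivative at the output
    d1 = (d2 @ W2[:-1].T) * df(H)                  # back propagate to hidden units
    W2 += eta * Hb.T @ d2                          # update weights on each unit
    W1 += eta * Xb.T @ d1
    if np.mean((T - Y) ** 2) < 1e-3:               # "... until error is low"
        break

print(np.round(Y.ravel(), 2))                      # approaches [0, 1, 1, 0]
```

Each pass performs the listed steps in order: forward cycle, output-error derivative, back propagation of the weighted errors, and the weight update.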

Back Propagation: Sharing the Blame

We update the weights of each connection in the neural network.
This is done using the Delta Rule.

Delta Rule

ΔW_ji = η · δ_j · x_i
δ_j = (t_j – y_j) · f'(h_j)
where η is the learning rate of the neural network, t_j and y_j are the targeted and actual outputs of the jth neuron, h_j is the weighted sum of the neuron's inputs, and f' is the derivative of the activation function f.
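One delta-rule step for a single sigmoid neuron, sketched with arbitrary illustrative numbers:

```python
import numpy as np

x   = np.array([0.5, 1.0])        # inputs x_i
w   = np.array([0.2, -0.4])       # current weights W_ji
eta = 0.5                         # learning rate
t   = 1.0                         # target output t_j

h  = w @ x                        # weighted sum of inputs: h_j = -0.3
y  = 1.0 / (1.0 + np.exp(-h))     # actual output y_j = f(h_j) ~= 0.426
d  = (t - y) * y * (1.0 - y)      # delta_j = (t_j - y_j) * f'(h_j)
w += eta * d * x                  # delta W_ji = eta * delta_j * x_i
print(np.round(w, 4))             # ~[0.2351, -0.3298]
```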

Delta Rule for Multilayer Neural Networks

The problem with a multilayer network is that we don't know the targeted output values for the hidden layer neurons.
This can be solved by a trick:
δ_i = (∑_k δ_k · W_ki) · f'(h_i)
The first factor, the sum over k, is an approximation to (t_i – y_i) for the hidden layers, where we don't know t_i.
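Folding the output-layer deltas back through the weights, as a small sketch (the numeric values are illustrative):

```python
import numpy as np

y_hidden = np.array([0.6, 0.4])            # hidden outputs y_i = f(h_i)
W_out    = np.array([[0.3, -0.2],          # W_ki: hidden unit i -> output k
                     [0.5,  0.1]])
d_out    = np.array([0.05, -0.02])         # output-layer deltas delta_k

# delta_i = sum_k(delta_k * W_ki) * f'(h_i), with f'(h) = y(1-y) for a sigmoid
d_hidden = (d_out @ W_out) * y_hidden * (1.0 - y_hidden)
print(np.round(d_hidden, 5))               # [0.0012, -0.00288]
```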

Image Compression using BP Neural Network

Future of image coding (analogous to our visual system):
Narrow channel: K-L transform
Entropy coding of the state vector h_i's at the hidden layer

Image Compression using BP Neural Network continued…

A set of image samples is used to train the network. This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer.

Image Compression using BP Neural Network continued…

Transform coding with a multilayer neural network: The image is subdivided into non-overlapping blocks of n x n pixels each. Each block represents an N-dimensional vector x, N = n x n, in N-dimensional space. The transformation process maps this set of vectors into:
y = W · x (encoding into the hidden layer)
output = W⁻¹ · y (decoding, the inverse transformation)

Image Compression continued…

The inverse transformation needs to reconstruct the original image with a minimum of distortion.
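A minimal sketch of this setup in Python/NumPy (the synthetic 32×32 image, 4×4 blocks so N = 16, K = 4 hidden units, and a plain linear network trained by gradient descent are all illustrative assumptions, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 4, 4                                   # block size and hidden-layer width
img = np.outer(np.linspace(0, 1, 32),
               np.linspace(0, 1, 32))         # synthetic smooth 32x32 image

# Subdivide into non-overlapping n x n blocks -> N-dimensional vectors, N = n*n
blocks = img.reshape(8, n, 8, n).swapaxes(1, 2).reshape(-1, n * n)

W  = rng.normal(0, 0.1, (K, n * n))           # encoder: y = W x
Wd = rng.normal(0, 0.1, (n * n, K))           # decoder: x' = Wd y (the "inverse")

eta = 0.1
for _ in range(5000):                         # train to reconstruct the input
    Y  = blocks @ W.T                         # hidden state vectors (the code)
    Xr = Y @ Wd.T                             # reconstruction
    E  = Xr - blocks                          # reconstruction error
    Wd -= eta * (E.T @ Y) / len(blocks)       # gradient step on the decoder
    W  -= eta * ((E @ Wd).T @ blocks) / len(blocks)   # ... and the encoder

print("MSE:", np.mean((blocks @ W.T @ Wd.T - blocks) ** 2))
```

Reassembling the reconstructed blocks reverses the reshape and yields the decompressed image; the hidden vectors Y are what would be entropy coded.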

Analysis

The bit rate can be defined as follows:

(mKt + NKT) / (mN) bits/pixel
where the input image is divided into m blocks of N pixels each, K is the number of hidden-layer neurons, t stands for the number of bits used to encode each hidden neuron output, and T for each coupling weight from the hidden layer to the output layer. NKT is small and can be ignored, leaving approximately Kt/N bits/pixel.
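Plugging in illustrative numbers (a 512×512 image, 8×8 blocks, K = 16, t = 8 bits, T = 16 bits; these are assumptions, not values from the slides):

```python
N, m = 64, (512 // 8) ** 2       # 8x8 blocks of a 512x512 image -> 4096 blocks
K, t, T = 16, 8, 16              # hidden neurons, bits/output, bits/weight

full   = (m * K * t + N * K * T) / (m * N)
approx = K * t / N               # ignoring the small NKT term
print(full, approx)              # ~2.06 vs 2.0 bits/pixel
```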

Output of this Compression Algorithm

Other Neural Network Techniques

Hierarchical back-propagation neural network
Predictive coding
Depending upon the weight update function, we also have Hebbian learning-based image compression:
W_i(t + 1) = (W_i(t) + α · h_i(t) · X(t)) / ||W_i(t) + α · h_i(t) · X(t)||
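One normalized Hebbian update step, sketched with arbitrary values (taking the unit's output as the linear response h_i = W_i · X is an assumption):

```python
import numpy as np

rng   = np.random.default_rng(0)
X     = rng.normal(0, 1, 16)        # input block X(t)
W_i   = rng.normal(0, 1, 16)        # weight vector W_i(t) of hidden unit i
alpha = 0.01                        # learning rate

h_i = W_i @ X                       # hidden unit output h_i(t)
W_i = (W_i + alpha * h_i * X) / np.linalg.norm(W_i + alpha * h_i * X)
print(np.linalg.norm(W_i))          # normalization keeps ||W_i|| = 1
```

Iterating this over many input blocks drives W_i toward a principal component of the data, which is what makes Hebbian learning usable as an adaptive K-L transform.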

References

Neural network, Wikipedia (http://en.wikipedia.org/wiki/Neural_network)

Ivan Vilović: An Experience in Image Compression Using Neural Networks

Robert D. Dony, Simon Haykin: Neural Network Approaches to Image Compression

Constantino Carlos Reyes-Aldasoro, Ana Laura Aldeco: Image Segmentation and Compression Using Neural Networks

J. Jiang: Image Compression with Neural Networks - A Survey

Questions??

Thank You
