M.E. CS-1 LAB MANUAL


SYLLABUS:


EX.NO: 1  FIR ADAPTIVE FILTER DESIGN USING LMS ALGORITHM

Date:

AIM

To design an FIR adaptive filter using the least mean square (LMS) algorithm.

APPARATUS USED/TOOLS REQUIRED

MATLAB 2009 Software

THEORY

There are a number of techniques for processing nonstationary signals using adaptive filters. These techniques have been used extensively in a variety of applications, including system identification, signal modeling, spectrum estimation, noise cancellation and adaptive equalization. FIR adaptive filters are designed to minimize the mean square error ξ(n) = E{|e(n)|²} between a desired process d(n) and an estimate of this process that is formed by filtering another process x(n). Since the gradient of ξ(n) involves the expectation E{e(n)x*(n)}, this approach requires knowledge of the statistics of x(n) and d(n) and is therefore of limited use in practice. If we instead replace the ensemble average E{|e(n)|²} with the instantaneous squared error |e(n)|², we obtain the LMS algorithm, a simple and often effective algorithm that does not require any ensemble averages to be known. The resulting coefficient update is w(n+1) = w(n) + μ e(n) x*(n), where μ is the step size. For wide-sense stationary processes, the LMS algorithm converges in the mean if the step size is positive and no larger than 2/λmax (where λmax is the maximum eigenvalue of the autocorrelation matrix Rx), and it converges in the mean square if the step size is positive and no larger than 2/tr(Rx). Several variants of the LMS algorithm are available. The first is the normalized LMS algorithm, which simplifies the selection of the step size needed to ensure that the coefficients converge. Next is the leaky LMS algorithm, which is useful in overcoming the problems that occur when the autocorrelation matrix of the input process is singular. There are also the block LMS algorithm and the sign algorithms, which are designed to increase the efficiency of the LMS algorithm. In the block LMS algorithm, the filter coefficients are held constant over blocks of length L, which allows a fast convolution algorithm to be used to compute the filter output. The sign algorithms, on the other hand, achieve their simplicity by replacing e(n) with sgn{e(n)}, or x(n) with sgn{x(n)}, or both. A lattice filter can also be used as an alternative structure for an adaptive filter. Due to the orthogonalization of the input process, the gradient adaptive lattice filter converges more rapidly than the LMS adaptive filter and tends to be less sensitive to the eigenvalue spread of the autocorrelation matrix of x(n).
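The normalized LMS variant mentioned above scales the step size by the energy of the current input vector. The listing below is a minimal sketch of that update, reusing the signal setup of the program that follows; the step size mu_n and the regularization constant delta are illustrative assumptions, not part of the lab program.

% Minimal sketch of the normalized LMS (NLMS) coefficient update
n = 0:150;
d = sin(0.125*pi*n);                    % desired signal
x = d + 0.75*rand(size(n));             % noisy observation
N = 21; mu_n = 0.5; delta = 1e-6;       % filter order, normalized step size, regularization (assumed)
h = zeros(1, N);
for k = N:length(x)
    x1 = x(k:-1:k-N+1);                 % most recent N input samples, newest first
    e  = d(k) - h*x1';                  % error against the desired sample
    h  = h + (mu_n/(delta + x1*x1')) * e * x1;   % energy-normalized coefficient update
end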

ALGORITHM

1. Generate the desired output signal and plot it.

2. Generate noise sequence and plot it.

3. Add the noise to the desired signal to get the observed signal.

4. Initialize the step size such that the mean square error is minimized.

5. Minimize the noise by updating the filter coefficients with the LMS update equation.

PROGRAM

clc; clear all; close all;
n = 0:150;
d = sin(.125*pi*n);                                % desired signal
subplot(2,2,1); plot(n,d); xlabel('n'); ylabel('d(n)'); title('desired signal');
v = .75*rand(size(n));                             % noise sequence
subplot(2,2,2); plot(n,v); xlabel('n'); ylabel('v(n)'); title('noise signal');
x = d + v;                                         % observed signal = desired + noise
subplot(2,2,3); plot(n,x); xlabel('n'); ylabel('x(n)'); title('observed signal');
N = 21;                                            % filter order
mu = 0.01;                                         % step size
M = length(x);
h = zeros(1,N);                                    % initial filter coefficients
for k = N:M
    x1 = x(k:-1:k-N+1);                            % most recent N input samples
    y = h*x1';                                     % filter output
    e = d(k) - y;                                  % error
    h = h + mu*e*x1;                               % LMS coefficient update
end
disp(h);
y1 = conv(x,h);                                    % filter the observed signal with the adapted coefficients
n1 = 0:150;
y1 = y1(1:151);
subplot(2,2,4); plot(n1,y1); xlabel('n'); ylabel('y(n)'); title('filtered signal');

OUTPUT

RESULT

EX.NO: 2  RECURSIVE LEAST SQUARES ALGORITHM

Date:

AIM:-

To simulate, in MATLAB, the construction of adaptive filters using the recursive least squares (RLS) algorithm.

SOFTWARE REQUIREMENT:- MATLAB

THEORY:-

The recursive least squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. This is in contrast to other algorithms, such as least mean squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS algorithm the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. However, this benefit comes at the cost of high computational complexity, and potentially poor tracking performance when the filter to be estimated (the "true system") changes.
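With forgetting factor λ (lamda in the program below), input vector u(n), desired sample d(n), weight vector w(n) and inverse correlation matrix P(n), the recursion carried out inside the for loop of the program can be summarized as

k(n) = P(n-1) u(n) / ( λ + u'(n) P(n-1) u(n) )

e(n) = d(n) - w'(n-1) u(n)

w(n) = w(n-1) + k(n) e(n)

P(n) = ( P(n-1) - k(n) u'(n) P(n-1) ) / λ

with P(0) initialized to δI for a large δ and w(0) set to the zero vector, exactly as in the program.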

PROCEDURE:-

Log on to Windows.

Open MATLAB R2008a.

In MATLAB R2008a, go to File -> New -> M-File in the menu bar.

Enter the code for the RLS algorithm. After completing the code, go to File -> Save (or Ctrl+S) and save the experiment as Filename.m. Then go to Debug -> Run (or F5) to execute the program.

Go to the command window in MATLAB and enter the input.

If there is any error, note the error message and correct the error.

If there is no error, the output is obtained.

The experiment is successfully completed.

BLOCK DIAGRAM:-

Output:-

Figure 1: SYSTEM OUTPUT

PROGRAM:-

clear all

close all

hold off

% Number of system points

N=2000;

inp = randn(N,1);

n = randn(N,1);

[b,a] = butter(2,0.25);

Gz = tf(b,a,-1);

sysorder = 10 ;
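% reference weights of the 'true system' (impulse response of the Butterworth filter above), used later only for the comparison plot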

h=[0.097631 0.287310 0.335965 0.220981 0.096354 0.017183 -0.015917 -0.020735 -0.014243 -0.006517 -0.001396 0.000856 0.001272 0.000914 0.000438 0.000108 -0.000044 -0.00008 -0.000058 -0.000029];

h=h(1:sysorder);

y = lsim(Gz,inp);

%add some noise

n = n * std(y)/(10*std(n));

d = y + n;

totallength=size(d,1);

%Take only 70 points for training ( N - sysorder = 80 - 10 = 70 )

N=80 ;

%begin of the algorithm

%forgetting factor

lamda = 0.9995 ;

%initial P matrix

delta = 1e10 ;

P = delta * eye (sysorder ) ;

w = zeros ( sysorder , 1 ) ;

for n = sysorder : N
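% u holds the sysorder most recent input samples, newest first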

u = inp(n:-1:n-sysorder+1) ;

phi = u' * P ;

k = phi'/(lamda + phi * u );

y(n)=w' * u;

e(n) = d(n) - y(n) ;

w = w + k * e(n) ;

P = ( P - k * phi ) / lamda ;

% Just for plotting

Recordedw(1:sysorder,n)=w;


end

%check of results

for n = N+1 : totallength

u = inp(n:-1:n-sysorder+1) ;

y(n) = w' * u ;

e(n) = d(n) - y(n) ;

end

hold on

plot(d)

plot(y,'r');

title('System output') ;

xlabel('Samples')

ylabel('True and estimated output')

figure

semilogy((abs(e))) ;

title('Error curve') ;

xlabel('Samples');

ylabel('Error value');

figure

plot(h, 'r+')

hold on

plot(w, '.')

legend('filter weights','Estimated filter weights');

title('Comparison of the filter weights and estimated weights') ;

figure

plot(Recordedw(1:sysorder,sysorder:N)');

title('Estimated weights convergence') ;

xlabel('Samples');

ylabel('Weights value');

axis([1 N-sysorder min(min(Recordedw(1:sysorder,sysorder:N)')) max(max(Recordedw(1:sysorder,sysorder:N)')) ]);

hold off

RESULT:-

EX.NO: 3  LINEAR BLOCK CODES

Date:

AIM

To generate a linear block code for data transmission and to identify errors in the received message.

APPARATUS USED/TOOLS REQUIRED

MATLAB 2009 software

FORMULA USED:

In matrix form,

C=UG

C: Codeword

U: Message vector

G: Generator matrix

THEORY

Block coding is a special case of error-control coding. Block coding techniques map a fixed number of message symbols to a fixed number of code symbols. A block coder treats each block of data independently and is a memoryless device. The Binary Linear Encoder block creates a binary linear block code using a generator matrix that you specify. If K is the message length of the code, then the Generator matrix parameter must have K rows. If N is the codeword length of the code, then the Generator matrix must have N columns.

The input must contain exactly K elements. If it is frame-based, then it must be a column vector. The output is a vector of length N.

The Binary Linear Decoder block recovers a binary message vector from a binary codeword vector of a linear block code.

The input must contain exactly N elements. If it is frame-based, then it must be a column vector. The output is a vector of length K.

The decoder tries to correct errors using the Decoding table parameter, which must be a 2^(N-K)-by-N binary matrix. The rth row of this matrix is the correction vector for a received binary codeword whose syndrome has decimal integer value r-1. The syndrome of a received codeword is its product with the transpose of the parity-check matrix.
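As an illustration of C = UG and of the syndrome check described above, the minimal sketch below encodes one message with the generator matrix used in the program of this experiment and verifies that a single-bit error produces a nonzero syndrome. It assumes the Communications Toolbox function gen2par; the message, error position and variable names are illustrative.

% Minimal sketch of C = UG and syndrome checking
U = [1 1 0 0];                                              % one k-bit message, k = 4
G = [1 0 0 0 1 0; 0 1 0 0 1 1; 0 0 1 0 0 1; 0 0 0 1 0 0];   % k-by-n generator matrix, n = 6
C = mod(U*G, 2)                                             % codeword C = UG over GF(2)
H = gen2par(G);                                             % parity-check matrix derived from G
r = xor(C, [0 0 0 0 1 0]);                                  % received word with an error in bit 5
S = mod(r*H', 2)                                            % nonzero syndrome flags the error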

ALGORITHM

1. Initialize code length n and message vector length k

2. Specify the k bit message vectors to be transmitted and the generator matrix of order k*n

3. Encode the message in linear format to get n bit code vector

4. Add a random noise to code vector

5. Decode the noisy code at receiver

6. Receive and display the message vectors, and identify those that were corrupted by noise.

PROGRAM

clc; clear all; close all;
n = 6, k = 4
msg = [1 1 0 0; 1 1 1 1; 0 1 1 0]                              % input messages
genmat = [1 0 0 0 1 0; 0 1 0 0 1 1; 0 0 1 0 0 1; 0 0 0 1 0 0]  % generator matrix
code = encode(msg, n, k, 'linear', genmat)                     % codewords C = UG (mod 2)
noisycode1 = mod(code + randerr(3, 6, [0 1]), 2)               % add at most one random bit error per codeword
[newmsg, err] = decode(noisycode1, n, k, 'linear', genmat)     % decode and count corrected errors
err_word = find(err ~= 0)                                      % codewords received in error

OUTPUT

n = 6

k= 4

msg =

1 1 0 0

1 1 1 1

0 1 1 0

genmat =

1 0 0 0 1 0

0 1 0 0 1 1

0 0 1 0 0 1

0 0 0 1 0 0

code =

1 1 0 0 0 1

1 1 1 1 0 0

0 1 1 0 1 0

noisycode1 =

1 1 0 0 0 1

1 1 1 1 0 1

0 1 1 0 1 0

newmsg =

1 1 0 0

1 1 0 1

0 1 1 0

err =

0

1

0

err_word =

2

RESULT

EX.NO: 4  CYCLIC CODES

Date:

AIM

To generate a cyclic block code for transmission and to detect the errors in the received message.

APPARATUS USED/TOOLS REQUIRED

MATLAB 2009 Software

THEORY

Block coding is a special case of error-control coding. Block coding techniques map a fixed number of message symbols to a fixed number of code symbols. A block coder treats each block of data independently and is a memoryless device.

Cyclic codes are a subset of linear block codes with the additional property that every cyclic shift of a codeword is also a codeword. If X = (X1, X2, ..., Xn) is a codeword, then its cyclic shift X' = (Xn, X1, ..., X(n-1)) is also a codeword. Cyclic codes have algebraic properties that allow a polynomial to determine the coding process completely. This so-called generator polynomial is a degree-(n-k) divisor of the polynomial x^n - 1.

The cyclpoly function produces generator polynomials for cyclic codes. Cyclpoly represents a generator polynomial using a row vector that lists the polynomial's coefficients in order of ascending powers of the variable.

Cyclic codes are usually easier to encode and decode than other linear block codes. The encoding operation can be performed using shift registers, and decoding is more practical due to the algebraic structure.
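The defining cyclic-shift property can be checked numerically. The sketch below is a minimal illustration, assuming the Communications Toolbox (cyclpoly, encode, de2bi): it lists all codewords of the (7,3) code used in this experiment and confirms that a shifted codeword is still a codeword.

% Minimal sketch: a cyclic shift of a codeword is again a codeword
n = 7; k = 3;
genpoly  = cyclpoly(n, k);                           % generator polynomial, ascending powers
allmsg   = de2bi(0:2^k-1, k);                        % all 2^k possible k-bit messages
allcodes = encode(allmsg, n, k, 'cyclic', genpoly);  % all codewords of the (7,3) code
c  = allcodes(4, :);                                 % pick one codeword
cs = circshift(c, [0 1]);                            % shift it cyclically by one position
ismember(cs, allcodes, 'rows')                       % returns 1: the shifted word is a codeword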

ALGORITHM:

1. Initialize cyclic code length n and message vector length k

2. Generate generator polynomial from a cyclic code of length n and message length k.

3. Specify the k bit message vectors to be transmitted

4. Encode the message in cyclic format to get n bit code vector

5. Add a random noise to code vector

6. Decode the noisy code at receiver

7. Receive and display the message vectors, and identify those that were corrupted by noise.

OUTPUT

n=7

k=3

genpoly =

1 0 1 1 1

msg =

1 1 0

1 1 1

0 1 0

code =

0 1 0 1 1 1 0

0 0 1 0 1 1 1

1 1 1 0 0 1 0

noise =

0 0 0 1 0 1 0

0 0 0 0 0 0 0

0 0 1 0 1 0 0

noisycode =

0 1 0 0 1 0 0

0 0 1 0 1 1 1

1 1 0 0 1 1 0

rxmsg =

1 0 1

1 1 1

1 1 0

PROGRAM

clc;

clear all;

close all;

n=7;

k=3;

genpoly = cyclpoly(7,3) % generator polynomial

msg = [1 1 0; 1 1 1; 0 1 0] % input message

code = encode(msg,n,k,'cyclic',genpoly)

noise = randerr(3,n,[0,2]) % 3-by-n noise matrix; each row contains either 0 or 2 bit errors

noisycode = xor(code,noise) % noise addition

[rxmsg,nerr] = decode(noisycode,n,k,'cyclic',genpoly)

chk=isequal(noisycode,code);

if chk==1

display ('no error in codes');

else display('there is an error in the code');

end

err_codes = find(nerr~=0)

nerr =

2

0

2

there is an error in the code

err_codes =

1

3

RESULT

EX.NO: 5  IMPLEMENTATION OF DISCRETE COSINE TRANSFORM

Date:

AIM

To simulate the construction of the discrete cosine transform with a MATLAB program.

APPARATUS USED/TOOLS REQUIRED

MATLAB 2009 Software

THEORY

The DCT is one of the most frequently used transforms for image compression. Its basis functions are given by

g(x,y,u,v) = h(x,y,u,v) = α(u) α(v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

where α(u) = √(1/N) for u = 0 and α(u) = √(2/N) for u = 1, 2, ..., N-1, and similarly for α(v).

Approximations of the picture are obtained by dividing the original image into subimages of size 8x8, representing each subimage using one of the transforms (DCT or DFT), truncating 50% of the resulting coefficients, and taking the inverse transform of the truncated coefficient arrays. The transform coefficient masking function is defined to be 0 for the coefficients that are truncated and 1 for those that are retained.

Compared to the other input-independent transforms, the DCT has the advantage of having been implemented in a single integrated-circuit package, and it minimizes the block-like appearance that occurs when the boundaries between subimages become visible. With the DFT, the boundary pixels of each subimage tend toward the mean value of the boundary discontinuity; the DCT reduces this effect because its implicit 2N-point periodicity does not inherently produce boundary discontinuities.
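The coefficient truncation described above can be illustrated with the short sketch below. This is not the lab program that follows (which uses a wavelet decomposition); it is a minimal illustration that applies the DCT to the whole image rather than to 8x8 subimages, and it assumes the Image Processing Toolbox (dct2, idct2) and its bundled test image cameraman.tif.

% Minimal sketch: keep the largest 50% of DCT coefficients and reconstruct
I   = im2double(imread('cameraman.tif'));
T   = dct2(I);                              % 2-D DCT of the image
Ts  = sort(abs(T(:)), 'descend');
thr = Ts(round(0.5*numel(Ts)));             % threshold keeping the largest 50% of coefficients
Tt  = T .* (abs(T) >= thr);                 % truncate (zero out) the remaining coefficients
Ir  = idct2(Tt);                            % inverse DCT of the truncated coefficient array
figure, subplot(1,2,1), imshow(I),  title('Original');
subplot(1,2,2), imshow(Ir), title('50% DCT coefficients retained');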

PROCEDURE:-

Log on to Windows.

Open MATLAB R2008a.

In MATLAB R2008a, go to File -> New -> M-File in the menu bar.

Enter the code for the discrete cosine transform. After completing the code, go to File -> Save (or Ctrl+S) and save the experiment as Filename.m. Then go to Debug -> Run (or F5) to execute the program.

Go to the command window in MATLAB and enter the input.

If there is any error, note the error message and correct the error.

If there is no error, the output is obtained. The experiment is successfully completed.

PROGRAM

%Image compression for color images

clear all;

close all;

clc;

% Reading an image file

[imagefile1 , pathname]= uigetfile('*.bmp;*.BMP;*.tif;*.TIF;*.jpg','Open An image');

if imagefile1 ~= 0

cd(pathname);

I=imread(char(imagefile1));

end;

X=I;

% inputting the decomposition level and name of the wavelet

n=input('Enter the decomposition level:');


wname = 'haar';

x = double(X);

TotalColors = 255;

map = gray(TotalColors);

x = uint8(x);

%Conversion of RGB to Grayscale

x = double(X);

xrgb = 0.2990*x(:,:,1) + 0.5870*x(:,:,2) + 0.1140*x(:,:,3);

colors = 255;

x = wcodemat(xrgb,colors);

map = pink(colors);

x = uint8(x);

% A wavelet decomposition of the image

[c,s] = wavedec2(x,n,wname);

% wdcbm2 for selecting level dependent thresholds

alpha = 1.5; m = 2.7*prod(s(1,:));

[thr,nkeep] = wdcbm2(c,s,alpha,m)

% Compression

[xd,cxd,sxd,perf0,perfl2] = wdencmp('lvd',c,s,wname,n,thr,'h');

disp('Compression Ratio');

disp(perf0);

% Decompression

R = waverec2(c,s,wname);

rc = uint8(R);

% Plot original and compressed images.

subplot(221), image(x);

colormap(map);

title('Original image')

subplot(222), image(xd);

colormap(map);

title('Compressed image')

% Displaying the results

xlab1 = ['2-norm rec.: ',num2str(perfl2)];

xlab2 = [' % -- zero cfs: ',num2str(perf0), ' %'];

xlabel([xlab1 xlab2]);

subplot(223), image(rc);

colormap(map);

title('Reconstructed image');

OUTPUT:-

Figure 1:- DECOMPOSITION LEVEL 4

Figure 2:- OUTPUT OF DISCRETE COSINE TRANSFORM

RESULT:-

EX.NO: 6  M-ary PHASE SHIFT KEYING

Date:

AIM

To obtain bit error performance curve for M-ary PSK for various values of M.

APPARATUS USED/TOOLS REQUIRED

MATLAB 2009 software

THEORY

Phase Shift Keying (PSK) is a method of digital communication in which the phase of a transmitted signal is varied to convey information. In M-ary or Multiple Phase Shift Keying (MPSK) there are more than two phases, usually four (0, +90, -90 and 180 degrees) or eight (0, +45, -45, +90, -90, +135, -135 and 180 degrees). If there are four phases (M = 4), the MPSK mode is called Quadrature Phase Shift Keying (QPSK). Multi-level modulation techniques permit high data rates within fixed bandwidth constraints. A convenient set of signals for M-ary PSK is,

si(t) = A cos(ωct + θi),   0 ≤ t ≤ T,   i = 1, 2, ..., M

where θi = 2π(i-1)/M and ωc is the carrier frequency.
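A minimal sketch for obtaining the bit error performance curves named in the aim is given below. It plots the theoretical BER of M-ary PSK over an AWGN channel for several values of M, assuming the Communications Toolbox function berawgn; the Eb/No range and the chosen values of M are illustrative.

% Minimal sketch: theoretical bit error rate of M-ary PSK over an AWGN channel
EbNo = 0:2:20;                          % Eb/No range in dB
M    = [2 4 8 16];                      % PSK orders to compare
ber  = zeros(length(M), length(EbNo));
for i = 1:length(M)
    ber(i,:) = berawgn(EbNo, 'psk', M(i), 'nondiff');   % theoretical BER for each M
end
figure; semilogy(EbNo, ber); grid on;   % one curve per value of M
xlabel('Eb/No (dB)'); ylabel('Bit error rate');
legend('M = 2', 'M = 4', 'M = 8', 'M = 16');
title('Bit error performance of M-ary PSK');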