Manual for BE Comp, Pune University
LAB MANUAL

For

COMPUTER LABORATORY - I

Subject Code: 410446

By: Prof. Helly Patel and Prof. Arun Hattarge

Laboratory Plan

Department: Computer                Academic Year: 2012-2013

Semester: I                         Subject: Computer Laboratory I

Class: BE                           Subject Code: 410446

Name of faculty: Helly H. Patel     Practical Hrs/week: 4 hrs/week

No. of students in batch: 19        Division & Batch number: BE - B1/B2/B3/B4

Sub-batch:    1            2            3            4
Roll nos.:    4301-4319    4320-4338    4339-4357    4358-4379

1. Case Study of LEX.
2. Write a LEX program for word count, character count, line count, digit count and space count.
3. Write a LEX program to count the number of commented and non-commented lines.
4. Write a program that reads an input file and replaces every occurrence of the word Username with the username of the person currently logged in.
5. Write a LEX program to get the cipher text by displaying the third alphabet ahead of each letter of the input text. It must support wrapping around to a on reaching z.
6. Implement a lexical analyzer for a subset of C.
7. Case Study of YACC.
8. Write a YACC program for a calculator.
9. Write a YACC program for NLP.
10. Write an ambiguous CFG to recognize an infix expression and implement a parser that recognizes the infix expression using YACC.
11. Write an ambiguous CFG to recognize IF statements and implement a parser that recognizes them and generates equivalent 3-address code.
12. Write a code to optimize the generated equivalent 3-address code of an infix expression.
13. Generate the target code for the generated equivalent 3-address code of an infix expression.

Assignment No 01

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Case Study of LEX.

Relevant Theory / Literature Survey:

In short, explain what LEX is.

1) LEX Specifications:

The structure of a LEX program is as follows:

Declaration Part
%%
Translation Rules
%%
Auxiliary Procedures

Declaration part: Contains the declarations for the variables required by the LEX program and the C program.

Translation rules: Contains rules of the form:

Reg. Expression { action1 }
Reg. Expression { action2 }
Reg. Expression { action3 }
...
Reg. Expression { action-n }

Auxiliary procedures: Contains all the procedures used in your C code.

2) Built-in functions, i.e. yylex(), yyerror(), yywrap(), etc.

1) yylex(): Invokes the lexical analyzer built from the given translation rules.
2) yyerror(): Displays an error message.
3) yywrap(): Used for taking input from more than one file.

3) Built-in variables, i.e. yylval, yytext, yyin, yyout, etc.

1) yylval: A global variable used to store the value of the current token.
2) yytext: A global variable that holds the text of the current token.
3) yyin: The input file pointer, used to change the input source. By default it points to stdin, i.e. the keyboard.
4) yyout: The output file pointer, used to change the output destination. By default it points to stdout, i.e. the monitor.

4) How to execute a LEX program:

To execute a LEX program, follow these steps:

1) Compile the *.l file with the lex command:

# lex *.l

This generates the file lex.yy.c containing your lexical analyzer.

2) Compile lex.yy.c with the cc command:

# cc -o out_file lex.yy.c -ll

Here the -o option creates an executable named out_file, and -ll links the LEX program with the lex library.

3) Execute the resulting file to see the output:

# ./out_file sample.c

This separates the tokens from the sample.c file and displays them in token-table format.

Conclusion: Thus we have studied LEX Specifications, built-in functions and built-in variables that will help us to write lexical analyzers.

Assignment No. 02

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Write a LEX program for word, character, line, digit and space count.

Relevant Theory / Literature Survey:

As seen in the case study of LEX, a LEX specification consists of three sections: a definition section, a rules section and a user subroutine section.

The three sections for the program counting words, characters, lines, digits and spaces are as follows:

Declaration Section:

%{
#include <stdio.h>
int word_count = 0, line_count = 0, char_count = 0, digit_count = 0, space_count = 0;
%}

The section bracketed in %{ and %} is C code that is copied as-is into the lexer. In the declaration section we have included the C header file stdio.h, declared the variables that store the word, digit, character, space and line counts, and initialized them to 0.

Translation Rules:

The rules section consists of the patterns (regular expressions) and actions that specify the lexical analyzer.

[0-9]      { digit_count++; }
[a-zA-Z]   { char_count++; }
[ \t]      { space_count++; word_count++; }
\n         { line_count++; word_count++; }
.          { char_count++; }

As seen above, on encountering any digit between 0-9 we increment the digit count. On encountering any alphabet we increment the character count. On encountering any white space we increment the space count and, since a space ends a word, the word count. Finally, on encountering a newline character we increment both the line count and the word count.

Auxiliary Procedure:

main()
{
    yylex();
    printf("Word count = %d\n", word_count);
    printf("Digit count = %d\n", digit_count);
    printf("Character count = %d\n", char_count);
    printf("White space count = %d\n", space_count);
    printf("Total line count = %d\n", line_count);
}

In the auxiliary procedure section we have our main() function. Inside main we first call yylex() and then print the values of all the count variables.
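For intuition, the same counting rules can be sketched as a hand-written C helper (count_text is a hypothetical stand-in for the generated lexer, not LEX output):

```c
#include <ctype.h>

/* Counters mirroring the LEX rules above. */
struct counts { int words, lines, chars, digits, spaces; };

struct counts count_text(const char *s)
{
    struct counts c = {0, 0, 0, 0, 0};
    for (; *s; s++) {
        if (isdigit((unsigned char)*s))
            c.digits++;                  /* [0-9]  { digit_count++; } */
        else if (*s == ' ' || *s == '\t') {
            c.spaces++; c.words++;       /* [ \t]  { space_count++; word_count++; } */
        } else if (*s == '\n') {
            c.lines++; c.words++;        /* \n     { line_count++; word_count++; } */
        } else
            c.chars++;                   /* [a-zA-Z] and . both count characters */
    }
    return c;
}
```

Running count_text over a small buffer reproduces the counts the LEX version prints after yylex() returns.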

Conclusion: Thus we have learned the LEX specification to count the total number of words, characters, digits, spaces and lines.

Assignment No 03

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment:Write a LEX program to count the number of commented and non-commented lines.

Relevant Theory / Literature Survey:

As seen in the case study of LEX, a LEX specification consists of three sections: a definition section, a rules section and a user subroutine section.

The three sections for the program counting commented and non-commented lines are as follows:

Declaration Section:

%{
#include <stdio.h>
int cl_count = 0, ncl_count = 0;
int c_flag = 0, c_closeflag = 0, sc_flag = 0;
%}

In the declaration section we have included the C header file stdio.h and declared the variables that store the commented and non-commented line counts, initialized to 0. Three flags, c_flag, sc_flag and c_closeflag, are used to implement the commented/non-commented line counting logic; they are also initialized to 0.

Translation Rules:

The rules section consists of patterns (regular expressions) and actions that specify the lexical analyzer. As we are mainly concerned with finding the commented and non-commented lines, we have to search for the comment start and end patterns as shown below:

"/*"   { if (sc_flag == 0) c_flag = 1; }

The above regular expression detects the start of a multiline comment. We set the comment flag; before setting it we check that it was not already claimed by a single-line comment.

"*/"   { c_flag = 0; c_closeflag = 1; }

The above pattern identifies the end of the multiline comment. As the action we reset c_flag, since the comment portion is finished, and set c_closeflag to 1. This is done to avoid an error in the commented line count (when the newline character is encountered) due to c_flag having been reset.

"//"   { if (c_flag == 0) { sc_flag = 1; cl_count++; } }

The above pattern indicates a single-line comment. We increment the commented line count and set the single-line comment flag to 1.

\n     { if (c_flag == 1) { cl_count++; }
         else if (c_closeflag == 1) { cl_count++; c_closeflag = 0; }
         else if (sc_flag == 1) { sc_flag = 0; }
         else { ncl_count++; } }

On encountering a newline character we do one of four things. We increment the commented line count if the multiline comment flag c_flag is set, indicating that this line is part of a multiline comment. Otherwise, we check whether the close-of-multiline-comment flag is set, so that the last line of a multiline comment is counted as commented even though c_flag has already been reset. Otherwise, if the single-line comment flag is set, we only reset it, since that line was already counted when "//" was matched. Lastly, we increment the non-commented line count by 1.

.      { }

On encountering any other input, we do nothing.

Auxiliary Procedure:

main()
{
    yylex();
    printf("Commented lines = %d\n", cl_count);
    printf("Non-commented lines = %d\n", ncl_count);
}

In the auxiliary procedure section we have our main() function. Inside main we first call yylex() and then print the values of both count variables.
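The same flag logic can be sketched as a hand-written C helper (count_comment_lines is a hypothetical stand-in, not the generated lexer; the single-line-comment line is counted once, when // is matched):

```c
#include <stddef.h>

/* count_comment_lines: fills cl/ncl with commented and non-commented
   line counts, using the c_flag / c_closeflag / sc_flag scheme above. */
void count_comment_lines(const char *s, int *cl, int *ncl)
{
    int c_flag = 0, c_closeflag = 0, sc_flag = 0;
    *cl = *ncl = 0;
    for (size_t i = 0; s[i]; i++) {
        if (s[i] == '/' && s[i+1] == '*' && !sc_flag) { c_flag = 1; i++; }
        else if (s[i] == '*' && s[i+1] == '/') { c_flag = 0; c_closeflag = 1; i++; }
        else if (s[i] == '/' && s[i+1] == '/' && !c_flag) { sc_flag = 1; (*cl)++; i++; }
        else if (s[i] == '\n') {
            if (c_flag) (*cl)++;                       /* inside a multiline comment */
            else if (c_closeflag) { (*cl)++; c_closeflag = 0; }  /* its last line */
            else if (sc_flag) sc_flag = 0;             /* // line already counted */
            else (*ncl)++;                             /* ordinary code line */
        }
    }
}
```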

Conclusion: Thus we have learned the Lex specification to get the count of total number of commented and non-commented lines.

Assignment No 04

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Write a LEX program that reads an input file and replaces every occurrence of the word Username with the username of the person currently logged in.

Relevant Theory / Literature Survey:

As seen in the case study of LEX, a LEX specification consists of three sections: a definition section, a rules section and a user subroutine section.

The three sections for this program are as follows:

Declaration Section:

%{
#include <stdio.h>
%}

In declaration section we have included the C header file stdio.h.

Translation Rules :The rules section consists of patterns (regular expressions) and actions that specify the lexical analyzer. We have to find the term username in the file and replace it with the currently logged in user. The specification for the same can be given as below:

username   { fprintf(yyout, "%s", getlogin()); }

Here, on encountering username in the input, we replace it with the currently logged-in user, obtained from the getlogin() function (declared in unistd.h).

Auxiliary Procedure:

main(int argc, char *argv[])
{
    if (argc > 1)
        yyin = fopen(argv[1], "r");
    yyout = fopen("output.txt", "w");
    yylex();
    if (argc > 1)
        fclose(yyin);
    fclose(yyout);
}

Here we take input from a file if one is passed as an argument at execution time; otherwise we take input directly from the terminal. We then call yylex(). Because the rules section writes to yyout, we must attach a file for the yyout FILE pointer to point to; if we do not, the output is displayed on the terminal.

At the end we close both the file pointers yyin and yyout.
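The substitution itself can be sketched in plain C (replace_word is a hypothetical helper; the LEX version obtains the replacement from getlogin() instead of a fixed string):

```c
#include <string.h>

/* replace_word: copy `in` to `out`, substituting every occurrence of
   `word` with `repl`. Mirrors what the single LEX rule does. */
void replace_word(const char *in, const char *word, const char *repl,
                  char *out, size_t outsz)
{
    size_t wlen = strlen(word);
    out[0] = '\0';
    while (*in) {
        if (strncmp(in, word, wlen) == 0) {
            /* matched the word: emit the replacement instead */
            strncat(out, repl, outsz - strlen(out) - 1);
            in += wlen;
        } else {
            /* any other character is copied through unchanged */
            size_t len = strlen(out);
            if (len + 1 < outsz) { out[len] = *in; out[len + 1] = '\0'; }
            in++;
        }
    }
}
```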

Conclusion: Thus we have learned how to write Lex specification to find a word and replace it with another.

Assignment No 05

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Write a LEX program to get the cipher text by displaying the third alphabet ahead of each letter of the input text. It must support wrapping around to a on reaching z.

Relevant Theory / Literature Survey:

As seen in the case study of LEX, a LEX specification consists of three sections: a definition section, a rules section and a user subroutine section.

The three sections for this program are as follows:

Declaration Section:

%{
#include <stdio.h>
char ch;
%}

In the declaration section we have included the C header file stdio.h. We declare a character ch, used as temporary storage while each character is converted into its equivalent cipher text.

Translation Rules:

The rules section consists of patterns (regular expressions) and actions that specify the lexical analyzer. The requirement is to detect each alphabet and replace it with the alphabet three positions ahead. Simply adding 3 works only for the characters a to w; for x, y and z the output should wrap around to a, b and c. Hence we divide the pattern recognizing the alphabets into two parts: 1) a to w and 2) x to z. The specifications are as follows:

[a-wA-W]   { ch = yytext[0]; ch = ch + 3; fprintf(yyout, "%c", ch); }

On encountering alphabets from a to w, we simply add 3 to the value, obtaining the third character ahead of the existing alphabet, and we write the cipher text character by character to the yyout file.

[x-zX-Z]   { ch = yytext[0]; ch = ch - 23; fprintf(yyout, "%c", ch); }

When we encounter x, y or z, the cipher text should be a, b or c respectively, since after reaching z we wrap around to a. The difference between x and a is 23, as it is between y and b, and between z and c. Hence on encountering x, y or z we subtract 23 from the existing character and write the result to the output file.

Auxiliary Procedure:

main(int argc, char *argv[])
{
    if (argc > 1)
        yyin = fopen(argv[1], "r");
    yyout = fopen("output.txt", "w");
    yylex();
    if (argc > 1)
        fclose(yyin);
    fclose(yyout);
}

Here we take input from a file if one is passed as an argument at execution time; otherwise we take input directly from the terminal. We then call yylex(). Because the rules section writes to yyout, we must attach a file for the yyout FILE pointer to point to; if we do not, the output is displayed on the terminal. At the end we close the file pointers.
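The shift-with-wrap-around rule pair can be sketched as a single plain-C function (caesar3 is a hypothetical helper, not the generated lexer):

```c
#include <ctype.h>

/* caesar3: shift a letter three positions ahead, wrapping z->c as the two
   LEX rules above do. Non-alphabetic input is passed through unchanged. */
char caesar3(char c)
{
    if ((c >= 'a' && c <= 'w') || (c >= 'A' && c <= 'W'))
        return c + 3;            /* [a-wA-W]: simply add 3 */
    if (isalpha((unsigned char)c))
        return c - 23;           /* [x-zX-Z]: wrap around, x -> a etc. */
    return c;
}
```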

Conclusion: Thus we have studied how to write Lex program to generate the cipher text for the input text.

Assignment No 06

Title of Assignment: Write a LEX program for a subset of C.

Relevant Theory / Literature Survey:

As seen in the case study of LEX, a LEX specification consists of three sections: a definition section, a rules section and a user subroutine section. The three sections for this program are as follows:

Declaration Section:

%{
#include <stdio.h>
#include <string.h>
%}

In the declaration section we declare all the header files. Here we also declare a structure variable that stores every encountered lexeme with its corresponding token, so that the structure can be used to display the list of all lexemes and tokens at the end. A static variable count maintains the number of entries made in the structure. A display function prints the contents of the structure (the full list of lexemes and tokens).

Translation Rules :The rules section consists of patterns (regular expressions) and actions that specify the lexical analyzer. The requirement is to detect all the tokens recognized by C language. Hence we have to write expressions/patterns for C keywords, operators, header files, in-built functions, comments, etc. The patterns used in program are as follows:

void|if |else |switch|case|default|do|while|include--is for keywords

int |char |float |double -- is for datatypes

"("|")"|"{"|"}"|";"|","|":"-- is for delimiters

"+"|"-"|"*"|"/"|"!"|"~"|"=="|"||" -- is for operators

-?[0-9]+ -- is for integers

(-?[0-9]*\.[0-9]+)([eE]-?[0-9]+)? -- is for floating point values

"=" -- is for the assignment operator

^printf|^scanf|^getch\(\)|^main\(\) --is for in-built functions

\"[^\"]*\" -- is for string literals (in-built messages)

[a-z]*\.h -- is for header files

[ \t]*"/*"[^"*/"]*\*\/ --is for multiline comment

[a-zA-Z_][a-zA-Z0-9_]* -- is for identifiers

[ \t]*"//"[^"\n"]* -- is for single line comment

. --is for all the invalid inputs

Auxiliary Procedure:

In the auxiliary procedure section, we directly call the yylex() function.
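The keyword alternatives above can be sketched as a plain-C lookup (is_keyword is a hypothetical helper; the real lexer classifies lexemes through the LEX patterns):

```c
#include <string.h>

/* is_keyword: check a lexeme against the same alternatives the
   keyword pattern lists. Returns 1 for a keyword, 0 otherwise. */
int is_keyword(const char *s)
{
    static const char *kw[] = { "void", "if", "else", "switch", "case",
                                "default", "do", "while", "include" };
    for (size_t i = 0; i < sizeof kw / sizeof kw[0]; i++)
        if (strcmp(s, kw[i]) == 0)
            return 1;
    return 0;
}
```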

Conclusion: Thus we have studied all the specifications for C language in a Lex program.

Assignment No 07

Title of Assignment: Case Study of YACC

Relevant Theory / Literature Survey:

Yacc stands for "yet another compiler-compiler". Generating a parser also requires a lexical analyzer; we have seen its construction in the previous assignments, so here we focus on YACC programs.

1) YACC Specifications:

The structure of a YACC program is as follows:

Declaration Section
%%
Translation Rule Section
%%
Auxiliary Procedures Section

Declaration Section: The definition section can include a literal block, C code copied verbatim to the beginning of the generated C file, usually containing declarations and #include lines. There may be %union, %start, %token, %type, %left, %right, and %nonassoc declarations. (See "%union Declaration," "Start Declaration," "Tokens," "%type Declarations," and "Precedence and Operator Declarations.") It can also contain comments in the usual C format, surrounded by /* and */. All of these are optional, so in a very simple parser the definition section may be completely empty.

Translation Rule Section: Contains the rules / grammar:

Production { action1 }
Production { action2 }
Production { action3 }
...
Production { action-n }

Auxiliary Procedure Section: Contains all the procedures used in your C code.

2) Built-in functions, i.e. yyparse(), yyerror(), yywrap(), etc.

1) yyparse(): The standard parse routine, used to invoke the syntax analyzer for the given translation rules.
2) yyerror(): The standard error routine, used to display any error message.
3) yywrap(): Used for taking input from more than one file.

3) Built-in declarations, i.e. %token, %start, %prec, %nonassoc, etc.

1) %token: Declares the tokens used in the grammar; tokens declared in the declaration section are identified by the parser. Eg.: %token NAME NUMBER
2) %start: Declares the start symbol of the grammar. Eg.: %start STMT
3) %left: Assigns left associativity to operators. Eg.: %left '+' '-' assigns left associativity to + and - with lowest precedence; %left '*' '/' assigns left associativity to * and / with highest precedence.
4) %right: Assigns right associativity to operators. Eg.: %right '+' '-' assigns right associativity to + and - with lowest precedence; %right '*' '/' assigns right associativity to * and / with highest precedence.
5) %nonassoc: Declares an operator to be non-associative. Eg.: %nonassoc UMINUS

6) %prec: Tells the parser to use the precedence of the given token for this rule. Eg.: %prec UMINUS
7) %type: Defines the value type of a token or of a non-terminal used in the productions of the .y file. Eg.: %type exp

4) How to execute YACC and LEX programs:

To execute a YACC program, follow these steps:

1) Compile the *.y file with yacc:

# yacc -d *.y

This generates the files y.tab.c and y.tab.h.

2) Compile the *.l file with the lex command:

# lex *.l

This generates the file lex.yy.c containing your lexical analyzer.

3) Compile and link lex.yy.c and y.tab.c with the cc command:

# cc -o out_file lex.yy.c y.tab.c -ll

Here the -o option creates an executable named out_file, and -ll links the LEX and YACC programs with the lex library.

4) Execute the resulting file to see the output:

# ./out_file

This asks for input from the keyboard, performs the task, and displays the output.

Conclusion: Thus we have studied YACC specifications, built-in functions and built-in declarations, which will help us write syntax analyzers.

Assignment No 08

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment:Write a YACC program for Calculator.

Relevant Theory / Literature Survey: As studied in the case study, a parser needs tokens as input. Hence we have to write a code for Lexical Analyzer. Therefore, whenever we write a YACC file we have to create a LEX file that will recognize the tokens and pass them to the parser which will match the tokens with the productions and thus perform syntax check.Let us see both LEX and YACC specification for writing a program for calculator.

LEX Specification: (lex.l file)

1) Declaration Section:

%{
#include "y.tab.h"
#include <stdlib.h>
extern int yylval;
%}

Here we include the header file y.tab.h, generated while compiling the .y file. We also include stdlib.h, which declares the function atoi() (it converts a string to an integer). Lastly, when the lexical analyzer passes a token to the parser, it can also pass a value for the token; to pass a value the parser can use, the lexical analyzer stores it in the variable yylval. Before storing a value in yylval we must declare its data type; in our program we perform mathematical operations on the input, hence yylval is declared as an integer.

2) Rules Section:

[0-9]+   { yylval = atoi(yytext); return NUMBER; }
[ \t]    ;                 /* ignore white space */
\n       return 0;         /* logical EOF */
.        return yytext[0];

In the rules section we match the pattern for numbers and pass the token NUMBER to the parser. The matched string is stored in yytext; since it is a string, we convert it to an integer with atoi(). We ignore spaces and tabs, and every other input character is passed to the parser as-is.

YACC Specification: (yacc.y file)

1) Declaration Section:

%token NUMBER
%left '+' '-'
%left '/' '*'
%right '^'
%nonassoc UMINUS

In the declaration section we declare all the variables we will be using throughout the program and include the necessary files. Apart from that, we declare the tokens recognized by the parser; as this is a parser specification for a calculator, there is only one token, NUMBER. To deal with the ambiguous grammar, we specify the associativity and precedence of the operators: +, -, * and / are left associative, ^ is right associative, and unary minus is non-associative. Precedence increases as we move down the declarations, so + and - have the lowest precedence and unary minus the highest.

2) Rules Section:

The rules section consists of the productions that perform the operations. One example is as follows:

expression : expression '+' expression   { $$ = $1 + $3; }
           | NUMBER                      { $$ = $1; }
           ;
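For intuition, the way each reduction computes $$ from $1 and $3, with * and / binding tighter than + and -, can be sketched as a hand-written C evaluator (a simplified stand-in for the YACC-generated parser; it covers only the four binary operators):

```c
#include <ctype.h>

/* Tiny recursive-descent evaluator over integers and + - * / ( ),
   mirroring how each reduction computes $$ from its sub-expressions. */
static const char *p;

static int expr(void);

static int factor(void)
{
    if (*p == '(') { p++; int v = expr(); p++; return v; }  /* '(' expr ')' */
    int v = 0;
    while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
    return v;                                               /* NUMBER */
}

static int term(void)
{
    int v = factor();
    while (*p == '*' || *p == '/') {        /* higher precedence, like %left '*' '/' */
        char op = *p++;
        int r = factor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

static int expr(void)
{
    int v = term();
    while (*p == '+' || *p == '-') {        /* lower precedence, like %left '+' '-' */
        char op = *p++;
        int r = term();
        v = (op == '+') ? v + r : v - r;    /* like $$ = $1 + $3 */
    }
    return v;
}

int eval_expr(const char *s) { p = s; return expr(); }
```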

When a NUMBER token is returned by the lexical analyzer, it is reduced to an expression and its value is assigned to the expression non-terminal. When an addition happens, the values of the two sub-expressions are added and assigned to the expression that results from the reduction.

3) Auxiliary Function Section:

In main we just call the function yyparse(). We also have to define the function yyerror(), which is called when there is a syntax error.

Conclusion: Thus we have studied the specifications for the lexical analyzer and syntax analyzer for a calculator.

Assignment No 09

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Write a YACC program for natural language processing.

Relevant Theory : As studied in the case study, a parser needs tokens as input. Hence we have to write a code for Lexical Analyzer. Therefore, whenever we write a YACC file we have to create a LEX file that will recognize the tokens and pass them to the parser which will match the tokens with the productions and thus perform syntax check.

Let us see both the LEX and YACC specifications for a natural language program. Here we need to know whether the input is a simple sentence or a compound sentence. Hence in the LEX specification we have to write patterns that recognize the basic language tokens, i.e. noun, pronoun, verb and conjunction, and pass them to the parser on recognition. But the vocabulary of nouns, pronouns, verbs and conjunctions is very large, and it would not be feasible to write patterns for all of them.

Hence we do the following:

i. When we execute the program, the user can choose either the data entry state or the analysis phase. Data entry means that whatever the user enters is saved in our dictionary (a data structure) with a type of noun, pronoun, verb or conjunction.

ii. In the analysis phase, the user's input is checked against the existing database entries (i.e. structure entries), and if the input text is present, the corresponding token type is passed to the parser.

LEX Specification: (lex.l file)

1) Declaration Section:

%{
#include "y.tab.h"
#include <stdio.h>
#include <string.h>

struct sym
{
    char name[30];
    int type;
} ST[30];

void add(char *n, int t);
int search(char *a);

int ptr = 0, state = 0, type;
/* type: 0 = noun, 1 = pronoun, 2 = verb, 3 = conjunction */
%}

Here we include the header files. We also declare the structure in which the input text is stored along with its type: as seen above, type 0 is for noun, 1 for pronoun, 2 for verb and 3 for conjunction. We declare two functions as well: one adds text to the database, and the other searches the database for text stored by the first and returns its type.

2) Rules Section:

^NAM    { state = 1; type = 0; }
^PRO    { state = 1; type = 1; }
^VERB   { state = 1; type = 2; }
^CON    { state = 1; type = 3; }

According to the above rules, if the input line starts with NAM, PRO, VERB or CON, the state changes to data entry, i.e. the following text is stored in the database with the corresponding type.

[\n]    { if (state == 0) return 0; state = 0; }

The above pattern recognizes the newline character and resets the state to 0, i.e. the analysis state.

3) Auxiliary Function Section:

Here we define the two functions:

i. int search(char *a): Searches the database for the passed string and returns its type if an entry exists; otherwise it returns -1.

ii. void add(char *n, int t): Adds the string with its corresponding type to the database.
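The two dictionary functions can be sketched in plain C over the same fixed-size table (a sketch of the declarations above, with assumed bounds handling):

```c
#include <string.h>

/* Fixed-size symbol table used by the NLP lexer sketch. */
struct sym { char name[30]; int type; };

static struct sym ST[30];
static int ptr = 0;                 /* next free slot */

/* add: store a word with its type (0 noun, 1 pronoun, 2 verb, 3 conj). */
void add(const char *n, int t)
{
    if (ptr < 30) {
        strncpy(ST[ptr].name, n, sizeof ST[ptr].name - 1);
        ST[ptr].name[sizeof ST[ptr].name - 1] = '\0';
        ST[ptr].type = t;
        ptr++;
    }
}

/* search: return the stored type of a word, or -1 if it is unknown. */
int search(const char *a)
{
    for (int i = 0; i < ptr; i++)
        if (strcmp(ST[i].name, a) == 0)
            return ST[i].type;
    return -1;
}
```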

YACC Specification: (yacc.y file)

1) Declaration Section:

%token NAM PRO VERB CON

In the declaration section all the necessary files are included, and the tokens are declared as seen above.

2) Rules Section:

The rules section has the productions that recognize simple and compound sentences. When the input is reduced to a simple or compound sentence, we print which one it was.

expr : simple_sent     { printf("Simple sentence\n"); }
     | compound_sent   { printf("Compound sentence\n"); }
     ;

compound_sent : simple_sent CON simple_sent
              | compound_sent CON simple_sent
              ;

simple_sent : sub VERB sub
            ;

sub : NAM
    | PRO
    ;

3) Auxiliary Function Section:

In main we just call the function yyparse(). We also have to define the function yyerror(), which is called when there is a syntax error.

Conclusion: Thus we have studied the specifications for the lexical analyzer and syntax analyzer for a natural language program.


Assignment No 10

NOTE: Code written in RED color must be pasted from your program.

Title of Assignment: Write a YACC program to generate 3-address code for an infix expression recognized by an ambiguous CFG.

Relevant Theory: Our requirement is to generate 3-address code while parsing the input. Here we recognize an infix expression and generate the corresponding three-address code: we write productions, and in the actions associated with those productions we emit the three-address code.

Rules for three-address code:

1) Only one operator is allowed per instruction, apart from the assignment operator =.
2) The result of each three-address instruction must be stored in a temporary variable.
3) An instruction can have fewer than 3 operands.

Let us see both LEX and YACC specification for writing a program for generating the 3-address code.

LEX Specification: (lex.l file)

As we are generating three-address code, we are concerned with variable names rather than the values of the variables. Hence, on encountering any variable, we pass the variable name along with the token. Since a name is passed with the token, the type of the token value must be a string (it is integer by default); this type change is made in the parser, i.e. in the YACC specification file.

%{
#include "y.tab.h"
#include <stdio.h>
#include <string.h>
%}

%%

[a-zA-Z]+   { strcpy(yylval.sym, yytext); return ID; }
[0-9]+      { strcpy(yylval.sym, yytext); return NUMBER; }
[ \t]+      { ; }
\n          { }
.           { return yytext[0]; }

%%

As seen above, the header files are included in the declaration section. In the rules section we write the patterns that recognize variables and integers. On encountering either, we copy the matched text into yylval's string member sym and pass the token ID (for identifiers) or NUMBER (for numbers).

YACC Specification: (yacc.y file)

1) Declaration Section:

In the yacc.y file we create a structure that stores the generated three-address code:

struct T
{
    char op1[10];
    char op2[10];
    char opa;
    char result[10];
} Code[50];

Now we have to change the type of the tokens to string, so we declare the type in the declaration section as follows:

%union
{
    char sym[10];
}

Now the type sym can be assigned to any token or non-terminal as follows:

%token ID NUMBER
%type <sym> expr statement

2) Rules Section:

f_statement : f_statement statement
            | statement
            ;

statement : expr '=' expr ';'   { strcpy($$, AddToTable($1, $3, '=')); }
          | expr ';'
          ;

expr : expr '+' expr   { strcpy($$, AddToTable($1, $3, '+')); }
     | expr '-' expr   { strcpy($$, AddToTable($1, $3, '-')); }
     | expr '*' expr   { strcpy($$, AddToTable($1, $3, '*')); }
     | expr '/' expr   { strcpy($$, AddToTable($1, $3, '/')); }
     | '(' expr ')'    { strcpy($$, $2); }
     | ID              { strcpy($$, $1); }
     | NUMBER          { strcpy($$, $1); }
     ;

At each infix production we call the AddToTable function, which adds operand 1, operand 2 and the operator to the structure that stores the generated 3-address code, and auto-generates a temporary variable to hold the result of the operation.

For an assignment operation, only the 1st operand is filled with the RHS whereas the result variable is filled with the LHS.

3) Auxiliary Function Section:

Here main() is written; it calls the yyparse() function, and once parsing is done it calls the threeaddresscode() function, which simply displays the generated 3-address code stored in the Code structure during parsing.
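The AddToTable helper the actions call can be sketched in plain C (a sketch under the assumptions stated above: fresh temporaries t0, t1, ... for operator instructions, and for = the RHS in op1 and the LHS in result):

```c
#include <stdio.h>
#include <string.h>

/* Same layout as the struct T / Code[50] declared in the .y file. */
struct T { char op1[10]; char op2[10]; char opa; char result[10]; };

static struct T Code[50];
static int ncode = 0;

/* AddToTable: record one three-address instruction and return the name
   of the variable holding its result. */
const char *AddToTable(const char *a, const char *b, char opa)
{
    struct T *t = &Code[ncode];
    memset(t, 0, sizeof *t);
    t->opa = opa;
    if (opa == '=') {
        /* a = b: op1 holds the RHS, result holds the LHS */
        strncpy(t->op1, b, 9);
        strncpy(t->result, a, 9);
    } else {
        /* t_n = a op b: auto-generate a fresh temporary for the result */
        strncpy(t->op1, a, 9);
        strncpy(t->op2, b, 9);
        snprintf(t->result, sizeof t->result, "t%d", ncode);
    }
    ncode++;
    return t->result;
}
```

For the input a = b + c; the parser would first reduce b + c, producing t0 = b + c, and then the assignment, producing a = t0.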

Conclusion: Thus we have successfully studied how to generate 3-address code for the infix expressions by specifying proper semantic actions.