
EC2303-COMPUTER ARCHITECTURE AND ORGANIZATION

QUESTION BANK

UNIT-II

1. What are the disadvantages in using a ripple carry adder? (NOV/DEC 2006)

The main disadvantage of a ripple carry adder is its time delay: each stage must wait for the carry from the previous stage, so the worst-case delay grows linearly with the number of bits.

2. Draw the full adder circuit (AUC NOV’07)

3. Define temporal expansion. (AUC APR’08)

Temporal expansion uses one copy of an m-bit ALU chip, in the manner of a serial adder, to perform an operation on km-bit words in k consecutive steps (clock cycles). In each step the ALU processes a separate m-bit slice of each operand. This processing is also called multicycle or multiple-precision processing.

4. Define underflow and overflow. (AUC APR’08, APR’11)

Overflow: In single precision, if the number requires an exponent greater than +127, or in double precision an exponent greater than +1023, to represent its normalized form, overflow occurs.

Underflow: In single precision, if the number requires an exponent less than -126, or in double precision an exponent less than -1022, to represent its normalized form, underflow occurs.

5. Define spatial expansion. (AUC APR’08)

k copies of an m-bit ALU are connected in the manner of a ripple carry adder to form a single ALU capable of processing km-bit words directly. The resulting array is called a bit-sliced ALU because each component ALU concurrently processes a separate m-bit "slice" of each km-bit operand.

6. Define coprocessor (AUC NOV’11)

A coprocessor is a separate instruction-set processor, i.e. it has its own instruction set supporting special complex functions. It is closely coupled to the CPU, and its instructions and registers are direct extensions of the CPU's.

7. What is carry look ahead adder? (AUC APR’11)

A carry-lookahead adder (CLA) is a type of adder used in digital logic. It improves speed by reducing the amount of time required to determine the carry bits, and can be contrasted with the simpler, but usually slower, ripple carry adder. The carry-lookahead adder calculates one or more carry bits before the sum, which reduces the wait time to calculate the result of the larger-value bits.

8. What are the two approaches to reduce the delay in the adders? (AUC NOV’12)

To use the fastest electronic technology for implementing the ripple carry adder.

To use an augmented logic-gate network structure.

9. What are two operations to speed up the multiplication operation? (AUC NOV’12)

The two techniques used for speeding up the multiplication process are

1) Bit-pair recoding (modified Booth algorithm)

2) Carry-save addition of summands.

10. What is ripple-carry adder?

A cascaded connection of n full adder blocks can be used to add two n-bit numbers. Since the carries must propagate, or ripple, through the cascade, the configuration is called an n-bit ripple carry adder.

11. What are the advantages of Booth algorithm?

1. It handles both positive and negative multipliers uniformly.

2. It achieves some efficiency in the number of additions required when the multiplier has a few large blocks of 1s.

12. List out the rules for mul/div of floating point numbers?

o Multiply rule:

1. Add the exponents and subtract 127.

2. Multiply the mantissas and determine the sign of the result.

3. Normalise the resulting value, if necessary.

o Divide rule:

1. Subtract the exponents and add 127.

2. Divide the mantissas and determine the sign of the result.

3. Normalise the resulting value, if necessary.

(A small sketch of the multiply rule applied to raw bit fields follows below.)
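The following is a minimal Python sketch (not part of the original question bank) of the multiply rule applied to the raw IEEE 754 single-precision fields; the helper names split_fields and f32_multiply_fields are illustrative only, and zero/subnormal inputs and rounding are ignored.

import struct

def split_fields(x):
    # Return (sign, biased exponent, 23-bit fraction) of x treated as a float32.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def f32_multiply_fields(a, b):
    sa, ea, fa = split_fields(a)
    sb, eb, fb = split_fields(b)
    sign = sa ^ sb                      # determine the sign of the result
    exp  = ea + eb - 127                # add the exponents and subtract 127
    ma   = (1 << 23) | fa               # restore the implied leading 1 bits
    mb   = (1 << 23) | fb
    prod = ma * mb                      # multiply the mantissas
    if prod >> 47:                      # normalise: a product in [2, 4) needs one right shift
        prod >>= 1
        exp += 1
    frac = (prod >> 23) & 0x7FFFFF      # drop the implied 1 and truncate (no rounding)
    return sign, exp, frac

print(f32_multiply_fields(3.0, 2.5))    # (0, 129, ...) i.e. 1.875 x 2^2 = 7.5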

13. What is the principle of Booth multiplication?

Booth multiplication is nothing but the addition of properly shifted multiplicand patterns. It is carried out by the following steps:

a) Start from the LSB. Check each bit one by one.

b) Change the first 1 to -1.

c) Skip all succeeding 1s (record them as zeros) till you see a zero, and change this zero to 1.

d) Continue to look for the next 1 without disturbing the zeros, and proceed using the same rules.

14. What is data path?

A processor consists of the data path unit and control unit. Data path unit performs

the arithmetic and logic operations.

15. Write the format for floating point numbers in IEEE single-precision format.

Sign (1 bit) | Biased exponent (8 bits) | Mantissa/fraction (23 bits)

PART-B

1. Draw the diagram of a carry look ahead adder and explain the carry look ahead adder principle.

(AUC NOV’06, APR’08,NOV’07)

Design of Carry Look ahead Adders :

To reduce the computation time, there are faster ways to add two binary numbers by using carry

lookahead adders. They work by creating two signals, P and G, known as the carry propagate and carry generate signals. The carry propagate signal is propagated to the next level, whereas the carry generate signal is used to generate the output carry regardless of the input carry. The block diagram of

a 4-bit Carry Lookahead Adder is shown here below -

The number of gate levels for the carry propagation can be found from the circuit of full

adder. The signal from input carry Cin to output carry Cout requires an AND gate and an OR

gate, which constitutes two gate levels. So if there are four full adders in the parallel adder, the

output carry C5 would have 2 x 4 = 8 gate levels from C1 to C5. For an n-bit parallel adder, there are 2n gate levels for the carry to propagate through.

Design Issues :

The corresponding Boolean expressions are given here to construct a carry look ahead adder.

In the carry-lookahead circuit we need to generate the two signals, carry propagate (P) and carry generate (G):

Pi = Ai ⊕ Bi

Gi = Ai · Bi

The output sum and carry can be expressed as

Sumi = Pi ⊕ Ci

Ci+1 = Gi + ( Pi · Ci)

Having these we could design the circuit. We can now write the Boolean function for the carry

output of each stage and substitute for each Ci its value from the previous equations:

C1 = G0 + P0 · C0

C2 = G1 + P1 · C1 = G1 + P1 · G0 + P1 · P0 · C0

C3 = G2 + P2 · C2 = G2 + P2 · G1 + P2 · P1 · G0 + P2 · P1 · P0 · C0

C4 = G3 + P3 · C3 = G3 + P3 · G2 + P3 · P2 · G1 + P3 · P2 · P1 · G0 + P3 · P2 · P1 · P0 · C0
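A small Python sketch (added here for illustration, not part of the original answer) that evaluates the 4-bit carry-lookahead equations above directly; the list-of-bits encoding (index 0 = least significant bit) and the function name are my own choices.

def cla_4bit(a_bits, b_bits, c0=0):
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # Pi = Ai xor Bi (carry propagate)
    g = [a & b for a, b in zip(a_bits, b_bits)]   # Gi = Ai . Bi  (carry generate)
    # Expanded carry equations: every carry depends only on P, G and C0.
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    c = [c0, c1, c2, c3]
    s = [p[i] ^ c[i] for i in range(4)]           # Sumi = Pi xor Ci
    return s, c4                                  # sum bits (LSB first) and carry-out C4

print(cla_4bit([0, 1, 1, 0], [1, 1, 1, 0]))       # 6 + 7 = 13 -> ([1, 0, 1, 1], 0)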

2. Explain the booth algorithm for multiplication of signed two’s complement numbers.

(AUC NOV 07’12)

Booth's multiplication algorithm is an algorithm which multiplies 2 signed integers in 2's

complement. The algorithm is depicted in the following figure with a brief description.

This approach uses fewer additions and subtractions than more straightforward

algorithms.

The multiplicand and multiplier are placed in the M and Q registers respectively. A 1-bit register is placed logically to the right of the LSB (least significant bit) Q0 of the Q register. This is denoted by Q-1. A and Q-1 are initially set to 0. Control logic checks the two bits Q0 and Q-1. If the two bits are the same (00 or 11) then all of the bits of A, Q, Q-1 are shifted 1 bit to the right. If they are

not the same and if the combination is 10 then the multiplicand is subtracted from A and if the

combination is 01 then the multiplicand is added with A. In both the cases results are stored in

A, and after the addition or subtraction operation, A, Q, Q-1 are right shifted.

The shifting is the arithmetic right shift operation where the left most bit namely, An-1 is not only

shifted into An-2 but also remains in An-1. This is to preserve the sign of the number in A and Q.

The result of the multiplication will appear in the A and Q.
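Before the hardware design, a minimal Python sketch of the register-level algorithm just described (my own illustration, assuming n-bit two's-complement operands; the most negative multiplicand, e.g. -8 for n = 4, is not handled because it needs an extra bit):

def booth_multiply(multiplicand, multiplier, n):
    mask = (1 << n) - 1
    M = multiplicand & mask
    A, Q, Q_1 = 0, multiplier & mask, 0          # A and Q-1 are initially 0
    for _ in range(n):
        pair = ((Q & 1) << 1) | Q_1              # examine the two bits Q0 and Q-1
        if pair == 0b01:                         # 01: add the multiplicand to A
            A = (A + M) & mask
        elif pair == 0b10:                       # 10: subtract the multiplicand from A
            A = (A - M) & mask
        # arithmetic right shift of the combined A, Q, Q-1 register
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = ((A >> 1) | (A & (1 << (n - 1)))) & mask   # sign bit of A is preserved
    product = (A << n) | Q                       # the result appears in A and Q
    if product & (1 << (2 * n - 1)):             # reinterpret as a signed 2n-bit value
        product -= 1 << (2 * n)
    return product

print(booth_multiply(7, -3, 4))                  # -21
print(booth_multiply(-5, 6, 4))                  # -30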

Design Issues:

Booth's algorithm can be implemented in many ways. This experiment is designed using a

controller and a data path. The operations on the data in the data path are controlled by the

control signal received from the controller. The data path contains registers to hold multiplier,

multiplicand, intermediate results, data processing units like ALU, adder/subtractor etc.,

counter and other combinational units.

Following is the schematic diagram of the Booth's multiplier which multiplies two 4-bit numbers

in 2's complement of this experiment. Here the adder/subtractor unit is used as data

processing unit, Q, A are 4-bit and Q-1 is a 1-bit register. M holds the multiplicand, Q holds the

multiplier, A holds the results of adder/subtractor unit. The counter is a down counter which

counts the number of operations needed for the multiplication.

The data flow in the data path is controlled by the five control signals generated from the

controller. These signals are load (to load data in registers), add (to initiate addition operation),

sub (to initiate subtraction operation), shift (to initiate arithmetic right shift operation), dc (this is

to decrement counter). The controller generates the control signals according to the input

received from the data path. Here the inputs are the least significant Q0 bit of Q register, Q-1

bit and count bit from the down counter.

3. Explain the representation of floating point numbers in detail (AUC MAY’07)

The following provides a brief introduction to the floating point format.

The following description explains terminology and primary details of IEEE 754 binary floating

point representation. The discussion is confined to the single and double precision formats.

Usually, a real number in binary will be represented in the following format,

Im Im-1 … I2 I1 I0 . F1 F2 … Fn-1 Fn

where each I and F digit is either 0 or 1, for the integer and fraction parts respectively.

A finite number can also be represented by four integer components: a sign (s), a base (b), a significand (m), and an exponent (e). The numerical value of the number is then evaluated as

(-1)^s x m x b^e, where m < |b|

Depending on the base and the number of bits used to encode the various components, the IEEE 754 standard defines five basic formats. Among the five formats, the binary32 and the binary64

formats are single precision and double precision formats respectively in which the base is 2.

Table – 1 Precision Representation

Precision Base Sign Exponent Significand

Single precision 2 1 8 23+1

Double precision 2 1 11 52+1

Single Precision Format:

As mentioned in Table 1 the single precision format has 23 bits for significand (1 represents

implied bit, details below), 8 bits for exponent and 1 bit for sign.

For example, the rational number 9÷2 can be converted to single precision float format as

following,

9(10) ÷ 2(10) = 4.5(10) = 100.1(2)

The result is said to be normalized if it is represented with a leading 1 bit, i.e. 1.001(2) x 2^2. (Similarly, when the number 0.000000001101(2) x 2^3 is normalized, it appears as 1.101(2) x 2^-6). Omitting this implied 1 on the left extreme gives us the mantissa of the float number. A normalized number

provides more accuracy than the corresponding de-normalized number. The implied most significant bit gives an even more accurate significand (23 + 1 = 24 bits). Floating point numbers are normally represented in this normalized form.

The subnormal numbers fall into the category of de-normalized numbers. The subnormal

representation slightly reduces the exponent range and can’t be normalized since that would

result in an exponent which doesn’t fit in the field. Subnormal numbers are less accurate, i.e.

they have less room for nonzero bits in the fraction field, than normalized numbers. Indeed, the

accuracy drops as the size of the subnormal number decreases. However, the subnormal representation is useful in filling the gaps of the floating point scale near zero.

In other words, the above result can be written as (-1)^0 x 1.001(2) x 2^2, which yields the integer components s = 0, b = 2, significand (m) = 1.001, mantissa = 001 and e = 2. The corresponding single precision floating point number can be represented in binary as shown below,

where the exponent field is supposed to be 2, yet is encoded as 129 (127 + 2), called the biased exponent. The exponent field is kept in plain binary format, so negative exponents would otherwise require an encoding of their own (like sign magnitude, 1's complement, 2's complement, etc.). The biased exponent is used instead to represent negative exponents. The biased exponent has advantages over the other negative representations when performing bitwise comparison of two floating point numbers.

A bias of (2^(n-1) - 1), where n is the number of bits used in the exponent field, is added to the exponent (e) to get the biased exponent (E). So, the biased exponent (E) of a single precision number can be obtained as

E = e + 127

The range of the exponent in single precision format is -126 to +127. Other values are reserved for special symbols. The following diagram shows the floating point structure.
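As a quick check of the 9/2 example above, the following Python snippet (illustrative only) extracts the three fields of 4.5 stored as a float32 and shows the biased exponent 129 and the mantissa 001...:

import struct

bits = struct.unpack('>I', struct.pack('>f', 4.5))[0]   # raw 32-bit pattern of float32 4.5
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF          # biased exponent field
fraction = bits & 0x7FFFFF              # 23-bit mantissa field (implied 1 not stored)

print(sign, exponent, format(fraction, '023b'))
# prints: 0 129 00100000000000000000000   i.e. s = 0, E = 127 + 2 = 129, mantissa = 001...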

4. Design a 4-bit binary adder/ subtractor and explain its functions. (AUC APR’08)

To be able to perform arithmetic, you must first be familiar with binary numbers; a few helping examples are given below.

Half Adder

Let's start with a half (single-bit) adder where you need to add single bits together and get

the answer. The way you would start designing a circuit for that is to first look at all of the

logical combinations. You might do that by looking at the following four sums:

0 0 1 1

+ 0 + 1 + 0 + 1

0 1 1 10

That looks fine until you get to 1 + 1. In that case, you have a carry bit to worry about. If you

don't care about carrying (because this is, after all, a 1-bit addition problem), then you can

see that you can solve this problem with an XOR gate. But if you do care, then you might

rewrite your equations to always include 2 bits of output, like this:

0 0 1 1

+ 0 + 1 + 0 + 1

00 01 01 10

Now you can form the logic table:

1-bit Adder with Carry-Out

A B Q C

0 0 0 0

0 1 1 0

1 0 1 0

1 1 0 1

By looking at this table you can see that you can implement the sum Q with an XOR gate and

C (carry-out) with an AND gate.

Fig. 1: Schematics for half adder circuit

Full adder:

If you want to add two or more bits together it becomes slightly harder. In this case, we need to create a full adder circuit. The difference between a full adder and the half adder we

looked at is that a full adder accepts inputs A and B plus a carry-in (CN-1) giving outputs

Q and CN. Once we have a full adder, then we can string eight of them together to create a

byte-wide adder and cascade the carry bit from one adder to the next. The logic table for a

full adder is slightly more complicated than the tables we have used before, because now

we have 3 input bits. The truth table and the circuit diagram for a full-adder is shown in Fig.

2. If you look at the Q bit, it is 1 if an odd number of the three inputs is one, i.e., Q is the

XOR of the three inputs. The full adder can be realized as shown below. Notice that the full

adder can be constructed from two half adders and an OR gate.

One-bit Full Adder with Carry-In & Carry-Out

CN-1 A B Q CN

0 0 0 0 0

0 0 1 1 0

0 1 0 1 0

0 1 1 0 1

1 0 0 1 0

1 0 1 0 1

1 1 0 0 1

1 1 1 1 1
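The half adder and full adder logic above can be sketched in a few lines of Python (added for illustration; the function names are my own), reproducing the truth table just given:

def half_adder(a, b):
    q = a ^ b                  # sum bit Q is the XOR of the inputs
    c = a & b                  # carry-out C is the AND of the inputs
    return q, c

def full_adder(a, b, c_in):
    # a full adder is two half adders plus an OR gate on the two carries
    q1, c1 = half_adder(a, b)
    q, c2 = half_adder(q1, c_in)
    return q, c1 | c2

for c_in in (0, 1):            # reproduce the full-adder truth table above
    for a in (0, 1):
        for b in (0, 1):
            print(c_in, a, b, *full_adder(a, b, c_in))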

5. Give the algorithm for multiplication of signed 2's complement numbers and illustrate with an example. (AUC APR'08, NOV'11)

Consider two unsigned binary numbers X and Y . We want to multiply these numbers. The basic

algorithm is similar to the one used in multiplying the numbers on pencil and paper. The main

operations involved are shift and add.

There are two algorithms used:

1. Robertson algorithm

2 .Booth algorithm

Recall that the `pencil-and-paper' algorithm is inefficient in that each product term (obtained by multiplying each bit of the multiplier by the multiplicand) has to be saved till all such product terms are obtained. In machine implementations, it is desirable to add all such product terms to form the partial product. Also, instead of shifting the product terms to the left, the partial product is shifted to the right before the addition takes place. In other words, if Pi is the partial product after i steps and if Y is the multiplicand and X is the multiplier, then

Pi := Pi + xj · Y

and

Pi+1 := Pi · 2^-1

and the process repeats.

Note that the multiplication of signed magnitude numbers requires a straightforward

extension of the unsigned case. The magnitude part of the product can be computed just as in

the unsigned magnitude case. The sign p0 of the product P is computed from the signs of X and

Y as

p0 = x0 ⊕ y0

Two's complement Multiplication - Robertson's Algorithm

Consider the case that we want to multiply two 8-bit numbers X = x0x1...x7 and Y = y0y1...y7.

Depending on the sign of the two operands X and Y , there are 4 cases to be considered :

x0 = y0 = 0, that is, both X and Y are positive. Hence, multiplication of these numbers is

similar to the multiplication of unsigned numbers. In other words, the product P is

computed in a series of add-and-shift steps of the form

Pi := Pi + xj · Y

Pi+1 := Pi · 2^-1

Note that all the partial product are non-negative. Hence, leading 0s are introduced during

right shift of the partial product.

x0 = 0; y0 = 1, that is, X is positive and Y is negative. In this case, the partial product is

positive and hence leading 0s are shifted into the partial product until the first 1 in X is

encountered. Multiplication of Y by this 1, and addition to the result causes the partial

product to be negative, from which point on leading 1s are shifted in (rather than 0s).

x0 = 1; y0 = 1, that is, both X and Y are negative. Once again, leading 1s are shifted into

the partial product once the first 1 in X is encountered. Also, since X is negative, the

correction step (subtraction as the last step) is also performed.

Booth's Algorithm

Recall that the preceding multiplication algorithms (Robertson's algorithm) involves scanning the

multiplier from right to left and using the current multiplier bit xi to determine whether the

multiplicand Y be added, subtracted or add 0 (do nothing) to the partial product. In Booth's

algorithm, two adjacent bits xixi+1 are examined in each step. If xixi+1 = 01, then Y is added to

the partial product, while if xixi+1 = 10, Y is subtracted from Pi (partial product). If xixi+1 = 00 or

11, then neither addition nor subtraction is performed. Thus, Booth's algorithm effectively skips over sequences of 1s and 0s in X. As a result, the total number of addition/subtraction steps required to multiply two numbers decreases (however, at the cost of extra hardware).

The process of inspecting the multiplier bits required by Booth's algorithm can be viewed as encoding the multiplier using the three digits 0, +1 and -1, where 0 means shift the partial product to the right (that is, no addition or subtraction is performed), +1 means add the multiplicand before shifting, and -1 means subtract the multiplicand from the partial product before shifting. The number thus produced is called a signed-digit number, and this process of converting a multiplier X into signed-digit form is called multiplier recoding. To generate the recoded multiplier X* from X, append a 0 to the right of X (that is, start with x0 x1 ... xn-1 0). Then use the following table to generate X* from X:

xi xi+1 | xi*

0 0 | 0

0 1 | +1

1 0 | -1

1 1 | 0

Booth's algorithm results in reduction in the number of add/subtract steps needed (as

compared to the Robertson's algorithm) if the multiplier contains runs (or sequences) of 1s or

0s. The worst case scenario occurs in Booth's algorithm if X = 0101...01, where there are n/2 isolated 1s, which forces n/2 subtractions and n/2 additions. This is worse than the standard multiplication algorithm, which requires only n/2 additions.

The basic booth's algorithm can be improved by detecting an isolated 1 in the multiplier

and just performing addition at the corresponding point in the multiplication. Similarly, an

isolated 0 corresponds to a subtraction. This is called the modified Booth's algorithm, which always requires fewer additions/subtractions compared to the other multiplication algorithms.

Note that the basic booth's algorithm can be implemented by examining two adjacent bits

xixi+1 of the multiplier. The modified booth's algorithm can be implemented by identifying

isolated 1s and 0s. This is achieved by using a mode flip-flop F, which is set to 1 when a run of two or more 1s is encountered, and is reset to 0 when the run of 1s ends with two or more 0s.

Analogous to the basic booth's algorithm recoding technique, the multiplier recoding

scheme that takes isolated 0s and 1s into account is called a canonical signed digit recoding.

The basic steps for canonical recoding are as follows:

First, x-1 = x0 is appended to the left end and a 0 is appended to the right end of the number x0 x1 x2 ... xn-1 to create x-1 x0 x1 ... xn-1 0. X is then scanned from right to left, and the pair of bits xi-1 xi, together with the flag f, is used to determine the bit xi* of the recoded number X* using the following table:

xi-1 xi f | xi* f'

0 0 0 | 0 0

0 1 0 | +1 0

1 0 0 | 0 0

1 1 0 | -1 1

0 0 1 | +1 0

0 1 1 | 0 1

1 0 1 | -1 1

1 1 1 | 0 1

The above conversion table can be easily derived from the basic multiplier recoding table.

There are two special cases we need to consider:

1. xi-1 xi xi+1 = 101. This is the situation when an isolated 0 is encountered. Here, we just want to perform a subtraction. Hence, set xi* = -1 and f = 1.

2. xi-1 xi xi+1 = 010. This is the situation when an isolated 1 is encountered, which we want to treat as part of a sequence of 0s. Hence, we perform the addition corresponding to the isolated 1 and set the flag to 0. In other words, xi* = +1 and f = 0.

3. The rest of the entries of the table can be derived by treating f as equal to xi+1 and xi* as equal to the xi* of the basic multiplier recoding table, with xi-1 acting as the look-ahead. The value of the look-ahead (that is, xi-1) can be used to determine the new value of f. (A small sketch of the basic recoding rule follows below.)
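A minimal Python sketch of the basic Booth recoding rule described above (illustrative only; the bit-list encoding, MSB first, and the function name are my own):

def booth_recode(bits):
    # bits: multiplier as a list of 0/1, MSB first; returns signed digits, MSB first.
    extended = bits + [0]                        # append a 0 to the right of the LSB
    digits = []
    for i in range(len(bits)):
        xi, xi1 = extended[i], extended[i + 1]   # examine the adjacent bits xi, xi+1
        if (xi, xi1) == (0, 1):
            digits.append(+1)                    # 01: add the multiplicand
        elif (xi, xi1) == (1, 0):
            digits.append(-1)                    # 10: subtract the multiplicand
        else:
            digits.append(0)                     # 00 or 11: shift only
    return digits

# 01110110 (118) recodes to +1 0 0 -1 +1 0 -1 0, i.e. 128 - 16 + 8 - 2 = 118
print(booth_recode([0, 1, 1, 1, 0, 1, 1, 0]))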

6. Design a ripple carry adder.

Arithmetic operations like addition, subtraction, multiplication and division are basic operations to be implemented in digital computers using basic gates like AND, OR, NOR, NAND etc.

Among all the arithmetic operations if we can implement addition then it is easy to perform

multiplication (by repeated addition), subtraction (by negating one operand) or division

(repeated subtraction).

Half Adders can be used to add two one bit binary numbers. It is also possible to create a

logical circuit using multiple full adders to add N-bit binary numbers. Each full adder inputs

a Cin, which is the Cout of the previous adder. This kind of adder is a Ripple Carry Adder,

since each carry bit "ripples" to the next full adder. The first (and only the first) full adder

may be replaced by a half adder. The block diagram of 4-bit Ripple Carry Adder is shown

here below -

The layout of ripple carry adder is simple, which allows for fast design time; however, the

ripple carry adder is relatively slow, since each full adder must wait for the carry bit to be

calculated from the previous full adder. The gate delay can easily be calculated by inspection

of the full adder circuit. Each full adder requires three levels of logic. In a 32-bit ripple carry adder, there are 32 full adders, so the critical path (worst case) delay is 31 * 2 (for carry propagation) + 3 (for sum) = 65 gate delays.

Design Issues :

The corresponding boolean expressions are given here to construct a ripple carry adder. In

the half adder circuit the sum and carry bits are defined as


sum = A ⊕ B

carry = AB

In the full adder circuit the Sum and Carry outputs are defined by the inputs A, B and Carry-in (C) as

Sum = A'B'C + A'BC' + AB'C' + ABC

Carry = A'BC + AB'C + ABC' + ABC

Having these we could design the circuit. But we first check to see if there are any logically equivalent statements that would lead to a more structured equivalent circuit.

With a little algebraic manipulation, one can see that

Sum = A'B'C + A'BC' + AB'C' + ABC

= (A'B' + AB)C + (A'B + AB')C'

= (A ⊕ B)'C + (A ⊕ B)C'

= A ⊕ B ⊕ C

Carry = A'BC + AB'C + ABC' + ABC

= AB + (A'B + AB')C

= AB + (A ⊕ B)C
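A short Python sketch (my own illustration, not part of the original answer) that chains the full-adder equations above into an n-bit ripple carry adder, with each stage's carry-in taken from the previous stage's carry-out; bit lists are LSB first:

def full_adder(a, b, c_in):
    s = a ^ b ^ c_in                        # Sum = A xor B xor Cin
    c_out = (a & b) | ((a ^ b) & c_in)      # Carry = AB + (A xor B)Cin
    return s, c_out

def ripple_carry_add(a_bits, b_bits, c0=0):
    carry, sum_bits = c0, []
    for a, b in zip(a_bits, b_bits):        # the carry "ripples" through the stages
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 1011 (11) + 0110 (6) = 10001 (17): sum bits [1, 0, 0, 0] LSB first, carry-out 1
print(ripple_carry_add([1, 1, 0, 1], [0, 1, 1, 0]))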

7. Explain the floating point adder pipeline diagram with neat block diagram.(AUC NOV’11)

Pipeline arithmetic units are usually found in very high speed computers. Pipelining is a

well known method for improving the performance of digital systems. Pipelining exploits

concurrency in combinational logic in order to improve system throughput.

They are used to implement floating-point operations, multiplication of fixed-point

numbers, and similar computations encountered in scientific problems

The exponents of the two input numbers may differ, and numbers with dissimilar exponents cannot be added directly. So the first problem is equalizing the exponents: the exponent of the smaller number must be increased until it equals that of the larger number, and its significand shifted accordingly. Then the significands are added. The fixed sizes of the mantissa and exponent fields of a floating-point number cause further problems during addition and subtraction; the second problem is overflow of the mantissa.

Example for floating-point addition and subtraction

Inputs are two normalized floating-point binary numbers

X = A x 2^a

Y = B x 2^b


A and B are two fractions that represent the mantissas, and a and b are the exponents. The following pipeline segments are used to perform the add operation.

Steps:

Compare the exponents

Align the mantissas

Add or subtract the mantissas

Normalize the result

Step 1: Compare the exponents of the two numbers (for addition or subtraction) and calculate the absolute value of the difference between the two exponents. Take the larger exponent as the tentative exponent of the result.

Step 2: Shift the significand of the number with the smaller exponent right through a number of bit positions equal to the exponent difference. Two of the shifted-out bits of the aligned significand are retained as the guard (G) and round (R) bits, so for p-bit significands the effective width of the aligned significand must be p + 2 bits. Append a third bit, namely the sticky bit (S), at the right end of the aligned significand. The sticky bit is the logical OR of all shifted-out bits.

Step 3: Add/subtract the two signed-magnitude significands using a p + 3 bit adder. Call the result SUM.

Step 4: During addition, check SUM for a carry-out (Cout) from the MSB position. If a carry-out is detected, shift SUM right by one bit position and increment the tentative exponent by 1. During subtraction, check SUM for leading zeros; shift SUM left until the MSB of the shifted result is a 1, and subtract the leading-zero count from the tentative exponent. Evaluate exception conditions, if any.

Step 5: Round the result if the logical condition R''(M0 + S'') is true, where M0 and R'' represent the pth and (p + 1)st bits from the left of the normalized significand. The new sticky bit (S'') is the logical OR of all bits to the right of the R'' bit. If the rounding condition is true, a 1 is added at the pth bit (from the left) of the normalized significand.

X = 0.9504 x 10^3 and Y = 0.8200 x 10^2

o The two exponents are subtracted in the first segment to obtain 3 - 2 = 1

o The larger exponent, 3, is chosen as the exponent of the result

o Segment 2 shifts the mantissa of Y to the right to obtain Y = 0.0820 x 10^3

o The mantissas are now aligned

o Segment 3 produces the sum Z = 1.0324 x 10^3

o Segment 4 normalizes the result by shifting the mantissa once to the right and incrementing the exponent by one to obtain Z = 0.10324 x 10^4. (A small sketch of these four segments follows below.)
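The four segments can be sketched in Python on the decimal example above (illustrative only; the function name is my own and only the addition path with positive operands is modelled):

def fp_pipeline_add(mx, ex, my, ey):
    # Segment 1: compare the exponents; the larger one is the tentative result exponent.
    if ex < ey:
        mx, ex, my, ey = my, ey, mx, ex
    diff = ex - ey
    # Segment 2: align the mantissa of the smaller number by shifting it right.
    my = my / (10 ** diff)
    # Segment 3: add the aligned mantissas.
    z = mx + my
    # Segment 4: normalize the result so the mantissa is a fraction below 1.
    while z >= 1.0:
        z /= 10
        ex += 1
    return z, ex

print(fp_pipeline_add(0.9504, 3, 0.8200, 2))   # approximately (0.10324, 4)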

8. Explain in detail about sequential ALU and combinational ALU.

The various circuits used to execute data processing instructions are usually combined

into a single circuit called an arithmetic logic unit (ALU).


The complexity of ALU is determined by the way in which its arithmetic instructions are

realized.

Two types of ALU:

1. Combinational ALU

2. Sequential ALU

A combinational ALU combines the functions of a two's-complement adder-subtracter with those of a circuit

that generates word-based logic functions of the form f(X,Y), for example, AND, XOR,

NOT.

It implements most of a CPU’s fixed point data-processing instructions.

The minterms of f(xi, yi) are

The sum-of-product expression is

For n-bit words it is


A sequential ALU performs the following operations on its registers:

Addition: AC := AC + DR

Subtraction: AC := AC - DR

Multiplication: AC.MQ := DR x MQ

Division: AC.MQ := MQ / DR

AND: AC := AC and DR

OR: AC := AC or DR

EX-OR: AC := AC xor DR

NOT: AC := not(AC)

Three one-word registers: AC, DR, MQ

AC and MQ are organized as a single register AC.MQ capable of left and right shifting.

DR can serve as a memory data register to store data addressed by an instruction

address field ADR. Then DR can be replaced by M(ADR).

A register is selected by:

RA (number) selects the register to put on busA (data)

RB (number) selects the register to put on busB (data)

RW (number) selects the register to be written via busW (data) when Write Enable is asserted

Clock input (CLK):

The CLK input is a factor ONLY during write operations. During a read operation, the register file behaves as a combinational logic block: RA or RB valid => busA or busB valid after the "access time".
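To make the register-transfer notation above concrete, here is a tiny Python sketch (my own illustration; only unsigned fixed-point behaviour on 16-bit words is modelled) of the sequential ALU registers AC, DR and MQ:

class SequentialALU:
    WIDTH = 16
    MASK = (1 << WIDTH) - 1

    def __init__(self):
        self.AC = self.DR = self.MQ = 0

    def add(self):  self.AC = (self.AC + self.DR) & self.MASK    # AC := AC + DR
    def sub(self):  self.AC = (self.AC - self.DR) & self.MASK    # AC := AC - DR
    def and_(self): self.AC &= self.DR                           # AC := AC and DR
    def or_(self):  self.AC |= self.DR                           # AC := AC or DR
    def xor(self):  self.AC ^= self.DR                           # AC := AC xor DR
    def not_(self): self.AC = ~self.AC & self.MASK               # AC := not(AC)

    def multiply(self):
        # AC.MQ := DR x MQ -- the double-length product fills the AC.MQ register pair
        product = self.DR * self.MQ
        self.AC, self.MQ = product >> self.WIDTH, product & self.MASK

    def divide(self):
        # AC.MQ := MQ / DR -- quotient left in MQ, remainder left in AC
        self.MQ, self.AC = self.MQ // self.DR, self.MQ % self.DR

alu = SequentialALU()
alu.DR, alu.MQ = 300, 41
alu.multiply()
print(alu.AC, alu.MQ)   # 0 12300 -- the product fits in the low word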