33rd Design Automation Conference (DAC '96), Las Vegas, NV, USA, June 3-7, 1996.

Permission to make digital/hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. © 1996 ACM, Inc. 0-89791-833-9/96/0006 $3.50
On Solving Covering Problems

Olivier Coudert

Synopsys Inc., 700 East Middlefield Rd.

Mountain View, CA 94043, USA

Abstract

The set covering problem and the minimum cost assignment problem (respectively known as unate and binate covering problems) arise throughout the logic synthesis flow. This paper investigates the complexity and approximation ratio of two lower bound computation algorithms from both a theoretical and practical point of view. It also presents a new pruning technique that takes advantage of the partitioning.

1 Introduction

The (unate or binate) covering problem is a well known intractable problem. It has several important applications in logic synthesis, such as two-level logic minimization, two-level Boolean relation minimization, three-level NAND implementation, state minimization, exact encoding, and DAG covering [10, 6, 2].

Although from the practical point of view people are more interested in low cost heuristic algorithms that produce approximate solutions, solving covering problems exactly is the only way to evaluate the performance of these heuristic algorithms. Moreover, improving exact solvers can help find better heuristics.

Since a branch-and-bound based covering problem solver has to look at a search space that is potentially exponential, one has to prevent the solver from exploring unsuccessful branches. This relies on two aspects: lower bound computation and pruning techniques. We have introduced better lower bound computation techniques [4] as well as original pruning methods [5] that dramatically improved the efficiency of covering problem solvers.

This paper presents an extension of this work. Section 2 reviews some concepts related to binate covering problems. Section 3 addresses the lower bound computation problem. It discusses the quality of achievable lower bounds from a theoretical point of view. It shows that an original lower bound computation method has a better approximation ratio than the widely used independent set based method. Section 4 introduces a new pruning technique that takes advantage of the partitioning. Section 5 presents experimental evidence that demonstrates the effectiveness of these methods on unate and binate covering problems.

2 Binate Covering Problem

This section defines the binate covering problem and the notations that will be used in the sequel.

We denote by v an element of {0, 1}, and by y a Boolean variable. We denote by y^v the assignment of y to v. Let f(y1, ..., yn) be a Boolean function from {0, 1}^n into {0, 1}. Let Cost be a function that associates a positive cost with the assignment of variable yk to 0 or 1. The cost of an n-tuple (v1, ..., vn) of {0, 1}^n is defined as Σ_{k=1..n} Cost(yk^vk).

Definition 1  The binate covering problem consists of finding a minimum cost n-tuple that evaluates f to 1.

Let ∏_{i=1..m} si be a product-of-sums representation of f(y1, ..., yn). The covering matrix associated with a binate covering problem is a matrix M made of m rows labeled with the sums si, and of n columns labeled with the variables yj. An element M[i, j] of the matrix is equal to 1 if yj ⇒ si, to 0 if ȳj ⇒ si, and to 2 otherwise (footnote 1). An n-tuple (v1, ..., vn) ∈ {0, 1}^n covers a row si iff there exists a j such that M[i, j] = vj. The binate covering problem consists of finding a minimum cost n-tuple that covers all the rows of the matrix. Fig. 1 shows a covering matrix (footnote 2) for the function y2·y3 + (y1 ⊕ y4). Assuming the same non-zero cost for all positive literals and a zero cost for negative literals, the minimum cost true assignments are the 4-tuples (1, 0, 0, 0) and (0, 0, 0, 1).

                     y1   y2   y3   y4
    y1 + y2 + y4      1    1         1
    ȳ1 + y2 + ȳ4      0    1         0
    y1 + y3 + y4      1         1    1
    ȳ1 + y3 + ȳ4      0         1    0

Figure 1: A binate covering matrix.

Footnote 1: Assuming there is no sum that contains a variable and its negation.
Footnote 2: We omit the 2's when showing covering matrices.
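To make the encoding concrete, here is a small sketch (in Python, not from the paper; all names are illustrative) of the matrix of Fig. 1 together with the cost and covering checks defined above, assuming a unit cost for positive literals and a zero cost for negative ones.

    # Covering matrix of Fig. 1 for f = y2*y3 + (y1 XOR y4); columns are y1..y4.
    # Entry 1: the column appears positively in the sum; 0: negatively; 2: absent.
    M = [
        [1, 1, 2, 1],   # y1 + y2 + y4
        [0, 1, 2, 0],   # y1' + y2 + y4'
        [1, 2, 1, 1],   # y1 + y3 + y4
        [0, 2, 1, 0],   # y1' + y3 + y4'
    ]

    def cost(v, pos_cost=1, neg_cost=0):
        # Cost of an n-tuple: sum of the costs of the individual assignments yk = vk.
        return sum(pos_cost if vk == 1 else neg_cost for vk in v)

    def covers_all(v, M):
        # v covers row i iff M[i][j] = v[j] for some column j; all rows must be covered.
        return all(any(M[i][j] == v[j] for j in range(len(v))) for i in range(len(M)))

    assert covers_all((1, 0, 0, 0), M) and covers_all((0, 0, 0, 1), M)
    assert not covers_all((0, 0, 0, 0), M)
    print(cost((1, 0, 0, 0)))   # 1, the cost of a minimum cost true assignment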

[Figure 2: An ill-conditioned covering matrix. For n = 6 it has 6 rows x1, ..., x6 and 6·5/2 = 15 columns, one column per pair of rows; each row has a 1 in the 5 columns corresponding to the pairs that contain it.]


We denote by ⟨X, Y, Cost⟩ a covering problem, where X and Y are the sets of rows and columns respectively. We identify y^v with the set of rows y covers when it is assigned to v. We identify a row x with the set of columns that cover it. The symbol y will be used to denote a set y^v, and we consider that y represents the pair of columns (y^0, y^1). One says that y is unate if y^v = ∅ for some v. A covering problem whose columns are all unate is a unate covering problem, also called a set covering problem.

A covering matrix can be simplified using essentiality, dominance [11, 9], partitioning, and Gimpel's reduction [7, 12]. The reader is referred to [2, 13, 5] for a review of these reduction techniques. Iterating the reduction process produces a fixpoint, called the cyclic core of the covering matrix [11]. If it is empty, a minimum solution is built from the essential columns found during the reduction, and from the columns used by Gimpel's reductions.
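As an illustration of this reduction loop, here is a minimal sketch in Python for the unate case only; it shows essentiality and row dominance, leaves out column dominance and Gimpel's reduction, and all identifiers are ours, not the paper's.

    def reduce_unate(rows):
        # rows: dict row_id -> frozenset of columns covering the row (unate case,
        # every row assumed coverable). Iterates two reductions to a fixpoint and
        # returns (cyclic core, essential columns picked so far).
        rows = dict(rows)
        essential = set()
        changed = True
        while changed:
            changed = False
            # Essentiality: a row covered by a single column forces that column,
            # which in turn covers (and removes) every row containing it.
            singles = [next(iter(c)) for c in rows.values() if len(c) == 1]
            if singles:
                y = singles[0]
                essential.add(y)
                rows = {r: c for r, c in rows.items() if y not in c}
                changed = True
                continue
            # Row dominance: a row whose covering set contains another row's
            # covering set is automatically covered with it, so it can be dropped.
            dominated = set()
            for r1, c1 in rows.items():
                for r2, c2 in rows.items():
                    if r1 != r2 and r2 not in dominated and c2 <= c1:
                        dominated.add(r1)
                        break
            if dominated:
                rows = {r: c for r, c in rows.items() if r not in dominated}
                changed = True
        return rows, essential

    # Row 0 forces column a; row 3's covering set contains row 2's, so it is dropped.
    core, ess = reduce_unate({0: frozenset("a"), 1: frozenset("ab"),
                              2: frozenset("bc"), 3: frozenset("bcd")})
    print(ess, core)   # essential column {'a'}; cyclic core: row 2, covered by b and c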

If the reduced covering problem C = ⟨X, Y, Cost⟩ is not empty, a branch-and-bound resolution can be performed. A branching variable y is selected to generate two covering problems. The first one, C_v = ⟨X − y^v, Y − {y}, Cost⟩, considers that y = v, and the other, defined as C_v̄ = ⟨X − y^v̄, Y − {y}, Cost⟩, considers that y = v̄. Both problems C_v and C_v̄ are recursively solved, which produces the minimum solution of C.
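The recursion can be sketched as follows (Python, with our own toy representation, and with the lower bound pruning of Section 3 left out); it is only meant to show the branching scheme, not SCHERZO's implementation.

    def solve(rows, columns, cost):
        # rows:    frozenset of still-uncovered row ids.
        # columns: dict y -> (rows covered by y = 0, rows covered by y = 1),
        #          restricted to the columns not yet assigned.
        # cost(y, v): cost of assigning column y to value v.
        # Returns the minimum cost of covering all rows, or inf if infeasible.
        if not rows:
            # Every remaining column takes its cheaper value.
            return sum(min(cost(y, 0), cost(y, 1)) for y in columns)
        if not columns:
            return float("inf")
        y = next(iter(columns))        # branching column (a real solver uses a heuristic)
        rest = {z: cov for z, cov in columns.items() if z != y}
        # A real solver compares a lower bound of each subproblem with the best
        # solution found so far in order to prune the branch (Section 3).
        return min(cost(y, v) + solve(rows - columns[y][v], rest, cost) for v in (0, 1))

    # The problem of Fig. 1: column j covers the listed rows when set to 0 / to 1.
    cols = {
        0: (frozenset({1, 3}), frozenset({0, 2})),   # y1
        1: (frozenset(),       frozenset({0, 1})),   # y2
        2: (frozenset(),       frozenset({2, 3})),   # y3
        3: (frozenset({1, 3}), frozenset({0, 2})),   # y4
    }
    print(solve(frozenset(range(4)), cols, lambda y, v: v))   # 1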

3 Lower Bound Computation

When solving a covering problem with a branch-and-bound algorithm, it is essential to compute a good lower bound of the problem generated at some branch of the search tree, since the branch can be pruned as soon as the lower bound is greater than or equal to the best solution found so far. This section presents two lower bound computation methods, and shows that, from an approximation ratio point of view, the second method is far more effective than the widely used independent set based one. It then explains why, in practice, the independent set based method remains a better choice for most of the problems related to logic synthesis, although the second method is especially effective in some other applications.

3.1 Independent Set Based Lower Bound

A technique widely used to compute a lower bound consists of finding a set L of pairwise disjoint rows, i.e., a set of rows such that no column covers more than one row of L. Consider the graph whose vertices are the rows, and such that there is an edge between two rows iff they are covered by a common column. Then L is nothing but an independent set of this graph. The minimum cost needed to cover L is

    Cost(L) = Σ_{x∈L} Weight(x),   where   Weight(x) = min_{y∈Y, y∋x} Cost(y),

which is a lower bound on the minimum cost needed to cover X.

    L ← ∅;
    while X ≠ ∅ do {
        x ← an element of X;
        L ← L ∪ {x};
        X ← X − ∪_{y∋x} y;
    }
    return L;

Figure 3: Greedy independent set computation.
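In Python, this bound could be sketched as follows for the unate case (rows given as sets of columns, one cost per column; the selection heuristic is the ratio analyzed in Section 3.2 below, and none of the identifiers come from the paper).

    def independent_set_lower_bound(rows, cost):
        # rows: iterable of frozensets of columns (each row = the columns covering it).
        # cost: dict column -> positive cost.
        def weight(x):
            return min(cost[y] for y in x)          # cheapest column covering x
        remaining = set(rows)
        bound = 0
        while remaining:
            # Selection heuristic of Section 3.2: maximize Weight(x) / |x|.
            x = max(remaining, key=lambda r: weight(r) / len(r))
            bound += weight(x)
            # Keep only rows sharing no column with x, so the kept rows stay disjoint.
            remaining = {r for r in remaining if not (r & x)}
        return bound

    rows = [frozenset("ab"), frozenset("bc"), frozenset("d")]
    print(independent_set_lower_bound(rows, {c: 1 for c in "abcd"}))   # 2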

Indeed, the independent set based lower bound can be arbitrarily far from the minimum cost solution of the binate covering problem. Consider a covering matrix made of n rows and n(n−1)/2 columns, such that any pair of rows is 1-covered by a single column, and each column covers only two rows (footnote 3). Fig. 2 shows such a matrix for n = 6. Assuming a unit cost for all columns, the maximum independent set based lower bound is 1, while the minimum cost covering is made of ⌈n/2⌉ columns.
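A quick sanity check of this gap for n = 6 (a throwaway sketch, not part of the paper's experiments):

    from itertools import combinations

    n = 6
    pairs = list(combinations(range(n), 2))              # the 15 columns of Fig. 2
    rows = [frozenset(j for j, p in enumerate(pairs) if i in p) for i in range(n)]

    # Any two rows share the column of their pair, so every independent set has
    # size 1 and the independent set based lower bound is 1 (with unit costs).
    assert all(rows[i] & rows[j] for i in range(n) for j in range(i + 1, n))

    # Smallest k for which k columns cover every row: ceil(n/2) = 3.
    best = next(k for k in range(1, len(pairs) + 1)
                if any(all(r & set(S) for r in rows)
                       for S in combinations(range(len(pairs)), k)))
    print(best)   # 3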

3.2 Approximation of Independent Set

Maximizing the independent set based lower bound is NP-hard. A natural heuristic computation consists of computing an independent set L in a greedy way, as shown in Fig. 3. It consists of selecting a row x of X to be put in L, removing from X all the rows that have a column in common with x to enforce that L is an independent set, and iterating this process until X is empty.

We are now interested in comparing the best achievable independent set based lower bound with the one obtained using a greedy algorithm. The effectiveness of such a greedy algorithm depends on the choice of the row x to be put in the independent set L. Let us assume that we select the row x that maximizes the ratio of its weight (the greater, the better) to its number of elements (the smaller, the better). Theorem 1 shows that this yields an O(|X|) ratio approximation of the maximum independent set based lower bound, which is a very poor approximation (footnote 4). The proof still holds for more complex selection heuristics, e.g., the one proposed in [4]. Although it has not been proved yet for all polynomial-time selection heuristics, it is unlikely that such a greedy approach can achieve a better approximation.

Footnote 3: This polynomial size unate matrix is completely reduced w.r.t. dominance relations and Gimpel's reduction for n > 2.
Footnote 4: The best known approximation ratio is O(|X| / (log |X|)^2), but the corresponding algorithm is more costly [1].

Theorem 1  Let C = ⟨X, Y, Cost⟩ be a binate covering problem. Let Lo be a maximum cost independent set, and L the independent set computed by the algorithm given in Fig. 3, where the heuristic selection is:

    x = argmax_{x∈X} Weight(x) / |x|

Then we have:

    Cost(L) ≤ Cost(Lo) ≤ Cost(L) · max_{x∈X} |x|

Proof. Let L = {x1, x2, ...}, where xk is the k-th selected row. Since L is a set of disjoint sets of columns, we can properly define

    w(y) = Weight(xk) / |xk|   if y ∈ xk,
    w(y) = 0                   otherwise.

Let us show that Σ_{y∈x} w(y) ≥ Weight(x) / |x| for any x.

Note that since the independent set L is maximal (i.e., any row intersects some row of L), there exists a least integer, say j, for which x ∩ xj ≠ ∅.

    Σ_{y∈x} w(y) = Σ_k Σ_{y ∈ x∩xk} Weight(xk) / |xk|
                 ≥ Weight(xj) / |xj|
                 ≥ Weight(x) / |x|

The last inequality holds because, by definition of j, we have x ⊆ (Y − ∪_{k=1..j−1} xk), and xj has been chosen because it maximizes the quantity given above.

Thus we have (the second inequality holds because Lo is made of disjoint sets):

    Σ_{y∈Y} w(y) = Σ_k Σ_{y∈xk} w(y) = Σ_k Weight(xk) = Cost(L)
                 ≥ Σ_{x∈Lo} Σ_{y∈x} w(y)
                 ≥ Σ_{x∈Lo} Weight(x) / |x|
                 ≥ Cost(Lo) / max_{x∈X} |x|                                      □

    S ← ∅;
    while X ≠ ∅ do {
        y ← argmin_{y∈Y} Cost(y) / Γ(y ∩ X);
        X ← X − y;
        S ← S ∪ {y};
    }
    return S;

Figure 4: Greedy computation of a solution.

3.3 log-Approximation of Unate Problems

We have shown not only that the maximum (NP-hard) independent set based lower bound can be arbitrarily far from the minimum cost solution, but also that its approximation with a greedy algorithm is of poor quality. We present an original lower bound computation technique that has the same complexity as the method presented above, but that guarantees a log approximation ratio w.r.t. the minimum solution on unate problems. This result still holds on binate covering problems when the method finds a feasible solution.

Indeed, this result relies on an inequality relating the cost of the best solution and the cost of a solution obtained in a greedy way. In other words, we first generate a feasible solution, and we then derive from its cost a lower bound on the minimum solution.

Let γ be some strictly positive weighting function defined on the set of rows, and let us define

    Γ(y) = Σ_{x∈y} γ(x).

Fig. 4 presents a subquadratic time algorithm that builds a solution (footnote 5) in a greedy way. From the cost of the solution S obtained by this algorithm, we derive a lower bound on the problem, as stated by the following theorem (a weaker form of this result can be found in [8, 3]).

Theorem 2  Let C = ⟨X, Y, Cost⟩ be a unate covering problem. Let So be a minimum cost solution of C, and S the solution computed by the algorithm given in Fig. 4. Then we have:

    Cost(S) / r ≤ Cost(So) ≤ Cost(S)

where

    r = Σ_{k=1..max_{y∈Y} Γ(y)} 1/k

if γ is integer valued, and

    r = 1 + log( max_{y∈Y} Γ(y) / min_{x∈y} γ(x) )

otherwise. Note that the latter upper bounds the former.

Footnote 5: Note that it is not necessarily irredundant.

Proof. Let S = {y1, y2, ...}, where yk is the k-th selected column. Let us denote by S(y, k) = y − ∪_{j=1..k−1} yj the set of x's belonging to y that remain uncovered by the first k − 1 selected columns. Thus S(yk, k) is the set of x's that are newly covered by yk at the k-th step. Note that the sets S(yk, k) form a partition of X, so we can properly define, for x ∈ S(yk, k):

    w(x) = Cost(yk) γ(x) / Γ(S(yk, k))

We now show that Σ_{x∈y} w(x) ≤ r(y) Cost(y) for any y.

    Σ_{x∈y} w(x) = Σ_k Σ_{x ∈ yk∩S(y,k)} Cost(yk) γ(x) / Γ(S(yk, k))

By definition, the sequence S(y, k) is decreasing and eventually becomes empty for indices k greater than some integer j. Also note that at the k-th step of the greedy algorithm, y ∩ X is nothing but S(y, k). Since yk minimizes the quantity Cost(y) / Γ(S(y, k)), we obtain:

    Σ_{x∈y} w(x) ≤ Cost(y) Σ_{k=1..j} Γ(yk ∩ S(y, k)) / Γ(S(y, k))
                 = Cost(y) Σ_{k=1..j} (ak − ak+1) / ak,

where ak = Γ(S(y, k)), since yk ∩ S(y, k) = S(y, k) − S(y, k+1).

If γ is integer valued, then (ak) is a decreasing positive integer sequence that becomes zero at k = j + 1, so we have:

    Σ_{k=1..j} (ak − ak+1) / ak ≤ Σ_{k=1..j} Σ_{i=ak+1+1..ak} 1/i = Σ_{i=1..a1} 1/i = Σ_{i=1..Γ(y)} 1/i

In the general case, (ak) is a decreasing positive sequence that becomes zero at k = j + 1, and we have:

    Σ_{k=1..j} (ak − ak+1) / ak ≤ 1 + Σ_{k=1..j−1} ∫_{ak+1}^{ak} dx/x = 1 + log(a1 / aj) ≤ 1 + log( max_{y∈Y} Γ(y) / min_{x∈y} γ(x) )

We then have (the second inequality holds for any cover of X):

    Σ_{x∈X} w(x) = Σ_k Σ_{x∈S(yk,k)} w(x) = Σ_k Cost(yk) = Cost(S)
                 ≤ Σ_{y∈So} Σ_{x∈y} w(x)
                 ≤ Σ_{y∈So} r(y) Cost(y)
                 ≤ r · Cost(So)                                                   □

Theorem 2 shows that the unate covering problem is log(|X|)-approximable (take γ(x) = 1). Although the binate covering problem is not polynomially approximable (footnote 6), the log ratio still holds when the greedy algorithm of Fig. 4 produces a feasible solution.

This log inequality holds for any γ, in particular for the branching column selection heuristics of [13] and [4]. Here, we are interested in minimizing r and maximizing Cost(S). Since γ(x) captures the difficulty of covering x (the greater, the more difficult), we can reverse the criterion that would yield a good upper bound in order to obtain a good lower bound (e.g., we can take γ(x) = |x|).
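A sketch of this bound for the unate case (Python; rows as sets of columns, γ(x) = |x| as suggested above, and r taken from the integer-valued case of Theorem 2; the code is ours, not the paper's):

    def greedy_cover_lower_bound(rows, cost):
        # rows: list of frozensets of columns (assumed coverable); cost: column -> cost.
        gamma = {x: len(x) for x in set(rows)}          # gamma(x) = |x|
        columns = set().union(*rows)
        uncovered = set(rows)
        solution_cost = 0
        while uncovered:
            # Fig. 4: pick the column minimizing Cost(y) / Gamma(y ∩ X).
            def score(y):
                g = sum(gamma[x] for x in uncovered if y in x)
                return cost[y] / g if g else float("inf")
            y = min(columns, key=score)
            solution_cost += cost[y]
            uncovered = {x for x in uncovered if y not in x}
        # Theorem 2: Cost(S) / r lower-bounds the optimum, with
        # r = sum_{k=1..max_y Gamma(y)} 1/k since gamma is integer valued.
        max_gamma = max(sum(gamma[x] for x in set(rows) if y in x) for y in columns)
        r = sum(1.0 / k for k in range(1, max_gamma + 1))
        return solution_cost / r

    rows = [frozenset("ab"), frozenset("bc"), frozenset("cd")]
    print(greedy_cover_lower_bound(rows, {c: 1 for c in "abcd"}))   # about 0.96; optimum is 2

On this tiny, sparse example the bound is weak, which is exactly the behavior discussed in Section 3.4; the method pays off on large, dense matrices.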

3.4 Discussion

Let us call IND the independent set based lower bound presented in Section 3.2, and LB the lower bound computation method presented in Section 3.3. We have shown not only that IND has a poor approximation ratio regarding the maximum independent set based lower bound, but also that the latter can be arbitrarily far from the minimum solution. On the other hand, LB has a log-ratio approximation of the minimum solution.

Intuitively, the greater the density of the covering matrix, the more edges there are between the rows in the induced graph, and the smaller the maximum independent set. Thus we can expect low quality lower bounds from IND on such problems, while LB, thanks to its log-approximation ratio, can still be effective. On some high density (> 30%) optimal assignment problems in operations research, LB is typically 20% to 60% better than IND, and up to 5 times better.

Unfortunately, LB, although far better than IND from a theoretical point of view, yields a smaller lower bound than IND when the problem becomes smaller or sparser. Moreover, IND can take advantage of the limit lower bound [5], which can prune a complete search tree that LB could not prevent from being explored. In general, LB does not improve the lower bound enough to significantly increase the number of prunings, except for some very particular examples, e.g., ill-conditioned or high density covering matrices.

Footnote 6: Finding a feasible solution of a binate covering problem comes down to finding a true assignment of a conjunctive normal form, which is NP-complete.

[Figure 5: Partition of a covering problem into disjoint diagonal blocks B1, B2, ..., Bn.]

4 A Partition Based Pruning Technique

When the rows and columns of a covering matrix can be permuted so that its 0 or 1 elements only occur in a partition made of disjoint diagonal blocks Bk, as shown in Fig. 5, each block can be solved independently, and concatenating their solutions yields a solution of the original problem [13]. This section describes a pruning method (footnote 7) that takes advantage of such a partitioning. We denote by C.lower and C.upper some lower and upper bound of a set covering problem C. As soon as C.lower ≥ C.upper, the branch C belongs to can be pruned. When C is solved, both C.lower and C.upper are set to the minimum cost solution. We call a block a subproblem yielded by a partition.
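Finding such a partition amounts to computing the connected components of the graph linking rows and columns; a small Python sketch, with a representation of our own (each row id maps to the set of columns having a 0 or 1 entry in that row):

    def partition_blocks(rows):
        # Returns a list of blocks, each block being a set of row ids whose columns
        # are disjoint from those of every other block.
        col_to_rows = {}
        for r, cols in rows.items():
            for c in cols:
                col_to_rows.setdefault(c, set()).add(r)
        blocks, seen = [], set()
        for start in rows:
            if start in seen:
                continue
            block, stack = set(), [start]
            while stack:                              # DFS over rows sharing a column
                r = stack.pop()
                if r in seen:
                    continue
                seen.add(r)
                block.add(r)
                for c in rows[r]:
                    stack.extend(col_to_rows[c] - seen)
            blocks.append(block)
        return blocks

    # Rows 0 and 1 only use columns a, b; rows 2 and 3 only use column c: two blocks.
    print(partition_blocks({0: {"a"}, 1: {"a", "b"}, 2: {"c"}, 3: {"c"}}))
    # [{0, 1}, {2, 3}]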

Fig. 6 shows how a partition is solved. The method relies on three aspects:

- Lower bounds of the blocks are computed to prune the resolution as soon as possible.
- Upper bounds of the blocks are computed and used with the lower bounds to overconstrain the resolution of blocks, thus reducing the search space the solver has to look at.
- Each block and its most recent resolution status is cached, so that it can be retrieved when another partitioning yields the same block.

The resolution status is as follows: UNSAT (the problem is not feasible), SOLVED (the minimum solution is known), PRUNED (one did not find a better solution than the current upper bound), LU (we only know a lower and an upper bound (footnote 8) on the problem), and NEW (a new block with no information). When a partition of a problem C is found, we call the function of Fig. 6 on the list partition of blocks. If a block is found in the cache, we retrieve the most accurate information regarding its minimum solution, else we compute a lower and an upper bound (which can solve the block if both these bounds are equal). If the resolution of C is not pruned thanks to the lower bounds, we end up with a list unsol of unsolved blocks. Each of these blocks is then solved, but its search space is restricted by forcing a possibly overconstrained upper bound loc_upper. This artificial upper bound is the maximum room that is left to improve the current solution of C. Two flags characterize how overconstrained the resolution of the block is: do_better is true iff we must improve the block's current solution so that C's solution can be improved; overcons is true iff we forced a better upper bound. After having explored the (possibly smaller) search space of the block, we can decide whether the block has been solved, and whether we can abort the resolution of the partition.

Footnote 7: It was already used to produce the experimental results of [5].
Footnote 8: It can be infinite when it is not known whether the problem is feasible.

    function SolvePartition(C, partition)
        unsol ← ∅;
        lower ← 0;
        upper ← 0;
        foreach c ∈ partition {
            RetrieveBlock(c);
            /* c.status is UNSAT, SOLVED, LU, or NEW. */
            if c.status = UNSAT return UNSAT;
            if c.status = NEW then
                ComputeLowerAndUpperBound(c);
                /* c.status is now LU or SOLVED. */
            if c.status = LU then
                unsol ← unsol ∪ {c};
            lower ← lower + c.lower;
            upper ← upper + c.upper;
            if lower ≥ C.upper return PRUNED;
        }
        foreach c ∈ unsol {
            unsol ← unsol − {c};
            loc_upper ← min(upper, C.upper) − Σ_{c'∈unsol} c'.lower;
            if c.lower ≥ loc_upper return PRUNED;
            do_better ← (loc_upper ≤ c.upper);
            overcons ← (loc_upper < c.upper);
            c.upper ← min(c.upper, loc_upper);
            upper ← upper − c.upper;
            Solve(c);
            /* c.status is UNSAT, SOLVED, or PRUNED. */
            if c.status = UNSAT return UNSAT;
            if c.status = PRUNED {
                if (overcons = TRUE) then c.status ← LU; else c.status ← SOLVED;
                if do_better = TRUE return PRUNED;
            }
            /* Update the global upper bound. */
            upper ← upper + c.upper;
        }
        Build C's minimum solution and return SOLVED;

Figure 6: Solving a partition.

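As a small worked example of the bookkeeping in Fig. 6 (all numbers invented for illustration): suppose C.upper = 15 and the partition leaves three unsolved blocks with (lower, upper) bounds (3, 5), (4, 9) and (6, 8), so lower = 13 < 15 and upper = 22. For the first block, loc_upper = min(22, 15) − (4 + 6) = 5; since 5 ≤ 5, do_better is set (and overcons is not), and the block is solved with upper bound 5. If its minimum turns out to be exactly 5, Solve returns PRUNED; the block is marked SOLVED (it was not overconstrained), but because do_better is set the whole partition is pruned: even the block's optimum leaves no room to beat the current solution of C (5 + 4 + 6 = 15 ≥ C.upper).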

5 Experimental Results

Caching all the subproblems generated during a branch-and-bound is definitely not effective, because the hit ratio is nearly zero. However, caching the blocks generated by a partition is effective. Most of the time, only small blocks (say, 10 × 10 covering matrices) are retrieved, but it happens that larger blocks are generated several times, which substantially reduces the CPU time.

Table 1 shows the performance of our solver SCHERZO with and without the partition based pruning technique presented in Section 4.

                     Without                 With
  Name            node      CPU       LU     XAT     node      CPU
  mlp4L           5546     29.1        0      21      984     12.1
  m4L            35386     62.0        4     151     3428     19.5
  m2I          1301374   1076.2      501    2085    32692    149.6
  max512L      3939882   4924.4       21     326     6902    110.2
  addm4L       1513341   4301.9        2     119     1114     21.7
  expsI              -     > 2h      412    4176    22877    414.6
  count.b        43314     85.3      393     873    18280     64.7
  des.a         144569    202.0      459    3743    26106    146.5
  apex4.a      1625529    937.3     6755    7298    33898    149.4
  int13          35971    189.3       65     196    16779    147.0
  C880.b        428711   1084.8      174     603    52187    234.8
  int14              -     > 2h     2669   33869   475266   6584.3

The table gives the number of hits on upper and lower bounds (LU), the number of hits on solved blocks (XAT), i.e., with status SOLVED or UNSAT, the number of nodes in the search tree, and the CPU time in seconds on a 60 MHz SuperSparc (85.4 SpecInt).

Table 1: SCHERZO with and without the partition based pruning method.

The upper (respectively lower) part of the table presents data for unate (respectively binate) covering problems. We have chosen examples that can be partitioned during the branch-and-bound resolution. Of course, this does not happen with all examples, but the sparser the covering matrix, the more likely partitions can be found. Table 1 shows that when the covering matrix can be partitioned into independent blocks during the branch-and-bound, caching the blocks and overconstraining the block resolutions can reduce the CPU time by a factor of up to 100.

Table 2 compares the performance of SCHERZO with an espresso-like [13] version of SCHERZO, i.e., without the improvements presented both in this paper and in [5, 4]. The table clearly shows that the search space that must be explored to solve both unate and binate covering problems is dramatically reduced, and consequently the CPU time is cut by a factor of up to 1000.

                          Esp.-like             SCHERZO
  Name            sol      node      CPU       node      CPU
  lin.rom.oc      113      7966    161.9         24      1.5
  test1           110    252837   1327.9        284      5.1
  f_outL          364   1273837   1665.6      13112     52.4
  m4L            1335    545728   1854.6       3428     19.5
  mlp4L          1316   4265632  24339.3        984     12.1
  max512L        1087         -    > 4d        6902    110.2
  ocex1            69         -    > 4d        6555    192.0
  max1024         259         -    > 4d      131968  10472.5
  prom2           287         -    > 4d       27958  11947.0
  ex5              65         -    > 4d      105916  17514.5
  add4             31    616545    899.3        622      7.1
  5xp1.b           12     27925    106.6       6500      9.2
  count.b          24   1166613   6358.0      18280     64.7
  e64.a            80         -    > 2h         160      0.6
  sao2.b           25         -    > 2h         280      1.2
  jac3             15         -    > 4d         290      6.9
  des.a           942         -    > 4d       26106    146.5
  int13           110         -    > 4d       16779    147.0
  apex4.a         776         -    > 2h       33898    149.4
  C880.b           63         -    > 4d       52187    234.8
  int10           116         -    > 2h         998    507.4
  int14           140         -    > 4d      475266   6584.3

The table gives the minimum cost solution, the number of nodes in the search tree, and the CPU time in seconds on a 60 MHz SuperSparc (85.4 SpecInt).

Table 2: ESPRESSO-like vs. SCHERZO.

6 Conclusion

This paper has presented further insights into solving unate and binate covering problems. When solving covering problems with a branch-and-bound algorithm, quickly computing a good quality lower bound is a prime necessity. We investigated the tradeoff between the theoretical approximation ratio and the "practical" quality of two lower bound computation algorithms. We also presented a partition based pruning method that can substantially speed up the resolution. Experiments show that the results presented both in this paper and in [5, 4] improve the state of the art of unate and binate covering solvers by up to three orders of magnitude in speed.

References

[1] R. Boppana, M. Halldórsson, "Approximating Maximum Independent Sets by Excluding Subgraphs", BIT, 32, pp. 180-196, 1992.

[2] R. K. Brayton, G. D. Hachtel, C. T. McMullen, A. L. Sangiovanni-Vincentelli, Logic Minimization Algorithms for VLSI Synthesis, Kluwer Academic Publishers, 1984.

[3] V. Chvátal, "A Greedy Heuristic for the Set-Covering Problem", Math. Op. Res., 4-3, pp. 233-235, Aug. 1979.

[4] O. Coudert, J.-C. Madre, "New Ideas for Solving Covering Problems", 32nd DAC, pp. 641-646, June 1995.

[5] O. Coudert, "Two-Level Logic Minimization: An Overview", Integration, 17-2, pp. 97-140, Oct. 1994.

[6] S. Devadas, A. Ghosh, K. Keutzer, Logic Synthesis, McGraw-Hill, 1994.

[7] J. F. Gimpel, "A Reduction Technique for Prime Implicant Tables", IEEE Trans. Elec. Comp., 14, pp. 535-541, June 1965.

[8] D. S. Johnson, "Approximation Algorithms for Combinatorial Problems", J. Comp. Sys. Sci., 9, 1974.

[9] E. J. McCluskey, Jr., "Minimization of Boolean Functions", Bell Sys. Tech. Jour., 35, pp. 1417-1444, April 1959.

[10] G. De Micheli, Synthesis and Optimization of Digital Circuits, McGraw-Hill, 1994.

[11] W. V. O. Quine, "On Cores and Prime Implicants of Truth Functions", Am. Math. Monthly, 66, pp. 755-760, 1959.

[12] S. Robinson, R. House, "Gimpel's Reduction Technique Extended to the Covering Problem With Costs", IEEE Trans. Elec. Comp., 16, pp. 509-514, Aug. 1967.

[13] R. L. Rudell, Logic Synthesis for VLSI Design, PhD thesis, UCB/ERL M89/49, 1989.
