


Multi-decomposition: Theorems and Algorithms

Chih-Jen Hsu,

Abstract—Multi-decomposition is the general form of bi-decomposition: it decomposes a logic function into multiple sub-functions driving a given multi-input operator. Compared to bi-decomposition, multi-decomposition can transform a function into more complicated structures and achieves better results in the balancedness and disjointness of sub-functions. This study presents a novel approach to decomposing a function by checking whether there exist functions that make all constraints unsatisfiable (unsat). To simplify the problem, this study presents a reduction theorem, based on the concept of Boolean resolution, that reduces the number of unknown functions. This study also proposes several algorithms that solve this problem with attention to completeness and efficiency. Experimental results demonstrate the efficiency of the proposed algorithms, the effectiveness of the reduction theorem, and the benefit of multi-decomposition over bi-decomposition.

Index Terms—Binary decision diagram (BDD), Craig interpolation, functional decomposition, logic synthesis, satisfiability.

This paper may be copied and disseminated without any permission. If this paper could help anyone's research, please forward it to them. Any comments or feedback are welcome.

I. INTRODUCTION

Functional decomposition is a fundamental technique in logic synthesis [1][2][3] that has been researched and developed through several decades, and it plays an important role in facing urgent problems in synthesis [4][5]. Functional decomposition divides a Boolean function into multiple smaller functions without structural bias. This allows netlist transformations using decomposition to achieve better quality through exploring a larger search space. Bi-decomposition [6][7], one of the most widely used forms, separates a function f(X) into two sub-functions g1, g2 under a given operator h and pre-specified supports Y1, Y2 ⊆ X such that

f(X) = h(g1(Y1), g2(Y2)). (1)

The key to generating smaller sub-functions is to use fewer supports. Thus, properly considering pre-specified supports significantly affects the quality of decomposed results [8][9]. Furthermore, logic functions can be iteratively bi-decomposed into multi-level netlists consisting of 2-input gates to reduce the timing, area, and power of logic circuits.
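To make (1) concrete, decomposability under given supports can be checked by brute force over candidate truth tables. The sketch below is illustrative only (the target f, operator, and supports are made-up examples, and the paper's algorithms avoid this exponential enumeration):

```python
from itertools import product

# Brute-force check of f(X) = h(g1(Y1), g2(Y2)) by enumerating all candidate
# sub-function truth tables over the pre-specified supports Y1, Y2.
# Exponential in |Y1| and |Y2| -- for illustration only.
def bi_decomposable(f, n, h, Y1, Y2):
    def idx(x, Y):
        # minterm index of the projection x|Y
        return sum(x[v] << i for i, v in enumerate(Y))
    xs = list(product([0, 1], repeat=n))
    for g1 in product([0, 1], repeat=2 ** len(Y1)):      # truth table of g1
        for g2 in product([0, 1], repeat=2 ** len(Y2)):  # truth table of g2
            if all(h(g1[idx(x, Y1)], g2[idx(x, Y2)]) == f(*x) for x in xs):
                return True
    return False

f = lambda a, b, c: (a & b) | c                           # example target function
print(bi_decomposable(f, 3, lambda u, v: u | v, (0, 1), (2,)))  # prints True
```

Here the search succeeds with g1 = a AND b and g2 = c; with h = AND the same f and supports would be indecomposable.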

Multi-decomposition, the general form of bi-decomposition, divides a function into multiple smaller ones driving an n-input operator (Fig. 1). This method has aroused the interest

C.-J. Hsu is with the Graduate Institute of Electronics Engineering, National Taiwan University, and the Institute of Information Science, Academia Sinica, Taipei 10617, Taiwan (E-mail: [email protected]).

Acknowledgement: Thanks to my classmate Chi-An Wu for his help with theoretical discussions, to Prof. Jie-Hong Roland Jiang for his teaching and help, and to Prof. Bow-Yaw Wang for his financial support. Without them, I could not have accomplished this paper.

Fig. 1. (a) The target function f with the support X. (b) Bi-decomposition with pre-specified supports Y1, Y2 ⊆ X. (c) Multi-decomposition from f with a 3-input operator.

of many influential studies [6][7][10], but they have only discussed problems limited to the 2-input operator or with additional restrictions. Recursive bi-decomposition seems able to realize multi-decomposition. However, because of the characteristics of the recursive method, the quality of the n-way decomposition is difficult to control in each bi-decomposition. Furthermore, recursive bi-decomposition cannot be used to divide a function with a binate operator such as mux. Bi-decomposition is therefore unable to handle multi-decomposition. For actual synthesis applications, some advanced circuit techniques such as the latch-mux design style [11] require netlists to be transformed into specific architectures. These transformations are beyond the power of bi-decomposition. Instead, the multi-decomposition technique can synthesize functions into desired architectures and achieve better results in balancedness and disjointness.

Previous research has explored the decomposition of logic functions. Pioneering studies were presented by Ashenhurst [12], Curtis [13], and Roth et al. [14], whose algorithms decompose a function into cascaded blocks, but not into the form of multi-decomposition. In the 1990s, [15][16] proposed dedicated theorems and algorithms for and/or/xor decomposition. Their approaches check relations among quantified functions on partition groups of supports. However, algorithms [6][7][8][9] based on these theorems are difficult to extend to multi-decomposition because of the complicated relations between the numerous groups of the supports' partition (2^n − 1 in n-decomposition). Yang et al. [17] utilized the characteristics of the binary decision diagram (BDD) representation to decompose functions under 2-input operators and mux. However, their approach is a structure-dependent algorithm for those specific operators and has difficulty considering the supports of sub-functions. Disjoint support decomposition [10] is an efficient algorithm for circuit optimization, but not for decomposing functions under a desired operator and supports. Bañeres et al. [18] proposed a recursive paradigm for multi-decomposition. However, the recursive approach of solving a Boolean relation is time-consuming for the decomposition, and it does not consider the pre-specified supports. There is currently no unified algorithm to solve this decomposition


problem under the general operator and pre-specified supports. This paper proposes theorems and unified algorithms from a novel perspective for multi-decomposition under pre-specified supports. To the best of our knowledge, this is the first paper to deal with multi-decomposition under pre-specified supports. The core of this work focuses on constraints containing unknown functions. This approach transforms multi-decomposition into constraints and solves the problem by finding functions that make all constraints unsatisfiable (unsat). To decide the existence of functions that make the constraints unsat, this study proposes several algorithms that consider completeness, efficiency, scalability, and practicability. To simplify the problem, this study proposes a reduction theorem that eliminates some unknown functions by concatenating constraints. Experimental results demonstrate the efficiency of the proposed algorithms, the effectiveness of the reduction theorem, and the benefit of multi-decomposition compared to bi-decomposition.

This paper is organized as follows. Section II describes the preliminaries and the problem formulation. Sections III to VII present the methodology. Section III outlines the relationships between the components of the methodology. Section IV proposes complete algorithms. Section V presents theorems of resolution and reduction. Section VI describes the automatic support partition. Section VII proposes an approximate technique to trade off efficiency against accuracy. Section VIII presents the experimental results. Section IX discusses related issues, and Section X concludes the paper.

II. PRELIMINARY

We first introduce notations to facilitate further explanation. The support of a Boolean function is an ordered set of variables denoted by X, Y, Z, V with or without superscripts/subscripts. In particular, we use Y to represent a subset of X, and supports with different superscripts indicate distinct and irrelevant ones. Furthermore, the lowercase xi denotes some variable in X, and the bold lowercase x denotes an assignment of X. In addition to supports, the general notation of a set is a symbol modified by a vector ~c or curly brackets {ci}, where ci denotes an element of ~c ({ci}). For example, Y^1_3 is one element of ~Y^1 and a subset of X^1, but is irrelevant to Y^2_3.

A Boolean function is denoted by f, g, h, ϕ, ψ, and Sv(f) denotes the support size of f. Moreover, we further use the symbol g to represent an unknown function. An unknown function is fixed in its support size but changeable in its function value. A model of an unknown function g is one Boolean function able to realize g, that is, one with the same support size Sv(g) and a defined function value for every assignment. Besides, we use u to denote a signed unknown function, e.g., u ∈ {gi, ¬gi}; substituting a model gi for the unknown function gi in u yields the corresponding concrete function.

A. Problem Formulation of Multi-decomposition

Multi-decomposition is the technique that separates a function f into multiple smaller functions {gi} driving a given operator h under the pre-specified supports of the sub-functions, such that {gi} driving h is equal to f. The following paragraphs introduce the terminology and the definition of decomposability.

Definition 1 (bound-set). The support of a sub-function is a subset of the support of the target function. The set of pre-specified supports of the sub-functions is named the "bound-set" and denoted by ~Y = {Y1, Y2, ..., Ym | Yi ⊆ X}, where X is the support of the target function.

Definition 2 (operator). An operator is a Boolean function h(V) : B^m → B. We call the given operator the decomposition type, type, or operator throughout the paper.

Definition 3 (decomposable). Given a target function f(X), a decomposition type h(V), and a bound-set ~Y where Sv(h) = |~Y| = m, f is decomposable to h under ~Y iff there exist functions ~g = {gi} such that

f(X) = h(g1(Y1), g2(Y2), g3(Y3), ..., gm(Ym)) (2)

where Yi is the support of gi. If there are no such functions that satisfy (2), f is indecomposable to h under ~Y. The RHS of (2) is abbreviated as h({gi(Yi)}).

Note that, without the constraint of a pre-specified bound-set, a target function f can be trivially decomposed to any operator1 into sub-functions composed of {0, 1, f}. Therefore, in the decomposition problem, the pre-specified bound-set is an important metric to guide theoretical reduction and obtain better quality. Moreover, bi-decomposition is a special case of multi-decomposition whose type is a 2-input function. Besides, the operators used in this paper are like the standard cells used in technology mapping, where the size of an operator ranges from 1 to 7. Furthermore, the common method [15][16] for solving bi-decomposition evaluates relations between functions quantified on {Y1/Y2, Y2/Y1, Y1 ∩ Y2}. However, the relations are too complicated to analyze when |~Y| > 2. From a new perspective, we transform multi-decomposition into constraints containing unknown functions, and solve it by finding functions that make all constraints unsat.

B. Transformation into Constraints with Unknown Functions

Definition 4 (cube quantifier). A cube c on the domain V is a conjunction of literals l1 ∧ l2 ∧ ... ∧ ln where var(li) ∈ V. We define the cube quantifier to c on a set of functions ~g with a bound-set ~Y as (3), where |~g| = |~Y| = |V|:

∧_c{gi(Yi)} = ∧_{vi ∈ c} {gi(Yi)} ∧ ∧_{¬vi ∈ c} {¬gi(Yi)}. (3)

Besides, the onset Con and offset Coff of h are the sets of cubes as in (4). Next, we propose Theorem 1 to check decomposability.

∨Con = h(V) and ∨Coff = ¬h(V). (4)

Theorem 1. While Con and Coff are the onset and offset of the operator h, the target f is decomposable to h under ~Y if and only if there exist functions ~g = {gi} such that

∧_c{gi(Yi)} ∧ ¬f(X) = 0, for each c ∈ Con
∧_c{gi(Yi)} ∧ f(X) = 0, for each c ∈ Coff. (5)

1 Except the constant function.


Proof: We prove the equivalence between (2) and (5). First, f = h({gi(Yi)}) is equivalent to (6):

((f(X) ∧ ¬h({gi(Yi)})) = 0) ∧ ((¬f(X) ∧ h({gi(Yi)})) = 0). (6)

As the cubes expand h and ¬h in (4), we can infer (7):

h({gi(Yi)}) = ∨_{c ∈ Con} ∧_c{gi(Yi)}
¬h({gi(Yi)}) = ∨_{c ∈ Coff} ∧_c{gi(Yi)}. (7)

If we substitute (7) for h({gi(Yi)}) and ¬h({gi(Yi)}) in (6) and apply the distributive law, we can infer that f = h({gi(Yi)}) is equivalent to (5). This proves that if functions ~g satisfy (2), then they also satisfy (5), and vice versa.

Example 1. h is mux(v1, v2, v3), whose onset is Con = {v1 ∧ v2, ¬v1 ∧ v3} and offset is Coff = {v1 ∧ ¬v2, ¬v1 ∧ ¬v3}. The cube quantifier to c = v1 ∧ ¬v2 on ~g with ~Y is

∧_{v1 ∧ ¬v2}{gi(Yi)} = g1(Y1) ∧ ¬g2(Y2). (8)

Function f(X) can be decomposed to h under ~Y if and only if there exist g1, g2, and g3 such that

¬f(X) ∧ g1(Y1) ∧ g2(Y2) = 0
¬f(X) ∧ ¬g1(Y1) ∧ g3(Y3) = 0
f(X) ∧ g1(Y1) ∧ ¬g2(Y2) = 0
f(X) ∧ ¬g1(Y1) ∧ ¬g3(Y3) = 0. (9)
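The check of Theorem 1 can be replayed by enumeration. The sketch below verifies the constraints over the minterm cubes of h for a concrete, made-up instance (f = mux(a, b, c) with identity sub-functions); it is for illustration only, not one of the paper's algorithms:

```python
from itertools import product

# Enumerate the minterm cubes of the operator h and check Theorem 1's
# constraints: cube AND ~f unsat for onset cubes, cube AND f unsat for
# offset cubes, with candidate sub-functions gs over supports Ys.
def decomposes(f, n, h, m, gs, Ys):
    proj = lambda x, Y: tuple(x[v] for v in Y)
    for v in product([0, 1], repeat=m):            # each minterm cube of h
        target = 0 if h(*v) else 1                 # onset cube vs ~f, offset vs f
        for x in product([0, 1], repeat=n):
            cube = all(gs[i](*proj(x, Ys[i])) == v[i] for i in range(m))
            if cube and f(*x) == target:
                return False                       # some constraint is sat
    return True

mux = lambda v1, v2, v3: v2 if v1 else v3          # h of Example 1
f = lambda a, b, c: mux(a, b, c)
gs = [lambda a: a, lambda b: b, lambda c: c]       # candidate sub-functions
print(decomposes(f, 3, mux, 3, gs, [(0,), (1,), (2,)]))  # prints True
```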

Therefore, multi-decomposition is transformed into finding functions that make constraints like (9) unsat. A constraint is composed of known and unknown functions, as is the constraint after resolution (described in Section V). The following paragraphs formally introduce the constraint and the core of this paper, the unknown function problem.

C. Constraint and Unknown Function Problem

When solving the decomposition problem, we face the question: are there functions ~g = {gi} that make all constraints in forms like (10) unsat? In (10), ϕi is a known function but gi is an unknown function.

ϕ1(X^1) ∧ ¬ga(Y^1_i) ∧ gb(Y^1_j) ∧ ... ∧ gp(Y^1_m)
ϕ2(X^2) ∧ gc(Y^2_k) ∧ gd(Y^2_l) ∧ ... ∧ ¬gq(Y^2_n)
... (10)

The graphical representation of (10) is in Fig. 2.

Fig. 2. Graphical representation of (10): find ~g to make the constraints unsat.

Definition 5 (constraint). A constraint c under the unknown functions ~g is defined as (11):

constraint c := ϕ(X) ∧ (∧{ui(Yi)})
c-function ϕ(X) := Boolean function
c-variable ui(Yi) := ui ∈ {gj, ¬gj | gj ∈ ~g}, Yi ⊆ X, |Yi| = Sv(ui). (11)

ϕ, called the c-function, is a quantifier-free Boolean function, and ui, called a c-variable, is an unknown function symbol with or without negation. When a constraint contains no c-variables, it is called a closed constraint.

Definition 6 (Unknown Function Problem). Given Σ = 〈~g,~c〉, where ~g is a set of unknown functions and ~c is a set of constraints under ~g, the unknown function problem Σ decides whether there are functions {gi} that, substituted for the unknown functions of {ci}, make all substituted constraints {ci} unsat. The functions {gi} making {ci} unsat are called the fitting model of Σ.

For instance, (5) is a case of the unknown function problem containing Sv(h) unknown functions and |Con| + |Coff| constraints. Safety-property model checking [19] is another well-known case. It contains a single unknown function g and 3 constraints: {i(S) ∧ ¬g(S), g(S) ∧ ¬p(S), g(S) ∧ t(S, S′) ∧ ¬g(S′)}.2 The property holds iff there exists g making all constraints unsat. Section IV presents algorithms for solving the unknown function problem. Prior to that, the next section gives an overview of the whole methodology.
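For intuition, this model-checking instance of the unknown function problem can be decided on a toy state space by enumerating every candidate g; the functions i, p, and t below are made-up examples, not from the paper:

```python
from itertools import product

# Decide the 3-constraint unknown function problem {i & ~g, g & ~p, g & t & ~g'}
# by brute force over all candidate functions g on an n_bits-state space.
def property_holds(i, p, t, n_bits):
    states = list(product([0, 1], repeat=n_bits))
    for bits in product([0, 1], repeat=len(states)):
        g = dict(zip(states, bits))
        if any(i(*s) and not g[s] for s in states):      # i(S) & ~g(S) sat
            continue
        if any(g[s] and not p(*s) for s in states):      # g(S) & ~p(S) sat
            continue
        if any(g[s] and t(*s, *s2) and not g[s2]
               for s in states for s2 in states):        # g & t & ~g' sat
            continue
        return True       # g makes all constraints unsat: a fitting model
    return False

i = lambda a, b: not a and not b                     # initial state is 00
t = lambda a, b, a2, b2: a2 == (a | b) and b2 == b   # next state: a' = a|b, b' = b
p = lambda a, b: not (a and b)                       # property: never reach 11
print(property_holds(i, p, t, 2))                    # prints True
```

Any g returned here is an inductive invariant separating the reachable states from the bad ones.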

Fig. 3. Methodology overview. A shaded box represents information, and an open one represents an algorithm/function.

III. METHODOLOGY OVERVIEW

This section gives an overview of the methodology covering Sections IV to VII, and Fig. 3 shows the topics and relations in the methodology. Overall, automatic support partition uses multi-decomposition iteratively to optimize the bound-set. Multi-decomposition takes the operator, the target function, and the bound-set as its inputs to determine decomposability. Through Theorem 1, multi-decomposition first transforms the inputs into constraints containing unknown functions. Then, the unknown function problem is solved by several algorithms with different advantages, as follows:

1. The All-SAT-based algorithm is only applicable to small-size cases, but it can generate the unsat core for further usage.

2. The quantified Boolean formula (QBF)-based algorithm is efficient in small-size cases.

3. The BDD-based algorithm is efficient in medium-size cases.

4. The SAT-based algorithm only solves problems composed of closed constraints, but it is the most efficient, especially in large-size cases.

2 In model checking, S and S′ are the state and next-state variables, i(S) is the initial function, p(S) is the property, and t(S, S′) is the transition relation.

To simplify the unknown function problem, we propose a reduction theorem to reduce the number of unknown functions. Moreover, decompositions to some operators can be reduced into closed constraints; hence, the SAT-based algorithm can efficiently solve these decompositions. For a reduced unknown function, the missing function in the fitting model can be constructed by Craig interpolation [20].

Furthermore, based on the advantage of the SAT-based algorithm, static learning and conflict learning are proposed to convert the problem into closed constraints. Although this conversion only approximates decomposability, both methods can trade off efficiency against accuracy.

IV. ALGORITHMS FOR UNKNOWN FUNCTION PROBLEM

A. All-SAT Assignment to Relation Solving

For solving unknown functions, we first encode each assignment of each unknown function as a distinct Boolean variable and explore all relations among these variables. Then, we solve the existence of such unknown functions by solving the satisfiability of these relations. If unsat, there are no such functions that make all constraints unsat. If sat, the fitting model can be obtained from the return values of the solver. The return value 1/0 of an encoded variable ν, which corresponds to the assignment y of function gi, indicates gi(y) = 1/0. We define Agi(y) = ν to map an assignment y of unknown function gi to a Boolean variable ν. Besides, 'x|Y' denotes the projection of the assignment x onto the domain Y ⊆ X.

A relation among encoded variables is converted from a satisfying assignment of a c-function. Assume that, for a constraint c = ϕ(X) ∧ ¬g2(Y1) ∧ g4(Y2), x is a satisfying assignment of ϕ(X). To make c unsat, g2(x|Y1) = 0 and g4(x|Y2) = 1 must not hold simultaneously. Hence, the clause (νa ∨ ¬νb) must hold, where νa = Ag2(x|Y1) and νb = Ag4(x|Y2). All relations, in the form of clauses, can be explored from the All-SAT assignments of the c-functions in all constraints.

Overall, we propose Algorithm 13 using All-SAT for solving the unknown function problem. Line 2 creates and allocates distinct variables for all assignments of all unknown functions. Line 5 solves the All-SAT assignments of the c-function ϕ(X), and Lines 7 to 10 convert an assignment into a clause. Among them, Lines 9 and 10 convert a projected assignment into a literal with or without negation depending on the sign of the c-variable ui.

Proposition 1. Algorithm 1 is a sound and complete algorithm for solving the unknown function problem Σ = 〈~g,~c〉.

Proof: (⇒) If the algorithm returns true, Σ is true: the return values from the solver violate no clause, that is, the fitting model violates no constraint. (⇐) If Σ is true, the algorithm returns true. If Σ is true, there exists a fitting model {gi}, which can be transformed into values of the encoded variables. If some values violated some clause, the fitting model would make the corresponding constraint sat, so {gi} would not be a fitting model, a contradiction. Hence, these values violate no clause, which means the algorithm returns true.

3 In the algorithms, '=' is the equality operator and '←' is the assignment operator.

Algorithm 1: DecompAllSAT(Σ)
Input: Σ = 〈~g,~c〉
Output: True or False
1  s ← Init SAT solver
2  {Agi} ← variable encoding of {gi}
3  foreach ci ∈ ~c do
4    ci = ϕ(X) ∧ (∧{ui(Yi)})
5    foreach x in ϕ(x) = 1 do
6      cls ← ⊥
7      foreach ui(Yi) ∈ ~u do
8        yi ← project x from X to Yi
9        if gj = ui then cls ← cls ∨ ¬Agj(yi)
10       if ¬gj = ui then cls ← cls ∨ Agj(yi)
11     s.addClause(cls)
12 ret ← s.solve()
13 post processing
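A minimal executable sketch of this encoding, with brute-force enumeration standing in for the SAT solver (the OR-decomposition instance and the clause representation are made-up illustrations):

```python
from itertools import product

# Algorithm-1-style encoding: each assignment y of each unknown function g
# becomes a variable A_g(y); every satisfying assignment x of a c-function
# yields one clause. Here clauses are triples (g, y, want) meaning "g(y) = want".
def encode(constraints, supports, nX):
    clauses = []
    for phi, cvars in constraints:       # cvars: list of (g_index, positive?)
        for x in product([0, 1], repeat=nX):
            if phi(*x):
                clauses.append([(g, tuple(x[v] for v in supports[g]), not pos)
                                for g, pos in cvars])
    return clauses

def solve(clauses, supports):
    # stand-in for the SAT call: try every truth table of every unknown function
    doms = [list(product([0, 1], repeat=2 ** len(Y))) for Y in supports]
    for tabs in product(*doms):
        val = lambda g, y: tabs[g][sum(b << i for i, b in enumerate(y))]
        if all(any(val(g, y) == want for g, y, want in cl) for cl in clauses):
            return tabs                  # fitting model found
    return None

# Theorem-1 constraints for OR-decomposing f = (a AND b) OR c, minterm cubes:
f = lambda a, b, c: (a & b) | c
notf = lambda a, b, c: 1 - f(a, b, c)
constraints = [(notf, [(0, True), (1, True)]),    # ~f & g1 & g2
               (notf, [(0, True), (1, False)]),   # ~f & g1 & ~g2
               (notf, [(0, False), (1, True)]),   # ~f & ~g1 & g2
               (f,    [(0, False), (1, False)])]  # f & ~g1 & ~g2
supports = [(0, 1), (2,)]
model = solve(encode(constraints, supports, 3), supports)
print(model is not None)                          # prints True
```

Because the minterm cubes cover all value combinations of (g1, g2), any returned model reconstructs f as OR(g1, g2).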

B. Mux-remodeling for 2-Solver QBF Solving

For solving unknown functions, a common method remodels the problem into a QBF by substituting muxes for the unknown functions and solves it with a QBF solver. Instead of using a QBF solver, we solve the remodeled formula more efficiently by using 2 alternating SAT solvers.

Any function can be implemented by a mux. A mux denoted by muxn(Y, Z) is composed of n select-signals Y and 2^n data-signals Z. Every assignment y of Y forwards the value on a distinct data-signal zi ∈ Z to its output. On the other hand, an n-input function has 2^n minterms deciding its functionality. Hence, any n-input function can be implemented by a muxn with some z assigned. To find an unknown function, we can replace it by a mux and find an assignment z that satisfies the converted formula.
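This remodeling idea can be sketched in a few lines: fixing the data-signals z of a mux_n to a truth table realizes any n-input function (the XOR example is made up):

```python
from itertools import product

# mux_n: the select-signals y index into the 2^n data-signals z.
def mux_n(y, z):
    return z[sum(b << i for i, b in enumerate(y))]

g = lambda a, b: a ^ b                       # any 2-input function to realize
idx = lambda y: sum(b << i for i, b in enumerate(y))
z = [0] * 4
for y in product([0, 1], repeat=2):
    z[idx(y)] = g(*y)                        # data-signals = truth table of g
print(all(mux_n(y, z) == g(*y) for y in product([0, 1], repeat=2)))  # prints True
```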

The problem Σ = 〈~g,~c〉 can be remodeled into a QBF by replacing the unknown functions {gi} with muxes. First, for every gi ∈ ~g, we create the domain Zi = B^n (where n = Sv(gi)) as the common space. Then, for the constraint cj, we replace each c-variable gi(Y^j) of cj by a muxn(Y^j, Zi) whose select-signals and data-signals connect to Y^j and Zi respectively. Hence, constraint cj is remodeled into a new constraint c′j(X^j, ~Z) with additional supports ~Z but without any unknown function. The problem Σ is therefore converted into Σ′ = 〈~Z, ~c′〉, which checks whether there is an assignment ~z of ~Z making all remodeled constraints {c′i} unsat. The statement "all constraints unsat" can be rewritten as the Boolean formula ψ(~Z, ~X) = ∧ ¬c′i(~Z, X^i). Hence, the problem can be remodeled as the QBF (12):

Σ′ = ∃~z ∀~x ψ(~z, ~x). (12)

Example 2. For the case of Example 1, muxes remodel the 4 constraints in (9) into ψ(~Z, ~X) of (12), as shown in Fig. 4. Each constraint replaces its 2 unknown functions by 2 muxes. Then, the 4 remodeled constraints are connected together through the Zi, which are the new supports representing the minterms of the unknown functions.

Fig. 4. Remodeling Example 1 into QBF.


To solve (12) with state-of-the-art solvers, ψ needs to be further transformed into conjunctive normal form (CNF) cnfψ with additional internal variables in. Therefore, (12) needs to be transformed into the 2-alternation QBF (13a) or the 1-alternation QBF (13b) proposed by [9].

∃~z ∀~x ∃in cnfψ(~z, ~x, in) (13a)
¬(∀~z ∃~x ∃in cnf¬ψ(~z, ~x, in)) (13b)

Instead of using QBF solvers, we propose a more efficient method, Algorithm 2, that solves (12) using 2 alternating SAT solvers.

Algorithm 24 uses two SAT solvers, ssol and sblk, interleavingly to solve the inner and outer quantifiers of (12). Solver ssol, initialized at Line 2, solves the negated inner case at Line 9 with an assignment z as (14):

∃x ¬ψ(z, x). (14)

Solver sblk adds clauses at Line 11 and solves the outer case at Line 5 as (15), where xi is the witness from solving (14):

∃z ∧_{xi} ψ(z, xi). (15)

In the while loop, if an assignment z found by sblk makes ssol return false, ∀x ψ(z, x) is verified; hence, (12) is verified. If z does not make ssol return false, ssol produces a witness xi. This witness further conjoins the function ψ(Z, xi) onto sblk to avoid the same z being solved again. When sblk returns false after iteratively conjoining ψ(Z, xi), there is no z verifying ∀x ψ(z, x); hence, (12) is falsified.

Algorithm 2: QBF-in-CBL(ψ)
/* check ∃z ∀x ψ(z, x) */
Input: ψ(Z, X) // quantifier-free Boolean formula
Output: True or False
1  sblk, ssol ← Init SAT solver
2  ssol ← CNF transform ¬ψ(Z, X)
3  sblk ← Init domain Z
4  while ⊤ do
5    if sblk.solve() then
6      z ← model Z of sblk
7    else
8      return False
9    if ssol.solve(z) then
10     xi ← model X of ssol
11     sblk ← sblk ∧ cnfψ(Z, xi)
12   else
13     return True

This algorithm terminates because of the finite number of assignments, 2^|Z|, in Z. Fortunately, the number of while iterations is much smaller than 2^|Z| in practice. Hence, for solving (12), Algorithm 2 is more efficient than algorithms using state-of-the-art solvers. The experimental results show the efficiency of this algorithm.
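The interplay of the two solvers can be sketched with enumeration standing in for the SAT calls; the toy ψ below is a made-up example:

```python
from itertools import product

# Two-solver refinement loop of Algorithm 2: the blocking side proposes z
# consistent with all learned witnesses; the solving side searches for a
# counterexample x with NOT psi(z, x). Enumeration replaces the SAT solvers.
def exists_forall(psi, nZ, nX):
    learned = []                                         # witnesses x_i so far
    while True:
        z = next((z for z in product([0, 1], repeat=nZ)  # outer step (s_blk)
                  if all(psi(z, x) for x in learned)), None)
        if z is None:
            return None                                  # (12) is falsified
        x = next((x for x in product([0, 1], repeat=nX)  # inner step (s_sol)
                  if not psi(z, x)), None)
        if x is None:
            return z                                     # forall x psi(z, x) holds
        learned.append(x)                                # conjoin psi(Z, x_i)

# toy psi: z encodes the truth table of a 1-input function that must equal NOT x
psi = lambda z, x: z[x[0]] == (1 - x[0])
print(exists_forall(psi, 2, 1))                          # prints (1, 0)
```

Each learned witness plays the role of the clause cnfψ(Z, xi) conjoined at Line 11 of Algorithm 2.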

C. Subset-Computation in Recursive Solving

For solving the unknown function problem, we use a method5 that iteratively enlarges subsets of the onsets and offsets of the unknown functions until it finds a fitting model or encounters a conflict. A conflict is the situation in which the subsets of the onset and offset of some unknown function g intersect; this means that g cannot exist. To facilitate the following description, g⊥ and (¬g)⊥ denote the subsets of the onset and offset of g respectively6. For a c-variable u = ¬g, (¬u)⊥ denotes g⊥ by double negation elimination.

4 Scalar notation replaces vector notation in this algorithm for a clear layout.
5 This method is inspired by symbolic model checking [21].

This method is a recursive procedure with continuous subset-computation. Subset-computation is the step that enlarges subsets of functions, and the fixpoint is the situation in which no subset can be enlarged by subset-computation. When no conclusion is drawn at a fixpoint, the method performs case splitting to continue further subset-computation. Hence, the overall algorithm takes the form of a recursive procedure. We first introduce the various topics in this method and then describe the whole algorithm.

The problem with n unknown functions has 2n subsets of functions {g⊥i, (¬gi)⊥ | gi ∈ ~g}. Proposition 2 describes the step that enlarges the subset of some unknown function.

Proposition 2 (Subset-Computation). Given a constraint c =ϕ(X) ∧

∧{ui(Yi)} and the subsets of the onset and offset

of unknown functions {g⊥i , (¬gi)⊥}, we claim that the u′icomputed in (16) is also the subset of ¬ui where i ∈ [1, |~u|].

u′i(Yi) = ∃X/Yiϕ(X) ∧

∧j 6=i

{u⊥j (Yj)}. (16)

The graph representation of (16) is shown in Fig. 5(a).

Fig. 5. (a) Graphical representation of (16). (b) Graphical representation of (18). The supports Yi are connected together to Y′.

From Proposition 2, u′i ∨ (¬ui)⊥ is also a subset of ¬ui, and one larger than (¬ui)⊥. We can continuously apply subset-computation to all constraints to enlarge all subsets. In practice, subset-computation is easy to perform with BDDs (see footnote 7) thanks to their compact and canonical representation. Hence, we use BDDs to perform the operations and build this method.
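As an illustration of Proposition 2, the subset-computation of (16) can be mimicked with explicit sets of assignments standing in for BDDs. Everything here (`subset_computation`, the list-of-dicts model representation) is a hypothetical sketch, not the BDD-based implementation the paper uses.

```python
def subset_computation(phi_models, supports, u_subsets, i):
    """Eq. (16) over explicit truth-table sets (a stand-in for BDDs).

    phi_models: list of dicts var -> 0/1, the satisfying assignments of ϕ(X)
    supports[j]: tuple of variable names forming Y_j
    u_subsets[j]: set of Y_j-value tuples known to lie in the onset of u_j
    Returns the set of Y_i-value tuples proved to lie in the onset of ¬u_i.
    """
    result = set()
    for x in phi_models:
        # keep x if, for every j != i, its Y_j-projection is already known
        # to be in the onset subset of u_j (the conjunction in (16))
        if all(tuple(x[v] for v in supports[j]) in u_subsets[j]
               for j in range(len(supports)) if j != i):
            # ∃X/Y_i: project the surviving assignment onto Y_i
            result.add(tuple(x[v] for v in supports[i]))
    return result
```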

The fixpoint is the situation in which no subset can be enlarged by subset-computation. Termination is the situation in which the problem is verified or falsified. The problem is falsified when g⊥i and (¬gi)⊥ intersect for some gi ∈ ~g (i.e., gi does not exist). The problem is verified when a fitting model is found among the combinations of subsets in (17), in which each gi is set to the subset or the superset of gi.

    {gi ← g⊥i or ¬((¬gi)⊥) | gi ∈ ~g}.    (17)

We check whether some combination in (17) is a fitting model by directly checking whether it makes all constraints UNSAT. The termination conditions are checked whenever the fixpoint is reached. However, there are cases in which the fixpoint occurs but no conclusion can be drawn (see footnote 8). We need to

Footnote 6: The subset of the offset of g equals the subset of the onset of ¬g, so we use the notation (¬g)⊥ to denote it.

Footnote 7: The BDD techniques used in (16) are variable shift, conjunction, and existential quantification. They are all basic operations in common BDD packages.

Footnote 8: In safety model checking, the fixpoint implies that a fitting model exists, since there is only one constraint with more than one c-variable. In multi-decomposition, however, there are often multiple constraints with more than one c-variable. Hence, the fixpoint is not the termination.


break through this situation so that subset-computation can be applied further.

The strategy for breaking through the fixpoint is to force some subset to be enlarged. We select a pair of subsets g⊥i, (¬gi)⊥ and a minterm m ⊆ ¬(g⊥i) ∧ ¬((¬gi)⊥). If one of the enlarged cases, {g⊥i ← (g⊥i ∨ m) or (¬gi)⊥ ← ((¬gi)⊥ ∨ m)}, induces a fitting model, the problem is verified. If both cases lead to conflicts, the problem is falsified. Therefore, the algorithm with case splitting forms a recursive procedure. However, in the recursive procedure, enlarging a subset by joining one minterm at a time is time-consuming. Alternatively, for efficiency, we enlarge the subset with a cube instead of a minterm and refine the guessed cube whenever it leads to a conflict.

The initial subsets of the unknown functions are not only empty sets but can also be sets generated from constraints. If a constraint c = ϕ(X) ∧ ∧{u(Yi)} has the same c-variable u throughout, we can generate a subset of ¬u by (18), as shown in Fig. 5(b).

    u′(Y′) = ∃X [ ϕ(X) ∧ ∧{(Yi ≡ Y′)} ].    (18)

From the topics discussed above, we propose Algorithm 3, a recursive procedure with repeated subset-computation, for solving the unknown function problem. InitSubsets initializes the subsets of the onset and offset of the unknown functions. BddTvl verifies the unknown function problem Σ = 〈~g,~c〉 under the constraint of the given subsets of functions G = {g⊥i, (¬gi)⊥}. GuessTvl is the fixpoint handler that breaks through the fixpoint by splitting cases. BddTvl and GuessTvl call each other, so the algorithm forms a recursive procedure.

In BddTvl, Lines 13 to 18 perform all possible enlargements on all c-variables of all constraints. At Line 19, the fixpoint is detected by checking whether G and G′ are the same BDD nodes. Terminations are checked at Lines 21 and 23. If the fixpoint occurs without termination, BddTvl calls GuessTvl to perform case splitting.

GuessTvl breaks through the fixpoint by forcing some subset to be enlarged. It first selects a c-variable u with a cube q and then tests whether u⊥ ← u⊥ ∨ q induces a fitting model. If the test returns a conflict, it refines q and performs BddTvl again. Once q has been refined to a minterm, it tests both cases u⊥ ∨ q and (¬u)⊥ ∨ q. Hence, this algorithm is complete.

A common limitation of BDD applications is the cost of constructing functions. The limitation of this algorithm, however, is not only constructing the BDDs of the c-functions in each constraint; it may also encounter a long decision tree in the recursive procedure. To increase practicality, the algorithm can directly return false at a long decision to produce a conservative answer. We thereby sacrifice completeness to enhance practicality. In automatic support partition, this yields a conservative bound-set, but more efficiently.

D. SAT Solving in Problems without Unknown Functions

If the unknown function problem Σ = 〈~g,~c〉 has no unknown functions (~g = {}), we can solve it directly by checking the satisfiability of all constraints. If all constraints are UNSAT, the problem is verified; if some constraint is SAT, the problem is falsified. This method is the most powerful but is applicable only to closed constraints.

Algorithm 3: DecompBDD(Σ)
Input: Σ = 〈~g,~c〉
Output: True or False

 1  G = {g⊥i, (¬gi)⊥} ← InitSubsets(~c)
 2  return BddTvl(Σ, G)

 3  Function InitSubsets(~c)
 4      G = {g⊥i ← 0, (¬gi)⊥ ← 0 | gi ∈ ~g}
 5      foreach c in ~c do
 6          if all c-variables in c are the same u then
 7              c = ϕ(X) ∧ ∧{u(Yi)}
 8              u′(Y′) ← ∃X (ϕ(X) ∧ ∧{(Yi ≡ Y′)})
 9              (¬u)⊥ ← (¬u)⊥ ∨ u′
10      return G

11  Function BddTvl(Σ = 〈~g,~c〉, G = {g⊥i, (¬gi)⊥})
12      repeat
13          G′ ← G
14          foreach ci in ~c do
15              ci = ϕ(X) ∧ ∧{ui(Yi)}
16              foreach i in [1, |{ui}|] do
17                  u′(Yi) ← ∃X/Yi (ϕ(X) ∧ ∧_{j≠i} {u⊥j(Yj)})
18                  (¬ui)⊥ ← (¬ui)⊥ ∨ u′
19      until G = G′            // fixpoint occurs
20
21      if g⊥i ∧ (¬gi)⊥ ≠ 0 for some gi then
22          return False
23      if subsets of functions make ~c UNSAT then
24          return True
25      return GuessTvl(Σ, G)

26  Function GuessTvl(Σ = 〈~g,~c〉, G = {g⊥i, (¬gi)⊥})
        /* fixpoint handler, decision making */
27      u ← randomly select from {gi, ¬gi | gi ∈ ~g}
28      q ← random cube in ¬(u⊥) ∧ ¬((¬u)⊥)
29      inv ← False
30      while True do
31          if ¬inv then u⊥ ← u⊥ ∨ q
32          else (¬u)⊥ ← (¬u)⊥ ∨ q
33          ret ← BddTvl(Σ, G)
34          if ret then return True
35          if inv then             // u ∉ u⊥ ∧ u ∉ (¬u)⊥
36              return False
37          restore u⊥, (¬u)⊥      // conflict occurs
38          if q is not a minterm then  // refine q
39              q ← non-empty subset of q
40          else
41              inv ← True

V. CONSTRAINTS REDUCTION AND RESOLUTION

Resolution and reduction on constraints can reduce the complexity of multi-decomposition; the concept is similar to the resolution of clauses. Resolution of clauses in a SAT problem P is an inference rule that produces a new clause while maintaining functionality. For two clauses µ1 = (ν ∨ l1 ∨ l2 ... ∨ ln) and µ2 = (¬ν ∨ l3 ∨ l4 ... ∨ lm) in P with a complementary literal, e.g., ν ∈ µ1 and ¬ν ∈ µ2, the clauses µ1 and µ2 can resolve a new clause µ′ = (µ1/ν) ∨ (µ2/¬ν) that can be added to P without changing its functionality. Furthermore, assume that ~µν, ~µ¬ν ⊆ P are the sets of all clauses containing the literals ν and ¬ν respectively, and ~µ′ is the set of all clauses that can be resolved from ~µν and ~µ¬ν. Adding the clauses ~µ′ to P while discarding ~µν and ~µ¬ν yields a problem logically equivalent to P but without the variable ν. Similarly, we propose a resolution rule and a reduction theorem on constraints. Based on them, the complexity of multi-decomposition can be significantly reduced.
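The clause-level rule described above can be written down directly. The following sketch uses sets of signed integer literals (DIMACS-style, with -v encoding ¬v); `resolve` and `eliminate` are hypothetical helper names introduced only for illustration.

```python
def resolve(c1, c2, v):
    """Resolve two clauses (sets of integer literals) on variable v,
    producing (c1/v) ∨ (c2/¬v)."""
    assert v in c1 and -v in c2, "clauses must hold complementary literals"
    return (c1 - {v}) | (c2 - {-v})

def eliminate(clauses, v):
    """Eliminate variable v from a clause set: keep the clauses not
    mentioning v, and replace the rest by all resolvents on v. This is
    the clause-level analogue of the constraint reduction below."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    return rest + [resolve(p, n, v) for p in pos for n in neg]
```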


In this section, we first describe resolution on constraints and reduction on the unknown function problem. Next, we show methods to resolve derivative issues, including the order of reduction and the construction of the missing fitting model.

A. Constraints Resolution and Reduction

Theorem 2 (Resolution on Constraints). Given the unknown function problem Σ = 〈~g,~c〉 and constraints c1, c2 ∈ ~c where

    c1 = ϕ1(X^1) ∧ u1(Y^1_1) ∧ u2(Y^1_2) ∧ ... gi(Y^1_x) ... un(Y^1_n)
    c2 = ϕ2(X^2) ∧ u3(Y^2_3) ∧ u4(Y^2_4) ∧ ... ¬gi(Y^2_y) ... um(Y^2_m),

if c1 and c2 have an inverse c-variable, e.g., gi ∈ c1 and ¬gi ∈ c2, then c1 and c2 can resolve a new constraint c′

    c′ = ϕ′(X^1 ∪ X^2) ∧ u1(Y^1_1) ∧ u2(Y^1_2) ∧ ... ∧ un(Y^1_n)
                       ∧ u3(Y^2_3) ∧ u4(Y^2_4) ∧ ... ∧ um(Y^2_m)
    where ϕ′(X^1 ∪ X^2) = ϕ1(X^1) ∧ ϕ2(X^2) ∧ (Y^1_x ≡ Y^2_y).    (19)

We claim that the functionality of Σ holds even when the resolved constraint c′ is added to Σ. Specifically, c′ is the constraint that combines c1 and c2 and eliminates gi and ¬gi. Its support is X^1 ∪ X^2, with Y^1_x ∈ X^1 and Y^2_y ∈ X^2 bound together.

Proof: From Proposition 1, the satisfiability of the clauses converted by Algorithm 1 equals the decomposability of the unknown function problem Σ. We prove that the clauses generated from c′ are exactly the clauses resolved from the ones generated from c1 and c2.

(⇒) Assume that an assignment x′ satisfies c′ and generates the clause µ′; the assignments x′|X^1 and x′|X^2 then satisfy c1 and c2 and generate the clauses µ1 and µ2 respectively. In addition, µ1 and µ2 have a complementary variable, e.g., ¬A_gi(x′|Y^1_x) ∈ µ1 and A_gi(x′|Y^2_y) ∈ µ2, since Y^1_x ≡ Y^2_y. Hence, the clause µ′ generated from c′ can be resolved from µ1 and µ2, which are generated from c1 and c2. (The other variables A_ui(Y^j_i) are the same in µ1 and µ2.)

(⇐) Conversely, if the clauses µ1 and µ2 generated from c1 and c2 resolve a clause µ′, then µ′ can also be generated from c′. This can be proved in the same way. Hence, adding the resolved constraint to the problem Σ changes neither its functionality nor its fitting models.

Fig. 6 illustrates resolution performed on two constraints. Both ϕ1 and ϕ2 are quantifier-free Boolean functions. The c-function of the generated constraint c′ is also a quantifier-free Boolean function: the concatenation of ϕ1(X^1) and ϕ2(X^2) with Y^1_x ≡ Y^2_y.

Fig. 6. Resolution of constraints as in Theorem 2. The upper two constraints resolve the bottom constraint, eliminating the c-variables gi and ¬gi. The c-function of the new constraint is composed of ϕ1 and ϕ2 with Y^1_x ≡ Y^2_y.

Corollary 1 (Reduction on the Unknown Function Problem). For an unknown function gi in Σ = 〈~c,~g〉, assume that ~cgi (~c¬gi) ⊆ ~c is the set of constraints that have one gi (¬gi) among their c-variables, and ~c′ is the set of all constraints resolved from ~cgi and ~c¬gi. If the other constraints ~c/(~cgi ∪ ~c¬gi) do not contain the c-variable gi, we claim that the reduced problem Σ′, obtained by adding ~c′ and discarding ~cgi and ~c¬gi from Σ, has the same functionality and fitting model as Σ but without using gi; that is, Σ′ = 〈~g/gi, (~c ∪ ~c′)/(~cgi ∪ ~c¬gi)〉 ↔ Σ = 〈~g,~c〉.

Based on Corollary 1, the unknown function gi can be eliminated to accelerate the solving process in multi-decomposition. Although reduction may increase the number of constraints and resolution enlarges their c-functions, fewer unknown functions accelerate the solving process, as shown in our experimental results. Example 3 illustrates the reduction and resolution on constraints.
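The bookkeeping behind Corollary 1 can be sketched over an abstract constraint representation. Below, a constraint is a triple (tag, positive unknowns, negated unknowns); the c-functions themselves are abstracted into the tag, which only records which constraints were concatenated. `reduce_on` and this representation are illustrative assumptions, not the paper's data structures.

```python
def reduce_on(constraints, g):
    """Corollary 1, sketched: eliminate unknown function g when every
    constraint mentions it at most once. A constraint is a triple
    (tag, pos, neg), where pos/neg are sets of unknown-function names."""
    with_g  = [c for c in constraints if g in c[1]]   # ~c_g
    with_ng = [c for c in constraints if g in c[2]]   # ~c_¬g
    rest    = [c for c in constraints
               if g not in c[1] and g not in c[2]]
    # resolve every pair from ~c_g × ~c_¬g, dropping g from both sides
    resolved = [(t1 + "*" + t2, (p1 | p2) - {g}, (n1 | n2) - {g})
                for (t1, p1, n1) in with_g
                for (t2, p2, n2) in with_ng]
    return rest + resolved
```

Running it on the constraint skeleton of (20) with g3 reproduces the shape of (21): c1 survives, and c2 resolves with c3 and c4.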

Example 3. Given a target function f(X), a bound-set ~Y, and the decomposition type h(V) = (v1 ∧ v2) ∨ v3 where Con = {v1 ∧ v2, v3} and Coff = {¬v1 ∧ ¬v3, ¬v2 ∧ ¬v3}, solving h-decomposition is finding whether there are functions {gi} that make all constraints in (20) UNSAT.

    c1 = ¬f(X^1) ∧ g1(Y^1_1) ∧ g2(Y^1_2)
    c2 = ¬f(X^2) ∧ g3(Y^2_3)
    c3 = f(X^3) ∧ ¬g1(Y^3_1) ∧ ¬g3(Y^3_3)
    c4 = f(X^4) ∧ ¬g2(Y^4_2) ∧ ¬g3(Y^4_3).    (20)

In (20), ~cg3 = {c2}, ~c¬g3 = {c3, c4}, and the remaining constraint {c1} does not contain g3. Therefore, it satisfies Corollary 1, and (20) can be reduced to (21), where {c5, c6} is resolved from {c3, c4} and {c2}.

    c1 = ¬f(X^1) ∧ g1(Y^1_1) ∧ g2(Y^1_2)
    c5 = ¬f(X^2) ∧ f(X^3) ∧ (Y^2_3 ≡ Y^3_3) ∧ ¬g1(Y^3_1)
    c6 = ¬f(X^2) ∧ f(X^4) ∧ (Y^2_3 ≡ Y^4_3) ∧ ¬g2(Y^4_2)    (21)

Furthermore, after eliminating the unknown functions g1 and g2, h-decomposition is reduced to (22) without any unknown function.

    c7 = ¬f(X^2) ∧ f(X^3) ∧ (Y^2_3 ≡ Y^3_3)
       ∧ ¬f(X′^2) ∧ f(X^4) ∧ (Y′^2_3 ≡ Y^4_3)
       ∧ ¬f(X^1) ∧ (Y^1_1 ≡ Y^3_1) ∧ (Y^1_2 ≡ Y^4_2)    (22)

Hence, checking decomposability is reduced to checking the satisfiability of the closed constraint c7. Fig. 7 shows the reduction process.

Fig. 7. The process of resolution and reduction in Example 3. After reducing on the unknown functions g3, g2, and g1, the problem is transformed into a closed constraint.

As observed in Example 3, the structure of the circuit expansion depends only on which decomposition type is used. For instance, in c7, the number of target duplications and the topology of the connections are relevant only to the corresponding operator h = (v1 ∧ v2) ∨ v3, and irrelevant to the functionality of f and the combination of ~Y. Owing to this characteristic, the structure of the constraints can be stored according to the decomposition type in use. Then, whenever h-decomposition is performed, the stored structure can be combined with the given target function and bound-set to construct the corresponding constraints.

Besides, although reduction decreases complexity, functions are missing from the fitting model. For instance, in Example 3, if the problem in (21) is verified, we obtain the fitting model g1, g2; however, we cannot obtain the function g3 from the algorithms' answer. In subsection V-C, we propose a method to construct the missing function via Craig interpolation. Next, we present another issue in reduction.

B. Termination and Reduction Order

When more than one gi appears in a constraint, Corollary 1 cannot be applied to eliminate gi. Furthermore, if no unknown function can be eliminated, the reduction process terminates. Thus, the final reduced problem may still involve some unknown functions.

Example 4. From the constraints in Example 1, we can eliminate the unknown functions g2 and g3 to form the constraints in (23). However, g1 cannot be reduced further because each constraint contains two occurrences of g1. Hence, the reduction process terminates. mux-decomposition is finally reduced to Σ = 〈g1, {c1, c2}〉, which cannot be solved by the SAT-based algorithm in IV-D.

    c1 = ¬f(X^1) ∧ f(X^3) ∧ (Y^1_2 ≡ Y^3_2) ∧ g1(Y^1_1) ∧ g1(Y^3_1)
    c2 = ¬f(X^2) ∧ f(X^4) ∧ (Y^2_3 ≡ Y^4_3) ∧ ¬g1(Y^2_1) ∧ ¬g1(Y^4_1).    (23)

If we instead perform reduction on the problem in Example 1 in the order "g1, g2", we obtain the constraints shown in Fig. 8. Comparing (23) and Fig. 8, mux-decomposition is reduced to different structures under different reduction orders, and different structures lead to different complexities. Fortunately, a good structure and its reduction order can be precomputed and stored. Therefore, we can perform multi-decomposition efficiently from the stored structure and the best reduction order.

Fig. 8. Constraints reduced by the unknown functions g1 and g2 from the case in Example 1. There are 4 constraints containing 16 duplications.

C. Interpolate the Missing Fitting Model

For the missing function in the fitting model, we use interpolation to retrieve it from the UNSAT core of the resolved constraints. Before the explanation, we first describe the situation and introduce the symbols. Given the problem Σ = 〈~g,~c〉 and an unknown function gx ∈ ~g, suppose ~c can be divided into ~cgx, ~c¬gx, and ~cn, where ~cgx (~c¬gx) is the set of constraints containing one gx (¬gx) and ~cn is the set of constraints without gx. The problem can then be reduced to Σr = 〈~g/gx, ~cr〉, where ~cr = ~cn ∪ ~cv and ~cv is the set of constraints resolved from ~cgx and ~c¬gx. If the reduced problem is true, we obtain the fitting model {ḡi | i ≠ x} that makes the constraints ~cr of Σr UNSAT. However, for the problem Σ, the function gx that makes the constraints in ~cgx and ~c¬gx UNSAT is missing, even though we are convinced of its existence. To facilitate the explanation, if a function ψ(X) = ϕ(X) ∧ (∧ ui(Yi)) contains c-variables ui, then ψ̄(X) denotes the function obtained by substituting the fitting model ḡi for the gi of each ui, i.e., ϕ(X) ∧ (∧ ūi(Yi)). In this section, we first describe the simplified case, |~cgx| = |~c¬gx| = 1, to explain the idea and then extend it to the general case.

When ~cgx = {c1} and ~c¬gx = {c2}, the set ~cv contains only one constraint c′, resolved from c1 and c2. Hence, we can formulate c1, c2, and c′ as (24) (see footnote 9).

    c1(X^1) = ψb(X^1) ∧ gx(Y^1_b)
    c2(X^2) = ψa(X^2) ∧ ¬gx(Y^2_a)
    c′(X^1 ∪ X^2) = ψb(X^1) ∧ ψa(X^2) ∧ (Y^1_b ≡ Y^2_a)    (24)

Since the fitting model {ḡi | i ≠ x} makes the constraints of Σr UNSAT, c̄′ is also UNSAT. We derive the interpolant from the UNSAT core of c̄′ by letting ψ̄b(X^1) be B and ψ̄a(X^2) ∧ (Y^1_b ≡ Y^2_a) be A. The generated interpolant is exactly the missing function gx, because B ∧ I and A ∧ ¬I are UNSAT, i.e., (25) is UNSAT.

    c̄1(X^1) = ψ̄b(X^1) ∧ I(Y^1_b)
    c̄2(X^2) = ψ̄a(X^2) ∧ ¬I(Y^2_a).    (25)

Therefore, the fitting model {ḡi | i ≠ x} of Σr together with ḡx = I makes the constraints ~c = {~cn, c1, c2} UNSAT; that is, it is the fitting model of the original problem Σ. As shown in Fig. 9(a), the interpolant is generated from the junction of the two concatenated constraints after substituting the fitting model of Σr.

From the viewpoint of Boolean space, ∃X^1/Y^1_b ψ̄b(X^1) and ∃X^2/Y^2_a ψ̄a(X^2) are subsets of the offset and onset of gx, according to Proposition 2. Hence, any function lying in the space between the onset and the offset is a valid gx, as shown in Fig. 9(b).

Fig. 9. (a) The resolved constraint c̄′ after substituting the fitting model. The interpolant is generated from the junction of A and B. (b) The viewpoint in Boolean space: the interpolant is larger than A but smaller than ¬B.

For the general case, |~cgx| ≥ 1 and |~c¬gx| ≥ 1, we apply a similar method to retrieve the missing function, starting the explanation from the Boolean-space viewpoint. For a constraint ci ∈ ~cgx, we formulate it as ci = ψci(X^i) ∧ gx(Y^i_x). Each function ∃X^i/Y^i_x ψ̄ci(X^i) from ci ∈ ~cgx is a subset of the offset of gx, and the union of such subsets is again a subset. Therefore, after considering all possible subsets (constraints), any function between the offset ∨_{ci∈~cgx} ∃X^i/Y^i_x ψ̄ci(X^i) and the onset ∨_{cj∈~c¬gx} ∃X^j/Y^j_x ψ̄cj(X^j) is a valid implementation of the missing function gx, as shown in Fig. 10.

From the algebraic viewpoint, we let A and B be as in (26) for

Footnote 9: ψb(X^1) = ϕ1(X^1) ∧ (∧{ui(Y^1_i)}), ψa(X^2) = ϕ2(X^2) ∧ (∧{uj(Y^2_j)}), where ui, uj ∈ {gi, ¬gi | i ≠ x}.


Fig. 10. Each open circle on the left side represents ∃X^j/Y^j_x ψ̄cj(X^j) where cj ∈ ~c¬gx, a subset of the onset. Each shaded circle on the right side represents ∃X^i/Y^i_x ψ̄ci(X^i) where ci ∈ ~cgx, a subset of the offset. The dotted region is the interpolant of A and B, and it is a valid gx.

generating the interpolant, which is the missing function gx.

    A = ∨_{ci ∈ ~c¬gx} ( ψ̄ci(X^i) ∧ (Y^i_x ≡ Y) )
    B = ∨_{cj ∈ ~cgx} ( ψ̄cj(X^j) ∧ (Y^j_x ≡ Y) ).    (26)

A ∧ B is UNSAT, since each pair of constraints ci ∈ ~c¬gx and cj ∈ ~cgx resolves a constraint c′ ∈ ~cr, which is UNSAT under the fitting model {ḡi | i ≠ x}. The interpolant I generated from the UNSAT core of A ∧ B makes the constraints in (27) UNSAT.

    c̄i(X^i) = ψ̄ci(X^i) ∧ ¬I(Y^i_x), for each ci ∈ ~c¬gx
    c̄j(X^j) = ψ̄cj(X^j) ∧ I(Y^j_x), for each cj ∈ ~cgx.    (27)

Therefore, all constraints in ~c = {~cgx, ~c¬gx, ~cn} are UNSAT under the fitting model {ḡi | i ≠ x} of Σr with ḡx = I. Hence, {ḡi | i ≠ x} together with ḡx = I is the fitting model of Σ, and the interpolant I is an implementation of the missing function gx. Fig. 11 shows the UNSAT formula for generating the missing function gx. From the technical viewpoint, to OR constraints we add a control variable to all clauses of each constraint; adding the clause of negated control variables is the same as ORing the constraints.

Fig. 11. The UNSAT formula for generating the missing fitting model gx. The interpolant is constructed from the junction of A and B.

Example 5. For the reduced unknown function problem in Example 3, if c7 is UNSAT, the problem is decomposable. The fitting model ḡ1 can be retrieved from the UNSAT core of c7, as shown at the top of Fig. 12. The function ḡ2 can be constructed in the same way. After finding ḡ1 and ḡ2, the function ḡ3 can be constructed from the UNSAT formula built as in Fig. 11 from ḡ1, ḡ2 and c5, c6 of (21), as shown at the bottom of Fig. 12. Therefore, the fitting model {ḡi} of (20) can be constructed whenever c7 is UNSAT.

Fig. 12. Fitting model constructed from the reduced problem in Fig. 7.

VI. AUTOMATIC BOUND-SET PARTITION

Given a target function and a decomposition type, automatic bound-set partition is the method of exploring a good decomposable bound-set. What makes a bound-set good depends on the application; in general, disjointness and balancedness are the common metrics for determining a good bound-set. We use these metrics in this work and present the algorithm. We also show the advantage of the SAT-based algorithm in automatic bound-set partition.

The bound-set is represented as a matrix, and the cost function is designed as a function of this matrix. For the bound-set {Yi | Yi ⊆ X} with |~Y| = m and |X| = n, the matrix M_{m×n} represents it, and each entry αij ∈ M_{m×n} indicates whether xj ∈ Yi. For instance, in (28) there are 5 variables in the support of the target function and 3 variables in the decomposition type; α23 = 0 indicates x3 ∉ Y2. We use the matrix as the input to compute disjointness and balancedness.

    M_{3×5} = ( 0 1 1 0 1
                1 1 0 1 0
                0 1 0 0 1 )    (28)

Disjointness and balancedness are defined in (29a) and (29b) respectively, where #cj is the number of ones in the column vector cj and #ri is the number of ones in the row vector ri. Smaller values indicate better quality. Hence, we define cost(M) as (29c), used in Algorithm 4, where α = 0.6 and β = 0.25 so that disjointness and balancedness become sensitive above 0.6 and 0.25 respectively.

    disjointness = (1/n) Σ_{j=1}^{n} (#cj / m).    (29a)
    balancedness = stdev(#ri) / n.    (29b)
    cost = (disjointness³ + α³) × (balancedness² + β²).    (29c)
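Under the definitions in (29a)-(29c), the cost of a bound-set matrix can be computed directly. In the sketch below, `cost` is a hypothetical helper, and taking the sample standard deviation for (29b) is an assumption: the paper does not state whether sample or population stdev is intended.

```python
import statistics

def cost(M, alpha=0.6, beta=0.25):
    """cost(M) per (29a)-(29c). Rows index the m sub-supports Y_i,
    columns index the n support variables x_j."""
    m, n = len(M), len(M[0])
    col_ones = [sum(M[i][j] for i in range(m)) for j in range(n)]  # #c_j
    row_ones = [sum(row) for row in M]                             # #r_i
    disjointness = sum(c / m for c in col_ones) / n                # (29a)
    balancedness = statistics.stdev(row_ones) / n                  # (29b), sample stdev assumed
    return (disjointness**3 + alpha**3) * (balancedness**2 + beta**2)  # (29c)
```

For the matrix of (28) this yields a small cost, since both metrics lie near their sensitivity thresholds.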

Algorithm 4 optimizes the bound-set for the given target function and decomposition type while maintaining decomposability. It selects a "1" entry that would lower the cost and flips it from 1 to 0. If the result is decomposable, we accept the change; if not, we reject it and continue with the next iteration. Decomposability is tested with the algorithms proposed in Section IV.

Algorithm 4: Auto-partition
Input: f(X): target function, h(V): decomposition type
Output: ~Y: optimized bound-set

1  ~Y ← {X, X, ..., X}
2  repeat
3      αij ← an entry whose flip decreases the cost of ~Y
4      ~Yr ← ~Y|αij←0
5      if f is decomposable under h and ~Yr then
6          ~Y ← ~Yr
7      else
8          mark αij as not to be chosen in ~Y
9  until no αij can be selected

For solving decomposability during automatic bound-set partition, the SAT-based approach can reuse the solving instance across different bound-sets. The SAT-based approach only solves problems in closed constraints. A closed constraint is composed of target duplications and bound-set connections. The duplications are the same across bound-sets, but the connections vary. Therefore, we add control variables to switch the connections (Y^a_i ≡ Y^b_i) between different bound-sets, as in (30), where αij is the variable representing an entry of the bound-set matrix.

    ∧_{j=1}^{|X|} ( αij → (x^a_j ≡ x^b_j) )    (30)

Hence, to solve across different bound-sets, we can solve with different αij through incremental SAT. Building on this benefit, we further translate the multi-decomposition problem into closed constraints, trading accuracy for efficiency, as presented in the next section.

VII. APPROXIMATE DECOMPOSITION ALGORITHM

This section proposes two techniques, static learning and conflict learning, to generate closed constraints that approximate decomposability. The closed constraints behave as follows: if one constraint is SAT, the problem is indecomposable; if all constraints are UNSAT, the problem is approximately decomposable. Although the approximate method can produce a Type II error (see footnote 10), it is useful for two reasons. The main reason is scalability: because SAT does not explicitly enumerate the function and incremental SAT reuses the solving instance, the method can handle larger cases. The other reason is that this Type II error can be controlled and refined. Like time-frame expansion in model checking, whose accuracy is controlled by the number of expanded frames, the accuracy of this method can be refined by adding more constraints. We first introduce the required definitions and then describe the techniques. The usefulness of these approaches is shown in the experimental results.

We define a template, in the form of (20), (21), and (22), to describe the circuit expansion forming a constraint without specifying the target function and bound-set used. A template is composed of duplications, connections, and unknown function symbols. We can substitute the target function f and the bound-set ~Y into a template t to construct a constraint, denoted t(f, ~Y). A template is closed if it has no unknown function, and a closed template is said to belong to an operator h iff t(h, {Yi = {xi}}) is UNSAT.

Theorem 3. Given a template t belonging to h, if f is decomposable with h under ~Y, then t(f, ~Y) is UNSAT. Conversely, if t(f, ~Y) is SAT, f is indecomposable with h under ~Y.

For performing h-decomposition, the following two techniques collect templates belonging to h as completely as possible. Static learning systematically resolves templates into larger ones, like the time-frame expansion used in BMC [21]. Conflict learning generates a template from each false-negative result so that the same mistake does not occur again.

Footnote 10: The case that is actually indecomposable although all constraints are UNSAT.

A. Static Learning

Static learning systematically resolves templates into orderly ones and composes them into templates belonging to some operator h. However, this technique is only applicable to problems Σ = 〈g,~c〉 with one unknown function and constraints with at most 2 c-variables (see footnote 11), like (23). We use (23) as the running example to explain static learning.

1) Arranging Templates: We sort templates by their c-variables into the cases {}, {g}, {¬g}, {g, g}, {¬g, ¬g}, and {g, ¬g}. If two templates have the same c-variables, they are ORed into a single template. Furthermore, if a template has symmetric c-variables, we flip it and OR the template with its flipped version. In (23), two constraints have the symmetric c-variables {g, g} and {¬g, ¬g}. We flip the templates and OR them into ϕA and ϕB, as shown in Fig. 13.

Fig. 13. c1 of (23) has the symmetric c-variables {g, g}. Hence, we flip it and OR c1 with its flipped version into an undirected template ϕA. Similarly, ϕB is constructed from c2.

2) Generating the Transition Relation: Next, we generate the transition relation, which is the template containing the c-variables {g, ¬g}. In addition to the original one, templates with {g, g} and {¬g, ¬g} can resolve another template containing {g, ¬g}. As shown in Fig. 14, ϕA and ϕB resolve the transition relation ϕTR, which contains the c-variables {g, ¬g}.

Fig. 14. ϕA and ϕB resolve the transition relation ϕTR.

Any template can resolve with the transition relation to produce a new template that has the same c-variables as the given one. Hence, we can concatenate many transition relations to any template to generate more templates with the same c-variables. In Fig. 15, ϕA with n transition relations resolves a new template ϕnA containing the c-variables {g, g}; the same applies to ϕB.

Fig. 15. n transition relations with ϕA and ϕB resolve new templates with c-variables {g, g} and {¬g, ¬g} respectively.

3) Generating a Template Belonging to h: Finally, we construct the closed template by connecting the onset and offset of g. The onset of g can be constructed from the templates containing {¬g} or {¬g, ¬g}, as in Fig. 5(b). Hence, for the running example, the onset and offset of g can be constructed from ϕnB and ϕnA, as shown in Fig. 16. In this case, mux-decomposition, we expand 3 transition relations concatenated with ϕA and ϕB to construct the onset and offset of g, and the template belonging to mux is constructed as in Fig. 16. In our experiments, expanding 2 transition relations caused Type II errors, but expanding 3 transition relations did not for the cases we studied. A more detailed analysis remains future work.

Footnote 11: The reason is that templates containing at most 2 c-variables only resolve templates containing at most 2 c-variables. Therefore, the sorts of templates do not grow, and we can systematically resolve templates into larger ones.

Fig. 16. Constructing the template belonging to mux from ϕnA and ϕnB, where n = 3 works for all the cases we studied.

B. Conflict Learning

Conflict learning generates a template belonging to the operator h to prevent the same error from happening again. The feedback loop is shown in Fig. 17. For performing h-decomposition, the stored templates belonging to h are combined with the target f and the bound-set ~Y to generate closed constraints. If solving these constraints yields UNSAT, the problem is approximately decomposable. Meanwhile, if the result from the complete algorithm is indecomposable, a Type II error has occurred. We use the All-SAT based algorithm as the complete algorithm to check correctness; it generates the UNSAT core when the problem is indecomposable. From the UNSAT core, we generate a closed template t′ whose topology is isomorphic to the UNSAT core, and this generated template prevents the same error from happening again, i.e., t′(f, ~Y) is SAT. Therefore, the stored templates become more complete whenever an error occurs. In the following, we explain how and why the UNSAT core generates a template belonging to h.

Fig. 17. The loop that generates a new template belonging to h when a Type II error occurs. The generated template prevents the same error.

The unsat core generated by the All-SAT based algorithm yields a template that is isomorphic to the unsat core. Each clause generates the corresponding duplications of the target from the c-function that produced that clause. A connection is generated from the common variable of the clauses to bind the generated duplications together.

As shown in Fig. 18, the unsat core is generated from the clauses that the All-SAT based algorithm converts from ϕA and ϕB of Fig. 13. The template belonging to mux on the right side is generated from the unsat core on the left side. µ0 to µ4 generate duplications in the form of ϕA of Fig. 13, and µ5 to µ9 generate duplications in the form of ϕB. The duplications corresponding to µ0, µ3, µ5, µ6 are connected together since ν1 is shared among these clauses.

The generated template t, substituted with the same f and ~Y, would be SAT, and such a template belongs to the operator h. The reason is that each variable of the unsat core corresponds to an assignment of the unknown function, and these assignments are exactly the satisfying assignment of t(f, ~Y). Hence, this template avoids the same Type II error from happening again. Furthermore, such a template belongs to h since the clauses generated from solving t(h, {Yi = {xi}}) are isomorphic to the unsat core, i.e., t(h, {Yi = {xi}}) would be UNSAT.

Fig. 18. The template generated from the unsat clauses, where ϕA and ϕB are the duplications of the target as in Fig. 13.

VIII. EXPERIMENTAL RESULTS

We implemented the proposed algorithms and methods for multi-decomposition in C++. The All-SAT and SAT-based approaches were built on MiniSat-2.2 [22]; the BDD-based approach was built on CUDD [23]; and Craig interpolation was implemented with McMillan's algorithm [24] on MiniSat-p-1.14. All experiments were conducted on a Linux machine with a Xeon 2.5 GHz CPU and 32 GB of memory. The experiments were designed to demonstrate:

1. the role that the operators play in multi-decomposition,
2. the runtime among the proposed algorithms, and
3. the benefit of multi-decomposition vs. bi-decomposition.

A target function used in the experiments was extracted from the transitive-fanin cone of some node of a circuit formed in AIG. We sorted functions by support size into the intervals {5, 10, 16, 25, 40, 64, 100, 160}. Based on circuits from the ISCAS85, ISCAS89, ITC99, and IWLS05 benchmark suites, we collected 100 functions in each interval, as shown in Fig. 19. For each interval, the average and standard deviation of the gate count are shown at the top of the figure. The decomposition type was chosen from 4-input functions. The bound-set was taken from the ones optimized during the automatic bound-set partition. The runtime was limited to 300 seconds for performing the automatic bound-set partition.

Fig. 19. The node size (in AIG, log scale) of the single-output functions versus the support size (log scale). The average and standard deviation of the node size per interval:

Interval:   I     II     III    IV    V     VI    VII   VIII
avg size:   3.1   13.6   33.7   79    159   223   246   502
std size:   2.6   17.5   26     136   244   380   127   225

A. Characteristics of Operators in Reduction and Learning

Table I presents the characteristics of the 4-input operators used for multi-decomposition. Column 1 lists 4-input functions written as 4-digit hexadecimal strings representing distinct NPN-equivalence classes¹². Columns 2, 3, and 4 list the support size, the onsets, and the offsets of the operators, respectively. Different classes of operators occur with different frequencies in real cases. Column 5 shows the ratio of 4-input functions in all

¹²There are 222 NPN-equivalence classes of 4-input functions, and reductions to operators in the same equivalence class have isomorphic structures.
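The NPN relation used above can be checked directly on truth tables. The sketch below is our illustration (not part of the paper's tool flow): it canonicalizes a 4-input function under input permutation, input negation, and output negation, so two functions are NPN-equivalent exactly when their canonical forms coincide.

```python
from itertools import permutations

def npn_canonical(tt):
    """Canonical 16-bit truth table of a 4-input function under NPN
    transforms: 4! input permutations x 2^4 input negations x 2 output
    negations = 768 candidates; the minimum truth table is the canon."""
    best = None
    for perm in permutations(range(4)):
        for in_neg in range(16):
            for out_neg in (0, 1):
                res = 0
                for i in range(16):            # i encodes the transformed inputs
                    j = 0
                    for k in range(4):         # input k of the original function
                        bit = ((i >> perm[k]) & 1) ^ ((in_neg >> k) & 1)
                        j |= bit << k
                    res |= (((tt >> j) & 1) ^ out_neg) << i
                if best is None or res < best:
                    best = res
    return best

# AND4 (0x8000) is NPN-equivalent to OR4 (0xFFFE) and to 0x0001.
assert npn_canonical(0x8000) == npn_canonical(0xFFFE) == npn_canonical(0x0001)
```

Enumerating the canonical forms of all 65536 4-input truth tables would partition them into the 222 classes mentioned in the footnote.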


TABLE I
THE CHARACTERISTIC OF 4-INPUT OPERATORS IN REDUCTION

h      Sv(h)  Con / Coff                                        lut%   |~g|  #f   cls
8000   4      abcd / a b c d                                    11.72  0     5    I
E000   4      acd bcd / ab c d                                  10.95  0     7    I
F8F8   3      ab c / bc ac                                      10.47  0     5    I
F888   4      ab cd / ac ad bc bd                                7.94  1     24   II
8888   2      ab / a b                                           7.06  0     3    I
8080   3      abc / a b c                                        5.76  0     4    I
FF80   4      abc d / ad bd cd                                   5.63  0     7    I
F800   4      abd cd / bc ac d                                   4.49  0     8    I
D800   4      abd acd / ab ac d                                  3.58  1     12   II
D8D8   3      ab ac / ab ac                                      3.48  1     8    II
D580   4      abc ad / ab ac ad                                  3.27  1     22   III
8787   3      abc bc ac / abc bc ac                              1.90  1     68   III
F600   4      abd abd cd / abc abc d                             1.79  1     20   II*
7800   4      abcd bcd acd / abc bc ac d                         1.78  1     102  III
F880   4      abd abc cd / cd bd ad bc ac                        1.78  2     38   III
7F80   4      abcd cd bd ad / abcd cd bd ad                      1.67  1     420  III
9696   3      abc abc abc abc / abc abc abc abc                  1.65  2     6    III
6666   2      ab ab / ab ab                                      1.62  1     8    II*
AAAA   1      a / a                                              1.49  0     2    I
1EE1   4      abcd abcd bcd acd bcd acd / abcd abcd bcd acd bcd acd  1.47  2  24  III
E8E8   3      bc ac ab / bc ac ab                                1.40  2     10   III
F6F6   3      ab ab c / abc abc                                  1.26  1     12   II*
F780   4      abc bd ad / abc bd ad                              1.11  1     600  III
Sum                                                             93.26

LUT-mapped [25] circuits. Furthermore, the NPN-operators listed in Table I are the commonly used ones, which total 93.26%.

We analyzed the structure of the unknown function problem after performing the reduction, where the reduction is applied in the best order until termination. Columns 6 and 7 list the number of unknown functions and the number of duplications of the target function in the reduced problem. Furthermore, we classified the operators by which algorithms can be applied to them, as shown in Column 8; the relation is I ⊆ II* ⊆ II ⊆ III. Decomposition to operators of Class I can be reduced into a closed constraint. Decomposition to operators of Class II can be converted into closed constraints by static learning, and Class II* is the special subset of II whose decompositions can be transformed into closed constraints as in the XOR-decomposition proposed in [7]. The other operators belong to Class III. We observe (and can prove) that if the operator can be written in cascaded form, like v1 ∨ v2 ∨ (v3 ∧ v4 ∧ (v5 ∨ v6 ∨ v7)), it belongs to Class I. Furthermore, the more commonly used operators have simpler onsets and offsets, and decomposition to an operator with a simpler onset and offset can be reduced into a problem with fewer unknown functions and duplications. An unknown function problem with fewer unknown functions and duplications has lower complexity. Therefore, decomposition to the commonly used operators can be solved efficiently.
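The cascaded-form condition can be tested on truth tables by recursive cofactoring: f is a cascade iff it is constant, or some literal ℓ satisfies f = ℓ ∨ g or f = ℓ ∧ g with g a cascade on the remaining variables. The checker below is our own sketch of that test, not the paper's implementation.

```python
def cofactor(tt, n, v, b):
    """Truth table (over n-1 vars) of f with variable v fixed to b."""
    res = 0
    for i in range(1 << (n - 1)):
        low = i & ((1 << v) - 1)
        full = ((i >> v) << (v + 1)) | (b << v) | low  # re-insert bit b at v
        res |= ((tt >> full) & 1) << i
    return res

def is_cascade(tt, n):
    """True iff tt (a 2^n-bit truth table) is a cascaded AND/OR chain of
    literals, e.g. v1 OR v2 OR (v3 AND v4 AND (v5 OR v6 OR v7))."""
    ones = (1 << (1 << n)) - 1
    if tt in (0, ones):
        return True
    sub_ones = (1 << (1 << (n - 1))) - 1
    for v in range(n):
        f0, f1 = cofactor(tt, n, v, 0), cofactor(tt, n, v, 1)
        if f1 == sub_ones and is_cascade(f0, n - 1):  # f =  v OR f0
            return True
        if f0 == sub_ones and is_cascade(f1, n - 1):  # f = ~v OR f1
            return True
        if f0 == 0 and is_cascade(f1, n - 1):         # f =  v AND f1
            return True
        if f1 == 0 and is_cascade(f0, n - 1):         # f = ~v AND f0
            return True
    return False

# E000 (Class I in Table I) is a cascade; F888 = ab + cd (Class II) is not.
assert is_cascade(0xE000, 4) and not is_cascade(0xF888, 4)
```

Per the observation above, passing this check is a sufficient condition for an operator to belong to Class I.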

Table II classifies the single-output operators in the standard cell library. As shown, most of the operators fall into Classes I and II. Therefore, multi-decomposition can often be solved as efficiently and scalably as [7].

TABLE II
THE CHARACTERISTIC OF OPERATORS IN STANDARD CELL LIBRARY

Class I (34): BUF, INV, TBUF, AND2, AND3, AND4, AO21, AOI21, AOI31, AOI211, AOI2BB1, NAND2, NAND3, NAND4, NAND2B, NAND3B, NAND4B, NAND4BB, NOR2, NOR3, NOR4, NOR2B, NOR3B, NOR4B, NOR4BB, OR2, OR3, OR4, OA21, OA22, OAI21, OAI31, OAI211, OAI2BB1
Class II (9): AO22, AOI22, AOI221, AOI2BB2, OAI22, OAI221, OAI2BB2, MXI2, MX2
Class II* (2): XOR2, XNOR2
Class III (11): AOI32, AOI33, AOI222, OAI33, OAI222, MXI3, MXI4, MX3, MX4, XOR3, XNOR3

TABLE III
IMPACT OF NUMBER OF UNKNOWN FUNCTIONS (seconds)

cases       |~g| = 4   3       2       1       0
E000 (class I)
41−64       0.17      0.17    0.18    0.18    0.18
65−100      0.00      0.00    0.00    0.00    0.00
101−160     0.012     0.012   0.012   0.012   0.012
F888 (class II)
17−25       1.68      1.12    1.00    0.21    −†
26−40       6.72      6.37    8.30    3.10    −
41−64       24.24     21.60   23.07   13.07   −
D580 (class III)
17−25       1.77      2.00    1.33    0.77    −
26−40       8.16      9.44    7.08    5.17    −
41−64       29.08     21.96   20.53   18.13   −
F780 (class III)
17−25       3.40      3.79    4.19    4.62    −
26−40       9.98      9.66    9.31    10.33   −
41−64       24.96     21.46   22.57   25.08   −

† The decomposition to this operator could not be reduced into closed constraints.

Table III demonstrates the impact of the number of unknown functions on the runtime of decomposition. This experiment used the operators E000, F888, D580, and F780 to decompose the cases shown in Column 1 by the BDD-based algorithm. Columns 2 to 6 show the average runtime of solving the reduced unknown function problems with 4 down to 0 unknown functions. For F888 and D580, the most efficient way is to solve the most reduced problem, which contains 1 unknown function. F780 is an extreme example in which the most reduced constraints contain 600 duplications; however, solving the most reduced problem takes about as long as solving the non-reduced problem, which contains only 6 duplications. Another extreme case is E000, an operator of Class I. Although the runtimes are about the same among the reduced problems, decomposition to the most reduced form can be solved more efficiently by the SAT-based approach. This demonstrates the effectiveness of the reduction on constraints.

B. Runtime Analysis

Table IV compares the runtime of mux-decomposition with respect to the algorithms proposed in Section IV. Column 5 lists the runtime of the conservative BDD-based algorithm, which returns indecomposable when the number of conflicts exceeds 1000. Column 7 lists the runtime of the QBF algorithm using depQBF [26] to solve (13b). Columns 11 to 15 list the runtime of solving the closed constraints converted by static learning and conflict learning. Conflict Learning 1 and 2 use 7 and 6 closed templates belonging to mux, respectively, to approximate decomposability, where the templates are generated from the unsat cores of the All-SAT based algorithm.

As shown, the SAT-based algorithm is the most efficient. The BDD-based algorithm is efficient for medium-size cases (supports < 64), and the QBF-based algorithm is efficient for small-size cases (supports < 12). The All-SAT based algorithm is not practical for solving the unknown function problem.


TABLE IV
RUNTIME ANALYSIS OF MUX-DECOMPOSITION

                     Not Reduced (sec.)                  Reduced (sec.)                        Approximate (sec.) (SAT)
supports   ALLSAT   QBF      BDD     BDD      ALLSAT   QBF       QBF     BDD     BDD      Static    Conflict Learning 1    Conflict Learning 2
                                     consv             (13b)[26]                 consv    Learning  runtime  err rate %    runtime  err rate %
2−5        0.000    0.001    0.004   0.004    0.000    0.001     0.000   0.004   0.003    0.002     0.001    0.00          0.000    0.00
6−10       0.070    0.070    0.904   0.006    0.025    0.003     0.006   0.064   0.004    0.017     0.010    0.00          0.008    0.19
11−16      39.59    34.09    5.24    0.044    28.15    13.65     7.41    2.32    0.008    0.074     0.006    0.05          0.004    0.88
17−25      −†       233.73   12.76   0.602    −        63.22     45.99   6.90    0.787    0.711     0.010    0.02          0.007    0.95
26−40      −        −        21.26   5.72     −        −         −       22.40   4.05     0.774     0.032    0.05          0.025    1.12
41−64      −        −        28.76   3.21     −        −         −       31.58   3.00     2.13      0.025    0.03          0.019    0.88
65−100     −        −        48.85   16.55    −        −         −       41.22   15.37    0.87      0.027    0.02          0.021    0.98
101−160    −        −        −       −        −        −         −       −       −        1.74      0.034    0.03          0.025    0.81
161−250    −        −        −       −        −        −         −       −       −        5.69      0.106    0.03          0.083    1.00

† Impractical when the runtime is limited to 300 secs.

Furthermore, solving the reduced problem is more efficient than solving the non-reduced problem, which is consistent with the observation of Table III. Although the SAT-based algorithm is the most efficient, constraints from conflict learning may falsely answer decomposable, and static learning can only be applied to operators of Class II. Furthermore, compared to [18], our method can handle larger cases more efficiently.

We further analyzed the variants of the algorithms. Comparing Columns 7 and 8, solving QBF with two alternating SAT solvers is faster than solving QBF with the state-of-the-art QBF solver. In the BDD-based approach, comparing Columns 9 and 10, compromising completeness to get conservative results enhances practicality. As shown in Columns 12 to 15, solving fewer closed constraints is more efficient but more prone to error, and vice versa. Therefore, the different algorithms and their variants play different roles in multi-decomposition.

Furthermore, we studied the relative runtime of decomposition during the automatic bound-set partition, as shown in Fig. 20¹³. Along the progress, the size of the bound-set is reduced and optimized for balancedness and disjointness, as shown in Fig. 20(b). However, as shown in Fig. 20(a), the runtime of the decomposition increases sharply to a peak at about 0.2 progress and then decreases. From this experiment, we observe that the runtime of the decomposition is not positively related to the size of the bound-set. We conjecture that the runtime is associated with the flexibility of decomposition, and different algorithms have different capabilities for exploring that flexibility. For the SAT-based algorithm, because incremental SAT keeps the learned clauses, its runtime for exploring flexibility is more consistent during the automatic bound-set partition.

C. Multi-decomposition vs. Bi-decomposition

In this experiment, we demonstrate the benefit of multi-decomposition versus bi-decomposition. Given a target function, we tried to decompose it into 4 sub-functions, optimizing the cost (29c) within 300 seconds. Within this limit, we applied different seeds and chose the best result. In each run of pure bi-decomposition, we first decomposed the target into 2 sub-functions by Algorithm 4 with the operator or, and, or xor. Next,

¹³In this experiment, the All-SAT and QBF-based algorithms decomposed the 11−16 cases; the BDD-based and SAT-based algorithms decomposed the 17−25 and 26−40 cases, respectively. The decomposition type is F888. We normalized each result in runtime and progress, accumulated them, smoothed the data, and sampled at 1e-5 relative progress.

Fig. 20. Relative runtime during the automatic bound-set partition. (a) Relative runtime among the algorithms (sat, bdd, qbf, allsat). (b) Ratio of the size of the bound-set and the probability of decomposability (accept rate).

we iteratively bi-decomposed the sub-function with the most supports until the target was decomposed into four. In each run of multi-decomposition, we applied Algorithm 4 with 4-input operators of Class I to decompose the target into 4 sub-functions.

Fig. 21 shows the result. In addition to the cost, we also compare the maximum number of supports and the maximum number of nodes among the 4 sub-functions. As shown, the cost of multi-decomposition is smaller than the cost of pure bi-decomposition. Moreover, optimizing the cost in decomposition also optimizes the size of the support and the size of the sub-functions. Therefore, we demonstrate the usefulness of the cost equation and the benefit of multi-decomposition.

IX. DISCUSSION AND FUTURE WORK

Solving closed constraints is an NP problem, but the general unknown function problem, like safety model checking, is a PSPACE problem. However, decomposition to operators of Class II seems to be an NP problem, because we have not found any Type II error when solving the constraints converted by static learning. We will further study the complexity of multi-decomposition.

Although the operator and target function in this paper are single-output completely specified functions, it is simple to extend the concepts of this paper to multi-output incompletely specified functions. Moreover, automatic bound-set partition need not apply only the step-by-step greedy method to optimize the cost: we could flip several entries of the bound-set to accelerate the optimization process, and we could apply pattern simulation to accelerate the flow. Besides, in conflict learning, the witness of an indecomposable problem can be converted into a closed template, like the unsat core in the All-SAT based algorithm. In BDD traversal, the witness is the trace of the conflict; we could minimize the trace and convert it


Fig. 21. Multi-decomposition vs. bi-decomposition: the cost, maximum number of supports, and maximum number of nodes among the 4 sub-functions. (a) Cost. (b) Maximum #supports. (c) Maximum #nodes.

into the template. We will study the technical implementation in future work.

Compared to bi-decomposition, the reason why or- and and-decomposition are workable in [7] can be explained theoretically by the concepts of this paper. However, xor-decomposition has not yet been fully studied from this perspective; we will explain it in future work.

X. CONCLUSIONS

This study investigates multi-decomposition from a new perspective under a variety of considerations, including efficiency, completeness, and scalability. As Table V shows, decomposition to operators in different classes can be solved efficiently using suitable strategies with respect to the size of the target function. Conflict learning converts the problem into closed constraints such that solving them is a trade-off between accuracy and efficiency.

Experimental results demonstrate the efficiency of the proposed algorithms and the benefit of multi-decomposition compared to bi-decomposition. We hope that these results may benefit several areas, for example, optimizing logic for timing and area, mapping functions to standard cells, or selecting better spare cells in back-end ECO.

TABLE V
SUITABLE STRATEGIES

            small         medium        large
            Sv(f) < 12    Sv(f) < 64    Sv(f) ≥ 64
Class I                   SAT
Class II          static learning + SAT
Class III   QBF           BDD           NA
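Table V can be read as a simple dispatch rule. The helper below is an illustrative sketch of that rule; the function and return strings are our own naming, not the paper's API.

```python
def choose_strategy(op_class, support_size):
    """Pick a solving strategy per Table V.

    op_class     -- 'I', 'II', 'II*', or 'III'
    support_size -- Sv(f), the support size of the target function
    """
    if op_class == "I":
        return "SAT"                    # reducible to a closed constraint
    if op_class in ("II", "II*"):
        return "static learning + SAT"  # convertible to closed constraints
    # Class III: complete algorithms only scale so far
    if support_size < 12:
        return "QBF"
    if support_size < 64:
        return "BDD"
    return None                         # NA in Table V
```

For large Class III targets, the text suggests falling back to the approximate conflict-learning constraints at the price of possible errors.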

REFERENCES

[1] G. De Micheli, Synthesis and Optimization of Digital Circuits.McGraw-Hill, 1994.

[2] C. Scholl, Functional Decomposition with Application to FPGA Synthe-sis. Kluwer Academic Publishers, 2001.

[3] S. Khatri and K. Gulati, Advanced Techniques in Logic Synthesis,Optimizations and Applications. Springer, 2010.

[4] D. Baneres, J. Cortadella, and M. Kishinevsky, “Timing-driven N-waydecomposition,” in Proc. Great Lakes Symp. VLSI (GLSVLSI), 2009, pp.363–368.

[5] V. N. Kravets and A. Mishchenko, “Sequential logic synthesis usingsymbolic bi-decomposition,” in Proc. Design, Automation and TestEurope (DATE), 2009, pp. 1458–1463.

[6] A. Mishchenko, B. Steinbach, and M. Perkowski, “An algorithm for bi-decomposition of logic functions,” in Proc. Design Automation Conf.(DAC), 2001, pp. 103–108.

[7] R.-R. Lee, J.-H. R. Jiang, and W.-L. Hung, “Bi-decomposing largeBoolean functions via interpolation and satisfiability solving,” in Proc.Design Automation Conf. (DAC), 2008, pp. 636–641.

[8] M. Choudhury and K. Mohanram, “Bi-decomposition of large Booleanfunctions using blocking edge graphs,” in Proc. Int’l Conf. Computer-Aided Design (ICCAD), 2010, pp. 586–591.

[9] H. Chen, M. Janota, and J. P. M. Silva, “QBF-based Boolean function bi-decomposition,” in Proc. Design, Automation and Test Europe (DATE),2012, pp. 816–819.

[10] V. Bertacco and M. Damiani, “The disjunctive decomposition of logicfunctions,” in Proc. Int’l Conf. Computer-Aided Design (ICCAD), 1997,pp. 78–82.

[11] Y. Li, M. Hempstead, P. Mauro, D. Brooks, Z. Hu, and K. Skadron,“Power and thermal effects of SRAM vs. latch-mux design styles andclock gating choices,” in Proc. Int’l Symp. Low Power Electronics andDesign (ISLPED), 2005, pp. 173–178.

[12] R. L. Ashenhurst, “The decomposition of switching functions,” in Proc.Int’l Symp. Theory of Switching, 1957, pp. 74–116.

[13] H. A. Curtis, A New Approach to the Design of Switching Circuits. D.Van Nostrand Co., 1962.

[14] J. P. Roth and R. M. Karp, “Minimization over Boolean graphs,” IBMJ. of Research and Development, vol. 6, no. 2, pp. 227–238, Apr. 1962.

[15] D. Bochmann, F. Dresig, and B. Steinbach, “A new decompositionmethod for multilevel circuit design,” in Proc. European Design Au-tomation Conf. (Euro-DAC), 1991, pp. 374–377.

[16] B. Steinbach, “Synthesis of multi-level circuits using EXOR-gates,” inProc. of IFIP WG 10.5 - Workshop on Applications of the Reed-MullerExpansion in Circuit Design, 1995, pp. 161–168.

[17] C. Yang, V. Singhal, and M. Ciesielski, “BDD decomposition forefficient logic synthesis,” in Proc. Int’l Conf. Computer Design (ICCD),1999, pp. 626–631.

[18] D. Baneres, J. Cortadella, and M. Kishinevsky, “A recursive paradigmto solve Boolean relations,” in Proc. Design Automation Conf. (DAC),2004, pp. 416–421.

[19] E. Clarke, O. Grumberg, and D. Peled, Model Checking. MIT Press,1999.

[20] W. Craig, “Linear reasoning: A new form of the Herbrand-Gentzentheorem,” J. Symbolic Logic, vol. 22, no. 3, pp. 250–268, 1957.

[21] A. Biere, A. Cimatti, E. M. Clarke, M. Fujita, and Y. Zhu, “Symbolicmodel checking using SAT procedures instead of BDDs,” in Proc.Design Automation Conf. (DAC), 1999, pp. 317–320.

[22] N. Een and N. Sorensson, “An extensible SAT-solver,” in Proc. Int’lConf. Theory and Applications of Satisfiability Testing (SAT), 2004, pp.333–336.

[23] F. Somenzi, CUDD: CU Decision Diagram Package – release2.4.2, Department of Electrical and Computer Engineering,University of Colorado at Boulder, Apr. 2009. [Online]. Available:ftp://vlsi.colorado.edu/pub/cudd-2.4.2.tar.gz

[24] K. L. McMillan, “Interpolation and SAT-based model checking,” inProc. Computer Aided Verification (CAV), 2003, pp. 1–13.

[25] Berkeley Logic Synthesis and Verification Group, "ABC: A system for sequential synthesis and verification." [Online]. Available: http://www.eecs.berkeley.edu/˜alanmi/abc

[26] F. Lonsing and A. Biere, "DepQBF: A dependency-aware QBF solver," J. Satisfiability, Boolean Modeling and Computation (JSAT), vol. 7, pp. 71–76, 2010.