
ON THE LINEAR PROGRAMMING AND GROUP RELAXATIONS OF THE UNCAPACITATED FACILITY LOCATION PROBLEM

By

MOHAMMAD KHALIL

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT

OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2010


©2010 Mohammad Khalil


This work is dedicated to my parents, my wife, Rania,

my daughters, Habiba and Jana, and my siblings, Shaimaa, Sara, and Ahmed,

for their continuous love and support


ACKNOWLEDGMENTS

First of all, I would like to express my sincere appreciation to my advisor, Dr. Jean-

Philippe Richard, for his invaluable guidance. In addition to providing vision and

encouragement, Dr. Richard gave me unlimited freedom to explore new avenues for

research. He was always available to discuss my ideas and give me excellent feedback.

His enthusiasm encouraged me to produce better work. He has taught me not only Operations Research, but also how to communicate effectively and how to write technical work. His devotion to teaching me, to paying close attention to my thoughts (even when they were wrong), and to making this thesis the best it can be has been and will continue to be an inspiration to me. Without his help this work would not have been possible.

Second, I would like to thank Dr. Yongpei Guan, a member of my examining committee, for the time he spent reviewing this thesis.

I am grateful to Dr. Attia Gomaa, the former chair of the Industrial Engineering Department at Fayoum University, who was the first person to introduce Operations Research to me and who encouraged me to pursue an academic career. His support is enduring.

I am also indebted to Mr. Mahmoud Talaat who built my basic mathematical skills

and inspired me to study engineering.

Finally, I would like to thank my wife and daughters for their patience, kindness,

and continuous support during my studies at the University of Florida.


TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

LIST OF ALGORITHMS

LIST OF ABBREVIATIONS

LIST OF NOTATIONS

ABSTRACT

CHAPTER

1 INTRODUCTION

1.1 The Facility Location Problem
1.2 Literature Review
1.3 The Uncapacitated Facility Location Problem
1.4 Group Relaxations of Mixed Integer Programming

2 ON THE POLYHEDRAL STRUCTURE OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM

2.1 Introduction
2.2 Case 1: $m_y + m_t < n_F$
2.3 Case 2: $m_y + m_t = n_F$
2.4 Case 3: $m_y + m_t > n_F$
2.5 Constructing UFLP Instances of Desired Determinant

3 MAXIMUM POSSIBLE DETERMINANT OF BASES OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM

3.1 Computing the MPD of ±1 Matrices
3.2 Computing the MPD of (0,1) Matrices
3.3 Computing the MPD of Bases of the LPR of UFLP
3.4 On the Feasibility of the LP Solution to UFLP that has the MPD
3.5 Solving Group Relaxations of UFLP

4 SPECIAL CASES

4.1 Case 1: Two Customers and/or Two Facilities
4.2 Case 2: Three Customers and Three Facilities

5 EXPERIMENTAL RESULTS

6 CONCLUSION AND FUTURE RESEARCH

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Maximum possible determinant of a ±1 square matrix of size $(h,h)$.

3-2 MPD of a (0,1) square matrix of size $(h,h)$ and number of square (0,1) matrices that have the MPD.

3-3 Maximum possible determinant of $B$ and $T_k$ for a given number of columns corresponding to $y_i$ and $t_i$ variables.

3-4 Probability that the MPD of pseudo bases of the LPR of the UFLP for given $n_C$ and $n_F$ is less than or equal to $U(h)$.

5-1 Selection of the parameters $n_C$ and $n_F$ in the construction of UFLP experiments.

5-2 Experimental results.

LIST OF FIGURES

1-1 Constraint matrix $A$ of the LPR of the UFLP formulation shown in (1.3).

1-2 The group network associated with the MIP problem in Example 1.1.

2-1 Matrix $A$ in Example 2.2.

2-2 Basis $B$ obtained from matrix $A$ in Example 2.2.

2-3 Illustration of the different cases encountered in the proof of Lemma 2.5.

2-4 Final arrangement of columns included in $B$ if $m_y + m_t > n_F$.

2-5 Illustration of how to apply ERO (2.5) using (2.7) in Example 2.2.

2-6 $B$ in Example 2.2 after column permutations in accordance with Figure 2-4.

2-7 Submatrix $G$ reflecting the selected $x_{ij}$ columns in Example 2.2.

2-8 Illustration of Example 2.5. a) Matrix $B$ and b) matrix $\bar{B}$.

4-1 The six bases of the LPR of UFLP (with $n_C = n_F = 3$) that have determinant absolute values equal to 2.

4-2 A complete bipartite graph between the set of facilities and the set of customers for UFLP with $n_C = n_F = 3$.

4-3 A complete bipartite graph between the set of facilities and the set of customers that corresponds to the matching associated with each of the inequalities in (4.10).

LIST OF ALGORITHMS

UFLP-UNI-1

UFLP-UNI-2

UFLP-UNI-3

UFLP-DET

UFLP-BASIS

UFLP-INSTANCE

HADAMARD

BINARY

LIST OF ABBREVIATIONS

LP Linear Programming.

IP Integer Programming.

MIP Mixed Integer Programming.

MILP Mixed Integer Linear Programming.

LPR Linear Programming Relaxation.

FLP Facility Location Problem.

UFLP Uncapacitated Facility Location Problem.

MPD Maximum Possible Determinant.

ERO Elementary Row Operation.

ECO Elementary Column Operation.


LIST OF NOTATIONS

$F$ — Set of potential locations where facilities can be opened.

$C$ — Set of customers.

$O$ — Set of opened facilities, where $O \subseteq F$.

$n_F = |F|$

$n_C = |C|$

$n = \min\{n_C, n_F\}$

$f_i$ — Cost of opening facility $i$, $\forall i \in F$.

$c_{ij}$ — Cost of assigning customer $j$ to facility $i$, $\forall i \in F, j \in C$.

$c, A, b$ — The cost vector, constraint matrix, and right-hand-side vector in the standard LP form $\min\{cx : Ax = b,\; x \ge 0\}$.

$I$ — Identity matrix.

$0$ — Matrix of zeros.

$E$ — Matrix of ones.

$C^n_r$ — Number of $r$-combinations of an $n$-element set.

$A_{h,k}$ — Matrix $A$ with $h$ rows and $k$ columns.

$A(h,k)$ — Entry in row $h$ and column $k$ of matrix $A$.

$A(.,h)$ — $h$-th column of matrix $A$.

$A(k,.)$ — $k$-th row of matrix $A$.

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

ON THE LINEAR PROGRAMMING AND GROUP RELAXATIONS OF THE

UNCAPACITATED FACILITY LOCATION PROBLEM

By

Mohammad Khalil

August 2010

Chair: Jean-Philippe Richard
Major: Industrial and Systems Engineering

The uncapacitated facility location problem (UFLP) is a classical problem in the

Operations Research community that has been studied extensively. Various

approaches have been proposed for its solution. In this thesis we seek to determine if

the group approach to mixed integer linear programming (MILP) could lead to new

advances in the solution of UFLP.

To determine whether group relaxations of UFLP can be solved efficiently, we first

determine the maximum possible determinant (MPD) of a basis of its linear

programming relaxation (LPR). This motivates the investigation of the bases of the LPR

of UFLP. Although we show that the MPD is exponential (in terms of the number of

customers and the number of facilities), we also show that most bases have small

determinants.

We give several algorithms to construct bases of the LPR of UFLP. In particular,

we present three algorithms to construct unimodular bases, one algorithm to construct

bases with a desired determinant, and one algorithm to construct bases with the MPD.

We show that the solutions corresponding to the bases of the LPR of UFLP with the MPD that we describe are feasible. We also show that the corresponding linear programming (LP) solution is not very fractional.

Finally, we use the above results to study two small instances of UFLP. In the first,

we assume that we have two customers and/or two facilities and we show that the LPR

of UFLP always describes the convex hull of its integer solutions. In the second, we

assume that we have three customers and three facilities and we show that the convex

hull of integer solutions can be obtained by adding six inequalities to its LPR.

This thesis is organized as follows. In Chapter 1, we give a brief introduction to

UFLP and to group relaxations in MILP. In Chapter 2, we obtain some results about the

polyhedral structure of the LPR of UFLP. In Chapter 3, we determine the MPD of a

basis of the LPR of UFLP, we discuss the feasibility of the solutions corresponding to

the bases of the LPR of UFLP with MPD we construct, and conclude with comments

about the efficiency of using group relaxations to solve UFLP. In Chapter 4, we study

two instances of UFLP with a small number of customers and/or facilities. In the first, we

assume that we have two customers and/or two facilities while in the second we

assume that we have three customers and three facilities. We obtain convex hull

descriptions for the set of integer solutions to these problems. In Chapter 5, we describe

experimental results on solving the group relaxations of UFLP. Finally, we give a

conclusion and discuss future research directions in Chapter 6.


CHAPTER 1
INTRODUCTION

Many commercial companies, service organizations, and public sector agencies

face the problem of deciding where to locate their facilities. The facilities can be

factories, warehouses, retailers, hospitals, schools, etc., or even routers and caches for

firms working in web services. This problem is known in the Operations Research

community as the facility location problem (FLP). It is considered to be a long-term

decision problem that can affect the success of an organization.

FLP has vast applications and it is often considered to be a pillar of supply chain

management. As a result, it has been studied extensively in the literature. Numerous

variants of the problem have been considered and a vast array of solution

methodologies has been proposed.

Variants of the problem include situations where customers’ demands are

deterministic or stochastic, the capacities of facilities are finite or infinite, the potential

locations are continuous or discrete, etc. In this thesis we study a variant of the problem

that is known as uncapacitated facility location problem (UFLP) in which demand is

deterministic, the location of facilities must be chosen from a given set, and the capacity

of these facilities is sufficiently large to handle all customers’ demands.

Solution methodologies for UFLP are diverse and include enumeration techniques,

cutting plane approaches, approximation algorithms, and heuristics. In this thesis, our

goal is to determine whether the group approach to mixed integer linear programming

(MILP) could be useful in studying UFLP.

The remainder of this chapter is organized as follows. In Section 1.1, we describe

different variants of FLP. In Section 1.2, we give a brief literature review of studies of


FLP. In Section 1.3, we present the classical mixed integer programming formulation of

UFLP and its linear programming relaxation (LPR). Finally, in Section 1.4 we give an

introduction to group relaxations in mixed integer programming.

1.1 The Facility Location Problem

Given a set of potential locations and a set of customers, the facility location

problem (FLP) seeks to determine how many facilities should be opened, where the open facilities should be located, and which customers should be assigned to which open facility, so that the total cost associated with opening facilities and assigning customers

to open facilities is minimized.

The study of FLP dates back to the 1960s [1,2,3,4,5,6]. Different variants have

been considered over the years. We next review some of the common variants of the

problem that have been studied. We refer to [7,8,9] for detailed discussions.

The uncapacitated facility location problem (UFLP) assumes that the capacities of

the facilities are infinite, while the capacitated facility location problem (CFLP) imposes a maximum capacity limit on each facility.

The p-center problem is similar to UFLP except that in the p-center problem the number of facilities that should be opened is fixed and the objective is to minimize the maximum distance between a customer and its assigned facility.

We say that a customer is “covered” if the distance between this customer and its assigned facility is less than or equal to a given distance. The maximal covering location problem is a variant that minimizes the number of uncovered customers under the restriction that the number of opened facilities is at most equal to p, while the p-median problem opens at most p facilities so as to minimize the total distance between customers and their assigned facilities.

If each customer’s demand must be entirely satisfied from a single facility, the

problem is called a single-sourcing facility location problem. Otherwise, customers’ demands can be satisfied from multiple facilities, yielding multi-sourcing facility location

problems.

The discrete facility location problem is a variant of the problem where facility

locations must be selected from a discrete set. However, if the locations are given as

coordinates that are continuous, the problem is called a planar facility location problem.

The deterministic facility location problem assumes that the customers’ demands

are deterministic and known beforehand, while the stochastic facility location problem

assumes that the customers’ demand is known only through a probability distribution.

1.2 Literature Review

The use of exact solution algorithms (branch-and-bound and cutting plane

algorithms) to solve FLP is discussed in [10,11]. The polyhedral structure of the FLP is

studied in [12,13,14,15,16,17,18].

Cho, Padberg, and Rao [15] showed that for two customers and/or two facilities,

the LPR of UFLP always has an integer optimal solution. In Chapter 4, we prove the

same result by different means.

As far as approximation algorithms are concerned, different greedy algorithms

have been developed in [7,19], linear programming (LP) rounding algorithms were

provided in [20,21,22], and primal-dual algorithms were given in [23,24]. Guha and Khuller [25] proved that there is no approximation algorithm for UFLP with an approximation factor better than 1.463 (unless P = NP). The best approximation factor

known is 1.52 and was obtained by Mahdian and Zhang [26].

Other heuristics have also been proposed for this problem including local search,

simulated annealing, and variable neighborhood search; see [27,28].


1.3 The Uncapacitated Facility Location Problem

Given a set $F$ of $n_F$ potential locations and a set $C$ of $n_C$ customers, UFLP is concerned with finding a subset $O \subseteq F$ of facilities to open and an assignment of

customers to open facilities that minimizes total cost. Each facility is assumed to have

an infinite capacity; hence, an open facility can satisfy the demand of all customers

assigned to it. Further, we assume that each customer’s demand is entirely satisfied

from a single facility.

We next present the classical mixed integer programming (MIP) formulation of the

UFLP and its LPR [7].

Inputs:

$f_i$: cost of opening facility $i$, $\forall i \in F$.

$c_{ij}$: cost of assigning customer $j$ to facility $i$, $\forall i \in F, j \in C$.

Decision variables:

$$y_i = \begin{cases} 1 & \text{if facility } i \text{ is opened}, \; \forall i \in F,\\ 0 & \text{otherwise}, \end{cases}$$

$$x_{ij} = \begin{cases} 1 & \text{if customer } j \text{ is assigned to facility } i, \; \forall i \in F, j \in C,\\ 0 & \text{otherwise}. \end{cases}$$

UFLP can be formulated using MIP as follows

$$\begin{array}{lllr}
\min & \displaystyle\sum_{i \in F,\, j \in C} c_{ij} x_{ij} + \sum_{i \in F} f_i y_i & & (1.1.\text{a})\\
\text{subject to} & \displaystyle\sum_{i \in F} x_{ij} = 1, & \forall j \in C, & (1.1.\text{b})\\
& x_{ij} \le y_i, & \forall i \in F,\, j \in C, & (1.1.\text{c})\\
& y_i \in \{0,1\}, & \forall i \in F, & (1.1.\text{d})\\
& x_{ij} \in \{0,1\}, & \forall i \in F,\, j \in C. & (1.1.\text{e})
\end{array} \tag{1.1}$$

The objective function (1.1.a) minimizes the total cost associated with opening facilities (such as construction costs) and assigning customers to facilities (such as transportation or routing costs). Constraint (1.1.b) requires that each customer $j \in C$ is assigned to exactly one facility. The second constraint (1.1.c) ensures that customer $j$ is assigned to facility $i$ only if facility $i$ is open (in other words, constraint (1.1.c) imposes that $x_{ij} = 0$ whenever $y_i = 0$). Constraints (1.1.d) and (1.1.e) enforce the binary nature of variables $x_{ij}, y_i$. In particular, the fact that $x_{ij} \in \{0,1\}$ imposes single-sourcing of customers’ demand.
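Because capacities are infinite and customers are single-sourced, once the set $O$ of open facilities is fixed, each customer is optimally assigned to its cheapest open facility. Small instances can therefore be solved by enumerating the sets $O$; the following brute-force sketch (not one of the algorithms studied in this thesis, and with made-up instance data) illustrates the formulation:

```python
from itertools import combinations

def solve_uflp_brute_force(f, c):
    """Enumerate all nonempty sets O of open facilities; each customer is
    then assigned to its cheapest open facility (single-sourcing follows)."""
    n_F, n_C = len(f), len(c[0])
    best_cost, best_open = float("inf"), None
    for r in range(1, n_F + 1):
        for O in combinations(range(n_F), r):
            cost = sum(f[i] for i in O)                       # opening costs
            cost += sum(min(c[i][j] for i in O) for j in range(n_C))
            if cost < best_cost:
                best_cost, best_open = cost, O
    return best_cost, best_open

# Hypothetical instance: 3 facilities, 4 customers.
f = [10, 12, 20]                     # opening costs f_i
c = [[3, 9, 7, 8],                   # c[i][j]: cost of assigning j to i
     [6, 2, 4, 9],
     [1, 1, 1, 1]]
print(solve_uflp_brute_force(f, c))  # -> (24, (2,))
```

Opening only the third facility costs 20 + 4 = 24 here, cheaper than any other open set, which is what the enumeration returns.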

The LPR of the previous formulation (1.1) can be obtained by relaxing the decision

variables from binary to continuous as shown in (1.2).

$$\begin{array}{lllr}
\min & \displaystyle\sum_{i \in F,\, j \in C} c_{ij} x_{ij} + \sum_{i \in F} f_i y_i & & (1.2.\text{a})\\
\text{subject to} & \displaystyle\sum_{i \in F} x_{ij} = 1, & \forall j \in C, & (1.2.\text{b})\\
& x_{ij} \le y_i, & \forall i \in F,\, j \in C, & (1.2.\text{c})\\
& y_i \le 1, & \forall i \in F, & (1.2.\text{d})\\
& x_{ij}, y_i \ge 0, & \forall i \in F,\, j \in C. & (1.2.\text{e})
\end{array} \tag{1.2}$$

In (1.2), the relaxed constraint $x_{ij} \le 1$ was removed since it is implied by $x_{ij} \le y_i$ and $y_i \le 1$. LPR (1.2) can be transformed to a standard LP

$$\min\{cx : Ax = b,\; x \ge 0\} \tag{1.3}$$

by introducing slack variables $s_{ij}$ in (1.2.c) and $t_i$ in (1.2.d), yielding

$$\begin{array}{lllr}
\min & \displaystyle\sum_{i \in F,\, j \in C} c_{ij} x_{ij} + \sum_{i \in F} f_i y_i & & (1.4.\text{a})\\
\text{subject to} & \displaystyle\sum_{i \in F} x_{ij} = 1, & \forall j \in C, & (1.4.\text{b})\\
& x_{ij} + s_{ij} - y_i = 0, & \forall i \in F,\, j \in C, & (1.4.\text{c})\\
& y_i + t_i = 1, & \forall i \in F, & (1.4.\text{d})\\
& x_{ij}, s_{ij}, y_i, t_i \ge 0, & \forall i \in F,\, j \in C. & (1.4.\text{e})
\end{array} \tag{1.4}$$

The LPR in standardized form, (1.4), can be used to describe the matrix $A$ and the vectors $b$ and $c$ in (1.3). Let $I_{n,n}$ denote the identity matrix of size $(n,n)$, and let $E_{n,n}$ denote the matrix of all ones of size $(n,n)$. The structure of the matrix $A_{n_C + n_C n_F + n_F,\; 2 n_C n_F + 2 n_F}$ is shown in Figure 1-1, where all empty spaces are zeros. With the columns ordered $x_{ij}$, $s_{ij}$, $y_i$, $t_i$, the first $n_C$ rows (constraints (1.4.b)) consist of $n_F$ side-by-side copies of $I_{n_C,n_C}$ in the $x_{ij}$ columns; the next $n_C n_F$ rows (constraints (1.4.c)) carry $I_{n_C n_F, n_C n_F}$ in the $x_{ij}$ columns, $I_{n_C n_F, n_C n_F}$ in the $s_{ij}$ columns, and a block-diagonal arrangement of $-E_{n_C,1}$ blocks in the $y_i$ columns; the last $n_F$ rows (constraints (1.4.d)) carry $I_{n_F,n_F}$ in the $y_i$ columns and $I_{n_F,n_F}$ in the $t_i$ columns.

Figure 1-1. Constraint matrix $A$ of the LPR of the UFLP formulation shown in (1.3).

The objective and the right-hand side are given as follows

$$c_{1,\,2 n_C n_F + 2 n_F} = \begin{bmatrix} \left[c_{ij}\right]_{1,\,n_C n_F} & 0_{1,\,n_C n_F} & \left[f_i\right]_{1,\,n_F} & 0_{1,\,n_F} \end{bmatrix} \tag{1.5}$$

and

$$b_{n_C + n_C n_F + n_F,\,1} = \begin{bmatrix} E_{1,n_C} & 0_{1,\,n_C n_F} & E_{1,n_F} \end{bmatrix}^{T}. \tag{1.6}$$
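The block structure of Figure 1-1 can be assembled from Kronecker products. The sketch below (with NumPy; the helper name is ours, and the column order $x_{ij}$, $s_{ij}$, $y_i$, $t_i$ follows the text) is one possible way to build $A$:

```python
import numpy as np

def uflp_constraint_matrix(n_C, n_F):
    """Constraint matrix A of the standardized LPR (1.4).
    Columns ordered x_ij, s_ij, y_i, t_i; the x_ij are sorted by i, then j."""
    m = n_C * n_F
    I_C, I_F, I_m = np.eye(n_C), np.eye(n_F), np.eye(m)
    Z = np.zeros
    # Assignment constraints (1.4.b): n_F copies of I_{n_C} over the x columns.
    top = np.hstack([np.kron(np.ones((1, n_F)), I_C),
                     Z((n_C, m)), Z((n_C, n_F)), Z((n_C, n_F))])
    # Linking constraints (1.4.c): x_ij + s_ij - y_i = 0.
    mid = np.hstack([I_m, I_m,
                     -np.kron(I_F, np.ones((n_C, 1))), Z((m, n_F))])
    # Upper-bound constraints (1.4.d): y_i + t_i = 1.
    bot = np.hstack([Z((n_F, m)), Z((n_F, m)), I_F, I_F])
    return np.vstack([top, mid, bot])

A = uflp_constraint_matrix(n_C=2, n_F=3)
print(A.shape)   # (n_C + n_C*n_F + n_F, 2*n_C*n_F + 2*n_F) = (11, 18)
```

Each assignment row then sums to $n_F$, each linking row to $1 + 1 - 1 = 1$, and each upper-bound row to $2$, which is a quick consistency check on the block layout.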

1.4 Group Relaxations of Mixed Integer Programming

Many practical applications of Operations Research can be modeled using MIP.

Two main families of methods are used to solve MIP problems in practice: branch-and-bound and cutting plane algorithms. Both algorithms rely on solving the LPR of a MIP problem.

The group relaxation approach (also known as the corner relaxation) was introduced by Gomory [29]. Group relaxations can be used to replace the LPR in the branch-and-bound algorithm because it is “simple” to optimize linear functions over their feasible regions [30]. We next describe how to construct the group relaxation of a MIP problem.

Consider the following MIP

$$\min\{cx : A_{m,n} x = b_{m,1},\; x \in \mathbb{Z}^{n}_{+}\}. \tag{1.7}$$

To obtain its corner relaxation, we first remove the integrality constraint and hence obtain its linear relaxation

$$\min\{cx : A_{m,n} x = b_{m,1},\; x \in \mathbb{R}^{n}_{+}\}. \tag{1.8}$$

We next solve the LPR in (1.8) using the simplex algorithm. Let $x_B$ and $x_N$ denote the basic variables and the non-basic variables of the optimal LP solution obtained by simplex. Also, let $A_B$ denote the submatrix of $A$ corresponding to the basic variables and $A_N$ the submatrix of $A$ corresponding to the non-basic variables. Similarly, denote by $c_B$ and $c_N$ the subvectors of $c$ corresponding to the cost elements of $x_B$ and $x_N$. We rewrite (1.8) as

$$\min\{c_B x_B + c_N x_N : A_B x_B + A_N x_N = b,\; x_B \in \mathbb{R}^{m}_{+},\; x_N \in \mathbb{R}^{n-m}_{+}\}. \tag{1.9}$$

The optimal solution $x^*$ of the LPR and the optimal value $z^*$ are given by

$$x^*_B = A_B^{-1} b, \qquad x^*_N = 0, \tag{1.10}$$

and

$$z^* = c_B x^*_B. \tag{1.11}$$

To obtain the corner relaxation of (1.7), we create the problem

$$\min\{c_B x_B + c_N x_N : A_B x_B + A_N x_N = b,\; x_B \in \mathbb{Z}^{m},\; x_N \in \mathbb{Z}^{n-m}_{+}\} \tag{1.12}$$

where the nonnegativity of the basic variables $x_B$ has been relaxed. It is possible to reformulate problem (1.12) using the Smith Normal Form of $A_B$.

Lemma 1.1 [31]: Given a nonsingular integer matrix $A$ of size $(n,n)$, there exist $(n,n)$ unimodular matrices $U_1$ and $U_2$ such that $SNF(A) = U_1 A U_2$ is a diagonal matrix with positive elements $d_1, \ldots, d_n$ such that $d_i$ divides $d_{i+1}$ for $i \in \{1, \ldots, n-1\}$. $SNF(A)$ is called the Smith Normal Form of $A$.

To reformulate (1.12), we compute the Smith Normal Form of the optimal LP basis $A_B$,

$$S = SNF(A_B) = U_1 A_B U_2. \tag{1.13}$$

Then, we multiply the constraints in (1.12) by $U_1$ on the left and write

$$\min\{z^* + \bar{c}_N x_N : U_1 A_B U_2 U_2^{-1} x_B + U_1 A_N x_N = U_1 b,\; x_B \in \mathbb{Z}^{m},\; x_N \in \mathbb{Z}^{n-m}_{+}\} \tag{1.14}$$

where $\bar{c}_N$ is the reduced cost of variables $x_N$ in (1.9), i.e., $\bar{c}_N = c_N - c_B A_B^{-1} A_N$.

After substituting $U_1 A_B U_2$ by $S$ from (1.13), we obtain

$$\min\{z^* + \bar{c}_N x_N : S U_2^{-1} x_B + U_1 A_N x_N = U_1 b,\; x_B \in \mathbb{Z}^{m},\; x_N \in \mathbb{Z}^{n-m}_{+}\}. \tag{1.15}$$

Since Lemma 1.1 states that $U_2$ is a unimodular matrix, $U_2^{-1}$ is also a unimodular matrix. It follows that $U_2^{-1} x_B \in \mathbb{Z}^{m}$ for every $x_B \in \mathbb{Z}^{m}$. Therefore (1.15) reduces to

$$\min\{z^* + \bar{c}_N x_N : U_1 A_N x_N \equiv U_1 b \pmod{S},\; x_N \in \mathbb{Z}^{n-m}_{+}\}. \tag{1.16}$$

Let $d_i$ be the diagonal elements of $S$, for $i \in \{1, \ldots, n\}$. Then the $i$-th row of (1.16) is simply considered modulo $d_i$. For every diagonal element in the $i$-th row of $S$ that is equal to one, it is clear that the corresponding $i$-th row of (1.16) can be removed. After removing these rows, (1.16) represents the corner relaxation problem or group minimization problem associated with the MIP in (1.7). We next present an example to illustrate the aforementioned steps.

Example 1.1: Consider the following MIP,

$$\begin{array}{ll}
\min & 8x_1 + 42x_2 + 32x_3 + 186x_4 + 129x_5\\
\text{subject to} & 2x_1 + 2x_2 + 2x_3 + x_4 + 3x_5 = 62\\
& x_1 + x_2 + 3x_3 + 4x_4 = 31\\
& 3x_1 + x_2 + 5x_3 - 3x_4 - 2x_5 = 62\\
& x \in \mathbb{Z}^{5}_{+}.
\end{array} \tag{1.17}$$

The constraint matrix $A$, the objective $c$, and the right-hand side $b$ are

$$A = \begin{pmatrix} 2 & 2 & 2 & 1 & 3\\ 1 & 1 & 3 & 4 & 0\\ 3 & 1 & 5 & -3 & -2 \end{pmatrix}, \quad c = \begin{pmatrix} 8 & 42 & 32 & 186 & 129 \end{pmatrix}, \quad \text{and} \quad b = \begin{pmatrix} 62\\ 31\\ 62 \end{pmatrix}. \tag{1.18}$$

Relaxing the integrality constraint, we obtain the following LP

$$\begin{array}{ll}
\min & 8x_1 + 42x_2 + 32x_3 + 186x_4 + 129x_5\\
\text{subject to} & 2x_1 + 2x_2 + 2x_3 + x_4 + 3x_5 = 62\\
& x_1 + x_2 + 3x_3 + 4x_4 = 31\\
& 3x_1 + x_2 + 5x_3 - 3x_4 - 2x_5 = 62\\
& x \in \mathbb{R}^{5}_{+}.
\end{array} \tag{1.19}$$

For this linear program, the basic and non-basic variables in the optimal solution are given by

$$x_B = \{x_1, x_2, x_3\} \quad \text{and} \quad x_N = \{x_4, x_5\}. \tag{1.20}$$

Therefore,

$$A_B = \begin{pmatrix} 2 & 2 & 2\\ 1 & 1 & 3\\ 3 & 1 & 5 \end{pmatrix}, \quad A_N = \begin{pmatrix} 1 & 3\\ 4 & 0\\ -3 & -2 \end{pmatrix}, \quad c_B = \begin{pmatrix} 8 & 42 & 32 \end{pmatrix}, \quad \text{and} \quad c_N = \begin{pmatrix} 186 & 129 \end{pmatrix}. \tag{1.21}$$

The optimal solution and the optimal value of the LP (1.19) are

$$x^*_B = A_B^{-1} b = \begin{pmatrix} 15.5\\ 15.5\\ 0 \end{pmatrix}, \quad x^*_N = 0, \quad \text{and} \quad z^* = c_B x^*_B = 775. \tag{1.22}$$

The simplex tableau corresponding to this solution is

$$\begin{array}{ll}
\min & 775 + 4x_4 + 50x_5\\
\text{subject to} & x_1 - 5.25x_4 - 0.25x_5 = 15.5\\
& x_2 + 4x_4 + 2.5x_5 = 15.5\\
& x_3 + 1.75x_4 - 0.75x_5 = 0.
\end{array} \tag{1.23}$$
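The tableau entries can be checked numerically: the updated non-basic columns are $A_B^{-1} A_N$ and the reduced costs are $c_N - c_B A_B^{-1} A_N$. A quick verification with NumPy:

```python
import numpy as np

A_B = np.array([[2., 2, 2], [1, 1, 3], [3, 1, 5]])
A_N = np.array([[1., 3], [4, 0], [-3, -2]])
c_B = np.array([8., 42, 32])
c_N = np.array([186., 129])
b = np.array([62., 31, 62])

x_B = np.linalg.solve(A_B, b)      # basic solution: (15.5, 15.5, 0)
T = np.linalg.solve(A_B, A_N)      # updated non-basic columns of the tableau
r = c_N - c_B @ T                  # reduced costs: (4, 50)
print(x_B, c_B @ x_B, r)           # objective value is 775
```

The computed columns of `T` match the coefficients of $x_4$ and $x_5$ in the tableau, and the reduced costs 4 and 50 reappear in the group minimization problem below.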

The Smith Normal Form of the optimal basis $A_B$ can be verified to be

$$U_1 = \begin{pmatrix} 2 & 0 & -1\\ 1 & 1 & -1\\ 3 & 0 & -2 \end{pmatrix}, \quad S = \begin{pmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 4 \end{pmatrix}, \quad \text{and} \quad U_2 = \begin{pmatrix} 1 & -2 & -1\\ 0 & 1 & 0\\ 0 & 1 & -1 \end{pmatrix}. \tag{1.24}$$
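The factorization $S = U_1 A_B U_2$ is easy to verify by direct multiplication, together with the unimodularity of the transformation matrices:

```python
import numpy as np

A_B = np.array([[2, 2, 2], [1, 1, 3], [3, 1, 5]])
U1 = np.array([[2, 0, -1], [1, 1, -1], [3, 0, -2]])
U2 = np.array([[1, -2, -1], [0, 1, 0], [0, 1, -1]])

S = U1 @ A_B @ U2
print(S)                                        # diag(1, 2, 4)
# Unimodularity: both determinants have absolute value 1.
print(round(np.linalg.det(U1)), round(np.linalg.det(U2)))
```

Note that $\det(S) = 1 \cdot 2 \cdot 4 = 8 = |\det(A_B)|$, consistent with the unimodularity of $U_1$ and $U_2$.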


We then compute $U_1 A_N x_N \equiv U_1 b \pmod{S}$ as in (1.16), to obtain

$$\begin{array}{ll}
\min & 775 + 4x_4 + 50x_5\\
\text{subject to} & 5x_4 + 8x_5 \equiv 62 \pmod{1}\\
& 8x_4 + 5x_5 \equiv 31 \pmod{2}\\
& 9x_4 + 13x_5 \equiv 62 \pmod{4}\\
& (x_4, x_5) \in \mathbb{Z}_+ \times \mathbb{Z}_+.
\end{array} \tag{1.25}$$

The first row of (1.25) is removed since $S(1,1) = 1$, and (1.25) reduces to

$$\begin{array}{ll}
\min & 775 + 4x_4 + 50x_5\\
\text{subject to} & x_5 \equiv 1 \pmod{2}\\
& x_4 + x_5 \equiv 2 \pmod{4}\\
& (x_4, x_5) \in \mathbb{Z}_+ \times \mathbb{Z}_+.
\end{array} \tag{1.26}$$

We refer to (1.26) as the corner relaxation problem or group minimization

problem associated with (1.17).

To solve the corner relaxation problem associated with a MIP problem, a directed graph $G(V,E)$, called the group network, is created. This network representation was first introduced by Shapiro [32]. Each vertex of the network corresponds to a vector $(\delta_1, \delta_2, \ldots, \delta_m)$, where $\delta_i \in \{0, 1, \ldots, d_i - 1\}$ for all $i \in \{1, \ldots, m\}$. The number of vertices in the group network is therefore equal to the absolute value of the determinant of the optimal basis $\left(\det(A_B) = \prod_{i=1}^{m} d_i\right)$. For each column $A_j$ corresponding to a non-basic variable $j$, and for every vertex $g_k \in V$, where $k \in \{0, \ldots, \det(A_B) - 1\}$, we create a directed arc from the vertex $g_k$ to the vertex $(g_k + U_1 A_j) \bmod S$. The cost associated with this arc is equal to $\bar{c}_j$. Problem (1.16) then reduces to finding the shortest path

from the vertex $(0, 0, \ldots, 0)$ to the vertex corresponding to $U_1 b \bmod S$. Any appropriate shortest path algorithm can be used to solve this problem. The solution of the shortest path algorithm yields optimal values for the non-basic variables in an optimal solution of (1.16). We then substitute these values into (1.15) to obtain an integer solution of (1.7). Although this solution is integer, it is not necessarily nonnegative. However, if the obtained integer solution is indeed feasible (i.e., nonnegative), then it is an optimal solution for (1.7). We refer to [30] for more information and next show an example that illustrates the use of a shortest path algorithm to solve the group minimization problem.

Example 1.1 (continued): Since $\det(A_B) = 8$, the group network has 8 vertices that are arranged along 2 dimensions (because only two elements on the diagonal of matrix $S$ are different from one). For each of the eight vertices, we draw two arcs corresponding to the non-basic variables $x_4$ and $x_5$. We represent the arcs corresponding to $x_4$ by solid lines and the arcs corresponding to $x_5$ by dashed lines. The cost of each solid arc is $\bar{c}_4 = 4$ while the cost of each dashed arc is $\bar{c}_5 = 50$. In this network, we are looking for the shortest path from node $(0,0)$ to node $(1,2)$. Figure 1-2 illustrates the group network, where the source and destination nodes are highlighted in black.

Solving the shortest path problem, we obtain a shortest path from $(0,0)$ to $(1,2)$ that visits vertex $(0,1)$. This path is represented by heavy lines in Figure 1-2. The first arc corresponds to non-basic variable $x_4$ while the second arc corresponds to non-basic variable $x_5$. As a result, an optimal solution to the group minimization problem is $x_4 = 1$ and $x_5 = 1$.

Figure 1-2. The group network associated with the MIP problem in Example 1.1.

Substituting into (1.23), we obtain

$$x_1 = 21, \quad x_2 = 9, \quad x_3 = -1, \quad x_4 = 1, \quad \text{and} \quad x_5 = 1. \tag{1.27}$$

We thus have an integer solution to the MIP in (1.17) whose objective value is equal to 829. Although this solution is integer, it is not optimal for (1.17) since it is infeasible, as $x_3 < 0$. Further, since the group minimization problem is a relaxation of (1.17), the optimal value of (1.17) is at least 829.
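The walk through the group network can be reproduced with a short shortest-path computation (Dijkstra's algorithm here, since the arc costs are nonnegative) over the eight group elements, using the data of (1.26):

```python
import itertools
import heapq

moduli = (2, 4)
cols = {"x4": (0, 1), "x5": (1, 1)}   # columns of (1.26), taken mod (2, 4)
costs = {"x4": 4, "x5": 50}           # reduced costs of the non-basic variables
target = (1, 2)                       # right-hand side of (1.26)

# Dijkstra over the 8 group elements Z_2 x Z_4.
dist = {v: float("inf") for v in itertools.product(range(2), range(4))}
pred = {}
dist[(0, 0)] = 0
heap = [(0, (0, 0))]
while heap:
    d, v = heapq.heappop(heap)
    if d > dist[v]:
        continue
    for var, col in cols.items():
        w = tuple((v[k] + col[k]) % moduli[k] for k in range(2))
        if d + costs[var] < dist[w]:
            dist[w], pred[w] = d + costs[var], (v, var)
            heapq.heappush(heap, (dist[w], w))

# Recover the arc labels on the shortest (0,0) -> (1,2) path.
path, v = [], target
while v != (0, 0):
    v, var = pred[v]
    path.append(var)
print(dist[target], sorted(path))     # 54 ['x4', 'x5']
```

The path cost is $4 + 50 = 54$, so the group relaxation bound is $775 + 54 = 829$, matching the value obtained above.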

Given a MIP problem, the number of arcs that originate from each vertex in the group network of its corner relaxation is equal to the number of non-basic variables of its LPR. Further, the number of vertices in the group network is equal to the absolute value of the determinant of the optimal basis of its LPR. Therefore, the size of the group network is a direct function of the determinant of the optimal basis of the LPR of this MIP problem. Since the running time of shortest path algorithms is a function of the number of vertices and arcs of the network, it follows that the difficulty of solving the group relaxations of a MIP problem is intimately related to the maximum possible determinant (MPD) of the bases of its LPR.

Therefore, to determine the difficulty of solving group relaxations of UFLP, it is important to determine the MPD of the bases of its LPR.


CHAPTER 2
ON THE POLYHEDRAL STRUCTURE OF THE LINEAR PROGRAMMING RELAXATION OF THE UNCAPACITATED FACILITY LOCATION PROBLEM

2.1 Introduction

As mentioned in Section 1.4, the difficulty of solving the group relaxation of a MIP problem is directly related to the determinant of the optimal basis of its LPR. Therefore, it is important to determine the bases with MPD. It is also important to obtain information about the unimodular bases, since they yield integer solutions. This motivates the investigation of the polyhedral structure of the LPR of UFLP.

Assumptions: The formulation shown in (1.4) is the one for which we study bases. Unless otherwise mentioned, the variables in the different bases we discuss are arranged from left to right in the following order: x_{ij}, s_{ij}, y_i, then t_i, for all i ∈ F and j ∈ C. More precisely, x_{ij} comes before x_{i*j*} if i < i*, or if i = i* and j < j*. The same ordering is applied to the s_{ij}. Similarly, y_i comes before y_{i*} and t_i comes before t_{i*} if i < i*.

First we introduce two lemmas that will be used in the remainder of this chapter to

compute determinants and to obtain the inverse of block matrices.

Lemma 2.1 [33]: Let P ∈ R^{q×q}, Q ∈ R^{q×p}, R ∈ R^{p×q}, and V ∈ R^{p×p}. If V is an invertible matrix, then

det( [P  Q; R  V] ) = det(V) det(P − Q V^{-1} R).   (2.1)
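Identity (2.1) is easy to verify numerically; the following is a minimal sketch with random matrices (NumPy assumed, V perturbed toward the identity to keep it invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
q, p = 3, 2
P = rng.standard_normal((q, q))
Q = rng.standard_normal((q, p))
R = rng.standard_normal((p, q))
V = rng.standard_normal((p, p)) + 2 * np.eye(p)  # keep V invertible

M = np.block([[P, Q], [R, V]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(V) * np.linalg.det(P - Q @ np.linalg.inv(V) @ R)
print(np.isclose(lhs, rhs))  # True
```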

Lemma 2.2 [34]: Let P ∈ R^{q×q}, Q ∈ R^{q×p}, R ∈ R^{p×q}, and V ∈ R^{p×p}. If P and V are invertible matrices, then

[P  Q; R  V]^{-1} = [ (P − Q V^{-1} R)^{-1}   −(P − Q V^{-1} R)^{-1} Q V^{-1} ;  −(V − R P^{-1} Q)^{-1} R P^{-1}   (V − R P^{-1} Q)^{-1} ].   (2.2)
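The block-inverse formula (2.2) can likewise be checked numerically; a minimal sketch (P and V perturbed toward multiples of the identity to keep them invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
q, p = 3, 2
P = rng.standard_normal((q, q)) + 3 * np.eye(q)  # keep P invertible
V = rng.standard_normal((p, p)) + 3 * np.eye(p)  # keep V invertible
Q = rng.standard_normal((q, p))
R = rng.standard_normal((p, q))

Pi, Vi = np.linalg.inv(P), np.linalg.inv(V)
SP = np.linalg.inv(P - Q @ Vi @ R)  # inverse of the Schur complement of V
SV = np.linalg.inv(V - R @ Pi @ Q)  # inverse of the Schur complement of P
block_inv = np.block([[SP, -SP @ Q @ Vi], [-SV @ R @ Pi, SV]])

M = np.block([[P, Q], [R, V]])
print(np.allclose(block_inv, np.linalg.inv(M)))  # True
```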

A basis of A_{n_C + n_C n_F + n_F, 2 n_C n_F + 2 n_F} is a square submatrix of A_{n_C + n_C n_F + n_F, 2 n_C n_F + 2 n_F} that is invertible and that has n_C + n_C n_F + n_F rows and columns. The total number T of bases that may be obtained from matrix A is bounded above by

T ≤ C_{2 n_C n_F + 2 n_F}^{n_C + n_C n_F + n_F} = (2 n_C n_F + 2 n_F)! / [ (n_C + n_C n_F + n_F)! (n_C n_F + n_F − n_C)! ].   (2.3)

Example 2.1: For UFLP with n_C = 4 and n_F = 4, the total number of different bases of the LPR is less than or equal to C_{40}^{24} ≈ 6.2852 × 10^{10}.
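The bound (2.3) can be evaluated directly; a small sketch using Python's math.comb (the helper name is ours):

```python
from math import comb

def basis_count_bound(n_C, n_F):
    """Upper bound (2.3) on the number of bases of the LPR of UFLP:
    the number of ways to choose n_C + n_C*n_F + n_F of the
    2*n_C*n_F + 2*n_F columns of A."""
    cols = 2 * n_C * n_F + 2 * n_F   # x, s, y, and t columns
    rows = n_C + n_C * n_F + n_F     # size of a basis
    return comb(cols, rows)

# Example 2.1: n_C = n_F = 4 gives C(40, 24) ~ 6.2852e10.
print(basis_count_bound(4, 4))  # 62852101650
```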

Let B denote an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F). For all i ∈ F, we define M_x^i and M_s^i to be the sets of indices j of the variables x_{ij} and s_{ij} whose associated columns are included in B. Similarly, we denote by M_y and M_t the sets of indices of the variables y_i and t_i whose associated columns are included in B. Finally, let m_x^i, m_s^i, m_y, and m_t denote the cardinalities of the sets M_x^i, M_s^i, M_y, and M_t, and let m_x and m_s be the sums of the m_x^i and m_s^i, respectively, for i ∈ F, i.e.,

m_x = Σ_{i∈F} m_x^i = Σ_{i∈F} |M_x^i|,  m_s = Σ_{i∈F} m_s^i = Σ_{i∈F} |M_s^i|,  m_y = |M_y|, and m_t = |M_t|.   (2.4)

Example 2.2: For UFLP with n_C = 4 and n_F = 4, the constraint matrix A is given in Figure 2-1. The basis B presented in Figure 2-2 is obtained by selecting the columns marked with (•) in Figure 2-1. Using our notation, we write


[The 24×40 matrix A, with column groups x_{ij}, s_{ij} (for i = 1,…,4), y_i, and t_i, and with the selected columns marked with (•), is not reproduced here.]

Figure 2-1. Matrix A in Example 2.2.

[The 24×24 submatrix B formed by the selected columns is not reproduced here.]

Figure 2-2. Basis B that was obtained from matrix A in Example 2.2.


• M_x^1 = {1,2,3}, M_x^2 = {1,2,4}, M_x^3 = {1,3,4}, M_x^4 = {2,3,4},
• M_s^1 = {4}, M_s^2 = {3}, M_s^3 = {2}, M_s^4 = {1}, and
• M_y = M_t = {1,2,3,4}.
• Further, m_x^i = 3 and m_s^i = 1 for all i ∈ {1,…,4},
• m_x = 12, and m_s = m_y = m_t = 4.

To simplify the study of the different bases of the LPR of UFLP, we divide the discussion into three main sections: m_y + m_t < n_F, m_y + m_t = n_F, and m_y + m_t > n_F.

2.2 Case 1: m_y + m_t < n_F

Lemma 2.3: Let B be an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F) that is such that m_y + m_t < n_F. Then det(B) = 0.

Proof: Let B be any submatrix of A of size (n_C + n_C n_F + n_F). We now consider the last n_F rows of B. Denote by r_i the number of elements that are equal to one in the i-th of these rows. Clearly r_i ∈ {0,1,2} for all i ∈ {1,…,n_F}; see Figure 1-1. Further, Σ_{i=1}^{n_F} r_i = m_y + m_t < n_F. Therefore, there exists i with r_i = 0, i.e., the i-th of the last n_F rows of B is identically zero. It follows that det(B) = 0 since B contains a zero row.

Example 2.3: When n_C = 4 and n_F = 4, any submatrix B of A of size (24,24) that has M_y = {1,2} and M_t = {4} is singular because the 23rd row is identically zero, i.e., B(n_C + n_C n_F + 3, ·) = B(23, ·) = 0_{1,24}.

Similarly, we obtain the following result.

Lemma 2.4: Let B be an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F) such that F − (M_y ∪ M_t) ≠ ∅. Then det(B) = 0.

Proof: We consider the last n_F rows of B. If i ∈ F − (M_y ∪ M_t), then B(n_C + n_C n_F + i, ·) is identically zero, showing that det(B) = 0.

If the condition of Lemma 2.3, m_y + m_t < n_F, holds, then the condition of Lemma 2.4, F − (M_y ∪ M_t) ≠ ∅, also holds. Therefore, Lemma 2.4 is a strict generalization of Lemma 2.3 because it can also be applied when m_y + m_t ≥ n_F.

Example 2.3-continued: Any submatrix B of A of size (24,24) that has M_y = {1,2} and M_t = {1,4} is singular because B(n_C + n_C n_F + 3, ·) = B(23, ·) = 0_{1,24}.
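The singularity test of Lemma 2.4 reduces to a set computation; a minimal sketch (the function name is ours):

```python
def singular_by_lemma_2_4(F, M_y, M_t):
    """Lemma 2.4 (a sketch): if some facility i in F has neither its y_i
    nor its t_i column in B, the row n_C + n_C*n_F + i of B is identically
    zero, so det(B) = 0."""
    return len(set(F) - (set(M_y) | set(M_t))) > 0

# Example 2.3 and its continuation: facility 3 is never covered.
print(singular_by_lemma_2_4({1, 2, 3, 4}, {1, 2}, {4}))     # True
print(singular_by_lemma_2_4({1, 2, 3, 4}, {1, 2}, {1, 4}))  # True
print(singular_by_lemma_2_4({1, 2, 3, 4}, {1, 2}, {3, 4}))  # False (all facilities covered)
```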

2.3 Case 2: m_y + m_t = n_F

Before proceeding with the discussion of the case where m_y + m_t = n_F, we introduce new notation. In particular, we introduce an elementary row operation (ERO) that will modify the structure of B by eliminating some of its elements (converting nonzero elements to zero).


In the matrix A, every column corresponding to a variable x_{ij} has exactly two components equal to one: one in the j-th row and one in the (i n_C + j)-th row; see Figure 1-1. Columns corresponding to variables s_{ij}, however, have exactly one component equal to one, which is located in the same row as the second one component of the x_{ij} column, i.e., in the (i n_C + j)-th row. In summary, if A(·, x_{ij}) and A(·, s_{ij}) denote the columns of matrix A corresponding to the variables x_{ij} and s_{ij}, then

∀ i ∈ F, j ∈ C:  A(h, x_{ij}) = 1 if h ∈ {j, i n_C + j},   (2.5.a)
                              = 0 otherwise,                (2.5.b)

and

∀ i ∈ F, j ∈ C:  A(h, s_{ij}) = 1 if h = i n_C + j,   (2.6.a)
                              = 0 otherwise.           (2.6.b)

Example 2.2-continued: Column x_{24} has exactly two components equal to one, in the 4th and the 12th rows of A. Also, column s_{24} has a single one, in the 12th row.
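The column structure (2.5)-(2.6) can be reproduced directly; the helper below is a sketch of ours (rows are 1-indexed in the text, 0-indexed internally), and it checks the x_{24} and s_{24} columns of Example 2.2:

```python
import numpy as np

def uflp_xs_columns(n_C, n_F):
    """Build the x- and s-columns of A per (2.5)-(2.6); the y and t
    columns are omitted from this sketch."""
    n_rows = n_C + n_C * n_F + n_F
    x_cols, s_cols = {}, {}
    for i in range(1, n_F + 1):
        for j in range(1, n_C + 1):
            x = np.zeros(n_rows, dtype=int)
            x[j - 1] = 1               # one in row j
            x[i * n_C + j - 1] = 1     # one in row i*n_C + j
            x_cols[i, j] = x
            s = np.zeros(n_rows, dtype=int)
            s[i * n_C + j - 1] = 1     # single one in row i*n_C + j
            s_cols[i, j] = s
    return x_cols, s_cols

x_cols, s_cols = uflp_xs_columns(4, 4)
# Example 2.2-continued: x_24 has ones in rows 4 and 12; s_24 in row 12.
print(np.flatnonzero(x_cols[2, 4]) + 1)  # rows 4 and 12
print(np.flatnonzero(s_cols[2, 4]) + 1)  # row 12
```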

For every x_{ij} column that has been selected in B, i ∈ F, j ∈ C, we may subtract from the row that contains its first one component the row that contains its second one component. The corresponding ERO is then described as

∀ i ∈ F, h ∈ M_x^i:  B(h, ·) ← B(h, ·) − B(i n_C + h, ·).   (2.7)

ERO (2.7) can also be obtained by multiplying the matrix B on the left with a simple matrix. More precisely, we multiply B with the matrix ER_1^i to perform elimination in the first upper n_C components of the columns x_{ij}, where

∀ i ∈ F:  ER_1^i(h, h) = 1 if 1 ≤ h ≤ n_C + n_C n_F + n_F,   (2.8.a)
          ER_1^i(h, i n_C + h) = −1 if h ∈ M_x^i,            (2.8.b)
          ER_1^i(h, k) = 0 otherwise.                        (2.8.c)

Note that det(ER_1^i) = 1. Further, applying this transformation for all i ∈ F, we obtain

B̄ = (Π_{i∈F} ER_1^i) B.   (2.9)

Clearly,

det(B̄) = det(B).   (2.10)

Because ERO (2.7) is applied only to rows B(h, ·) with h ∈ M_x^i, it only affects the first upper n_C rows, while the remaining n_C n_F + n_F rows remain unchanged. As a result, every one component in the first n_C rows of the columns associated with the x_{ij} is eliminated, and a block of zeros, 0_{n_C, m_x}, is obtained. For the columns associated with the variables s_{ij}: if B has an s_{ij} column with the same indices i and j as an x_{ij} column that also belongs to B, then (2.7) causes B̄(j, s_{ij}) to take on a coefficient −1 instead of zero. Otherwise B̄(j, s_{ij}) remains unchanged and so equal to zero. In summary,

B̄(1:n_C, 1:m_x) = 0_{n_C, m_x}   (2.11)

and

∀ i ∈ F, j ∈ M_s^i:  B̄(j, s_{ij}) = −1 if j ∈ M_x^i,   (2.12.a)
                                   = 0 if j ∉ M_x^i.     (2.12.b)


Lemma 2.5: Let B be an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F) such that m_y + m_t = n_F and F − (M_y ∪ M_t) = ∅. Then det(B) ∈ {0, 1}.

Proof: If we consider the last n_F rows of B, since m_y + m_t = n_F and F − (M_y ∪ M_t) = ∅, we know that every row B(n_C + n_C n_F + h, ·) has exactly one component that is equal to one, ∀ h ∈ F. With column permutations we can obtain an identity matrix of size (n_F, n_F) in the lower right corner of B (note that the column permutations only affect the sign of the determinant, while our discussion is concerned with the absolute value of the determinant), i.e.,

B(n_C + n_C n_F + 1 : n_C + n_C n_F + n_F, n_C + n_C n_F + 1 : n_C + n_C n_F + n_F) = I_{n_F, n_F}.   (2.13)

Moreover, the first n_C + n_C n_F columns of B are a combination of x_{ij} and s_{ij} columns (since the y_i and t_i columns form the last n_F columns of B, as m_y + m_t = n_F). It follows that the last n_F rows of the first n_C + n_C n_F columns are all zeros; in other words,

B(n_C + n_C n_F + 1 : n_C + n_C n_F + n_F, 1 : n_C + n_C n_F) = 0_{n_F, n_C + n_C n_F}.   (2.14)

We now use these observations to decompose B into blocks of matrices to ease the calculation of its determinant,

det(B) = det( [B^1_{n_C+n_C n_F, n_C+n_C n_F}  B^2_{n_C+n_C n_F, n_F} ; B^3_{n_F, n_C+n_C n_F}  B^4_{n_F, n_F}] ).   (2.15)


Vertically, [B^1; B^3] corresponds to the x_{ij} and s_{ij} columns, while [B^2; B^4] is associated with the y_i and t_i columns. The rows are divided into two blocks, one that has the upper n_C + n_C n_F rows and the other containing the lower n_F rows.

Using (2.13) and (2.14), we can substitute for B^3 and B^4 as follows:

det(B) = det( [B^1_{n_C+n_C n_F, n_C+n_C n_F}  B^2_{n_C+n_C n_F, n_F} ; 0_{n_F, n_C+n_C n_F}  I_{n_F, n_F}] ).   (2.16)

Using Lemma 2.1, we obtain that

det(B) = det(I_{n_F, n_F}) det(B^1_{n_C+n_C n_F, n_C+n_C n_F} − B^2_{n_C+n_C n_F, n_F} I_{n_F, n_F}^{-1} 0_{n_F, n_C+n_C n_F}).   (2.17)

This expression reduces to

det(B) = det(B^1_{n_C+n_C n_F, n_C+n_C n_F}).   (2.18)
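The reduction (2.16)-(2.18) is the standard determinant identity for block upper triangular matrices; a small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, nF = 4, 3
B1 = rng.integers(-1, 2, (n1, n1))   # stands in for B^1
B2 = rng.integers(0, 2, (n1, nF))    # stands in for B^2
B = np.block([[B1, B2],
              [np.zeros((nF, n1), dtype=int), np.eye(nF, dtype=int)]])
# The zero block below B1 makes B block upper triangular, so
# det(B) = det(I) * det(B1) = det(B1), exactly as in (2.18).
print(np.isclose(np.linalg.det(B), np.linalg.det(B1)))  # True
```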

In (2.18), B^1 is composed of the first n_C + n_C n_F rows of a set of x_{ij} and s_{ij} columns only, i ∈ F, j ∈ C. Because the total number of x_{ij} columns in A is n_C n_F and the total number of s_{ij} columns in A is also equal to n_C n_F (which is less than n_C + n_C n_F), the columns of B^1 cannot be composed solely of x_{ij} columns or solely of s_{ij} columns.

Next we study different cases based on the choice of x_{ij} and s_{ij} columns in B^1.

Case 1: All x_{ij} columns are included in B^1, i.e., m_x = n_C n_F and m_s = n_C.

We decompose B^1 into blocks of matrices to ease the calculation of its determinant as follows:

det(B^1) = det( [B^{1x}_{n_C, n_C n_F}  B^{1s}_{n_C, n_C} ; B^{1x}_{n_C n_F, n_C n_F}  B^{1s}_{n_C n_F, n_C}] ).   (2.19)


In (2.19), the matrices [B^{1x}; B^{1x}] contain all the x_{ij} columns, while the matrices [B^{1s}; B^{1s}] contain all the s_{ij} columns. The upper block [B^{1x}  B^{1s}] contains the upper n_C rows of B^1.

Since all x_{ij} columns are included in B^1, B^{1x}_{n_C n_F, n_C n_F} = I_{n_C n_F, n_C n_F}; see (2.5). We apply ERO (2.7) to (2.19). As a result, we obtain B̄^1 instead of B^1, with det(B̄^1) = det(B^1). Since ERO (2.7) transforms B^{1x}_{n_C, n_C n_F} into 0_{n_C, n_C n_F}, we obtain

det(B^1) = det(B̄^1) = det( [0_{n_C, n_C n_F}  B̄^{1s}_{n_C, n_C} ; I_{n_C n_F, n_C n_F}  B^{1s}_{n_C n_F, n_C}] ).   (2.20)

We then permute the positions of the blocks of B̄^1 so that the invertible square block I_{n_C n_F, n_C n_F} is moved to the lower right corner of B̄^1, i.e.,

det(B̄^1) = det( [B̄^{1s}_{n_C, n_C}  0_{n_C, n_C n_F} ; B^{1s}_{n_C n_F, n_C}  I_{n_C n_F, n_C n_F}] ).   (2.21)

Using Lemma 2.1 we obtain that

det(B̄^1) = det(I_{n_C n_F, n_C n_F}) det(B̄^{1s}_{n_C, n_C} − 0_{n_C, n_C n_F} I_{n_C n_F, n_C n_F}^{-1} B^{1s}_{n_C n_F, n_C}) = det(B̄^{1s}_{n_C, n_C}).   (2.22)

Since all the x_{ij} columns are included in B^1, all the s_{ij} columns have the same indices as some of the x_{ij} columns. It follows from (2.12.a) that B̄^{1s}_{n_C, n_C} is a matrix where all elements are zeros except for exactly n_C components that are equal to −1, one in each column. It is easily verified that, depending on the arrangement of the −1 components in B̄^{1s}_{n_C, n_C}, det(B̄^{1s}_{n_C, n_C}) will be either 0 or 1. In particular, det(B̄^{1s}_{n_C, n_C}) will be equal to 1 only if there is a −1 component in every row and in every column of B̄^{1s}_{n_C, n_C}.


This will happen when the indices j of the s_{ij} columns cover the range {1,…,n_C}. Using (2.10), (2.18) and (2.22), we therefore conclude that

det(B) = det(B^1) = det(B̄^{1s}_{n_C, n_C}) = 1 if C − ∪_{i∈F} M_s^i = ∅,   (2.23.a)
                                            = 0 otherwise.                 (2.23.b)
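The determinant behavior of B̄^{1s}_{n_C, n_C} in Case 1 can be checked numerically; a sketch with hypothetical data (one −1 per column, placed in a chosen row):

```python
import numpy as np

def det_case1(n_C, s_rows):
    """Case 1 sketch: build an n_C x n_C matrix with one -1 per column,
    the -1 of column k sitting in row s_rows[k], and return |det|."""
    M = np.zeros((n_C, n_C))
    for col, row in enumerate(s_rows):
        M[row, col] = -1.0
    return round(abs(np.linalg.det(M)))

# Rows 0..3 all covered (every row gets a -1) -> |det| = 1, as in (2.23.a);
# a repeated row leaves another row identically zero -> |det| = 0.
print(det_case1(4, [3, 2, 1, 0]))  # 1
print(det_case1(4, [3, 2, 1, 1]))  # 0
```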

Case 2: All s_{ij} columns are included in B^1, i.e., m_s = n_C n_F and m_x = n_C.

Decomposing B^1 as described in Case 1, we obtain

det(B^1) = det( [B^{1x}_{n_C, n_C}  B^{1s}_{n_C, n_C n_F} ; B^{1x}_{n_C n_F, n_C}  B^{1s}_{n_C n_F, n_C n_F}] ),   (2.24)

where the blocks have sizes different from those in (2.19).

Since every s_{ij} column has a single component equal to one and all s_{ij} columns are included in B^1, from (2.6) we know that B^{1s}_{n_C, n_C n_F} = 0_{n_C, n_C n_F} and B^{1s}_{n_C n_F, n_C n_F} = I_{n_C n_F, n_C n_F}. Using these observations in (2.24), we obtain

det(B^1) = det( [B^{1x}_{n_C, n_C}  0_{n_C, n_C n_F} ; B^{1x}_{n_C n_F, n_C}  I_{n_C n_F, n_C n_F}] ).   (2.25)

We then apply Lemma 2.1 to obtain

det(B^1) = det(I_{n_C n_F, n_C n_F}) det(B^{1x}_{n_C, n_C} − 0_{n_C, n_C n_F} I_{n_C n_F, n_C n_F}^{-1} B^{1x}_{n_C n_F, n_C}) = det(B^{1x}_{n_C, n_C}).   (2.26)

Now observe that B^{1x}_{n_C, n_C} is a square matrix that has exactly n_C components that are equal to one, one in each column. Similarly to (2.23) in Case 1, we write that

det(B) = det(B^1) = det(B^{1x}_{n_C, n_C}) = 1 if C − ∪_{i∈F} M_x^i = ∅,   (2.27.a)
                                           = 0 otherwise.                 (2.27.b)


Case 3: Not all the x_{ij} nor all the s_{ij} columns are included in B^1, i.e., m_s + m_x = n_C + n_C n_F, n_C ≤ m_s < n_C n_F, n_C ≤ m_x < n_C n_F, and there is in B^1 a subset of the s_{ij} columns that can be combined with the x_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F) in the lower left square corner of B^1, i.e., we can obtain an identity matrix in B^1(n_C + 1 : n_C + n_C n_F, 1 : n_C n_F) that is solely composed of x_{ij} and s_{ij} columns.

As mentioned before, every x_{ij} column has two components that are equal to one. Denote by V_x the set of indices of the rows that contain the second one component among the x_{ij} columns in B^1. Also, let V_s denote the set of indices of the rows that contain the one element among the s_{ij} columns in B^1. Since, for any given i, M_x^i and M_s^i contain all the indices j of the x_{ij} and s_{ij} columns that belong to B^1, using (2.5) and (2.6) we obtain

V_x = ∪_{i∈F} ∪_{j∈M_x^i} {i n_C + j}   (2.28)

and

V_s = ∪_{i∈F} ∪_{j∈M_s^i} {i n_C + j}.   (2.29)

An identity matrix of size (n_C n_F, n_C n_F) in the lower left corner of B^1 can be obtained only if we have exactly one component in each row k, for k ∈ {n_C + 1, …, n_C + n_C n_F} (regardless of whether it is associated with x_{ij} or s_{ij}). In other words, we will see such an identity matrix if

{n_C + 1, …, n_C + n_C n_F} − (V_x ∪ V_s) = ∅.   (2.30)


When creating the above-mentioned identity matrix, we first select all the x_{ij} columns present in B^1, and then add s_{ij} columns when needed. Define V_xs to be the set of indices of the rows that have no one component associated with any of the x_{ij} columns. We need an s_{ij} column for each of these rows, where

V_xs = {n_C + 1, …, n_C + n_C n_F} − V_x.   (2.31)

For all i ∈ F, we define M_xs^i ⊆ M_s^i to be the set of indices j of the s_{ij} columns that will be combined with the x_{ij} columns to obtain the aforementioned identity matrix, and denote by M̄_s^i the set of indices j of the remaining s_{ij} columns. Note that M̄_s^i = M_s^i − M_xs^i. Further, let m_xs denote the sum of the cardinalities of the M_xs^i, i.e., m_xs = Σ_{i∈F} |M_xs^i|. We have that m_x + m_xs = n_C n_F.

Let V_xs = {v_1, …, v_{m_xs}}; for k ∈ {1, …, m_xs}, v_k denotes an index of a row that has no one component associated with any of the x_{ij} columns. For every v_k, we can use (2.6) to determine the indices i and j of an s_{ij} column that gives a one component in the row with index v_k. Those s_{ij} columns will be selected to complete the desired identity matrix and hence, they are associated with the M_xs^i. Let

j ∈ M_xs^i, where i = ⌊v / n_C⌋, j = v − i n_C, ∀ v ∈ V_xs.   (2.32)

We perform column permutations and decompose B^1 in such a way that the left blocks contain the s_{ij} columns of the M_xs^i and all the x_{ij} columns, while the right block is composed of the remaining s_{ij} columns:

det(B^1) = det( [B^{1xs}_{n_C, n_C n_F}  B^{1s̄}_{n_C, n_C} ; B^{1xs}_{n_C n_F, n_C n_F}  B^{1s̄}_{n_C n_F, n_C}] ).   (2.33)

We then apply ERO (2.7) to (2.33), so as to obtain B̄^1 instead of B^1. Given the condition that was set in the definition of Case 3 and our above selection of the columns, we have that B^{1xs}_{n_C n_F, n_C n_F} = I_{n_C n_F, n_C n_F} and B̄^{1xs}_{n_C, n_C n_F} = 0_{n_C, n_C n_F}, and so

det(B̄^1) = det( [0_{n_C, n_C n_F}  B̄^{1s̄}_{n_C, n_C} ; I_{n_C n_F, n_C n_F}  B^{1s̄}_{n_C n_F, n_C}] ).   (2.34)

We then follow the steps that we used in Case 1. The result is similar, except that because not all of the x_{ij} columns are included in B^1, we cannot claim that (2.12.a) holds. It may be that some of the s_{ij} columns share indices with the x_{ij} columns (2.12.a), or not (2.12.b). In B̄^{1s̄}_{n_C, n_C}, the number of components that are equal to −1 can only be shown to be less than or equal to n_C, and no more than one nonzero element is present in any single column. It follows that the value det(B̄^{1s̄}_{n_C, n_C}) cannot be greater than one. If all the s_{ij} columns included in B̄^{1s̄}_{n_C, n_C} (the columns associated with the M̄_s^i) share their indices j and i with x_{ij} columns in B̄^{1xs}_{n_C, n_C n_F}, then we have exactly n_C components that are equal to −1, and whether det(B̄^{1s̄}_{n_C, n_C}) = 1 then depends on the arrangement of these components, i.e.,

det(B) = 1 if, ∀ i ∈ F and j ∈ M̄_s^i, there is an x_{ij} column in B^1 (j ∈ M_x^i), and ∪_{i∈F} M̄_s^i = C,   (2.35.a)
       = 0 otherwise.   (2.35.b)


Case 4: Not all the x_{ij} nor all the s_{ij} columns are included in B^1, i.e., m_s + m_x = n_C + n_C n_F, n_C ≤ m_s < n_C n_F, n_C ≤ m_x < n_C n_F, and there is no way to find in B^1 a subset of the s_{ij} columns that can be combined with the x_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F) in the lower left square corner of B^1, i.e., we cannot obtain an identity matrix in B^1(n_C + 1 : n_C + n_C n_F, 1 : n_C n_F) that is solely composed of x_{ij} and s_{ij} columns.

Compared to Case 3, we have at least one row that has no one component corresponding to any of the x_{ij} or s_{ij} columns:

{n_C + 1, …, n_C + n_C n_F} − (V_x ∪ V_s) ≠ ∅.   (2.36)

In this case, because there is no way to make B^{1xs}_{n_C n_F, n_C n_F} = I_{n_C n_F, n_C n_F}, there is at least one row of B^{1xs}_{n_C n_F, n_C n_F} that has all of its elements equal to zero, and hence B^1 is singular, i.e., det(B) = 0.

Example 2.4: For n_C = 4 and n_F = 4, we illustrate in Figure 2-3 the different cases considered in the proof of Lemma 2.5. For each of these cases, we give an example of a submatrix B of A satisfying the corresponding conditions. Submatrix B is obtained by selecting the columns marked with (•).

Proposition 2.1: In the first three cases of the proof of Lemma 2.5, we obtained conditions that make det(B) = 1: (2.23.a), (2.27.a), and (2.35.a). We can use these conditions to construct unimodular bases of the LPR of UFLP. Algorithms UFLP-UNI-1,


[The seven column-selection diagrams (a)-(g) are not reproduced here.]

Figure 2-3. Illustration of the different cases encountered in the proof of Lemma 2.5. a) Case 1, det(B) = 1, b) Case 1, det(B) = 0, c) Case 2, det(B) = 1, d) Case 2, det(B) = 0, e) Case 3, det(B) = 1, f) Case 3, det(B) = 0, and g) Case 4, det(B) = 0.


UFLP-UNI-2, and UFLP-UNI-3 (corresponding to Cases 1 to 3) describe how to select variables x_{ij}, s_{ij}, y_i, and t_i with indices i and j that correspond to M_x^i, M_s^i, M_y, and M_t to obtain unimodular bases.

Algorithm UFLP-UNI-1.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that yield a basis B with det(B) = 1.
1: ∀ i ∈ F, M_x^i = C
2: ∀ i ∈ F, let M_s^i ⊆ C such that Σ_{i∈F} |M_s^i| = n_C and C − ∪_{i∈F} M_s^i = ∅
3: M_y ⊆ F
4: M_t = F − M_y

Algorithm UFLP-UNI-2.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that yield a basis B with det(B) = 1.
1: ∀ i ∈ F, let M_x^i ⊆ C such that Σ_{i∈F} |M_x^i| = n_C and C − ∪_{i∈F} M_x^i = ∅
2: ∀ i ∈ F, M_s^i = C
3: M_y ⊆ F
4: M_t = F − M_y

Algorithm UFLP-UNI-3.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that yield a basis B with det(B) = 1.
1: ∀ i ∈ F, let M_x^i ⊆ C such that ∪_{i∈F} M_x^i = C
2: ∀ i ∈ F, M_xs^i = C − M_x^i
3: ∀ i ∈ F, let M̄_s^i ⊆ M_x^i such that ∪_{i∈F} M̄_s^i = C and Σ_{i∈F} |M̄_s^i| = n_C
4: ∀ i ∈ F, M_s^i = M_xs^i ∪ M̄_s^i
5: M_y ⊆ F
6: M_t = F − M_y
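As a sanity check, Algorithm UFLP-UNI-1 can be run on a tiny instance and the resulting basis tested for unimodularity. The sketch below assumes the constraint structure used throughout this chapter (rows j: Σ_i x_ij = 1; rows i n_C + j: x_ij + s_ij − y_i = 0; rows n_C + n_C n_F + i: y_i + t_i = 1); the helper names are ours:

```python
import numpy as np
from itertools import product

def build_A(n_C, n_F):
    """Constraint matrix of the LPR of UFLP (a sketch): columns ordered
    x, s, y, t as in the Assumptions; rows as described in the lead-in."""
    n_rows = n_C + n_C * n_F + n_F
    col, k = {}, 0
    for name in ("x", "s"):
        for i, j in product(range(1, n_F + 1), range(1, n_C + 1)):
            col[name, i, j] = k
            k += 1
    for name in ("y", "t"):
        for i in range(1, n_F + 1):
            col[name, i] = k
            k += 1
    A = np.zeros((n_rows, k))
    for i, j in product(range(1, n_F + 1), range(1, n_C + 1)):
        A[j - 1, col["x", i, j]] = 1              # assignment row j
        A[i * n_C + j - 1, col["x", i, j]] = 1    # linking row i*n_C + j
        A[i * n_C + j - 1, col["s", i, j]] = 1
        A[i * n_C + j - 1, col["y", i]] = -1
    for i in range(1, n_F + 1):
        A[n_C + n_C * n_F + i - 1, col["y", i]] = 1
        A[n_C + n_C * n_F + i - 1, col["t", i]] = 1
    return A, col

# UFLP-UNI-1 with n_C = n_F = 2: M_x^i = C = {1,2}, M_s^1 = {1},
# M_s^2 = {2} (n_C s-columns covering C), M_y = {1}, M_t = {2}.
A, col = build_A(2, 2)
chosen = ([col["x", i, j] for i in (1, 2) for j in (1, 2)]
          + [col["s", 1, 1], col["s", 2, 2]] + [col["y", 1], col["t", 2]])
B = A[:, chosen]
print(round(abs(np.linalg.det(B))))  # 1
```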


2.4 Case 3: m_y + m_t > n_F

Lemmas 2.3, 2.4 and 2.5 discuss situations where m_y + m_t ≤ n_F. We now investigate submatrices for which m_y + m_t > n_F. When performing this study, it suffices to consider situations where F − (M_y ∪ M_t) = ∅; otherwise B is singular, see Lemma 2.4. In the following discussion, we will have to consider extra columns corresponding to the y_i and t_i variables as compared to the cases we considered above. We first consider the effect of ERO (2.7) on such columns y_i and t_i.

When m_y + m_t > n_F, we will first permute the columns to obtain an identity matrix of size (n_F, n_F) in the lower right square corner of B. We choose this identity matrix to be composed of all the t_i columns, supplemented by y_i columns when necessary. Let M_ty denote the set of indices i of the y_i columns that will supplement the t_i columns, and denote by M̄ the set of indices i of the remaining y_i columns. Finally, let m_ty and m̄ denote the cardinalities of M_ty and M̄, i.e.,

M_ty = F − M_t,   (2.37)

M̄ = M_y − M_ty,   (2.38)

and

m_ty = |M_ty|, m̄ = |M̄|.   (2.39)

Next, we select all the x_{ij} columns and add, if needed, some s_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F), as we did in Case 3 of the proof of Lemma 2.5 (we use the same notation here). Then we permute the columns again so that the final arrangement of the columns from left to right is as follows: the y_i columns that are associated with M̄, the s_{ij} columns that are associated with the M̄_s^i, the x_{ij} and the s_{ij} columns corresponding to the M_xs^i, then the t_i columns and the y_i columns corresponding to M_ty. Figure 2-4 describes the final arrangement of columns. The sets of columns and the number of columns in each set are given in the first and second rows, respectively.

y_i associated with M̄, then s_{ij} corresponding to the M̄_s^i | s_{ij} corresponding to the M_xs^i, and all x_{ij} | y_i associated with M_ty, and all t_i
m̄ + (m_s − m_xs) = n_C | m_x + m_xs = n_C n_F | m_ty + m_t = n_F

Figure 2-4. Final arrangement of the columns included in B if m_y + m_t > n_F.

The first n_C components of every y_i column are zeros. There are exactly n_C components equal to −1 and exactly one component equal to one in the rest of the column, i.e.,

∀ i ∈ F:  y_i(h) = −1 if h ∈ {i n_C + k : k ∈ C},   (2.40.a)
                 = 1 if h = n_C + n_C n_F + i,      (2.40.b)
                 = 0 otherwise.                     (2.40.c)

For every y_i column in B, ERO (2.7) subtracts a row that has a −1 component (2.40.a) from a row that has a zero component (2.40.c). ERO (2.7) is applied only to the rows with indices j that correspond to x_{ij} columns in B, i.e., ∀ j ∈ M_x^i; see (2.8). This leads to a specific structure of the first n_C rows of the y_i columns after ERO (2.7). Let the submatrix G_{n_C, m_y} represent the upper n_C rows of the y_i columns in B̄. For all i ∈ M_y, the structure of the submatrix G_{n_C, m_y} reflects the x_{ij} columns that have been included in B as follows: for every x_{ij} column that is in B, i.e., j ∈ M_x^i, G(j, i) = 1, and for every x_{ij} that is not in B, i.e., j ∉ M_x^i, G(j, i) = 0 (in this case, s_{ij} is in B), i.e.,

∀ i ∈ M_y:  G(h, i) = 1 if h ∈ M_x^i,   (2.41.a)
                    = 0 otherwise.       (2.41.b)

It should be noted that the column arrangement given in Figure 2-4 locates the variables y_i in two different parts of the matrix B̄. Therefore, a vertical decomposition is also applied to G_{n_C, m_y}. We denote by G^1_{n_C, m̄} and G^2_{n_C, m_ty} the submatrices made of the first n_C rows of the y_i columns corresponding to M̄ and to M_ty, respectively.

Further, the first n_C columns of B̄ (see Figure 2-4) are composed of y_i columns and of the s_{ij} columns that are associated with the M̄_s^i. We define the submatrix L_{n_C, m_s − m_xs} to denote the first n_C rows of the s_{ij} columns corresponding to the M̄_s^i (the s_{ij} columns that are part of the first n_C columns of B̄). Clearly, the upper left square corner of B̄ of size (n_C, n_C) can be written as

B̄(1:n_C, 1:n_C) = [G^1_{n_C, m̄}  L_{n_C, m_s − m_xs}].   (2.42)

From (2.12), we know that after applying ERO (2.7), the s_{ij} columns may now have a −1 or a zero component in the first n_C rows. It follows that the number of −1 components in L_{n_C, m_s − m_xs}, say r, will be less than or equal to m_s − m_xs (the number of its columns), since no more than one −1 component can be present per column. If r < m_s − m_xs, then at least one column in L_{n_C, m_s − m_xs} is identically zero, and therefore B̄(1:n_C, 1:n_C) is singular. Similarly, if r = m_s − m_xs but we find more than one −1 component in the same row, then B̄(1:n_C, 1:n_C) is singular. Therefore, B̄(1:n_C, 1:n_C) can only be nonsingular if r = m_s − m_xs and every −1 component has a unique row index. Observe that ∪_{i∈F} M̄_s^i gives the indices j that correspond to the s_{ij} columns in L_{n_C, m_s − m_xs}, and hence the indices of the rows of L_{n_C, m_s − m_xs} that have −1 components (depending on which of (2.12.a) or (2.12.b) holds). Therefore, H = C − ∪_{i∈F} M̄_s^i represents the indices of the rows of L_{n_C, m_s − m_xs} that do not have −1 components. Since we are only interested in the case where B̄(1:n_C, 1:n_C) is nonsingular, we know that r = m_s − m_xs, and it follows that |∪_{i∈F} M̄_s^i| = m_s − m_xs and |H| = |C − ∪_{i∈F} M̄_s^i| = n_C − (m_s − m_xs) = m̄.

Next, we decompose the submatrix B̄(1:n_C, 1:n_C) in (2.42) horizontally such that its upper rows are associated with ∪_{i∈F} M̄_s^i and the lower rows are composed of the rows corresponding to H, i.e.,

B̄(1:n_C, 1:n_C) = [ G^1(∪_{i∈F} M̄_s^i, ·)  L(∪_{i∈F} M̄_s^i, ·) ; G^1(H, ·)  L(H, ·) ] = [ G^1_{m_s−m_xs, m̄}  L_{m_s−m_xs, m_s−m_xs} ; D_{m̄, m̄}  L_{m̄, m_s−m_xs} ],   (2.43)

where we denote by D_{m̄, m̄} the submatrix composed of the rows of G^1_{n_C, m̄} that have indices in H. We next illustrate our notation on an example.

Example 2.2-continued: Applying ERO (2.7) to the basis B given in Figure 2-2 produces the following sequence of row operations.


• i = 1: B(1,·) ← B(1,·) − B(5,·).
• i = 1: B(2,·) ← B(2,·) − B(6,·).
• i = 1: B(3,·) ← B(3,·) − B(7,·).
• i = 2: B(1,·) ← B(1,·) − B(9,·).
• i = 2: B(2,·) ← B(2,·) − B(10,·).
• i = 2: B(4,·) ← B(4,·) − B(12,·).
• i = 3: B(1,·) ← B(1,·) − B(13,·).
• i = 3: B(3,·) ← B(3,·) − B(15,·).
• i = 3: B(4,·) ← B(4,·) − B(16,·).
• i = 4: B(2,·) ← B(2,·) − B(18,·).
• i = 4: B(3,·) ← B(3,·) − B(19,·).
• i = 4: B(4,·) ← B(4,·) − B(20,·).

The result can also be obtained by applying (2.9). Matrices ER_1^1 to ER_1^4 are shown in Figure 2-5 (a) to (d). After the columns are permuted according to the order of Figure 2-4, B̄ is given in Figure 2-6, where the shaded area represents the submatrix G_{n_C, m_y}. Figure 2-7 then illustrates how the structure of the submatrix G_{n_C, m_y} reflects the selection of the x_{ij} columns in B.

Using the symbols introduced above, we write that

• M_t = {1,2,3,4}, M_ty = ∅, M̄ = M_y − M_ty = {1,2,3,4},
• ∪_{i∈F} M_s^i = {1,2,3,4}, ∪_{i∈F} M_xs^i = {1,2,3,4}, and ∪_{i∈F} M̄_s^i = ∅.
• H = C − ∪_{i∈F} M̄_s^i = {1,2,3,4}.
• Because M_ty = ∅, G = G^1.
• Further, since ∪_{i∈F} M̄_s^i = ∅, B̄(1:n_C, 1:n_C) = G^1 = G, and

D_{m̄, m̄} = G = [1 1 1 0; 1 1 0 1; 1 0 1 1; 0 1 1 1].


[The four 24×24 elimination matrices are not reproduced here.]

Figure 2-5. Illustration of how to apply ERO (2.7) using (2.8) in Example 2.2. a) ER_1^1, b) ER_1^2, c) ER_1^3, and d) ER_1^4.


[The 24×24 matrix B̄ is not reproduced here; the shaded area in the figure represents the submatrix G_{n_C, m_y}.]

Figure 2-6. B̄ in Example 2.2 after column permutations in accordance with Figure 2-4.

G = [1 1 1 0; 1 1 0 1; 1 0 1 1; 0 1 1 1]

[The diagram mapping the selected x_{ij} columns to the entries of G is not reproduced here.]

Figure 2-7. Submatrix G reflects the selected x_{ij} columns in Example 2.2.
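The structure (2.41) of G can be reproduced directly from the sets M_x^i; a minimal sketch for Example 2.2 (the helper name is ours):

```python
import numpy as np

def build_G(n_C, M_x, M_y):
    """Submatrix G per (2.41) (a sketch): G(j, i) = 1 iff column x_ij is
    in the basis, for i in M_y (columns of G, in increasing order of i)
    and j in 1..n_C (rows)."""
    G = np.zeros((n_C, len(M_y)), dtype=int)
    for k, i in enumerate(sorted(M_y)):
        for j in M_x[i]:
            G[j - 1, k] = 1
    return G

# Example 2.2: M_x^1 = {1,2,3}, M_x^2 = {1,2,4}, M_x^3 = {1,3,4}, M_x^4 = {2,3,4}.
M_x = {1: {1, 2, 3}, 2: {1, 2, 4}, 3: {1, 3, 4}, 4: {2, 3, 4}}
G = build_G(4, M_x, {1, 2, 3, 4})
print(G)  # rows: [1 1 1 0], [1 1 0 1], [1 0 1 1], [0 1 1 1]
```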

We illustrate the same result on another example where the number of facilities and the number of customers are different.

Example 2.5: For n_C = 5 and n_F = 3, we show in Figure 2-8 (b) the matrix B̄ corresponding to the submatrix B obtained by selecting the columns marked with (•) in Figure 2-8 (a).

Using the symbols introduced above, we have

• M_t = {1,3}, M_ty = {2}, M̄ = M_y − M_ty = {1,3},
• ∪_{i∈F} M_xs^i = {3,4,5} and ∪_{i∈F} M̄_s^i = {1,2,3}.
• H = C − ∪_{i∈F} M̄_s^i = {4,5}.

The upper n_C rows of the first n_C columns of B̄ can be computed to be

B̄(1:n_C, 1:n_C) = [1 1 −1 0 0; 1 1 0 −1 0; 1 0 0 0 −1; 1 1 0 0 0; 0 1 0 0 0].

B̄(1:n_C, 1:n_C) decomposes into the submatrices G^1 and L corresponding to the y_i and the s_{ij} columns,

G^1 = [1 1; 1 1; 1 0; 1 1; 0 1]  and  L = [−1 0 0; 0 −1 0; 0 0 −1; 0 0 0; 0 0 0].

In Figure 2-8(b), the submatrix $G_{n_C,m_y}$ is shaded inside of $\bar B$. We know that $G = \begin{bmatrix} G^1 & G^2 \end{bmatrix}$, where $G^1$ contains the columns associated with the $y_i$ variables with indices $i$, for $i \in M$, and $G^2$ contains the columns associated with the remaining $y_i$ variables, i.e.,

$G^1 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}$ and $G^2 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \\ 1 \end{bmatrix}.$


Finally, the submatrix $D_{m,m}$ is composed of the rows of $G^1$ that have indices $i$, for $i \in H$,

$G^1 = \begin{bmatrix} G^1_{m_s-m_{xs},m} \\ G^1_{m,m} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}$ and $D_{m,m} = G^1_{m,m} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.$

We now use the above derivation to compute the determinant of bases of the LPR of UFLP.

Lemma 2.6: Let $B$ be an arbitrary square submatrix of $A$ of size $(n_C + n_Cn_F + n_F)$ such that $m_y + m_t > n_F$ and $F - (M^y \cup M^t) = \emptyset$. If $B$ is nonsingular, then

$\det\!\left(B_{n_C+n_Cn_F+n_F,\,n_C+n_Cn_F+n_F}\right) = \det\!\left(D_{m,m}\right).$

Proof: The condition $F - (M^y \cup M^t) = \emptyset$ ensures that we can obtain an identity matrix of size $(n_F,n_F)$ in the lower right corner of $B$, possibly after permuting columns. Next, we permute the columns in accordance with Figure 2-4 and apply ERO (2.7). There are two cases, which are similar to Cases 3 and 4 in the proof of Lemma 2.5:

Case 1: There is in $B$ a subset of the $s_{ij}$ columns that can be combined with the $x_{ij}$ columns to obtain an identity matrix of size $(n_Cn_F, n_Cn_F)$ in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$, i.e., we can obtain an identity matrix in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$ that is solely composed of $x_{ij}$ and $s_{ij}$ columns.


Figure 2-8. Illustration of Example 2.5. a) Matrix $B$, and b) Matrix $\bar B$.

We decompose the columns of $\bar B$ into the three blocks described in Figure 2-4. We also decompose the rows of $\bar B$ into three blocks: the first $n_C$ rows, the middle $n_Cn_F$ rows, and the last $n_F$ rows. We obtain that

$\det(\bar B) = \det\begin{pmatrix} \bar B^{ys}_{n_C,n_C} & \bar B^{xs}_{n_C,n_Cn_F} & \bar B^{ty}_{n_C,n_F} \\ \bar B^{ys}_{n_Cn_F,n_C} & \bar B^{xs}_{n_Cn_F,n_Cn_F} & \bar B^{ty}_{n_Cn_F,n_F} \\ \bar B^{ys}_{n_F,n_C} & \bar B^{xs}_{n_F,n_Cn_F} & \bar B^{ty}_{n_F,n_F} \end{pmatrix}. \qquad (2.44)$


Because our first step was to permute the columns to obtain an identity matrix of size $(n_F,n_F)$ in the lower right square corner of $\bar B$, we know that $\bar B^{ty}_{n_F,n_F} = I_{n_F,n_F}$. Also, since the block $\bar B^{xs}_{n_F,n_Cn_F}$ is composed of $x_{ij}$ and $s_{ij}$ columns only, $\bar B^{xs}_{n_F,n_Cn_F} = 0_{n_F,n_Cn_F}$; see (2.5) and (2.6).

Further, given the condition that was set by the definition of Case 1 (refer also to (2.30) to verify when this condition holds), $\bar B^{xs}_{n_Cn_F,n_Cn_F} = I_{n_Cn_F,n_Cn_F}$. The $s_{ij}$ columns in the middle vertical block of $\bar B$ that correspond to $M^i_{xs}$ share no indices with any of the $x_{ij}$ columns (otherwise we would not be able to obtain the identity matrix $\bar B^{xs}_{n_Cn_F,n_Cn_F} = I_{n_Cn_F,n_Cn_F}$), and therefore we know from (2.12.b) that $\bar B^{xs}_{n_C,n_Cn_F} = 0_{n_C,n_Cn_F}$.

The expression (2.44) simplifies to

$\det(\bar B) = \det\begin{pmatrix} \bar B^{ys}_{n_C,n_C} & 0 & \bar B^{ty}_{n_C,n_F} \\ \bar B^{ys}_{n_Cn_F,n_C} & I & \bar B^{ty}_{n_Cn_F,n_F} \\ \bar B^{ys}_{n_F,n_C} & 0 & I \end{pmatrix}. \qquad (2.45)$

Now, we group some of the blocks and introduce the following notation to ease the computation of the determinant of $\bar B$. We write

$\det(\bar B) = \det\begin{pmatrix} \bar B^{ys}_{n_C,n_C} & 0 & \bar B^{ty}_{n_C,n_F} \\ \bar B^{ys}_{n_Cn_F,n_C} & I & \bar B^{ty}_{n_Cn_F,n_F} \\ \bar B^{ys}_{n_F,n_C} & 0 & I \end{pmatrix} = \det\begin{pmatrix} B^1_{n_C,n_C} & B^2_{n_C,n_Cn_F+n_F} \\ B^3_{n_Cn_F+n_F,n_C} & B^4_{n_Cn_F+n_F,n_Cn_F+n_F} \end{pmatrix}, \qquad (2.46)$

where

$B^1_{n_C,n_C} = \bar B^{ys}_{n_C,n_C}, \qquad (2.47)$


$B^2_{n_C,n_Cn_F+n_F} = \begin{pmatrix} 0_{n_C,n_Cn_F} & \bar B^{ty}_{n_C,n_F} \end{pmatrix}, \qquad (2.48)$

$B^3_{n_Cn_F+n_F,n_C} = \begin{pmatrix} \bar B^{ys}_{n_Cn_F,n_C} \\ \bar B^{ys}_{n_F,n_C} \end{pmatrix}, \qquad (2.49)$

and

$B^4_{n_Cn_F+n_F,n_Cn_F+n_F} = \begin{pmatrix} I_{n_Cn_F,n_Cn_F} & \bar B^{ty}_{n_Cn_F,n_F} \\ 0_{n_F,n_Cn_F} & I_{n_F,n_F} \end{pmatrix}. \qquad (2.50)$

Applying Lemma 2.1 to (2.50), we obtain

$\det\!\left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right) = \det(I_{n_F,n_F}) \det\!\left(I_{n_Cn_F,n_Cn_F} - \bar B^{ty}_{n_Cn_F,n_F} I^{-1}_{n_F,n_F} 0_{n_F,n_Cn_F}\right) = 1. \qquad (2.51)$

Next, we apply Lemma 2.2 to (2.50) to obtain

$\left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right)^{-1} = \begin{pmatrix} I_{n_Cn_F,n_Cn_F} & \bar B^{ty}_{n_Cn_F,n_F} \\ 0_{n_F,n_Cn_F} & I_{n_F,n_F} \end{pmatrix}^{-1} = \begin{pmatrix} I_{n_Cn_F,n_Cn_F} & -\bar B^{ty}_{n_Cn_F,n_F} \\ 0_{n_F,n_Cn_F} & I_{n_F,n_F} \end{pmatrix}. \qquad (2.52)$

We next apply Lemma 2.1 to (2.46) to write

$\det(\bar B) = \det\!\left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right) \det\!\left(B^1_{n_C,n_C} - B^2_{n_C,n_Cn_F+n_F} \left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right)^{-1} B^3_{n_Cn_F+n_F,n_C}\right). \qquad (2.53)$

From (2.51) we know that $\det\!\left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right) = 1$. Hence, (2.53) reduces to

$\det(\bar B) = \det\!\left(B^1_{n_C,n_C} - \left[B^2_{n_C,n_Cn_F+n_F} \left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right)^{-1}\right] B^3_{n_Cn_F+n_F,n_C}\right). \qquad (2.54)$

We now use (2.48) and (2.52) to calculate the part of the expression enclosed in square brackets in (2.54). Specifically,

$B^2_{n_C,n_Cn_F+n_F} \left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right)^{-1} = \begin{pmatrix} 0_{n_C,n_Cn_F} & \bar B^{ty}_{n_C,n_F} \end{pmatrix} \begin{pmatrix} I_{n_Cn_F,n_Cn_F} & -\bar B^{ty}_{n_Cn_F,n_F} \\ 0_{n_F,n_Cn_F} & I_{n_F,n_F} \end{pmatrix} = \begin{pmatrix} 0_{n_C,n_Cn_F} & \bar B^{ty}_{n_C,n_F} \end{pmatrix} = B^2_{n_C,n_Cn_F+n_F}. \qquad (2.55)$


Substituting (2.55) into (2.54), we obtain

$\det(\bar B) = \det\!\left(B^1_{n_C,n_C} - B^2_{n_C,n_Cn_F+n_F} B^3_{n_Cn_F+n_F,n_C}\right). \qquad (2.56)$

From (2.48) and (2.49), we can rewrite (2.56) as

$\det(\bar B) = \det\!\left(B^1_{n_C,n_C} - \begin{pmatrix} 0_{n_C,n_Cn_F} & \bar B^{ty}_{n_C,n_F} \end{pmatrix} \begin{pmatrix} \bar B^{ys}_{n_Cn_F,n_C} \\ \bar B^{ys}_{n_F,n_C} \end{pmatrix}\right) = \det\!\left(B^1_{n_C,n_C} - \bar B^{ty}_{n_C,n_F} \bar B^{ys}_{n_F,n_C}\right). \qquad (2.57)$

To compute $\det(\bar B)$, we next investigate the structure of $\bar B^{ty}_{n_C,n_F}$ and $\bar B^{ys}_{n_F,n_C}$. Submatrix $\bar B^{ty}_{n_C,n_F}$ represents the upper $n_C$ rows of the right vertical block in Figure 2-4, which is a combination of the $t_i$ columns and the $y_i$ columns associated with $M^{ty}$. We know that the first $n_C$ components of the $t_i$ columns are all zeros. They are unaffected by ERO (2.7); see Figure 1-1. Although the first $n_C$ components of the $y_i$ columns are all zeros, some of these components may be changed to ones after applying ERO (2.7); see (2.41). In summary, $\bar B^{ty}_{n_C,n_F}$ is composed of $m_t$ columns that are identically zero and have indices $i$, $\forall i \in M^t$, and of $m_{ty}$ columns that may have nonzero components and have indices $i$, $\forall i \in M^{ty}$. Let $u_h$ be a (0,1) column of $\bar B^{ty}_{n_C,n_F}$, $h \in n_F$; then $\bar B^{ty}_{n_C,n_F}$ can be presented as

$\bar B^{ty}_{n_C,n_F}(.,h) = \begin{cases} 0_{n_C,1}, & \forall h \in M^t, & (2.58.\text{a}) \\ u_h, & \forall h \in M^{ty}. & (2.58.\text{b}) \end{cases} \qquad (2.58)$

(2.58)

Submatrix $\bar B^{ys}_{n_F,n_C}$ is formed from the lower $n_F$ rows of the left vertical block in Figure 2-4. It is composed of the $y_i$ columns associated with $M$ and the $s_{ij}$ columns corresponding to $\bar M^i_s$. From (2.6) we know that the last $n_F$ components of the $s_{ij}$ columns are zero and will remain unchanged after applying ERO (2.7). Also, referring to (2.40.b), we observe that the last $n_F$ components of the $y_i$ columns have only one nonzero component, which is equal to one and is not affected by ERO (2.7). In conclusion, $\bar B^{ys}_{n_F,n_C}$ has $(m_s - m_{xs})$ columns that are identically zero and have indices $j$, $\forall j \in \bar M^i_s$, $i \in F$, and also has $m$ columns that have exactly one component equal to one and have indices $i$, $\forall i \in M$. If $e_h$ denotes a column of $\bar B^{ys}_{n_F,n_C}$, $h \in n_C$, whose components are all zero except for one component that is equal to one, then we write

$\bar B^{ys}_{n_F,n_C}(.,h) = \begin{cases} e_h, & \forall h \in M, & (2.59.\text{a}) \\ 0_{n_F,1}, & \forall h \notin M. & (2.59.\text{b}) \end{cases} \qquad (2.59)$

Using the information of (2.58) and (2.59), we now compute the product $\bar B^{ty}_{n_C,n_F} \bar B^{ys}_{n_F,n_C}$. From the structure shown in (2.59), if $\{h_1, h_2, \ldots, h_m\} \in M$, we conclude that

$\bar B^{ty}_{n_C,n_F} \bar B^{ys}_{n_F,n_C} = \begin{pmatrix} \bar B^{ty}_{n_C,n_F}(.,h_1) & \bar B^{ty}_{n_C,n_F}(.,h_2) & \cdots & \bar B^{ty}_{n_C,n_F}(.,h_m) & 0_{n_C,1} & \cdots & 0_{n_C,1} \end{pmatrix}, \qquad (2.60)$

i.e., the product matrix is obtained by selecting the appropriate columns of $\bar B^{ty}_{n_C,n_F}$.

Since $\{h_1, h_2, \ldots, h_m\} \in M$ and we know from (2.37) that $M \cap M^{ty} = \emptyset$, it is clear that $\{h_1, h_2, \ldots, h_m\} \notin M^{ty}$. Further, because $\{h_1, h_2, \ldots, h_m\} \notin M^{ty}$, using (2.58) it is simple to verify that $\bar B^{ty}_{n_C,n_F}(.,k) = 0_{n_C,1}$, $\forall k \in \{h_1, h_2, \ldots, h_m\}$. We conclude that

$\bar B^{ty}_{n_C,n_F} \bar B^{ys}_{n_F,n_C} = \begin{pmatrix} 0_{n_C,1} & \cdots & 0_{n_C,1} \end{pmatrix} = 0_{n_C,n_C}. \qquad (2.61)$

Substituting (2.61) into (2.57), we obtain

$\det(\bar B) = \det\!\left(B^1_{n_C,n_C} - 0_{n_C,n_C}\right) = \det\!\left(B^1_{n_C,n_C}\right). \qquad (2.62)$


Matrix $B^1_{n_C,n_C}$ is formed from the first $n_C$ rows of the left vertical block of Figure 2-4. The columns of $B^1_{n_C,n_C}$ are a combination of the $y_i$ columns associated with $M$ and the $s_{ij}$ columns corresponding to $\bar M^i_s$. We discussed the structure of $B^1_{n_C,n_C}$ in the section preceding this proof. We apply the decomposition in (2.41) using the same notation here.

As mentioned previously, after applying ERO (2.7) every $s_{ij}$ column may have a $-1$ or zero component in the first $n_C$ rows; see (2.12). We define $r$ to be the number of $-1$ components in $L_{n_C,m_s-m_{xs}}$. If $r < m_s - m_{xs}$, then at least one column in $L_{n_C,m_s-m_{xs}}$ is identically zero and therefore $\bar B(1:n_C,1:n_C)$ is singular. Similarly, if $r = m_s - m_{xs}$ but we find more than one $-1$ component in the same row, $\bar B(1:n_C,1:n_C)$ is singular. Therefore, $\bar B(1:n_C,1:n_C)$ can only be nonsingular if $r = m_s - m_{xs}$ and every $-1$ component has a unique row index.

This case holds only if all the $s_{ij}$ columns corresponding to $\bar M^i_s$ share indices with the $x_{ij}$ columns. The $s_{ij}$ columns will then contain an identity matrix of size $(m_s-m_{xs}, m_s-m_{xs})$, with $-1$ coefficients, in the rows that have indices $j$, $j \in \bigcup_{i \in F} \bar M^i_s$; therefore $L_{m_s-m_{xs},m_s-m_{xs}} = -I_{m_s-m_{xs},m_s-m_{xs}}$. The remaining rows (that have indices $j$, $j \in H = C - \bigcup_{i \in F} \bar M^i_s$) are identically zero, $L_{m,m_s-m_{xs}} = 0_{m,m_s-m_{xs}}$. Now (2.43) can be rewritten as

$B^1_{n_C,n_C} = \begin{pmatrix} G^1_{m_s-m_{xs},m} & -I_{m_s-m_{xs},m_s-m_{xs}} \\ G^1_{m,m} & 0_{m,m_s-m_{xs}} \end{pmatrix}. \qquad (2.63)$


We permute the blocks of $B^1_{n_C,n_C}$ in (2.63) so that the invertible square matrix $-I_{m_s-m_{xs},m_s-m_{xs}}$ is now located in the lower right corner,

$B^1_{n_C,n_C} = \begin{pmatrix} G^1_{m,m} & 0_{m,m_s-m_{xs}} \\ G^1_{m_s-m_{xs},m} & -I_{m_s-m_{xs},m_s-m_{xs}} \end{pmatrix}. \qquad (2.64)$

We next apply Lemma 2.1 to (2.64) to obtain

$\det\!\left(B^1_{n_C,n_C}\right) = \det\!\left(-I_{m_s-m_{xs},m_s-m_{xs}}\right) \det\!\left(G^1_{m,m} - 0_{m,m_s-m_{xs}} \left(-I_{m_s-m_{xs},m_s-m_{xs}}\right)^{-1} G^1_{m_s-m_{xs},m}\right). \qquad (2.65)$

Note that we previously denoted $G^1_{m,m}$ by $D_{m,m}$. Then using (2.62) and (2.65) we write

$\det(\bar B) = \det\!\left(B^1_{n_C,n_C}\right) = \det\!\left(G^1_{m,m}\right) = \det\!\left(D_{m,m}\right). \qquad (2.66)$

Case 2: There is no subset of the $s_{ij}$ columns in $B$ that can be combined with the $x_{ij}$ columns to obtain an identity matrix of size $(n_Cn_F, n_Cn_F)$ in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$, i.e., we cannot obtain an identity matrix in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$ that is solely composed of $x_{ij}$ and $s_{ij}$ columns.

In this case, we apply the same re-ordering of the columns as in Case 1, but we do not perform ERO (2.7). Considering $B$ instead of $\bar B$, we obtain

$\det(B) = \det\begin{pmatrix} B^{ys}_{n_C,n_C} & B^{xs}_{n_C,n_Cn_F} & B^{ty}_{n_C,n_F} \\ B^{ys}_{n_Cn_F,n_C} & B^{xs}_{n_Cn_F,n_Cn_F} & B^{ty}_{n_Cn_F,n_F} \\ B^{ys}_{n_F,n_C} & B^{xs}_{n_F,n_Cn_F} & B^{ty}_{n_F,n_F} \end{pmatrix}. \qquad (2.67)$

Using the same notation as in (2.46), we obtain

$\det(B) = \det\begin{pmatrix} B^{ys}_{n_C,n_C} & B^{xs}_{n_C,n_Cn_F} & B^{ty}_{n_C,n_F} \\ B^{ys}_{n_Cn_F,n_C} & B^{xs}_{n_Cn_F,n_Cn_F} & B^{ty}_{n_Cn_F,n_F} \\ B^{ys}_{n_F,n_C} & B^{xs}_{n_F,n_Cn_F} & B^{ty}_{n_F,n_F} \end{pmatrix} = \det\begin{pmatrix} B^1_{n_C,n_C} & B^2_{n_C,n_Cn_F+n_F} \\ B^3_{n_Cn_F+n_F,n_C} & B^4_{n_Cn_F+n_F,n_Cn_F+n_F} \end{pmatrix}. \qquad (2.68)$

Since we are not able to create $B^{xs}_{n_Cn_F,n_Cn_F} = I_{n_Cn_F,n_Cn_F}$ (refer to (2.36) in Case 4 of the proof of Lemma 2.5), clearly $\det\!\left(B^4_{n_Cn_F+n_F,n_Cn_F+n_F}\right) = 0$. Further, because we did not proceed with ERO (2.7), we know that $B^{ys}_{n_C,n_C}$, which is composed of the first $n_C$ components of the $s_{ij}$ or $y_i$ columns, is a matrix of zeros, i.e., $B^{ys}_{n_C,n_C} = 0_{n_C,n_C}$; see (2.6.b) and (2.40.c). It follows that $B$ in (2.68) is composed of blocks of matrices where the blocks on the diagonal, $B^1_{n_C,n_C}$ and $B^4_{n_Cn_F+n_F,n_Cn_F+n_F}$, have determinant zero. Therefore, $B$ is singular. This concludes the proof.

2.5 Constructing UFLP Instances of Desired Determinant

Theorem 2.1: Let $B$ be an arbitrary square submatrix of $A$ of size $(n_C + n_Cn_F + n_F)$ such that:

• $m_y + m_t > n_F$,
• $F - (M^y \cup M^t) = \emptyset$, i.e., it is possible to obtain an identity matrix of size $(n_F, n_F)$ in the lower right corner of $B$, and
• $\{n_C+1, \ldots, n_C+n_Cn_F\} - (V_x \cup V_s) = \emptyset$, i.e., we can obtain an identity matrix of size $(n_Cn_F, n_Cn_F)$ in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$.

Then $\det(B) = \det(D)$. Further, Algorithm UFLP-DET describes how to compute the matrix $D$ from the submatrix $B$.


Algorithm UFLP-DET.
Input: A square submatrix $B$ of $A$ of size $(n_C + n_Cn_F + n_F)$ such that $m_y + m_t > n_F$, $F - (M^y \cup M^t) = \emptyset$, and $\{n_C+1, \ldots, n_C+n_Cn_F\} - (V_x \cup V_s) = \emptyset$. Sets of indices of variables $x_{ij}$, $s_{ij}$, $y_i$, and $t_i$ ($M^i_x$, $M^i_s$, $M^y$, and $M^t$) that compose $B$.
Output: Matrix $D$ such that $\det(B) = \det(D)$.
1: $M^{ty} = F - M^t$
2: $M = M^y - M^{ty}$
3: $\forall i \in F$, $M^i_{xs} = C - M^i_x$
4: $\forall i \in F$, $\bar M^i_s = M^i_s - M^i_{xs}$
5: $\forall i \in F, h \in M^i_x$: $B(h,.) \leftarrow -B(h,.) + B(i n_C + h,.)$
6: $H = C - \bigcup_{i \in F} \bar M^i_s$
7: Consider the $y_i$ columns with indices $i$, $i \in M$. $D$ is obtained by selecting the rows with indices $j$, $j \in H$, from these columns.
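The index bookkeeping in Steps 1–4 and 6 above can be sketched in a few lines of Python (a minimal sketch; the function name and the dictionary representation of the sets $M^i_x$ and $M^i_s$ are ours, and Step 5 (the ERO) and Step 7 (extracting $D$ from the numerical matrix) are omitted):

```python
def uflp_det_index_sets(F, C, M_t, M_y, M_x, M_s):
    """Steps 1-4 and 6 of Algorithm UFLP-DET (index bookkeeping only).

    F, C     -- sets of facility and customer indices
    M_t, M_y -- index sets of the t_i and y_i columns that compose B
    M_x, M_s -- dicts mapping each i in F to the sets M_x^i and M_s^i
    """
    M_ty = F - M_t                                        # Step 1
    M = M_y - M_ty                                        # Step 2
    M_xs = {i: C - M_x.get(i, set()) for i in F}          # Step 3
    Ms_bar = {i: M_s.get(i, set()) - M_xs[i] for i in F}  # Step 4
    H = C - set().union(*(Ms_bar[i] for i in F))          # Step 6
    return M_ty, M, H
```

For instance, with $F = \{1,2,3\}$ and $M^t = \{1,3\}$ as in Example 2.5, and taking $M^y = \{1,3\}$ for illustration (an assumption of ours), the function returns $M^{ty} = \{2\}$ and $M = \{1,3\}$, matching the sets computed there.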

Algorithm UFLP-DET mimics the steps of the proof and of the section that precedes Lemma 2.6; see also Example 2.5. It should be noted that if the first condition of Theorem 2.1 does not hold, then $B$ is singular or unimodular; see Lemma 2.3, (2.23), (2.27), and (2.35). Further, if either of the last two conditions of the same theorem does not hold, then $B$ is singular.

Next, we determine the maximum size of matrix $D$. Then we give an algorithm to produce bases of UFLP of desired determinant.

Corollary 2.1: For given $n_C$ and $n_F$, the maximum size of $D$ is $n$, where $n = \min\{n_C, n_F\}$.

Proof: From (2.43), $D_{m,m}$ is a submatrix of $\bar B(1:n_C,1:n_C)$ that is formed by the lower block of its $y_i$ columns. Hence, the maximum number of columns that $D_{m,m}$ may have is $n_F$ (the maximum number of $y_i$ columns). Further, $D_{m,m}$ is a submatrix of $\bar B(1:n_C,1:n_C)$; therefore the maximum number of rows that $D_{m,m}$ may have is $n_C$ (the size of $\bar B(1:n_C,1:n_C)$). Also, $D_{m,m}$ is a square matrix of size $(m,m)$. It follows that the maximum size of $D$ is $(n,n)$, where $n = \min\{n_C, n_F\}$.

Algorithm UFLP-BASIS.
Input: A nonsingular (0,1) matrix $D_{m,m}$ such that $\det(D_{m,m}) = d$. Parameters $n_C$ and $n_F$ such that $n_C \ge m$ and $n_F \ge m$.
Output: Sets of indices of variables $x_{ij}$, $s_{ij}$, $y_i$, and $t_i$ ($M^i_x$, $M^i_s$, $M^y$, and $M^t$) that yield a basis $B$ with $\det(B) = d$.
1: Let $M \subseteq F$ such that $|M| = m$
2: Let $H \subseteq C$ such that $|H| = m$
3: $\bar M = F - M$
4: $M^1 \subseteq \bar M$ ($M^1$ is a subset of $\bar M$ that can be chosen of any size)
5: $M^t = M \cup M^1$
6: $M^{ty} = \bar M - M^1$
7: $M^y = M \cup M^{ty}$
8: Let $\{h_1 < h_2 < \cdots < h_m\} \in H$ and $\{k_1 < k_2 < \cdots < k_m\} \in M$
9: For $i = 1$ to $m$
10: For $j = 1$ to $m$
11: If $D(j,i) = 1$, then let $M^{k_i}_x = M^{k_i}_x \cup h_j$; else let $M^{k_i}_s = M^{k_i}_s \cup h_j$
12: End For
13: End For
14: $\bar H = C - H$
15: $\forall j \in \bar H$, let $M^r_x = M^r_x \cup j$ and $M^r_s = M^r_s \cup j$, where $r \in F$ ($r$ can be any element in $F$)
16: $V_x = \bigcup_{i \in F} \bigcup_{j \in M^i_x} \{i n_C + j\}$
17: $V_s = \bigcup_{i \in F} \bigcup_{j \in M^i_s} \{i n_C + j\}$
18: $V = \{n_C+1, \ldots, n_C+n_Cn_F\} - (V_x \cup V_s)$
19: $\forall v \in V$, let $j \in M^i_x$ or $j \in M^i_s$, where $i = \lfloor v/n_C \rfloor$, $j = v - i n_C$ (for every $i$ and $j$, we can choose the $x_{ij}$ variable ($j \in M^i_x$) or the $s_{ij}$ variable ($j \in M^i_s$), but we cannot choose both).


Theorem 2.2: Let $D_{m,m}$ be an arbitrary (0,1) nonsingular matrix such that $\det(D_{m,m}) = d$. Then for any $n_C \ge m$ and $n_F \ge m$, the basis $B$ constructed by Algorithm UFLP-BASIS is such that $\det(B) = d$.

We next explain the steps of Algorithm UFLP-BASIS. First, we emphasize that we will use $D$ as a submatrix of the upper left corner of $B$ of size $(n_C, n_C)$; see (2.43). In Step 1, we specify a set $M$ of the indices $i$ of $y_i$ variables, for $i \in F$, such that $|M| = m$. The basis $B$ that will be obtained as output of Algorithm UFLP-BASIS will have $y_i$ columns with indices $i$, $i \in M$. Matrix $D$ will be a submatrix of these columns. In Step 2, we perform an operation similar to Step 1 but in terms of rows instead of columns, i.e., we determine a set $H$ of row indices such that $D$ will be a submatrix of these rows.

Steps 1 and 2 are concerned with finding the left vertical block of $B$ in Figure 2-4. Steps 3 to 7 focus on constructing the right vertical block of $B$ in the same figure. We select columns corresponding to $t_i$ variables that have the same indices as the $y_i$ variables that were selected. To obtain an identity matrix of size $(n_F, n_F)$ in the lower right corner of $B$, we supplement (if necessary) the selected $t_i$ variables with extra columns that may be either $t_i$ or $y_i$. Clearly, the indices $i$ of the additional columns should be chosen from $F - M$.


Steps 8 to 13 use the elements of $D$ to select variables $x_{ij}$ and $s_{ij}$: if $D(j,i) = 1$, then we select the variable $x_{ij}$ to be included in the basis, and if $D(j,i) = 0$, we select the variable $s_{ij}$ to be included in $B$. The indices $i$ and $j$ of the $x_{ij}$ and $s_{ij}$ variables depend on the elements of $M$ and $H$.

In Step 14, we determine the set of indices, $\bar H$, of the rows of $B(1:n_C,1:n_C)$ that are not associated with $D$ (where we should have an identity matrix in the same diagonal of $B(1:n_C,1:n_C)$ as $D$). Step 15 obtains such an identity matrix by selecting two variables $x_{ij}$ and $s_{ij}$ with the same indices $i$ and $j$ for each row corresponding to $\bar H$, so that (2.12.a) holds.

For the middle vertical block of $B$ in Figure 2-4, Steps 16 to 18 determine the subset of indices of rows, in the range $\{n_C+1, \ldots, n_C+n_Cn_F\}$, that have no component equal to one associated with any of the $x_{ij}$ or $s_{ij}$ variables. Finally, Step 19 selects $x_{ij}$ or $s_{ij}$ variables to supplement that range of rows so that there is one component equal to one in each row.

When $n_C = n_F = m$, the above algorithm simplifies tremendously. In this case, the algorithm simply selects all the $y_i$ and $t_i$ variables to be included in $B$. Further, for each $D(j,i) = 1$ we select the column corresponding to variable $x_{ij}$, and for each $D(j,i) = 0$ the column corresponding to variable $s_{ij}$, $\forall i \in F, j \in C$.

We demonstrate the use of Algorithm UFLP-BASIS in the following example.


Example 2.6: Let

$D = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$

Clearly, $\det(D) = 2$. We next show how to construct a basis $B$ of UFLP with $n_C = 5$ and $n_F = 4$ such that $\det(B) = 2$.

We first choose $M = \{1,2,3\}$, $H = \{3,4,5\}$, and $M^1 = \{4\}$. Since $M^t = \{1,2,3,4\}$, we select $t_1, t_2, t_3, t_4 \in B$. Because $M^{ty} = \emptyset$ and $M^y = \{1,2,3\}$, we choose $y_1, y_2, y_3 \in B$. We now consider matrix $D$.

As $D(.,1) = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$, we set $M^1_x = \{3,4\}$, $M^1_s = \{5\}$, i.e., $x_{13}, x_{14}, s_{15} \in B$.

Similarly, for $D(.,2) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$, we select $M^2_x = \{3,5\}$, $M^2_s = \{4\}$, which implies that $x_{23}, x_{25}, s_{24} \in B$.

Also, for $D(.,3) = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$, we select $M^3_x = \{4,5\}$, $M^3_s = \{3\}$, which implies that $x_{34}, x_{35}, s_{33} \in B$.

Finally, we write $\bar H = \{1,2\}$, $M^4_x = \{1,2\}$, $M^4_s = \{1,2\}$, $x_{41}, x_{42}, s_{41}, s_{42} \in B$, and $V_x = \{8,9,13,15,19,20,21,22\}$, $V_s = \{10,14,18,21,22\}$ with $V = \{6,7,11,12,16,17,23,24,25\}$; we then choose $x_{11}, s_{12}, x_{21}, s_{22}, x_{31}, s_{32}, x_{43}, s_{44}, x_{45} \in B$.
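The index selections of Example 2.6 can be reproduced mechanically; the following Python sketch implements the bookkeeping of Steps 1–19 of Algorithm UFLP-BASIS (function and variable names are ours; only the index sets are built, not the basis matrix itself):

```python
def uflp_basis_index_sets(D, n_C, n_F, M, H, M1, r):
    """Index bookkeeping of Algorithm UFLP-BASIS.

    D        -- m-by-m (0,1) matrix as a list of rows
    M, H, M1 -- the arbitrary choices made in Steps 1, 2, and 4
    r        -- the arbitrary facility index used in Step 15
    """
    m = len(D)
    F, C = set(range(1, n_F + 1)), set(range(1, n_C + 1))
    M_bar = F - M                            # Step 3
    M_t = M | M1                             # Step 5
    M_ty = M_bar - M1                        # Step 6
    M_y = M | M_ty                           # Step 7
    h, k = sorted(H), sorted(M)              # Step 8
    M_x = {i: set() for i in F}
    M_s = {i: set() for i in F}
    for i in range(m):                       # Steps 9-13: read the entries of D
        for j in range(m):
            (M_x if D[j][i] == 1 else M_s)[k[i]].add(h[j])
    for j in C - H:                          # Steps 14-15
        M_x[r].add(j)
        M_s[r].add(j)
    V_x = {i * n_C + j for i in F for j in M_x[i]}              # Step 16
    V_s = {i * n_C + j for i in F for j in M_s[i]}              # Step 17
    V = set(range(n_C + 1, n_C + n_C * n_F + 1)) - (V_x | V_s)  # Step 18
    return M_t, M_y, M_x, M_s, V_x, V_s, V
```

Calling it with the $D$, $M = \{1,2,3\}$, $H = \{3,4,5\}$, $M^1 = \{4\}$, and $r = 4$ of Example 2.6 reproduces, e.g., $M^1_x = \{3,4\}$, $V_x = \{8,9,13,15,19,20,21,22\}$, and $V = \{6,7,11,12,16,17,23,24,25\}$.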


Now, we are able to use Algorithm UFLP-BASIS to construct UFLP bases from any given nonsingular (0,1) matrix $D_{m,m}$ for instances of UFLP with parameters $n_C \ge m$ and $n_F \ge m$. Given $D$, $n_C$, and $n_F$, we next determine the number of different bases of UFLP (with given $n_C$ and $n_F$) that we can construct from the same matrix $D$ using Algorithm UFLP-BASIS. In other words, we want to determine the number of different bases of UFLP (with given $n_C$ and $n_F$) that are such that Algorithm UFLP-DET produces matrix $D$ when applied to them.

Proposition 2.2: Let $D_{m,m}$ be an arbitrary (0,1) nonsingular matrix. Then for given $n_C$ and $n_F$ such that $n_C \ge m$ and $n_F \ge m$, the number of different UFLP bases that we can construct that produce matrix $D_{m,m}$ when input to Algorithm UFLP-DET is equal to

$2^{\,n_Cn_F - m^2 - n_C + n_F}\, (n_F)^{n_C - m}\, C^{n_F}_m C^{n_C}_m.$

Proof: Denote by $R$ the number of different UFLP bases that we can construct that have matrix $D_{m,m}$. These bases can be computed from Algorithm UFLP-BASIS. We next consider the steps of this algorithm. For these steps we compute the numbers of choices $R_1, R_2, \ldots, R_5$. We then compute $R$ as $\prod_{h=1}^{5} R_h$.

In Step 1, we select a subset $M$ of $F$. The number of ways to select $M$ equals $C^{|F|}_{|M|}$, and therefore $R_1 = C^{n_F}_m$. In Step 2, we perform the same operation for $H$ and $C$, and so $R_2 = C^{n_C}_m$.


From Steps 3 to 7, we decide whether the elements of $\bar M$ will correspond to either $t_i$ or $y_i$ variables. Since we have two choices for each element that belongs to $\bar M$, we write $R_3 = 2^{|\bar M|} = 2^{n_F - m}$.

To obtain an identity matrix of size $(n_Cn_F, n_Cn_F)$ in $B(n_C+1 : n_C+n_Cn_F,\ n_C+1 : n_C+n_Cn_F)$, at least one component must equal one in each of the rows with indices $\{n_C+1, \ldots, n_C+n_Cn_F\}$. Every column corresponding to either an $x_{ij}$ or an $s_{ij}$ variable gives one component in one of those rows. Steps 8 to 13 select $m^2$ columns corresponding to either $x_{ij}$ or $s_{ij}$ variables depending on the values of the components of $D$. At this point, the number of rows that have at least one component equal to one corresponding to either $x_{ij}$ or $s_{ij}$ variables is $m^2$.

In Steps 14 and 15, we select columns corresponding to $x_{ij}$ and $s_{ij}$ variables with the same indices. The indices $j$ are determined to be the elements of $\bar H$, while the indices $i$ are chosen from $F$. It follows that we have $(n_F)^{|\bar H|} = (n_F)^{n_C - m}$ ways to choose the indices $i$ and $j$, and so $R_4 = (n_F)^{n_C - m}$.

The number of columns corresponding to $x_{ij}$ and $s_{ij}$ that are selected in Steps 14 and 15 is equal to $2|\bar H| = 2(n_C - m)$. The $x_{ij}$ and $s_{ij}$ variables with the same indices give two components equal to one in the same row, and we are counting the rows that have at least one component equal to one. Hence, only half of those columns are counted. The number of rows that have at least one component equal to one is $m^2 + n_C - m$.


For the remaining rows, $n_Cn_F - (m^2 + n_C - m)$ of them, Steps 16 to 19 may select either $x_{ij}$ or $s_{ij}$ columns to supplement these rows. It follows that we have two choices for each of these rows, and so $R_5 = 2^{\,n_Cn_F - m^2 - n_C + m}$.

Finally, we write

$R = \prod_{h=1}^{5} R_h = 2^{\,n_Cn_F - m^2 - n_C + n_F}\, (n_F)^{n_C - m}\, C^{n_F}_m C^{n_C}_m, \qquad (2.69)$

yielding the result.

It should be noted that for given matrix $D$, $n_C$, and $n_F$, Algorithm UFLP-BASIS produces $R$ different bases of UFLP (with given $n_C$ and $n_F$). The fact that they are different is due to the following argument. In Steps 1, 2, and 4 of Algorithm UFLP-BASIS, we choose arbitrary elements for the subsets $M$, $H$, and $M^1$. Also, in Step 15 we choose an arbitrary value for $r$. The elements of $M$, $H$, and $M^1$, and the value of $r$, determine the indices of the variables that will be selected to be included in $B$. Therefore, every setting of these parameters produces a unique basis. It follows that, for a given input matrix, all the output bases are different from each other. Further, for two input matrices we may have two identical settings of these parameters. However, every input matrix produces bases that are different from the bases produced by the other matrix, as long as the two input matrices are not identical. This is because the components of the input matrices determine whether the selected variables are $x_{ij}$ or $s_{ij}$ ($x_{ij}$ for every component equal to one and $s_{ij}$ for every component equal to zero). Since the two input matrices are not identical, identical settings of the abovementioned parameters determine identical indices but with different types of variables ($x_{ij}$ or $s_{ij}$).

Example 2.6-continued: Given $D$ as defined before, the number of different UFLP bases $B$ with $n_C = 5$ and $n_F = 3$ that we can construct from matrix $D$ is equal to $R = 2^4\, (3)^2\, C^3_3 C^5_3 = 1440$.
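The count of Proposition 2.2 is straightforward to evaluate; a short Python sketch (the function name is ours) reproduces the value above:

```python
from math import comb

def count_bases(n_C, n_F, m):
    # Number of distinct UFLP bases produced from a single m-by-m matrix D
    # (Proposition 2.2); comb(n, m) is the binomial coefficient C^n_m.
    return (2 ** (n_C * n_F - m * m - n_C + n_F)
            * n_F ** (n_C - m) * comb(n_F, m) * comb(n_C, m))
```

For $n_C = 5$, $n_F = 3$, and $m = 3$ the formula gives $2^4 \cdot 3^2 \cdot 1 \cdot 10 = 1440$, as in the example.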

It follows that if we know all (0,1) matrices of size $(m,m)$, for $m \le \min\{n_C, n_F\}$, that have determinant absolute value equal to $d$, then we are able to compute the number of UFLP bases with given $n_C$ and $n_F$ (such that $n_C \ge m$ and $n_F \ge m$) that have the same determinant absolute value.

Theorem 2.3: For given $n_C$ and $n_F$, let $N(m,d)$ denote the function that returns the number of different (0,1) nonsingular matrices $D_{m,m}$ of size $(m,m)$, for $m \le \min\{n_C, n_F\}$, that have determinant absolute value equal to $d$. Then the number of different UFLP bases $B$ with the given $n_C$ and $n_F$ that are such that $|\det(B)| = d$ is

$2^{\,n_Cn_F - m^2 - n_C + n_F}\, (n_F)^{n_C - m}\, C^{n_F}_m C^{n_C}_m\, N(m,d).$

Using Theorem 2.2, for given $n_C$ and $n_F$ we can construct UFLP bases from any nonsingular (0,1) matrix $D$ of size $(m,m)$ such that $n_C \ge m$ and $n_F \ge m$. Also, the absolute value of the determinant of a basis of the LPR of UFLP can be calculated by computing its matrix $D$ using Theorem 2.1. We use these results to obtain information about the maximum possible determinant of bases of the LPR of UFLP.

Corollary 2.2: Given $n_C$ and $n_F$, the absolute value of the maximum possible determinant of a basis of the LPR of UFLP is equal to the absolute value of the maximum determinant of a (0,1) matrix of size $(n,n)$, where $n = \min\{n_C, n_F\}$.

The proof simply uses Algorithm UFLP-BASIS.

In Chapter 3, we investigate how efficiently the group relaxations of UFLP can be solved using standard algorithms. The running time of these algorithms is directly affected by the absolute value of the determinant of the LP optimal basis. In turn, the value of this determinant is a function of $m$, the size of the matrix $D$; see Theorem 2.1.

Corollary 2.3: Given $n_C$, $n_F$, and $m$ such that $n_F \ge m$ and $n_C \ge m$, the absolute value of the maximum possible determinant of a basis of the LPR of UFLP that has a $D$ matrix of size $(m,m)$ is equal to the absolute value of the maximum determinant of a (0,1) matrix of size $(m,m)$.

We next give (0,1) matrices of size $(h,h)$, $h \ge 2$, that can be used to construct UFLP bases with determinant equal to $1, 2, \ldots, h-1$ for any instance with $n_C \ge h$ and $n_F \ge h$. Let the matrix $U_{h,h}$ be formed by subtracting an identity matrix from a matrix of ones of the same size, i.e.,

$U_{h,h} = E_{h,h} - I_{h,h} = \begin{pmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 0 \end{pmatrix}. \qquad (2.70)$

We next introduce some results that we will use to compute the determinant of $U_{h,h}$.

Proof: Define , .h kQ TW= For any { }1, ,i h∈ … and { }1, , ,j k∈ … we write,

( ) ( ) ( ),1 1

, , , 1 .q q

h kr r

Q i j T i r W r j q= =

= = =∑ ∑

Lemma 2.8: Let $U_{h,h} = E_{h,h} - I_{h,h}$; then $U^{-1}_{h,h} = \dfrac{E_{h,h}}{h-1} - I_{h,h}$.

Proof: Define $Q_{h,h} = \dfrac{E_{h,h}}{h-1} - I_{h,h}$. We write

$U_{h,h}Q_{h,h} = \left(E_{h,h} - I_{h,h}\right)\left(\frac{E_{h,h}}{h-1} - I_{h,h}\right) = \frac{E^2_{h,h}}{h-1} - E_{h,h} - \frac{E_{h,h}}{h-1} + I_{h,h}. \qquad (2.71)$

Using Lemma 2.7 to compute $E^2_{h,h}$, (2.71) can be rewritten as

$U_{h,h}Q_{h,h} = \frac{hE_{h,h}}{h-1} - E_{h,h} - \frac{E_{h,h}}{h-1} + I_{h,h}. \qquad (2.72)$

Expression (2.72) then reduces to

$U_{h,h}Q_{h,h} = \left(\frac{h}{h-1} - 1 - \frac{1}{h-1}\right)E_{h,h} + I_{h,h} = I_{h,h}. \qquad (2.73)$
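The closed form of Lemma 2.8 can be checked with exact rational arithmetic; a small Python sketch (the helper names are ours):

```python
from fractions import Fraction

def U(h):
    # U_{h,h} = E_{h,h} - I_{h,h}
    return [[Fraction(0 if i == j else 1) for j in range(h)] for i in range(h)]

def U_inv(h):
    # the claimed inverse E_{h,h}/(h-1) - I_{h,h}
    return [[Fraction(1, h - 1) - (1 if i == j else 0) for j in range(h)]
            for i in range(h)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

identity5 = [[Fraction(1 if i == j else 0) for j in range(5)] for i in range(5)]
```

For $h = 5$, the product $U_{5,5}\left(\frac{E_{5,5}}{4} - I_{5,5}\right)$ is exactly the identity, as the lemma asserts.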

Lemma 2.9: Let $h \ge 2$. If $U_{h,h} = E_{h,h} - I_{h,h}$, then $\det(U_{h,h}) = h - 1$.


Proof: We prove this lemma by induction. For $h = 1$, $U_{1,1} = 0$, and therefore

$\det(U_{1,1}) = 0. \qquad (2.74)$

We now assume the result holds for $h = 1, \ldots, k$ and prove that it holds when $h = k+1$. We have

$U_{k+1,k+1} = E_{k+1,k+1} - I_{k+1,k+1} = \begin{pmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 0 \end{pmatrix}. \qquad (2.75)$

Next, we decompose $U_{k+1,k+1}$ into blocks so as to isolate a matrix $U_{k,k}$ in the lower right block, i.e.,

$U_{k+1,k+1} = \begin{pmatrix} 0 & E_{1,k} \\ E_{k,1} & U_{k,k} \end{pmatrix}. \qquad (2.76)$

Applying Lemma 2.1 to (2.76), we obtain

$\det(U_{k+1,k+1}) = \det(U_{k,k}) \det\!\left(0 - E_{1,k} U^{-1}_{k,k} E_{k,1}\right). \qquad (2.77)$

From our induction hypothesis, we know that $\det(U_{k,k}) = k - 1$. Further, Lemma 2.8 gives an exact form for $U^{-1}_{k,k}$ that we substitute in (2.77) to obtain

$\det(U_{k+1,k+1}) = (k-1) \det\!\left(0 - E_{1,k}\left(\frac{E_{k,k}}{k-1} - I_{k,k}\right) E_{k,1}\right) = (k-1) \det\!\left(E_{1,k}E_{k,1} - \frac{E_{1,k}E_{k,k}E_{k,1}}{k-1}\right). \qquad (2.78)$


We next compute the part enclosed in square brackets in expression (2.78). From Lemma 2.7 we know that $E_{1,k}E_{k,1} = k$, and so (2.78) reduces to

$\det(U_{k+1,k+1}) = (k-1) \det\!\left(k - \frac{E_{1,k}E_{k,k}E_{k,1}}{k-1}\right). \qquad (2.79)$

Again, using Lemma 2.7, we know that $E_{1,k}E_{k,k} = kE_{1,k}$. Therefore, $E_{1,k}E_{k,k}E_{k,1} = kE_{1,k}E_{k,1} = k^2$. Substituting in (2.79), we obtain

$\det(U_{k+1,k+1}) = (k-1) \det\!\left(k - \frac{k^2}{k-1}\right) = (k-1)\left(\frac{k^2 - k - k^2}{k-1}\right) = -k, \qquad (2.80)$

whose absolute value is $k = (k+1) - 1$, as claimed.

For $h \ge 2$ and $k \in \{1, \ldots, h-1\}$, define $U^k_{h,h}$ to be a matrix obtained from $U_{h,h}$ by replacing $h-1-k$ of the one elements of its first column with zeros. The number of one elements in the first column of $U^k_{h,h}$ is therefore equal to $k$. Matrix $U^k_{h,h}$ can be used to construct UFLP bases with determinant equal to $1, 2, \ldots, h-1$ for all instances of UFLP with $n_C \ge h$ and $n_F \ge h$.

Lemma 2.10: For $h \ge 2$ and $k \in \{1, \ldots, h-1\}$, $\det(U^k_{h,h}) = k$.

Proof: We decompose the matrix $U^k_{h,h}$ as in (2.76), i.e.,

$U^k_{h,h} = \begin{pmatrix} 0 & E_{1,h-1} \\ E^k_{h-1,1} & U_{h-1,h-1} \end{pmatrix}. \qquad (2.81)$


In (2.81), $E^k_{h-1,1}$ is a (0,1) vector of size $(h-1,1)$ that has exactly $k$ one elements. We apply Lemma 2.1 to (2.81) to obtain

$\det(U^k_{h,h}) = \det(U_{h-1,h-1}) \det\!\left(0 - E_{1,h-1} U^{-1}_{h-1,h-1} E^k_{h-1,1}\right). \qquad (2.82)$

Again, using Lemmas 2.8 and 2.9, we write

$\det(U^k_{h,h}) = (h-2) \det\!\left(0 - E_{1,h-1}\left(\frac{E_{h-1,h-1}}{h-2} - I_{h-1,h-1}\right) E^k_{h-1,1}\right) = (h-2) \det\!\left(E_{1,h-1}E^k_{h-1,1} - \frac{E_{1,h-1}E_{h-1,h-1}E^k_{h-1,1}}{h-2}\right). \qquad (2.83)$

Let $Q = E_{1,h-1}E^k_{h-1,1}$. Since $E^k_{h-1,1}$ has only $k$ one elements and the remaining elements are all zero, $Q = \sum_{r=1}^{h-1} E_{1,h-1}(.,r)E^k_{h-1,1}(r,.) = k$. Substituting in (2.83), we obtain

$\det(U^k_{h,h}) = (h-2) \det\!\left(k - \frac{E_{1,h-1}E_{h-1,h-1}E^k_{h-1,1}}{h-2}\right). \qquad (2.84)$

Similarly, let $Q_{h-1,1} = E_{h-1,h-1}E^k_{h-1,1}$. Then for $j \in \{1, \ldots, h-1\}$,

$Q_{h-1,1}(j,.) = \sum_{r=1}^{h-1} E_{h-1,h-1}(j,r)E^k_{h-1,1}(r,.) = k. \qquad (2.85)$

Hence, $Q_{h-1,1} = kE_{h-1,1}$. Also, if $W_{1,1} = E_{1,h-1}E_{h-1,h-1}E^k_{h-1,1}$, then from Lemma 2.7, $W_{1,1} = E_{1,h-1}Q_{h-1,1} = kE_{1,h-1}E_{h-1,1} = k(h-1)$. Finally, we write

$\det(U^k_{h,h}) = (h-2) \det\!\left(k - \frac{k(h-1)}{h-2}\right) = (h-2)\left(\frac{-k}{h-2}\right) = -k, \qquad (2.86)$

whose absolute value is $k$.


It should be noted that Lemma 2.9 is a special case of Lemma 2.10, as $U^{h-1}_{h,h} = U_{h,h}$, and it follows that $\det(U^{h-1}_{h,h}) = \det(U_{h,h}) = h - 1$. Further, the result is not limited to changing values of elements in the first column of $U_{h,h}$: it can be verified that the same result holds when applying the same changes to any single row or any single column of the matrix.

The following example illustrates Lemmas 2.9 and 2.10.

Example 2.7: For h = 4, U_{4,4} = E_{4,4} - I_{4,4}, i.e.,

U_{4,4} = [ 0 1 1 1
            1 0 1 1
            1 1 0 1
            1 1 1 0 ].

From Lemmas 2.9 and 2.10, det(U_{4,4}) = det(U^3_{4,4}) = 3. For each one element in the first column that is changed to zero, the determinant value drops by one. For k = 2 and k = 1, we write

U^2_{4,4} = [ 0 1 1 1
              0 0 1 1
              1 1 0 1
              1 1 1 0 ],

for which it can be verified that det(U^2_{4,4}) = 2, and

U^1_{4,4} = [ 0 1 1 1
              0 0 1 1
              0 1 0 1
              1 1 1 0 ],

for which we have det(U^1_{4,4}) = 1.


Proposition 2.3: For h >= 2 and k in {1, ..., h-1}, we can construct UFLP bases with determinant absolute value equal to k for any instance with n_C >= h and n_F >= h. Algorithm UFLP-INSTANCE describes how to obtain such an instance.

Algorithm UFLP-INSTANCE.
Input: h and k such that h >= 2 and k in {1, ..., h-1}. Parameters n_C and n_F such that n_C >= h and n_F >= h.
Output: Sets of indices of variables x_ij, s_ij, y_i, and t_i (M_x, M_s, M_y, and M_t) that yield a basis B with |det(B)| = k.
1: Let U^k_{h,h} = E_{h,h} - I_{h,h}
2: For i = 2 to h - k
3:     U^k_{h,h}(i,1) <- 0
4: End For
5: Apply Algorithm UFLP-BASIS on U^k_{h,h}, n_C, and n_F to obtain basis B.

Steps 2 to 4 zero out h - k - 1 of the h - 1 one elements of the first column, so that exactly k one elements remain, as in Example 2.7.
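Lemma 2.10 and the matrix construction of Algorithm UFLP-INSTANCE are easy to check numerically. The sketch below (plain Python; the helper names `det` and `u_matrix` are ours, not part of the thesis) builds U^k_{h,h} as in Steps 1 to 4 and verifies that its determinant has absolute value k:

```python
def det(m):
    """Exact integer determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def u_matrix(h, k):
    """U^k_{h,h}: E_{h,h} - I_{h,h} with rows 2..h-k of the first column zeroed,
    leaving exactly k one elements in that column (Steps 1-4 of UFLP-INSTANCE)."""
    u = [[0 if i == j else 1 for j in range(h)] for i in range(h)]
    for i in range(1, h - k):   # 0-based rows 1..h-k-1 are 1-based rows 2..h-k
        u[i][0] = 0
    return u

for h in range(2, 7):
    for k in range(1, h):
        assert abs(det(u_matrix(h, k))) == k
print("|det(U^k_{h,h})| = k verified for h = 2..6")
```

These matrices are then passed to Algorithm UFLP-BASIS to produce the actual bases, which by Theorem 2.2 have the same determinant absolute value.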


CHAPTER 3
MAXIMUM POSSIBLE DETERMINANT OF BASES OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM

As observed in Corollary 2.2, we need to determine the MPD of a (0,1) matrix to determine the MPD of bases of the LPR of UFLP. This is the purpose of this chapter, which is organized as follows. In Section 3.1, we study the MPD of a ±1 matrix, since it is related to the MPD of (0,1) matrices. In Section 3.2, we discuss the MPD of a (0,1) matrix. In Section 3.3, we determine the MPD of bases of the LPR of UFLP. Section 3.4 shows that the solutions corresponding to the bases of the LPR of UFLP with MPD we create are feasible. In Section 3.5, we conclude with comments on the efficiency of using group relaxations to solve UFLP.

3.1 Computing the MPD of ±1 Matrices

The problem of determining the maximum possible determinant of a matrix whose elements are ±1 is a well-known problem called the "Hadamard problem". The name pays tribute to Hadamard [35], who first discussed in 1893 how to obtain upper bounds on the determinant of such matrices.

Lemma 3.1 [35]: Let K_{h,h} be a matrix of size (h,h) whose components are ±1. Then

|det(K_{h,h})| <= h^(h/2).

This bound is not attained unless h is equal to 1, 2, or a multiple of 4 [36]. We define Q(h) to be the function that returns the absolute value of the maximum possible determinant of K_{h,h} of size (h,h). Table 3-1 shows the exact values of Q(h); see [37].


Table 3-1 [37]. Maximum possible determinant of a ±1 square matrix of size (h,h).
h   Q(h)       h    Q(h)
1   1          10   73728
2   2          11   327680
3   4          12   2985984
4   16         13   14929920
5   48         14   77635584
6   160        15   418037760
7   576        16   4294967296
8   4096       17   21474836480
9   14336      18   146028888064

Further, we refer to a Hadamard matrix as any ±1 matrix that achieves this bound. Golomb and Baumert [38] show how to construct Hadamard matrices whose dimensions are multiples of 4. The construction requires that any two rows or columns have half of their components of the same sign and half of their components of different sign. The rows are therefore pairwise orthogonal. Using a Kronecker product construction [39], Algorithm HADAMARD shows how to construct Hadamard matrices of size (h,h), where h = 2^r, r in Z_+.

Algorithm HADAMARD [39].
Input: h where h = 2^r, r in Z_+.
Output: Hadamard matrix J_{h,h} such that det(J_{h,h}) = h^(h/2).
1: J = (1)
2: If log_2 h = 0, return; else
3: For i = 1 to log_2 h
4:     J = [ J   J
              J  -J ]
5: End For

Hadamard matrices have the following interesting property.

Lemma 3.2 [38]: If J_{h,h} is a Hadamard matrix of size (h,h), then J_{h,h} J^T_{h,h} = h I_{h,h}.


Lemma 3.2 holds for all Hadamard matrices. For Hadamard matrices obtained by Algorithm HADAMARD, the following properties also hold.

Corollary 3.1: If J_{h,h} is a Hadamard matrix of size (h,h) that has been obtained using Algorithm HADAMARD, then

J^{-1}_{h,h} = (1/h) J_{h,h},   (3.1)

J(.,1) = E_{h,1},   J(1,.) = E_{1,h},   (3.2)

for all k in {2, ..., h}:   Σ_{i=1}^{h} J(i,k) = Σ_{i=1}^{h} J(k,i) = 0,   (3.3)

and

for all k in {2, ..., h}:   Σ_{i=2}^{h} J(i,k) = Σ_{i=2}^{h} J(k,i) = -1.   (3.4)

Example 3.1: For h = 4, the Hadamard matrix obtained using Algorithm HADAMARD is constructed as follows:

• J_{1,1} = 1,

• J_{2,2} = [ J_{1,1}   J_{1,1}    =  [ 1  1
              J_{1,1}  -J_{1,1} ]      1 -1 ], and

• J_{4,4} = [ J_{2,2}   J_{2,2}    =  [ 1  1  1  1
              J_{2,2}  -J_{2,2} ]       1 -1  1 -1
                                        1  1 -1 -1
                                        1 -1 -1  1 ].

Note that det(J_{4,4}) = Q(4) = 16, matching the value in Table 3-1.
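Algorithm HADAMARD amounts to one doubling step per power of two. The sketch below (plain Python; the helper names are ours) builds J_{h,h} and checks both Lemma 3.2 and the value det(J_{4,4}) = Q(4) = 16:

```python
def hadamard(h):
    """Sylvester (Kronecker) construction: start from (1) and double log2(h) times."""
    j = [[1]]
    while len(j) < h:
        j = ([row + row for row in j] +
             [row + [-x for x in row] for row in j])
    return j

def det(m):
    """Exact integer determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c]
               * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

j4 = hadamard(4)
n = len(j4)
# Lemma 3.2: J J^T = h I, i.e., the rows are pairwise orthogonal.
for a in range(n):
    for b in range(n):
        assert sum(j4[a][c] * j4[b][c] for c in range(n)) == (n if a == b else 0)
assert det(j4) == 16  # = Q(4) from Table 3-1
```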


3.2 Computing the MPD of (0,1) Matrices

A problem similar to Hadamard's is the problem of finding the maximum determinant of a square (0,1) matrix. For this problem, we have the following result.

Lemma 3.3 [40]: Let S_{h,h} be a (0,1) matrix of size (h,h). Then

|det(S_{h,h})| <= (h+1)^((h+1)/2) / 2^h.

Let U(h) be the function that returns the absolute value of the MPD of S_{h,h} of size (h,h). Also, denote by N(h) the number of square (0,1) matrices of size (h,h) that have determinant absolute value equal to U(h). Table 3-2 shows the values of U(h) and N(h). These values are obtained from [41] and [42].

Table 3-2 [41, 42]. MPD of a (0,1) square matrix of size (h,h) and number of square (0,1) matrices that attain the MPD.
h    U(h)      N(h)
1    1         1
2    1         6
3    2         6
4    3         120
5    5         7200
6    9         1058400
7    32        151200
8    56        391910400
9    144       27433728000
10   320       -
11   1458      -
12   3645      -
13   9477      -
14   25515     -
15   131072    -
16   327680    -
17   1114112   -


In [41] and [42], the authors count the matrices whose determinant equals the positive value U(h). However, we are interested in the number of matrices whose determinant absolute value equals U(h). Therefore, the values of N(h) for h >= 2 in Table 3-2 are double the values in [41] and [42].

We denote by W_{h,h} any binary matrix that achieves U(h). If J_{h+1,h+1} is a Hadamard matrix obtained using Algorithm HADAMARD, then a binary matrix W_{h,h} that achieves the maximum possible determinant can be obtained from J_{h+1,h+1} as shown in Algorithm BINARY.

Algorithm BINARY [40].
Input: Hadamard matrix J_{h+1,h+1}, where det(J_{h+1,h+1}) = (h+1)^((h+1)/2).
Output: A binary matrix W_{h,h} such that |det(W_{h,h})| = (h+1)^((h+1)/2) / 2^h.
1: For i = 2 to h+1
2:     J(.,i) <- J(.,i) - J(.,1)
3: End For
4: J = J / (-2)
5: W_{h,h} = J(2:h+1, 2:h+1)

Steps 1 to 3 in Algorithm BINARY subtract the first column of J from all other columns. From (3.2) we know that the components of the first column of J are all ones. Hence, the components of all columns of J (except the first column) change from {-1,1} to {-2,0}. After dividing J by -2 in Step 4, these components change from {-2,0} to {0,1}. Step 5 selects the submatrix of size (h,h) in the lower right corner of J and denotes it W_{h,h}.


Example 3.2: We next apply Algorithm BINARY to obtain W_{3,3}. Consider the matrix J_{4,4} obtained by Algorithm HADAMARD in Example 3.1. Steps 1 to 3 of the algorithm subtract the first column of J_{4,4} from all other columns, i.e.,

J(.,2) <- J(.,2) - J(.,1),   J(.,3) <- J(.,3) - J(.,1),   and   J(.,4) <- J(.,4) - J(.,1).

We obtain

J_{4,4} = [ 1  0  0  0
            1 -2  0 -2
            1  0 -2 -2
            1 -2 -2  0 ].

We then divide J_{4,4} by -2 to obtain

J_{4,4} = [ -0.5  0  0  0
            -0.5  1  0  1
            -0.5  0  1  1
            -0.5  1  1  0 ].

Finally, W_{3,3} is obtained by selecting the lower right block of J_{4,4} of size (3,3), i.e.,

W_{3,3} = [ 1 0 1
            0 1 1
            1 1 0 ].

Note that det(W_{3,3}) = U(3) = 2 in absolute value, matching the value in Table 3-2.
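Algorithm BINARY can be checked mechanically. The sketch below (plain Python with exact rationals; the helper names are ours) reproduces Example 3.2 and also checks the next admissible size, h = 7, against U(7) = 32 from Table 3-2:

```python
from fractions import Fraction

def hadamard(h):
    """Sylvester construction of J_{h,h} (Algorithm HADAMARD)."""
    j = [[1]]
    while len(j) < h:
        j = [row + row for row in j] + [row + [-x for x in row] for row in j]
    return j

def det(m):
    """Exact determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c]
               * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def binary_from_hadamard(j):
    """Algorithm BINARY: subtract column 1 from the others, divide by -2,
    keep the lower-right (h,h) block."""
    j = [[Fraction(x) for x in row] for row in j]
    n = len(j)
    for r in range(n):
        for c in range(1, n):
            j[r][c] -= j[r][0]
    j = [[x / -2 for x in row] for row in j]
    return [row[1:] for row in j[1:]]

w = binary_from_hadamard(hadamard(4))
assert w == [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # W_{3,3} of Example 3.2
assert abs(det(w)) == 2                        # U(3) = 4^2 / 2^3
assert abs(det(binary_from_hadamard(hadamard(8)))) == 32  # U(7) = 8^4 / 2^7
```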

3.3 Computing the MPD of Bases of the LPR of UFLP

Using Corollary 2.2 and Lemma 3.3, we can derive an upper bound for the MPD

of an arbitrary basis of the LPR of UFLP.


Theorem 3.1: Given n_C and n_F, the absolute value of the determinant of any basis of UFLP is less than or equal to

U(n) <= (n+1)^((n+1)/2) / 2^n,

where n = min{n_C, n_F}.

Similarly, if a restriction is made on m, the size of the matrix D (see Corollary 2.3), then the upper bound can be improved as follows.

Theorem 3.2: Given n_C, n_F, and m such that n_F >= m and n_C >= m, the absolute value of the determinant of any basis of UFLP for which D has size (m,m) is less than or equal to

U(m) <= (m+1)^((m+1)/2) / 2^m.

Example 3.3: For n_C = 100 and n_F = 8, the absolute value of the determinant of bases of the LPR of UFLP is less than or equal to U(8) = 56.

Example 3.4: For n_C = 100 and n_F = 8, the absolute value of the determinant of bases of the LPR of UFLP for which m = 3 is less than or equal to U(3) = 2.
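In practice, Theorems 3.1 and 3.2 reduce the bound to a table lookup of U on min{n_C, n_F} (or on m). A small sketch (plain Python; the dictionary is our own transcription of the first rows of Table 3-2, and the function name is ours):

```python
# U(h) for h = 1..10, transcribed from Table 3-2.
U = {1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 9, 7: 32, 8: 56, 9: 144, 10: 320}

def basis_det_bound(n_c, n_f):
    """Theorem 3.1: |det B| <= U(min(n_C, n_F)) for every basis B of the LPR."""
    return U[min(n_c, n_f)]

assert basis_det_bound(100, 8) == 56  # Example 3.3
assert basis_det_bound(10000, 5) == 5  # cf. g(10000,5) = U(5) in Section 3.5
```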

3.4 On the Feasibility of the LP Solution to UFLP that has the MPD

In this section we show that the basic solution associated with the basis produced by Algorithm BINARY and Algorithm UFLP-BASIS is feasible for UFLP. We also show the surprising result that this basic solution is a 1/(h+1) multiple of an integer vector, where h = 2^r, r > 0, showing that, although the determinant is large, the corresponding solution is not very fractional.

In Section 3.1, we presented how to obtain Hadamard matrices of size h (where h = 2^r, r in Z_+) that have the MPD. From these matrices, in Section 3.2, we described a way to obtain (0,1) matrices of size h - 1 (where h = 2^r, r > 0) that have the largest possible determinant. Further, we can use the obtained binary matrices to construct bases of UFLP that have the same determinant using Algorithm UFLP-BASIS. In the remainder of this section, we study the basic feasible solutions associated with these bases.

Algorithm BINARY describes how to obtain a (0,1) matrix W_{h,h} from a Hadamard matrix J_{h+1,h+1} obtained by Algorithm HADAMARD. We observe that, using an elementary column operation (ECO), we can achieve the same result as Algorithm BINARY. In particular, we observe that

W_{h,h} = J'_{h+1,h+1}(2:h+1, 2:h+1),   (3.5)

where

J'_{h+1,h+1} = J_{h+1,h+1} EC_{h+1,h+1} / (-2)   (3.6)

and where

EC_{h+1,h+1}(j,j) = 1    if 1 <= j <= h+1,   (3.7.a)
EC_{h+1,h+1}(1,j) = -1   if 2 <= j <= h+1,   (3.7.b)
EC_{h+1,h+1}(j,l) = 0    otherwise.          (3.7.c)


Example 3.2 (continued): We show how to obtain J'_{4,4} from J_{4,4} using (3.6) and (3.7). Consider

J_{4,4} = [ 1  1  1  1
            1 -1  1 -1
            1  1 -1 -1
            1 -1 -1  1 ].

Then

EC_{4,4} = [ 1 -1 -1 -1
             0  1  0  0
             0  0  1  0
             0  0  0  1 ].

Using (3.6) we obtain

J'_{4,4} = (J_{4,4} EC_{4,4}) / (-2) = [ -0.5  0  0  0
                                         -0.5  1  0  1
                                         -0.5  0  1  1
                                         -0.5  1  1  0 ].

Using (3.5) we obtain

W_{3,3} = [ 1 0 1
            0 1 1
            1 1 0 ]

as desired.

We next compute the inverses of J'_{h+1,h+1} and W_{h,h}, as they will be used later. We emphasize that the Hadamard matrices J_{h+1,h+1} we use here are obtained by Algorithm HADAMARD.

J'_{h+1,h+1} is obtained by subtracting the first column of J_{h+1,h+1} from all the other columns and dividing by -2. Given the specific structure of Hadamard matrices J_{h+1,h+1}, the elements of the first row of J'_{h+1,h+1} are all zero except for the first element. Also, the elements of the first column remain unchanged by the subtraction, so that after the division by -2 they all become equal to -0.5. We decompose J'_{h+1,h+1} into four blocks in such a way that W_{h,h} is the lower right block, i.e.,

J'_{h+1,h+1} = [ -0.5           0_{1,h}
                 -0.5 E_{h,1}   W_{h,h} ].   (3.8)

We then apply Lemma 2.2 on (3.8) to obtain

(J'_{h+1,h+1})^{-1} = [ -2                       0_{1,h}
                        -W^{-1}_{h,h} E_{h,1}    W^{-1}_{h,h} ].   (3.9)

Since we do not know W^{-1}_{h,h}, we use (3.6) to compute (J'_{h+1,h+1})^{-1} and hence obtain more information about W^{-1}_{h,h}. We write

(J'_{h+1,h+1})^{-1} = ( J_{h+1,h+1} EC_{h+1,h+1} / (-2) )^{-1} = -2 (EC_{h+1,h+1})^{-1} (J_{h+1,h+1})^{-1}.   (3.10)

From Corollary 3.1, we know the relation between J_{h+1,h+1} and (J_{h+1,h+1})^{-1}, i.e.,

(J'_{h+1,h+1})^{-1} = -2 (EC_{h+1,h+1})^{-1} (J_{h+1,h+1})^{-1} = (-2/(h+1)) (EC_{h+1,h+1})^{-1} J_{h+1,h+1}.   (3.11)

Further, it is easily verified that the inverse of the simple matrix EC_{h+1,h+1} is equal to

EC^{-1}_{h+1,h+1}(j,j) = 1   if 1 <= j <= h+1,   (3.12.a)
EC^{-1}_{h+1,h+1}(1,j) = 1   if 2 <= j <= h+1,   (3.12.b)
EC^{-1}_{h+1,h+1}(j,l) = 0   otherwise.          (3.12.c)

When we multiply a matrix by EC_{h+1,h+1} on the right, EC_{h+1,h+1} subtracts the first column of that matrix from all other columns. Instead, when multiplying a matrix by EC^{-1}_{h+1,h+1} on the left, all rows are added to the first row while the other rows remain unchanged. Using these observations together with (3.3), we decompose the product EC^{-1}_{h+1,h+1} J_{h+1,h+1} as follows:

(J'_{h+1,h+1})^{-1} = (-2/(h+1)) EC^{-1}_{h+1,h+1} J_{h+1,h+1}
                    = (-2/(h+1)) [ h+1       0_{1,h}
                                   E_{h,1}   J_{h+1,h+1}(2:h+1, 2:h+1) ].   (3.13)

Equating (3.9) and (3.13), we have

[ -2                      0_{1,h}          [ -2                    0_{1,h}
  -W^{-1}_{h,h} E_{h,1}   W^{-1}_{h,h} ] =   (-2/(h+1)) E_{h,1}    (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1) ].   (3.14)

We conclude that

W^{-1}_{h,h} = (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1).   (3.15)
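Identity (3.15) can be checked exactly for h = 3. The sketch below (plain Python with exact rationals; the helper `hadamard` and the variable names are ours) rebuilds W^{-1}_{3,3} from J_{4,4}, confirms that it inverts W_{3,3}, and verifies the row-sum property used later to reach (3.27):

```python
from fractions import Fraction

def hadamard(h):
    """Sylvester construction of J_{h,h} (Algorithm HADAMARD)."""
    j = [[1]]
    while len(j) < h:
        j = [row + row for row in j] + [row + [-x for x in row] for row in j]
    return j

h = 3
w = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # W_{3,3} from Example 3.2
j = hadamard(h + 1)
# (3.15): W^{-1} = (-2/(h+1)) * J_{h+1,h+1}(2:h+1, 2:h+1)
w_inv = [[Fraction(-2, h + 1) * j[r][c] for c in range(1, h + 1)]
         for r in range(1, h + 1)]
# Check that it really is the inverse: W * W^{-1} = I.
for a in range(h):
    for b in range(h):
        assert sum(w[a][c] * w_inv[c][b] for c in range(h)) == (1 if a == b else 0)
# Every row of W^{-1} sums to 2/(h+1), the fact behind (3.27).
assert all(sum(row) == Fraction(2, h + 1) for row in w_inv)
```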

We then use W_{h,h} as input to Algorithm UFLP-BASIS to obtain UFLP bases that have the same determinant absolute value as W_{h,h}. For simplicity we assume that n_C = n_F = h. The obtained basis, say B, will therefore have n_C n_F columns corresponding to the x_ij and s_ij variables, n_F columns corresponding to the y_i variables, and n_F columns corresponding to the t_i variables.


We apply ERO (2.7) on B and permute its columns so that all the x_ij and s_ij columns are on the left, all the t_i columns on the right, and all the y_i columns in the middle (block column widths n_C n_F, n_F, and n_F). Then B has the block structure

B = [ 0    W_{h,h}   0
     -E    0         0
      I    0         0
      0    0        -E
      0    I         I ],   (3.16)

where the -E and I blocks collect, respectively, the -1 entries and the unit entries contributed by the constraint rows.

We permute the horizontal blocks to locate an invertible block of size (n_C+n_F, n_C+n_F) in the lower right corner. We group some of the blocks and introduce the following notation to ease the computation of the inverse of B:

B' = [ I_{n_C n_F, n_C n_F}   B^1_{n_C n_F, n_C+n_F}
       0_{n_C+n_F, n_C n_F}   B^2_{n_C+n_F, n_C+n_F} ].   (3.17)

In particular, B^1_{n_C n_F, n_C+n_F} has the schematic form

B^1 = [ -E    0
         0   -E ],   (3.18)

where every row of B^1 is zero except for a single element equal to -1,

and

B^2_{n_C+n_F, n_C+n_F} = [ W_{h,h}        0_{h,n_F}
                           I_{n_F,n_F}    I_{n_F,n_F} ].   (3.19)

We apply Lemma 2.2 on (3.19) to obtain

(B^2_{n_C+n_F, n_C+n_F})^{-1} = [ W^{-1}_{h,h}    0_{h,n_F}
                                 -W^{-1}_{h,h}    I_{n_F,n_F} ].   (3.20)

Again we apply Lemma 2.2 on (3.17) and write

(B')^{-1} = [ I_{n_C n_F, n_C n_F}   -B^1 (B^2)^{-1}
              0                       (B^2)^{-1} ].   (3.21)

We use (3.18) to compute -B^1 (B^2)^{-1}. B^1 is formed of n_C n_F rows such that all the elements of every row are zero except for one element that is equal to -1. The product -B^1 (B^2)^{-1} is therefore formed of rows of (B^2)^{-1}: for every -1 element of B^1 with column index k, k in {1, ..., n_F}, the kth row of (B^2)^{-1} appears as the corresponding row of the product, i.e.,

-B^1 (B^2)^{-1} = [ (B^2)^{-1}(1,.)
                    ...
                    (B^2)^{-1}(1,.)
                    ...
                    (B^2)^{-1}(n_F,.)
                    ...
                    (B^2)^{-1}(n_F,.) ].   (3.22)

Since we assumed that n_C = n_F = h, we know from (3.20) that the first n_F rows of (B^2)^{-1} are ( W^{-1}_{h,h}   0 ). Hence,

-B^1 (B^2)^{-1} = [ W^{-1}_{h,h}(1,.)    0
                    ...
                    W^{-1}_{h,h}(1,.)    0
                    ...
                    W^{-1}_{h,h}(h,.)    0
                    ...
                    W^{-1}_{h,h}(h,.)    0 ],   (3.23)

where each row of the product is one of the rows ( W^{-1}_{h,h}(j,.)   0_{1,n_F} ), j in {1, ..., h}, repeated according to the positions of the -1 entries of B^1.

Substituting (3.20) and (3.23) into (3.21), we obtain the explicit form of (B')^{-1}: its upper left block is I_{n_C n_F, n_C n_F}, its upper right block is the stack of rows ( W^{-1}_{h,h}(j,.)   0 ) given in (3.23), its lower left block is 0, and its lower right block is (B^2)^{-1} as given in (3.20).   (3.24)

We now compute the associated basic feasible solution by multiplying (B')^{-1} by b in (1.6). Since we permuted the rows in (3.17), we apply the same row permutation on (1.6) to obtain

b' = [ 0_{1,n_C n_F}   E_{1,n_C}   E_{1,n_F} ]^T.   (3.25)

The LP solution corresponding to B is obtained by computing X = (B')^{-1} b'. Writing b' = [ 0_{n_C n_F,1} ; b'^2 ] with b'^2 = [ E_{n_C,1} ; E_{n_F,1} ], the last n_C + n_F components of X are

(B^2)^{-1} b'^2 = [ W^{-1}_{h,h} E_{n_C,1}
                    E_{n_F,1} - W^{-1}_{h,h} E_{n_C,1} ],

while, by (3.23), each of the first n_C n_F components of X equals one of the sums Σ_{j=1}^{h} W^{-1}_{h,h}(k,j), k in {1, ..., h}, i.e., a row sum of W^{-1}_{h,h}.   (3.26)


As we have shown in (3.15) that W^{-1}_{h,h} = (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1), the sum of the elements of the kth row of W^{-1}_{h,h} is equal to -2/(h+1) times the sum of the elements of the (k+1)th row of J_{h+1,h+1} excluding the first element of that row, k in {1, ..., h}. We established in (3.4) that this latter sum is equal to -1. It follows that the sum of the elements of the kth row of W^{-1}_{h,h} is equal to 2/(h+1). We use these observations to simplify (3.26) as follows:

X = [ (2/(h+1)) E_{n_C n_F + n_C, 1}
      (1 - 2/(h+1)) E_{n_F, 1} ].   (3.27)

Theorem 3.3: For h = 2^r, r > 0, let W_{h,h} be a (0,1) matrix obtained from the Hadamard matrix J_{h+1,h+1} using Algorithm BINARY. Also, let B be a basis of the LPR of UFLP obtained from W_{h,h} using Algorithm UFLP-BASIS. Then the basic solution associated with B is feasible and equal to

X = [ (2/(h+1)) E_{n_C n_F + n_C, 1}
      (1 - 2/(h+1)) E_{n_F, 1} ].

As a result, the LP solution to a UFLP instance associated with the matrix produced by Algorithm BINARY is always feasible. It is interesting to observe that, although the MPD is very high, the corresponding LP solution is not very fractional, since it is a multiple of 1/(h+1) while the determinant is equal to (h+1)^((h+1)/2) / 2^h.

Example 3.5: The binary matrix of size (15,15) that is obtained from the Hadamard matrix of size (16,16) produced by Algorithm HADAMARD has determinant absolute value equal to 131072. A basis of the UFLP that is constructed from that binary matrix with n_C = n_F = 15 has the same determinant. Although the basis has a determinant of value 131072, the LP solution corresponding to that basis is an integer multiple of 1/8:

X = [ (2/(h+1)) E_{n_C n_F + n_C, 1}     [ (1/8) E_{n_C n_F + n_C, 1}
      (1 - 2/(h+1)) E_{n_F, 1}       ] =   (7/8) E_{n_F, 1}           ].

3.5 Solving Group Relaxations of UFLP

The running time of shortest path algorithms used to solve group relaxations of UFLP is determined in most part by the MPD of the optimal basis of the LPR of UFLP. Let g(n_C, n_F) be the function that returns the MPD among the bases of the LPR of the UFLP for given n_C and n_F, i.e., g(n_C, n_F) = U(n), where n = min{n_C, n_F}. Theorem 3.1 shows that g(n_C, n_F) is exponential in n. It follows that applying traditional techniques to solve the group relaxations of UFLP yields exponential algorithms. However, we next describe results that show that determinants of bases of UFLP are typically small.


First, the upper bound defined in Theorem 3.1 is a function of n, where n = min{n_C, n_F}. Therefore, no matter how big n_C or n_F is, the MPD depends only on the smaller of the two. As an example, g(10000,5) = g(100,5) = g(5,5) = U(5) = 5.

Next, we give arguments supporting the claim that most of the bases of the LPR of UFLP have small determinants and that the upper bound of Theorem 3.1 is attained by very few bases.

As mentioned in Section 2.1, a basis of A_{n_C n_F + n_C + n_F, 2 n_C n_F + 2 n_F} is a square submatrix that is invertible and that has n_C n_F + n_C + n_F rows and columns. As before, denote by B a square submatrix of A of size (n_C n_F + n_C + n_F). The assumption made at the beginning of Section 2.1 imposes that the first 2 n_C n_F columns correspond to x_ij and s_ij variables and the remaining 2 n_F columns correspond to y_i and t_i variables. Lemmas 2.3, 2.4, 2.5 and 2.6 establish that the determinant of B depends on the way we select the y_i and t_i variables to be included in B.

We define k as the number of columns corresponding to y_i and t_i variables included in B; in other words, k = m_y + m_t. Also, we define T_k as the total number of submatrices of A of size (n_C n_F + n_C + n_F) that have k columns corresponding to y_i and t_i variables.

An upper bound on the total number of bases that we may obtain in (2.1) is given by

number of bases <= Σ_{k=0}^{2 n_F} T_k = Σ_{k=0}^{2 n_F} C(2 n_C n_F, n_C n_F + n_C + n_F - k) C(2 n_F, k),   (3.28)

where C(a,b) denotes the binomial coefficient "a choose b".


The inequality in (3.28) holds because the matrices counted by T_k are not required to have determinant different from 0. In (2.1) we obtained a simple formula by counting the total number of ways of selecting a set of columns of A whose number is equal to the number of rows, regardless of which variables correspond to these columns. In (3.28), we obtain the result by counting the number of columns corresponding to x_ij and s_ij variables separately from the number of columns corresponding to y_i and t_i variables.

We now consider (3.28). For the case where k = 0, the basic columns selected correspond only to x_ij and s_ij. Therefore B is singular according to Lemma 2.3. For k in {1, ..., n_F - 1}, we have only k columns corresponding to y_i or t_i variables. Similarly, Lemma 2.3 implies that B is singular. It follows that the number of submatrices of A of size (n_C n_F + n_C + n_F) that are singular up to this point is equal to T_0 + T_1 + ... + T_{n_F - 1}.

A nonzero value of det(B) can only occur when k >= n_F. Lemma 2.5 establishes that det(B) in {0,1} when k = n_F. Therefore, there are T_{n_F} submatrices of A of size (n_C n_F + n_C + n_F) with determinant 0 or 1.

As k becomes greater than n_F (m_y + m_t > n_F), we know using Lemma 2.6 that we have k - n_F columns corresponding to y_i variables that are associated with M, i.e., m = k - n_F. Theorem 3.2 implies that the MPD of B is U(m) = U(k - n_F).


Table 3-3 shows the MPD of B for different values of k, for k in {0, ..., 2 n_F}, together with T_k (the number of submatrices of A of size (n_C n_F + n_C + n_F) that have k columns corresponding to y_i and t_i variables).

Table 3-3. Maximum possible determinant of B and T_k for given number of columns corresponding to y_i and t_i variables.
k          MPD of B     T_k
0          0            C(2 n_C n_F, n_C n_F + n_C + n_F) C(2 n_F, 0)
1          0            C(2 n_C n_F, n_C n_F + n_C + n_F - 1) C(2 n_F, 1)
...        ...          ...
n_F - 1    0            C(2 n_C n_F, n_C n_F + n_C + 1) C(2 n_F, n_F - 1)
n_F        1            C(2 n_C n_F, n_C n_F + n_C) C(2 n_F, n_F)
n_F + 1    U(1) = 1     C(2 n_C n_F, n_C n_F + n_C - 1) C(2 n_F, n_F + 1)
n_F + 2    U(2) = 1     C(2 n_C n_F, n_C n_F + n_C - 2) C(2 n_F, n_F + 2)
n_F + 3    U(3) = 2     C(2 n_C n_F, n_C n_F + n_C - 3) C(2 n_F, n_F + 3)
n_F + 4    U(4) = 3     C(2 n_C n_F, n_C n_F + n_C - 4) C(2 n_F, n_F + 4)
...        ...          ...
n_F + n    U(n)         C(2 n_C n_F, n_C n_F + n_C - n) C(2 n_F, n_F + n)

Table 3-3 stops at n_F + n assuming that n_C >= n_F, and therefore n_F + n = 2 n_F. However, if n_C < n_F, i.e., n = n_C, then for k in {n_F + n + 1, ..., 2 n_F} the MPD remains equal to U(n).

It is clear that as we increase k, the MPD either increases or remains unchanged. Hence, Σ_{k=0}^{n_F+h} T_k is a lower bound on the number of submatrices of A of size (n_C n_F + n_C + n_F) whose determinant absolute values are less than or equal to U(h). For k > n_F + h, although the MPD is greater than U(h), there still are a number of submatrices of A, say T, whose determinants are less than or equal to U(h).


However, we do not know the value of T. It follows that the total number of submatrices of A whose determinant absolute value is less than or equal to U(h) is

Σ_{k=0}^{n_F+h} T_k + T >= Σ_{k=0}^{n_F+h} T_k.

We define a pseudo-basis to be a square submatrix of A_{n_C n_F + n_C + n_F, 2 n_C n_F + 2 n_F} of size (n_C n_F + n_C + n_F). Note that every basis is a pseudo-basis, but singular submatrices are pseudo-bases that are not bases.

We are interested in the proportion of bases whose determinant absolute values are less than or equal to U(h). As a proxy, we will compute the proportion of pseudo-bases that have determinant absolute values less than or equal to U(h).

Let p[g(n_C, n_F) <= U(h)] denote the probability that the determinant absolute value of a pseudo-basis of the LPR of the UFLP for given n_C and n_F, chosen uniformly at random, is less than or equal to U(h). Using the observations in Table 3-3 and (2.1), we write

p[g(n_C, n_F) <= U(h)] >= ( Σ_{k=0}^{n_F+h} T_k ) / C(2 n_C n_F + 2 n_F, n_C n_F + n_C + n_F)   if h < n,   (3.29.a)
p[g(n_C, n_F) <= U(h)] = 1   if h >= n.   (3.29.b)

Table 3-4 shows the value of p[g(n_C, n_F) <= U(h)] for values of n_C and n_F up to 35 and for different values of h.


Table 3-4. Probability that the MPD of pseudo-bases of the LPR of the UFLP for given n_C and n_F is less than or equal to U(h).
h:         2      3      4      5      6      7      8      9      10
U(h):      1      2      3      5      9      32     56     144    320
g(2,2)     1      -      -      -      -      -      -      -      -
g(3,3)     0.963  1      -      -      -      -      -      -      -
g(4,4)     0.918  0.990  1      -      -      -      -      -      -
g(5,5)     0.881  0.974  0.998  1      -      -      -      -      -
g(6,6)     0.850  0.956  0.992  0.999  1      -      -      -      -
g(7,7)     0.825  0.938  0.985  0.998  1      1      -      -      -
g(8,8)     0.804  0.920  0.976  0.995  0.999  1      1      -      -
g(9,9)     0.787  0.904  0.966  0.991  0.998  0.999  1      1      -
g(10,10)   0.772  0.890  0.957  0.987  0.997  0.999  1      1      1
g(11,11)   0.759  0.876  0.947  0.982  0.995  0.999  1      1      1
g(12,12)   0.748  0.864  0.938  0.976  0.993  0.998  0.999  1      1
g(13,13)   0.738  0.853  0.929  0.971  0.990  0.997  0.999  1      1
g(14,14)   0.729  0.843  0.920  0.965  0.987  0.996  0.999  1      1
g(15,15)   0.721  0.833  0.912  0.959  0.984  0.994  0.998  0.999  1
g(16,16)   0.714  0.825  0.904  0.953  0.980  0.993  0.998  0.999  1
g(17,17)   0.708  0.817  0.896  0.947  0.976  0.991  0.997  0.999  1
g(18,18)   0.702  0.809  0.889  0.942  0.973  0.989  0.999  0.999  1
g(19,19)   0.696  0.802  0.882  0.936  0.969  0.986  0.995  0.998  0.999
g(20,20)   0.691  0.795  0.875  0.931  0.965  0.984  0.994  0.998  0.999
g(21,21)   0.687  0.789  0.869  0.925  0.961  0.982  0.992  0.997  0.999
g(22,22)   0.682  0.783  0.862  0.920  0.957  0.979  0.991  0.996  0.999
g(23,23)   0.678  0.778  0.857  0.915  0.953  0.977  0.989  0.996  0.998
g(24,24)   0.674  0.773  0.852  0.910  0.950  0.974  0.988  0.995  0.998
g(25,25)   0.671  0.768  0.847  0.905  0.946  0.971  0.986  0.994  0.997
g(26,26)   0.668  0.763  0.842  0.901  0.942  0.969  0.984  0.993  0.997
g(27,27)   0.664  0.759  0.837  0.896  0.938  0.966  0.982  0.992  0.996
g(28,28)   0.661  0.754  0.832  0.892  0.935  0.963  0.981  0.990  0.996
g(29,29)   0.658  0.750  0.827  0.887  0.931  0.960  0.979  0.989  0.995
g(30,30)   0.656  0.747  0.823  0.883  0.927  0.958  0.977  0.988  0.994
g(31,31)   0.653  0.743  0.819  0.879  0.924  0.955  0.975  0.987  0.994
g(32,32)   0.651  0.739  0.815  0.875  0.920  0.952  0.973  0.985  0.993
g(33,33)   0.648  0.736  0.811  0.871  0.917  0.949  0.971  0.984  0.992
g(34,34)   0.646  0.733  0.807  0.868  0.914  0.947  0.969  0.983  0.991
g(35,35)   0.644  0.730  0.804  0.864  0.910  0.944  0.967  0.981  0.990

Although Theorem 3.1 implies that the MPD of bases of the LPR of UFLP for n_C = 35 and n_F = 35 can be as large as U(35) ≈ 3 × 10^17, Table 3-4 shows that more than 99% of the pseudo-bases of the LPR of UFLP for n_C = 35 and n_F = 35 have determinant absolute value less than or equal to U(10) = 320. It should be noted that 320 is not a large number when compared to the size (1295, 2520) of the matrix A of UFLP for n_C = 35 and n_F = 35. Further, as U(n) is a function of n, where n = min{n_C, n_F}, we have that

p[g(100000, 35) <= U(10)] = p[g(35, 100000) <= U(10)] = p[g(35, 35) <= U(10)] >= 0.99.
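The bound (3.29.a) is a ratio of binomial sums and is cheap to evaluate. The sketch below (plain Python; the function name is ours) reproduces two entries of Table 3-4:

```python
from math import comb

def p_small_det(n_c, n_f, h):
    """Lower bound (3.29.a) on the probability that a uniformly random
    pseudo-basis has determinant absolute value at most U(h)."""
    n = min(n_c, n_f)
    if h >= n:
        return 1.0                        # case (3.29.b)
    size = n_c * n_f + n_c + n_f          # rows of A = order of a pseudo-basis
    total = comb(2 * n_c * n_f + 2 * n_f, size)
    # T_k = C(2 n_C n_F, size - k) * C(2 n_F, k), summed over k = 0 .. n_F + h
    good = sum(comb(2 * n_c * n_f, size - k) * comb(2 * n_f, k)
               for k in range(n_f + h + 1))
    return good / total

assert round(p_small_det(3, 3, 2), 3) == 0.963  # Table 3-4, g(3,3), h = 2
assert round(p_small_det(4, 4, 2), 3) == 0.918  # Table 3-4, g(4,4), h = 2
```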


CHAPTER 4
SPECIAL CASES

In this chapter, we study two small instances of UFLP. In the first, we assume

that we have two customers and/or two facilities and we show that the LPR of UFLP

always describes the convex hull of its integer solutions. In the second, we assume that

we have three customers and three facilities and we show that the convex hull of integer

solutions can be obtained by adding six inequalities to its LP formulation.

4.1 Case 1: Two Customers and/or Two Facilities

If n_C = 2 and/or n_F = 2, then n = 2. Using Theorem 3.1 and Table 3-2, we know that the absolute value of the MPD of bases of the LPR of the UFLP with n_C = 2 and/or n_F = 2 is less than or equal to U(2) = 1. This means that as long as n_C = 2 and/or n_F = 2, there always is an optimal solution to the LPR of UFLP that is integer.

Theorem 4.1: If n_C = 2 and/or n_F = 2, then the LPR of UFLP describes the convex hull of its integer solutions.

We note that this result had been obtained before in the literature; see [15].

4.2 Case 2: Three Customers and Three Facilities

If n_C = n_F = 3, then n = 3. Using Theorem 3.1 and Table 3-2, we know that the absolute value of the MPD of bases of the LPR of the UFLP with n_C = n_F = 3 is less than or equal to U(3) = 2. Further, Table 3-2 establishes that there are exactly six (0,1) matrices of size (3,3) that have determinant absolute value equal to 2. It follows from Theorem 2.3 that there are exactly six bases of the LPR of UFLP for n_C = n_F = 3 that have a determinant whose absolute value is equal to 2. We show next that the basic feasible solutions corresponding to these bases are fractional and feasible. We then construct six inequalities that can be added to the LP formulation of UFLP to remove these fractional solutions and produce the convex hull of integer solutions.

First, we show how to construct the above-mentioned six bases. A (0,1) matrix of size (3,3) that has determinant absolute value equal to 2 can be obtained using Lemma 2.9,

U_{3,3} = [ 0 1 1
            1 0 1
            1 1 0 ].   (4.1)

According to Table 3-2, there are only six (0,1) matrices of size (3,3) whose determinant absolute value is equal to 2. It is easily verified that these six matrices can be obtained by considering all the possible column permutations, or all the possible row permutations, of U_{3,3} in (4.1). In fact, since U_{3,3} has the form of (4.1), the six matrices produced by all possible column permutations are identical to the six matrices produced by all possible row permutations.

It follows that the six (0,1) matrices of size (3,3) that have a determinant of

absolute value equal to 2 are

    ( 0 1 1 )   ( 0 1 1 )   ( 1 0 1 )   ( 1 1 0 )   ( 1 0 1 )       ( 1 1 0 )
    ( 1 0 1 ),  ( 1 1 0 ),  ( 0 1 1 ),  ( 0 1 1 ),  ( 1 1 0 ), and  ( 1 0 1 ).   (4.2)
    ( 1 1 0 )   ( 1 0 1 )   ( 1 1 0 )   ( 1 0 1 )   ( 0 1 1 )       ( 0 1 1 )

Given that n_C = n_F = 3, we use Algorithm UFLP-BASIS to construct one basis for

each of the matrices shown in (4.2); see Figure 4-1. Theorem 2.2 implies that these

bases have a determinant whose absolute value is equal to 2.

[Figure: six tables, one per basis, indicating with a dot which of the variables x_ij, s_ij, y_i, and t_i are basic in that basis.]

Figure 4-1. The six bases of the LPR of UFLP (with n_C = n_F = 3) that have determinant

absolute values equal to 2.

For each of the above bases, we compute the corresponding basic solution B⁻¹b,

where b is the right-hand side shown in (1.6). The basic solution associated with each

of the six bases is non-degenerate, and it can be verified that all basic variables have

value 0.5. It follows that each of these LP vertices has exactly 9 neighboring vertices.

We next show that for each of these basic solutions, all neighboring basic

solutions correspond to integer solutions.

Consider the bases in Figure 4-1: it can be verified that every basis differs

from every other basis in more than two basic variables. We know that two bases are

neighbors only if they differ in exactly one basic variable (as we perform one pivot in the

simplex algorithm to obtain a neighboring basis). It follows that each of the above


“fractional” bases does not have any neighbor that corresponds to another “fractional”

basis.

Since we know that these six bases are the only fractional bases in the LPR of

UFLP and they are not neighbors to each other, we conclude that for each of the six

bases in Figure 4-1 all the neighbors in the LPR correspond to integer solutions.

For each of these six bases, we can therefore pass a plane through its nine

neighbors to obtain a valid inequality for the convex hull of integer solutions of UFLP. It is

easily verified that this cut is facet-defining for the convex hull of integer solutions to UFLP.

We next describe how to obtain the cut corresponding to the first basis in Figure 4-1.

The same argument can be applied to the other bases after permuting appropriate indices.

The basic solution corresponding to the first basis in Figure 4-1 is as follows:

    x11 = x12 = x21 = x23 = x32 = x33 = s13 = s22 = s31 = y1 = y2 = y3 = t1 = t2 = t3 = 0.5.   (4.3)

The simplex tableau corresponding to this solution is

    x11 = 0.5 − 0.5(−x13 + x22 + x31 + s11 − s12 − s21 + s23 − s32 + s33),   (4.4.a)
    x12 = 0.5 − 0.5(−x13 + x22 + x31 − s11 + s12 − s21 + s23 − s32 + s33),   (4.4.b)
    x21 = 0.5 − 0.5( x13 − x22 + x31 − s11 + s12 + s21 − s23 + s32 − s33),   (4.4.c)
    x23 = 0.5 − 0.5( x13 − x22 + x31 − s11 + s12 − s21 + s23 + s32 − s33),   (4.4.d)
    x32 = 0.5 − 0.5( x13 + x22 − x31 + s11 − s12 + s21 − s23 + s32 − s33),   (4.4.e)
    x33 = 0.5 − 0.5( x13 + x22 − x31 + s11 − s12 + s21 − s23 − s32 + s33),   (4.4.f)
    s13 = 0.5 − 0.5( x13 + x22 + x31 − s11 − s12 − s21 + s23 − s32 + s33),   (4.4.g)
    s22 = 0.5 − 0.5( x13 + x22 + x31 − s11 + s12 − s21 − s23 + s32 − s33),   (4.4.h)
    s31 = 0.5 − 0.5( x13 + x22 + x31 + s11 − s12 + s21 − s23 − s32 − s33),   (4.4.i)
    y1  = 0.5 − 0.5(−x13 + x22 + x31 − s11 − s12 − s21 + s23 − s32 + s33),   (4.4.j)
    y2  = 0.5 − 0.5( x13 − x22 + x31 − s11 + s12 − s21 − s23 + s32 − s33),   (4.4.k)
    y3  = 0.5 − 0.5( x13 + x22 − x31 + s11 − s12 + s21 − s23 − s32 − s33),   (4.4.l)
    t1  = 0.5 − 0.5( x13 − x22 − x31 + s11 + s12 + s21 − s23 + s32 − s33),   (4.4.m)
    t2  = 0.5 − 0.5(−x13 + x22 − x31 + s11 − s12 + s21 + s23 − s32 + s33),   (4.4.n)
    t3  = 0.5 − 0.5(−x13 − x22 + x31 − s11 + s12 − s21 + s23 + s32 + s33).   (4.4.o)

When pivoting variable x13 into the basis, we obtain x13 = 1 while all

other non-basic variables stay at zero. A similar observation can be made for each of

the other non-basic variables. It follows that the cutting plane we are looking for passes

through the points

    x13 = 1,  x22 = x31 = s11 = s12 = s21 = s23 = s32 = s33 = 0,
    x22 = 1,  x13 = x31 = s11 = s12 = s21 = s23 = s32 = s33 = 0,
    x31 = 1,  x13 = x22 = s11 = s12 = s21 = s23 = s32 = s33 = 0,
    s11 = 1,  x13 = x22 = x31 = s12 = s21 = s23 = s32 = s33 = 0,
    s12 = 1,  x13 = x22 = x31 = s11 = s21 = s23 = s32 = s33 = 0,
    s21 = 1,  x13 = x22 = x31 = s11 = s12 = s23 = s32 = s33 = 0,
    s23 = 1,  x13 = x22 = x31 = s11 = s12 = s21 = s32 = s33 = 0,
    s32 = 1,  x13 = x22 = x31 = s11 = s12 = s21 = s23 = s33 = 0,
    s33 = 1,  x13 = x22 = x31 = s11 = s12 = s21 = s23 = s32 = 0.   (4.5)

Therefore, the inequality we are looking for is of the form

    x13 + x22 + x31 + s11 + s12 + s21 + s23 + s32 + s33 ≥ 1.   (4.6)

We conclude that inequality (4.6) is facet-defining for the convex hull of integer solutions

to UFLP and that it removes the fractional solution corresponding to the first basis in Figure 4-1.
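In the space of the nine non-basic variables of the first basis, the behavior of cut (4.6) can be verified numerically. The sketch below (Python, for illustration only) checks that the cut is tight at the nine neighboring vertices listed in (4.5) and is violated by the fractional vertex.

```python
# Non-basic variables of the first basis, in the order
# (x13, x22, x31, s11, s12, s21, s23, s32, s33).
def cut(v):
    # Left-hand side of inequality (4.6): the sum of all nine
    # non-basic variables must be at least 1.
    return sum(v)

# The fractional vertex has all nine non-basic variables at zero.
fractional_vertex = (0,) * 9
print(cut(fractional_vertex) >= 1)  # False: the cut removes this vertex

# Each of the nine neighboring vertices raises exactly one non-basic
# variable to 1 (see (4.5)); the cut is tight at all of them.
neighbors = [tuple(1 if k == j else 0 for k in range(9)) for j in range(9)]
print(all(cut(v) == 1 for v in neighbors))  # True
```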

Repeating the aforementioned steps for the other bases in Figure 4-1, we obtain

the following inequalities

    x13 + x22 + x31 + s11 + s12 + s21 + s23 + s32 + s33 ≥ 1,   (4.7.a)
    x13 + x21 + x32 + s11 + s12 + s22 + s23 + s31 + s33 ≥ 1,   (4.7.b)
    x12 + x23 + x31 + s11 + s13 + s21 + s22 + s32 + s33 ≥ 1,   (4.7.c)
    x12 + x21 + x33 + s11 + s13 + s22 + s23 + s31 + s32 ≥ 1,   (4.7.d)
    x11 + x22 + x33 + s12 + s13 + s21 + s23 + s31 + s32 ≥ 1,   (4.7.e)
    x11 + x23 + x32 + s12 + s13 + s21 + s22 + s31 + s33 ≥ 1.   (4.7.f)

In summary, we have established that for the case where n_C = n_F = 3, there are

exactly six basic feasible solutions that are fractional for the LPR of UFLP. The


inequalities in (4.7) remove these fractional solutions to form the convex hull of integer

solutions to UFLP.

Theorem 4.2: If n_C = n_F = 3, then the convex hull of integer solutions can be obtained

by adding the six inequalities in (4.7) to the LPR of UFLP.

We next analyze the inequalities in (4.7) with the view of generalizing them to

other instances of UFLP. To interpret the inequalities in (4.7), we use (1.4.c) to

substitute the slack variables s_ij by y_i − x_ij. We write the first inequality (4.7.a) as

    x13 + x22 + x31 − (x11 + x21) − (x12 + x32) − (x23 + x33) + 2y1 + 2y2 + 2y3 ≥ 1.   (4.8)

From (1.4.b) we know that x11 + x21 + x31 = 1. Therefore, (x11 + x21) can be

substituted by 1 − x31. Applying a similar transformation to (x12 + x32) and (x23 + x33),

we obtain

    x13 + x22 + x31 + y1 + y2 + y3 ≥ 2,   (4.9)

which is equivalent to (4.7.a).
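The substitution argument can be checked numerically. The following sketch (Python, illustration only) verifies, on random points satisfying the assignment constraints, the identity LHS(4.7.a) = 2·LHS(4.9) − 3, which shows that (4.7.a) and (4.9) cut off the same points of the feasible region.

```python
import random

def lhs_47a(x, y):
    # Left-hand side of (4.7.a) with s_ij = y_i - x_ij substituted in.
    def s(i, j):
        return y[i] - x[i][j]
    return (x[0][2] + x[1][1] + x[2][0]
            + s(0, 0) + s(0, 1) + s(1, 0) + s(1, 2) + s(2, 1) + s(2, 2))

def lhs_49(x, y):
    # Left-hand side of (4.9).
    return x[0][2] + x[1][1] + x[2][0] + sum(y)

random.seed(0)
for _ in range(1000):
    y = [random.random() for _ in range(3)]
    # Random x whose columns sum to 1, as required by the assignment
    # constraints x_1j + x_2j + x_3j = 1 for each customer j.
    cols = [[random.random() for _ in range(3)] for _ in range(3)]
    cols = [[v / sum(c) for v in c] for c in cols]
    x = [[cols[j][i] for j in range(3)] for i in range(3)]
    assert abs(lhs_47a(x, y) - (2 * lhs_49(x, y) - 3)) < 1e-9
print("identity verified")
```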

The same procedure can be applied to all inequalities in (4.7), resulting in the

following six inequalities:

    x13 + x22 + x31 + y1 + y2 + y3 ≥ 2,   (4.10.a)
    x13 + x21 + x32 + y1 + y2 + y3 ≥ 2,   (4.10.b)
    x12 + x23 + x31 + y1 + y2 + y3 ≥ 2,   (4.10.c)
    x12 + x21 + x33 + y1 + y2 + y3 ≥ 2,   (4.10.d)
    x11 + x22 + x33 + y1 + y2 + y3 ≥ 2,   (4.10.e)
    x11 + x23 + x32 + y1 + y2 + y3 ≥ 2.   (4.10.f)

We interpret the first inequality of (4.10) as follows. A matching of the set of

facilities, F, to the set of customers, C, namely (x13, x22, x31), is first selected. Any solution

(x, y) for which x13 + x22 + x31 ≥ 1 clearly satisfies (4.10.a) since we have

y1 + y2 + y3 ≥ 1. Now consider any solution (x, y) with x13 = x22 = x31 = 0; then clearly

y1 + y2 + y3 ≥ 2 since no single facility can serve all customers under the condition that

x13 = x22 = x31 = 0. The remaining inequalities (4.10.b) to (4.10.f) are obtained using

different permutations of the matching of F to C.
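This interpretation can be validated by enumerating all integer feasible solutions for n_C = n_F = 3; a minimal sketch (Python, illustration only):

```python
from itertools import product

# All integer feasible solutions with n_C = n_F = 3: choose a nonempty
# set of open facilities (y), then assign every customer to an open one.
def feasible_solutions():
    for y in product((0, 1), repeat=3):
        open_fac = [i for i in range(3) if y[i]]
        if not open_fac:
            continue
        for assign in product(open_fac, repeat=3):  # assign[j] = facility of j
            x = [[1 if assign[j] == i else 0 for j in range(3)]
                 for i in range(3)]
            yield x, y

# Inequality (4.10.a): x13 + x22 + x31 + y1 + y2 + y3 >= 2.
ok = all(x[0][2] + x[1][1] + x[2][0] + sum(y) >= 2
         for x, y in feasible_solutions())
print(ok)  # True
```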

For UFLP with n_C = n_F = 3, a complete bipartite graph between the set of

facilities, F, and the set of customers, C, is given in Figure 4-2. Each inequality in

(4.10) gives a matching between the set of facilities and the set of customers. We show

in Figure 4-3(a) to (f) the bipartite graph between the set of facilities and the set of

customers that corresponds to the matching associated with each of the inequalities in

(4.10).

Figure 4-2. A complete bipartite graph between the set of facilities and the set of

customers for UFLP with n_C = n_F = 3.

Figure 4-3. Bipartite graphs between the set of facilities and the set of customers

corresponding to the matching associated with each of the inequalities in (4.10): a) (4.10.a), b) (4.10.b), c) (4.10.c), d) (4.10.d), e) (4.10.e), and f) (4.10.f).

Using this interpretation, we next provide a generalization of the inequalities to

instances where n_C ≥ 3 and n_F ≥ 3.

Definition: Let Ĝ(F, C, Ê) be the complete bipartite graph between the set of facilities,

F, and the set of customers, C; we denote by Ê the set of all edges of Ĝ. We say that

Ê′ ⊆ Ê "covers the facilities" if, for all i ∈ F, there exists j ∈ C such that (i, j) ∈ Ê′.

Note that in each bipartite graph in Figure 4-3(a) to (f), the set of edges "covers

the facilities".

Theorem 4.3: Assume that Ê′ covers the facilities. Then, the following inequality

    Σ_{(i,j) ∈ Ê′} x_ij + Σ_{i ∈ F} y_i ≥ 2   (4.11)

is valid for UFLP.

Proof: Consider any feasible solution (x, y) to UFLP. If any of the x_ij variables

corresponding to (i, j) ∈ Ê′ is equal to 1, then (4.11) is satisfied because Σ_{i ∈ F} y_i ≥ 1 holds

for all feasible solutions to UFLP. Further, if Σ_{(i,j) ∈ Ê′} x_ij = 0, i.e., x_ij = 0 for all (i, j) ∈ Ê′, we

claim that Σ_{i ∈ F} y_i ≥ 2, showing that (4.11) is satisfied.

Assume for contradiction that there exists a feasible solution with

x_ij = 0 for all (i, j) ∈ Ê′ and Σ_{i ∈ F} y_i = 1. Let k ∈ F be the only index such that y_k = 1. Because

Ê′ covers F, there exists j ∈ C such that (k, j) ∈ Ê′, and therefore x_kj = 0. Since Σ_{i ∈ F} x_ij = 1 = Σ_{i ∈ F\{k}} x_ij, there exists k′ ∈ F

with k′ ≠ k such that x_k′j = 1. It follows that y_k′ = 1, a contradiction to the fact that Σ_{i ∈ F} y_i = 1.
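Theorem 4.3 can also be checked by enumeration on small instances. The sketch below (Python, illustration only) uses n_C = n_F = 4 and a hypothetical covering edge set Ê′ consisting of the matching (i, i); any edge set that covers the facilities could be used instead.

```python
from itertools import product

n_f, n_c = 4, 4
# A hypothetical edge set E' that covers the facilities: facility i is
# matched to customer i.
cover = [(i, i) for i in range(n_f)]

def violated():
    # Enumerate all integer feasible solutions and look for one that
    # violates inequality (4.11).
    for y in product((0, 1), repeat=n_f):
        open_fac = [i for i in range(n_f) if y[i]]
        if not open_fac:
            continue
        for assign in product(open_fac, repeat=n_c):  # assign[j] = facility of j
            lhs = sum(1 for (i, j) in cover if assign[j] == i) + sum(y)
            if lhs < 2:
                return True
    return False

print(violated())  # False: (4.11) holds at every integer feasible solution
```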

Theorem 4.3 gives a large family of inequalities for UFLP. This family leads to a

convex hull description of the integer solutions to the problem when n_C = n_F = 3. An

interesting direction of future research is to determine whether it leads to other convex

hull descriptions for n_C ≥ 3 and/or n_F ≥ 3.


CHAPTER 5
EXPERIMENTAL RESULTS

In this chapter, we present experimental results on solving the group relaxation of

UFLP. We construct multiple instances of different sizes. The parameters n_C and n_F are

chosen as follows.

First, we choose n_F to take all possible values from 4 to 50. Then, for each value

of n_F, we create several values of n_C. When n_F ∈ {4, …, 9}, we generate all values of n_C

from n_F to n_F + 5. When n_F ∈ {10, …, 50}, we generate all values of n_C from n_F − 5 to

n_F + 5.

Table 5-1 describes the selection of the parameters n_C and n_F in the construction

of experiments.

Table 5-1. Selection of the parameters n_C and n_F in the construction of UFLP experiments.

    n_C  n_F | n_C  n_F | n_C  n_F | n_C  n_F | n_C  n_F
     4    4  |  5    5  |  9    9  |  8   10  | 14   10
     5    4  |  ..   5  |  ..   9  |  9   10  | 15   10
     6    4  | 10    5  | 14    9  | 10   10  |  ..   ..
     7    4  |  6    6  |  5   10  | 11   10  | 45   50
     8    4  |  ..   .. |  6   10  | 12   10  |  ..  50
     9    4  |  ..   .. |  7   10  | 13   10  | 55   50

For each setting of the parameters n_C and n_F, we generate 1000 instances with

different cost vectors. In all instances, the costs satisfy the triangle inequality.

For each facility i ∈ F, we generate random coordinates (h_i, k_i) that are

uniformly distributed over the range (10, 40). Also, for each customer j ∈ C, we

generate random coordinates (h_j, k_j) that are uniformly distributed over the range

(10, 40). Then, the metric distance d_ij between each facility i and each customer j is

computed as

    d_ij = sqrt( (h_i − h_j)² + (k_i − k_j)² ).   (5.1)

Similarly, we then generate the cost per unit distance, α, and the cost of opening

facility i, f_i, such that they are uniformly distributed over the range (10, 40). Finally, we

compute the cost of assigning customer j to facility i, c_ij, as c_ij = α d_ij.
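This generation scheme can be sketched as follows; the thesis's experiments were implemented in Matlab with CPLEX, so the Python code below is an illustrative reconstruction that assumes the range (10, 40) and the Euclidean distances described above.

```python
import math
import random

def generate_instance(n_c, n_f, lo=10.0, hi=40.0, seed=None):
    """Sketch of the instance generator: facilities and customers get
    uniform random coordinates, assignment costs are Euclidean distances
    scaled by alpha, so the costs satisfy the triangle inequality."""
    rng = random.Random(seed)
    fac = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n_f)]
    cus = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n_c)]
    alpha = rng.uniform(lo, hi)                    # cost per unit distance
    f = [rng.uniform(lo, hi) for _ in range(n_f)]  # facility opening costs
    c = [[alpha * math.dist(fac[i], cus[j]) for j in range(n_c)]
         for i in range(n_f)]                      # c_ij = alpha * d_ij
    return c, f

c, f = generate_instance(n_c=5, n_f=4, seed=42)
```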

Let x_LP be an optimal solution of the LPR of UFLP and let z_LP be its

associated optimal value. In each instance, if x_LP is integer then the instance is

ignored; otherwise, x_LP is non-integer and we solve the corner relaxation of UFLP

corresponding to that instance. In our experiments, LP problems are solved using

CPLEX.

For every parameter setting (n_C, n_F), we denote by R the number of instances

whose x_LP was non-integer. If R = 0, we obtained an integer solution for the LPR of

UFLP in all of the 1000 trials. Therefore, we do not need to solve any corner relaxation

of UFLP, and hence this parameter setting is omitted from our result table.

In Table 5-2, we present results for settings where R ≠ 0. Every row represents

the instances of a single parameter setting (n_C, n_F). The first two columns present the

values of n_C and n_F corresponding to every instance, while the third column gives the

R value for this setting.


For any specific instance where R ≠ 0, we compute the absolute value of the

determinant of the basis corresponding to x_LP. The column D_max contains the

maximum of these determinants.

In each instance where x_LP is non-integer, we compute an optimal solution of

the group minimization problem. This is done using our implementation of Dijkstra's

algorithm in Matlab. Denote by x_Gr and z_Gr the solution and objective value

corresponding to this problem. For each instance, we check the feasibility of x_Gr by

checking whether it is nonnegative. The inf column in Table 5-2 reports the number of

instances for a parameter setting in which x_Gr is infeasible.
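For reference, a minimal generic Dijkstra implementation is sketched below (in Python rather than the Matlab used in the thesis); the toy graph is hypothetical and does not represent one of the group networks solved in the experiments.

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra's algorithm on an adjacency dict
    {node: [(neighbor, weight), ...]}; returns shortest distances
    from the source node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy network with four nodes.
adj = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 4.0)], 2: [(3, 1.0)]}
print(dijkstra(adj, 0)[3])  # 4.0: shortest path 0 -> 1 -> 2 -> 3
```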

Let x_IP be an optimal integer solution of the particular instance of UFLP and

denote by z_IP its associated optimal value. If x_Gr is feasible, then we know that it is an

optimal solution to the MIP formulation of UFLP, and hence x_IP = x_Gr and z_IP = z_Gr.

Otherwise, we compute x_IP using CPLEX.

For each of the fractional instances, the value (z_Gr − z_LP)/(z_IP − z_LP)

represents how close z_Gr is to z_LP. Similarly, (z_IP − z_Gr)/(z_IP − z_LP) represents how

close z_Gr is to z_IP. The last columns of Table 5-2 give the average (μ) and

standard deviation (σ) of (z_Gr − z_LP)/(z_IP − z_LP) and (z_IP − z_Gr)/(z_IP − z_LP),

respectively.

Next, we interpret the first row of Table 5-2. For the parameter setting n_C = 5 and

n_F = 10, 7 instances (R = 7) out of the 1000 instances generated were fractional.

Hence, x_Gr and x_IP were computed for each of these instances. The group relaxation

solution was infeasible for 6 of these instances (inf = 6). On average for these

instances, the z_Gr value equals z_LP + 0.3251(z_IP − z_LP), that is,

z_IP − 0.6749(z_IP − z_LP). This shows that group relaxations of problems of these

sizes close about 32% of the gap that exists between the LP and IP values when the LPR does not

solve the IP.
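The two ratios reported in Table 5-2 can be computed as follows; the numeric values below are hypothetical and merely mirror the first row of the table.

```python
def gap_fractions(z_lp, z_gr, z_ip):
    """Fractions of the LP-IP gap closed by (and remaining after)
    the group relaxation bound z_gr."""
    gap = z_ip - z_lp
    return (z_gr - z_lp) / gap, (z_ip - z_gr) / gap

# Hypothetical values chosen so the closed fraction is 0.3251.
closed, remaining = gap_fractions(z_lp=100.0, z_gr=103.251, z_ip=110.0)
print(round(closed, 4), round(remaining, 4))  # 0.3251 0.6749
```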

Table 5-2. Experimental results.

    n_C  n_F   R   D_max  inf   z_Gr − z_LP           z_IP − z_Gr
                                μ         σ           μ         σ
     5   10    7     2     6    0.3251    0.3841      0.6749    0.3841
     9   12    6     2     5    0.071141  0.16158     0.92886   0.16158
    38   42    9     2     4    0.19018   0.12144     0.80982   0.12144
    24   19    4     2     2    0.19828   0.11136     0.80172   0.11136
    33   31    5     2     2    0.28837   0.16097     0.71163   0.16097
    49   53    9     2     3    0.009701  0.082563    0.9903    0.082563
    16   17    8     2     6    0.28879   0.009713    0.71121   0.009713
     4    5   10     2     6    0.4661    0.23046     0.5339    0.23046
    40   38   10     2     4    0.053436  0.055581    0.94656   0.055581
    19   14    8     2     5    0.36607   0.28093     0.63393   0.28093
    30   32    4     2     2    0.48526   0.143       0.51474   0.143
    10    5    3     2     1    0.30444   0.11459     0.69556   0.11459
     4    8    6     2     5    0.35983   0.26111     0.64017   0.26111
     8   10    7     2     3    0.15138   0.26122     0.84862   0.26122
    42   37    6     2     5    0.22951   0.060884    0.77049   0.060884
    16   21   10     2     5    0.024014  0.041141    0.97599   0.041141
    20   15    7     2     3    0.19268   0.060913    0.80732   0.060913
    33   35    6     2     2    0.18086   0.21959     0.81914   0.21959
    41   43    5     2     3    0.14379   0.29466     0.85621   0.29466
    39   43    6     2     1    0.40836   0.17853     0.59164   0.17853
    19   15    8     2     4    0.22526   0.058017    0.77474   0.058017
    10   13    7     2     2    0.40332   0.25001     0.59668   0.25001
    33   28    5     2     3    0.39509   0.31746     0.60491   0.31746
    15   20    5     2     3    0.14148   0.076488    0.85852   0.076488
    32   29    8     2     6    0.034155  0.30484     0.96585   0.30484
    30   27    9     2     6    0.027465  0.074143    0.97253   0.074143
    17   21    8     2     8    0.31876   0.29283     0.68124   0.29283
    50   48    9     2     8    0.21214   0.30076     0.78786   0.30076
    37   38    8     2     6    0.45277   0.18318     0.54723   0.18318
    22   19   10     2     6    0.20866   0.16708     0.79134   0.16708
    42   38    7     2     2    0.077029  0.31146     0.92297   0.31146
    18   22   10     2     1    0.27      0.022777    0.73      0.022777
    28   23    3     2     1    0.46855   0.17831     0.53145   0.17831
    14    9    7     2     3    0.33048   0.21728     0.66952   0.21728
    35   39    4     2     1    0.19733   0.25675     0.80267   0.25675
    25   23    8     2     7    0.1295    0.080499    0.8705    0.080499
    10   12   10     2     3    0.42396   0.007655    0.57604   0.007655
    11   15    6     2     1    0.47253   0.048646    0.52747   0.048646
    17   12    3     2     2    0.1885    0.26933     0.8115    0.26933
    48   46    6     2     6    0.03364   0.33942     0.96636   0.33942
    27   26   10     2     6    0.090791  0.13538     0.90921   0.13538
    13   10    7     2     2    0.28787   0.3477      0.71213   0.3477
    22   19    8     2     4    0.092943  0.11424     0.90706   0.11424
    34   30    8     2     6    0.14572   0.048006    0.85428   0.048006


CHAPTER 6
CONCLUSION AND FUTURE RESEARCH

In this thesis, we obtained results about the linear programming and group

relaxations of UFLP. We proved that the maximum possible determinant is exponential

in terms of min{n_C, n_F}, but we also gave theoretical and experimental arguments for why

most of the bases of the LPR of UFLP have small determinants. It follows that the size

of the shortest path problem associated with the group relaxation of UFLP is typically

small. Moreover, we have shown that even when the bases of the LPR of UFLP have

the MPD, the LP solution corresponding to these bases might not always be very

fractional.

Based on our results about the LPR of UFLP, we gave a new proof that the LPR of

UFLP describes the convex hull of its integer solutions for the case where n_C = 2 and/or

n_F = 2. We also gave six inequalities that can be added to the LP formulation of UFLP

to obtain the convex hull of integer solutions when n_C = n_F = 3. We believe that the

methodology used can be extended to find the convex hull of UFLP for more general

instances where min{n_C, n_F} = 3.

This work opens new avenues of research. One direction is to develop

heuristics/approximation algorithms to obtain good-quality feasible solutions based on

the optimal solution of the group minimization problem. Another direction is to extend

the study of the group relaxations to other variants of the facility location problem.

Finally, we could seek to generalize the family of inequalities developed for the case

where n_C = n_F = 3 to obtain facets of the convex hull of integer solutions of instances

where n_C ≥ 3 and n_F ≥ 3.


LIST OF REFERENCES

1. Kuehn, A. and Hamburger, M.: A heuristic program for locating warehouses. Manag. Sci. 9, 643-666 (1963)

2. Stollsteimer, J.: A working model for plant numbers and locations. J. Farm Econ. 45, 631-645 (1963)

3. Hakimi, S. L.: Optimum locations of switching centers and the absolute centers and medians of a graph. Oper. Res. 12, 450–459 (1964)

4. Hakimi, S. L.: Optimum distribution of switching centers in a communication network and some related graph theoretic problems. Oper. Res. 13, 462–475 (1965)

5. Kaufman, L., Eede, M. V., and Hansen, P.: A plant and warehouse location problem. 4OR. 28, 547-557 (1977)

6. Maranzana, F. E.: On the location of supply points to minimize transport costs. Oper. Res. 15, 261–270 (1964)

7. Daskin, M. S.: Network and Discrete Location: Models, Algorithms, and Applications. Wiley-Interscience, New York (1995)

8. ReVelle, C. S., Eiselt, H. A., and Daskin, M. S.: A bibliography for some fundamental problem categories in discrete location science. European J. Oper. Res. 184, 817–848 (2008)

9. Farahani, R. Z. and Hekmatfar, M. (eds.): Facility Location: Concepts, Models, Algorithms and Case Studies, Contributions to Management Science. Springer-Verlag, Berlin (2009)

10. Cornuejols, G., Nemhauser, G., and Wolsey, L.: The Uncapacitated Facility Location Problem. In: Mirchandani, P. and Francis, R. (eds.) Discrete Location Theory, pp. 119-171. John Wiley and Sons (1990)

11. Nemhauser, G. and Wolsey, L.: Integer and Combinatorial Optimization. John Wiley and Sons (1990)

12. Guignard, M.: Fractional vertices, cuts and facets of the simple plant location problem. Math. Program. 12, 150-162 (1980)

13. Cornuejols, G. and Thizy, J.-M.: Some facets of the simple plant location polytope. Math. Program. 23, 50-74 (1982)

14. Cho, D. C., Johnson, E. L., Padberg, M., and Rao, M. R.: On the uncapacitated plant location problem-I: Valid inequalities and facets. Math. Oper. Res. 8, 579-589 (1983)


15. Cho, D. C., Padberg, M., and Rao, M. R.: On the uncapacitated plant location problem-II: Facets and lifting theorems. Math. Oper. Res. 8, 590-612 (1983)

16. Leung J. M. Y. and Magnanti, T. L.: Valid inequalities and facets of the capacitated plant location problem. Math. Program. 44, 271-291 (1989)

17. Aardal, K., Pochet, Y., and Wolsey, L. A.: Capacitated facility location: valid inequalities and facets. Math. Oper. Res. 20, 562-582 (1995)

18. Cánovas, L., Landete, M., and Marín, A.: On the facets of the simple plant location packing polytope. Discrete Appl. Math. 124, 27-53 (2002)

19. Hochbaum, D. S.: Heuristics for the fixed cost median problem. Math. Program. 22, 148-162 (1982)

20. Chudak, F. and Shmoys, D.: Improved approximation algorithms for the uncapacitated facility location problem. Unpublished manuscript (1998)

21. Shmoys, D., Tardos, E., and Aardal, K.: Approximation algorithms for facility location problems. Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pp. 265-274 (1997)

22. Sviridenko, M.: An improved approximation algorithm for the metric uncapacitated facility location problem. Proceedings of the 9th International IPCO Conference on Integer Programming and Combinatorial Optimization, pp. 240-257, May 27-29 (2002)

23. Charikar, M., Khuller, S., Mount, D., and Narasimhan, G.: Facility location with outliers. Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, Washington DC, January (2001)

24. Jain, K. and Vazirani, V.: Primal-dual approximation algorithms for metric facility location and k-median problems. Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pp. 2-13, October (1999)

25. Guha, S. and Khuller, S.: Greedy strikes back: improved facility location algorithms. J. Algorithms. 31, 228-248 (1999)

26. Mahdian, M., Ye, Y., and Zhang, J.: Improved approximation algorithms for metric facility location problems. Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization, pp. 229–242 (2002)

27. Korupolu, M., Plaxton, C., and Rajaraman, R.: Analysis of a local search heuristic for facility location problems. Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1-10, January (1998)

28. Cocking, C.: Solutions to facility location–network design problems. Ph.D. thesis, University of Heidelberg, Germany (2008)


29. Schrijver, A.: Theory of Linear and Integer Programming. John Wiley and Sons, Chichester, (1986)

30. Richard, J. P. P. and Dey, S. S.: The Group-Theoretic Approach in Mixed Integer Programming. In: Jünger, M., Liebling, T. M., Naddef, D., Nemhauser, G. L., Pulleyblank, W. R., Reinelt, G., Rinaldi, G., and Wolsey, L. A. (eds.) Fifty Years of Integer Programming 1958-2008 From early years to the state-of-the-art. Springer (2009)

31. Kannan, R. and Bachem, A.: Polynomial algorithms for computing the Smith and Hermite normal forms of an integer matrix. SIAM J. Comput. 8, 499-507 (1979)

32. Shapiro, J.F.: Dynamic programming algorithms for the integer programming problem - I: The integer programming problem viewed as a knapsack type problem. Oper. Res., 16, 103–121 (1968)

33. Silvester, J. R.: Determinants of block matrices. Gaz. Math. 84, 460–467 (2000)

34. Brookes, M.: The Matrix Reference Manual. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html (2005). Accessed 01 March 2010.

35. Hadamard, J.: Résolution d'une question relative aux déterminants. Bull. Sci. Math. 17, 240-246 (1893)

36. Brenner, J.: The Hadamard maximum determinant problem. Amer. Math. Monthly. 79, 626-630 (1972)

37. The On-Line Encyclopedia of Integer Sequences, Sequence A003433. http://www.research.att.com/~njas/sequences/A003433 (2010). Accessed 01 March 2010.

38. Golomb, S. W. and Baumert, L. D.: The search for Hadamard matrices. Amer. Math. Monthly. 70, 12-17 (1963)

39. Seberry, J. and Yamada, M.: Hadamard Matrices, Sequences, and Block Designs. In Dinitz, J. F. and Stinson, D. R. (eds.) Contemporary Design Theory: A Collection of Surveys. Wiley-Interscience (1992)

40. Williamson, J.: Determinants whose elements are 0 and 1. Amer. Math. Monthly. 53, 427–434 (1946)

41. Živković, M.: Classification of small (0,1) matrices. Linear Algebra Appl. 414, 310–346 (2006)

42. The On-Line Encyclopedia of Integer Sequences, Sequence A003432. http://www.research.att.com/~njas/sequences/A003432 (2010). Accessed 01 March 2010.


BIOGRAPHICAL SKETCH

Mohammad Khalil graduated from Cairo University at Fayoum (CUF), Egypt, with

a B.Sc. degree in Industrial Engineering in May 2004. He then worked as a teaching

assistant at the Industrial Engineering Department at CUF. He also held a consulting

position in supply chain management and in quality, environmental, and safety management

systems.

After graduation, Mohammad joined the M.Sc. program at Cairo University in

Mechanical Design and Production Engineering. In 2008, Mohammad was awarded a

Fulbright fellowship to pursue his M.Sc. at the University of Florida. He decided to

pursue his M.Sc. in Industrial and Systems Engineering.

Mohammad is very interested in Operations Research, especially integer

programming. He plans to continue to a Ph.D. in the same field. His Ph.D. dissertation will

focus on logistics or scheduling problems. His objective is to pursue an academic career

in Operations Research.