Journal of Combinatorial Optimization, 10, 239–260, 2005. © 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands.

Sensitivity of the Optimum to Perturbations of the Profit or Weight of an Item in the Binary Knapsack Problem

MHAND HIFI, HEDI MHALLA, SLIM SADFI
LaRIA, Université de Picardie Jules Verne, 5 rue du Moulin Neuf, 80000 Amiens, France

Received February 28, 2003; Revised June 7, 2004; Accepted May 13, 2005

Abstract. In the binary single-constraint Knapsack Problem, denoted KP, we are given a knapsack of fixed capacity $c$ and a set of $n$ items. Each item $j$, $j = 1, \ldots, n$, has an associated size or weight $w_j$ and a profit $p_j$. The goal is to determine whether or not item $j$, $j = 1, \ldots, n$, should be included in the knapsack, the objective being to maximize the total profit without exceeding the capacity $c$ of the knapsack. In this paper, we study the sensitivity of the optimum of the KP to perturbations of either the profit or the weight of an item. We give approximate and exact interval limits for both cases (profit and weight) and propose several polynomial time algorithms able to reach these interval limits. The performance of the proposed algorithms is evaluated on a large number of problem instances.

Keywords: combinatorial optimization, knapsacks, optimality, sensitivity analysis

1. Introduction

The classical binary Knapsack Problem, denoted KP, consists of packing a subset of $n$ given items in a knapsack of capacity $c$. Each item has a profit $p_j$ and a weight $w_j$, for $j = 1, \ldots, n$. The objective is to maximize the profit yielded by the subset of packed items without exceeding the capacity $c$ of the knapsack. Let $x_j$ be a binary decision variable with $x_j = 1$ if the $j$-th item is selected, and $x_j = 0$ otherwise. The KP can be formulated as a 0-1 integer linear program:

$$\mathrm{KP} = \max\Big\{ \sum_{j=1}^{n} p_j x_j \ :\ \sum_{j=1}^{n} w_j x_j \le c,\ \ x_j \in \{0,1\},\ j = 1, \ldots, n \Big\}.$$

The KP is an NP-hard combinatorial optimization problem with a wide range of applications (Gilmore and Gomory, 1966; Martello and Toth, 1990; Morabito and Arenales, 1995). It often arises as a component of more complex combinatorial optimization problems: its induced structure in complex problems allows the computation of upper bounds and the design of heuristic and exact methods for these problems. For instance, in cutting stock problems, where a lot of work has been done since the early 1960s (Gilmore and Gomory, 1966), knapsack problems account for 60–80% of the total computing time (Hifi, 2001; Valério de Carvalho and Rodrigues, 1995).
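For concreteness, here is a minimal sketch (our own illustration, not code from the paper; `solve_kp` is a hypothetical name) of the textbook $O(nc)$ dynamic program that solves this 0-1 program exactly and recovers one optimal vector; the sensitivity analysis below starts from such an optimal value and vector:

```python
def solve_kp(items, c):
    """0-1 knapsack by dynamic programming.
    items: list of (profit, weight) pairs; c: knapsack capacity.
    Returns (optimal value, one optimal 0/1 vector)."""
    n = len(items)
    # z[j][d]: best profit using the first j items with capacity d
    z = [[0] * (c + 1) for _ in range(n + 1)]
    for j, (p, w) in enumerate(items, start=1):
        for d in range(c + 1):
            z[j][d] = z[j - 1][d]                            # item j left out
            if w <= d:
                z[j][d] = max(z[j][d], z[j - 1][d - w] + p)  # item j packed
    # backtrack to recover one optimal solution vector
    x, d = [0] * n, c
    for j in range(n, 0, -1):
        if z[j][d] != z[j - 1][d]:     # value changed, so item j was packed
            x[j - 1] = 1
            d -= items[j - 1][1]
    return z[n][c], x
```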


The knapsack problem has been thoroughly studied in the last few decades, and different solution approaches, exact and approximate, have been proposed in the literature. These include tree search, branch and bound, dynamic programming, and hybrid approaches. Among the most successful branch-and-bound implementations are the algorithms of Horowitz and Sahni (1974) and Martello and Toth (1977), which are based upon depth-first search strategies to limit the search space. To solve some large-scale problem instances, Balas and Zemel (1980) proposed "guessing" the optimal solution values of several decision variables, and focusing the branching strategy on the most interesting variables. The subset of these selected items, called the core of the problem, is then solved by applying a depth-first branch-and-bound algorithm. Other effective algorithms based on a core problem were developed in Fayard and Plateau (1982), Martello and Toth (1988), Martello et al. (2000), Pisinger and Toth (1998), Pisinger (1999), and Sadfi (1999). A hybrid approach, combining dynamic programming with strong bounds, was proposed in Martello et al. (1999).

Previous papers have investigated approaches to the general case where the number of constraints is not limited to one. This problem, called the multidimensional (or multi-constraint) knapsack problem, is reviewed in Chu and Beasley (1998) and Pisinger (1999). The bidimensional knapsack problem, a special case where the number of constraints is limited to two knapsack constraints, is investigated in Freville and Plateau (1997).

Most problems assume that the weight and profit parameters are deterministic constants. However, in several real industrial applications, these parameters are unknown and their declared values are very coarse approximations. In such cases, finding the optimal solution is not sufficient: a decision maker needs to know the extent of the validity of this solution and its sensitivity to perturbations of each parameter of the problem, in particular to perturbations of the profit and weight coefficients. Of course, the decision maker would like to have such information handy, i.e., without any further computation or resolution of a new problem. Indeed, previous research shows that solving a knapsack problem is computationally expensive; as such, solving knapsack problems should be avoided whenever a viable alternative is available. This is particularly true when only a subset of the problem parameters change.

In this paper, we analyze the sensitivity of the optimal solution of a binary knapsack problem to perturbations of the profit or weight coefficient of a selected item, and we subsequently establish sensitivity intervals for the profit and weight parameters. The sensitivity interval of a parameter denotes the range where the parameter can vary without affecting the structure of the optimal solution. Formally, suppose that $x^* = (x_1^*, \ldots, x_s^*, \ldots, x_n^*)$ is an optimal solution for KP. Let KP′ and KP′′ be two binary knapsack problems. KP′ is obtained by varying the $s$-th profit $p_s$ in KP, i.e., by setting $p_s$ equal to $p_s + \Delta p_s$, where $\Delta p_s$, the perturbation of the profit of item $s$, is an integer. KP′′ is obtained by perturbing the weight $w_s$ of item $s$ in KP, i.e., by setting $w_s$ equal to $w_s + \Delta w_s$, where $\Delta w_s$, the perturbation of the weight of item $s$, is an integer. More specifically, we determine:

1. whether $x^*$ remains a valid optimal solution for KP′ and KP′′, and
2. the sensitivity intervals of $p_s$ and $w_s$.

This paper is organized as follows. In Section 2, we analyze the sensitivity of the optimal solution of KP to the perturbation of the profit of a selected item. In Section 3, we analyze the sensitivity of the optimum to the perturbation of the weight of an item. In Section 4, we test the performance of the results proven in the previous two sections. Finally, in Section 5, we summarize the main results obtained in this paper and outline some potential extensions.

2. Sensitivity of the optimum to a perturbation of $p_s$

Herein, we discuss the effect of the perturbation of the profit $p_s$ of an item $s$, $s = 1, \ldots, n$, on an optimal solution of KP. In fact, we analyze the extent of the validity of an optimal solution of KP for the new problem KP′ given as follows:

$$\mathrm{KP}' = \max\Big\{ \sum_{\substack{j=1 \\ j \ne s}}^{n} p_j x_j + (p_s + \Delta p_s)\, x_s \ :\ \sum_{\substack{j=1 \\ j \ne s}}^{n} w_j x_j + w_s x_s \le c,\ \ x_j \in \{0,1\},\ j = 1, \ldots, n \Big\}.$$

2.1. Some results for $\Delta p_s$

We first introduce the following lemma, which relates the feasible regions of both problems and the values of a feasible solution of KP and KP′.

Lemma 2.1.

(i) Both problems KP and KP′ have the same set of feasible solutions.
(ii) If $x = (x_1, \ldots, x_s, \ldots, x_n)$ is a feasible solution for both KP and KP′, and $Z(x)$ (resp. $Z'(x)$) is its solution value in KP (resp. KP′), then $Z'(x) = Z(x) + \Delta p_s x_s$.

Proof:

(i) Every item has the same weight in both KP and KP′, and the knapsack capacity $c$ remains unchanged in both problems. It is therefore obvious that both KP and KP′ have the same set of feasible solutions.

(ii) Let $x$ be a feasible solution of KP and KP′, and let $Z(x)$ (resp. $Z'(x)$) be its solution value in KP (resp. KP′). Then $Z(x) = \sum_{j=1}^{n} p_j x_j$ and

$$Z'(x) = \sum_{j=1}^{n} p_j x_j + \Delta p_s x_s = Z(x) + \Delta p_s x_s. \ \Box$$

We use the results of Lemma 2.1 to construct the limits of the sensitivity interval of $p_s$. Since both problems have the same set of feasible solutions, the limits of this interval depend only on the extent of the validity of the objective function of KP′ when $x^*$ is an optimal solution. That is, these limits will simply guarantee that $Z'(x) \le Z'(x^*)$ for every feasible solution $x$.

To determine these limits, we consider the two possible cases: (i) the profit of item $s$ increases (i.e., $\Delta p_s \ge 0$), and (ii) the profit of item $s$ decreases (i.e., $\Delta p_s \le 0$). The following theorem points out some results concerning the new solution value in both cases.

Theorem 2.1. If $x^*$ is an optimal solution for KP and either (i) $\Delta p_s \ge 0$ and $x_s^* = 1$, or (ii) $\Delta p_s \le 0$ and $x_s^* = 0$, then $x^*$ is an optimal solution for KP′.

Proof: First, let us suppose that condition (i) holds and prove that $x^*$ remains an optimal solution. Let $x = (x_1, \ldots, x_s, \ldots, x_n)$ be a feasible solution of KP. It is easy to see that, for $x_s^* = 1$, $x_s \le x_s^*$. Since $\Delta p_s \ge 0$, then $0 \le \Delta p_s x_s \le \Delta p_s x_s^*$.

Since $x$ is a feasible solution while $x^*$ is an optimal solution (with value $Z$), we have $Z(x) \le Z$. Subsequently,

$$Z(x) + \Delta p_s x_s \le Z + \Delta p_s x_s^*,$$

which is equivalent to $Z'(x) \le Z'(x^*)$. Hence, $x^*$ is an optimal solution for KP′.

Second, let us suppose that condition (ii) holds and prove that $x^*$ remains an optimal solution. Let $x = (x_1, \ldots, x_s, \ldots, x_n)$ be a feasible solution. Since $\Delta p_s \le 0$, then $\Delta p_s x_s \le 0$. Therefore,

$$Z(x) + \Delta p_s x_s \le Z. \qquad (1)$$

Since $x_s^* = 0$, we get $Z'(x^*) = Z + \Delta p_s x_s^* = Z$. Equation (1) is thus equivalent to

$$Z'(x) \le Z'(x^*).$$

Hence, $x^*$ is an optimal solution for KP′. $\Box$

Applying Theorem 2.1, we infer that $x^*$ remains optimal for KP′ if $x_s^* = 1$ (resp. $x_s^* = 0$) and $p_s$ increases (resp. decreases) without limit. Using Lemma 2.1, we can affirm that the optimal solution value of KP′ equals (i) $Z + \Delta p_s$ if $x_s^*$ is fixed to 1 in KP, and (ii) $Z$ if $x_s^*$ is fixed to 0 in KP.

Therefore, we need to consider the two remaining cases: (iii) $x_s^* = 1$ while $p_s$ decreases, and (iv) $x_s^* = 0$ while $p_s$ increases. Prior to introducing the two theorems that address these cases, we introduce the following notation. Let $F$ denote the set of feasible solutions of KP, let $F_0^s \subseteq F$ be such that $F_0^s = \{x \in F : x_s = 0\}$, and let $F_1^s \subseteq F$ be such that $F_1^s = \{x \in F : x_s = 1\}$.

Theorem 2.2. If $x^*$ is an optimal solution for KP, $\Delta p_s \le 0$, and $x_s^* = 1$, then the following assertions are equivalent:

(i) $Z - |\Delta p_s| \ge Z(w^0)$, where $w^0 \in F_0^s$ and $Z(w^0) \ge Z(x)$ for all $x \in F_0^s$.
(ii) $x^*$ is an optimal solution of KP′.

Proof: First, let us prove that statement (i) implies statement (ii). Let $x \in F$ be a feasible solution for KP. Therefore, $Z(x) \le Z(x^*)$, where $x^*$ is an optimal solution for KP. Adding $\Delta p_s x_s$ to both sides of the last inequality yields $Z(x) + \Delta p_s x_s \le Z(x^*) + \Delta p_s x_s$. According to Lemma 2.1,

$$Z(x) + \Delta p_s x_s = Z'(x). \qquad (2)$$

Therefore, $Z'(x) \le Z(x^*) + \Delta p_s x_s$.

The variable $x_s$ is binary. When $x_s = 1$, we have $\Delta p_s x_s = \Delta p_s x_s^*$ (recall that $x_s^* = 1$), so $Z'(x) \le Z(x^*) + \Delta p_s x_s^*$. But $Z(x^*) + \Delta p_s x_s^* = Z'(x^*)$; therefore, $Z'(x) \le Z'(x^*)$.

When $x_s = 0$, $x \in F_0^s$ and $Z(x) \le Z(w^0)$. In this case ($x_s = 0$), Eq. (2) reduces to $Z'(x) = Z(x)$. Therefore,

$$Z'(x) \le Z(w^0). \qquad (3)$$

Combining statement (i) with Eq. (3) implies that $Z'(x) \le Z(x^*) - |\Delta p_s|$. Since $\Delta p_s \le 0$ and $x_s^* = 1$, we have $Z(x^*) - |\Delta p_s| = Z(x^*) + \Delta p_s = Z(x^*) + \Delta p_s x_s^* = Z'(x^*)$. Hence $Z'(x) \le Z'(x^*)$, and $x^*$ is an optimal solution for KP′.

Having proven the first implication, we now prove that statement (ii) implies (i). Suppose that $x^*$ is an optimal solution for KP′. Then,

$$\forall x \in F_0^s: \quad Z'(x^*) \ge Z'(x) \iff Z + \Delta p_s x_s^* \ge Z(x) + \Delta p_s x_s. \qquad (4)$$

Substituting $x_s^* = 1$ and $x_s = 0$ in Eq. (4), we get $Z + \Delta p_s \ge Z(x)$. Since $\Delta p_s \le 0$, then

$$Z - |\Delta p_s| \ge Z(x). \qquad (5)$$

Since Eq. (5) holds for every $x \in F_0^s$, it holds in particular for the feasible solution $w^0$; that is, $Z - |\Delta p_s| \ge Z(w^0)$. $\Box$

Theorem 2.2 states that $x^*$ remains optimal when $x_s^* = 1$ and $p_s$ decreases as long as it (i.e., $x^*$) satisfies the inequality $Z - |\Delta p_s| \ge Z(w^0)$; differently stated, as long as $|\Delta p_s| \le Z - Z(w^0)$. It follows that $\Delta p_s$ can vary in the interval $[Z(w^0) - Z, 0]$ without affecting the optimal solution (recall that $\Delta p_s \le 0$ and $Z(w^0) \le Z$). Thus, $p_s + Z(w^0) - Z$ is the limit to which we can decrease $p_s$ without affecting the optimal solution; it is the lower bound of the perturbation of $p_s$. To compute this lower bound, we only need to determine $Z(w^0)$, the value of the best feasible solution of KP when $x_s = 0$.

The following result determines the limit values for $p_s$ when $p_s$ increases and $x_s^*$ is fixed to zero.

Theorem 2.3. If $x^*$ is an optimal solution for KP, $\Delta p_s \ge 0$, and $x_s^* = 0$, then the following two assertions are equivalent:

(i) $Z - |\Delta p_s| \ge Z(w^1)$, where $w^1 \in F_1^s$ and $Z(w^1) \ge Z(x)$ for all $x \in F_1^s$.
(ii) $x^*$ is an optimal solution of KP′.

Proof: First, we prove that statement (i) implies statement (ii). Let $x \in F$ be a feasible solution. Lemma 2.1 implies that

$$Z'(x) = Z(x) + \Delta p_s x_s. \qquad (6)$$

Recall that $x_s$ is a binary variable. When $x_s = 1$, $x \in F_1^s$ and $Z(x) \le Z(w^1)$; thus, $Z(x) + \Delta p_s x_s \le Z(w^1) + \Delta p_s x_s$. Combining this result with Eq. (6), we get

$$Z'(x) \le Z(w^1) + \Delta p_s x_s. \qquad (7)$$

Condition (i) states that $Z(w^1) \le Z - |\Delta p_s|$; therefore, Eq. (7) becomes

$$Z'(x) \le Z - |\Delta p_s| + \Delta p_s x_s. \qquad (8)$$

Since $x_s = 1$ and $\Delta p_s \ge 0$, Eq. (8) reduces to $Z'(x) \le Z$. Since $x_s^* = 0$, we have $Z'(x^*) = Z$; therefore, $Z'(x) \le Z'(x^*)$.

If, on the other hand, $x_s = 0$, then $\Delta p_s x_s = 0$ and Eq. (6) reduces to $Z'(x) = Z(x)$. Given that $x^*$ is an optimal solution for KP, $Z(x) \le Z(x^*)$, and since $x_s^* = 0$, $Z(x^*) = Z'(x^*)$. This yields

$$Z'(x) \le Z'(x^*),$$

and $x^*$ is an optimal solution for KP′.

Having proven that statement (i) implies (ii), we now prove that statement (ii) implies (i). Suppose that $x^*$ is an optimal solution for KP′, and let $x = (x_1, \ldots, x_s, \ldots, x_n) \in F_1^s$. Then,

$$Z'(x^*) \ge Z'(x) \ \Rightarrow\ Z + \Delta p_s x_s^* \ge Z(x) + \Delta p_s x_s. \qquad (9)$$

Since $x_s^* = 0$, $x_s = 1$, and $\Delta p_s \ge 0$, Eq. (9) reduces to

$$Z \ge Z(x) + \Delta p_s \ \Rightarrow\ Z - |\Delta p_s| \ge Z(x).$$

The inequality $Z - |\Delta p_s| \ge Z(x)$ is true for all $x \in F_1^s$; in particular for $w^1$, where $w^1 \in F_1^s$ and $Z(w^1) \ge Z(x)$ for all $x \in F_1^s$. That is, $Z - |\Delta p_s| \ge Z(w^1)$. $\Box$

Theorem 2.3 simply states that $x^*$ remains optimal when $x_s^* = 0$ and $p_s$ increases, as long as it (i.e., $x^*$) satisfies

$$Z - |\Delta p_s| \ge Z(w^1) \iff |\Delta p_s| \le Z - Z(w^1).$$

It follows that $\Delta p_s$ can vary in the interval $[0, Z - Z(w^1)]$ without affecting the optimal solution. Thus, $p_s + Z - Z(w^1)$ is the limit to which we can increase $p_s$ without affecting the optimal solution; it is the upper bound of the perturbation of $p_s$. To compute this upper bound, we only need to determine $Z(w^1)$, the value of the best feasible solution of KP when $x_s$ is fixed to 1.
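Putting Theorems 2.1–2.3 together, the exact range of values that $p_s$ may take without disturbing the optimum can be written compactly as follows (this is only a restatement of the results above, using the positive-integer-profit convention for the lower limit):

```latex
% Exact sensitivity range of p_s (restating Theorems 2.1--2.3):
%   x_s^* = 1: p_s may grow freely, and shrink by at most Z - Z(w^0);
%   x_s^* = 0: p_s may shrink freely (down to 1), and grow by at most Z - Z(w^1).
\[
  p_s \in
  \begin{cases}
    \big[\, p_s + Z(w^0) - Z,\ +\infty \,\big) & \text{if } x_s^* = 1,\\[4pt]
    \big[\, 1,\ p_s + Z - Z(w^1) \,\big]       & \text{if } x_s^* = 0.
  \end{cases}
\]
```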

2.2. An algorithm for determining the interval limits for $\Delta p_s$

To determine the sensitivity interval limits, we distinguish the four possible combinations: (i) $\Delta p_s \ge 0$ and $x_s^* = 1$; (ii) $\Delta p_s \le 0$ and $x_s^* = 0$; (iii) $\Delta p_s \le 0$ and $x_s^* = 1$; and (iv) $\Delta p_s \ge 0$ and $x_s^* = 0$.

The first two cases are directly solved with Theorem 2.1, while the last two cases ((iii) and (iv)) are solved using the proposed algorithm (denoted Algo 1). For these last cases, Algo 1 estimates the limits of the sensitivity interval.

Let $IO_s = [IO_s^-, IO_s^+]$ be the optimal interval limits and $I_s = [I_s^-, I_s^+]$ the estimated interval limits given by Algo 1. The estimated interval is initialized to the whole set of positive integers; that is, $I_s^- = 1$ and $I_s^+ = +\infty$, where $I_s^-$ (resp. $I_s^+$) is the minimal (resp. maximal) value of $p_s$ that guarantees that the optimal solution remains unchanged. The limits of the interval $I_s$ are later updated using the results of Theorems 2.2 and 2.3 for cases (iii) and (iv), respectively. In what follows, we simplify the notation by setting $I_s = IO_s$ for cases (i) and (ii).

Now, for both cases ((iii) and (iv)), the estimated interval limits are determined as follows. When $\Delta p_s \le 0$ and $x_s^* = 1$ (i.e., for case (iii)), Theorem 2.2 implies that $Z - |\Delta p_s| \ge Z(w^0)$, where $w^0 \in F_0^s$ and $Z(w^0) \ge Z(x)$ for all $x \in F_0^s$. A sufficient condition is obtained by replacing $Z(w^0)$ with an upper bound:

$$Z - |\Delta p_s| \ge U(\mathrm{KP}\setminus\{s\}), \qquad (10)$$

where $U(\mathrm{KP}\setminus\{s\})$ is an upper bound for KP with capacity $c$ and $x_s = 0$, denoted $\mathrm{KP}\setminus\{s\}$. Rearranging the terms of Eq. (10) yields

$$-|\Delta p_s| \ge -Z + U(\mathrm{KP}\setminus\{s\}) \iff \Delta p_s \ge U(\mathrm{KP}\setminus\{s\}) - Z \quad (\text{since } -|\Delta p_s| = \Delta p_s \text{ when } \Delta p_s \le 0).$$

Hence, $I_s^- = p_s + U(\mathrm{KP}\setminus\{s\}) - Z$.

When $\Delta p_s \ge 0$ and $x_s^* = 0$ (i.e., for case (iv)), Theorem 2.3 implies that $Z - |\Delta p_s| \ge Z(w^1)$, where $w^1 \in F_1^s$ and $Z(w^1) \ge Z(x)$ for all $x \in F_1^s$. In particular,

$$Z - |\Delta p_s| \ge U(\mathrm{KP}[c - w_s]\setminus\{s\}) + p_s, \qquad (11)$$

where $U(\mathrm{KP}[c - w_s]\setminus\{s\})$ is an upper bound for KP with capacity $c - w_s$ and $x_s = 0$, denoted $\mathrm{KP}[c - w_s]\setminus\{s\}$. Rearranging the terms of Eq. (11), and since $\Delta p_s \ge 0$ (so that $|\Delta p_s| = \Delta p_s$), we get

$$\Delta p_s \le Z - U(\mathrm{KP}[c - w_s]\setminus\{s\}) - p_s.$$

Hence, $I_s^+ = p_s + \Delta p_s^{\max} = Z - U(\mathrm{KP}[c - w_s]\setminus\{s\})$.

[Figure: Algo 1, used to compute $I_s$, the sensitivity interval of the profit of item $s$.]

Note that the optimal values $Z(w^0)$ and $Z(w^1)$ of $\mathrm{KP}\setminus\{s\}$ and $\mathrm{KP}[c - w_s]\setminus\{s\}$, respectively, are unknown. Algo 1 avoids computing the optimal solutions of both problems by using upper bounds of these two problems, namely $U(\mathrm{KP}\setminus\{s\})$ and $U(\mathrm{KP}[c - w_s]\setminus\{s\})$. These upper bounds can be computed using any of the approaches proposed in the literature; herein, we compute both bounds using the approach of Dantzig (1957). Algo 1 refers to this constructive approach as Compute-Bound(), which generates an upper bound for any input problem KP.

For a given item $s$, Algo 1 calls the procedure Compute-Bound() only once. In addition, Compute-Bound() is polynomial (Dantzig, 1957). Therefore, Algo 1 is a polynomial algorithm.
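The following sketch illustrates Algo 1 with Dantzig's bound (our illustration: `dantzig_bound` plays the role of Compute-Bound(), and `profit_interval`, `z`, `x` are hypothetical names; the optimal value and vector are assumed to come from any exact solver, e.g., the dynamic program sketched in the introduction):

```python
from math import inf

def dantzig_bound(items, capacity):
    """Dantzig's (1957) LP-relaxation bound: pack items greedily by
    non-increasing profit/weight ratio, then add the fractional part
    of the first item that no longer fits (the critical item)."""
    if capacity <= 0:
        return 0.0
    value = 0.0
    for p, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= capacity:
            value, capacity = value + p, capacity - w
        else:
            return value + p * capacity / w   # fractional critical item
    return value

def profit_interval(items, c, s, z, x):
    """Estimated sensitivity interval I_s = [lo, hi] for the profit p_s,
    following the four cases above; z and x are the optimal value and
    0/1 vector of KP."""
    p_s, w_s = items[s]
    rest = [it for j, it in enumerate(items) if j != s]   # items of KP \ {s}
    lo, hi = 1, inf                                       # initialization
    if x[s] == 1:   # case (iii): I_s^- = p_s + U(KP \ {s}) - Z
        lo = max(1, p_s + dantzig_bound(rest, c) - z)
    else:           # case (iv): I_s^+ = Z - U(KP[c - w_s] \ {s})
        hi = z - dantzig_bound(rest, c - w_s)
    return lo, hi
```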

3. Sensitivity of the optimum to a perturbation of $w_s$

In this section, we analyze the effects of the perturbation of the weight of an item $s \in \{1, \ldots, n\}$ on the optimal solution of KP. When $w_s$ is replaced by $w_s + \Delta w_s$, KP is changed into the following knapsack problem KP′′:

$$\mathrm{KP}'' = \max\Big\{ \sum_{\substack{j=1 \\ j \ne s}}^{n} p_j x_j + p_s x_s \ :\ \sum_{\substack{j=1 \\ j \ne s}}^{n} w_j x_j + (w_s + \Delta w_s)\, x_s \le c,\ \ x_j \in \{0,1\},\ j = 1, \ldots, n \Big\}.$$


Consider an optimal solution $x^*$ of KP such that $\bar{c}(y) \le \bar{c}(x^*)$ for every optimal solution $y$, where $\bar{c}(x) = c - \sum_{j=1}^{n} w_j x_j$ denotes the residual capacity of a feasible solution $x = (x_1, \ldots, x_s, \ldots, x_n)$. Differently stated, if there are multiple optimal solutions, $x^*$ is the solution with the greatest residual capacity. If $x^*$ is an optimal solution of KP, then its value is

$$Z(\mathrm{KP}) = \max\{Z(\mathrm{KP}\setminus\{s\}),\ Z(\mathrm{KP}[c - w_s]\setminus\{s\}) + p_s\}. \qquad (12)$$

When we substitute KP′′ for KP in Eq. (12), the solution value of KP′′ becomes

$$Z(\mathrm{KP}'') = \max\{Z(\mathrm{KP}''\setminus\{s\}),\ Z(\mathrm{KP}''[c - w_s - \Delta w_s]\setminus\{s\}) + p_s\}. \qquad (13)$$

3.1. Some results for $\Delta w_s$

Lemma 3.1. Let $s$ be a fixed item of both KP and KP′′. Then

$$Z(\mathrm{KP}\setminus\{s\}) = Z(\mathrm{KP}''\setminus\{s\}).$$

Proof: Recall that KP′′ is obtained from KP by setting $w_s$ equal to $w_s + \Delta w_s$. Both problems $\mathrm{KP}\setminus\{s\}$ and $\mathrm{KP}''\setminus\{s\}$ exclude item $s$ and are therefore equivalent; thus, they have the same optimal solution value. $\Box$

Herein, we distinguish two cases: when the weight of item $s$ increases, and when it decreases. In Section 3.2, we suppose that $\Delta w_s \ge 0$ and determine the exact sensitivity interval limits for a selected item $s$. In Section 3.3, we take the case $\Delta w_s \le 0$ and establish exact interval limits when $x_s^*$ is fixed to one, and approximate interval limits when $x_s^*$ is fixed to zero.

3.2. Increasing $w_s$

In this part, we suppose that $\Delta w_s \ge 0$ and determine the sensitivity interval of $w_s$; i.e., the interval where $w_s$ can vary while the optimal solution of KP remains valid for KP′′.

Proposition 3.1. Let $\Delta w_s \ge 0$ and let $x''$ be a feasible solution for KP′′. Then:

(i) $x''$ is a feasible solution for KP;
(ii) if $Z''(x'') = Z(\mathrm{KP})$, then $x''$ is an optimal solution for KP′′.

Proof:

(i) If $x''$ is a feasible solution of KP′′, then

$$\sum_{j=1}^{n} w_j x_j'' + x_s'' \Delta w_s \le c.$$

Since $\Delta w_s \ge 0$, it follows that $\sum_{j=1}^{n} w_j x_j'' \le c$; that is, $x''$ is a feasible solution of KP. Hence, $x''$ is a feasible solution for both KP′′ and KP.

(ii) According to point (i), every feasible solution of KP′′ is feasible for KP, so $Z(\mathrm{KP}'') \le Z(\mathrm{KP})$. Thus, a feasible solution of KP′′ whose value equals $Z(\mathrm{KP})$ is an optimal solution for KP′′. $\Box$

Point (i) of Proposition 3.1 implies that each feasible solution of KP′′ remains feasible for KP; in particular, the optimal solution of KP′′ is feasible for KP. Therefore, the optimal solution value of KP is an upper bound for KP′′; i.e., $Z(\mathrm{KP}'') \le Z(\mathrm{KP})$.

Now, according to point (ii) of Proposition 3.1, the limits of the sensitivity interval of $w_s$ coincide with the limits of the feasibility of $x^*$ in KP′′. These limits depend on the value of $x_s^*$; i.e., on whether $x_s^* = 0$ or $1$.

In what follows, we first explicitly determine the limits of the sensitivity interval of $w_s$ when $x_s^*$ is fixed to zero; second, we explicitly determine the limits of sensitivity of the optimal solution of KP when $x_s^*$ is fixed to one.

The upper limit of the sensitivity interval is given by Proposition 3.1: if $x^*$, the optimal solution of KP, is a feasible solution to KP′′, then it is also optimal for KP′′. Starting from this result, we extend the sensitivity interval until $x^*$ becomes infeasible for KP′′, and we then prove that this interval has the widest range.

Theorem 3.1. Let $\Delta w_s \ge 0$ and let $x^*$ be an optimal solution for KP. Then:

(i) if $x_s^* = 0$, then $x^*$ remains an optimal solution for KP′′;
(ii) if $x_s^* = 1$, then $x^*$ is an optimal solution for KP′′ if and only if $\Delta w_s \in [0, \bar{c}(x^*)]$.

Proof: Case (i): according to Proposition 3.1, we have $Z(\mathrm{KP}'') \le Z(\mathrm{KP})$. Since $x_s^* = 0$, $Z(\mathrm{KP}) = Z(\mathrm{KP}\setminus\{s\}) = Z(\mathrm{KP}''\setminus\{s\})$ by Lemma 3.1. Using Eq. (13), we deduce that

$$Z(\mathrm{KP}'') = Z(\mathrm{KP}''\setminus\{s\}) = Z(\mathrm{KP}\setminus\{s\}) = Z(\mathrm{KP}).$$

Case (ii): since $x^*$ is an optimal solution of KP,

$$\sum_{j=1}^{n} w_j x_j^* \le c. \qquad (14)$$

If $x^*$ is an optimal solution for KP′′, then it is necessarily feasible for KP′′; thus,

$$\sum_{j=1}^{n} w_j x_j^* + x_s^* \Delta w_s \le c. \qquad (15)$$

Equations (14) and (15) imply that $\Delta w_s x_s^* \le c - \sum_{j=1}^{n} w_j x_j^* = \bar{c}(x^*)$. Substituting $x_s^* = 1$ in the last inequality, we get $0 \le \Delta w_s \le \bar{c}(x^*)$.

Now, we prove that the above sensitivity interval has the widest range; that is, if $\Delta w_s > \bar{c}(x^*)$, then $x^*$ is no longer feasible for KP′′. Indeed, if $\Delta w_s > \bar{c}(x^*)$, then

$$\Delta w_s > c - \sum_{j=1}^{n} w_j x_j^* \iff \sum_{j=1}^{n} w_j x_j^* + \Delta w_s > c,$$

and since $x_s^* = 1$, $x^*$ violates the capacity constraint of KP′′. Moreover, we have already mentioned that if the optimal solution is not unique, we choose the solution with the largest residual capacity $\bar{c}$. Hence, the obtained interval necessarily has the widest range. $\Box$

Differently stated, when $\Delta w_s \ge 0$, $x^*$ is an optimal solution for KP′′ if and only if $\Delta w_s \le \bar{c}(x^*)$.

Case (i) of Theorem 3.1 indicates that both KP and KP′′ have the same optimal solution if $\Delta w_s \ge 0$ and $x_s^* = 0$; that is, the latter optimal solution remains optimal when $\Delta w_s$ varies in the interval $[1, +\infty[$. Case (ii) of Theorem 3.1 provides a sensitivity interval directly from the resolution of KP, regardless of the item $s$ being considered. In addition, finding this interval is straightforward: computing the residual capacity of the optimal solution of KP yields the sensitivity interval for every item $s$ of KP.
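As a small illustration of this observation (our own sketch; `weight_increase_limits` is a hypothetical name, and the optimal vector is assumed to come from an exact solver that breaks ties toward the largest residual capacity):

```python
from math import inf

def weight_increase_limits(items, c, x):
    """Theorem 3.1 in code: an item packed in the optimum (x_s = 1) tolerates
    any weight increase up to the residual capacity of the optimal solution;
    an unpacked item (x_s = 0) tolerates any weight increase at all."""
    residual = c - sum(w for (_, w), x_s in zip(items, x) if x_s == 1)
    return [residual if x_s == 1 else inf for x_s in x]
```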

3.3. Decreasing $w_s$

In this section, we consider the case where $\Delta w_s \le 0$, and we determine the interval where $w_s$ can vary without altering the optimal solution of KP′′.

Theorem 3.2. Let $\Delta w_s \le 0$ and let $x$ be a feasible solution for KP. Then:

(i) $x$ is a feasible solution for KP′′;
(ii) if $x_s^* = 1$ in an optimal solution of KP, then $x_s'' = 1$ in an optimal solution of KP′′.

Proof: Case (i): if $x$ is a feasible solution for KP and $\Delta w_s \le 0$, then

$$\sum_{j=1}^{n} w_j x_j \le c \quad \text{and} \quad \sum_{j=1}^{n} w_j x_j + x_s \Delta w_s \le c,$$

which implies that $x$ is a feasible solution for KP′′. Hence, $x$ is a feasible solution for both KP and KP′′, and the optimal solution value of KP is a valid lower bound for KP′′. It is noteworthy that the set of feasible solutions of KP is a subset of the set of feasible solutions of KP′′.

Case (ii): to prove this result, we show that if $\Delta w_s \le 0$ and $x_s^* = 1$, then the value of item $s$ remains unchanged in the optimal solution of both KP and KP′′. According to case (i), we have $Z(\mathrm{KP}) \le Z(\mathrm{KP}'')$. And since $x_s^* = 1$,

$$Z(\mathrm{KP}) = Z(\mathrm{KP}; x_s = 1) \ge Z(\mathrm{KP}; x_s = 0) = Z(\mathrm{KP}''; x_s = 0).$$

Moreover,

$$Z(\mathrm{KP}'') = \max\{Z(\mathrm{KP}''; x_s = 0), Z(\mathrm{KP}''; x_s = 1)\} \ \Rightarrow\ Z(\mathrm{KP}'') = Z(\mathrm{KP}''; x_s = 1) \ \Rightarrow\ x_s'' = 1. \ \Box$$

Case (ii) of Theorem 3.2 provides the value of item $s$, but gives no further information regarding the value of any other item of the solution. We therefore need to determine sufficient conditions that guarantee that every other item keeps the same value in the optimal solution of both KP and KP′′.

3.4. Two approaches for determining the sensitivity interval

To determine the sensitivity interval limits, we distinguish the two possible cases: (i) $w_s$ increases and (ii) $w_s$ decreases. The first case is directly solved with Theorem 3.1, while the second case is solved by applying the following algorithms. Indeed, in what follows, we distinguish two further cases: when the problem instance admits a unique optimal solution, and when it may admit multiple optimal solutions. For the first case, we can determine whether the problem instance admits a unique optimal solution when a dynamic programming procedure is applied to solve KP.

In order to determine sufficient conditions that guarantee that every other item keeps the same value in the optimal solution of both KP and KP′′, we use two different techniques depending on the number of optimal solutions of KP.

3.4.1. Unique optimal solution. To determine sufficient conditions that guarantee that every other item (different from $s$) keeps the same value in the optimal solution of both KP and KP′′, we use the variable elimination technique associated with Dantzig's upper bound $U$. In this case, we apply the following result due to Fayard and Plateau (1982).

Result 3.1. Let $s$ be an index such that $s \in \{1, \ldots, n\}$, and let $x^*$ be an optimal solution of KP. If $U(\mathrm{KP}[x_s = 1 - x_s^*]) < Z(\mathrm{KP})$, then $x_s$ is fixed to $x_s^*$ in each optimal solution.


The main idea consists in applying Result 3.1 to KP′′, where $s$ is the chosen item. Recall that if $x_s$ is fixed to one in KP, then we have established a result (Theorem 3.2) which shows that $x_s$ maintains its value in KP′′, and so it is not necessary to compute the interval limits $I_s^s$. Furthermore, if $x_s = 0$ in KP, then we need to compute the interval limits $I_s^s$. For the general case, the main principle of the approach can be described as follows:

(i) determine the interval limits $I_i^s$, $i = 1, \ldots, n$, for which $\Delta w_s$ can be changed while maintaining the value of item $i$ fixed to its value $x_i^*$ in the optimal solution of KP;
(ii) take the intersection of these intervals $I_i^s$, $i = 1, \ldots, n$, which yields the sensitivity interval, namely $I$, of the optimum of KP.

The first phase (i) is carried out as follows. Let $x^* = (x_1^*, \ldots, x_n^*)$ be an optimal solution for KP, and let $I_i^s$ denote the interval of perturbations $\Delta w_s$ for which $x_i$ is maintained equal to $x_i^*$ in any optimal solution of KP′′. Then, each interval $I_i^s$ is such that

$$\forall i \in \{1, \ldots, n\}: \quad \Delta w_s \in I_i^s \ \text{ and } \ U(\mathrm{KP}''[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\}) \le Z(\mathrm{KP}).$$

To determine each interval $I_i^s$, $i = 1, \ldots, n$, we compute the new upper bound using Dantzig's procedure. Remark that the upper bound $U(\mathrm{KP}''[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\})$ depends directly on the value of $\Delta w_s$. So, we calculate each of these intervals by applying the following two steps:

Step 1. Since $\Delta w_s \le 0$, then $U(\mathrm{KP}) \le U(\mathrm{KP}'')$, which implies that

$$U(\mathrm{KP}[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\}) \le U(\mathrm{KP}''[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\}).$$

Step 2. We search for the smallest value $\Delta w_s \le 0$ satisfying

$$U(\mathrm{KP}[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\}) = U(\mathrm{KP}''[c - w_s - \Delta w_s - (1 - x_i^*)\, w_i]\setminus\{s\}).$$

This value is reasonably easy to compute. Indeed, let $r$ be the critical item of Dantzig's upper bound; all items $i \ge r$ are not used in computing the bound. For $s > r$, the largest value of $\Delta w_s$ is given as follows:

$$\frac{p_{s-1}}{w_{s-1}} > \frac{p_s}{w_s + \Delta w_s} \iff \Delta w_s > \frac{w_{s-1}\, p_s}{p_{s-1}} - w_s.$$


In our study, we provided a straightforward procedure that takes advantage of the results of the dynamic programming approach at its different stages. This procedure is applicable only when the optimal solution of KP is unique.

However, since knapsack problems, like most integer programming problems, generally admit multiple optimal solutions, we propose a second algorithm that can be used in either case: unique or multiple optimal solutions.

3.4.2. Single or multiple optimal solutions. Recall that Theorem 3.2 provides the value of the $s$-th item, but gives no further information regarding the value of any other item of the solution. We are searching for sufficient conditions that guarantee that every other item keeps the same value in the optimal solution of both KP and KP′′. In what follows, we establish some results in order to guarantee optimality in KP′′.

Theorem 3.3. Let $s$ be a fixed item of both KP and KP′′. Then

$$Z(\mathrm{KP}''[x_s = 1]) \le Z(\mathrm{KP}[c - \Delta w_s]),$$

where $\Delta w_s$ is the perturbation of the weight of the $s$-th item.

Proof: Let $x''$ be a feasible solution of KP′′ such that $x'' \in F_1^s(\mathrm{KP}'')$. Then

$$\sum_{j=1}^{s-1} w_j x_j'' + (w_s + \Delta w_s)\, x_s'' + \sum_{j=s+1}^{n} w_j x_j'' \le c.$$

This is equivalent to $\sum_{j=1}^{n} w_j x_j'' + \Delta w_s x_s'' \le c$, which, in turn (since $x_s'' = 1$), can be reduced to

$$\sum_{j=1}^{n} w_j x_j'' \le c - \Delta w_s. \qquad (16)$$

Equation (16) implies that every feasible solution $x''$ of KP′′ with $x'' \in F_1^s(\mathrm{KP}'')$ is a feasible solution of $\mathrm{KP}[c - \Delta w_s]$. This, in turn, implies that

$$\forall x'' \in F_1^s(\mathrm{KP}''): \quad Z(x'') \le Z(\mathrm{KP}[c - \Delta w_s]). \qquad (17)$$

Equation (17) remains valid for the optimal solution of $(\mathrm{KP}''(c); x_s = 1)$. Subsequently, $Z(\mathrm{KP}''[x_s = 1]) \le Z(\mathrm{KP}[c - \Delta w_s])$. $\Box$

Theorem 3.3 provides an upper bound for the optimal solution of KP′′ with capacity $c$ and $x_s = 1$. Therefore, when $x_s'' = 1$, an upper bound for the optimal solution of KP′′ is at hand.

Theorem 3.4. Let $x^*$ denote an optimal solution of KP, let $s$ be a fixed item of both KP and KP′′, and suppose that $\Delta w_s \le 0$. Then:

(i) if $Z(\mathrm{KP}) = Z(\mathrm{KP}[c - \Delta w_s])$, then $Z(\mathrm{KP}'') = Z(\mathrm{KP})$;
(ii) if $Z(\mathrm{KP}) = Z(\mathrm{KP}[c - \Delta w_s])$, then $x^*$ is an optimal solution of KP′′.

Proof: Case (i): the optimal solution value of KP, as previously defined, is

$$Z(\mathrm{KP}) = \max\{Z(\mathrm{KP}[x_s = 0]), Z(\mathrm{KP}[x_s = 1])\} \ \Rightarrow\ Z(\mathrm{KP}) \ge Z(\mathrm{KP}[x_s = 0]). \qquad (18)$$

The optimal solution value of KP′′ is

$$Z(\mathrm{KP}'') = \max\{Z(\mathrm{KP}''[x_s = 0]), Z(\mathrm{KP}''[x_s = 1])\}. \qquad (19)$$

Lemma 3.1 states that $Z(\mathrm{KP}\setminus\{s\}) = Z(\mathrm{KP}''\setminus\{s\})$, which is equivalent to

$$Z(\mathrm{KP}''[x_s = 0]) = Z(\mathrm{KP}[x_s = 0]). \qquad (20)$$

Equations (18) and (20) imply that

$$Z(\mathrm{KP}''[x_s = 0]) \le Z(\mathrm{KP}). \qquad (21)$$

Theorem 3.3 gives $Z(\mathrm{KP}''[x_s = 1]) \le Z(\mathrm{KP}[c - \Delta w_s])$; under the hypothesis $Z(\mathrm{KP}[c - \Delta w_s]) = Z(\mathrm{KP})$, this means

$$Z(\mathrm{KP}''[x_s = 1]) \le Z(\mathrm{KP}). \qquad (22)$$

Combining Eqs. (19), (21) and (22), we deduce that

$$Z(\mathrm{KP}'') \le Z(\mathrm{KP}). \qquad (23)$$

Since $\Delta w_s \le 0$, case (i) of Theorem 3.2 implies that $Z(\mathrm{KP}) \le Z(\mathrm{KP}'')$. Combining this result with Eq. (23), we conclude that

$$Z(\mathrm{KP}) = Z(\mathrm{KP}[c - \Delta w_s]) \ \Rightarrow\ Z(\mathrm{KP}'') = Z(\mathrm{KP}).$$

Case (ii): since $\Delta w_s \le 0$, every feasible solution of KP is a feasible solution of KP′′. Thus, $x^*$, an optimal solution of KP, is a feasible solution of KP′′. It follows, when the hypothesis is satisfied, that

$$Z(\mathrm{KP}) = Z(\mathrm{KP}'').$$

Subsequently, $x^*$ is an optimal solution of KP′′. $\Box$


Theorem 3.4 means that, for every item $s \in \{1, \ldots, n\}$ of both KP and KP′′, if $\Delta w_s \le 0$ and $Z(\mathrm{KP}) = Z(\mathrm{KP}[c - \Delta w_s])$, then $x^*$ remains an optimal solution of KP′′. Thus, we simply need to compute the minimal value of $\Delta w_s$, denoted $\Delta_{\mathrm{Min}}$, for which the hypotheses of Theorem 3.4 are valid. This minimal value represents the lower bound of the sensitivity interval, and the procedure used to obtain it is independent of the item $s$ being considered.

The main principle of the proposed approach can be summarized as follows. When we solve KP using a dynamic programming procedure,

(i) we use the dynamic programming solutions of every stage, and proceed with the solution stages for capacities $c' \ge c$;
(ii) we stop the resolution process at the greatest value $c'$ realizing $Z(\mathrm{KP}) = Z(\mathrm{KP}[c'])$.

In this way, $\Delta_{\mathrm{Min}}$ is simply $c - c'$. Note that finding $\Delta_{\mathrm{Min}}$ is independent of the item $s$ whose weight is being perturbed; therefore, this sensitivity interval remains valid for all $s$, $s = 1, \ldots, n$. Since the computations required at the different stages of the dynamic programming for KP have already been performed, the additional computing time for this bound is not significant.
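The following sketch mirrors this procedure (our illustration; `dp_values`, `delta_min`, and the finite capacity horizon `slack` are assumptions, since the dynamic program must be extended some bounded amount beyond $c$):

```python
def dp_values(items, cmax):
    """Classic 0-1 knapsack DP over capacities: z[d] is the optimal
    profit Z(KP[d]) for every capacity d = 0..cmax."""
    z = [0] * (cmax + 1)
    for p, w in items:
        for d in range(cmax, w - 1, -1):
            z[d] = max(z[d], z[d - w] + p)
    return z

def delta_min(items, c, slack=200):
    """Delta_Min = c - c', where c' is the greatest capacity (up to the
    assumed horizon c + slack) with Z(KP[c']) = Z(KP). The result is
    non-positive; by Theorem 3.4, any weight decrease of magnitude at
    most |Delta_Min| leaves the optimum unchanged (the estimate is
    conservative if the true c' lies beyond the horizon)."""
    z = dp_values(items, c + slack)
    c_prime = c
    while c_prime < c + slack and z[c_prime + 1] == z[c]:
        c_prime += 1
    return c - c_prime
```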

In what follows, we show (i) how we can affirm that the obtained valid interval is the best attainable when $x_s^*$ is fixed to 1, and (ii) how we can improve the sensitivity interval limits when $x_s^* = 0$.

Theorem 3.5. Let $\Delta_{\mathrm{Min}}$ be the smallest negative value verifying

$$Z(\mathrm{KP}[c - \Delta_{\mathrm{Min}}]) = Z(\mathrm{KP}).$$

Consider that $x_s^* = 1$ and $Z(\mathrm{KP}[c - (\Delta_{\mathrm{Min}} - 1)]; x_s = 1) > Z(\mathrm{KP})$. Then $\Delta_{\mathrm{Min}}$ is the largest perturbation of the $s$-th weight that does not alter the optimal solution of KP.

Proof: Consider that $\Delta w_s = \Delta_{\mathrm{Min}} - 1$ and let $x$ be the optimal solution of $(\mathrm{KP}[c - \Delta w_s]; x_s = 1)$. Then

$$Z(\mathrm{KP}[c - \Delta w_s]; x_s = 1) > Z(\mathrm{KP}) \quad \text{and} \quad \sum_{j=1}^{n} w_j x_j \le c - \Delta w_s.$$

This, in turn, implies that (since $x_s = 1$)

$$\sum_{j=1}^{n} w_j x_j + \Delta w_s x_s \le c. \qquad (24)$$

Equation (24) implies that $x$ is a feasible solution of KP′′ and, when the hypotheses are verified, i.e., $Z(\mathrm{KP}[c - (\Delta_{\mathrm{Min}} - 1)]; x_s = 1) > Z(\mathrm{KP})$, its value exceeds $Z(\mathrm{KP})$; hence $x^*$ is no longer optimal for KP′′. $\Box$


Recall that, for all $s \in \{1, \ldots, n\}$, if $\Delta_{\mathrm{Min}} \le \Delta w_s \le 0$, then $x^*$ remains an optimal solution for KP′′. It follows that, if the item satisfies the hypotheses of Theorem 3.5, then the lower bound of the sensitivity interval is at hand.

For the case $x_s^* = 1$, $s \in \{1, \ldots, n\}$, Theorem 3.5 indicates that the best sensitivity interval is attainable for several items of the problem. In what follows, we describe the process which can improve the sensitivity interval when $x_s^*$ is fixed to zero.

In this case, if we consider that $x_s^* = 0$, then we have $Z(\mathrm{KP}) = Z(\mathrm{KP}; x_s = 0)$, which means that $Z(\mathrm{KP}; x_s = 1) \le Z(\mathrm{KP}; x_s = 0)$. We recall that

$$Z(\mathrm{KP}'') = \max\{Z(\mathrm{KP}''; x_s = 0), Z(\mathrm{KP}''; x_s = 1)\}. \qquad (25)$$

Since $Z(\mathrm{KP}''; x_s = 0) = Z(\mathrm{KP}; x_s = 0)$ and $Z(\mathrm{KP}''; x_s = 1) = Z(\mathrm{KP}''[c - w_s - \Delta w_s]\setminus\{s\}) + p_s = Z(\mathrm{KP}[c - w_s - \Delta w_s]\setminus\{s\}) + p_s$ (the items other than $s$ are unchanged), Eq. (25) can be rewritten as follows:

$$Z(\mathrm{KP}'') = \max\{Z(\mathrm{KP}),\ Z(\mathrm{KP}[c - w_s - \Delta w_s]\setminus\{s\}) + p_s\}. \qquad (26)$$

Consequently, $x^*$ remains an optimal solution for KP′′ as long as

$$Z(\mathrm{KP}) \ge Z(\mathrm{KP}[c - w_s - \Delta w_s]\setminus\{s\}) + p_s; \qquad (27)$$

we therefore try to compute the smallest value of $\Delta w_s$ for which Eq. (27) holds. However, instead of computing the exact value of the limits for the case $x_s^* = 0$, we estimate them as explained in the following.

When $\Delta w_s \le 0$ and $x_s^* = 0$, remark that $\mathrm{KP}[c - w_s - \Delta w_s]\setminus\{s\}$ is a subproblem of $\mathrm{KP}[c - w_s - \Delta w_s]$. This implies that

$$Z(\mathrm{KP}[c - w_s - \Delta w_s]\setminus\{s\}) \le Z(\mathrm{KP}[c - w_s - \Delta w_s]).$$

Now, if a dynamic programming procedure is used for solving $\mathrm{KP}(c)$, we necessarily have all the optimal solutions of the subproblems $\mathrm{KP}(c')$, where $0 \le c' \le c$. By considering the solution values of the different stages, we can compute $\bar{\Delta}$, the smallest value of $\Delta w_s$ realizing

$$Z(\mathrm{KP}) \ge Z(\mathrm{KP}[c - w_s - \Delta w_s]) + p_s.$$

This condition ensures that Eq. (27) is satisfied. Hence, the estimated limit $\Delta_s$ of the (negative) perturbation of $w_s$ is

$$\Delta_s = \min\{\bar{\Delta}, \Delta_{\mathrm{Min}}\}.$$
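Continuing the sketch above (reusing the hypothetical `dp_values` and `delta_min` and the same assumed horizon `slack`), the estimate $\Delta_s$ for an item with $x_s^* = 0$ can be obtained as follows:

```python
def weight_decrease_limit(items, c, s, x, slack=200):
    """Estimated largest weight decrease for item s when x[s] = 0:
    Delta_s = min(Delta_bar, Delta_Min), both non-positive."""
    assert x[s] == 0
    p_s, w_s = items[s]
    z = dp_values(items, c + slack)
    # Delta_bar: smallest dw <= 0 with Z(KP) >= Z(KP[c - w_s - dw]) + p_s;
    # the condition is monotone, so decrement dw while it keeps holding.
    dw = 0
    while dw - 1 >= -(slack + w_s) and z[c] >= z[c - w_s - (dw - 1)] + p_s:
        dw -= 1
    return min(dw, delta_min(items, c, slack))
```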


4. Computational results

The purpose of this section is twofold. First, we illustrate the search procedures introduced in Sections 2.2 and 3.4.2 via a small twenty-item example whose profits and weights are randomly generated between 1 and 100, and whose knapsack capacity $c$ is randomly taken from $[\frac{1}{4}\sum_{j=1}^{n} w_j, \frac{1}{2}\sum_{j=1}^{n} w_j]$. Second, we analyze the behavior of these procedures on large-sized problems. We focus on the perturbation of $p_s$ prior to investigating the experimental results for the perturbation of $w_s$.

4.1. Variation of $p_s$

We illustrate the procedure of Section 2.2 via a twenty-item example whose knapsack capacity equals 420 in this case. Using the aforementioned search procedure of Section 2.2, we compute the limits $I_s^-$ and $I_s^+$ of the sensitivity interval of each fixed item $s$, $s = 1, \ldots, n$. The items are considered in decreasing order of the profit-to-weight ratio; that is, $\frac{p_{s-1}}{w_{s-1}} > \frac{p_s}{w_s}$. Table 1 displays the data (Columns 1, 2 and 3) and the limits of the computed sensitivity intervals (Columns 4 and 5).

Analyzing Columns 4 and 5 of Table 1, we distinguish the items included in the knapsack ($x_s = 1$ in the optimal solution and $I_s^+ = +\infty$) from those not included ($x_s = 0$ in the optimal solution and $I_s^- = 1$). The analysis of the "non-trivial" limits, i.e., $I_s^-$ when $x_s = 1$ and $I_s^+$ when $x_s = 0$, is very interesting. The optimal solution is sensitive to the profit perturbation of only 3 among the 20 variables: items 6, 11, and 13. For item 6, $I_s^- = p_s = 30$, which implies that $p_6$ cannot be decreased without a change in the optimal solution; however, increasing $p_6$ does not affect the optimal solution (i.e., $I_s^+ = +\infty$). For item 11, $I_s^- = p_s = 33$, which simply implies that $p_{11}$ cannot be decreased without a change in the optimal solution. On the other hand, for item 13, $I_s^+ = p_s = 74$, which implies that $p_{13}$ cannot be increased without a change in the optimal solution; it can however be decreased to 1 since $I_s^- = 1$. The fact that all other sensitivity intervals are wide shows that the optimal solution of KP is very robust in this case. An extensive analysis of the results of this example is not meaningful, as the problem is a very small instance.

In the second example, we consider larger problems. We randomly generate ten instances of knapsack problems whose sizes vary between 1000 and 10000 in steps of 1000. For each problem, the profits and weights are randomly generated between 1 and 10000. The capacity $c$ varies in the interval $[\frac{1}{2}\sum_{j=1}^{n} w_j, \frac{2}{3}\sum_{j=1}^{n} w_j]$. For each problem of size $n$, we compute, as illustrated in Table 2 (Columns 2 and 3), the following entities:

• $\delta^-$: the average allowable negative deviation of the profits of the items included in the optimal solution; i.e., $\delta^- = \frac{1}{n_1}\sum_{s=1}^{n}(p_s - I_s^-)\, x_s$, where $n_1$ is the number of items included in the optimal solution.

• $\delta^+$: the average allowable positive deviation of the profits of the items not included in the optimal solution; i.e., $\delta^+ = \frac{1}{n - n_1}\sum_{s=1}^{n}(I_s^+ - p_s)(1 - x_s)$.


Table 1. Data and sensitivity intervals of the profit and the weight perturbations of a twenty-item example.

Item   Weight   Profit   I_s^-   I_s^+   Δ_s^- (computed)   Δ_s^- (exact)
 1        4       80      17      +∞           3                 3
 2        3       28      16      +∞           2                 2
 3       25       54      28      +∞          13                13
 4       25       81      37      +∞          13                13
 5       12       31      25      +∞          11                11
 6       17       30      30      +∞           5                 1
 7       24       39      36      +∞          12                12
 8       27       41      39      +∞          15                15
 9       51       68      61      +∞          39                39
10       65       83      74      +∞          53                53
11       30       33      33      +∞          18                18
12       91      100      92      +∞          79                79
13       76       74       1      74          64                64
14       44       41      38      +∞          32                22
15       70       47       1      52          47                47
16       69       38       1      46          33                 3
17       86       32       1      57          20                 3
18       62       16       1      41           3                 3
19       29        6       1      19           3                 3
20       40        8       1      26           3                 3

Table 2. Profit and weight perturbations of large knapsack problems.

n        δ⁻      δ⁺      Pr₁ (%)   Pr (%)
1000     3802    1508    79        94
2000     4044    1517    77        92
3000     3905    1443    76        89
4000     3934    1539    80        90
5000     3946    1454    77        90
6000     4016    1443    76        89
7000     3945    1501    76        90
8000     4010    1405    76        90
9000     3990    1431    77        89
10000    3935    1453    76        89


Columns 2 and 3 of Table 2 show that the average positive and negative deviations are independent of the size of the problem. In fact, the average negative deviation varies from 3802 to 4044 with a mean value of 3953, and the average positive deviation varies from 1405 to 1539 with a mean value of 1470. The average profit is 5000, since all profits are randomly drawn from the uniform distribution on [1, 10000]. As such, the items can on average have their profits increased by 29.4% or decreased by 79.06% of their values prior to altering the optimal solution. Note, however, that these values are only averages, as the sensitivity intervals start very wide, become very small around the critical item, and then widen again. Such wide sensitivity intervals show that the optimal solution of the knapsack problem is quite robust with respect to the item being considered.

This observation is further confirmed by the percentage of critical items, which remains very small and not significant. In fact, almost all of the items can have their profits perturbed while the optimal solution remains stable. The number of critical items increases as the size of the problem increases, but this is due to the increase of the problem size rather than to an increase of the percentage of critical items.

4.2. Variation of $w_s$

The purpose of this section is twofold. First, we illustrate the search procedure introduced in Section 3.4.2 via the same example used in Section 4.1. Second, we analyze the behavior of large-sized problems when the weight is perturbed. We consider that the problem instance can have a unique optimal solution or multiple ones.

We apply the procedure of Section 3.4.2 to the same twenty-item example whose knapsack capacity equals 420. Using the search procedure, we compute the largest negative deviation of the weight of item $s$, $s = 1, \ldots, n$; that is, $I_s^-$. Then, using a complete enumeration, we identify the corresponding exact value of this maximum negative deviation, $\Delta_s^-$. Table 1 (Columns 6 and 7) displays the computed and the exact lower limits of the sensitivity intervals. Table 1 does not report the value of $I_s^+$ since, according to Theorem 3.1, $I_s^+ = +\infty$ if $x_s = 0$ and $I_s^+ = \bar{c}(x^*)$ if $x_s = 1$, respectively. These values have been confirmed by the experimental results.

Column 6 of Table 1 shows that the computed lower bound of the negative deviation of the weight of item $s$, $s = 1, \ldots, n$, is very close to the exact bound (Column 7 of Table 1). Indeed, for 16 of the 20 items the two values coincide, which represents a percentage of 80% for this small example.

In the second example, we consider the same larger problems of Table 2. For each problem of size $n$, we compute, as illustrated in Table 2 (Columns 4 and 5), the following entities:

• $Pr_1$: the proportion of items for which the computed $\Delta w_s^-$ is equal to the exact value.

• $Pr$: the average relative deviation of the negative sensitivity interval, computed as follows: $Pr = 100 \times \frac{1}{n}\sum_{s=1}^{n} \frac{\Delta w_s^- \text{ (computed)}}{\Delta w_s^- \text{ (exact)}}$.

Table 2 (Columns 4 and 5) displays the obtained results and the averages compared to the exact values. We can observe that the results corresponding to the deviations are also independent of the size of the problem. Indeed, the proportion $Pr_1$ varies from 76% to 80% with a global average of 77%, and the average percentage deviation $Pr$ varies from 89% to 94% with a global average of 90.2%. So, we can conclude that the behavior of the proposed algorithm is very interesting for both small and large problem instances.

5. Conclusion

This paper studies the sensitivity of the optimal solution of a binary knapsack problem to perturbations of the profit or weight of a single item. It provides a search procedure that determines the allowable interval of perturbation of the profit of a given item $s$, where $s = 1, \ldots, n$. It also proposes two search procedures for the allowable intervals of perturbations of the weights: one to be used only when the binary knapsack has a unique optimal solution, and one that can be used whether the binary knapsack has a single or multiple optimal solutions. The proposed search methods provide very interesting results, demonstrating the robustness of optimal solutions of knapsack problems. Dealing with perturbations of the profit is much simpler than dealing with perturbations of the weight.

This work can be extended to different versions of the problem. The first direction of research is to study the sensitivity of the optimum of the binary knapsack to perturbations of the profits and/or weights of a subset of items. Generating the new optimal solution when the perturbation is larger than the maximal allowable deviation would also be a valuable contribution. The second direction of research is to consider perturbing the profit and the weight of a selected item (or subset of items) simultaneously. In this case, the study can be more complex, but also more interesting.

Acknowledgment

Many thanks to the anonymous referees for their helpful comments, which contributed to improving the presentation of the paper.

References

E. Balas and E. Zemel, "An algorithm for large zero-one knapsack problems," Operations Research, vol. 28, pp. 1130–1154, 1980.
P. Chu and J.E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, pp. 63–86, 1998.
G.B. Dantzig, "Discrete variable extremum problems," Operations Research, vol. 5, pp. 266–277, 1957.
D. Fayard and G. Plateau, "An algorithm for the solution of the 0–1 knapsack problem," Computing, vol. 28, pp. 269–287, 1982.
A. Freville and G. Plateau, "The 0-1 bidimensional knapsack problem: Toward an efficient high-level primitive tool," Journal of Heuristics, vol. 2, pp. 147–167, 1997.
P.C. Gilmore and R.E. Gomory, "The theory and computation of knapsack functions," Operations Research, vol. 13, pp. 879–919, 1966.
M. Hifi, "Exact algorithms for large-scale unconstrained two and three staged cutting problems," Computational Optimization and Applications, vol. 18, pp. 63–88, 2001.
E. Horowitz and S. Sahni, "Computing partitions with applications to the knapsack problem," Journal of the ACM, vol. 21, pp. 277–292, 1974.
S. Martello and P. Toth, "An upper bound for the zero-one knapsack problem and a branch and bound algorithm," European Journal of Operational Research, vol. 1, pp. 169–175, 1977.
S. Martello and P. Toth, "A new algorithm for the 0-1 knapsack problem," Management Science, vol. 34, pp. 633–644, 1988.
S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley: Chichester, England, 1990.
S. Martello, D. Pisinger, and P. Toth, "Dynamic programming and strong bounds for the 0-1 knapsack problem," Management Science, vol. 45, pp. 414–424, 1999.
S. Martello, D. Pisinger, and P. Toth, "New trends in exact algorithms for the 0-1 knapsack problem," European Journal of Operational Research, vol. 123, pp. 325–332, 2000.
R. Morabito and M. Arenales, "Performance of two heuristics for solving large scale two-dimensional guillotine cutting problems," INFOR, vol. 33, pp. 145–155, 1995.
D. Pisinger, "An exact algorithm for large multiple knapsack problems," European Journal of Operational Research, vol. 114, pp. 528–541, 1999.
D. Pisinger, "Core problems in knapsack algorithms," Operations Research, vol. 47, pp. 570–575, 1999.
D. Pisinger and P. Toth, "Knapsack problems," in D.-Z. Du and P. Pardalos (eds.), Handbook of Combinatorial Optimization, Kluwer Academic Publishers, vol. 1, pp. 299–428, 1998.
S. Sadfi, Méthodes adaptatives et méthodes exactes pour des problèmes de knapsack linéaires et non linéaires, Thesis, LRI, Université d'Orsay, 1999.
J.M. Valério de Carvalho and A.J. Rodrigues, "An LP-based approach to a two-stage cutting stock problem," European Journal of Operational Research, vol. 84, pp. 580–589, 1995.