TRANSCRIPT
Interval Linear Algebra and Computational Complexity
Jaroslav Horacek
Department of Applied Mathematics, Charles University, Prague
MatTriad, September 2015
The journey of Pi . . .
Our goal
Journey through interval linear algebra
Interval linear algebra
Interval linear algebra
- The key notion is the interval (closed and real)
- We use intervals instead of real coefficients
- Many reasons:
  - Floating point rounding (√2, π)
  - Data uncertainty
  - Verification, computer-assisted proofs
Interval linear algebra
Interval matrix
A = [A̲, Ā] = {A | A̲ ≤ A ≤ Ā}

- Can also be defined via the midpoint matrix Ac and the radius matrix ∆:
  A = [Ac − ∆, Ac + ∆]
- An interval vector is a special case of an interval matrix
- Interval quantities are denoted by boldface!
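The endpoint and midpoint-radius representations above are interchangeable. A minimal numpy sketch (the matrices and variable names are ours, for illustration only):

```python
import numpy as np

# Midpoint-radius representation of an interval matrix A = [Ac - D, Ac + D],
# following the slide's notation: Ac is the midpoint, D the (nonnegative) radius.
Ac = np.array([[2.0, 1.0],
               [1.0, 3.0]])
D = np.array([[0.1, 0.2],
              [0.0, 0.1]])

A_low = Ac - D   # lower bound matrix (entrywise)
A_up = Ac + D    # upper bound matrix (entrywise)

# Conversion back: midpoint and radius recovered from the endpoint form.
assert np.allclose((A_low + A_up) / 2, Ac)
assert np.allclose((A_up - A_low) / 2, D)
```

Note that in floating point these conversions are not rigorous; verified interval libraries perform them with outward rounding.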
Computational complexity
Computational complexity
P
Contains problems that can be solved in polynomial time.

- It is the class of tractable problems
- Examples: linear programming, sorting numbers, . . .
Computational complexity
NP
Contains problems whose solutions can be verified in polynomial time.

- P ⊆ NP
- Solving them usually needs exponentially many computations
- Examples: SAT, graph 3-coloring, . . .
P=NP?
Philosophical meaning of NP?
NP-hard

- A problem is NP-hard if it is at least as hard as any problem in NP
- Any problem in NP can be solved using some NP-hard problem
- A problem is NP-complete if it is among the hardest problems in NP
- All NP-complete problems can be reduced to each other
P, NP, NP-complete, NP-hard
co-NP
- A problem is in co-NP if its complement is in NP
- Examples: TAUT, . . .
P=NP?
co-NP-hard
- A problem is co-NP-hard if it is at least as hard as any problem in co-NP
- A problem is co-NP-complete if it is among the hardest problems in co-NP
Difference between NP and co-NP?
- SAT = {φ | φ is a satisfiable boolean formula} (NP-complete)
- TAUT = {φ | φ is a tautology} (co-NP-complete)
Before we land . . .
. . . let’s mention the deeper goal of this talk
- We will meet various difficult problems
- All of them have exponential-time algorithms to solve them
- Surprising sufficient conditions
- Surprising feasible cases
- Reducibility between problems
The map of interval linear algebra
Regularity and singularity
Regularity
A square interval matrix A is called regular if every A ∈ A is regular.
Singularity
Otherwise, it is called singular.
Regularity and singularity
- Regularity and singularity are complementary problems
- Checking singularity of an interval matrix is NP-complete
- Checking regularity of an interval matrix is co-NP-complete
Regularity and singularity
Regularity sufficient conditions
1. ρ(|Ac⁻¹|∆) < 1,
2. σmax(∆) < σmin(Ac),
3. AcᵀAc − ‖∆ᵀ∆‖I is positive definite for some consistent matrix norm ‖·‖.

Singularity sufficient conditions
1. maxj (∆|Ac⁻¹|)jj ≥ 1,
2. (∆ − |Ac|)⁻¹ ≥ 0,
3. ∆ᵀ∆ − AcᵀAc is positive semidefinite.
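Regularity condition 1, the spectral-radius test ρ(|Ac⁻¹|∆) < 1, is cheap to evaluate. A plain floating-point sketch in numpy (helper name is ours; a rigorous check would need outward rounding or verified arithmetic):

```python
import numpy as np

def beeck_regularity(Ac, D):
    """Sufficient condition for regularity of [Ac - D, Ac + D]:
    rho(|Ac^-1| D) < 1. Plain floating point, not a verified check."""
    M = np.abs(np.linalg.inv(Ac)) @ D
    rho = max(abs(np.linalg.eigvals(M)))  # spectral radius of a nonnegative matrix
    return rho < 1.0

Ac = np.array([[4.0, 1.0],
               [1.0, 3.0]])
D = np.array([[0.5, 0.1],
              [0.1, 0.5]])
print(beeck_regularity(Ac, D))  # True: every matrix in this interval matrix is regular
```

If the test returns False the condition is merely inconclusive; the interval matrix may still be regular.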
Special classes
- interval M-matrices
- interval H-matrices
- If A̲ and Ā are regular and A̲⁻¹ ≥ 0, Ā⁻¹ ≥ 0, then A is regular
Map of interval linear algebra
Full column rank
Full column rank
An m × n interval matrix A has full column rank if every A ∈ A has full column rank.
Full column rank
- For square matrices, full column rank becomes regularity
- Checking full column rank is co-NP-complete?
Full column rank
An m × n interval matrix A = [Ac − ∆, Ac + ∆] has full column rank if any of the following conditions holds.

Sufficient conditions
1. Ac has full column rank and ρ(|Ac⁺|∆) < 1,
2. ‖I − RA‖ < 1,
3. σmax(∆) < σmin(Ac),
4. ‖∆‖ < ‖Ac⁺‖⁻¹.
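Condition 3, σmax(∆) < σmin(Ac), needs only two singular value decompositions. A floating-point sketch (the test matrix is ours, chosen for illustration; not a verified computation):

```python
import numpy as np

def fcr_singular_value_test(Ac, D):
    """Sufficient condition for full column rank of [Ac - D, Ac + D]:
    sigma_max(D) < sigma_min(Ac). Floating point only."""
    sigma_max_D = np.linalg.svd(D, compute_uv=False)[0]    # largest singular value
    sigma_min_Ac = np.linalg.svd(Ac, compute_uv=False)[-1]  # smallest singular value
    return sigma_max_D < sigma_min_Ac

# A 3x2 interval matrix (m > n) in midpoint-radius form.
Ac = np.array([[3.0, 0.0],
               [0.0, 2.0],
               [1.0, 1.0]])
D = np.full((3, 2), 0.1)
print(fcr_singular_value_test(Ac, D))  # True
```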
Map of interval linear algebra
Solving linear systems
Linear system
Ax = b, where A is an interval matrix and b is an interval vector.

Solution set
Σ = {x | Ax = b for some A ∈ A, b ∈ b}.
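Membership of a given point x in Σ has a classical characterization not shown on the slide (the Oettli-Prager theorem): x ∈ Σ if and only if |Ac x − bc| ≤ ∆|x| + δ componentwise, where bc, δ are the midpoint and radius of b. A sketch using the talk's 2D example system (helper name is ours):

```python
import numpy as np

def in_solution_set(x, Ac, D, bc, d):
    """Oettli-Prager test: x lies in the solution set Sigma iff
    |Ac x - bc| <= D |x| + d holds componentwise."""
    return np.all(np.abs(Ac @ x - bc) <= D @ np.abs(x) + d)

# The 2D example [5,10]x + [-20,-5]y = [50,100], [10,15]x + [5,10]y = [-50,280]
# written in midpoint-radius form.
Ac = np.array([[7.5, -12.5],
               [12.5, 7.5]])
D = np.array([[2.5, 7.5],
              [2.5, 2.5]])
bc = np.array([75.0, 115.0])
d = np.array([25.0, 165.0])

print(in_solution_set(np.array([10.0, 0.0]), Ac, D, bc, d))  # True
```

For example, x = (10, 0) solves 10x − 20y = 100, 10x + 5y = 100, so it belongs to Σ; the origin does not.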
Solving linear systems
Hull
The tightest possible interval vector (box) x such that Σ ⊆ x.

Enclosure
Any interval vector (box) x such that Σ ⊆ x.
Solution set in 2D

[5, 10] x + [−20, −5] y = [50, 100],
[10, 15] x + [5, 10] y = [−50, 280].
Solution set in 3D

[−5, 10] x + 10 y + [15, 20] z = [50, 100],
10 x − 5 y + [5, 15] z = [−50, 50],
10 x + [10, 25] y + [−10, −5] z = [50, 100].
Solving linear systems
- Computing the exact hull is NP-hard
- Computing an arbitrarily precise ε-approximation of the hull is NP-hard
- This holds even if we limit the widths of the intervals in the system matrix
Methods

Square systems
1. Gaussian elimination
2. Jacobi, Gauss-Seidel, Krawczyk methods
3. Hansen-Bliek-Rohn method
4. Hladik shaving method

Overdetermined systems
1. Gaussian elimination
2. Rohn method
3. Subsquares method
4. Least squares
5. Popova method
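None of these methods is spelled out on the slides, but a brute-force baseline illustrates the "exponentially many computations" theme: solving every system whose coefficients are fixed at interval endpoints yields points inside Σ, so their bounding box is an inner estimate of the hull (it can be too small, and it is not an enclosure). A sketch under those caveats, with our own data and helper name:

```python
import itertools
import numpy as np

def hull_inner_estimate(A_low, A_up, b_low, b_up):
    """Solve all 'endpoint' systems (each entry of A and b fixed at its
    lower or upper bound). Every solution lies in Sigma, so the bounding
    box of the solutions is an inner estimate of the hull, NOT an enclosure.
    Cost is exponential: 2^(n^2 + n) point systems for an n x n matrix."""
    pts = []
    for amask in itertools.product([0, 1], repeat=A_low.size):
        A = np.where(np.array(amask).reshape(A_low.shape), A_up, A_low)
        for bmask in itertools.product([0, 1], repeat=b_low.size):
            b = np.where(np.array(bmask), b_up, b_low)
            pts.append(np.linalg.solve(A, b))
    pts = np.array(pts)
    return pts.min(axis=0), pts.max(axis=0)

# A small regular interval system (every endpoint matrix has det >= 11).
A_low = np.array([[4.0, -1.0], [0.0, 3.0]])
A_up = np.array([[5.0, 1.0], [1.0, 4.0]])
b_low = np.array([1.0, -1.0])
b_up = np.array([2.0, 1.0])
lo, hi = hull_inner_estimate(A_low, A_up, b_low, b_up)
print(lo, hi)
```

The methods listed above exist precisely to avoid this enumeration while still producing guaranteed (outer) enclosures.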
Special linear systems
- When A has full column rank and Ac is a diagonal matrix with positive diagonal entries, the Hansen-Bliek-Rohn method returns the exact hull
- When A is an M-matrix, Gauss-Seidel iteration converges to the hull
- When A is an M-matrix and b is nonnegative, Gaussian elimination yields the hull
Map of interval linear algebra
Matrix inverse
Interval inverse A⁻¹
The interval inverse of A is the matrix A⁻¹ = [B̲, B̄], where

B̲ = min{A⁻¹ | A ∈ A},
B̄ = max{A⁻¹ | A ∈ A}

(minimum and maximum taken entrywise).

Enclosure of A⁻¹
Any matrix B such that A⁻¹ ⊆ B.
Matrix inverse
- The interval inverse can be computed by computing the hulls of the systems Ax = ei
- Computing exact bounds of the interval inverse is NP-hard
- Checking whether an interval matrix has a nonnegative inverse is in P
Matrix inverse
- Enclosures of the systems Ax = ei
- Gaussian elimination on the matrix [A | I]
- Rohn's formula for an inverse enclosure
Matrix inverse
Special classes
If A̲ and Ā are regular and A̲⁻¹, Ā⁻¹ ≥ 0, then A is regular and

A⁻¹ = [Ā⁻¹, A̲⁻¹] ≥ 0.
I M-matrices
Map of interval linear algebra
Checking solvability
Solvability
An interval linear system Ax = b is (weakly) solvable if its solution set Σ ≠ ∅.

Strong solvability
An interval linear system Ax = b is strongly solvable if every system Ax = b with A ∈ A, b ∈ b is solvable.
Checking solvability
- Ax = b is not solvable if the matrix [A b] has full column rank
- Checking weak solvability is NP-complete
- Checking strong solvability is in P
Checking solvability
- Use the sufficient conditions for full column rank
- Gaussian elimination, Jacobi method, Gauss-Seidel, subsquares method
- If we have some enclosure x, then a system Ax = b is clearly unsolvable if Ax ∩ b = ∅
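The last test above can be sketched with midpoint-radius interval arithmetic: compute an enclosure of the product Ax and check whether it misses b in some component. The data below are toy values of our own choosing, the enclosure x is assumed to come from some enclosure method, and outward rounding is omitted:

```python
import numpy as np

def interval_matvec(Ac, D, xc, xr):
    """Midpoint-radius enclosure of A x for the interval matrix <Ac, D>
    and interval vector <xc, xr> (standard midrad product enclosure)."""
    mid = Ac @ xc
    rad = np.abs(Ac) @ xr + D @ (np.abs(xc) + xr)
    return mid - rad, mid + rad

def certainly_unsolvable(Ac, D, bc, d, xc, xr):
    """If x = <xc, xr> encloses the whole solution set and the enclosure
    of A x misses b in some component, the system has no solution."""
    lo, hi = interval_matvec(Ac, D, xc, xr)
    # Empty intersection in component i: max of lower bounds > min of upper bounds.
    return np.any(np.maximum(lo, bc - d) > np.minimum(hi, bc + d))

Ac = np.eye(2); D = np.zeros((2, 2))
bc = np.array([10.0, 10.0]); d = np.zeros(2)
xc = np.array([0.5, 0.5]); xr = np.array([0.5, 0.5])  # candidate enclosure [0, 1]^2
print(certainly_unsolvable(Ac, D, bc, d, xc, xr))  # True
```

Here Ax ⊆ [0, 1]² while b = (10, 10), so the intersection is empty and unsolvability is certified.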
Interesting paradox
- In systems of interval linear equations:
- Checking weak solvability is NP-complete
- Checking strong solvability is in P!
Map of interval linear algebra
Positive definiteness
Symmetric interval matrix
Given A with Ac and ∆ symmetric, define A^S = {A ∈ A | A symmetric}.

Positive definiteness
A symmetric interval matrix A^S is positive definite if xᵀAx > 0 for each A ∈ A^S and each x ≠ 0.
Positive definiteness
- Checking positive definiteness is co-NP-complete
- Checking positive semidefiniteness is co-NP-complete
Positive definiteness
Necessary and sufficient condition
A symmetric interval matrix A^S is positive definite if and only if it is regular and contains at least one positive definite matrix.

- Equivalent to A^S being regular and Ac positive definite
- Can be transformed into regularity checking
Positive definiteness
Sufficient conditions
1. Check regularity and positive definiteness of some matrix (e.g., Ac)
2. ρ(∆) < λmin(Ac)
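Condition 2 rests on eigenvalue perturbation: for symmetric A ∈ A^S we have λmin(A) ≥ λmin(Ac) − ρ(∆). A floating-point sketch (helper name and test matrices are ours, not a verified computation):

```python
import numpy as np

def pd_sufficient(Ac, D):
    """Sufficient condition for positive definiteness of the symmetric
    interval matrix <Ac, D>: rho(D) < lambda_min(Ac). Floating point only."""
    rho = max(abs(np.linalg.eigvals(D)))   # spectral radius of the radius matrix
    lam_min = np.linalg.eigvalsh(Ac).min()  # smallest eigenvalue of the midpoint
    return rho < lam_min

Ac = np.array([[4.0, 1.0],
               [1.0, 5.0]])
D = np.array([[0.5, 0.2],
              [0.2, 0.5]])
print(pd_sufficient(Ac, D))  # True
```

Here λmin(Ac) ≈ 3.38 while ρ(∆) = 0.7, so every symmetric matrix in the interval matrix is positive definite.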
Map of interval linear algebra
Eigenvalues
Eigenvalue
λ is an eigenvalue of A if it is an eigenvalue of some A ∈ A.

Eigenvector
x ≠ 0 is an eigenvector of A if it is an eigenvector of some A ∈ A.

Eigenpair
(λ, x) is an eigenpair of A if Ax = λx for some A ∈ A.
Eigenvalues
- Checking whether λ = 0 is an eigenvalue of A is equivalent to checking singularity
- Checking whether λ is an eigenvalue of A is NP-complete
- Checking whether x is an eigenvector of A is in P
- Checking whether (λ, x) is an eigenpair of A is in P
Eigenvalues of symmetric matrices
The eigenvalues of a symmetric A are real:
λ1(A) ≥ λ2(A) ≥ . . . ≥ λn(A)

Eigenvalue sets
λi(A^S) = {λi(A) : A ∈ A^S}

- Each λi(A^S) is a compact interval
- Computing the extremal bounds λ̄1(A^S) and λ̲n(A^S) is NP-hard
Special cases
- [λ̲i(A^S), λ̄i(A^S)] ⊆ [λi(Ac) − ρ(∆), λi(Ac) + ρ(∆)]
- If Ac is nonnegative, then λ̄1(A^S) = λmax(Ā)
- If ∆ is diagonal
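The enclosure λi(A^S) ⊆ [λi(Ac) − ρ(∆), λi(Ac) + ρ(∆)] can be sanity-checked by sampling random symmetric matrices from A^S; every sampled i-th eigenvalue should land inside the corresponding interval. A sketch with our own test data (sampling only checks, it does not prove, the enclosure):

```python
import numpy as np

rng = np.random.default_rng(0)

Ac = np.array([[3.0, 1.0],
               [1.0, -2.0]])
D = np.array([[0.3, 0.1],
              [0.1, 0.2]])

rho = max(abs(np.linalg.eigvals(D)))  # spectral radius of the radius matrix
# Enclosures [lambda_i(Ac) - rho, lambda_i(Ac) + rho], eigenvalues in descending order.
enc = [(lam - rho, lam + rho) for lam in sorted(np.linalg.eigvalsh(Ac), reverse=True)]

ok = True
for _ in range(1000):
    E = rng.uniform(-D, D)
    A = Ac + (E + E.T) / 2  # symmetric perturbation, entrywise within the radius D
    lams = sorted(np.linalg.eigvalsh(A), reverse=True)
    for lam, (lo, hi) in zip(lams, enc):
        ok = ok and (lo - 1e-12 <= lam <= hi + 1e-12)
print(ok)  # True
```

The symmetrization (E + Eᵀ)/2 stays within the radius because D is symmetric here.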
Our journey ends