Core-Chasing Algorithms for the Eigenvalue Problem · Leonardo Robol, Raf Vandebril, David S. Watkins
TRANSCRIPT
Core-Chasing Algorithms for the Eigenvalue Problem
David S. Watkins
Department of Mathematics, Washington State University
HHXX, Virginia Tech, June 20, 2017
David S. Watkins Core-Chasing Algorithms
Our International Research Group
Collaborators:
Jared Aurentz
Thomas Mach
Leonardo Robol
Raf Vandebril
Today’s Topic

The matrix eigenvalue problem: A ∈ C^(n×n).
Find the eigenvalues (. . . vectors, invariant subspaces).

Possible structures:
unitary
unitary-plus-rank-one (companion matrix)
unitary-plus-low-rank
. . . or no special structure
John Francis
photo: Frank Uhlig, 2009

invented the winning algorithm in 1959: the implicitly shifted QR algorithm.
Our algorithms are all variants of this.
Francis’s algorithm . . .
. . . is a bulge chasing algorithm.
We turn it into a core chasing algorithm.
Instead of chasing bulges, we chase core transformations.
Core Transformations

What is a core transformation? It’s a unitary matrix, and it’s essentially 2 × 2:

C2 = [ 1             ]
     [    ×  ×       ]
     [    ×  ×       ]
     [          1    ]
     [             1 ]

Ex: Givens rotator, reflector, . . .
We just wanted a generic term.
Core Transformations

A core transformation acting on rows 2 and 3 can be used to annihilate an entry:

[ 1         ] [ × × × ]   [ × × × ]
[    ×  ×   ] [ × × × ] = [ × × × ]
[    ×  ×   ] [ × × × ]   [ 0 × × ]

Abbreviated notation: each core transformation is drawn as a small bracket marking the two rows on which it acts, instead of writing the full matrix.
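To make the picture concrete, here is a minimal numerical sketch (Python/NumPy, real arithmetic; the helper names are ours, not from the talk) of building a core transformation as a Givens rotator and using it to annihilate an entry:

```python
import numpy as np

def givens(a, b):
    # Rotator [[c, s], [-s, c]] sending (a, b) to (r, 0), r = sqrt(a^2 + b^2).
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def core_transformation(n, i, c, s):
    # Identity except for a 2x2 block acting on rows/columns i, i+1.
    C = np.eye(n)
    C[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
    return C

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [3.0, 1.0, 2.0]])
# Zero the (3,1) entry by combining rows 2 and 3.
c, s = givens(A[1, 0], A[2, 0])
C = core_transformation(3, 1, c, s)
B = C @ A
```

Since C is unitary, B has the same eigenvalue-relevant information as A up to the similarity that a full algorithm would complete.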
Hessenberg QR decomposition

An upper Hessenberg matrix

[ × × × × × ]
[ × × × × × ]
[   × × × × ]
[     × × × ]
[       × × ]

is reduced to upper triangular form by n − 1 core transformations: the first acts on rows 1 and 2 to annihilate the (2,1) entry, the second on rows 2 and 3, and so on down the subdiagonal. Now invert the core transformations:

A = QR = Q1 Q2 · · · Q(n−1) R,

where Q is a descending product of n − 1 core transformations and R is upper triangular.
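The reduction just described can be sketched in a few lines (Python/NumPy, real arithmetic; the function name is ours):

```python
import numpy as np

def hessenberg_qr(H):
    # Factor an upper Hessenberg H as Q @ R, where Q is the product of
    # n-1 core transformations (rotators) and R is upper triangular.
    n = H.shape[0]
    R = np.array(H, dtype=float)
    Q = np.eye(n)
    for i in range(n - 1):
        r = np.hypot(R[i, i], R[i + 1, i])
        c, s = ((1.0, 0.0) if r == 0.0 else (R[i, i] / r, R[i + 1, i] / r))
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, i:] = G @ R[i:i + 2, i:]      # annihilate R[i+1, i]
        Q[:, i:i + 2] = Q[:, i:i + 2] @ G.T      # accumulate the inverses
    return Q, R

H = np.array([[2.0, 3.0, 1.0],
              [1.0, 4.0, 2.0],
              [0.0, 5.0, 6.0]])
Q, R = hessenberg_qr(H)
```

In the core-chasing algorithms, Q is of course never formed as a dense matrix; only the n − 1 (c, s) pairs are kept.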
Our algorithms operate on the matrix in QR decomposed form, A = QR, with Q stored as its descending sequence of core transformations. This is not inefficient. We apply Francis’s algorithm to this factored form.
Operating on Core Transformations

Fusion: two core transformations acting on the same pair of rows can be multiplied together (fused) into a single core transformation.

Turnover: a product of three core transformations in the pattern (rows i, i+1)(rows i+1, i+2)(rows i, i+1) is an essentially 3 × 3 unitary matrix

[ ∗ ∗ ∗ ]
[ ∗ ∗ ∗ ]
[ ∗ ∗ ∗ ]

and can therefore be refactored in the reversed pattern (rows i+1, i+2)(rows i, i+1)(rows i+1, i+2).
Operating on Core Transformations

Turnover as a shift-through operation: by repeated turnovers, a single core transformation sitting on one side of a descending sequence of core transformations can be passed through to the other side, emerging one position lower. In abbreviated notation the core transformation simply shifts through the sequence.
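A minimal sketch of the turnover itself (forming the dense 3 × 3 product and refactoring it; a real implementation works directly on the six rotator parameters, and the function names below are illustrative):

```python
import numpy as np

def rot3(c, s, i):
    # 3x3 identity with rotator [[c, s], [-s, c]] on rows i, i+1 (0-based).
    C = np.eye(3)
    C[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
    return C

def zeroing(a, b):
    # (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0], r >= 0.
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def turnover(C1, C2, C3):
    # C1, C3 act on rows (1,2); C2 acts on rows (2,3).
    # Return D1, D2, D3 with D1 @ D2 @ D3 = C1 @ C2 @ C3,
    # where D1, D3 act on rows (2,3) and D2 on rows (1,2).
    M = C1 @ C2 @ C3
    D1t = rot3(*zeroing(M[1, 0], M[2, 0]), 1)   # zero the (3,1) entry
    N = D1t @ M
    D2t = rot3(*zeroing(N[0, 0], N[1, 0]), 0)   # zero the (2,1) entry
    D3 = D2t @ N                                # diag(1, 2x2 rotator) remains
    return D1t.T, D2t.T, D3

C1 = rot3(0.6, 0.8, 0)
C2 = rot3(0.28, 0.96, 1)
C3 = rot3(0.8, -0.6, 0)
D1, D2, D3 = turnover(C1, C2, C3)
```

The two zeroing steps reduce the first column of the product to e1; unitarity then forces the remainder to be a single rotator on rows (2,3), which is exactly the reversed pattern.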
Operating on Core Transformations

Passing a core transformation through a triangular matrix: applying a core transformation on the right disturbs the triangular form by creating one subdiagonal entry (a bulge), which a core transformation applied on the left removes:

[ ∗ ∗ ∗ ∗ ]         [ ∗ ∗ ∗ ∗ ]          [ ∗ ∗ ∗ ∗ ]
[   ∗ ∗ ∗ ] C   ⇔   [   ∗ ∗ ∗ ]   ⇔  C′  [   ∗ ∗ ∗ ]
[     ∗ ∗ ]         [   + ∗ ∗ ]          [     ∗ ∗ ]
[       ∗ ]         [       ∗ ]          [       ∗ ]

Cost is O(n) flops.
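A sketch of this pass-through for a dense triangular R (Python/NumPy, real arithmetic; function name is ours):

```python
import numpy as np

def pass_through(R, c, s, i):
    # Given upper triangular R and rotator G = [[c, s], [-s, c]] acting on
    # columns i, i+1, find G2, R2 with R @ G_embedded = G2_embedded @ R2
    # and R2 upper triangular again. Cost is O(n) per pass.
    R2 = np.array(R, dtype=float)
    G = np.array([[c, s], [-s, c]])
    R2[:i + 2, i:i + 2] = R2[:i + 2, i:i + 2] @ G    # bulge appears at (i+1, i)
    r = np.hypot(R2[i, i], R2[i + 1, i])
    c2, s2 = ((1.0, 0.0) if r == 0.0 else (R2[i, i] / r, R2[i + 1, i] / r))
    G2 = np.array([[c2, s2], [-s2, c2]])
    R2[i:i + 2, i:] = G2 @ R2[i:i + 2, i:]           # chase the bulge away
    return G2.T, R2

R = np.array([[4.0, 1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0, 2.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
G2, R2 = pass_through(R, 0.6, 0.8, 1)
```

Only two rows and two columns of R are touched, which is why the dense cost is O(n) rather than O(n^2).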
Core Chasing

One Francis iteration on A = QR in pictures: a shift-dependent core transformation is introduced at the top of the descending sequence representing Q. It is then chased downward: at each step it is passed through R and turned over against the core transformations of Q, moving down one position, until it reaches the bottom of the sequence and is fused into the last core transformation.
Flop Count

Cost:
O(n^3) total flops
O(n^2) storage
about the same as for the standard Francis iteration.
Advantages
Are there any advantages?
superior deflation procedure
some structured cases
Deflation

Standard deflation criterion (applied to the Hessenberg matrix): set a(j+1,j) to zero if

|a(j+1,j)| < u (|a(j,j)| + |a(j+1,j+1)|)

(u is unit roundoff).
Deflation

Our deflation criterion: the jth core transformation in Q is

Qj = [ I               ]
     [    c_j   −s_j   ]
     [    s_j    c̄_j   ]
     [               I ]

Set s_j to zero if |s_j| < u (u is unit roundoff).
Deflation

Both criteria are normwise backward stable. How does this affect the eigenvalues? The change in λ depends on the condition number κ(λ). Standard result: λ is perturbed to µ, where

|λ − µ| ≤ u κ(λ) ‖A‖ + O(u^2).

This holds for both deflation criteria.
Deflation

But our criterion does better:

Theorem (Mach and Vandebril (2014))
|λ − µ| ≤ u κ(λ) |λ| + O(u^2).

The relative perturbation in each λ is tiny. This does not hold for the standard deflation criterion.
Deflation

Fun Example:

A = [ 1  2 ]      (0 < ε < u)
    [ ε  ε ]

λ1 = 1 + 2ε + O(ε^2),   λ2 = −ε + O(ε^2)

These eigenvalues are well conditioned. The standard criterion deflates to

[ 1  2 ]
[ 0  ε ],

giving eigenvalues µ1 = 1 and µ2 = ε. The small eigenvalue is off by 200%.
Deflation

Example, continued:

A = [ 1  2 ]      (0 < ε < u)
    [ ε  ε ]

λ1 = 1 + 2ε + O(ε^2),   λ2 = −ε + O(ε^2)

Our criterion:

A = QR ≈ [ 1  −ε ] [ 1   2 ]
         [ ε   1 ] [ 0  −ε ]

Deflates to

[ 1  0 ] [ 1   2 ]   [ 1   2 ]
[ 0  1 ] [ 0  −ε ] = [ 0  −ε ],

giving eigenvalues µ1 = 1 and µ2 = −ε. Both eigenvalues are accurate.
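The example can be checked numerically (a sketch; we take ε = 10^−20, well below the double-precision unit roundoff u ≈ 1.1 × 10^−16):

```python
import numpy as np

eps = 1e-20                      # 0 < eps < u in IEEE double precision
A = np.array([[1.0, 2.0], [eps, eps]])
# Exact eigenvalues: 1 + 2*eps + O(eps^2) and -eps + O(eps^2).

# Standard criterion: |a21| < u*(|a11| + |a22|), so zero it and read the
# diagonal. The small eigenvalue comes out as +eps: wrong sign, 200% off.
mu_standard = np.diag(A)

# QR form: one Givens rotator triangularizes A.
r = np.hypot(A[0, 0], A[1, 0])
c, s = A[0, 0] / r, A[1, 0] / r   # s = eps, so |s| < u and Q deflates to I
R = np.array([[c, s], [-s, c]]) @ A
mu_qr = np.diag(R)                # the small eigenvalue comes out as -eps
print(mu_standard[1], mu_qr[1])
```

The rotator's tiny sine carries exactly the information that the standard entrywise criterion throws away.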
Exploitation of Structure
Structures we can exploit
unitary
companion matrix (unitary-plus-rank-one)
unitary-plus-low-rank
Unitary Case

If A is unitary, R is trivial, so A = Q: the matrix is just its descending sequence of core transformations.

Cost is O(n) flops per iteration, O(n^2) flops total.
Storage requirement is O(n).

Gragg (1986)
Ammar, Reichel, M. Stewart, Bunse-Gerstner, Elsner, He, Watkins, . . .
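A sketch of why the storage is O(n) in the unitary case: a descending product of n − 1 rotators is itself a unitary upper Hessenberg matrix, so the 2(n − 1) rotator parameters determine the whole matrix (the function name below is ours):

```python
import numpy as np

def from_rotators(params):
    # Build Q = Q1 Q2 ... Q_{n-1} from the (c, s) pair of each rotator.
    # The dense Q is formed here only for illustration.
    n = len(params) + 1
    Q = np.eye(n)
    for i, (c, s) in enumerate(params):
        G = np.eye(n)
        G[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
        Q = Q @ G
    return Q

params = [(0.6, 0.8), (0.28, 0.96), (0.8, 0.6)]   # O(n) numbers stored
Q = from_rotators(params)
```

The resulting Q is unitary and upper Hessenberg; an iteration then only updates the list of (c, s) pairs, never a dense matrix.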
Companion Case

p(x) = x^n + a(n−1) x^(n−1) + a(n−2) x^(n−2) + · · · + a0 = 0    (monic polynomial)

companion matrix:

A = [ 0   · · ·        0   −a0     ]
    [ 1    0   · · ·   0   −a1     ]
    [      1   . . .       ...     ]
    [           . . .  0   −a(n−2) ]
    [                  1   −a(n−1) ]

. . . get the zeros of p by computing the eigenvalues (this is what MATLAB’s roots command does). The companion matrix is unitary-plus-rank-one.
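A sketch of the companion construction (this builds the dense matrix for illustration; the fast structured method never forms it, and the `companion` helper below is ours):

```python
import numpy as np

def companion(a):
    # a = [a0, a1, ..., a_{n-1}]: coefficients of the monic polynomial
    # p(x) = x^n + a_{n-1} x^{n-1} + ... + a1 x + a0.
    n = len(a)
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    A[:, -1] = -np.asarray(a)        # last column carries -a0, ..., -a_{n-1}
    return A

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
A = companion([-6.0, 11.0, -6.0])
roots = np.sort(np.linalg.eigvals(A).real)
print(roots)
```

The eigenvalues of A are exactly the zeros of p, here 1, 2, and 3 up to roundoff.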
Cost of solving the companion eigenvalue problem

If the structure is not exploited (Francis’s algorithm):
O(n^2) storage, O(n^3) flops

If the structure is exploited:
O(n) storage, O(n^2) flops
several methods proposed: data-sparse representation + Francis’s algorithm
Ours is fastest . . . and we can prove backward stability.
I spoke about this at the previous Householder symposium.
Representation of R

We store the QR decomposed form, A = QR, where

R = [ 1             −a1     ]
    [    1          −a2     ]
    [      . . .     ...    ]
    [           1   −a(n−1) ]
    [               −a0     ]

This is unitary-plus-rank-one. How do we store it?
Representation of R

Add a row and column to R:

R = [ 1             −a1    0  ]
    [    1          −a2    0  ]
    [      . . .     ...   ...]
    [               −a0    1  ]
    [ 0  0   · · ·    0    0  ]

This is still unitary-plus-rank-one.
Representation of R

Split off the rank-one part:

R = [ 1            0   0 ]   [ 0            −a1   0 ]
    [    1         0   0 ]   [    0         −a2   0 ]
    [      . . .  ...  ...] + [      . . .   ...  ...]
    [             0   1 ]     [              −a0   0 ]
    [ 0  0  · · · 1   0 ]     [ 0  0  · · ·  −1    0 ]

The first matrix is unitary (a permutation), and the second has rank one.
Representation of R

R = C_n^∗ · · · C_1^∗ (B_1 · · · B_n + e_1 y^T),

where the C_i and B_i are core transformations.

. . . and we don’t have to store the rank-one part e_1 y^T!
This helps with backward stability.
Storage is O(n).
Passing a core transformation through R

With R stored as C_n^∗ · · · C_1^∗ (B_1 · · · B_n + e_1 y^T), passing a core transformation through R amounts to a constant number of turnovers against the C and B sequences.

Cost: O(1) flops instead of O(n).
Other things we can do

We can also handle
generalized eigenvalue problem
companion pencil
matrix polynomial eigenvalue problems (L. Robol talk at 2 pm)
generalizations of Hessenberg form
and more.

Monograph in progress (130+ pp.)
The Companion Pencil

p(x) = a0 + a1 x + · · · + a_n x^n    (not monic)

Divide by a_n, or . . . use the companion pencil:

λ [ 1              ]   [ 0   · · ·        0   −a0     ]
  [    1           ]   [ 1    0   · · ·   0   −a1     ]
  [      . . .     ] − [      1   . . .       ...     ]
  [          1     ]   [           . . .  0   −a(n−2) ]
  [            a_n ]   [                  1   −a(n−1) ]

We can handle this too (for a price). This should be superior in some situations, e.g. if a_n is tiny.
Backward Stability

All of these algorithms are normwise backward stable. This is “obvious” because we work only with unitary transformations, but it took us a while to write down a correct proof.

For details see . . .

Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins, Fast and backward stable computation of roots of polynomials, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 942–973. Co-winner of SIAM Best Paper Prize (2017).

Jared L. Aurentz, Thomas Mach, Leonardo Robol, Raf Vandebril, and David S. Watkins, Roots of polynomials: on twisted QR methods for companion matrices and pencils, arXiv:1611.02435, currently undergoing a complete rewrite.
Backward Stability Odyssey
Presentation at Householder 2014
First written attempt (horrible)
Second attempt was much better (2015 paper) . . .
. . . but there was one more thing!
Corrected in the companion pencil paper. We also exploited the structure of the backward error to get a better result. Rejected!
Search for examples.
Take a closer look.
backward error on pencil vs. polynomial coefficients
monic vs. scaled polynomial
Our analysis keeps getting better.
Stay tuned for the revised paper.
David S. Watkins Core-Chasing Algorithms
Nice Picture
[Figure: backward error ‖â − a‖₂ versus ‖a‖₂ on log–log axes (10¹ to 10⁷ horizontal, 10⁻¹⁸ to 10² vertical), with reference curves ‖a‖₂ and ‖a‖₂². Left panel: our code. Right panel: LAPACK balanced.]
Our code is not just faster, it is also more accurate!
Thank you for your attention.
David S. Watkins Core-Chasing Algorithms