
A Numerical Approach toward Approximate Algebraic Computation
Zhonggang Zeng

Northeastern Illinois University, USA

Oct. 18, 2006, Institute of Mathematics and its Applications

What would happen when we try numerical computation on algebraic problems?

A numerical analyst got a surprise 50 years ago on a deceptively simple problem.

1

James H. Wilkinson (1919-1986)

Britain’s Pilot ACE

Start of project: 1948
Completed: 1950
Add time: 1.8 microseconds
Input/output: cards
Memory size: 352 32-digit words
Memory type: delay lines
Technology: 800 vacuum tubes
Floor space: 12 square feet
Project leader: J. H. Wilkinson

2

The Wilkinson polynomial

p(x) = (x-1)(x-2)...(x-20) = x^20 - 210 x^19 + 20615 x^18 - ...

Wilkinson wrote in 1984:

Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst.

(x - 0.99651)(x - 2.3145)^2 (x - 5.7222)^5 (x - 11.98)^7 (x - 18.379)^5

3
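Wilkinson's experience is easy to reproduce today. A quick numpy sketch (an illustration, not code from the talk): merely storing the coefficients of p in double precision perturbs them slightly, and the computed roots drift far from 1, ..., 20.

```python
import numpy as np

# Coefficients of p(x) = (x-1)(x-2)...(x-20).  Storing them in double
# precision already perturbs p slightly, since 20! exceeds 2^53.
p = np.poly(np.arange(1, 21))

# Recover the roots numerically and compare with the true roots 1, ..., 20.
roots = np.sort(np.roots(p).real)
err = np.max(np.abs(roots - np.arange(1, 21)))
print(err)   # many orders of magnitude above machine epsilon (~2.2e-16)
```

The point is not that the root finder is bad: the roots themselves are violently sensitive to the coefficients.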

Matrix rank problem

5

Factoring a multivariate polynomial:

A factorable polynomial becomes irreducible under approximation.

6

Solving polynomial systems:

Example: A distorted cyclic four system:

Translation: There are two 1-dimensional solution sets, each a curve (z1(t), z2(t), z3(t), z4(t)).

7

Distorted Cyclic Four system in floating point form:

Under approximation, the 1-dimensional solution sets degenerate into isolated solutions.

8

A tiny perturbation in the data (< 10^-8) causes a huge error in the solution (> 10^5).

9

What could happen in approximate algebraic computation?

• “traumatic” error

• dramatic deformation of solution structure

• complete loss of solutions

• miserable failure of classical algorithms

• Polynomial division
• Euclidean Algorithm
• Gaussian elimination
• determinants
• …

10

So, why bother with approximation in algebra?

1. You may have no choice (e.g. Abel’s Impossibility Theorem)

Either way, all subsequent computations become approximate.

11

So, why bother with approximate solutions?

1. You may have no choice

2. Approximate solutions are better!

GCD( f(x,y), g(x,y) ) = 1

Application: Image restoration (Pillai & Liang)

Each blurred image is the true image p(x,y) times a blur factor, so the two blurred images f(x,y) and g(x,y) share p(x,y) as a common factor. Inexactness makes the exact GCD trivial, GCD( f(x,y), g(x,y) ) = 1, but the approximate GCD recovers the restored image:

AGCD( f(x,y), g(x,y) ) ≈ p(x,y)

Approximate solution is better than exact solution!

13

Perturbed Cyclic 4

Exact solutions by Maple: 16 isolated (codim 4) solutions

Approximate solutions

by Bertini

(Courtesy of Bates, Hauenstein, Sommese, Wampler)

Perturbed Cyclic 4

Exact solutions by Maple: 16 isolated (codim 4) solutions

Or, by an experimental approximate elimination combined with approximate GCD

Approximate solutions are better than exact ones, arguably.

15

So, why bother with approximate solutions?

1. You may have no choice

2. Approximate solutions are better

3. Approximate solutions (usually) cost less

Example: JCF computation

A Jordan matrix with diagonal entries t, s, s, r, r, r and superdiagonal 1s within each block: J = J1(t) ⊕ J2(s) ⊕ J3(r)

Special case: r = 2, s = 3, t = 5

Maple takes 2 hours

On a similar 8x8 matrix, Maple and Mathematica run out of memory

1. You may have no choice

2. Approximate solutions are better

3. Approximate solutions (usually) cost less

16

Pioneering work in numerical algebraic computation (an incomplete list)

• Homotopy method for solving polynomial systems (Li, Sommese, Wampler, Verschelde, …)

• Numerical Polynomial Algebra (Stetter)

• Numerical Algebraic Geometry (Sommese, Wampler, Verschelde, …)

17

What is an “approximate solution”?

To solve x^2 - 2x + 1 = 0 with 8 digits of precision:

Exact computation: x = 1 (a double root).

Approximate solution using 8-digit precision: x = 0.9999…, 1.0001…, i.e. the exact roots of

(x - 0.9999)(x - 1.0001) = 0, that is, (x - 1)^2 - (10^-4)^2 = 0

backward error: 10^-8 = 0.00000001 -- the method is good

forward error: 10^-4 = 0.0001 -- the problem is bad

18

The condition number

[Forward error] ≤ [Condition number] × [Backward error]

The backward error comes from the numerical method; the condition number comes from the problem.

A large condition number <=> the problem is sensitive, or ill-conditioned

An infinite condition number <=> the problem is ill-posed

19
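The amplification is easy to watch in action. A small numpy sketch (illustrative, not from the talk) with the notoriously ill-conditioned Hilbert matrix: a backward error of size 10^-10 in the data produces a forward error roughly the condition number times larger.

```python
import numpy as np

# The 8x8 Hilbert matrix A[i,j] = 1/(i+j+1) is ill-conditioned:
# cond(A) is on the order of 1e10.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

x_true = np.ones(n)
b = A @ x_true

# Introduce a backward error of size ~1e-10 in the right-hand side ...
db = 1e-10 * np.random.default_rng(0).standard_normal(n)
x = np.linalg.solve(A, b + db)

# ... and observe a forward error amplified by roughly the condition number.
print(np.linalg.cond(A))
print(np.linalg.norm(x - x_true))
```

The solver is not at fault; the problem itself amplifies the perturbation.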

Wilkinson’s Turing Award contribution:

Backward error analysis

• A numerical algorithm solves a “nearby” problem

• A “good” algorithm may still get a “bad” answer, if the problem is ill-conditioned (bad)

20

A well-posed problem: (Hadamard, 1923) the solution satisfies

• existence
• uniqueness
• continuity w.r.t. data

Ill-posed problems are common in applications

- image restoration
- deconvolution
- IVP for stiction-damped oscillator
- inverse heat conduction
- some optimal control problems
- electromagnetic inverse scattering
- air-sea heat flux estimation
- the Cauchy problem for the Laplace equation
- …

21

An ill-posed problem is infinitely sensitive to perturbation

tiny perturbation huge error

Ill-posed problems are common in algebraic computing

- Multiple roots

- Polynomial GCD

- Factorization of multivariate polynomials

- The Jordan Canonical Form

- Multiplicity structure/zeros of polynomial systems

- Matrix rank

22

If the answer is highly sensitive to perturbations, you have probably asked the wrong question.

Maxims about numerical mathematics, computers, science and life, L. N. Trefethen, SIAM News

23

Does that mean:

(Most) algebraic problems are wrong problems?

A numerical algorithm seeks the exact solution of a nearby problem

Ill-posed problems are infinitely sensitive to data perturbation

Conclusion: Numerical computation is incompatible with ill-posed problems.

Solution: Formulate the right problem.


Challenge in solving ill-posed problems:

Can we recover the lost solution when the problem is inexact?

24

William Kahan:

This is a misconception

Are ill-posed problems really sensitive to perturbations?

Kahan’s discovery in 1972:

Ill-posed problems are sensitive to arbitrary perturbations, but insensitive to structure-preserving perturbations.

25

Why are ill-posed problems infinitely sensitive?

Plot of pejorative manifolds of degree 3 polynomials with multiple roots

• Problems with a certain solution structure form a “pejorative manifold”

• The solution structure is lost when the problem leaves the manifold due to an arbitrary perturbation

• The problem may not be sensitive at all if it stays on the manifold, unless it is near another pejorative manifold

W. Kahan’s observation (1972)

26

Geometry of ill-posed algebraic problems

Rank-r matrices:  M_r^{m×n} = { A ∈ C^{m×n} | rank(A) = r },  codim M_r^{m×n} = (m - r)(n - r)

Polynomial pairs:  P_r^{m,n} = { (p, q) | p, q ∈ C[x_1, …, x_l], deg(p) = m, deg(q) = n, deg(GCD(p, q)) = r },  codim P_r^{m,n} = r

The closures stratify the data spaces:

M_0^{m×n} ⊂ M_1^{m×n} ⊂ ··· ⊂ M_n^{m×n} = C^{m×n},  P_n^{m,n} ⊂ ··· ⊂ P_1^{m,n} ⊂ P_0^{m,n}

Similar manifold stratification exists for problems like factorization, JCF, multiple roots …

27

Manifolds of 4x4 matrices defined by Jordan structures (Edelman, Elmroth and Kagstrom 1997)

e.g. {2,1} {1} is the structure of 2 eigenvalues in 3 Jordan blocks of sizes 2, 1 and 1

28

Illustration of pejorative manifolds

Strata of codimension 0, 1, 2, and 3; a perturbation carries Problem A to Problem B. Which manifold should each be assigned to?

29

The “nearest” manifold may not be the answer

The right manifold is of highest codimension within a certain distance

A “three-strikes” principle for formulating an “approximate solution” to an ill-posed problem:

• Backward nearness: The approximate solution is the exact solution of a nearby problem

• Maximum codimension: The approximate solution is the exact solution of a problem on the nearby pejorative manifold of the highest codimension.

• Minimum distance: The approximate solution is the exact solution of the nearest problem on the nearby pejorative manifold of the highest codimension.

Finding the approximate solution is (likely) a well-posed problem.

The approximate solution is a generalization of the exact solution.

30

Formulation of the approximate rank/kernel: for A ∈ C^{m×n} and θ > 0,

The approximate rank of A within θ:  rank_θ(A) = min over ‖A - B‖ ≤ θ of rank(B)

The approximate kernel of A within θ:  Ker_θ(A) = Ker(B), with ‖A - B‖_2 = min over rank(C) = rank_θ(A) of ‖A - C‖_2

Backward nearness: the app-rank of A is the exact rank of a certain matrix B within θ.

Maximum codimension: that matrix B is on the pejorative manifold possessing the highest codimension and intersecting the θ-neighborhood of A.

Minimum distance: that B is the nearest matrix on the pejorative manifold.

Continuity of the approximate solution:

• An exact rank is the app-rank within sufficiently small θ.

• App-rank is continuous (or well-posed).

31
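A minimal numpy sketch of this formulation (an illustration via the SVD, not the rank-revealing algorithm of Li-Zeng; the matrix and tolerance are made up): counting singular values above θ gives the approximate rank, and the trailing right singular vectors span the approximate kernel.

```python
import numpy as np

def approx_rank_kernel(A, theta):
    """App-rank and app-kernel of A within theta, via the SVD.

    The rank-r SVD truncation is the nearest matrix of rank r in the
    2-norm, so singular values above theta count toward the app-rank,
    and the trailing right singular vectors span the app-kernel.
    """
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > theta))     # app-rank within theta
    kernel = Vh[r:].conj().T       # app-kernel basis (columns)
    return r, kernel

# A 3x3 matrix that is "numerically" rank 2: the third row is the sum of
# the first two, up to a 1e-9 perturbation.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0 + 1e-9]])
r, K = approx_rank_kernel(A, theta=1e-6)
print(r)                          # app-rank 2 despite exact rank 3
print(np.linalg.norm(A @ K))      # tiny residual on the app-kernel
```

The exact rank of A is 3, but within θ = 10^-6 the app-rank is 2 and the app-kernel is essentially that of the unperturbed matrix.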

Example: a matrix A with rank 4 and nullity 2, with a kernel basis. The exact rank of the perturbed matrix A + E jumps to 6 with nullity 0. After reformulating the rank, the app-rank of A + E is again 4 with nullity 2, and dist( Ker_θ(A), Ker_θ(A + E) ) stays small.

32

Ill-posedness is removed successfully.

App-rank/kernel can be computed by SVD and other rank-revealing algorithms (e.g. Li-Zeng, SIMAX, 2005)

Formulation of the approximate GCD: θ > 0, f, g ∈ C[x_1, …, x_l], deg(f) = m, deg(g) = n

P_j^{m,n} = { (p, q) | p, q ∈ C[x_1, …, x_l], deg(p) = m, deg(q) = n, deg(GCD(p, q)) = j }

The AGCD within θ:  AGCD_θ(f, g) = EGCD(p, q), where

k = max { codim(P_j^{m,n}) | inf over (p_j, q_j) ∈ P_j^{m,n} of ‖(f, g) - (p_j, q_j)‖ ≤ θ }   (maximum codimension)

(p, q) ∈ P_k^{m,n} with ‖(f, g) - (p, q)‖ = min over (u, v) ∈ P_k^{m,n} of ‖(f, g) - (u, v)‖   (minimum distance)

• Finding the AGCD is well-posed if θ is sufficiently small

• EGCD is a special case of AGCD for sufficiently small θ

(Z. Zeng, Approximate GCD of inexact polynomials, Part I & II)

33

Similar formulation strikes out ill-posedness in problems such as

• Approximate rank/kernel (Li,Zeng 2005, Lee, Li, Zeng 2006) • Approximate multiple roots/factorization (Zeng 2005)

• Approximate GCD (Zeng-Dayton 2004, Gao-Kaltofen-May-Yang-Zhi 2004)

• Approximate Jordan Canonical Form (Zeng-Li 2006)

• Approximate irreducible factorization (Sommese-Wampler-Verschelde 2003, Gao et al 2003, 2004, in progress)

• Approximate dual basis and multiplicity structure (Dayton-Zeng 05, Bates-Peterson-Sommese ’06)

• Approximate elimination ideal (in progress)

34

The two-staged algorithm

After formulating the approximate solution to problem P within θ:

Stage I: Find the pejorative manifold Π of the highest codimension such that dist(P, Π) ≤ θ.

Stage II: Find/solve the problem Q on Π such that ‖P - Q‖ = min over R ∈ Π of ‖P - R‖.

The exact solution of Q is the approximate solution of P within θ, which approximates the solution of S, the problem from which P was perturbed.

35

Case study: Univariate approximate GCD: θ > 0, f, g ∈ C[x], deg(f) = m, deg(g) = n

Stage I: Find the pejorative manifold: the largest k with dist( (f, g), S_k ) ≤ θ, where

S_k = { (u·v, u·w) | deg(u) = k, deg(v) = m - k, deg(w) = n - k, u ≠ 0 }

Stage II: solve the (overdetermined) quadratic system F(u, v, w) = b(f, g),

ℓ(u) = 1 (a fixed normalization),  u·v = f,  u·w = g

for a least squares solution (u, v, w) by the Gauss-Newton iteration.

(key theorem: the Jacobian of F(u, v, w) is injective.)

36

Univariate AGCD algorithm

Start with k = n.

1. Is an AGCD of degree k possible? If no, set k := k - 1 and repeat.  [max-codimension]

2. If probably, refine with the Gauss-Newton iteration.  [min-distance, nearness]

3. Successful? If no, set k := k - 1 and go to 1. If yes, output the GCD.

37
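The degree test in the loop above can be sketched with a Sylvester matrix and the SVD (a simplified stand-in for the algorithm's actual test; the polynomials and tolerance are made up for illustration). The nullity of the Sylvester matrix of (f, g) equals deg GCD(f, g), and counting tiny singular values estimates it for inexact coefficients.

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g, coefficients listed from highest degree."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f        # n shifted copies of f
    for i in range(m):
        S[n + i, i:i + n + 1] = g    # m shifted copies of g
    return S

def agcd_degree(f, g, theta):
    """Estimate deg AGCD(f, g) within theta: the nullity of the Sylvester
    matrix equals deg GCD(f, g), detected here via small singular values."""
    s = np.linalg.svd(sylvester(f, g), compute_uv=False)
    return int(np.sum(s <= theta))

# f = (x-1)(x-2), g = (x-1)(x-3), with a tiny perturbation: the exact GCD
# collapses to a constant, but the AGCD within 1e-6 still has degree 1,
# corresponding to the common factor (x - 1).
f = [1.0, -3.0, 2.0 + 1e-9]
g = [1.0, -4.0, 3.0]
print(agcd_degree(f, g, 1e-6))
```

The refinement stage (Gauss-Newton on u·v = f, u·w = g) would then recover the cofactors; this sketch covers only the degree decision.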

Case study: Multivariate approximate GCD: θ > 0, f, g ∈ C[x_1, …, x_l], deg(f) = m, deg(g) = n

Stage I: Find the max-codimension pejorative manifold by applying the univariate AGCD algorithm in each variable x_j:

f = u·v and g = u·w, with f(…, x_j, …) = u(…, x_j, …) v(…, x_j, …) and g(…, x_j, …) = u(…, x_j, …) w(…, x_j, …)

Stage II: solve the (overdetermined) quadratic system F(u, v, w) = b(f, g),

ℓ(u) = 1 (a fixed normalization),  u·v = f,  u·w = g

for a least squares solution (u, v, w) by the Gauss-Newton iteration.

(key theorem: the Jacobian of F(u, v, w) is injective.)

38

Case study: univariate factorization: θ > 0, f ∈ C[x], deg(f) = n

Stage I: Find the max-codimension pejorative manifold by applying the univariate AGCD algorithm to (f, f′):

f(x) = (x - z_1)^{m_1} ··· (x - z_k)^{m_k}

f′(x) = (x - z_1)^{m_1 - 1} ··· (x - z_k)^{m_k - 1} q(x)

AGCD(f, f′) = (x - z_1)^{m_1 - 1} ··· (x - z_k)^{m_k - 1}

Stage II: solve the (overdetermined) polynomial system F(z_1, …, z_k) = f, i.e.

(x - z_1)^{m_1} ··· (x - z_k)^{m_k} = f(x)   (in the form of coefficient vectors)

for a least squares solution (z_1, …, z_k) by the Gauss-Newton iteration.

(key theorem: the Jacobian is injective.)

39
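Stage I's idea can be sketched in numpy on a concrete f (again using the Sylvester-nullity characterization of the GCD degree; an illustration, not Zeng's algorithm): the degree of GCD(f, f′) exposes the multiplicity structure before any root is computed.

```python
import numpy as np

# f = (x-1)^3 (x-2)^2: here deg GCD(f, f') = (3-1) + (2-1) = 3,
# revealing the multiplicity structure of f.
f = np.poly([1, 1, 1, 2, 2])
df = np.polyder(f)

# deg GCD(f, f') = nullity of the Sylvester matrix of (f, f')
m, n = len(f) - 1, len(df) - 1
S = np.zeros((m + n, m + n))
for i in range(n):
    S[i, i:i + m + 1] = f
for i in range(m):
    S[n + i, i:i + n + 1] = df
s = np.linalg.svd(S, compute_uv=False)
deg_agcd = int(np.sum(s < 1e-8))
print(deg_agcd)   # 3 = deg (x-1)^2 (x-2)
```

Knowing that AGCD(f, f′) has degree 3 pins the multiplicity structure [3, 2], and Stage II then solves for the two roots z_1, z_2 by least squares.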

Case study: Finding the nearest matrix with a Jordan structure

A ~ J, where J is a 4×4 Jordan matrix with a single eigenvalue λ and Segre characteristic [3, 1] (two Jordan blocks, of sizes 3 and 1). The conjugate partition, read off the Ferrers diagram, is the Weyr characteristic [2, 1, 1]. codim = -1 + 3 + 3(1) = 5

Equations determining the manifold:

A [u_1, u_2, u_3, u_4] - [u_1, u_2, u_3, u_4] (λI + S) = 0

[u_1, u_2, u_3, u_4]^T [u_1, u_2, u_3, u_4] - I = 0

b^T u - 1 = 0

where λI + S carries the fixed superdiagonal 1s of the Jordan structure together with the free strictly upper entries s_13, s_14, s_23, s_24, s_34, collected as

F(A, λ, u_1, u_2, u_3, u_4, s_13, s_14, s_23, s_24, s_34) = 0, in short F(A, U, S) = 0

40

Case study: Finding the nearest matrix with a Jordan structure (continued)

Equations determining the manifold, as before: A ~ J exactly when

A [u_1, u_2, u_3, u_4] - [u_1, u_2, u_3, u_4] (λI + S) = 0,  [u_1, u_2, u_3, u_4]^T [u_1, u_2, u_3, u_4] - I = 0,  b^T u - 1 = 0

has a solution. For B not on the manifold, we can still solve F(B, U, S) = 0 for a least squares solution (U*, S*, λ*):

‖F(B, U*, S*, λ*)‖² = min over (U, S, λ) of ‖F(B, U, S, λ)‖²

When ‖BU - U(λI + S)‖² is minimized, so is the distance ‖B - A‖² to the corresponding matrix A on the manifold.

The crucial requirement: the Jacobian of F with respect to (U, S, λ) is injective.

(Zeng & Li, 2006)

41

Solving G(z) = a

The pejorative manifold is u = G(z); the given problem is the point a.

initial iterate: u_0 = G(z_0)

tangent plane P_0: u = G(z_0) + J(z_0)(z - z_0)

project a to the tangent plane: ũ_1 = G(z_0) + J(z_0)(z_1 - z_0)

new iterate: u_1 = G(z_1)

least squares solution: u_* = G(z_*)

Solve G(z) = a for the nonlinear least squares solution z = z_*.

Each step solves the linearization G(z_0) + J(z_0)(z - z_0) = a for the linear least squares solution z = z_1:

G(z_0) + J(z_0)(z - z_0) = a
J(z_0)(z - z_0) = -[G(z_0) - a]
z_1 = z_0 - J(z_0)^+ [G(z_0) - a]

42
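The iteration can be sketched end to end on the simplest pejorative manifold: degree-3 polynomials with a double root. Here G, its Jacobian, and the test polynomial are illustrative choices, not the talk's code.

```python
import numpy as np

def G(z):
    """Coefficients [c2, c1, c0] of (x - z1)^2 (x - z2) = x^3 + c2 x^2 + c1 x + c0:
    a parametrization of the manifold of cubics with a double root."""
    z1, z2 = z
    return np.array([-(2*z1 + z2), z1*z1 + 2*z1*z2, -z1*z1*z2])

def Jac(z):
    """Jacobian of G (injective whenever z1 != z2)."""
    z1, z2 = z
    return np.array([[-2.0, -1.0],
                     [2*z1 + 2*z2, 2*z1],
                     [-2*z1*z2, -z1*z1]])

def gauss_newton(a, z, steps=20):
    """z_{k+1} = z_k - J(z_k)^+ (G(z_k) - a), as on the slide."""
    for _ in range(steps):
        z = z - np.linalg.pinv(Jac(z)) @ (G(z) - a)
    return z

# a: slightly perturbed coefficients of (x-1)^2 (x-3) = x^3 - 5x^2 + 7x - 3
a = np.array([-5.0, 7.0, -3.0]) + np.array([1e-5, -2e-5, 1e-5])
z_star = gauss_newton(a, np.array([1.1, 2.9]))
print(z_star)   # close to (1, 3): the double root and the simple root
```

The iterate converges to the point on the manifold nearest to a, so the perturbed cubic is assigned a double root near 1 and a simple root near 3.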

Stage I: Find the nearby max-codim manifold.

Stage II: Find/solve the nearest problem on the manifold via solving an overdetermined system G(z) = a for a least squares solution z_* s.t. ‖G(z_*) - a‖ = min_z ‖G(z) - a‖ by the Gauss-Newton iteration

z_{k+1} = z_k - J(z_k)^+ [G(z_k) - a],  k = 0, 1, 2, …

Key requirement: the Jacobian J(z_*) of G(z) at z_* is injective (i.e. the pseudo-inverse exists). Then

‖ẑ - z‖ ≤ ‖J(ẑ)^+‖ · ‖G(ẑ) - G(z)‖ + h.o.t.

so ‖J(ẑ)^+‖ serves as the condition number (a sensitivity measure).

43

Summary:

• An (ill-posed) algebraic problem can be formulated using the three-strikes principle (backward nearness, maximum-codimension, and minimum distance) to remove the ill-posedness

• The re-formulated problem can be solved by numerical computation in two stages (finding the manifold, solving least squares)

• The combined numerical approach leads to the Matlab/Maple toolbox ApaTools for approximate polynomial algebra. The toolbox includes:

univariate/multivariate GCD

matrix rank/kernel

dual basis for a polynomial ideal

univariate factorization

irreducible factorization

elimination ideal

… …

(to be continued in the workshop next week)

44