Linear, bilinear and quadratic forms
Lecture 8
Mathematics - 1st year, English
Faculty of Computer Science, UAIC
e-mail: [email protected]
web: https://profs.info.uaic.ro/~andreea.arusoaie/mathematics_en.html
facebook: Adrian Zalinescu (group: FII - Matematica (2017-2018))
November 27, 2017
A. Zalinescu (Iasi) Lecture 8 November 27, 2017
Outline of the lecture
1 Linear forms
2 Bilinear forms
3 Quadratic forms
Linear forms
Definition
Let (V ,+, ·) be a linear space.
• A linear mapping f : V → ℝ is called a linear form or a linear functional.
• The linear space L(V; ℝ) of all linear forms is called the dual of V and is denoted V*.
Proposition
Let (V, +, ·) be a finite-dimensional linear space. Then V* is also finite-dimensional and dim V* = dim V.
Proposition
Let (V, +, ·) be a finite-dimensional linear space. If v ∈ V \ {0_V}, then there exists f ∈ V* such that f(v) ≠ 0.
Consequence. If u, v ∈ V and u ≠ v, then there exists f ∈ V* such that f(u) ≠ f(v).
Bidual and evaluation map
Definition
Let (V ,+, ·) be a linear space.
• The dual of V*, denoted by V**, is called the bidual of V.
• The function ψ : V → V** defined by
ψ(v)(f) := f(v), v ∈ V, f ∈ V*,
is called the evaluation map.
The evaluation map is well-defined and linear:
1. It is clear that ψ(v) : V* → ℝ. If α, β ∈ ℝ and f, g ∈ V*, then
ψ(v)(αf + βg) = (αf + βg)(v) = αf(v) + βg(v) = αψ(v)(f) + βψ(v)(g).
Hence ψ(v) is linear, i.e. ψ(v) ∈ V**. Therefore, ψ is well-defined.
2. If α, β ∈ ℝ and u, v ∈ V, then
ψ(αu + βv)(f) = f(αu + βv) = αf(u) + βf(v) = αψ(u)(f) + βψ(v)(f), ∀f ∈ V*.
This means that ψ(αu + βv) = αψ(u) + βψ(v). In conclusion, ψ is linear.
3. If V is finite-dimensional, then ψ is a linear isomorphism.
Indeed, if v ∈ ker ψ, then
f(v) = 0, ∀f ∈ V*.
Supposing that v ≠ 0_V would contradict the existence of some f ∈ V* such that f(v) ≠ 0. Therefore, v must equal 0_V. This implies that ker ψ = {0_V}, i.e. ψ is injective.
On the other hand, dim V** = dim V* = dim V. By the dimension theorem, rank ψ = dim V = dim V**, so ψ is surjective, too.
In conclusion, ψ is a linear isomorphism. In this case, ψ is also called the canonical isomorphism between V and V**.
Vector hyperplanes
Definition
Let (V, +, ·) be a linear space. A linear subspace W ⊆ V is called a (vector) hyperplane if there exists f ∈ V* \ {0_{V*}} such that ker f = W.
Proposition
If (V, +, ·) is a finite-dimensional linear space with dim V = n ∈ ℕ*, then a linear subspace W ⊆ V is a hyperplane if and only if dim W = n − 1.
Proof.
[Proof: “⇒”] If W = ker f for some f ∈ V* \ {0_{V*}}, then by the dimension theorem,
dim W = dim(ker f) = dim V − dim(Im f) = n − 1,
because f ≠ 0_{V*} and thus Im f = ℝ.
Proof.
[Proof: “⇐”] Conversely, if dim W = n − 1, there exists a basis B = {b_1, ..., b_{n−1}, b_n} of V such that Lin{b_1, ..., b_{n−1}} = W. Taking f : V → ℝ defined by
f(α_1 b_1 + ... + α_n b_n) := α_n
for α_1, ..., α_n ∈ ℝ, we have f ≠ 0_{V*} and
f(b_1) = ... = f(b_{n−1}) = 0,
implying that W ⊆ ker f (i.e., f(v) = 0, ∀v ∈ W). On the other hand, by the direct implication, dim(ker f) = n − 1 and consequently W = ker f.
Let V be a finite-dimensional linear space and B = {b1, . . . , bn} a basis of V .
If W is a hyperplane with W = ker f, where f ∈ V* \ {0_{V*}}, let β_1 := f(b_1), ..., β_n := f(b_n). Then v = x_1 b_1 + ... + x_n b_n ∈ ker f is characterized by the equation
(1) β_1 x_1 + ... + β_n x_n = 0.
Hence
(2) W = {x_1 b_1 + ... + x_n b_n ∈ V | β_1 x_1 + ... + β_n x_n = 0}.
Conversely, given β_1, ..., β_n ∈ ℝ, not all 0, the subset of V defined by the above relation is a hyperplane of V.
One can show that any linear subspace of V (not only hyperplanes) can be characterized by systems of equations of the form (1).
If V = ℝ^n and B is the canonical basis, relation (2) can be written as
W = {(x_1, ..., x_n) ∈ ℝ^n | β_1 x_1 + ... + β_n x_n = 0}.
In the particular cases n = 2 and n = 3, equation (1) becomes the equation of a line, respectively of a plane, passing through the origin.
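To make the hyperplane description concrete, here is a small numerical sketch; the coefficients β_i and the SVD-based null-space computation are illustrative choices, not part of the lecture:

```python
import numpy as np

# A hyperplane in R^3 through the origin: the kernel of the nonzero
# linear form f(x) = beta_1 x_1 + beta_2 x_2 + beta_3 x_3.
beta = np.array([1.0, -2.0, 3.0])   # illustrative coefficients, not all 0

# A basis of W = ker f, obtained as the null space of beta viewed as a
# 1x3 matrix (the last right-singular vectors of its SVD).
_, _, Vt = np.linalg.svd(beta.reshape(1, 3))
W_basis = Vt[1:]                    # two vectors spanning ker f

# Each basis vector satisfies beta . x = 0, and dim W = n - 1 = 2,
# as the proposition states.
assert np.allclose(W_basis @ beta, 0.0)
assert W_basis.shape == (2, 3)
```

The same null-space computation characterizes any subspace given by a system of equations of the form (1).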
Affine functionals
The following notion allows us to characterize all lines (when n = 2) and planes (when n = 3), not only those passing through the origin.
Definition
Let (V, +, ·) be a linear space. A function f : V → ℝ is called an affine functional if there exist a linear functional f_0 ∈ V* and a constant c ∈ ℝ such that f(v) = f_0(v) + c, ∀v ∈ V.
For an affine functional f : V → ℝ one can define its kernel in the same way as for linear functionals, i.e. ker f := {v ∈ V | f(v) = 0}.
Definition
Let (V, +, ·) be a linear space. A subset U ⊆ V is called an affine hyperplane if there exists a non-constant affine functional f : V → ℝ such that ker f = U.
In other words, U is an affine hyperplane if there exist a vector hyperplane W and a vector v_0 ∈ V such that
U = W + v_0 := {v + v_0 | v ∈ W}.
If V is finite-dimensional with a basis B = {b_1, b_2, ..., b_n}, then affine hyperplanes are given by subsets of the form
U = {x_1 b_1 + ... + x_n b_n ∈ V | β_1 x_1 + ... + β_n x_n + c = 0},
where c, β_1, ..., β_n ∈ ℝ, with β_1, ..., β_n not all 0.
In the cases n = 2 and n = 3, the affine hyperplanes are the lines, respectively the planes.
Bilinear forms
Definition
Let (V, +, ·) and (W, +, ·) be two linear spaces. A function g : V × W → ℝ is called a bilinear form (bilinear map/mapping) on V × W if the following conditions are fulfilled:
1 g(αu + βv, w) = αg(u, w) + βg(v, w), ∀α, β ∈ ℝ, ∀u, v ∈ V, ∀w ∈ W;
2 g(v, λw + µz) = λg(v, w) + µg(v, z), ∀λ, µ ∈ ℝ, ∀v ∈ V, ∀w, z ∈ W.
In the case W = V, a bilinear form on V × V is also called a bilinear form (functional, map/mapping) on V.
1. Suppose now that V and W are finite-dimensional, with bases B = {b_1, ..., b_n} of V and B̄ = {b̄_1, ..., b̄_m} of W.
If v ∈ V and w ∈ W have coordinates α_1, ..., α_n ∈ ℝ and β_1, ..., β_m ∈ ℝ with respect to the bases B, respectively B̄, then
g(v, w) = g(∑_{i=1}^n α_i b_i, ∑_{j=1}^m β_j b̄_j) = ∑_{i=1}^n ∑_{j=1}^m α_i β_j g(b_i, b̄_j).
The scalars a_ij := g(b_i, b̄_j), 1 ≤ i ≤ n, 1 ≤ j ≤ m, are called the coefficients of the bilinear form g with respect to the bases B and B̄;
the matrix A^g_{B,B̄} := (a_ij)_{1≤i≤n, 1≤j≤m} ∈ M_{n,m} is called the matrix of the bilinear form g with respect to the bases B, B̄.
2. If B′ = {b′_1, ..., b′_n} is another basis of V and B̄′ = {b̄′_1, ..., b̄′_m} is another basis of W, let us denote by S = (s_ij)_{1≤i,j≤n} ∈ M_n the transition matrix from B to B′ and by S̄ = (s̄_ij)_{1≤i,j≤m} ∈ M_m the transition matrix from B̄ to B̄′.
Then the matrix of g with respect to the bases B′ and B̄′ can be written as
A^g_{B′,B̄′} = S · A^g_{B,B̄} · S̄^T.
It can be proven that rank A^g_{B′,B̄′} = rank A^g_{B,B̄}, so the rank of the matrix of the bilinear form does not depend on the bases of reference. This common value is called the rank of g and is denoted by rank g.
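The basis-independence of the rank is easy to check numerically; the following sketch (with W = V and randomly generated matrices, an illustration of my own) verifies that a congruence A′ = S A Sᵀ preserves the rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix of a bilinear form g on R^3 x R^3 in some basis (illustrative).
A = rng.standard_normal((3, 3))

# Invertible transition matrix S to another basis; with W = V the matrix
# of g transforms by congruence, A' = S A S^T.
S = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
A_prime = S @ A @ S.T

# The rank is unchanged: rank g is well defined.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_prime)
```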
Kernel of a bilinear form
Fixing w ∈ W, the bilinear form g : V × W → ℝ defines a linear functional f_w : V → ℝ by
f_w(v) := g(v, w), v ∈ V.
Allowing now w to vary, the mapping w ↦ f_w defines a linear operator g′ : W → V*.
In a similar way, one can define a linear operator g″ : V → W* by g″(v) := h_v, where the linear functional h_v ∈ W* is introduced by
h_v(w) := g(v, w), w ∈ W.
Definition
Let g : V × W → ℝ be a bilinear form and g′ : W → V*, g″ : V → W* the associated linear operators introduced above. The linear subspace ker g′ ⊆ W is called the right kernel of g, while the linear subspace ker g″ ⊆ V is called the left kernel of g.
If ker g′ = {0_W} and ker g″ = {0_V}, then the bilinear form g is called non-degenerate.
Symmetric bilinear forms
Definition
A bilinear form g : V × V → ℝ is called symmetric if
g(u, v) = g(v, u), ∀u, v ∈ V,
and antisymmetric if
g(u, v) = −g(v, u), ∀u, v ∈ V.
Proposition
Let g : V × V → ℝ be a symmetric or antisymmetric bilinear form. Then its right kernel coincides with its left kernel.
For such a bilinear form, the left kernel (which coincides with the right kernel) is called the kernel of g and is denoted by ker g.
Dimension theorem for bilinear forms
Proposition
Let (V, +, ·) be a finite-dimensional linear space and g : V × V → ℝ a symmetric bilinear form. Then
rank g + dim(ker g) = dim V.
Remark. By the above result, a necessary and sufficient condition for a symmetric bilinear form to be non-degenerate is that rank g = dim V.
Definition
Let g : V × V → ℝ be a symmetric bilinear form.
• Two vectors u, v ∈ V are called orthogonal with respect to g if g(u, v) = 0.
• If U is a non-empty subset of V, we say that U is orthogonal with respect to g (or g-orthogonal) if g(u, v) = 0 for any distinct u, v ∈ U.
• If U is a non-empty subset of V, the set {v ∈ V | g(u, v) = 0, ∀u ∈ U} is a linear subspace of V, called the orthogonal complement of U with respect to g, denoted U^{⊥_g}.
Remark. If W is a finite-dimensional subspace of V with {b_1, ..., b_n} a basis of W, then v ∈ W^{⊥_g} if and only if g(b_k, v) = 0, ∀k ∈ {1, ..., n}.
Sylvester’s law of inertia
Theorem
Let (V, +, ·) be a finite-dimensional linear space and g : V × V → ℝ a symmetric bilinear form. If {b_1, ..., b_n} is a basis of V which is g-orthogonal, then rank g is precisely the number of non-zero elements among g(b_1, b_1), g(b_2, b_2), ..., g(b_n, b_n).
Theorem (Sylvester’s law of inertia)
Let (V, +, ·) be a finite-dimensional linear space and g : V × V → ℝ a symmetric bilinear form. Then there exist p, q, r ∈ ℕ such that for every g-orthogonal basis {b_1, ..., b_n} of V, p, q and r are the numbers of positive, negative, respectively null elements among g(b_1, b_1), g(b_2, b_2), ..., g(b_n, b_n).
The numbers p and q are called the positive, respectively the negative indexof inertia.
The triple (p, q, r) is called the signature of g .
Of course, p + q + r = n (n = dim V); moreover, rank g = p + q.
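Sylvester's law can be illustrated with NumPy: the orthonormal eigenvectors of a symmetric matrix form a g-orthogonal basis, and counting the signs of the resulting diagonal entries gives the signature (the concrete matrix below is an illustrative choice):

```python
import numpy as np

# Symmetric matrix of g in the canonical basis (illustrative example).
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# The columns of V form a g-orthogonal basis: V^T A V is diagonal.
w, V = np.linalg.eigh(A)
assert np.allclose(V.T @ A @ V, np.diag(w))

# Signature (p, q, r): counts of positive, negative, zero diagonal entries.
p = int(np.sum(w > 1e-12))
q = int(np.sum(w < -1e-12))
r = len(w) - p - q
assert (p, q, r) == (1, 1, 1)
assert p + q == np.linalg.matrix_rank(A)   # rank g = p + q
```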
Quadratic forms
Definition
Let (V, +, ·) be a linear space and g : V × V → ℝ a symmetric bilinear form. The function h : V → ℝ defined by
h(v) := g(v, v), v ∈ V,
is called the quadratic form (functional) associated to g.
Remark. Since h(u + v) = g(u + v, u + v) = g(u, u) + g(u, v) + g(v, u) + g(v, v) and g(u, v) = g(v, u), we have
h(u + v) = h(u) + 2g(u, v) + h(v), ∀u, v ∈ V.
From this formula we can retrieve g from h:
g(u, v) = (1/2)[h(u + v) − h(u) − h(v)], ∀u, v ∈ V,
or
g(u, v) = (1/4)[h(u + v) − h(u − v)], ∀u, v ∈ V.
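Both polarization identities are easy to verify numerically; in this sketch g is given by a randomly generated symmetric matrix (an illustrative setup, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric matrix of g, and the associated quadratic form h(v) = g(v, v).
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2
g = lambda u, v: u @ A @ v
h = lambda v: g(v, v)

u, v = rng.standard_normal(3), rng.standard_normal(3)

# Both polarization formulas recover g from h.
assert np.isclose(g(u, v), 0.5 * (h(u + v) - h(u) - h(v)))
assert np.isclose(g(u, v), 0.25 * (h(u + v) - h(u - v)))
```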
Suppose now that V is a finite-dimensional space and B = {b_1, ..., b_n} is a basis of V.
Let A^g_{B,B} = (a_ij)_{1≤i,j≤n} be the matrix of g with respect to B. If x_1, ..., x_n ∈ ℝ are the coordinates of a vector v ∈ V with respect to B, then
h(v) = h(x_1 b_1 + ... + x_n b_n) = ∑_{i=1}^n ∑_{j=1}^n a_ij x_i x_j.
The right-hand side of this relation is a homogeneous polynomial of degree 2, called the quadratic polynomial associated to the quadratic form h and the basis B.
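In coordinates, the double sum is just the matrix product xᵀAx; a minimal check with illustrative values:

```python
import numpy as np

# Coefficient matrix of g in the basis B and the coordinates of v
# (illustrative values).
A = np.array([[1.0, 2.0],
              [2.0, -3.0]])
x = np.array([1.0, 2.0])

# The quadratic polynomial  sum_i sum_j a_ij x_i x_j  equals x^T A x.
double_sum = sum(A[i, j] * x[i] * x[j] for i in range(2) for j in range(2))
assert np.isclose(double_sum, x @ A @ x)   # both evaluate to -3 here
```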
The determinant of the symmetric matrix A^g_{B,B} is called the discriminant of h with respect to the basis B. Its sign does not depend on the basis B.
We say that h is a non-degenerate quadratic form if g is a non-degenerate bilinear form, i.e. the discriminant of h (in any basis) is not zero (rank A^g_{B,B} = rank g = n). Otherwise, we say that h is a degenerate quadratic form.
If (p, q, r) is the signature of g, we also call it the signature of the quadratic form h.
Reduced form of a quadratic form
Definition
Let (V, +, ·) be a finite-dimensional linear space and h : V → ℝ the quadratic form associated to some symmetric bilinear form g : V × V → ℝ.
• If B is a basis of V such that the matrix of g is diagonal, the quadratic polynomial associated to h and B is called a canonical (reduced) form of h.
• A canonical form of h is called normal if the diagonal matrix associated to g has on its diagonal only the elements 1, −1 and 0.
If B = {b_1, ..., b_n} is a basis of V giving a canonical form ω_1 x_1² + ω_2 x_2² + ... + ω_n x_n² of h, then B′ = {c_1 b_1, ..., c_n b_n} gives a normal form of h, where c_i = 1 if ω_i = 0, while c_i = 1/√|ω_i| if ω_i ≠ 0, for 1 ≤ i ≤ n.
Gauss method
Theorem (Gauss method of reducing a quadratic form)
Let (V, +, ·) be an n-dimensional linear space and h : V → ℝ a quadratic form. Then there exist a basis {b_1, ..., b_n} of V and ω_1, ..., ω_n ∈ ℝ such that for any x_1, ..., x_n ∈ ℝ we have
h(x_1 b_1 + ... + x_n b_n) = ω_1 x_1² + ω_2 x_2² + ... + ω_n x_n².
Remarks.
The quadratic polynomial ω_1 x_1² + ω_2 x_2² + ... + ω_n x_n² is then a reduced form of h (the matrix of g with respect to {b_1, ..., b_n} is a diagonal matrix with entries ω_1, ..., ω_n).
If (p, q, r) is the signature of h, then among the coefficients ω_1, ..., ω_n, p are positive, q are negative and r are equal to 0.
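A minimal worked example of the Gauss method (completing the square; the particular quadratic form is my own illustration): for h(x_1 b_1 + x_2 b_2) = x_1² + 4x_1x_2 + x_2²,

```latex
h = x_1^2 + 4x_1 x_2 + x_2^2
  = (x_1 + 2x_2)^2 - 3x_2^2
  = y_1^2 - 3y_2^2, \qquad y_1 := x_1 + 2x_2,\quad y_2 := x_2,
```

so in the new coordinates h has the reduced form with ω_1 = 1, ω_2 = −3, hence signature (1, 1, 0).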
Jacobi method
Theorem (Jacobi method of reducing a quadratic form)
Let (V, +, ·) be an n-dimensional linear space and h : V → ℝ a quadratic form. Let ∆_i, 1 ≤ i ≤ n, be the leading principal minors of the associated matrix (a_ij)_{1≤i,j≤n} with respect to a basis of V, i.e.
∆_i = | a_11 ... a_1i |
      | a_21 ... a_2i |
      | ...       ... |
      | a_i1 ... a_ii | ,  1 ≤ i ≤ n.
If ∆_i ≠ 0, ∀i ∈ {1, ..., n}, then h can be reduced to the canonical form
µ_1 x_1² + µ_2 x_2² + ... + µ_n x_n²,
where µ_j = ∆_{j−1}/∆_j, ∀j ∈ {1, ..., n}, with ∆_0 = 1.
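A quick numerical check of the Jacobi coefficients on an illustrative 2×2 matrix with nonzero leading principal minors:

```python
import numpy as np

# Symmetric matrix of h in some basis, with nonzero leading principal
# minors (illustrative example).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Jacobi coefficients mu_j = Delta_{j-1} / Delta_j, with Delta_0 = 1.
minors = [1.0] + [np.linalg.det(A[:i, :i]) for i in (1, 2)]
mu = [minors[j - 1] / minors[j] for j in (1, 2)]

# The signs of the mu_j reproduce the signature: here both positive,
# in agreement with the (positive) eigenvalues of A.
assert all(m > 0 for m in mu)
assert np.all(np.linalg.eigvalsh(A) > 0)
```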
Definition
Let (V, +, ·) be an n-dimensional linear space and h : V → ℝ a quadratic form with signature (p, q, r).
• If p = n, h is called a positive-definite quadratic form.
• If q = 0, the quadratic form h is called positive semidefinite.
• If q = n, h is called a negative-definite quadratic form.
• If p = 0, the quadratic form h is called negative semidefinite.
• The quadratic form h is called indefinite if p > 0 and q > 0.
Let ∆_i, 1 ≤ i ≤ n, be the leading principal minors of the associated matrix with respect to an arbitrary basis. Then h is positive-definite if and only if
∆_i > 0, ∀i ∈ {1, ..., n},
and h is negative-definite if and only if
(−1)^i ∆_i > 0, ∀i ∈ {1, ..., n}.
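The definiteness criteria can be tested against the eigenvalue characterization; the matrices below are illustrative:

```python
import numpy as np

def leading_minors(A):
    """Leading principal minors Delta_1, ..., Delta_n of a square matrix."""
    n = A.shape[0]
    return [np.linalg.det(A[:i, :i]) for i in range(1, n + 1)]

# Positive-definite example: all Delta_i > 0, all eigenvalues > 0.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
assert all(d > 0 for d in leading_minors(A))
assert np.all(np.linalg.eigvalsh(A) > 0)

# Negative-definite example: (-1)^i Delta_i > 0, i.e. signs alternate -, +.
B = -A
assert all((-1) ** i * d > 0 for i, d in enumerate(leading_minors(B), start=1))
assert np.all(np.linalg.eigvalsh(B) < 0)
```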
Eigenvalues method
Theorem (Eigenvalues method of reducing a quadratic form)
Let (V, ⟨·,·⟩) be a finite-dimensional pre-Hilbert (inner product) space with dim V = n, and h : V → ℝ a quadratic form. Then there exists an orthonormal basis with respect to which h has the canonical form
λ_1 x_1² + λ_2 x_2² + ... + λ_n x_n², x_1, x_2, ..., x_n ∈ ℝ,
where λ_1, λ_2, ..., λ_n ∈ ℝ.
In fact, λ_1, ..., λ_n are the eigenvalues of the matrix associated to h with respect to any orthonormal basis of V.
The method of the proof is similar to the diagonalization algorithm for linearoperators.
Non-homogeneous quadratic functionals
Definition
Let (V, +, ·) be a linear space, h : V → ℝ a quadratic form and f : V → ℝ an affine functional. The sum h + f is called a non-homogeneous quadratic functional on V.
If V is finite-dimensional and B = {b_1, ..., b_n} is a basis of V, then for any x_1, ..., x_n ∈ ℝ,
(3) (h + f)(x_1 b_1 + ... + x_n b_n) = ∑_{i=1}^n ∑_{j=1}^n a_ij x_i x_j + ∑_{i=1}^n b_i x_i + c,
where A = (a_ij)_{1≤i,j≤n} is the matrix associated to h and b_1, ..., b_n, c ∈ ℝ.
The right-hand side of this equality is called the quadratic polynomial associated to h + f (it is a polynomial of degree 2).
If V = ℝ^n and B is its canonical basis, then (3) can be written as
(4) (h + f)(x) = ρ(x) := ⟨Ax, x⟩ + ⟨b, x⟩ + c, ∀x ∈ ℝ^n,
where b = (b_1, b_2, ..., b_n) ∈ ℝ^n and the vectors x ∈ ℝ^n are interpreted as column matrices.
Conversely, for an arbitrary symmetric matrix A ∈ M_n, b ∈ ℝ^n and c ∈ ℝ, the function ρ : ℝ^n → ℝ defined by (4), i.e.
ρ(x) := ⟨Ax, x⟩ + ⟨b, x⟩ + c, ∀x ∈ ℝ^n,
defines a non-homogeneous quadratic functional on ℝ^n.
Moreover, A need not be symmetric, since
⟨Ax, x⟩ = (1/2)⟨Ax, x⟩ + (1/2)⟨x, Ax⟩ = (1/2)⟨Ax, x⟩ + (1/2)⟨A^T x, x⟩ = ⟨(1/2)(A + A^T)x, x⟩,
so the matrix A can be replaced by the symmetric matrix (1/2)(A + A^T).
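The symmetrization step is easy to confirm numerically (random data, a sketch of my own):

```python
import numpy as np

rng = np.random.default_rng(2)

# A non-symmetric matrix A and its symmetric part (A + A^T)/2.
A = rng.standard_normal((3, 3))
A_sym = (A + A.T) / 2

# <Ax, x> depends on A only through its symmetric part.
x = rng.standard_normal(3)
assert np.isclose(x @ A @ x, x @ A_sym @ x)
```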
Normal form of non-homogeneous quadratic functionals
Let us now consider an affine change of coordinates, i.e. a transformation of the form
x′ = Sx + x_0,
where S ∈ M_n is a non-singular matrix and x_0 ∈ ℝ^n. Then
ρ(x) = ⟨A S^{−1}(x′ − x_0), S^{−1}(x′ − x_0)⟩ + ⟨b, S^{−1}(x′ − x_0)⟩ + c
     = ⟨(S^{−1})^T A S^{−1} x′, x′⟩ − ⟨2(S^{−1})^T A S^{−1} x_0 + (S^{−1})^T b, x′⟩ + (c − ⟨b, S^{−1} x_0⟩).
Suppose now that S is the transition matrix from the canonical basis to an orthonormal basis giving the canonical form in the eigenvalues method of reduction. Therefore, S is an orthogonal matrix (S^{−1} = S^T) and S A S^T = D := diag(λ_1, ..., λ_n), where λ_1, ..., λ_n are the eigenvalues of A. Consequently, we have:
ρ(x) = ⟨D x′, x′⟩ − 2⟨S(A S^T x_0 + (1/2) b), x′⟩ + (c − ⟨b, S^{−1} x_0⟩).
If A is non-singular, we can take x_0 := −(1/2) S A^{−1} b, obtaining
ρ(x) = ⟨D x′, x′⟩ + c_0,
where c_0 := ⟨D x_0, x_0⟩ − ⟨S b, x_0⟩ + c. Therefore, by the change of coordinates x′ = Sx − (1/2) S A^{−1} b, we obtain
ρ(x) = ∑_{i=1}^n λ_i (x′_i)² + c_0, ∀x ∈ ℝ^n,
where x′_i are the coordinates of x with respect to the new orthogonal basis.
If det A = 0, then by letting x_0 := 0, we obtain
ρ(x) = ⟨D x′, x′⟩ + ⟨S b, x′⟩ + c_0,
where c_0 := −⟨S b, x_0⟩ + c = c.
If (p, q, r) is the signature of h, we have r > 0 and n − r is the rank of A; one can further find an adequate basis B″ such that
ρ(x) = ∑_{i=1}^{n−r} λ_i (x″_i)² + γ x″_{n−r+1},
where x″_1, ..., x″_n are the coordinates of x with respect to this new basis and γ ∈ ℝ.
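The translation step that removes the linear term can be checked directly, without the orthogonal change of basis: assuming A symmetric and invertible, substituting x = y + x_0 with x_0 = −(1/2)A^{−1}b leaves ⟨Ay, y⟩ + c_0 with c_0 = c − (1/4)⟨A^{−1}b, b⟩. The concrete A, b, c below are illustrative:

```python
import numpy as np

# rho(x) = <Ax, x> + <b, x> + c with A symmetric and invertible
# (illustrative data).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])
c = 4.0
rho = lambda x: x @ A @ x + b @ x + c

# Translating by x0 = -(1/2) A^{-1} b removes the linear term:
# rho(y + x0) = <Ay, y> + c0, with c0 = c - (1/4) <A^{-1} b, b>.
x0 = -0.5 * np.linalg.solve(A, b)
c0 = c - 0.25 * b @ np.linalg.solve(A, b)

y = np.array([0.3, -0.7])
assert np.isclose(rho(y + x0), y @ A @ y + c0)
```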
Geometric classification
From the geometric point of view,
ker ρ := {x ∈ ℝ^n | ρ(x) = 0}
is a conic in the case n = 2, a quadric if n = 3, and a hyperquadric if n ≥ 4.
1. Case n = 1: the normal forms of ρ are:
x² + 1 (ker ρ = ∅: two “imaginary” points);
x² − 1 (ker ρ = {−1, 1}: two distinct points);
x² (ker ρ = {0}: two identical points).
2. Case n = 2: we have nine types of conics, according to the normal form of ρ:
x_1² + x_2² + 1 = 0 (∅: “imaginary” ellipse);
x_1² − x_2² + 1 = 0 (hyperbola);
x_1² + x_2² − 1 = 0 (ellipse);
x_1² − 2x_2 = 0 (parabola);
x_1² + x_2² = 0 (a point: two “imaginary”, conjugate lines);
x_1² − x_2² = 0 (two intersecting lines);
x_1² + 1 = 0 (∅: two “imaginary” lines);
x_1² − 1 = 0 (two parallel lines);
x_1² = 0 (two identical lines).
3. Case n = 3: we have 17 types of quadrics, characterized by the following normal forms:
x_1² + x_2² + x_3² + 1 = 0 (“imaginary” ellipsoid);
x_1² + x_2² + x_3² − 1 = 0 (ellipsoid);
x_1² + x_2² − x_3² − 1 = 0 (hyperboloid of one sheet);
x_1² − x_2² − x_3² − 1 = 0 (hyperboloid of two sheets);
x_1² + x_2² + x_3² = 0 (a point: “imaginary” cone);
x_1² + x_2² − x_3² = 0 (cone);
x_1² + x_2² − 2x_3 = 0 (elliptic paraboloid);
x_1² − x_2² − 2x_3 = 0 (hyperbolic paraboloid).
The remaining 9 normal forms are the same as those in the case n = 2, which in ℝ³ represent cylinders of different types: elliptic, hyperbolic or parabolic.
The first 6 quadrics are non-singular quadrics, while the others are singular quadrics.