Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering
Stochastic Processes and Linear Algebra Recap Slides
Digital Signal Processing Advanced - Stochastic Processes and Linear Algebra 2
Stochastic processes and variables
(Figure: ensemble of realizations x₁(t), x₂(t), …, xₙ(t) of a random process, all sampled at time t₀.)

X(t): random process;  X(t₀) = X: random variable
xᵢ(t): realization of the random process;  xᵢ(t₀) = x: realization of the random variable

Classification by state X and time t:
• X continuous, t continuous: continuous-state continuous-time process
• X continuous, t discrete: continuous-state discrete-time process
• X discrete, t continuous: discrete-state continuous-time process
• X discrete, t discrete: discrete-state discrete-time process
A discrete-time process is a sequence.
Continuous-state discrete-time process
Process X(k) ⇒ current realization of X(k): realization x(k)
A stochastic process is said to be strict-sense stationary (SSS) if its statistics are invariant to any translation of the time axis.
A stochastic process is said to be wide-sense stationary (WSS) if its mean is constant and its autocorrelation depends only on a time difference τ.
Here we simply refer to WSS processes as stationary.
If expected values (averages over multiple realizations) can be calculated by time averaging of a single realization, the process is said to be ergodic.
Ergodic processes are always strict-sense stationary, but not every strict-sense stationary process is ergodic.
We presume X(k) to be ergodic ⇒ moments are calculated by averaging in time.
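Since ergodicity lets us replace ensemble averages by time averages, the moments can be estimated from one long realization; a minimal sketch with illustrative values (mean 2, standard deviation 1.5):

```python
import numpy as np

# One long realization of an assumed ergodic process: white Gaussian noise, mean 2, std 1.5.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5
x = mu + sigma * rng.standard_normal(100_000)

time_mean = np.mean(x)                          # time average  ->  E{X} = mu
time_var = np.mean(np.abs(x - time_mean) ** 2)  # time average  ->  Var{X} = sigma^2
```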
Continuous-state discrete-time process
Probability density function:
  p_X(x) = lim_{Δx→0} (1/Δx) · Pr{x < X ≤ x + Δx}

Joint probability density function:
  p_{X,Y}(x, y) = lim_{Δx→0, Δy→0} (1/(Δx·Δy)) · Pr{x < X ≤ x + Δx, y < Y ≤ y + Δy}

Normal distribution:
  p_X(x) = (1/(√(2π)·σ_X)) · e^(−(x − μ_X)²/(2σ_X²))

Moments → E{·}
  1st order: E{X} = ∫_{−∞}^{+∞} x · p_X(x) dx = μ_X
  2nd order: E{X²} = ∫_{−∞}^{+∞} x² · p_X(x) dx
  Variance: E{|X − μ_X|²} = E{|X|²} − |E{X}|² = σ_X² = ∫_{−∞}^{+∞} |x − μ_X|² · p_X(x) dx
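The moment integrals can be checked numerically for a Gaussian pdf; illustrative values μ_X = 1 and σ_X = 2 are assumed:

```python
import numpy as np

# Gaussian pdf with assumed mu_X = 1, sigma_X = 2; moments via numerical integration.
mu_x, sigma_x = 1.0, 2.0
x = np.linspace(mu_x - 10 * sigma_x, mu_x + 10 * sigma_x, 400_001)
dx = x[1] - x[0]
p = np.exp(-(x - mu_x) ** 2 / (2 * sigma_x ** 2)) / (np.sqrt(2 * np.pi) * sigma_x)

m1 = np.sum(x * p) * dx         # 1st-order moment  -> mu_X
m2 = np.sum(x ** 2 * p) * dx    # 2nd-order moment
var = m2 - m1 ** 2              # sigma_X^2
```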
Correlation series of discrete-time processes
Autocorrelation series (not necessarily stationary) of a complex-valued process X(k):
  r_XX(κ₁, κ₂) = E{X*(κ₁) · X(κ₂)} = E{(X_R(κ₁) − j·X_I(κ₁)) · (X_R(κ₂) + j·X_I(κ₂))}
Stationary processes (κ₁ → k, κ₂ → k + κ):
  r_XX(κ) = E{X*(k) · X(k + κ)}
Autocovariance series:
  c_XX(κ) = E{(X(k) − μ_X)* · (X(k + κ) − μ_X)} = r_XX(κ) − |μ_X|²
Zero-mean process: c_XX(κ) = r_XX(κ)
Cross-correlation series of two processes X(k), Y(k):
  r_XY(κ₁, κ₂) = E{X*(κ₁) · Y(κ₂)};  stationary → r_XY(κ) = E{X*(k) · Y(k + κ)}
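For an ergodic process, r_XX(κ) can be estimated by time averaging one realization; a sketch with complex white noise of variance 2 as an assumed example:

```python
import numpy as np

# One long realization of an assumed complex white process (variance 2).
rng = np.random.default_rng(1)
N = 200_000
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def acs(x, kappa):
    """Time-average estimate of r_XX(kappa) = E{X*(k) X(k+kappa)}."""
    if kappa < 0:
        return np.conj(acs(x, -kappa))      # symmetry r_XX(-kappa) = r_XX*(kappa)
    return np.mean(np.conj(x[:len(x) - kappa]) * x[kappa:])

r0 = acs(x, 0)   # ≈ E{|X|^2} = 2
r3 = acs(x, 3)   # ≈ 0 for white noise
```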
Correlation series of discrete-time processes
Properties of the ACS
• r_XX(−κ) = r_XX*(κ);  real-valued processes: r_XX(−κ) = r_XX(κ) (even ACF)
• max_κ |r_XX(κ)| = r_XX(0)
• r_XX(0) = E{X*(k) · X(k)} = E{|X(k)|²};  zero mean: r_XX(0) = σ_X²

Properties of the CCS
• r_XY(−κ) = r_YX*(κ);  real-valued processes: r_XY(−κ) = r_YX(κ)
• c_XY(κ) = r_XY(κ) − μ_X* · μ_Y  (cross-covariance sequence)
Random Variables (RVs): Covariance/Uncorrelatedness/Orthogonality
• The covariance C of two RVs X and Y is C = E{(X − μ_X) · (Y − μ_Y)} = E{XY} − E{X} · E{Y}
• Uncorrelatedness: two RVs are called uncorrelated if their covariance equals zero: C = 0 → E{XY} = E{X} · E{Y}
• Orthogonality: two RVs are called orthogonal if E{XY} = 0
Processes: Uncorrelatedness, Orthogonality, White noise
• Two WSS processes X(k) and Y(k) are called uncorrelated if
  c_XY(κ) = 0 ∀κ → r_XY(κ) = μ_X* · μ_Y;  zero-mean processes: r_XY(κ) = 0 ∀κ
• Two WSS processes X(k) and Y(k) are called (mutually) orthogonal if
  r_XY(κ) = 0 ∀κ
• White noise: a stationary process with E{X(k)} = 0 and
  r_XX(κ) = σ_X² · δ(κ)
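A quick numerical illustration (assumed variance σ_X² = 4): the estimated ACF of white noise is close to σ_X² · δ(κ):

```python
import numpy as np

# White noise with assumed variance sigma_X^2 = 4: estimated ACF ≈ sigma_X^2 * delta(kappa).
rng = np.random.default_rng(2)
sigma2 = 4.0
x = np.sqrt(sigma2) * rng.standard_normal(500_000)

r = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(4)])  # lags 0..3
```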
Power Spectral Density
Definition (Wiener–Khintchine theorem):
  S_XX(e^{jΩ}) = DTFT{r_XX(κ)} = Σ_{κ=−∞}^{+∞} r_XX(κ) · e^{−jΩκ}
Because the ACF is conjugate even, the power spectral density is always real-valued.
Total power of the process (zero mean):
  Var{X(k)} = σ_X² = (1/(2π)) ∫_{−π}^{+π} S_XX(e^{jΩ}) dΩ = r_XX(0)
White noise: constant PSD (total power limited because of the band-limited system):
  S_XX(e^{jΩ}) = σ_X² for −π < Ω < π
  r_XX(κ) = IDTFT{σ_X²} = σ_X² · δ(κ)
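The transform pair can be checked numerically; as an illustration (not from the slide) take the ACF r_XX(κ) = a^{|κ|} of an AR(1)-type process and verify that the PSD integrates back to r_XX(0):

```python
import numpy as np

# Illustrative ACF (not from the slide): r_XX(kappa) = a^|kappa|.
a = 0.6
kappa = np.arange(-200, 201)
r = a ** np.abs(kappa)

# S_XX(e^jOmega) = sum_kappa r_XX(kappa) * e^{-j Omega kappa}  (truncated DTFT)
Omega = np.linspace(-np.pi, np.pi, 4001)
S = np.real(np.exp(-1j * np.outer(Omega, kappa)) @ r.astype(complex))

# Total power: (1/2pi) * integral of S_XX over (-pi, pi) must equal r_XX(0) = 1
dO = Omega[1] - Omega[0]
power = np.sum((S[:-1] + S[1:]) / 2) * dO / (2 * np.pi)
```

At Ω = 0 the closed form gives S_XX = (1 + a)/(1 − a) = 4, which the truncated sum reproduces.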
ACF for bandlimited noise
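As an illustrative sketch (the slide's figure is not reproduced here), an ideal lowpass PSD S_XX(e^{jΩ}) = σ² for |Ω| ≤ Ω_g yields the sinc-shaped ACF r_XX(κ) = σ² · sin(Ω_g κ)/(πκ), which can be checked by numerical inverse DTFT:

```python
import numpy as np

# Assumed ideal lowpass PSD: S_XX = sigma2 for |Omega| <= Og, 0 otherwise.
sigma2, Og = 1.0, np.pi / 4
Omega = np.linspace(-np.pi, np.pi, 100_001)
S = np.where(np.abs(Omega) <= Og, sigma2, 0.0)
dO = Omega[1] - Omega[0]

def idtft(kap):
    """Inverse DTFT (1/2pi) * integral S * e^{j Omega kappa} dOmega, numerically."""
    return float(np.sum(S * np.exp(1j * Omega * kap)).real * dO / (2 * np.pi))

r0 = idtft(0)   # sigma2 * Og / pi = 0.25
r4 = idtft(4)   # sin(Og * 4) = sin(pi) = 0 -> first zero of the sinc-shaped ACF
```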
Influence of a linear system
System impulse response h(k); random process X(k) at the input of the system, random process Y(k) at the output.
System (energy) autocorrelation sequence:
  r_hh^E(κ) = Σ_{k=−∞}^{+∞} h*(k) · h(k + κ) = h(κ) * h*(−κ)
ACS of the output:
  r_YY(κ) = r_XX(κ) * r_hh^E(κ) = r_XX(κ) * h(κ) * h*(−κ)
CCS of input and output:
  r_XY(κ) = r_XX(κ) * h(κ)
Power spectral density of the output:
  S_YY(e^{jΩ}) = S_XX(e^{jΩ}) · |H(e^{jΩ})|²  ("phase blind")
Cross power spectral density of input and output:
  S_XY(e^{jΩ}) = S_XX(e^{jΩ}) · H(e^{jΩ})
White noise at the input of the system:
  r_YY(κ) = σ_X² · δ(κ) * r_hh^E(κ) = σ_X² · r_hh^E(κ)  ⟺  S_YY(e^{jΩ}) = σ_X² · |H(e^{jΩ})|²
  r_XY(κ) = σ_X² · δ(κ) * h(κ) = σ_X² · h(κ)  ⟺  S_XY(e^{jΩ}) = σ_X² · H(e^{jΩ})
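The white-noise relations can be verified by simulation; the FIR filter h = [1, 0.5, −0.25] is an assumed example:

```python
import numpy as np

# White noise through an assumed FIR filter h: r_YY(kappa) = sigma2 * r_hh^E(kappa).
rng = np.random.default_rng(3)
sigma2 = 1.0
h = np.array([1.0, 0.5, -0.25])
x = rng.standard_normal(1_000_000)
y = np.convolve(x, h, mode="full")[:len(x)]

r_yy0 = np.mean(y * y)                      # estimate of r_YY(0)
r_yy1 = np.mean(y[:-1] * y[1:])             # estimate of r_YY(1)
theory0 = sigma2 * np.sum(h * h)            # sigma2 * r_hh^E(0) = 1.3125
theory1 = sigma2 * np.sum(h[:-1] * h[1:])   # sigma2 * r_hh^E(1) = 0.375
```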
Complex Gaussian noise
PDF of a single real-valued Gaussian random variable:
  p_{n′}(n′) = (1/(σ_{N′}·√(2π))) · e^(−n′²/(2σ_{N′}²))
The PDF of a complex-valued random variable n = n′ + j·n″ is given by the joint PDF of the two real-valued random variables (real and imaginary part):
  p_n(n′ + j·n″) ≔ p_{n′,n″}(n′, n″)
If real and imaginary part are statistically independent, then
  p_n(n′ + j·n″) = p_{n′}(n′) · p_{n″}(n″)
PDF of a single complex Gaussian random variable:
  p_n(n) = (1/(π·σ_N²)) · e^(−(n′² + n″²)/σ_N²) = (1/(π·σ_N²)) · e^(−|n|²/σ_N²)
(Figure: bell-shaped p_{n′}(n′) over n′, and the rotationally symmetric p_n(n′ + j·n″) over the complex plane.)
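A common way to generate such noise (assumed convention: total variance σ_N² split equally between real and imaginary part):

```python
import numpy as np

# Complex Gaussian noise of total variance sigma2_N, split equally over real and imaginary part.
rng = np.random.default_rng(4)
sigma2_N = 2.0
N = 500_000
n = np.sqrt(sigma2_N / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

var_total = np.mean(np.abs(n) ** 2)   # ≈ sigma2_N
var_real = np.var(n.real)             # ≈ sigma2_N / 2
```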
New nomenclature
In the following we use lowercase letters for both the random variable and a particular realization. Random variable: X → x
Scalar random variable: x
Vector-valued random variable: x (boldface, column vector)
Matrix-valued random variable: X (boldface)
Autocorrelation matrix

Vector-valued random variable:
  x = [x(0), x(1), …, x(N − 1)]^T ∈ ℂ^(N×1) (column vector)
Expectation:
  E{x} = [E{x(0)}, E{x(1)}, …, E{x(N − 1)}]^T
  E{‖x‖²} = E{x^H·x} = Σ_{i=0}^{N−1} E{|x(i)|²} = E{|x(0)|² + |x(1)|² + … + |x(N − 1)|²}
Autocorrelation matrix:
  R_xx = E{x·x^H}
Note: R_xx is Hermitian, R_xx^H = R_xx.
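R_xx can be estimated by averaging outer products over many realizations; a sketch with iid complex entries of variance 2 as an assumed example:

```python
import numpy as np

# Estimate R_xx = E{x x^H} by averaging outer products over many realizations.
rng = np.random.default_rng(5)
N, trials = 4, 100_000
X = rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))  # iid entries, variance 2

R = (X[:, :, None] * np.conj(X[:, None, :])).mean(axis=0)  # sample mean of x x^H
# iid entries -> R ≈ 2*I; R is Hermitian up to floating-point error
```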
Digital signals and linear time invariant system:
Assume:
• h(k): causal FIR filter of order m ⇒ impulse response of length m + 1
• infinitely long input sequence x(k), −∞ < k < ∞
  y(k) = x(k) * h(k) = Σ_{υ=0}^{m} h(υ) · x(k − υ)
Define:
  h = [h(0), h(1), …, h(m)]^T ∈ ℂ^((m+1)×1);  x(k) = [x(k), x(k − 1), …, x(k − m)]^T ∈ ℂ^((m+1)×1)
Convolution as inner product
(Block diagram: input x(k) → h(k) → output y(k); the vector x(k) stacks the current and the m past values of x(k).)
Output signal y(k) of the filter as inner product:
  y(k) = h^T · x(k) = x(k)^T · h
Assume X(k) is a stationary discrete-time process ⟹ x(k) is a vector of random variables ⟹ y(k) is a scalar random variable.
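A minimal check that the inner product reproduces the convolution sample (illustrative h and x):

```python
import numpy as np

# y(k) = h^T x(k), with x(k) = [x(k), x(k-1), ..., x(k-m)]^T (illustrative h, x).
h = np.array([1.0, -2.0, 0.5])      # m = 2
x = np.arange(1.0, 9.0)             # x(0), ..., x(7)

k = 5
xk = x[k::-1][:len(h)]              # [x(5), x(4), x(3)]
y_k = h @ xk                        # inner product
y_ref = np.convolve(x, h)[k]        # same sample from the full convolution
```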
Convolution as inner product
Power of the output signal:
  E{|y(k)|²} = E{y(k) · y(k)*}
             = E{h^T·x(k)·x^H(k)·h*}
             = h^T · E{x(k)·x^H(k)} · h*
             = h^T · R_xx · h*
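For white input, R_xx = σ_X²·I and the formula reduces to E{|y|²} = σ_X²·‖h‖²; a simulation sketch with an assumed example filter:

```python
import numpy as np

# White input: R_xx = sigma2 * I, so E{|y|^2} = h^T R_xx h* = sigma2 * ||h||^2 (assumed filter).
rng = np.random.default_rng(6)
h = np.array([0.8, -0.4, 0.2])
sigma2 = 1.0
R = sigma2 * np.eye(len(h))

p_theory = np.real(h @ R @ np.conj(h))        # = 0.84
x = rng.standard_normal(1_000_000)
y = np.convolve(x, h, mode="full")[:len(x)]
p_est = np.mean(y ** 2)                       # time-average estimate of E{|y|^2}
```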
Convolution as matrix multiplication
Causal input: x = [x(0), x(1), …, x(L − 1)]^T
Finite impulse response: h = [h(0), h(1), …, h(m)]^T
Full equation system: y = H·x with the (L + m) × L convolution matrix H.
Convolution as matrix multiplication
Example of convolution as matrix multiplication with m = 2, L = 4 (y = H·x):

  [y(0)]   [h(0)   0     0     0  ]
  [y(1)]   [h(1)  h(0)   0     0  ]   [x(0)]
  [y(2)] = [h(2)  h(1)  h(0)   0  ] · [x(1)]
  [y(3)]   [ 0    h(2)  h(1)  h(0)]   [x(2)]
  [y(4)]   [ 0     0    h(2)  h(1)]   [x(3)]
  [y(5)]   [ 0     0     0    h(2)]

Matrix H has Toeplitz structure.
Transient phase (first m rows): the filter fills up with input samples.
Steady state (middle rows): the complete impulse response appears in each row.
Decay phase (last m rows): the filter empties.
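The Toeplitz matrix can be built and compared against np.convolve; a sketch with the same m = 2, L = 4 setup and illustrative values:

```python
import numpy as np

# Convolution matrix H (size (L+m) x L) for an assumed h with m = 2 and input of length L = 4.
h = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, -1.0, 0.5, 2.0])
m, L = len(h) - 1, len(x)

H = np.zeros((L + m, L))
for c in range(L):
    H[c:c + m + 1, c] = h          # each column holds a shifted copy of the impulse response

y = H @ x                          # matrix-vector form of the convolution
y_ref = np.convolve(x, h)          # reference
```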
Convolution as matrix multiplication
In general the convolutional matrix H ∈ ℂ^((L+m)×L) has Toeplitz structure:
• first m rows: transient phase
• middle L − m rows: steady state (complete impulse response in each row)
• last m rows: decay phase
H has L columns and L + m rows.
Correlation as convolution
Define the correlation of two signals (at least one of them deterministic) of length L as
  r_xy(κ) = (1/L) · Σ_k x*(k) · y(k + κ)
Since time-reversing one signal turns the correlation sum into a convolution sum, this can be written as
  r_xy(κ) = (1/L) · ( x*(−κ) * y(κ) )
Correlation as scalar product
Define the correlation of two signals (at least one of them deterministic) as a scalar product,
  r_xy(κ) = (1/L) · x^H · y(κ),
where y(κ) = [y(κ), y(κ + 1), …, y(κ + L − 1)]^T stacks the shifted signal.
Note: here the signal vector is defined causally, in contrast to the anti-causal (time-reversed) definition used for the formulation of the convolution.
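A sketch of the scalar-product estimate (illustrative signals), cross-checked against numpy's correlate:

```python
import numpy as np

# Correlation r_xy(kappa) = (1/L) * sum_k x*(k) y(k+kappa) as a scalar product.
L = 6
x = np.array([1.0, 2.0, 0.0, -1.0, 1.0, 3.0])
y = np.array([0.5, 1.0, -2.0, 0.0, 2.0, 1.0, 4.0, -1.0])   # provides samples up to y(L-1+kappa)

kappa = 2
r = np.conj(x) @ y[kappa:kappa + L] / L                     # scalar product with the shifted vector
r_np = np.correlate(y, x, mode="valid")[kappa] / L          # same estimate via numpy
```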
Singular Value Decomposition (SVD)
Every m × n matrix A of rank r can be written as
  A = U·Σ·V^H
• singular values σᵢ of A = square roots of the nonzero eigenvalues of A^H·A or A·A^H
• unitary m × m matrix U contains the left singular vectors of A = eigenvectors of A·A^H
• unitary n × n matrix V contains the right singular vectors of A = eigenvectors of A^H·A
Verification with the eigenvalue decomposition:
  A^H·A = V·(Σ^T·Σ)·V^H,  A·A^H = U·(Σ·Σ^T)·U^H
The matrix of singular values Σ ∈ ℝ^(m×n) carries σ₁ ≥ σ₂ ≥ … ≥ σ_r > 0 on the diagonal of its upper-left r × r block and zeros elsewhere.
Four fundamental subspaces:
• u₁, …, u_r span the column space of A
• u_{r+1}, …, u_m span the left nullspace of A
• v₁, …, v_r span the row space of A
• v_{r+1}, …, v_n span the nullspace of A
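The decomposition and the subspace properties can be inspected with numpy; a rank-2 example matrix is assumed:

```python
import numpy as np

# Illustrative rank-deficient matrix: row 2 = 2 * row 1  ->  rank 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))              # numerical rank
A_rec = (U[:, :r] * s[:r]) @ Vh[:r]     # rank-r reconstruction U_r Sigma_r V_r^H
v_null = Vh[r:].T                       # columns span the nullspace of A
```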
Singular Value Decomposition (SVD) (2)
• Illustration of the fundamental subspaces: consider the linear mapping x ↦ A·x with the orthogonal decomposition x = x_r + x_n, where x_r lies in the row space and x_n in the nullspace of A. Then A·x_n = 0 and A·x = A·x_r: only the row-space component of x contributes to the image.
Moore-Penrose Pseudoinverse
The inverse A^(−1) exists only for square matrices with full rank. Consider an arbitrary m × n matrix A.
Definition of the (Moore–Penrose) pseudoinverse via the SVD A = U·Σ·V^H:
  A⁺ = V·Σ⁺·U^H, where Σ⁺ is obtained by inverting the nonzero singular values and transposing the shape.
Special cases for full-rank matrices:
  full column rank (r = n): A⁺ = (A^H·A)^(−1)·A^H, and A⁺·A = I_n
  full row rank (r = m): A⁺ = A^H·(A·A^H)^(−1), and A·A⁺ = I_m
It can be verified that A⁺ = A^(−1) if and only if A is square and has full rank.
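A quick check of the full-column-rank special case with numpy (illustrative 3 × 2 matrix):

```python
import numpy as np

# Full column rank (3 x 2): pseudoinverse equals (A^H A)^{-1} A^H and is a left inverse.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])

A_pinv = np.linalg.pinv(A)
A_left = np.linalg.inv(A.T @ A) @ A.T
```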
QR decomposition
Every m × n matrix A with m ≥ n can be written as A = Q·R, where
• Q is an m × n matrix with orthonormal columns
• R is an upper triangular n × n matrix
The columns of A are represented in the orthonormal basis defined by Q.
Illustration for the m × 2 case:
  a₁ = r₁₁·q₁
  a₂ = r₁₂·q₁ + r₂₂·q₂
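The thin QR factorization and the m × 2 illustration can be reproduced with numpy (illustrative matrix):

```python
import numpy as np

# Thin QR of an illustrative 3 x 2 matrix: A = Q R.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

Q, R = np.linalg.qr(A)                        # Q: 3 x 2 orthonormal columns, R: 2 x 2 upper triangular
a2 = R[0, 1] * Q[:, 0] + R[1, 1] * Q[:, 1]    # a2 = r12*q1 + r22*q2
```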
Matrix inversion lemma
Matrix inversion lemma (A ∈ ℝ^(m×m), B ∈ ℝ^(m×n), C ∈ ℝ^(n×n), D ∈ ℝ^(n×m)):
  (A + B·C·D)^(−1) = A^(−1) − A^(−1)·B·(C^(−1) + D·A^(−1)·B)^(−1)·D·A^(−1)
Inverse of the block matrix E = [A B; C D] with A ∈ ℝ^(m×m), B ∈ ℝ^(m×n), C ∈ ℝ^(n×m), D ∈ ℝ^(n×n):
  E^(−1) = [ S_D^(−1)              −S_D^(−1)·B·D^(−1) ;
             −S_A^(−1)·C·A^(−1)     S_A^(−1)          ]
with
  S_A = D − C·A^(−1)·B  (Schur complement of A w.r.t. E)
  S_D = A − B·D^(−1)·C  (Schur complement of D w.r.t. E)
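The lemma can be verified numerically for random, well-conditioned matrices (sizes m = 5, n = 2 assumed):

```python
import numpy as np

# Woodbury check with well-conditioned random matrices (m = 5, n = 2 assumed).
rng = np.random.default_rng(7)
m, n = 5, 2
A = 3.0 * np.eye(m) + 0.3 * rng.standard_normal((m, m))
B = 0.5 * rng.standard_normal((m, n))
C = 2.0 * np.eye(n)
D = 0.5 * rng.standard_normal((n, m))

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
```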
Wirtinger calculus
Derivative w.r.t. a vector
• the derivative w.r.t. a column vector is a row vector
• the derivative w.r.t. a row vector is a column vector
since in the first-order expansion f(x + dx) ≈ f(x) + (∂f/∂x)·dx the product (∂f/∂x)·dx must be a scalar, which for a column vector dx requires ∂f/∂x to be a row vector.
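As a numerical sanity check of the convention (illustrative vectors, not from the slide), the derivative of f(x) = aᵀ·x w.r.t. the column vector x is the row vector aᵀ:

```python
import numpy as np

# f(x) = a^T x: central finite differences recover the row-vector derivative a^T.
a = np.array([2.0, -1.0, 3.0])
x0 = np.array([0.5, 1.0, -2.0])
f = lambda x: a @ x

eps = 1e-6
grad_row = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
```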