TRANSCRIPT
Random Fields: Efficient Analysis and Simulation
Christian Bucher & Sebastian Wolff
Vienna University of Technology
& DYNARDO Austria GmbH, Vienna
Overview
Introduction
Elementary properties
Conditional random fields
Computational aspects
Example
Concluding remarks
© Christian Bucher 2010-2014
Random field
Real valued function H(x) defined in an n-dimensional space
H ∈ R;  x = [x_1, x_2, ..., x_n]^T ∈ D ⊂ R^n
Ensemble of all possible realisations
Describe statistics in terms of mean and variance
Need to consider the correlation structure between values of
H at different locations x and y
[Figure: sample realisations H(x, ω); the correlation between field values at locations x and y decays with distance, with characteristic length L]
Second order statistics of random field
Mean value function
H̄(x) = E[H(x)]
Autocovariance function
C_HH(x, y) = E[{H(x) − H̄(x)}{H(y) − H̄(y)}]
A random field H(x) is called weakly homogeneous if
H̄(x) = const. ∀ x ∈ D;   C_HH(x, x+ξ) = C_HH(ξ) ∀ x ∈ D
A homogeneous random field H(x) is called isotropic if
CHH(ξ) = CHH(||ξ||) ∀ξ
Correlation distance (characteristic length Lc)
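The homogeneous, isotropic covariance structure above can be made concrete with a small sketch. A minimal Python example, assuming an isotropic exponential covariance model (one common choice; the function name and parameter values are illustrative, not from the slides):

```python
import numpy as np

def cov_iso(x, y, sigma2=1.0, Lc=1.0):
    # Isotropic covariance: depends only on the distance ||x - y||
    d = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return sigma2 * np.exp(-d / Lc)

# Homogeneity: the covariance is unchanged when both points are translated
c1 = cov_iso([0.0, 0.0], [1.0, 0.0], Lc=0.5)
c2 = cov_iso([3.0, 2.0], [4.0, 2.0], Lc=0.5)  # same separation, shifted
```

The correlation length Lc controls how quickly the correlation between two locations dies out with their distance.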
Example: Random field in a square plate
Simulated random samples of isotropic field
Conditional Random Fields 1
Assume that the values of the random field H(x) are known at the locations x_k, k = 1 ... m
Stochastic interpolation for the conditional random field:
Ĥ(x) = a(x) + Σ_{k=1}^m b_k(x) H(x_k)
a(x) and b_k(x) are deterministic interpolating functions.
Make the mean value of the difference between the random
field and the conditional field zero
E[H(x) − Ĥ(x)] = 0
Minimize the variance of the difference
E[(H(x) − Ĥ(x))^2] → Min.
Conditional Random Fields 2
Mean value of the conditional random field:
E[Ĥ(x)] = [C_HH(x, x_1) ... C_HH(x, x_m)] C_HH^{-1} [H(x_1), ..., H(x_m)]^T
CHH denotes the covariance matrix of the random field H(x)
at the locations of the measurements.
Covariance matrix of the conditional random field
Ĉ(x, y) = C_HH(x, y) − [C_HH(x, x_1) ... C_HH(x, x_m)] C_HH^{-1} [C_HH(y, x_1), ..., C_HH(y, x_m)]^T
Zero at the measurement points.
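A compact numerical check of these formulas, assuming a 1D field with an exponential covariance model and made-up measurement values (all names, locations, and values here are illustrative):

```python
import numpy as np

def cov(a, b, Lc=1.0):
    # assumed exponential covariance model, for illustration only
    return np.exp(-np.abs(a - b) / Lc)

xm = np.array([0.0, 1.0, 2.0])       # measurement locations x_1..x_m
Hm = np.array([0.5, -0.2, 0.3])      # measured values H(x_k)

Cmm = cov(xm[:, None], xm[None, :])  # covariance matrix C_HH at measurements
x = np.linspace(0.0, 2.0, 21)        # evaluation points
Cxm = cov(x[:, None], xm[None, :])   # rows [C_HH(x, x_1) ... C_HH(x, x_m)]

# Conditional mean and conditional covariance as on the slide
mean_cond = Cxm @ np.linalg.solve(Cmm, Hm)
C_cond = cov(x[:, None], x[None, :]) - Cxm @ np.linalg.solve(Cmm, Cxm.T)
```

At the measurement points the conditional mean reproduces the measured values and the conditional variance vanishes, as stated on the slide.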
Spectral decomposition
Perform a Fourier-type series expansion using deterministic
basis functions ϕk and random coefficients ck
H(x) = Σ_{k=1}^∞ c_k ϕ_k(x),   c_k ∈ R, ϕ_k ∈ R, x ∈ D
Optimal choice of the basis functions is given by an
eigenvalue (”spectral”) decomposition of the
auto-covariance function (Karhunen-Loeve expansion)
C_HH(x, y) = Σ_{k=1}^∞ λ_k ϕ_k(x) ϕ_k(y),   ∫_D C_HH(x, y) ϕ_k(x) dx = λ_k ϕ_k(y)
The basis functions ϕ_k are orthogonal and the coefficients c_k are uncorrelated.
Discrete version
Discrete random field
H_i = H(x_i),   i = 1 ... N
Spectral decomposition is given by
H_i − E[H_i] = Σ_{k=1}^N ϕ_k(x_i) c_k = Σ_{k=1}^N ϕ_{ik} c_k
In matrix-vector notation
H = Φc + H̄
Computation of basis vectors by solving for the eigenvalues
λk of the covariance matrix CHH
C_HH ϕ_k = λ_k ϕ_k;   λ_k ≥ 0;   k = 1 ... N
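The discrete decomposition can be sketched as follows (the 1D grid and exponential covariance model are illustrative assumptions):

```python
import numpy as np

N = 50
xs = np.linspace(0.0, 1.0, N)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 0.3)  # covariance matrix C_HH

# Eigenvalue analysis C_HH phi_k = lambda_k phi_k (eigh returns ascending order)
lam, Phi = np.linalg.eigh(C)
lam, Phi = lam[::-1], Phi[:, ::-1]            # sort descending

# One zero-mean sample H - Hbar = Phi c, with uncorrelated c_k ~ N(0, lambda_k)
rng = np.random.default_rng(0)
c = np.sqrt(np.clip(lam, 0.0, None)) * rng.standard_normal(N)
H = Phi @ c
```

The coefficients c_k are uncorrelated by construction, since the eigenvectors of the symmetric covariance matrix are orthogonal.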
Example: Random field in a square plate
Basis vectors
Modeling random fields
Most important: the correlation structure; its most significant parameter is the correlation length L_c.
An estimate of the correlation length can be obtained by applying statistical methods to observed data
Type of probability distribution of the material/geometrical
parameters. Statistical methods can be applied to infer
distribution information from observed measurements.
Helpful to identify the exact type of correlation (or
covariance) function, and to check for homogeneity. This will
be feasible only if a fairly large set of experimental data is
available.
Computational aspects
Need to set up the covariance matrix from covariance
function of field
Covariance matrix is full, so storage requirements are O(M^2)
Karhunen-Loeve expansion is realised using numerical
methods from linear algebra (eigenvalue analysis)
Numerical complexity of O(M^3)
Simulation for small correlation length
Assemble sparse covariance matrix (e.g. based on piecewise
polynomial covariance functions)
C_{l,p}(d) = (1 − d/l)_+^p ,   p > 1
Perform a decomposition of the covariance matrix C = LL^T, e.g. by a sparse Cholesky factorization.
Simulate N field vectors uk of statistically independent
standard-normal random variables, one number for each
node.
Apply the correlation in standard normal space for each sample k: z_k = L u_k.
Transform the correlated field samples into the space of the desired random field: x_{k,i} = F^{-1}(N(z_{k,i})).
Does not reduce the number of variables
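The steps above can be sketched as follows. For brevity a dense Cholesky factor stands in for the sparse factorization that would be used in practice; the 1D grid, parameter values, and lognormal target distribution are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, lognorm

def cov_compact(d, l=0.2, p=2):
    # compactly supported covariance C_{l,p}(d) = (1 - d/l)_+^p
    return np.maximum(1.0 - d / l, 0.0) ** p

xs = np.linspace(0.0, 1.0, 100)
C = cov_compact(np.abs(xs[:, None] - xs[None, :]))  # mostly zero -> sparse in practice

# C = L L^T (dense sketch; a sparse Cholesky would exploit the zeros)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(xs)))

rng = np.random.default_rng(1)
u = rng.standard_normal(len(xs))   # independent standard-normal, one per node
z = L @ u                          # correlated sample in standard-normal space
x_field = lognorm.ppf(norm.cdf(z), s=0.5)  # x_i = F^{-1}(N(z_i))
```

Every node keeps its own random variable here, which is why this route does not reduce the number of variables.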
Simulation for large correlation length 1
Typical covariance function:
C_l(d) = exp(−d^2 / (2 l^2))
A spectral decomposition is used to factorize the covariance matrix as C_HH = Φ diag(λ_i) Φ^T with eigenvalues λ_i and orthogonal eigenvectors Φ = [ϕ_i].
This decomposition is used to reduce the number of random
variables. Given a moderately large correlation length, only a
few (e.g. 3-5) eigenvectors are required to represent more
than 90% of the total variability.
Perform a decomposition of the covariance matrix C_HH = Φ diag(λ_i) Φ^T and choose the m basis vectors ϕ_i associated with the largest eigenvalues.
Simulation for large correlation length 2
Simulate N vectors uk of statistically independent
standard-normal random variables, each vector is of
dimension m.
Apply the (decomposed) covariance in standard normal space
for each sample k
z_k = Σ_{i=1}^m √λ_i ϕ_i u_{k,i}
Transform the correlated field samples into the space of the desired random field: x_{k,j} = F^{-1}(N(z_{k,j})).
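These steps, together with the eigenvalue decomposition from the previous slide, can be sketched as (the 1D grid, correlation length, and sample count are illustrative):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 200)
l = 0.5
C = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2 * l ** 2))  # C_l(d)

lam, Phi = np.linalg.eigh(C)
lam, Phi = lam[::-1], Phi[:, ::-1]          # descending eigenvalues

m = 5                                       # retain the m largest eigenpairs
explained = lam[:m].sum() / lam.sum()       # fraction of total variability

rng = np.random.default_rng(2)
U = rng.standard_normal((3, m))             # 3 samples u_k, each of dimension m
Z = (U * np.sqrt(lam[:m])) @ Phi[:, :m].T   # z_k = sum_i sqrt(lam_i) phi_i u_{k,i}
```

For this smooth Gaussian covariance with a large correlation length, the first few modes already capture most of the total variability, in line with the 3-5 eigenvector claim on the previous slide.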
Simulation for large correlation length 3
A global error measure ϵ may be based on the total variability
being explained by the selected eigenvalues, i.e.
ϵ = 1 − (Σ_{i=1}^c λ_i) / (Σ_{i=1}^n λ_i) = 1 − (1/n) Σ_{i=1}^c λ_i
wherein n is the number of discrete points and c is the number of retained eigenvalues; the second equality assumes a unit-variance field, for which Σ_{i=1}^n λ_i = trace(C_HH) = n.
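A tiny numerical illustration of the two equivalent forms of this error measure (the eigenvalues are made up, chosen to sum to n as for a unit-variance field):

```python
import numpy as np

n = 6                                            # number of discrete points
lam = np.array([3.0, 1.5, 0.8, 0.4, 0.2, 0.1])   # eigenvalues; sum equals n
c = 3                                            # number of retained eigenvalues

eps = 1.0 - lam[:c].sum() / lam.sum()   # unexplained fraction of variability
eps_alt = 1.0 - lam[:c].sum() / n       # same value, using sum(lam) = trace = n
```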
This procedure allows the generation of random field samples
with relatively large correlation length parameters
It is based on a model order reduction, i.e. only a portion of
the desired variability can be retained.
Covariance matrix is stored as a dense matrix. Hence, the size of the FEM mesh is effectively limited to ≈ 30,000 nodes (covariance matrix has 9×10^8 entries, i.e. > 7 GB).
Efficient simulation strategy
Randomly select M support points from the finite element
mesh.
Assemble the covariance matrix for the selected sub-space.
Perform a decomposition of the covariance matrix C = Φ diag(λ_i) Φ^T and choose m basis vectors ϕ_i.
Create basis vectors ψi by interpolating the values of ϕi on
the FEM mesh.
Simulate N vectors uk of statistically independent
standard-normal random variables, each vector is of
dimension m.
Apply the (decomposed) covariance in standard normal space
for each sample k
z_k = Σ_{i=1}^m ψ_i u_{k,i}
Transform the correlated field samples into the space of the desired random field: x_{k,j} = F^{-1}(N(z_{k,j})).
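The whole strategy can be sketched end-to-end (1D mesh, Gaussian covariance, and all sizes here are illustrative; the interpolation step uses the EOLE form discussed on the following slides, with a small nugget added for numerical stability):

```python
import numpy as np

def cov(a, b, l=0.3):
    # assumed Gaussian covariance model on a 1D mesh (illustration)
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * l ** 2))

rng = np.random.default_rng(3)
mesh = np.linspace(0.0, 1.0, 1000)   # stand-in for the FEM mesh
M = 50
sub = np.sort(rng.choice(mesh.size, M, replace=False))  # random support points
xs = mesh[sub]

Cyy = cov(xs, xs) + 1e-8 * np.eye(M)  # sub-space covariance (+ tiny nugget)
lam, Phi = np.linalg.eigh(Cyy)
lam, Phi = lam[::-1], Phi[:, ::-1]
m = 5                                 # basis vectors kept

# Interpolate the sub-space basis vectors onto the full mesh (EOLE form)
Czy = cov(mesh, xs)                   # rectangular: full mesh x sub-space
Psi = (Czy @ np.linalg.solve(Cyy, Phi[:, :m])) * np.sqrt(lam[:m])

u = rng.standard_normal(m)            # one sample u_k of dimension m
z = Psi @ u                           # correlated sample on the full mesh
```

Only the M×M sub-space covariance matrix is ever assembled and decomposed, which is what breaks the O(M^2) storage and O(M^3) eigenvalue-analysis bottleneck of the full mesh.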
Expansion Optimal Linear Estimator 1
Expansion Optimal Linear Estimation (EOLE) is an
extension of Kriging
Kriging interpolates a random field from samples measured at a sub-set of the mesh points.
Assume that the sub-space is described by the field values
y_k = {z_{k,1}, ..., z_{k,M}} = {Σ_{i=1}^m √λ_i ϕ_{i,1} u_{k,i}, ..., Σ_{i=1}^m √λ_i ϕ_{i,M} u_{k,i}}
Expansion Optimal Linear Estimator 2
Minimization of the variance between the target random field and its approximation, under the constraint of equal mean values of both, results in:
ψ_i = √λ_i C_zy^T C_yy^{-1} ϕ_i = C_zy^T ϕ_i / √λ_i
(the second form follows from C_yy^{-1} ϕ_i = ϕ_i / λ_i)
with Cyy denoting the correlation matrix between the
sub-space points and Czy denoting the (rectangular)
covariance matrix between the sub-space points and the
nodes in full space.
Example
Sheet metal forming application
Modelled by 4-node shell elements using 8786 finite element
nodes
Homogeneous field, exponential correlation function
Maximum dimension is 540 mm, correlation length
parameter is chosen to be 100 mm.
Truncated Gaussian distribution with mean value 5, standard deviation 15, lower bound −20 and upper bound 30
Sub-space dimension is chosen to be small (between 50 and 1000 points)
Example
[Figure: simulated field samples for sub-space sizes M = 50, M = 200, and M = 500]
Example - Basis vectors (M = 50)
Example - Basis vectors (full)
Errors
MAC values of various shapes (reference of comparison: full
model) for different numbers of support points n.
n      MAC ψ_1   MAC ψ_2   MAC ψ_5   MAC ψ_10
50     0.999     0.999     0.949     0.393
100    0.999     0.999     0.999     0.986
200    0.999     0.999     0.999     0.999
400    0.999     0.999     0.999     0.999
800    1         1         0.999     0.999
8786   1         1         1         1
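The MAC values in the table compare shape vectors of the reduced and full models. A minimal sketch of the Modal Assurance Criterion (the vector values are made up for illustration):

```python
import numpy as np

def mac(phi, psi):
    # Modal Assurance Criterion: 1 for identical shapes (up to scaling)
    return (phi @ psi) ** 2 / ((phi @ phi) * (psi @ psi))

a = np.array([1.0, 2.0, 3.0])
b = 2.0 * a                        # same shape, different scaling
c = np.array([3.0, -1.0, 0.5])    # a different shape
```

Because MAC is invariant to scaling, it isolates how well the reduced basis vectors reproduce the shape of the full-model basis vectors.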
Concluding Remarks
Karhunen-Loève expansion is very useful for reducing the number of variables
Solution of eigenvalue problem may run into computational
problems (storage, time)
Suitable reduction methods reduce storage and time
requirements drastically
→ Software: Statistics on Structures (SoS) by DYNARDO