

Variational Generative Flows for Reconstruction Uncertainty Estimation

Jiaxin Zhang¹, Jan Drgona², Sayak Mukherjee², Mahantesh Halappanavar², Frank Liu¹

Abstract

The goal of inverse learning is to determine hidden information from a set of observed but partial measurements. To fully characterize the uncertainty naturally induced by the partial view, a robust inverse solver that is able to estimate the complete posterior of the unrecoverable targets conditioned on a specific observation is therefore important, with a potential to probabilistically interpret the observational data for decision making. In this work, we propose an efficient variational approach that leverages a generative model to learn an approximate posterior distribution for the purpose of quantifying uncertainty in hidden targets. This is achieved by parameterizing the target posterior using a flow-based model and minimizing the KL divergence between the generative distribution and the posterior distribution. Without requiring large training data, the target posterior samples can be efficiently drawn from the learned flow-based model through an invertible transformation from tractable Gaussian random samples. We demonstrate our proposed approach on a real-world FastMRI image reconstruction problem and find it achieves high-quality performance with a smaller variation and error compared to the state-of-the-art baseline methods.

1. Introduction

In computer vision and image processing, computational image reconstruction is a typical inverse problem where the goal is to learn and recover a hidden image $x$ from directly measured data $y$ via a forward operator $F$. Such a mapping $y = F(x)$, referred to as the forward process, is often well established. Unfortunately, the inverse process $x = F^{-1}(y)$ proceeds in the opposite direction, which is a nontrivial task since it is often ill-posed. A regularized

¹Oak Ridge National Laboratory, USA; ²Pacific Northwest National Laboratory, USA. Correspondence to: Jiaxin Zhang <[email protected]>.

Presented at the ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning. Copyright 2021 by the author(s).

optimization is therefore formulated to recover the hidden image $x^*$:

$$x^* = \arg\min_{x} \{ \mathcal{L}(y, F(x)) + \lambda\,\omega(x) \} \qquad (1)$$

where $\mathcal{L}$ is a loss function that measures the difference between the observed measurement data and the forward prediction, $\omega$ is a regularization function, and $\lambda$ is a regularization coefficient. Regularization functions, including the $\ell_1$-norm and total variation (TV), are typically used to constrain the image to a unique inverse solution in underdetermined imaging systems (Bouman & Sauer, 1993; Strong & Chan, 2003).
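As a concrete, purely hypothetical instance of Eq. (1), the sketch below recovers a piecewise-constant 1-D signal from randomly subsampled noisy measurements by gradient descent on an $\ell_2$ data-fidelity term plus a smoothed total-variation penalty. The forward operator, signal, and step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical 1-D example: recover x from partial measurements
# y = F(x) + noise by minimizing ||y - F(x)||^2 + lambda * TV(x),
# with a smoothed TV penalty and plain gradient descent.

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                       # piecewise-constant ground truth

mask = rng.random(n) < 0.5                # underdetermined: observe ~half the pixels
def F(x):                                 # forward operator: subsampling
    return x[mask]

y = F(x_true) + 0.01 * rng.standard_normal(mask.sum())

lam, eps, step = 0.05, 1e-3, 0.1
x = np.zeros(n)
for _ in range(2000):
    # gradient of the data-fidelity term ||y - F(x)||^2
    r = np.zeros(n)
    r[mask] = F(x) - y
    grad_fid = 2.0 * r
    # gradient of smoothed TV: sum_i sqrt((x_{i+1}-x_i)^2 + eps)
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= w
    grad_tv[1:] += w
    x -= step * (grad_fid + lam * grad_tv)

print(np.abs(x - x_true).mean())          # mean absolute reconstruction error
```

Note that the unobserved pixels are driven purely by the TV term, which interpolates them from their observed neighbors; this is exactly the role of $\omega$ in underdetermined systems.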

Recent trends have focused on using deep learning for computational image reconstruction, which does not rely on an explicit transformer model or iterative updates but performs learned inversion from representative large datasets (Zhu et al., 2018; Belthangady & Royer, 2019; Tonolini et al., 2020; Wang et al., 2020), with applications in medical science, biology, astronomy and more. However, most of these existing studies in regularized optimization (Natterer & Wübbeling, 2001; Park et al., 2003) and feed-forward deep learning approaches (Ulyanov et al., 2018; Belthangady & Royer, 2019; Wang et al., 2020) mainly focus on pursuing a unique inverse solution by recovering a single point estimate. This leads to a significant limitation when working with underdetermined systems, where it is conceivable that multiple inverse image solutions would be equally consistent with the measured data (Barbano et al., 2020; Sun & Bouman, 2020).

Practically, in many cases, only partial and limited measurements are available, which naturally leads to a reconstruction uncertainty. Thus, a reconstruction using a point estimate without uncertainty quantification could mislead the decision-making process (Beliy et al., 2019; Zhang et al., 2019; Zhou et al., 2020). Therefore, the ability to characterize and quantify reconstruction uncertainty is of paramount relevance. In principle, Bayesian methods are an attractive route to addressing inverse problems with uncertainty estimation. In practice, however, exact Bayesian treatment of complex real-world problems is usually intractable. The common recourse is approximate inference and sampling, typically by Markov chain Monte Carlo (MCMC), which is often prohibitively expensive for imaging problems due to the curse of dimensionality.

Objective This work aims to achieve a reliable image reconstruction with an accurate estimation of data uncertainty


resulting from measurement noise and sparsity. A suitable flow-based variational approach is proposed to approximate a posterior distribution of an unobserved (target) image.

Contributions We propose an uncertainty-aware framework that leverages a deep variational approach with robust generative flows to address these challenges. Our goal is to perform accurate characterization and quantification of reconstruction uncertainty (data uncertainty), which is due to sparse and noisy measurements. We therefore minimize the model uncertainty caused by invertible architectures by introducing a robust flow-based model. We demonstrate our method on recently introduced FastMRI reconstruction problems and show that it achieves a reliable and high-quality reconstruction with accurate uncertainty estimation.

2. Background

Generative flow-based models Generative models such as GANs and VAEs are intractable for explicitly learning the probability density function, which plays a fundamental role in uncertainty estimation. Flow-based generative models overcome this difficulty with the help of normalizing flows (NFs), which describe the transformation from a latent density $z_0 \sim \pi_0(z_0)$ to a target density $\tau(x)$, where $x = z_K \sim \pi_K(z_K)$, through a sequence of invertible mappings $T_k : \mathbb{R}^d \to \mathbb{R}^d$, $k = 1, \dots, K$. By the change-of-variables rule,

$$\tau(x) = \pi_k(z_k) = \pi_{k-1}(z_{k-1}) \left| \det \frac{\partial T_k^{-1}}{\partial z_k} \right| = \pi_{k-1}(z_{k-1}) \left| \det \frac{\partial T_k}{\partial z_{k-1}} \right|^{-1}, \qquad (2)$$

the target density $\pi_K(z_K)$, obtained by successively transforming a random variable $z_0$ through a chain of $K$ transformations $z_K = T_K \circ \cdots \circ T_1(z_0)$, is

$$\log \tau(x) = \log \pi_K(z_K) = \log \pi_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial T_k}{\partial z_{k-1}} \right|,$$

where each transformation $T_k$ must be sufficiently expressive while remaining invertible with an efficiently computable Jacobian determinant. Affine coupling functions (Dinh et al., 2016; Kingma & Dhariwal, 2018) are often used because they are simple and efficient to compute. However, these benefits come at the cost of expressivity and flexibility; many flows must be stacked to learn a complex representation, as shown in Figure 1.
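The change-of-variables rule in Eq. (2) can be checked numerically for the simplest possible flow, a single 1-D affine map; this is only a sanity-check sketch, not the coupling architecture used in the paper.

```python
import numpy as np

# Numeric check of the change-of-variables rule for one affine
# transformation T(z) = a*z + b in 1-D: the flow density
# pi_1(z1) = pi_0(z0) * |det dT/dz0|^{-1} must match the analytic
# density of the pushed-forward Gaussian N(b, a^2).

def log_std_normal(z):
    return -0.5 * (z**2 + np.log(2 * np.pi))

a, b = 2.0, 0.5                      # T(z) = a z + b, so z1 ~ N(b, a^2)
z0 = np.linspace(-3, 3, 101)
z1 = a * z0 + b

# flow log-density: base log-density minus log|det Jacobian|
log_flow = log_std_normal(z0) - np.log(np.abs(a))

# analytic log-density of N(b, a^2) evaluated at z1
log_exact = -0.5 * ((z1 - b)**2 / a**2 + np.log(2 * np.pi * a**2))

print(np.max(np.abs(log_flow - log_exact)))   # zero up to float error
```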

Density Estimation Assuming that samples $\{x_i\}_{i=1}^{M}$ drawn from a probability density $p(x)$ are available, our goal is to learn a flow-based model $\tau_\phi(x)$, parameterized by the vector $\phi$, through a transformation $x = T(z)$ of a latent density $\pi_0(z)$ with $T = T_K \circ \cdots \circ T_1$ a $K$-step flow. This is achieved by minimizing the KL divergence $D_{\mathrm{KL}} = \mathrm{KL}(p(x) \,\|\, \tau_\phi(x))$, which is equivalent to maximum likelihood estimation.
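A minimal sketch of this density-estimation view, under the simplifying assumption of a single 1-D affine flow (so that $\tau_\phi$ is Gaussian): minimizing the negative log-likelihood by gradient descent recovers the sample mean and standard deviation, illustrating that KL minimization against $p$ is maximum likelihood.

```python
import numpy as np

# Toy illustration that minimizing KL(p || tau_phi) is MLE: for an
# affine flow x = T(z) = mu + sigma*z with Gaussian base, tau_phi is
# N(mu, sigma^2), and the MLE from samples of p is the sample mean/std.
# All names and step sizes are illustrative.

rng = np.random.default_rng(0)
data = rng.normal(3.0, 2.0, size=5000)        # samples from p(x)

# negative log-likelihood per the flow formula:
# -log tau_phi(x) = -log pi_0(T^{-1}(x)) + log|sigma|
mu, log_sigma = 0.0, 0.0
for _ in range(500):
    z = (data - mu) * np.exp(-log_sigma)      # T^{-1}(x)
    g_mu = -np.sum(z) * np.exp(-log_sigma)    # d NLL / d mu
    g_ls = np.sum(1.0 - z**2)                 # d NLL / d log_sigma
    mu -= 1e-4 * g_mu
    log_sigma -= 1e-4 * g_ls

print(mu, np.exp(log_sigma))   # approaches the sample mean and std
```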

Variational Inference The goal is to approximate the posterior distribution $p$ through a variational distribution $\pi_K$ encoded by a flow-based model $\tau_\phi(x)$, which is tractable to evaluate and to sample from. This is achieved by minimizing the KL divergence $D_{\mathrm{KL}} = \mathrm{KL}(\pi_K \,\|\, p)$, which is equivalent to maximizing an evidence lower bound (ELBO).

Evaluation metrics for generative models Designing indicative evaluation metrics for generative models and samples remains a challenge. A widely used metric for measuring the similarity between real and generated images has been the Fréchet Inception Distance (FID) score (Heusel et al., 2017), but it fails to separate two critical aspects of the quality of generative models: fidelity, which refers to the degree to which the generated samples resemble the real ones, and diversity, which measures whether the generated samples cover the full variability of the real samples. We introduce reliable metrics (density and coverage) to evaluate the quality of the generated samples and measure the difference from the ground truth. They are defined as

$$\text{density} := \frac{1}{kM} \sum_{j=1}^{M} \sum_{i=1}^{N} 1_{X_j^G \in B(X_i,\, \mathrm{NND}_k(X_i))}, \qquad \text{coverage} := \frac{1}{N} \sum_{i=1}^{N} 1_{\exists\, j\ \text{s.t.}\ X_j^G \in B(X_i,\, \mathrm{NND}_k(X_i))} \qquad (3)$$

where $N$ and $M$ are the numbers of true and generated samples, $B(x, r)$ is the sphere around $x$ with radius $r$, and $\mathrm{NND}_k(X_i)$ denotes the distance from $X_i$ to its $k$th nearest neighbour (Naeem et al., 2020).
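A plain-numpy sketch of Eq. (3), following the definitions of Naeem et al. (2020); the sample sizes, dimensionality, and $k$ below are arbitrary illustrative choices.

```python
import numpy as np

# Density and coverage of Eq. (3): real samples X (N x d), generated
# samples XG (M x d), and k-nearest-neighbour radii around each real
# sample. Brute-force pairwise distances, fine for small N, M.

def knn_radius(X, k):
    # distance from each X_i to its k-th nearest neighbour among the
    # real samples (column 0 of the sorted row is the self-distance 0)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]

def density_coverage(X, XG, k=3):
    r = knn_radius(X, k)                                         # (N,)
    d = np.linalg.norm(XG[:, None, :] - X[None, :, :], axis=-1)  # (M, N)
    inside = d <= r[None, :]          # XG_j in B(X_i, NND_k(X_i))
    density = inside.sum() / (k * len(XG))
    coverage = inside.any(axis=0).mean()
    return density, coverage

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
XG = rng.standard_normal((200, 2))    # same distribution: both metrics near 1
print(density_coverage(X, XG))
```

Unlike FID, density rewards generated samples that land in high-density regions of the real data (fidelity), while coverage checks that every real mode is reached by at least one generated sample (diversity).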

3. Method

Our goal is to build a deep variational framework to accurately estimate the data uncertainty quantified by an approximation of the posterior distribution. The regularized optimization in Eq. (1) can be further written in terms of data fidelity (data-fitting loss) and regularity:

$$x^* = \arg\min_{x} \{ \mathcal{L}_D(y, F(x)) + \lambda\,\omega(x) \} = \arg\min_{x} \{ \underbrace{\| y - F(x) \|^2}_{\text{Data fidelity}} + \underbrace{\lambda\,\omega(x)}_{\text{Regularity}} \} \qquad (4)$$

Assuming the forward operator $F$ is known and the measurement noise statistics are given, we can reformulate the inverse problem in a probabilistic way. From a Bayesian perspective, the regularized inverse problem in Eq. (4) can be interpreted as a Bayesian inference problem that aims to maximize the posterior distribution by searching for a point


estimator $x^*$:

$$x^* = \arg\max_{x} \{ \underbrace{\log p(x \mid y)}_{\text{Posterior}} \} = \arg\max_{x} \{ \underbrace{\log p(y \mid x)}_{\text{Data likelihood}} + \underbrace{\log p(x)}_{\text{Prior}} \} \qquad (5)$$

where the prior distribution $p(x)$ (e.g., an image prior (Ulyanov et al., 2018) in reconstruction problems) defines a similar regularization term and the data likelihood $p(y \mid x)$ corresponds to the data fidelity in Eq. (4).

If we parameterize the target $x$ using a generative model $x = T_\phi(z)$, $z \sim \mathcal{N}(0, I)$, with model parameters $\phi$, an approximate posterior distribution $\tau_{\phi^*}(x)$ is obtained by minimizing the KL divergence between the generative distribution and the target posterior distribution:

$$\phi^* = \arg\min_{\phi} \mathrm{KL}(\tau_\phi(x) \,\|\, p(x \mid y)) = \arg\min_{\phi} \mathbb{E}_{x \sim \tau_\phi(x)}\left[ -\log p(y \mid x) - \log p(x) + \log \tau_\phi(x) \right]$$

Unfortunately, the probability density (data likelihood) $\tau_\phi(x)$ cannot be exactly evaluated by most existing generative models, such as GANs (Goodfellow et al., 2014) or VAEs (Kingma & Welling, 2013). Flow-based models (Rezende & Mohamed, 2015; Dinh et al., 2016; Kingma & Dhariwal, 2018; Grathwohl et al., 2018; Wu et al., 2020; Nielsen et al., 2020) offer a promising approach to compute the likelihood exactly via the change-of-variables theorem with invertible architectures. Therefore, the objective above can be reformulated in terms of a flow-based model as

$$\phi^* = \arg\min_{\phi} \mathbb{E}_{z \sim \pi(z)}\left[ -\log p(y \mid T_\phi(z)) - \log p(T_\phi(z)) + \log \pi(z) - \log \left| \det \nabla_z T_\phi(z) \right| \right]$$

We replace the data likelihood and prior terms with the data fidelity loss and regularization function in Eq. (4), which defines a new optimization problem that can be solved in practice by approximating the expectation with a Monte Carlo method:

$$\phi^* = \arg\min_{\phi} \mathbb{E}_{z \sim \pi(z)}\left[ \mathcal{L}_D(y, F(T_\phi(z))) + \lambda\,\omega(T_\phi(z)) + \log \pi(z) - \log \left| \det \nabla_z T_\phi(z) \right| \right] = \arg\min_{\phi} \sum_{j=1}^{M} \Big[ \mathcal{L}_D(y, F(T_\phi(z_j))) + \lambda\,\omega(T_\phi(z_j)) - \underbrace{\log \left| \det \nabla_z T_\phi(z_j) \right|}_{\text{Entropy}} \Big]$$

where the $\log \pi(z)$ term has an expectation independent of $\phi$ and is therefore dropped, and $\log \left| \det \nabla_z T_\phi(z_j) \right|$ is an entropy term that is important to encourage sample diversity and exploration, so as to prevent the generative model from collapsing to a deterministic solution (Higgins et al., 2016).
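The Monte Carlo objective above can be sketched for a toy linear forward operator and a simple elementwise-affine "flow" $T_\phi(z) = \mu + s \odot z$, whose log-determinant is $\sum \log s$. Everything here (the operator, regularizer, and parameter values) is an illustrative assumption rather than the paper's implementation, which uses deep spline flows.

```python
import numpy as np

# Monte Carlo estimate of the variational objective for a toy linear
# problem y = A x, with an affine flow T_phi(z) = mu + exp(log_s) * z.
# The constant log pi(z) term is dropped, as in the text.

rng = np.random.default_rng(0)
d, m = 4, 2                                    # underdetermined: m < d
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
y = A @ x_true

lam = 0.1
def objective(mu, log_s, n_samples=512):
    z = rng.standard_normal((n_samples, d))
    x = mu + np.exp(log_s) * z                 # T_phi(z)
    fidelity = ((x @ A.T - y)**2).sum(axis=1)  # L_D(y, F(T_phi(z)))
    reg = lam * (x**2).sum(axis=1)             # omega(x) = ||x||^2 prior
    entropy = log_s.sum()                      # log|det grad_z T_phi(z)|
    return (fidelity + reg).mean() - entropy

# a flow centred on a data-consistent solution scores better than one
# shifted along the row space of A, where the fidelity term blows up
mu_ls = np.linalg.lstsq(A, y, rcond=None)[0]
good = objective(mu_ls, np.full(d, -2.0))
bad = objective(mu_ls + 5.0 * A[0], np.full(d, -2.0))
print(good < bad)
```

In practice $\phi$ would be optimized by stochastic gradient descent on this estimate, with the entropy term keeping the scale $s$ from collapsing to zero.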

The choice of flow-based model is critical for uncertainty estimation and quantification within this variational framework. To perform accurate data uncertainty estimation, the uncertainty associated with the flow-based model itself must be minimized. To this end, we propose a robust generative flow (RGF) that leverages neural spline flow models (Durkan et al., 2019) with enhanced stability, expressivity, and flexibility, while conserving efficient inference and sampling without increasing architecture depth.

[Figure 1 panel: latent density $p(z)$, Gaussian samples $z_0$; invertible architecture $f_1, \dots, f_K$ applying the change-of-variables theorem; target density $p(x)$, posterior samples $x = z_K = f_K \circ f_{K-1} \circ \cdots \circ f_1(z_0)$]

Figure 1: Illustration of the mechanism of normalizing flows. Prior Gaussian samples drawn from the latent distribution are transformed into posterior samples that match the target density by a sequence of invertible mappings.

4. Experiments

FastMRI case study Partial and undersampled noisy measurements in MRI lead to reconstruction uncertainty. We demonstrate that our proposed variational framework with robust generative flows (RGF) and variance-reduced sampling (LPSS) (Shields & Zhang, 2016) can be successfully applied to quantify the reconstruction (data) uncertainty and error on two cases (brain and knee) from the FastMRI dataset (Zbontar et al., 2018) (resized to 128 × 128 pixels) with three different acceleration factors: 4X, 6X, and 8X. Fig. 2 presents the reconstruction results with pixel-wise statistics of the estimated posterior distribution. 1000 posterior samples drawn from the learned models are used to estimate the statistical information. Note that, for both brain and knee cases with a 4X speedup factor, our RGFL shows a more accurate mean estimate $\mu_x$ with a smaller absolute error $\varepsilon_x$ than the baseline RealNVP model with a similar architecture (4 blocks). Our advantage in terms of standard deviation $\sigma_x$ is more significant thanks to the model robustness with variance reduction. In other words, our method provides a more reliable reconstruction given the same measurement. As expected, the pixel-wise $\sigma_x$ of the reconstruction tends to be larger as the speedup factor increases (more details in the supplementary materials).
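For reference, the pixel-wise statistics reported in Fig. 2 and Table 1 can be computed from posterior draws as below; the data here are synthetic stand-ins, not FastMRI.

```python
import numpy as np

# Pixel-wise posterior statistics from S reconstructed samples:
# per-pixel mean (the reconstruction), standard deviation (the
# uncertainty map), and absolute error against the ground truth.

rng = np.random.default_rng(0)
H = W = 8
x_true = rng.random((H, W))
# 1000 synthetic posterior draws around the truth with 0.05 pixel noise
samples = x_true[None] + 0.05 * rng.standard_normal((1000, H, W))

mu_x = samples.mean(axis=0)              # pixel-wise mean estimate
sigma_x = samples.std(axis=0)            # pixel-wise std (uncertainty)
eps_x = np.abs(mu_x - x_true)            # pixel-wise absolute error

print(sigma_x.mean(), eps_x.mean())
```

The mean-of-std and mean-of-error scalars printed here are the quantities tabulated as "Std. Dev." and "Abs. Error" in Table 1.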


Table 1: Statistical comparison of the estimated posteriors on FastMRI brain case

              |       Speedup 4X        |       Speedup 6X        |       Speedup 8X
Brain         | RNVP    NSF     RGFL    | RNVP    NSF     RGFL    | RNVP    NSF     RGFL
Std. Dev. ↓   | 1.21E-5 8.82E-6 9.73E-7 | 1.97E-5 1.02E-5 2.35E-6 | 3.83E-5 2.27E-5 4.10E-6
Abs. Error ↓  | 3.31E-5 6.91E-6 1.78E-6 | 5.70E-5 8.04E-6 4.58E-6 | 8.81E-5 1.33E-5 7.12E-6
Precision ↑   | 0.548   0.564   0.603   | 0.501   0.555   0.590   | 0.511   0.540   0.566
Recall ↑      | 0.564   0.598   0.629   | 0.523   0.582   0.606   | 0.493   0.571   0.569
Density ↑     | 0.925   0.934   0.955   | 0.887   0.929   0.953   | 0.831   0.907   0.940
Coverage ↑    | 0.957   0.989   0.997   | 0.892   0.983   0.988   | 0.887   0.965   0.971

[Figure 2 panels: columns RNVP 4X, RGFL 4X, RGFL 6X, RGFL 8X; rows Mean, Std. Dev., Abs. Error]

Figure 2: FastMRI reconstruction of the brain case at three different acceleration speedup factors: 4X, 6X and 8X (each shown in a column). Row 1 shows the ground truth and sampling masks for each case. Rows 2-4 show the mean, standard deviation, and absolute error of the estimated posterior samples.

We further use the mean of the standard deviation $\sigma_x$ and of the absolute error $\varepsilon_x$ to quantitatively compare the pixel-wise statistics (see Table 1). Our method outperforms the other two baselines (RNVP and NSF) in terms of accuracy and variation of the reconstruction. Specifically, our estimation achieves a significant variance reduction of one to two orders of magnitude. The fidelity and diversity metrics are used here to evaluate the posterior samples drawn from the learned generative models (↓ and ↑ indicate that a lower or a higher value is better, respectively). Our method shows competitive performance in most cases.

5. Related works

Deep learning for solving inverse problems requires uncertainty estimation to be reliable in real settings. Bayesian deep learning (Kendall & Gal, 2017; Khan et al., 2018; Wilson & Izmailov, 2020), specifically Bayesian neural networks (BNNs) (Hernández-Lobato & Adams, 2015; Gal), can achieve this goal while offering a computationally tractable way of recovering reconstruction uncertainty. However, exact inference in the BNN framework is not a trivial task, so several variational approximation approaches have been proposed to deal with the scalability challenges. Monte Carlo dropout (Gal & Ghahramani, 2016) can be seen as a promising alternative approach that is easy to implement and evaluate. Deep ensembles (Lakshminarayanan et al., 2017), which combine multiple deep models trained from different initializations, have outperformed BNNs. Recent methods on deterministic uncertainty quantification (Van Amersfoort et al., 2020; van Amersfoort et al., 2021) use a single forward pass and scale well to large datasets. Although these approaches show impressive performance, they rely on supervised learning with paired input-output datasets and only characterize the uncertainty conditioned on a training set.

Variational methods offer a more efficient alternative, approximating the true but intractable posterior distribution by an optimally selected tractable distribution family (Blei et al., 2017). However, the restriction to limited distribution families fails if the true posterior is too complex. Recent advances in conditional generative models, such as conditional GANs (cGANs) (Wang et al., 2018), overcome this restriction in principle, but in practice have limitations in achieving satisfactory diversity. Another commonly adopted option is conditional VAEs (cVAEs) (Sohn et al., 2015), which outperform cGANs in some cases; in practice, however, the direct application of both conditional generative models to computational imaging is challenging because a large amount of data is typically required (Tonolini et al., 2020). This introduces additional difficulties when our observations and measurements are sparse and expensive to collect.

6. Conclusion

In this work, we propose an uncertainty-aware framework that leverages a deep variational approach with robust generative flows and variance-reduced sampling to perform an accurate estimation of reconstruction uncertainty. We minimize the model uncertainties by developing a robust flow-based model and decrease the sampling variation via variance-reduced sampling. The results on a real-world MRI reconstruction problem demonstrate our advantages. Future work will focus on improving the invertibility of normalizing flows and reducing the computational cost of the variance-reduced sampling.


References

Barbano, R., Kereta, Ž., Zhang, C., Hauptmann, A., Arridge, S., and Jin, B. Quantifying sources of uncertainty in deep learning-based image reconstruction. arXiv preprint arXiv:2011.08413, 2020.

Beliy, R., Gaziv, G., Hoogi, A., Strappini, F., Golan, T., and Irani, M. From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI. arXiv preprint arXiv:1907.02431, 2019.

Belthangady, C. and Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nature Methods, 16(12):1215–1225, 2019.

Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.

Bouman, C. and Sauer, K. A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Transactions on Image Processing, 2(3):296–310, 1993.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.

Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows. arXiv preprint arXiv:1906.04032, 2019.

Gal, Y. Uncertainty in deep learning.

Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. PMLR, 2016.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014.

Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.

Hernández-Lobato, J. M. and Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869. PMLR, 2015.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6629–6640, 2017.

Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.

Kendall, A. and Gal, Y. What uncertainties do we need in Bayesian deep learning for computer vision? In NIPS, 2017.

Khan, M., Nielsen, D., Tangkaratt, V., Lin, W., Gal, Y., and Srivastava, A. Fast and scalable Bayesian deep learning by weight-perturbation in Adam. In International Conference on Machine Learning, pp. 2611–2620. PMLR, 2018.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In NIPS, 2017.

Naeem, M. F., Oh, S. J., Uh, Y., Choi, Y., and Yoo, J. Reliable fidelity and diversity metrics for generative models. In International Conference on Machine Learning, pp. 7176–7185. PMLR, 2020.

Natterer, F. and Wübbeling, F. Mathematical Methods in Image Reconstruction. SIAM, 2001.

Nielsen, D., Jaini, P., Hoogeboom, E., Winther, O., and Welling, M. SurVAE flows: Surjections to bridge the gap between VAEs and flows. Advances in Neural Information Processing Systems, 33, 2020.

Park, S. C., Park, M. K., and Kang, M. G. Super-resolution image reconstruction: A technical overview. IEEE Signal Processing Magazine, 20(3):21–36, 2003.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530–1538. PMLR, 2015.

Shields, M. D. and Zhang, J. The generalization of Latin hypercube sampling. Reliability Engineering & System Safety, 148:96–108, 2016.

Sohn, K., Lee, H., and Yan, X. Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems, 28:3483–3491, 2015.

Strong, D. and Chan, T. Edge-preserving and scale-dependent properties of total variation regularization. Inverse Problems, 19(6):S165, 2003.


Sun, H. and Bouman, K. L. Deep probabilistic imaging: Uncertainty quantification and multi-modal solution characterization for computational imaging. arXiv preprint arXiv:2010.14462, 2020.

Tonolini, F., Radford, J., Turpin, A., Faccio, D., and Murray-Smith, R. Variational inference for computational imaging inverse problems. Journal of Machine Learning Research, 21(179):1–46, 2020.

Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454, 2018.

Van Amersfoort, J., Smith, L., Teh, Y. W., and Gal, Y. Uncertainty estimation using a single deep deterministic neural network. In International Conference on Machine Learning, pp. 9690–9700. PMLR, 2020.

van Amersfoort, J., Smith, L., Jesson, A., Key, O., and Gal, Y. Improving deterministic uncertainty estimation in deep learning for classification and regression. arXiv preprint arXiv:2102.11409, 2021.

Wang, G., Ye, J. C., and De Man, B. Deep learning for tomographic image reconstruction. Nature Machine Intelligence, 2(12):737–748, 2020.

Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., and Catanzaro, B. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807, 2018.

Wilson, A. G. and Izmailov, P. Bayesian deep learning and a probabilistic perspective of generalization. arXiv preprint arXiv:2002.08791, 2020.

Wu, H., Köhler, J., and Noé, F. Stochastic normalizing flows. arXiv preprint arXiv:2002.06707, 2020.

Zbontar, J., Knoll, F., Sriram, A., Murrell, T., Huang, Z., Muckley, M. J., Defazio, A., Stern, R., Johnson, P., Bruno, M., et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839, 2018.

Zhang, Z., Romero, A., Muckley, M. J., Vincent, P., Yang, L., and Drozdzal, M. Reducing uncertainty in undersampled MRI reconstruction with active acquisition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2049–2058, 2019.

Zhou, Q., Yu, T., Zhang, X., and Li, J. Bayesian inference and uncertainty quantification for medical image reconstruction with Poisson data. SIAM Journal on Imaging Sciences, 13(1):29–52, 2020.

Zhu, B., Liu, J. Z., Cauley, S. F., Rosen, B. R., and Rosen, M. S. Image reconstruction by domain-transform manifold learning. Nature, 555(7697):487–492, 2018.