


A Generalized Structured Low-Rank Matrix Completion Algorithm for MR Image Recovery

Yue Hu, Member, IEEE, Xiaohan Liu, Student Member, IEEE, and Mathews Jacob, Senior Member, IEEE

Abstract—Recent theory of mapping an image into a structured low-rank Toeplitz or Hankel matrix has become an effective method to restore images. In this paper, we introduce a generalized structured low-rank algorithm to recover images from their undersampled Fourier coefficients using infimal convolution regularizations. The image is modeled as the superposition of a piecewise constant component and a piecewise linear component. The Fourier coefficients of each component satisfy an annihilation relation, which results in a structured Toeplitz matrix for each component. We exploit the low-rank property of the matrices to formulate a combined regularized optimization problem. In order to solve the problem efficiently and to avoid the high memory demand resulting from the large-scale Toeplitz matrices, we introduce a fast and memory efficient algorithm based on the half-circulant approximation of the Toeplitz matrix. We demonstrate our algorithm in the context of single- and multi-channel MR image recovery. Numerical experiments indicate that the proposed algorithm provides improved recovery performance over state-of-the-art approaches.

Index Terms—Structured low-rank matrix, infimal convolution, compressed sensing, image recovery

I. INTRODUCTION

The recovery of images from their limited and noisy measurements is an important problem in a wide range of biomedical imaging applications, including microscopy [1], magnetic resonance imaging (MRI) [2], and computed tomography [3]. The common method is to formulate the image reconstruction as an optimization problem, where the criterion is a linear combination of a data consistency error and a regularization penalty. The regularization penalties are usually chosen to exploit the smoothness or the sparsity priors in the discrete image domain. For example, compressed sensing methods are capable of recovering the original MR images from their partial k space measurements [2] using the L1 norm in the total variation (TV) or wavelet domain. The reconstruction performance is determined by the effectiveness of the regularization. In order to improve the quality of the reconstructed images, several extensions and generalizations of TV have also been proposed, such as total generalized variation (TGV) [4], [5], Hessian-based norm regularization [6], and higher degree total variation (HDTV) [7], [8]. All of these regularization penalties are formulated in the discrete domain, and hence suffer from discretization errors as well as a lack of rotation invariance.

This work was supported by grants Natural Science Foundation of China (NSFC) 61501146, Natural Science Foundation of Heilongjiang province F2016018, and NIH 1R01EB019961-01A1.

Y. Hu and X. Liu are with the School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China 150001 (e-mail: [email protected]).

M. Jacob is with the Department of Electrical and Computer Engineering, University of Iowa, IA 52246, USA (email: [email protected]).

Recently, a new family of reconstruction methods, which are based on the low-rank property of structured Hankel or Toeplitz matrices built from the Fourier coefficients of the image, has been introduced as powerful continuous domain alternatives to the above discrete domain penalties [9], [10], [11], [12], [13]. Since these methods minimize discretization errors and rotation dependence, they provide improved reconstruction. These algorithms can be viewed as the multidimensional extensions of the finite-rate-of-innovation (FRI) framework [14], [15]. All of these methods exploit the “annihilation property”, which implies that image derivatives can be annihilated by multiplication with a bandlimited polynomial function in the spatial domain; this image domain relation translates to a convolutional annihilation relationship in the Fourier domain. Since the locations of the discontinuities are not isolated in the multidimensional setting, the theoretical tools used to show perfect recovery are very different from the 1-D FRI setting [11], [16]. The convolution relations are compactly represented as a multiplication between a block Hankel structured matrix and the Fourier coefficients of the filter. It has been shown that the above structured matrix is low-rank, which allows the recovery of the unknown matrix entries using structured low-rank matrix completion. Empirical results show improved performance over classical total variation methods [17], [16], [9], [18]. Haldar proposed a Hankel structured low-rank matrix algorithm (LORAKS) for the reconstruction of single-coil MR images [10], with the assumption that the image has limited spatial support and smooth phase. The effectiveness of the algorithm was also investigated in parallel MRI [19], [20], [21].

In this paper, we extend the structured low-rank framework to recover the sum of two piecewise smooth functions from their sparse measurements. This work is inspired by the infimal convolution framework [22], where the sum of a piecewise constant and a piecewise linear function was recovered; the infimal convolutions of functions with first and second order derivatives were considered as penalties in [22]. The algorithm was then applied in a general discrete setting for image denoising [23] to obtain improved performance over standard TV. The extension of TV using infimal convolution (ICTV) was applied to video and image reconstruction in [24]. The infimal convolution of TGV (ICTGV) was proposed in the context of dynamic MRI reconstruction by balancing the weights for the spatial and temporal regularizations [25]. In [26] and [27], the infimal convolution of two total variation Bregman distances is applied to exploit the structural information in the reconstruction of dynamic MR datasets and the joint reconstruction of PET-MRI, respectively. In [28], the authors adopted the robust PCA method [29] into dynamic MRI, where

arXiv:1811.10778v1 [eess.IV] 27 Nov 2018


the dataset is decomposed into low-rank and sparse components (L+S). In [30], instead of imposing low-rank assumptions, the k-t PCA method was improved using model consistency constraints (MOCCO) to obtain temporal basis functions from low resolution dynamic MR data. In this paper, we propose to model the image as the combination of a piecewise constant component and a piecewise linear component. For the piecewise constant component, the Fourier coefficients of the gradient of the component satisfy the annihilation relation. We thus build a structured Toeplitz matrix, which can be proved to be low-rank. Similarly, we can obtain a structured low-rank Toeplitz matrix from the Fourier coefficients of the second order partial derivatives of the piecewise linear component. By introducing the generalized structured low-rank method, the image can be automatically separated into components where either the strong edges and feature details or the smooth regions of the image can be accurately recovered. Thus, the optimal balance can be obtained between the first order and higher order penalties.

Since the proposed method involves the recovery of large-scale first and second order derivative lifted Toeplitz matrices, the implementation of the method is associated with high computational complexity and memory demand. In order to solve the corresponding optimization problem efficiently, we introduce an algorithm based on the half-circulant approximation of Toeplitz matrices, which is a generalization of the Generic Iteratively Reweighted Annihilating Filter (GIRAF) algorithm proposed in [17], [31]. This algorithm alternates between the estimation of the annihilation filter of the image, and the computation of the image annihilated by the filter in a least squares formulation. By replacing the linear convolution with circular convolution, the algorithm can be implemented efficiently using Fast Fourier Transforms, which significantly reduces the computational complexity and the memory demand. We investigate the performance of the algorithm in the context of compressed sensing MR image reconstruction. Experiments show that the proposed method is capable of providing more accurate recovery results than the state-of-the-art algorithms. The preliminary version of this work was accepted as a conference paper [32]. Compared to [32], the theoretical and algorithmic frameworks are further developed here. We have significantly more validation in the current version, in addition to the generalization of the method to the parallel MRI setting.

II. GENERALIZED STRUCTURED LOW-RANK MATRIX RECOVERY

A. Image recovery model

We consider the recovery of a discrete 2-D image ρ ∈ C^N from its noisy and degraded measurements b ∈ C^M. We model the measurements as b = A(ρ) + n, where A ∈ C^{M×N} is a linear degradation operator which maps ρ to b, and n ∈ C^M is assumed to be Gaussian distributed white noise. Since the recovery of ρ from the measurements b is ill-posed in many practical cases, the general approach is to pose the recovery as a regularized optimization problem, i.e.,

ρ = arg min_ρ ‖A(ρ) − b‖² + λ J(ρ)    (1)

where ‖A(ρ) − b‖² is the data consistency term, λ is the balancing parameter, and J(ρ) is the regularization term, which determines the quality of the recovered image. Common choices for the regularization term include total variation, wavelets, and their combinations. Researchers have also proposed extensions of total variation [4], [6], [8] to improve the performance.
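As a concrete illustration of the regularized formulation (1), the sketch below solves it in closed form for the simplest quadratic penalty J(ρ) = ‖ρ‖², for which ρ = (A*A + λI)⁻¹A*b. The operator A, the noise level, and all sizes are arbitrary toy stand-ins, not values from the paper.

```python
import numpy as np

# Toy instance of (1) with the quadratic penalty J(rho) = ||rho||^2,
# which admits the closed form rho = (A^H A + lam I)^{-1} A^H b.
rng = np.random.default_rng(0)
M, N, lam = 40, 64, 0.1
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
rho_true = rng.standard_normal(N)
b = A @ rho_true + 0.01 * rng.standard_normal(M)

rho_hat = np.linalg.solve(A.conj().T @ A + lam * np.eye(N), A.conj().T @ b)

# rho_hat is the global minimizer of the (convex) criterion in (1)
obj = lambda r: np.linalg.norm(A @ r - b) ** 2 + lam * np.linalg.norm(r) ** 2
assert obj(rho_hat) <= obj(rho_true) + 1e-9
```

For non-quadratic penalties such as TV or the structured low-rank penalties below, no such closed form exists and iterative schemes are needed.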

B. Structured low-rank matrix completion

Consider the general model for a 2-D piecewise smooth image ρ(r) at the spatial location r = (x, y) ∈ Z²:

ρ(r) = Σ_{i=1}^{N} g_i(r) χ_{Ω_i}(r)    (2)

where χ_{Ω_i} is a characteristic function of the set Ω_i and the functions g_i(r) are smooth polynomial functions which vanish with a collection of differential operators D = {D₁, ..., D_N} within the region Ω_i. It is proved that under certain assumptions on the edge set ∂Ω = ⋃_{i=1}^{N} ∂Ω_i, the Fourier transform of the derivatives of ρ(r) satisfies an annihilation property [11]. We assume that a bandlimited trigonometric polynomial function μ(r) vanishes on the edge set of the image:

μ(r) = Σ_{k∈Δ₁} c[k] e^{j2π⟨k,r⟩}    (3)

where c[k] denotes the Fourier coefficients of μ and Δ₁ is any finite set of Z². According to [13], the family of functions in (2) is a general form including many common image models, obtained by choosing different sets of differential operators D.

1) Piecewise constant images: Assume ρ₁(r) is a piecewise constant image function; then the first order partial derivative of the image D₁ρ₁ = ∇ρ₁ = (∂_x ρ₁, ∂_y ρ₁) is annihilated by multiplication with μ₁ in the spatial domain:

μ₁ ∇ρ₁ = 0    (4)

The multiplication in the spatial domain translates to convolution in the Fourier domain, which is expressed as:

Σ_{k∈Δ₁} ∇ρ̂₁[ℓ − k] c₁[k] = 0,  ∀ℓ ∈ Z²    (5)

where ∇ρ̂₁[k] = j2π (k_x ρ̂₁[k], k_y ρ̂₁[k]) for k = (k_x, k_y). Thus the annihilation property can be formulated as a matrix multiplication:

T₁(ρ̂₁) c₁ = [T_x(ρ̂₁); T_y(ρ̂₁)] c₁ = 0    (6)

where T₁(ρ̂₁) is a Toeplitz matrix built from the entries of ρ̂₁, the Fourier coefficients of ρ₁. Specifically, T_x(ρ̂₁) and T_y(ρ̂₁) are matrices corresponding to the discrete 2-D convolution with k_x ρ̂₁[k] and k_y ρ̂₁[k] for (k_x, k_y) ∈ Γ, omitting the irrelevant factor j2π. Here c₁ is the vectorized version of the filter c₁[k]. Note that c₁ is supported in Δ₁. Thus, we can obtain:

∇ρ̂₁[k] ∗ d[k] = 0,  k ∈ Γ    (7)

Page 3: A Generalized Structured Low-Rank Matrix Completion Algorithm …€¦ · A Generalized Structured Low-Rank Matrix Completion Algorithm for MR Image Recovery Yue Hu, Member, IEEE,

3

Here d[k] = c₁[k] ∗ h[k], where h[k] is any FIR filter. Note that Δ₁ is smaller than Γ, the support of d. Thus, if we take a larger filter size than that of the minimal filter c₁[k], the annihilation matrix will have a larger null space. Therefore, T₁(ρ̂₁) is a low-rank matrix. The method corresponding to this case is referred to as the first order structured low-rank algorithm (first order SLA) for simplicity.
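The annihilation relation (4)-(5) can be checked numerically in its 1-D analogue: the Fourier samples of a stream of Diracs (the derivative of a piecewise constant signal) are annihilated by convolution with a filter whose polynomial roots encode the edge locations. The edge positions and amplitudes below are arbitrary toy values.

```python
import numpy as np

# 1-D annihilation check: s[m] = sum_k a_k exp(-2j*pi*m*x_k) is killed
# by a filter whose roots are exp(-2j*pi*x_k).
edges = np.array([0.21, 0.47, 0.83])      # toy jump locations in [0, 1)
amps = np.array([1.0, -2.0, 1.5])         # toy jump amplitudes
m = np.arange(-20, 21)                    # Fourier sample indices
s = (amps[None, :] * np.exp(-2j * np.pi * np.outer(m, edges))).sum(axis=1)

# Annihilating filter: polynomial with roots exp(-2j*pi*edges)
c = np.poly(np.exp(-2j * np.pi * edges))

# The fully-overlapping part of the convolution vanishes
residual = np.convolve(s, c, mode='valid')
assert np.max(np.abs(residual)) < 1e-8
```

Each exponential mode is a root of the filter polynomial, so every valid convolution output is zero up to floating point error; this is exactly the mechanism that (5) expresses in 2-D.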

2) Piecewise linear images: Assume ρ₂(r) is a piecewise linear image function; it can be proved that the second order partial derivatives of the image D₂ρ₂ = (∂²_{xx}ρ₂, ∂²_{xy}ρ₂, ∂²_{yy}ρ₂) satisfy the annihilation property [13]:

μ₂² D₂ρ₂ = 0    (8)

Thus, the Fourier transform of D₂ρ₂ is annihilated by convolution with the Fourier coefficients c₂[k], k ∈ Δ₂, of μ₂²:

Σ_{k∈Δ₂} D₂ρ̂₂[ℓ − k] c₂[k] = 0,  ∀ℓ ∈ Z²    (9)

where D₂ρ̂₂[k] = (j2π)² (k_x² ρ̂₂[k], k_x k_y ρ̂₂[k], k_y² ρ̂₂[k]) for k = (k_x, k_y). Similarly, the annihilation relation can be expressed as:

T₂(ρ̂₂) c₂ = [T_{xx}(ρ̂₂); T_{xy}(ρ̂₂); T_{yy}(ρ̂₂)] c₂ = 0    (10)

where T₂(ρ̂₂) is a Toeplitz matrix. T_{xx}(ρ̂₂), T_{xy}(ρ̂₂), and T_{yy}(ρ̂₂) are matrices corresponding to the discrete convolution with k_x² ρ̂₂[k], k_x k_y ρ̂₂[k], and k_y² ρ̂₂[k], omitting the insignificant factor, and c₂ is the vectorized version of the filter c₂[k]. Similar to the piecewise constant case, the Toeplitz matrix T₂(ρ̂₂) is also a low-rank matrix. The method exploiting the low-rank property of T₂(ρ̂₂) is referred to as the second order structured low-rank algorithm (second order SLA).
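The low-rank claims behind both SLAs can be verified in a small 1-D analogue of the lifting: a Toeplitz (sliding-window) matrix built from exponential-sum Fourier samples has rank equal to the number of exponentials once the filter size exceeds it. This is a sketch of the lifting idea with toy edge locations, not the paper's 2-D construction.

```python
import numpy as np

# 1-D analogue of the lifted Toeplitz matrix T1: rows are sliding
# windows of the samples s; each row is a combination of K = 3
# exponential vectors, so the lifting has rank K.
edges = np.array([0.21, 0.47, 0.83])      # toy edge locations
amps = np.array([1.0, -2.0, 1.5])
m = np.arange(-20, 21)
s = (amps[None, :] * np.exp(-2j * np.pi * np.outer(m, edges))).sum(axis=1)

F = 8                                     # filter size > number of edges
T = np.array([s[i:i + F] for i in range(len(s) - F + 1)])
assert np.linalg.matrix_rank(T, tol=1e-8) == len(edges)
```

Taking the filter larger than the minimal annihilating filter enlarges the null space exactly as described above, which is what the matrix completion step exploits.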

C. Generalized structured low-rank image recovery (GSLR)

We assume that a 2-D image ρ is a piecewise smooth function, which can be decomposed into two components ρ = ρ₁ + ρ₂, such that ρ₁ represents the piecewise constant component of ρ, while ρ₂ represents the piecewise linear component of ρ. We assume that the gradient of ρ₁ and the second derivative of ρ₂ vanish on the zero sets of μ₁ and μ₂, respectively. This relation transforms to a convolution relation between the weighted Fourier coefficients of ρ₁ and ρ₂ with c₁ and c₂, respectively, based on the analysis in Section II-B. Inspired by the concept of infimal convolution, we consider the framework of a combined regularization procedure, where we formulate the reconstruction of the Fourier data ρ̂ from the undersampled measurements b as follows:

min_{ρ̂₁+ρ̂₂=ρ̂}  rank[T₁(ρ̂₁)] + rank[T₂(ρ̂₂)]  such that  b = A(ρ̂) + n    (11)

Since the above problem is NP-hard, we choose the Schatten p norm (0 ≤ p < 1) as the relaxation function, which converts (11) into the following optimization problem:

ρ̂₁⋆, ρ̂₂⋆ = arg min_{ρ̂₁,ρ̂₂}  λ₁‖T₁(ρ̂₁)‖_p + λ₂‖T₂(ρ̂₂)‖_p + ‖A(ρ̂₁ + ρ̂₂) − b‖²    (12)

where T₁(ρ̂₁) and T₂(ρ̂₂) are the structured Toeplitz matrices in the first and second order partial derivative weighted lifted domains, respectively. λ₁ and λ₂ are regularization parameters which balance the data consistency and the degree to which T₁(ρ̂₁) and T₂(ρ̂₂) are low-rank. ‖·‖_p is the Schatten p norm (0 ≤ p < 1), defined for an arbitrary matrix X as:

‖X‖_p = (1/p) Tr[(X*X)^{p/2}] = (1/p) Tr[(XX*)^{p/2}] = (1/p) Σ_i σ_i^p    (13)

where σ_i are the singular values of X. Note that when p → 1, ‖X‖_p is the nuclear norm; when p → 0, ‖X‖_p = Σ_i log σ_i. The penalty ‖X‖_p is convex for p = 1, and non-convex for 0 ≤ p < 1.
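Up to the 1/p factor, the Schatten-p penalty (13) is simply the p-th powers of the singular values summed; at p = 1 (where 1/p = 1) it coincides with the nuclear norm. A minimal sketch, with an arbitrary test matrix:

```python
import numpy as np

# Schatten-p penalty (13) computed from singular values.
def schatten_p(X, p):
    sigma = np.linalg.svd(X, compute_uv=False)
    return (1.0 / p) * np.sum(sigma ** p)

X = np.arange(12, dtype=float).reshape(3, 4)   # toy matrix
nuclear = np.sum(np.linalg.svd(X, compute_uv=False))
assert np.isclose(schatten_p(X, 1.0), nuclear)  # p = 1: nuclear norm
assert schatten_p(X, 0.5) >= 0                  # non-convex case is still >= 0
```

Smaller p penalizes small singular values relatively more strongly, which is why p < 1 promotes low rank more aggressively than the nuclear norm.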

III. OPTIMIZATION ALGORITHM

We apply the iteratively reweighted least squares (IRLS) algorithm to solve the optimization problem (12). Based on the equation ‖X‖_p = ‖X H^{1/2}‖²_F, where H = (X*X)^{p/2−1}, letting X = T_i(ρ̂_i) (i = 1, 2), (12) becomes:

ρ̂₁⋆, ρ̂₂⋆ = arg min_{ρ̂₁,ρ̂₂}  λ₁‖T₁(ρ̂₁) H₁^{1/2}‖²_F + λ₂‖T₂(ρ̂₂) H₂^{1/2}‖²_F + ‖A(ρ̂₁ + ρ̂₂) − b‖²    (14)

In order to solve (14), we can use an alternating minimization scheme, which alternates between the following subproblems: updating the weight matrices H₁ and H₂, and solving a weighted least squares problem. Specifically, at the nth iteration, we compute:

H₁^{(n)} = [G₁ + ε_n I]^{p/2−1},  where G₁ = T₁(ρ̂₁^{(n)})* T₁(ρ̂₁^{(n)})    (15)

H₂^{(n)} = [G₂ + ε_n I]^{p/2−1},  where G₂ = T₂(ρ̂₂^{(n)})* T₂(ρ̂₂^{(n)})    (16)

ρ̂₁^{(n)}, ρ̂₂^{(n)} = arg min_{ρ̂₁,ρ̂₂}  ‖A(ρ̂₁ + ρ̂₂) − b‖² + λ₁‖T₁(ρ̂₁)(H₁^{(n)})^{1/2}‖²_F + λ₂‖T₂(ρ̂₂)(H₂^{(n)})^{1/2}‖²_F    (17)

where ε_n → 0 is a small factor used to stabilize the inverse. We now show how to efficiently solve these subproblems.
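The identity underlying the IRLS majorization in (14)-(16) can be verified directly: with H = (X*X + εI)^{p/2−1} formed from the Gram matrix eigendecomposition, ‖X H^{1/2}‖²_F equals Σ σ_i^p as ε → 0. The matrix below is an arbitrary toy stand-in for a lifted Toeplitz matrix.

```python
import numpy as np

# IRLS weight matrix H = (X^H X + eps I)^{p/2 - 1}, cf. (15)-(16),
# and the identity ||X H^{1/2}||_F^2 = sum_i sigma_i^p (here eps = 0,
# valid since the toy X has full column rank).
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))
p, eps = 0.5, 0.0

lam, V = np.linalg.eigh(X.T @ X)          # Gram matrix eigenpairs
H_half = V @ np.diag((lam + eps) ** (p / 4 - 0.5)) @ V.T

sigma = np.linalg.svd(X, compute_uv=False)
assert np.isclose(np.linalg.norm(X @ H_half) ** 2, np.sum(sigma ** p))
```

Because H is frozen during the least squares step, each subproblem is quadratic even though the Schatten-p penalty itself is non-convex.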

A. Update of least squares

First, let H₁^{1/2} = [h₁^{(1)}, ..., h₁^{(L)}] and H₂^{1/2} = [h₂^{(1)}, ..., h₂^{(M)}]; we rewrite the least squares problem (17) as follows:

ρ̂₁^{(n)}, ρ̂₂^{(n)} = arg min_{ρ̂₁,ρ̂₂}  ‖A(ρ̂₁ + ρ̂₂) − b‖² + λ₁ Σ_{l=1}^{L} ‖T₁(ρ̂₁) h₁^{(l)}‖²_F + λ₂ Σ_{m=1}^{M} ‖T₂(ρ̂₂) h₂^{(m)}‖²_F    (18)

Page 4: A Generalized Structured Low-Rank Matrix Completion Algorithm …€¦ · A Generalized Structured Low-Rank Matrix Completion Algorithm for MR Image Recovery Yue Hu, Member, IEEE,

4

We now focus on the update of ρ̂₁; the update of ρ̂₂ can be derived likewise. From the structure of T₁(ρ̂₁) and the convolution relationship, we can obtain:

T₁(ρ̂₁) h₁^{(l)} = P_{Γ₁}(M₁ρ̂₁ ∗ h₁^{(l)}) = P_{Γ₁}(h₁^{(l)} ∗ M₁ρ̂₁) = P₁ C₁^{(l)} M₁ρ̂₁,  l = 1, ..., L    (19)

where C₁^{(l)} denotes the linear convolution by h₁^{(l)}, and P_{Γ₁} is the projection of the convolution onto a finite set Γ₁ of valid k space indices, which is expressed by the matrix P₁. M₁ is the linear transformation in k space which denotes the multiplication with the first order Fourier derivatives j2πk_x and j2πk_y, referred to as the gradient weighted lifting case. We can approximate C₁^{(l)} by a circular convolution by h₁^{(l)} on a sufficiently large convolution grid. Then, we can obtain C₁^{(l)} = F S₁^{(l)} F*, where F is the 2-D DFT and S₁^{(l)} is a diagonal matrix representing multiplication with the inverse DFT of h₁^{(l)}. Assuming P₁*P₁ ≈ I, we can thus rewrite the second term in (18) as:

λ₁ Σ_{l=1}^{L} ‖P₁ C₁^{(l)} M₁ρ̂₁‖² = λ₁ ρ̂₁* M₁* F (Σ_{l=1}^{L} S₁^{(l)∗} S₁^{(l)}) F* M₁ρ̂₁ = λ₁ ‖S₁^{1/2} F* M₁ρ̂₁‖²    (20)

where S₁ is a diagonal matrix with entries Σ_{l=1}^{L} |μ_l(r)|², and μ_l(r) is the trigonometric polynomial given by the inverse Fourier transform of h₁^{(l)}. S₁ is specified as

S₁ = Σ_{l=1}^{L} S₁^{(l)∗} S₁^{(l)} = diag(F* P₁* h₁)    (21)
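The half-circulant approximation in (19)-(21) rests on the fact that circular convolution is diagonalized by the DFT, C = F S F*, i.e. it is pointwise multiplication in the Fourier domain. A 1-D sketch with toy vectors (the paper works on 2-D grids, but the identity is the same):

```python
import numpy as np

# Circular convolution two ways: direct definition vs. FFT
# diagonalization, C = F S F^*.
rng = np.random.default_rng(2)
n = 16
x = rng.standard_normal(n)
h = rng.standard_normal(n)

# Direct circular convolution: y[k] = sum_j h[j] x[(k - j) mod n]
direct = np.array([sum(h[j] * x[(k - j) % n] for j in range(n))
                   for k in range(n)])

# Same result via pointwise multiplication in the Fourier domain
fast = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
assert np.allclose(direct, fast)
```

This is what lets the algorithm avoid ever forming the large-scale Toeplitz matrices explicitly: every product with C₁^{(l)} costs only FFTs and a diagonal scaling.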

Similarly, the third term in (18) can be rewritten as λ₂‖S₂^{1/2} F* M₂ρ̂₂‖², where S₂ is given by

S₂ = Σ_{m=1}^{M} S₂^{(m)∗} S₂^{(m)} = diag(F* P₂* h₂)    (22)

Therefore, we can reformulate the optimization problem (18) as:

min_{ρ̂₁,ρ̂₂}  ‖A(ρ̂₁ + ρ̂₂) − b‖² + λ₁‖S₁^{1/2} y₁‖²_F + λ₂‖S₂^{1/2} y₂‖²_F
s.t.  F y₁ = M₁ρ̂₁,  F y₂ = M₂ρ̂₂    (23)

The above constrained problem can be efficiently solved using the alternating directions method of multipliers (ADMM) algorithm [33], which reduces to solving the following subproblems:

y₁^{(n)} = arg min_{y₁}  ‖S₁^{1/2} y₁‖₂² + γ₁‖q₁^{(n−1)} + F* M₁ρ̂₁^{(n−1)} − y₁‖₂²    (24)

y₂^{(n)} = arg min_{y₂}  ‖S₂^{1/2} y₂‖₂² + γ₂‖q₂^{(n−1)} + F* M₂ρ̂₂^{(n−1)} − y₂‖₂²    (25)

ρ̂₁^{(n)} = arg min_{ρ̂₁}  ‖A(ρ̂₁ + ρ̂₂) − b‖₂² + γ₁λ₁‖q₁^{(n−1)} + F* M₁ρ̂₁ − y₁^{(n)}‖₂²    (26)

ρ̂₂^{(n)} = arg min_{ρ̂₂}  ‖A(ρ̂₁ + ρ̂₂) − b‖₂² + γ₂λ₂‖q₂^{(n−1)} + F* M₂ρ̂₂ − y₂^{(n)}‖₂²    (27)

q_i^{(n)} = q_i^{(n−1)} + F* M_iρ̂_i^{(n)} − y_i^{(n)},  i = 1, 2    (28)

where q_i (i = 1, 2) represent the vectors of Lagrange multipliers, and γ_i (i = 1, 2) are fixed parameters tuned to improve the conditioning of the subproblems. Subproblems (24) to (27) are quadratic and thus can be solved easily as follows:

y₁^{(n)} = (S₁ + γ₁I)^{−1}[γ₁(q₁^{(n−1)} + F* M₁ρ̂₁^{(n−1)})]    (29)

y₂^{(n)} = (S₂ + γ₂I)^{−1}[γ₂(q₂^{(n−1)} + F* M₂ρ̂₂^{(n−1)})]    (30)

ρ̂₁^{(n)} = (A*A + γ₁λ₁ M₁*M₁)^{−1}[γ₁λ₁(M₁*F)(y₁^{(n)} − q₁^{(n−1)}) + A*b − A*A ρ̂₂^{(n−1)}]    (31)

ρ̂₂^{(n)} = (A*A + γ₂λ₂ M₂*M₂)^{−1}[γ₂λ₂(M₂*F)(y₂^{(n)} − q₂^{(n−1)}) + A*b − A*A ρ̂₁^{(n−1)}]    (32)
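The closed form (29) can be checked directly: with diagonal S, the quadratic subproblem (24) decouples elementwise and is solved by y = (S + γI)⁻¹γz, where z stands for q + F*Mρ̂. The diagonal and the vector below are arbitrary toy values.

```python
import numpy as np

# Elementwise solution of min_y ||S^{1/2} y||^2 + gamma ||z - y||^2,
# matching the y-update (29); z is a toy stand-in for q + F^* M rho.
rng = np.random.default_rng(3)
n, gamma = 32, 0.5
S = np.abs(rng.standard_normal(n))        # nonnegative diagonal of S
z = rng.standard_normal(n)

y = gamma * z / (S + gamma)

# Optimality: gradient of the subproblem objective vanishes at y
grad = 2 * S * y - 2 * gamma * (z - y)
assert np.allclose(grad, 0)
```

The ρ̂-updates (31)-(32) have the same flavor: since A*A and M*M are both diagonal in k space, each solve is a pointwise division rather than a large linear system.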

B. Update of weight matrices

We now show how to update the weight matrices H₁ and H₂ in (15) and (16) efficiently, based on the GIRAF method [11]. In order to obtain the weight matrices H₁ and H₂, we first compute the Gram matrices G₁ and G₂ as:

G₁ = T₁(ρ̂₁)* T₁(ρ̂₁)    (33)
G₂ = T₂(ρ̂₂)* T₂(ρ̂₂)    (34)

Let (V₁, Λ₁) denote the eigendecomposition of G₁, where V₁ is the orthogonal basis of eigenvectors v₁^{(l)}, and Λ₁ is the diagonal matrix of eigenvalues λ₁^{(l)}, which satisfy G₁ = V₁Λ₁V₁*. Then we can rewrite the weight matrix H₁ as:

H₁ = [V₁(Λ₁ + εI)V₁*]^{p/2−1} = V₁(Λ₁ + εI)^{p/2−1} V₁*    (35)

Thus, one choice of the matrix square root H₁^{1/2} is

H₁^{1/2} = (Λ₁ + εI)^{p/4−1/2} V₁*
         = [(λ₁^{(1)} + ε)^{p/4−1/2} (v₁^{(1)})*, ..., (λ₁^{(L)} + ε)^{p/4−1/2} (v₁^{(L)})*]
         = [h₁^{(1)}, ..., h₁^{(L)}]    (36)

Similarly, we can obtain H₂^{1/2} as:

H₂^{1/2} = [(λ₂^{(1)} + ε)^{p/4−1/2} (v₂^{(1)})*, ..., (λ₂^{(M)} + ε)^{p/4−1/2} (v₂^{(M)})*]
         = [h₂^{(1)}, ..., h₂^{(M)}]    (37)
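The square root choice in (36)-(37) is not the symmetric one, but it is a valid root: R = (Λ + εI)^{p/4−1/2}V* satisfies R*R = H, and its rows are exactly the filters h^{(l)}. A small numerical check, with an arbitrary toy Gram matrix standing in for (33):

```python
import numpy as np

# Verify R^H R = H for the non-symmetric root in (36), where
# R = (Lambda + eps I)^{p/4 - 1/2} V^H.  G is a toy Gram matrix.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
G = A.T @ A                               # stand-in for T(rho)^H T(rho)
p, eps = 0.5, 1e-6

lam, V = np.linalg.eigh(G)
R = np.diag((lam + eps) ** (p / 4 - 0.5)) @ V.T   # rows are the h^(l)
H = V @ np.diag((lam + eps) ** (p / 2 - 1)) @ V.T
assert np.allclose(R.T @ R, H)
```

Using this root is what makes the least squares step implementable as convolutions: each row of R is interpreted as an annihilating filter in (18)-(19).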


C. Implementation details

The details for solving the optimization problem (14) are shown in the pseudocode of Algorithm 1. In order to investigate how the SNR values of the recovered image behave as a function of the balancing parameters λ₁ and λ₂, we plot the parameter optimization results for two images in Fig. 1, where (a) corresponds to the parameters for the compressed sensing reconstruction of an ankle MR image with an acceleration factor of 6, and (b) corresponds to the parameter choices for the recovery of a phantom image with 4-fold undersampling. We find that the tuning of the two parameters is not very time consuming, since we observe that the optimal parameters are localized in a narrow range between 10⁵ and 10⁶ for λ₁ and between 10⁷ and 10⁸ for λ₂, for different images under different scenarios.


Fig. 1: Parameter optimization results for the recovery of two images. (a) shows the SNR as a function of the two parameters λ₁ and λ₂ for the 6-fold undersampling recovery of an ankle MR image (as shown in Fig. 5). (b) shows the SNR as a function of λ₁ and λ₂ for the 4-fold undersampling recovery of a piecewise smooth phantom image (as shown in Fig. 3). The black rectangles correspond to the optimal parameters with the highest SNR values.

IV. EXPERIMENTS AND RESULTS

A. 1-D signal recovery

We first experiment on a 1-D signal to investigate the performance of the algorithm in recovering signals from their undersampled measurements. Fig. 2 (a) shows the original signal, which is 2-fold undersampled in k space using a variable density undersampling pattern, indicated in (b). The direct IFFT recovery is shown in (c). (d) shows the recovered signal (in blue solid line) using the proposed GSLR method in 1-D, and the decomposition results of the piecewise constant component ρ₁ (in black dotted line) and the piecewise linear component ρ₂ (in red dotted line). We then experiment on the signal recovery using a 4-fold random undersampling pattern, shown in (e). (f) is the direct IFFT of the undersampled measurements. (g) represents the recovered signal and the decomposition results. The results clearly show that with the GSLR method, both the jump discontinuities and the linear parts of the signal are nicely restored.

B. MR images recovery

The performance of the proposed method is investigated inthe context of compressed sensing MR images reconstruction.We compare the proposed GSLR method with the first orderand the second order structured low-rank algorithms. We alsostudy the improvement of the image quality offered by the

(a) Original signal

(b) 2-fold variabledensity sampling

(c) IFFT signal (d) Reconstructionfrom (c)

(e) 4-fold randomsampling

(f) IFFT signal (g) Reconstructionfrom (f)

Fig. 2: 1-D signal recovery using the GSLR method. (a) is the original signal,which is undersampled by a 2-fold variable density random undersamplingpattern, indicated in (b). (c) is the direct IFFT recovery of the undersampledmeasurements. (d) shows the recovered signal (in blue solid line) and thedecomposition results of the two components, namely the piecewise constantcomponent (showed in black dotted line) and the piecewise linear component(showed in red dotted line). (e)-(g) represent the results of a 4-fold randomundersampling compressed sensing signal recovery experiment. Note that theproposed scheme recovers the Fourier coefficients of the signal; the ringingat the edges is associated by the inverse Fourier transform of the recoveredFourier coefficients.

GSLR algorithm over standard TV, TGV algorithm [4], andthe LORAKS method [10]. For all of the experiments, wehave manually tuned the parameters to ensure the optimalperformance in each scenario. Specifically, we determine theparameters to obtain the optimized signal-to-noise ratio (SNR)to ensure fair comparisons between different methods. TheSNR of the recovered image is computed as:

SNR = −10 log₁₀ ( ‖f_orig − f‖²_F / ‖f_orig‖²_F )    (38)

where f is the recovered image, f_orig is the original image, and ‖·‖_F is the Frobenius norm.
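The metric (38) is a straightforward function of the error energy; note that it grows as the error shrinks, e.g. reducing the error by a factor of 10 adds 20 dB. A minimal sketch with a toy constant image:

```python
import numpy as np

# SNR metric of (38): higher is better.
def snr(f, f_orig):
    return -10 * np.log10(np.linalg.norm(f_orig - f) ** 2
                          / np.linalg.norm(f_orig) ** 2)

f_orig = np.ones((8, 8))                   # toy reference image
assert np.isclose(snr(f_orig + 0.01, f_orig), 40.0)
# A 10x smaller error gains 20 dB
assert np.isclose(snr(f_orig + 0.001, f_orig) - snr(f_orig + 0.01, f_orig), 20.0)
```
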

In the experiments, we consider two types of undersampling trajectories: a radial trajectory with uniform angular spacing, and a 3-D variable density random retrospective undersampling trajectory. For the 3-D sampling pattern, since the readout direction is orthogonal to the image plane, such undersampling patterns can be implemented on the scanner.

We first study the performance of the proposed method for the recovery of a piecewise smooth phantom image from its noiseless k space data. We assume the data was sampled with 26 radial spokes in k space, with an approximate acceleration factor of 10.7. Fig. 3 (a) is the actual image; (b) to (g) are the recovered images using GSLR with a filter size of 51 × 51, GSLR with a filter size of 31 × 31, the 1st SLA and the 2nd SLA with a filter size of 31 × 31, TGV, and the standard TV, respectively. The second row shows the zoomed regions of the corresponding


Algorithm 1: GSLR image recovery algorithm

Initialization: ρ(0) ← A∗b, n ← 1, ε(0) > 0;
while n < Nmax do
  Step 1: Update of the weight matrices;
    Compute the Gram matrices G_i (i = 1, 2) using (33) and (34);
    Compute the eigendecompositions (λ_i(k), v_i(k)), k = 1, …, K, of G_i (i = 1, 2);
    Compute h_1(l) and h_2(m) using (36) and (37);
  Step 2: Least squares update;
    Compute the weight matrices S_1 and S_2 using (21) and (22);
    Solve the least squares problem
      min_{ρ1, ρ2}  λ_1 ‖S_1^(1/2) y_1‖²_F + λ_2 ‖S_2^(1/2) y_2‖²_F + ‖A(ρ_1 + ρ_2) − b‖²
    using the ADMM iterations (24) to (28);
  Choose ε(n) such that 0 < ε(n) ≤ ε(n−1);
  n ← n + 1;
end
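The alternation in Algorithm 1 between a weight update computed from a Gram matrix and a weighted least squares solve is the generic IRLS structure. The toy sketch below illustrates that structure on a plain low-rank matrix completion problem; it does not use the paper's Toeplitz operators, annihilation filters, or ADMM solver, and all names are illustrative:

```python
import numpy as np

def irls_lowrank_complete(M, mask, lam=1e-3, n_iter=100, eps0=1.0):
    """Toy IRLS for low-rank completion: alternate (i) a weight matrix
    derived from the Gram matrix of the current estimate with (ii) a
    weighted least squares fit to the observed entries."""
    X = (M * mask).astype(float)   # initialize with zero-filled observations
    eps = eps0
    for _ in range(n_iter):
        # Step 1: weight update via eigendecomposition of the Gram matrix
        G = X @ X.T
        lamb, V = np.linalg.eigh(G)
        w = 1.0 / np.sqrt(np.maximum(lamb, 0.0) + eps)
        W = (V * w) @ V.T          # W = (G + eps*I)^(-1/2), smoothed rank surrogate
        # Step 2: weighted least squares, solved column by column
        for j in range(X.shape[1]):
            D = np.diag(mask[:, j].astype(float))
            X[:, j] = np.linalg.solve(lam * W + D, D @ M[:, j])
        eps = max(eps / 2.0, 1e-6)  # decreasing smoothing: eps_n <= eps_(n-1)
    return X
```

The decreasing `eps` sequence mirrors the choice 0 < ε(n) ≤ ε(n−1) in Algorithm 1.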

[Fig. 3 panels: ORIG; GSLR 51×51, SNR 33.71; GSLR 31×31, SNR 33.28; 1st SLA 31×31, SNR 32.66; 2nd SLA 31×31, SNR 32.01; TGV, SNR 31.27; TV, SNR 30.83; error-image color scale 0 to 0.12.]

Fig. 3: Recovery of a piecewise smooth phantom image from around 10-fold undersampled measurements. (a): the actual image. (b)-(g): Reconstructions using the proposed GSLR method with filter size of 51 × 51, GSLR with filter size of 31 × 31, the 1st and 2nd SLA with filter size of 31 × 31, TGV, and the standard TV, respectively. (h)-(n): the zoomed regions of the area indicated by the red rectangle for the corresponding images. (o): the undersampling pattern. (q)-(v): the error images. Note that the images recovered using the TGV and TV methods present undersampling artifacts, indicated by the green arrows, while the GSLR methods outperform the other methods in recovering both the edges and the smooth regions, indicated by the red arrows.

images. (o) is the undersampling pattern, and (q) to (v) are the error images.

We observe that the structured low-rank algorithms outperform TGV and the standard TV algorithms in this scenario: the images recovered by the TGV and TV methods suffer from obvious undersampling artifacts, indicated by the green arrows. For the structured low-rank algorithms with filter size of 31 × 31, GSLR performs better than the 1st SLA and the 2nd SLA in recovering the edges, indicated by the red arrows. With the larger filter size (51 × 51), the GSLR method provides the best reconstruction, with an SNR improvement of around 3 dB over standard TV.

In the following experiments, we investigate the proposed GSLR method on the reconstruction of single-coil real MR images. The reconstruction of a brain MR image at an acceleration of 4 is shown in Fig. 4, where we compare the proposed GSLR method using different filter sizes with the first and second order SLA, the S-LORAKS method, TGV, and the standard TV method. Fig. 4 (a) is the original image. (b) to (h) are the reconstructions using GSLR with filter size of 51 × 51, GSLR with filter size 31 × 31, the 1st and 2nd SLA with filter size 31 × 31, S-LORAKS, TGV, and the standard TV. (i) to (p) are the zoomed versions of the images, indicated by the red rectangle. (q) is the variable density random undersampling pattern. (r) to (x) are the error images for the corresponding methods. Among all of the methods, GSLR performs best in preserving the details and provides the most accurately recovered image. Note that


[Fig. 4 panels: ORIG; GSLR 51×51, SNR 27.36; GSLR 31×31, SNR 26.72; 1st SLA 31×31, SNR 26.40; 2nd SLA 31×31, SNR 25.90; S-LORAKS, SNR 25.86; G-LORAKS, SNR 24.93; TGV, SNR 25.36; TV, SNR 24.71; sampling: 25%; error-image color scale 0 to 0.1.]

Fig. 4: Recovery of a brain MR image from 4-fold undersampled measurements. (a): the original image. (b)-(h): Reconstructions using GSLR with filter size of 51 × 51, GSLR with filter size 31 × 31, the 1st and 2nd SLA with filter size 31 × 31, S-LORAKS, TGV, and the standard TV, respectively. (i)-(p): The zoomed versions of the area shown in the red rectangle in (a). (q): The undersampling pattern. (r)-(x): the error images.

[Fig. 5 panels: ORIG; GSLR 51×51, SNR 27.08; 1st SLA 51×51, SNR 26.76; 2nd SLA 51×51, SNR 26.49; S-LORAKS, SNR 25.15; G-LORAKS, SNR 25.24; TGV, SNR 25.59; TV, SNR 24.15; sampling: 15%; error-image color scale 0 to 0.15.]

Fig. 5: Recovery of an ankle MR image from 6.7-fold undersampled measurements with a radial undersampling pattern. (a): The actual image. (b)-(h): The recovered images using GSLR, 1st SLA, and 2nd SLA with filter size of 51 × 51, S-LORAKS, G-LORAKS, TGV, and TV, respectively. (i)-(p): The zoomed versions of the images. (q): The undersampling pattern. (r)-(x): Error images using the corresponding methods. Note that the proposed GSLR method performs the best in preserving the edges and eliminating the artifacts caused by the undersampling, compared with the other methods.

by increasing the filter size from 31 × 31 to 51 × 51, the image quality is significantly improved, with only a modest increase in runtime (85 s versus 36 s).

We demonstrate the performance of the proposed method on the reconstruction of an ankle MR image at an approximate acceleration rate of 6.7 using the radial undersampling pattern in Fig. 5. In this experiment, we compare the proposed GSLR method with the 1st and 2nd SLA, S-LORAKS, G-LORAKS, TGV, and the standard TV. All of the structured low-rank methods use a filter size of 51 × 51. (a) is the original image. (b) to (h) are the reconstructed images using GSLR, 1st SLA, 2nd SLA, S-LORAKS, G-LORAKS, TGV, and TV, respectively. (i) to (p) are the zoomed versions of the images indicated by the red rectangle shown in (a). (q) is the radial undersampling pattern. (r) to (x) are the error images. We observe that the TV method gives a blurry reconstruction. The images recovered by the S-LORAKS, G-LORAKS, and TGV methods have undersampling artifacts, indicated by the green arrow. The structured low-rank methods provide improved results, among which GSLR performs better in preserving image details, indicated by the red arrows.

In Fig. 6, we experiment on a brain MR image using radial


undersampling pattern with an acceleration factor of around 4.8. (a) is the actual image. (b) to (h) are the reconstruction results using GSLR with filter size of 51 × 51, the 1st and 2nd SLA with filter size of 51 × 51, S-LORAKS, G-LORAKS, TGV, and the standard TV, respectively. The second row shows the zoomed versions of the red rectangular area in (a) for the different methods. (q) shows the undersampling pattern. (r) to (x) are the error images of the corresponding methods. We observe that the TV and TGV methods provide blurry reconstructions, while the LORAKS methods preserve fine details better but suffer from undersampling artifacts. Among the methods, GSLR provides the best reconstruction and improves the SNR by around 2 dB over standard TV.

In Fig. 7, we compare different methods on the recovery of a multi-coil MR dataset acquired using four coils from 8-fold undersampled measurements. The data was retrospectively undersampled using the variable density random undersampling pattern. (a) is the actual image. (b) to (g) show the reconstruction results using GSLR, the 1st and 2nd SLA with filter size of 31 × 31, S-LORAKS, G-LORAKS, and the standard TV, respectively. (h) to (n) are the zoomed regions indicated by the red rectangle. (o) is the undersampling pattern. (p) to (u) show the error images of the different methods. We observe that, compared with the other methods, GSLR performs best in preserving the image features and provides the recovered image with the highest SNR value. We show the phase images of all the datasets in Fig. 8. We note that all of the images are associated with reasonable phase variations, as expected from a typical MR acquisition. We note that GSLR relies on a compact representation of the image, enabled by its decomposition into piecewise constant and linear components. Since S-LORAKS and G-LORAKS do not exploit this property, we obtain improved reconstructions with filter sizes larger than 31 × 31.

The SNRs of the recovered images using variable density random undersampling patterns are shown in Table I, and the reconstruction results using radial undersampling patterns are shown in Table II. We compare the 1st and 2nd SLA, S-LORAKS, G-LORAKS, TGV, and the standard TV with the proposed GSLR method. For the structured low-rank algorithms, we compare the performance for different filter sizes. Specifically, we use three filter sizes, 15 × 15, 31 × 31, and 51 × 51, for the variable density undersampling experiments, and two filter sizes for the radial undersampling experiments. Note that when the filter size is 15 × 15, the results provided by GSLR are not comparable to the other methods in some cases. However, using larger filter sizes leads to significantly improved image quality. For filter sizes 31 × 31 and 51 × 51, GSLR consistently obtains the best results, with an SNR improvement of around 2-3 dB over standard TV. The reason is that the size of the filter specifies the type of curves or edges that its zero set can capture. Specifically, smaller filters can only represent simpler and smoother curves, while larger filters can represent complex shapes (see [11] for an illustration). When complex structures are present in the image, a smaller filter fails to capture the intricate details. We note that for most images, we need to use larger filters to ensure that the details are well captured. The use of larger filters is made possible by the proposed IRLS algorithm, which does not require us to explicitly compute the Toeplitz matrices.
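Avoiding the explicit Toeplitz matrices is possible because a Toeplitz matrix-vector product can be evaluated with FFTs after embedding the matrix in a circulant one; this is the standard trick behind such memory-efficient structured solvers (a generic sketch, not the paper's exact half-circulant operator):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Compute T @ x for the n-by-n Toeplitz matrix T with first column c and
    first row r (r[0] == c[0]) without forming T: embed T in a (2n-1)-point
    circulant matrix, whose action is a pointwise product in the Fourier domain."""
    n = len(c)
    col = np.concatenate([c, r[:0:-1]])   # first column of the circulant embedding
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, n=len(col)))
    return y[:n]
```

The cost is O(n log n) time and O(n) memory, versus O(n²) for the explicit matrix.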

V. CONCLUSION

We proposed a novel generalized structured low-rank algorithm to recover images from their undersampled k space measurements. We assume that an image can be modeled as the superposition of two piecewise smooth functions, namely a piecewise constant component and a piecewise linear component. Each component can be annihilated by multiplication with a bandlimited polynomial function, which yields a structured Toeplitz matrix. We formulate a combined regularized optimization problem by exploiting the low-rank property of the Toeplitz matrices. In order to solve the problem efficiently, we adapt the iteratively reweighted least squares method, which alternates between the computation of the annihilation filter and the solution of a least squares problem. We investigate the proposed algorithm on the compressed sensing reconstruction of single-coil and multi-coil MR images. Experiments show that the proposed algorithm provides more accurate recovery results compared with the state-of-the-art approaches.

REFERENCES

[1] N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia, “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microscopy Research and Technique, vol. 69, no. 4, pp. 260–266, 2006.

[2] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.

[3] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Physics in Medicine & Biology, vol. 53, no. 17, p. 4777, 2008.

[4] F. Knoll, K. Bredies, T. Pock, and R. Stollberger, “Second order total generalized variation (TGV) for MRI,” Magnetic Resonance in Medicine, vol. 65, no. 2, pp. 480–491, 2011.

[5] F. Knoll, C. Clason, K. Bredies, M. Uecker, and R. Stollberger, “Parallel imaging with nonlinear reconstruction using variational penalties,” Magnetic Resonance in Medicine, vol. 67, no. 1, pp. 34–41, 2012.

[6] S. Lefkimmiatis, A. Bourquard, and M. Unser, “Hessian-based norm regularization for image restoration with biomedical applications,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 983–995, 2012.

[7] Y. Hu and M. Jacob, “Higher degree total variation (HDTV) regularization for image recovery,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2559–2571, 2012.

[8] Y. Hu, G. Ongie, S. Ramani, and M. Jacob, “Generalized higher degree total variation (HDTV) regularization,” IEEE Transactions on Image Processing, vol. 23, no. 6, pp. 2423–2435, 2014.

[9] K. H. Jin, D. Lee, and J. C. Ye, “A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 480–495, 2016.

[10] J. P. Haldar, “Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI,” IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 668–681, 2014.

[11] G. Ongie and M. Jacob, “Off-the-grid recovery of piecewise constant images from few Fourier samples,” SIAM Journal on Imaging Sciences, vol. 9, no. 3, pp. 1004–1041, 2016.

[12] H. Pan, T. Blu, and P. L. Dragotti, “Sampling curves with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 62, no. 2, pp. 458–471, 2014.

[13] G. Ongie and M. Jacob, “Recovery of piecewise smooth images from few Fourier samples,” in Sampling Theory and Applications (SampTA), 2015 International Conference on. IEEE, 2015, pp. 543–547.


[Fig. 6 panels: ORIG; GSLR 51×51, SNR 26.62; 1st SLA 51×51, SNR 26.25; 2nd SLA 51×51, SNR 24.95; S-LORAKS, SNR 24.23; G-LORAKS, SNR 23.75; TGV, SNR 24.26; TV, SNR 23.65; sampling: 21%; error-image color scale 0 to 0.12.]

Fig. 6: Recovery of a brain MR image from 4.85-fold undersampled measurements. (a): The actual image. (b)-(h): The recovered images using GSLR, the 1st and 2nd SLA with 51 × 51 filter size, S-LORAKS, G-LORAKS, TGV, and the standard TV, respectively. (i)-(p): Zoomed versions of the red rectangular area for the different methods. (q): the undersampling pattern. (r)-(x): Error images of the corresponding methods.

[Fig. 7 panels: ORIG; GSLR 31×31, SNR 24.16; 1st SLA 31×31, SNR 23.68; 2nd SLA 31×31, SNR 23.39; S-LORAKS, SNR 21.53; G-LORAKS, SNR 20.83; TGV, SNR 21.70; TV, SNR 21.26; sampling: 12.5%; error-image color scale 0 to 0.12.]

Fig. 7: Recovery of a multi-coil brain MR dataset from 8-fold undersampled measurements. (a): The actual image. (b)-(g): The reconstructions using GSLR, the first and second order SLA with 31 × 31 filter size, the S-LORAKS and G-LORAKS methods, and the standard TV. (h)-(n): The zoomed regions. (o): The undersampling pattern. (p)-(u): The error images. Note that GSLR provides the most accurate reconstruction compared with the other methods, indicated by the red arrows.

[14] M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417–1428, 2002.

[15] I. Maravic and M. Vetterli, “Sampling and reconstruction of signals with finite rate of innovation in the presence of noise,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 2788–2805, 2005.

[16] G. Ongie, S. Biswas, and M. Jacob, “Convex recovery of continuous domain piecewise constant images from nonuniform Fourier samples,” IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 236–250, 2017.

[17] G. Ongie and M. Jacob, “A fast algorithm for convolutional structured low-rank matrix recovery,” IEEE Transactions on Computational Imaging, vol. 3, no. 4, pp. 535–550, 2017.

[18] K. H. Jin, D. Lee, and J. C. Ye, “A novel k-space annihilating filter method for unification between compressed sensing and parallel MRI,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on. IEEE, 2015, pp. 327–330.

[19] P. J. Shin, P. E. Larson, M. A. Ohliger, M. Elad, J. M. Pauly, D. B. Vigneron, and M. Lustig, “Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion,” Magnetic Resonance in Medicine, vol. 72, no. 4, pp. 959–970, 2014.

[20] J. P. Haldar and J. Zhuo, “P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data,” Magnetic Resonance in Medicine, vol. 75, no. 4, pp. 1499–1514, 2016.

[21] T. H. Kim, K. Setsompop, and J. P. Haldar, “LORAKS makes better SENSE: Phase-constrained partial Fourier SENSE reconstruction without phase calibration,” Magnetic Resonance in Medicine, vol. 77, no. 3, pp. 1021–1035, 2017.


Phantom              acc=2                       acc=4                       acc=5
filter size    [15,15] [31,31] [51,51]   [15,15] [31,31] [51,51]   [15,15] [31,31] [51,51]
First SLA       47.63   53.78   55.34     38.06   44.20   46.20     34.87   39.93   42.06
Second SLA      48.77   56.26   57.66     33.73   43.30   45.53     33.62   38.07   41.10
TGV             42.17   42.17   42.17     37.86   37.86   37.86     35.40   35.40   35.40
TV              41.31   41.31   41.31     35.88   35.88   35.88     35.06   35.06   35.06
GSLR            51.13   56.94   58.21     39.22   45.31   47.18     35.93   40.79   43.11

Brain Fig.4          acc=2                       acc=4                       acc=5
First SLA       34.93   36.14   36.72     24.85   26.40   26.80     23.00   24.66   25.24
Second SLA      34.10   35.67   36.31     24.28   25.90   26.43     22.63   24.30   24.73
TGV             33.09   33.09   33.09     25.36   25.36   25.36     23.34   23.34   23.34
TV              32.40   32.40   32.40     24.71   24.71   24.71     22.90   22.90   22.90
S-LORAKS        35.43   35.43   35.43     25.86   25.86   25.86     24.22   24.22   24.22
G-LORAKS        33.36   33.36   33.36     24.93   24.93   24.93     23.01   23.01   23.01
GSLR            35.34   36.38   37.19     25.34   26.72   27.36     23.58   25.03   26.07

Brain Fig.6          acc=2                       acc=4                       acc=5
First SLA       30.31   32.30   32.61     20.64   23.63   24.17     18.77   21.36   22.54
Second SLA      30.16   31.99   32.33     19.24   22.81   23.62     17.56   20.70   22.19
TGV             30.26   30.26   30.26     23.05   23.05   23.05     21.13   21.13   21.13
TV              30.10   30.10   30.10     22.65   22.65   22.65     20.83   20.83   20.83
S-LORAKS        29.83   29.83   29.83     22.71   22.71   22.71     21.02   21.02   21.02
G-LORAKS        28.69   28.69   28.69     22.06   22.06   22.06     20.69   20.69   20.69
GSLR            30.70   32.51   33.17     21.41   24.20   24.93     19.51   22.15   23.08

Ankle                acc=2                       acc=4                       acc=5
First SLA       37.80   38.13   38.42     30.01   30.89   31.17     27.37   28.26   28.50
Second SLA      37.96   38.38   38.61     29.65   30.69   30.90     26.65   27.31   28.08
TGV             36.89   36.89   36.89     30.43   30.43   30.43     28.05   28.05   28.05
TV              33.43   33.43   33.43     28.38   28.38   28.38     26.22   26.22   26.22
S-LORAKS        37.91   37.91   37.91     30.18   30.18   30.18     27.44   27.44   27.44
G-LORAKS        37.02   37.02   37.02     29.27   29.27   29.27     26.85   26.85   26.85
GSLR            38.05   38.47   39.05     30.46   31.00   31.66     27.96   28.43   28.96

Multi-coil           acc=4                       acc=6                       acc=8
First SLA       29.64   30.97   31.32     23.48   25.45   25.81     21.43   23.68   24.04
Second SLA      29.70   31.22   31.68     23.74   25.67   25.99     21.02   23.39   23.62
TGV             27.48   27.48   27.48     21.53   21.53   21.53     21.70   21.70   21.70
TV              26.68   26.68   26.68     22.15   22.15   22.15     21.26   21.26   21.26
S-LORAKS        27.83   27.83   27.83     23.92   23.92   23.92     21.53   21.53   21.53
G-LORAKS        27.22   27.22   27.22     22.81   22.81   22.81     20.83   20.83   20.83
GSLR            30.08   31.48   32.24     23.88   25.82   26.29     22.00   24.16   24.58

TABLE I: Comparison of MR image recovery algorithms using variable density random undersampling pattern

Fig. 8: Phase images of the real datasets. (a) Brain image in Fig. 4. (b) Ankle image in Fig. 5. (c) Brain image in Fig. 6. (d) Brain image in Fig. 7.

[22] A. Chambolle and P.-L. Lions, “Image recovery via total variation minimization and related problems,” Numerische Mathematik, vol. 76, no. 2, pp. 167–188, 1997.

[23] S. Setzer, G. Steidl, and T. Teuber, “Infimal convolution regularizations with discrete L1-type functionals,” Communications in Mathematical Sciences, vol. 9, no. 3, pp. 797–827, 2011.

[24] M. Holler and K. Kunisch, “On infimal convolution of TV-type functionals and applications to video and image reconstruction,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2258–2300, 2014.

[25] M. Schloegl, M. Holler, A. Schwarzl, K. Bredies, and R. Stollberger, “Infimal convolution of total generalized variation functionals for dynamic MRI,” Magnetic Resonance in Medicine, vol. 78, no. 1, pp. 142–155, 2017.

[26] J. Rasch, E.-M. Brinkmann, and M. Burger, “Joint reconstruction via coupled Bregman iterations with applications to PET-MR imaging,” Inverse Problems, vol. 34, no. 1, p. 014001, 2017.

[27] J. Rasch, V. Kolehmainen, R. Nivajarvi, M. Kettunen, O. Grohn, M. Burger, and E.-M. Brinkmann, “Dynamic MRI reconstruction from undersampled data with an anatomical prescan,” Inverse Problems, vol. 34, no. 7, p. 074001, 2018.

[28] R. Otazo, E. Candes, and D. K. Sodickson, “Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components,” Magnetic Resonance in Medicine, vol. 73, no. 3, pp. 1125–1136, 2015.

[29] E. J. Candes, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM (JACM), vol. 58, no. 3, p. 11, 2011.

[30] J. V. Velikina and A. A. Samsonov, “Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO),” Magnetic Resonance in Medicine, vol. 74, no. 5, pp. 1279–1290, 2015.

[31] G. Ongie and M. Jacob, “A fast algorithm for structured low-rank matrix recovery with applications to undersampled MRI reconstruction,” in Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016, pp. 522–525.

[32] Y. Hu, X. Liu, and M. Jacob, “Adaptive structured low-rank algorithm for MR image recovery,” arXiv preprint arXiv:1805.05013.

[33] E. Esser, “Applications of Lagrangian-based alternating direction methods and connections to split Bregman,” CAM Report, vol. 9, p. 31, 2009.


Brain Fig.4        acc≈4.8              acc≈6.7
filter size    [31,31] [51,51]      [31,31] [51,51]
First SLA       26.64   26.85        23.19   23.80
Second SLA      26.05   26.55        22.82   23.32
TGV             25.90   25.90        23.11   23.11
TV              25.26   25.26        22.59   22.59
S-LORAKS        26.31   26.31        22.85   22.85
G-LORAKS        25.97   25.97        21.30   21.30
GSLR            26.77   27.25        23.45   24.18

Brain Fig.6        acc≈4.8              acc≈6.7
First SLA       24.99   26.25        21.17   22.62
Second SLA      23.82   24.95        20.86   22.29
TGV             24.26   24.26        21.63   21.63
TV              23.65   23.65        20.88   20.88
S-LORAKS        24.23   24.23        21.20   21.20
G-LORAKS        23.75   23.75        21.53   21.53
GSLR            25.23   26.62        22.47   23.01

Ankle              acc≈4.8              acc≈6.7
First SLA       31.02   31.37        26.15   26.76
Second SLA      31.14   31.42        26.07   26.49
TGV             30.53   30.53        25.59   25.59
TV              28.53   28.53        24.15   24.15
S-LORAKS        31.03   31.03        25.15   25.15
G-LORAKS        29.89   29.89        25.24   25.24
GSLR            31.39   31.69        26.60   27.08

Multi-coil         acc=5.2              acc=10
First SLA       27.78   29.01        22.33   22.87
Second SLA      27.24   28.15        21.76   22.60
TGV             26.08   26.08        20.77   20.77
TV              24.92   24.92        18.53   18.53
S-LORAKS        27.32   27.32        21.01   21.01
G-LORAKS        26.58   26.58        20.22   20.22
GSLR            27.90   29.43        22.51   23.15

TABLE II: Comparison of MR image recovery algorithms using radial undersampling pattern