Abstract
Much research has recently been devoted to jointly sparse (JS) signal recovery from multiple measurement vectors using \(\ell _{2,1}\) regularization, which is often more effective than performing separate recoveries using standard sparse recovery techniques. However, JS methods are difficult to parallelize due to their inherent coupling. The variance based joint sparsity (VBJS) algorithm was recently introduced in Adcock et al. (SIAM J Sci Comput, submitted). VBJS is based on the observation that the pixel-wise variance across signals conveys information about their shared support, motivating the use of a weighted \(\ell _1\) JS algorithm, where the weights depend on the information learned from the calculated variance. Specifically, the \(\ell _1\) minimization should be more heavily penalized in regions where the corresponding variance is small, since it is likely that no signal is present there. This paper expands on the original method, notably by introducing weights that ensure accurate, robust, and cost-efficient recovery using both \(\ell _1\) and \(\ell _2\) regularization. Moreover, this paper shows that the VBJS method can be applied in situations where some of the measurement vectors may misrepresent the unknown signals or images of interest, as illustrated in several numerical examples.
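To fix ideas, the following minimal sketch (our illustration only; the weight design used in the paper is more refined, and the function and variable names are hypothetical) maps the pixel-wise variance across J sparse-domain approximations to weights that penalize low-variance pixels more heavily:

    import numpy as np

    def vbjs_weights(P, eps=1e-8):
        # P: J x N array whose rows are sparse-domain approximations of
        # the unknown signal, one per measurement vector.
        v = np.var(P, axis=0)      # pixel-wise variance across the J rows
        v = v / (v.max() + eps)    # normalize to [0, 1]
        return 1.0 - v             # small variance -> heavier l1 penalty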











Notes
Although there are subtle differences in the derivations and normalizations, the PA transform can be thought of as higher order total variation (HOTV). Because part of our investigation discusses parameter selection, which depends explicitly on \(||\mathcal {L} f||\), we will exclusively use the PA transform as it appears in [3] so as to avoid any confusion. Explicit formulations for the PA transform matrix can be found in [3]. We also note that the method can be easily adapted for other sparsifying transformations.
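Since on uniform grids the PA transform reduces to scaled higher order finite differences, the following minimal sketch (our illustration; it omits the normalization of [3]) builds the order-m difference operator, whose rows apply the alternating binomial stencil that annihilates polynomials of degree less than m:

    import numpy as np

    def hotv_matrix(n, m):
        # Order-m finite differences on n uniform grid points; the
        # uniform-grid HOTV analogue of the order-m PA transform.
        return np.diff(np.eye(n), n=m, axis=0)

    # A piecewise-polynomial signal has a sparse transform, with
    # nonzeros concentrated near its jump.
    x = np.concatenate([np.linspace(0, 1, 32), np.linspace(2, 3, 32)])
    sparse_rep = hotv_matrix(64, 3) @ x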
For this simple example, each of the \(K = 5\) false measurement vectors was formed by adding a single false data point, with height sampled from the corresponding distribution (binary, uniform, or Gaussian).
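A minimal sketch of this construction (the function name, spike location, and height parameter are our own illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)

    def add_false_point(y, dist="gaussian", height=1.0):
        # Corrupt a copy of the data vector y with a single false data
        # point whose height is sampled from the chosen distribution.
        y_false = y.astype(float).copy()
        idx = rng.integers(y.size)
        if dist == "binary":
            y_false[idx] += height * rng.choice([-1.0, 1.0])
        elif dist == "uniform":
            y_false[idx] += rng.uniform(-height, height)
        else:  # "gaussian"
            y_false[idx] += rng.normal(0.0, height)
        return y_false

    # K = 5 false measurement vectors, one per draw.
    Y_false = [add_false_point(np.zeros(64)) for _ in range(5)]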
Specifically, it approximates the jump function \([f](x) = f(x^+) - f(x^-)\) on a set of N grid points.
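For example, for the unit step
\[
f(x) = \begin{cases} 0, & x < 0,\\ 1, & x \ge 0, \end{cases}
\qquad
[f](x) = f(x^+) - f(x^-) = \begin{cases} 1, & x = 0,\\ 0, & \text{otherwise}, \end{cases}
\]
so the jump function vanishes away from the discontinuity and records the jump height at its location.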
References
Adcock, B., Gelb, A., Song, G., Sui, Y.: Joint sparse recovery based on variances. SIAM J. Sci. Comput. (submitted)
Ao, D., Wang, R., Hu, C., Li, Y.: A sparse SAR imaging method based on multiple measurement vectors model. Remote Sens. 9(3), 297 (2017)
Archibald, R., Gelb, A., Platte, R.B.: Image reconstruction from undersampled Fourier data using the polynomial annihilation transform. J. Sci. Comput. 67(2), 432–452 (2016)
Archibald, R., Gelb, A., Yoon, J.: Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 43(1), 259–279 (2005)
Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, vol. 408. Springer, New York (2011)
Beck, A.: Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB. SIAM (2014)
Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell_1\) minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)
Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3869–3872. IEEE (2008)
Chen, J., Huo, X.: Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process. 54(12), 4634–4643 (2006)
Cotter, S.F., Rao, B.D., Engan, K., Kreutz-Delgado, K.: Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 53(7), 2477–2488 (2005)
Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.S.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63(1), 1–38 (2010)
Deng, W., Yin, W., Zhang, Y.: Group sparse optimization by alternating direction method. Technical Report, Department of Computational and Applied Mathematics, Rice University, Houston, TX (2012)
Denker, D., Gelb, A.: Edge detection of piecewise smooth functions from undersampled Fourier data using variance signatures. SIAM J. Sci. Comput. 39(2), A559–A592 (2017)
Eldar, Y.C., Mishali, M.: Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 55(11), 5302–5316 (2009)
Eldar, Y.C., Rauhut, H.: Average case analysis of multichannel sparse recovery using convex relaxation. IEEE Trans. Inf. Theory 56(1), 505–519 (2010)
Fu, W., Li, S., Fang, L., Kang, X., Benediktsson, J.A.: Hyperspectral image classification via shape-adaptive joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 9(2), 556–567 (2016)
Glowinski, R., Le Tallec, P.: Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM Studies in Applied Mathematics. SIAM, Philadelphia (1989)
Keydel, E.R., Lee, S.W., Moore, J.T.: MSTAR extended operating conditions: a tutorial. In: Aerospace/Defense Sensing and Controls, pp. 228–242. International Society for Optics and Photonics (1996)
Leviatan, D., Temlyakov, V.N.: Simultaneous approximation by greedy algorithms. Adv. Comput. Math. 25(1), 73–90 (2006)
Li, C.: An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing. Ph.D. Thesis, Rice University, Houston, TX (2009)
Liu, L., Esmalifalak, M., Ding, Q., Emesih, V.A., Han, Z.: Detecting false data injection attacks on power grid by sparse optimization. IEEE Trans. Smart Grid 5(2), 612–621 (2014)
Liu, Q.Y., Zhang, Q., Gu, F.F., Chen, Y.C., Kang, L., Qu, X.Y.: Downward-looking linear array 3D SAR imaging based on multiple measurement vectors model and continuous compressive sensing. J. Sens. 2017, 1–12 (2017)
Liu, Y., Ma, J., Fan, Y., Liang, Z.: Adaptive-weighted total variation minimization for sparse data toward low-dose X-ray computed tomography image reconstruction. Phys. Med. Biol. 57(23), 7923 (2012)
Mishali, M., Eldar, Y.C.: Reduce and boost: Recovering arbitrary sets of jointly sparse vectors. IEEE Trans. Signal Process. 56(10), 4692–4702 (2008)
Monajemi, H., Jafarpour, S., Gavish, M., Donoho, D.L., Ambikasaran, S., Bacallado, S., Bharadia, D., Chen, Y., Choi, Y., Chowdhury, M., et al.: Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices. Proc. Nat. Acad. Sci. 110(4), 1181–1186 (2013)
Niculescu, C., Persson, L.E.: Convex Functions and Their Applications: A Contemporary Approach. Springer, New York (2006)
Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
Sanders, T., Gelb, A., Platte, R.B.: Composite SAR imaging using sequential joint sparsity. J. Comput. Phys. 338, 357–370 (2017)
Singh, A., Dandapat, S.: Weighted mixed-norm minimization based joint compressed sensing recovery of multi-channel electrocardiogram signals. Comput. Electr. Eng. 53, 203–218 (2016)
Steffens, C., Pesavento, M., Pfetsch, M.E.: A compact formulation for the \(\ell _{2,1}\) mixed-norm minimization problem. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4730–4734. IEEE (2017)
Tropp, J.A.: Algorithms for simultaneous sparse approximation. Part II: convex relaxation. Signal Process. 86(3), 589–602 (2006)
Tropp, J.A., Gilbert, A.C., Strauss, M.J.: Simultaneous sparse approximation via greedy pursuit. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), vol. 5, pp. v/721–v/724. IEEE (2005)
Tropp, J.A., Gilbert, A.C., Strauss, M.J.: Algorithms for simultaneous sparse approximation. Part I: greedy pursuit. Signal Process. 86(3), 572–588 (2006)
Wang, Y., Yang, J., Yin, W., Zhang, Y.: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(3), 248–272 (2008)
Wipf, D.P., Rao, B.D.: An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 55(7), 3704–3716 (2007)
Wright, S., Nocedal, J.: Numerical Optimization. Springer Series in Operations Research. Springer, New York (1999)
Xie, W., Deng, Y., Wang, K., Yang, X., Luo, Q.: Reweighted \(\ell _1\) regularization for restraining artifacts in FMT reconstruction images with limited measurements. Opt. Lett. 39(14), 4148–4151 (2014)
Yang, Z., Xie, L.: Enhancing sparsity and resolution via reweighted atomic norm minimization. IEEE Trans. Signal Process. 64(4), 995–1006 (2016)
Ye, F., Luo, H., Lu, S., Zhang, L.: Statistical en-route filtering of injected false data in sensor networks. IEEE J. Sel. Areas Commun. 23(4), 839–850 (2005)
Zhang, Y.: User’s guide for YALL1: your algorithms for \(\ell _1\) optimization. Technical Report TR09-17, Department of Computational and Applied Mathematics, Rice University (2009)
Zhao, B., Fei-Fei, L., Xing, E.P.: Online detection of unusual events in videos via dynamic sparse coding. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3313–3320. IEEE (2011)
Zheng, C., Li, G., Liu, Y., Wang, X.: Subspace weighted \(\ell _{2,1}\) minimization for sparse signal recovery. EURASIP J. Adv. Signal Process. 2012(1), 98 (2012)
Zhou, F., Wu, R., Xing, M., Bao, Z.: Approach for single channel SAR ground moving target imaging and motion parameter estimation. IET Radar Sonar Navig. 1(1), 59–66 (2007)
Additional information
Anne Gelb’s work is supported in part by Grants NSF-DMS 1502640, NSF-DMS 1732434, and AFOSR FA9550-15-1-0152. Approved for public release. PA Approval #: 88AWB-2017-6162.
Proof of Lemma 1
Proof (Lemma 1) Following the technique described in [22] for the non-weighted, one-dimensional case, let \(x\in \mathbb {R}^{N\times N}\) and \(w_{i,j} \ge 0\) for all \(i,j = 1,\ldots ,N\). We drop the vec notation for simplicity.
Define the objective function \(H:\mathbb {R}^{N\times N} \rightarrow \mathbb {R}\) as
To show H(x) is convex, we first observe that for \(\alpha \in (0,1)\) and \(p,q\in \mathbb {R}^{N\times N}\), we have
Applying (A.2) to H yields
Therefore H is convex; in fact, for \(p\ne q\) the inequality is strict, so H is strictly, indeed strongly, convex and thus coercive [6, 7, 28]. Hence there exists at least one solution \(\hat{x}\) of (4.6) [38].
The subdifferential of \(f(x) = ||x||_{1,w}\) is given element-wise as
where the requirement that the subdifferential contain the origin is precisely the first-order optimality condition for convex problems. According to (A.4), to minimize (A.1), each component \(\hat{x}_{i,j}\), \(i,j = 1,\ldots ,N\), must satisfy
If \({\hat{x}}_{i,j}\ne 0\), (A.5) yields
Since \(w_{i,j}/\beta >0\), (A.6) implies
Combining (A.6) and (A.7) gives
Thus, for \({\hat{x}}_{i,j} \ne 0\), we have
where we have used (A.7) and (A.8).
Conversely, we now show that \({\hat{x}}_{i,j} = 0\) if and only if
First assume that \({\hat{x}}_{i,j} = 0\). Then (A.10) follows from (A.5) since \(\beta > 0 \).
Now assume (A.10) holds for some \({\hat{x}}_{i,j} \ne 0\). By (A.5), \({\hat{x}}_{i,j}\) satisfies (A.7). Hence
which can only hold if \({\hat{x}}_{i,j} = 0\), contradicting our assumption. Hence \({\hat{x}}_{i,j} = 0\). Combining (A.10) with (A.9) yields
which is equivalent to (4.7) in matrix form.
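The minimizer derived above is the weighted soft-thresholding (shrink) operator. A minimal sketch, assuming (4.6) has the standard proximal form \(\min _x ||x||_{1,w} + \frac{\beta }{2}||x-z||_2^2\) for given data \(z\) (our notation):

    import numpy as np

    def weighted_shrink(z, w, beta):
        # Elementwise weighted soft-thresholding, matching (4.7):
        # x_hat = sign(z) * max(|z| - w/beta, 0).
        return np.sign(z) * np.maximum(np.abs(z) - w / beta, 0.0)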
About this article
Cite this article
Gelb, A., Scarnati, T. Reducing Effects of Bad Data Using Variance Based Joint Sparsity Recovery. J Sci Comput 78, 94–120 (2019). https://doi.org/10.1007/s10915-018-0754-2