Title

SORTED L1/L2 MINIMIZATION FOR SPARSE SIGNAL RECOVERY

Alternative Title
基于排序 L1/L2 最优化模型的信号稀疏恢复 (Sparse Signal Recovery Based on the Sorted L1/L2 Optimization Model)
Name
郁俊杰
Name (Pinyin)
YU Junjie
Student ID
12132909
Degree Type
Master
Degree Discipline
0701 Mathematics
Subject Category / Professional Degree Category
07 Science
Supervisor
王超
Supervisor Affiliation
Department of Statistics and Data Science
Thesis Defense Date
2023-04-28
Submission Date
2023-06-29
Degree-Granting Institution
Southern University of Science and Technology (南方科技大学)
Place of Degree Conferral
Shenzhen
Abstract

In data science, signals of interest are often sparse, either in their original form or after a linear transformation. Compressed sensing (CS) is a popular technique for recovering such signals by exploiting their sparsity. The $L_0$ norm is the canonical sparsity-promoting penalty in CS, but because minimizing it is NP-hard, convex and nonconvex relaxations have been proposed in its place. Inspired by the success of $L_1/L_2$ regularization, we propose two variants of this regularization, the truncated $L_1/L_2$ and the sorted $L_1/L_2$ models, and explore their applications in sparse recovery. The truncated $L_1/L_2$ model splits the index set of a signal into two parts, placing the smaller magnitudes in the numerator of the $L_1/L_2$ ratio and the complementary set in the denominator, which makes the model more robust to interference from the large-magnitude indices. Because the model is strongly nonconvex, we solve the resulting minimization problem via the Alternating Direction Method of Multipliers (ADMM). Numerical experiments demonstrate its convergence and its superiority over competing methods in identifying the support and accurately recovering sparse solutions for oversampled discrete cosine sensing matrices, especially under high coherence. Building on the truncated $L_1/L_2$, we generalize the regularization to the sorted $L_1/L_2$, which orders the index set, assigning larger weights to the smaller-magnitude entries in the numerator and vice versa, so that the most significant indices are protected while sparsity is still promoted. This design preserves the truncated $L_1/L_2$ structure while converting "hard truncation" into "soft weighting" based on magnitude sorting. We formulate the sorted $L_1/L_2$ model in both the noiseless and noisy cases, and theoretically prove the existence of solutions in both settings. We then develop a Difference-of-Convex-Algorithm-type (DCA-type) scheme to solve these optimization problems.
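As a concrete illustration of the truncated ratio described above, the following sketch (our own minimal implementation, not code from the thesis; the split point `t` and the exact assignment of the two index sets are assumptions read off the abstract) evaluates a truncated $L_1/L_2$ objective:

```python
import numpy as np

def truncated_l1_over_l2(x, t):
    """Sketch of a truncated L1/L2 objective: the t largest-magnitude
    entries form the L2 denominator, and the remaining (smaller) entries
    form the L1 numerator, so large entries no longer dominate the ratio."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))[::-1]   # indices sorted by |x_i|, descending
    large, small = order[:t], order[t:]
    num = np.abs(x[small]).sum()          # L1 norm of the small-magnitude part
    den = np.linalg.norm(x[large])        # L2 norm of the large-magnitude part
    return num / den
```

For example, with x = (3, 0.1, 0.2, 4) and t = 2 the truncated ratio is 0.3/5 = 0.06, whereas the plain L1/L2 ratio is about 1.46: the two large entries are moved out of the numerator instead of swamping it.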
Furthermore, we formulate the resulting subproblem as a linear program, solving it with the Gurobi solver in the noiseless case and adopting ADMM in the noisy case. A series of numerical experiments demonstrates that the proposed methods outperform the state of the art in sparse signal recovery.
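The sorted variant replaces the hard split with magnitude-ordered weights. A minimal sketch under our own assumptions (a non-increasing weight vector `w` paired with the ascending-sorted magnitudes; the abstract does not pin down the denominator weighting, so the plain L2 norm is used here for simplicity):

```python
import numpy as np

def sorted_l1_over_l2(x, w):
    """Sketch of a sorted L1/L2 objective: magnitudes are sorted in
    ascending order and the i-th smallest is paired with weight w[i];
    with w non-increasing, small entries receive large weights in the
    numerator ("soft weighting" instead of "hard truncation")."""
    x = np.asarray(x, dtype=float)
    mags = np.sort(np.abs(x))             # magnitudes in ascending order
    return float(np.dot(w, mags) / np.linalg.norm(x))
```

With uniform weights this reduces to the plain $L_1/L_2$ ratio; down-weighting the large entries in the numerator reproduces the protective effect of truncation in a smoothed form.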

Keywords
Language
English
Training Type
Independently trained
Enrollment Year
2021
Year of Degree Conferral
2023-06
References

[1] DONOHO D L. Compressed Sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306.
[2] NATARAJAN B K. Sparse Approximate Solutions to Linear Systems[J]. SIAM Journal on Computing, 1995: 227-234.
[3] CANDES E J, TAO T. Decoding by Linear Programming[J]. IEEE Transactions on Information Theory, 2005, 51(12): 4203-4215.
[4] CANDES E, RECHT B. Exact Matrix Completion via Convex Optimization[J/OL]. Communications of the ACM, 2012, 55(6): 111-119. DOI: 10.1145/2184319.2184343.
[5] DONOHO D, ELAD M. Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via L1 Minimization[J]. Proceedings of the National Academy of Sciences, 2003, 100: 2197-2202.
[6] GRIBONVAL R, NIELSEN M. Sparse Representations in Unions of Bases[J]. IEEE Transactions on Information Theory, 2003, 49(12): 3320-3325.
[7] TIBSHIRANI R. Regression Shrinkage and Selection via the Lasso[J]. Journal of the Royal Statistical Society: Series B (Methodological), 1996, 58(1): 267-288.
[8] RAHIMI Y, WANG C, DONG H, et al. A Scale-Invariant Approach for Sparse Signal Recovery[J]. SIAM Journal on Scientific Computing, 2019, 41(6): A3649-A3672.
[9] MA T H, LOU Y, HUANG T Z. Truncated L1-L2 Models for Sparse Recovery and Rank Minimization[J/OL]. SIAM Journal on Imaging Sciences, 2017, 10(3): 1346-1380. DOI: 10.1137/16M1098929.
[10] CHEN G, GULLY S M, EDEN D. Validation of a New General Self-Efficacy Scale[J/OL]. Organizational Research Methods, 2001, 4(1): 62-83. DOI: 10.1177/109442810141004.
[11] LV J, FAN Y. A Unified Approach to Model Selection and Sparse Recovery Using Regularized Least Squares[J]. The Annals of Statistics, 2009: 3498-3528.
[12] ZIBULEVSKY M, ELAD M. L1-L2 Optimization in Signal and Image Processing[J]. IEEE Signal Processing Magazine, 2010, 27(3): 76-88.
[13] LOU Y, YAN M. Fast L1-L2 Minimization via a Proximal Operator[J]. Journal of Scientific Computing, 2018, 74(2): 767-785.
[14] BUI K, LOU Y, PARK F, et al. Efficient Image Segmentation Framework with Difference of Anisotropic and Isotropic Total Variation for Blur and Poisson Noise Removal[A]. 2023.
[15] YIN P, ESSER E, XIN J. Ratio and Difference of L1 and L2 Norms and Sparse Representation with Coherent Dictionaries[J]. Communications in Information and Systems, 2014, 14: 87-109.
[16] BI N, TANG W S. A Necessary and Sufficient Condition for Sparse Vector Recovery via ℓ1 − ℓ2 Minimization[J/OL]. Applied and Computational Harmonic Analysis, 2022, 56: 337-350. https://www.sciencedirect.com/science/article/pii/S1063520321000865. DOI: 10.1016/j.acha.2021.09.003.
[17] HE Z, HE H, LIU X, et al. An Improved Sufficient Condition for Sparse Signal Recovery With Minimization of L1-L2[J/OL]. IEEE Signal Processing Letters, 2022, 29: 907-911. DOI: 10.1109/LSP.2022.3158839.
[18] XIE H, HUANG J. SCAD-Penalized Regression in High-Dimensional Partially Linear Models[J]. The Annals of Statistics, 2009, 37(2): 673-696.
[19] WANG Y, YIN W. Sparse Signal Reconstruction via Iterative Support Detection[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 462-491.
[20] ZHANG C H. Nearly Unbiased Variable Selection under Minimax Concave Penalty[J]. The Annals of Statistics, 2010, 38(2): 894 - 942.
[21] CHARTRAND R. Exact Reconstruction of Sparse Signals via Nonconvex Minimization[J]. IEEE Signal Processing Letters, 2007, 14(10): 707-710.
[22] GUO W, LOU Y, QIN J, et al. A novel regularization based on the error function for sparse recovery[J]. Journal of Scientific Computing, 2021, 87(1): 31.
[23] HU M, LOU Y, WANG B, et al. Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum[J]. Journal of Scientific Computing, 2023, 95(1): 33.
[24] HU Y, ZHANG D, YE J, et al. Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(9): 2117-2130.
[25] WANG Y, YIN W. Sparse Signal Reconstruction via Iterative Support Detection[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 462-491.
[26] MA T H, LOU Y, HUANG T Z. Truncated L1 − L2 Models for Sparse Recovery and Rank Minimization[J]. SIAM Journal on Imaging Sciences, 2017, 10(3): 1346-1380.
[27] HUANG X L, SHI L, YAN M. Nonconvex Sorted L1 Minimization for Sparse Approximation[J]. Journal of the Operations Research Society of China, 2015, 3(2): 207-229.
[28] WANG C, YAN M, RAHIMI Y, et al. Accelerated Schemes for the L1/L2 Minimization[J]. IEEE Transactions on Signal Processing, 2020, 68: 2660-2669.
[29] WANG C, TAO M, NAGY J G, et al. Limited-angle CT reconstruction via the L1/L2 minimization[J]. SIAM Journal on Imaging Sciences, 2021, 14(2): 749-777.
[30] WANG C, TAO M, CHUAH C N, et al. Minimizing L1 over L2 Norms on the Gradient[J]. Inverse Problems, 2022, 38(6): 065011.
[31] PELEG T, GRIBONVAL R, DAVIES M E. Compressed Sensing and Best Approximation from Unions of Subspaces: Beyond Dictionaries[C]//21st European Signal Processing Conference (EUSIPCO 2013). 2013: 1-5.
[32] PRANGPRAKHON M, FEESANTIA T, NIMANA N. An Adaptive Projection Gradient Method for Solving Nonlinear Fractional Programming[J/OL]. Fractal and Fractional, 2022, 6(10). https://www.mdpi.com/2504-3110/6/10/566. DOI: 10.3390/fractalfract6100566.
[33] LI Q, SHEN L, ZHANG N, et al. A Proximal Algorithm with Backtracked Extrapolation for a Class of Structured Fractional Programming[J]. Applied and Computational Harmonic Analysis, 2022, 56: 98-122.
[34] PRANGPRAKHON M, FEESANTIA T, NIMANA N. An Adaptive Projection Gradient Method for Solving Nonlinear Fractional Programming[J]. Fractal and Fractional, 2022, 6(10): 566.
[35] BOŢ R I, DAO M N, LI G. Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs[J]. Mathematics of Operations Research, 2022, 47(3): 2415-2443.
[36] WRIGHT S J. Coordinate Descent Algorithms[J/OL]. Mathematical Programming, 2015, 151(1): 3–34. DOI: 10.1007/s10107-015-0892-3.
[37] BOYD S, PARIKH N, CHU E, et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers[J]. Foundations and Trends in Machine Learning, 2011, 3(1): 1-122.
[38] HIRIART-URRUTY J B, LEMARÉCHAL C. Convex Analysis and Minimization Algorithms[M]. Heidelberg: Springer Verlag, 1996.
[39] ZHANG G, HEUSDENS R. Distributed Optimization Using the Primal-Dual Method of Multipliers[J]. IEEE Transactions on Signal and Information Processing over Networks, 2017, 4(1): 173-187.
[40] GOLDSTEIN T, OSHER S. The Split Bregman Method for L1-Regularized Problems[J/OL]. SIAM Journal on Imaging Sciences, 2009, 2(2): 323-343. DOI: 10.1137/080725891.
[41] BECK A, TEBOULLE M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems[J]. SIAM Journal on Imaging Sciences, 2009, 2(1): 183-202.
[42] MA S, GOLDFARB D, CHEN L. Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization[J]. Mathematical Programming, 2011, 128(1): 321-353.
[43] FANNJIANG A, LIAO W. Coherence Pattern–Guided Compressive Sensing with Unresolved Grids[J]. SIAM Journal on Imaging Sciences, 2012, 5(1): 179-202.
[44] LOU Y, YIN P, HE Q, et al. Computing Sparse Representation in a Highly Coherent Dictionary Based on Difference of L1 and L2[J]. Journal of Scientific Computing, 2015, 64(1): 178-196.
[45] YIN P, LOU Y, HE Q, et al. Minimization of L1−2 for compressed sensing[J]. SIAM Journal on Scientific Computing, 2015, 37(1): A536-A563.
[46] ZENG L, YU P, PONG T K. Analysis and Algorithms for Some Compressed Sensing Models Based on L1/L2 Minimization[J]. SIAM Journal on Optimization, 2021, 31(2): 1576-1603.
[47] TAO M. Minimization of L1 Over L2 for Sparse Signal Recovery with Convergence Guarantee[J]. SIAM Journal on Scientific Computing, 2022, 44(2): A770-A797.
[48] VAVASIS S A. Derivation of Compressive Sensing Theorems from the Spherical Section Property[J]. University of Waterloo, CO, 2009, 769.
[49] ZHANG Y. Theory of Compressive Sensing via L1-Minimization: A Non-RIP Analysis and Extensions[J]. Journal of the Operations Research Society of China, 2013, 1(1): 79-105.
[50] ZHANG S, XIN J. Minimization of Transformed L1 Penalty: Theory, Difference of Convex Function Algorithm, and Robust Application in Compressed Sensing[J]. Mathematical Programming, 2018, 169: 307-336.
[51] FANNJIANG A, LIAO W. Coherence Pattern–Guided Compressive Sensing with Unresolved Grids[J/OL]. SIAM Journal on Imaging Sciences, 2012, 5(1): 179-202. https://doi.org/10.1137/110838509.
[52] GUROBI OPTIMIZATION. Gurobi Optimizer Reference Manual[Z/OL]. 2015. http://www.gurobi.com.

Degree Assessment Subcommittee
Mathematics
CLC Number
O224
Source
Manually submitted
Output Type
Degree thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/544591
Collection
College of Science / Department of Statistics and Data Science
Recommended Citation
GB/T 7714
Yu JJ. SORTED L1/L2 MINIMIZATION FOR SPARSE SIGNAL RECOVERY[D]. Shenzhen: Southern University of Science and Technology, 2023.
Files in This Item
File Name / Size | Document Type | Version | Access | License
12132909-郁俊杰-统计与数据科学 (1305KB) | -- | -- | Restricted access | -- | Request full text