SCIENTIA SINICA Informationis, Volume 47, Issue 1: 58-72 (2017). https://doi.org/10.1360/N112016-00097

Approximate message passing algorithm for L_1/2 regularization

  • Received: Jun 14, 2016
  • Accepted: Jul 8, 2016
  • Published: Dec 2, 2016



[1] Akaike H. Information Theory and an Extension of the Maximum Likelihood Principle//Selected Papers of Hirotugu Akaike. New York: Springer, 1998. 199-213.

[2] Schwarz G. Estimating the dimension of a model. Ann Stat, 1978, 6: 461-464.

[3] Mallows C L. Some comments on $C_{p}$. Technometrics, 1973, 15: 661-675.

[4] Tibshirani R. Regression shrinkage and selection via the lasso. J Royal Stat Soc Ser B, 1996, 58: 267-288.

[5] Donoho D L, Huo X. Uncertainty principles and ideal atomic decomposition. IEEE Trans Inf Theory, 2001, 47: 2845-2862.

[6] Donoho D L, Elad M. Maximal sparsity representation via $L_{1}$ minimization. Proc Natl Acad Sci, 2003, 100: 2197-2202.

[7] Chen S S, Donoho D L, Saunders M A. Atomic decomposition by basis pursuit. SIAM Review, 2001, 43: 129-159.

[8] Candes E J, Tao T. The Dantzig selector: Statistical estimation when $p$ is much larger than $n$. Quality Contr Appl Stat, 2009, 54: 83-84.

[9] Fan J Q, Li R Z. Statistical challenges with high dimensionality: feature selection in knowledge discovery. arXiv:math/0602133.

[10] Zhang C H. Nearly unbiased variable selection under minimax concave penalty. Ann Stat, 2010, 38: 894-942.

[11] Xu Z B, Zhang H, Wang Y, et al. $L_{\frac{1}{2}}$ regularization. Sci China Inf Sci, 2010, 53: 1159-1169.

[12] Zhang H, Xu Z B, Wang Y, et al. A sharp nonasymptotic bound and phase diagram of $L_{\frac{1}{2}}$ regularization. Acta Math Sin, 2014, 30: 1242-1258.

[13] Liang Y, Liu C, Luan X Z, et al. Sparse logistic regression with an $L_{\frac{1}{2}}$ penalty for gene selection in cancer classification. BMC Bioinformatics, 2013, 14: 1-12.

[14] Xu Z B, Chang X Y, Xu F M, et al. $L_{\frac{1}{2}}$ regularization: A thresholding representation theory and a fast solver. IEEE Trans Neural Netw Learn Syst, 2012, 23: 1013-1027.

[15] Jordan M I. Graphical models. Stat Sci, 2004, 19: 140-155.

[16] Donoho D L, Maleki A, Montanari A. Message-passing algorithms for compressed sensing. Proc Natl Acad Sci, 2009, 106: 18914-18919.

[17] Donoho D L, Maleki A, Montanari A. How to design message passing algorithms for compressed sensing. Preprint, 2011.

[18] Candes E J, Romberg J K, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math, 2006, 59: 1207-1223.

[19] Donoho D L. Compressed sensing. IEEE Trans Inf Theory, 2006, 52: 1289-1306.

[20] Majumdar A, Ward R K. Compressed sensing of color images. Signal Process, 2010, 90: 3122-3127.

[21] Shi G M, Liu D H, Gao D H, et al. Advances in theory and application of compressed sensing. Acta Electron Sin, 2009, 37: 1070-1081 [石光明, 刘丹华, 高大化, 等. 压缩感知理论及其研究进展. 电子学报, 2009, 37: 1070-1081].

[22] Zhang H, Liang Y, Xu Z B, et al. Compressive sensing with noise based on SCAD penalty. Acta Math Sin, 2013, 56: 767-776.

[23] Donoho D L. High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete Comput Geom, 2006, 35: 617-652.

[24] Zhang H, Liang Y, Gou H L, et al. The essential ability of sparse reconstruction of different compressive sensing strategies. Sci China Inf Sci, 2012, 55: 2582-2589.