Academic literature on the topic 'L0-norm optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'L0-norm optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "L0-norm optimization"

1

Zhu, Jiehua, and Xiezhang Li. "A Smoothed l0-Norm and l1-Norm Regularization Algorithm for Computed Tomography." Journal of Applied Mathematics 2019 (June 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/8398035.

Full text
Abstract:
The nonmonotone alternating direction algorithm (NADA) was recently proposed for effectively solving a class of equality-constrained nonsmooth optimization problems and has been applied to total variation minimization in image reconstruction, but the reconstructed images suffer from artifacts. Although l0-norm regularization can effectively retain edges, the problem is NP-hard. The smoothed l0-norm approximates the l0-norm as a limit of smooth convex functions and provides a smooth measure of sparsity in applications. The smoothed l0-norm regularization has been an attractive research topic in sparse image and signal recovery. In this paper, we present a combined smoothed l0-norm and l1-norm regularization algorithm using the NADA for image reconstruction in computed tomography. We resolve the computational challenge arising from the smoothed l0-norm minimization. The numerical experiments demonstrate that the proposed algorithm improves the quality of the reconstructed images at the same CPU cost, and reduces the computation time significantly while maintaining the same image quality compared with l1-norm regularization without the smoothed l0-norm.
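The smoothed l0-norm referred to in this abstract is commonly built from Gaussian functions; a minimal numerical sketch of that idea (a generic construction for illustration, not the paper's exact formulation):

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Smooth approximation of the l0 norm: sum_i (1 - exp(-x_i^2 / (2 sigma^2))).

    As sigma -> 0 this tends to the exact count of non-zero entries,
    while remaining differentiable for any sigma > 0.
    """
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 0.0, 3.0, -1.5, 0.0])
# With a small sigma, the smooth measure is close to ||x||_0 = 2.
val = smoothed_l0(x, sigma=1e-3)
```

Because the surrogate is differentiable, gradient-based solvers can be applied where the true l0 count cannot.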
APA, Harvard, Vancouver, ISO, and other styles
2

Zhu, Jun, Changwei Chen, Shoubao Su, and Zinan Chang. "Compressive Sensing of Multichannel EEG Signals via lq Norm and Schatten-p Norm Regularization." Mathematical Problems in Engineering 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/2189563.

Full text
Abstract:
In Wireless Body Area Networks (WBAN), energy consumption is dominated by sensing and communication. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model has shown state-of-the-art performance in compressive sensing (CS) recovery of multichannel EEG signals. How to solve the resulting regularization problem, which involves the l0 norm and the rank function and is known to be NP-hard, is critical to the recovery results. SCLR makes use of the l1 norm and the nuclear norm as convex surrogates for the l0 norm and the rank function. However, the l1 norm and the nuclear norm cannot approximate the l0 norm and the rank well because there are irreparable gaps between them. In this paper, an optimization model with the lq norm and the Schatten-p norm is proposed to enforce cosparsity and the low-rank property in the reconstructed multichannel EEG signals. An efficient iterative scheme is used to solve the resulting nonconvex optimization problem. Experimental results demonstrate that the proposed algorithm can significantly outperform existing state-of-the-art CS methods for compressive sensing of multichannel EEG signals.
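The lq quasi-norm used here to close the gap between l1 and l0 can be illustrated directly; a small sketch (generic definition, not the paper's full recovery model):

```python
import numpy as np

def lq_quasi_norm(x, q):
    """sum_i |x_i|^q for 0 < q <= 1: interpolates between the l0 count
    (as q -> 0) and the convex l1 norm (q = 1)."""
    return np.sum(np.abs(x)**q)

x = np.array([0.0, 0.5, 2.0])
# As q shrinks, the measure approaches ||x||_0 = 2.
vals = [lq_quasi_norm(x, q) for q in (1.0, 0.5, 0.1)]
```

For q < 1 the measure is nonconvex, which is why the paper resorts to an iterative scheme rather than a single convex solve.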
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Xiezhang, Guocan Feng, and Jiehua Zhu. "An Algorithm of l1-Norm and l0-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection." International Journal of Biomedical Imaging 2020 (August 28, 2020): 1–6. http://dx.doi.org/10.1155/2020/8873865.

Full text
Abstract:
The l1-norm regularization has attracted attention for image reconstruction in computed tomography. The l0-norm of the gradients of an image provides a measure of the sparsity of the image's gradients. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework that solves the optimization effectively using the nonmonotone alternating direction algorithm with a hard-thresholding method. Numerical experiments indicate that the new algorithm achieves a marked improvement by incorporating l0-norm regularization.
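The hard-thresholding step mentioned in this abstract corresponds to the proximal operator of an l0 penalty; a sketch of the standard scalar rule (a textbook form, not necessarily the authors' exact implementation):

```python
import numpy as np

def hard_threshold(x, lam):
    """Proximal operator of lam * ||x||_0: keep entries with |x_i| > sqrt(2*lam),
    set the rest exactly to zero (the standard hard-thresholding rule)."""
    out = x.copy()
    out[np.abs(out) <= np.sqrt(2.0 * lam)] = 0.0
    return out

x = np.array([0.1, -2.0, 0.5, 3.0])
y = hard_threshold(x, lam=0.5)  # zeroes the entries with |x_i| <= 1
```

Unlike soft thresholding (the l1 prox), surviving entries are not shrunk, which is why l0-based schemes retain edges better.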
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Kaizhu, Danian Zheng, Irwin King, and Michael R. Lyu. "Arbitrary Norm Support Vector Machines." Neural Computation 21, no. 2 (February 2009): 560–82. http://dx.doi.org/10.1162/neco.2008.12-07-667.

Full text
Abstract:
Support vector machines (SVMs) are state-of-the-art classifiers. Typically the L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing the explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors, 9.46% of the number on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity properties with a training speed over seven times faster.
APA, Harvard, Vancouver, ISO, and other styles
5

Feng, Junjie, Yinan Sun, and XiuXia Ji. "High-Resolution ISAR Imaging Based on Improved Sparse Signal Recovery Algorithm." Wireless Communications and Mobile Computing 2021 (April 2, 2021): 1–7. http://dx.doi.org/10.1155/2021/5541116.

Full text
Abstract:
In order to solve the problem of high-resolution ISAR imaging under the condition of a finite number of pulses, an improved smoothed L0 norm (SL0) sparse signal reconstruction ISAR imaging algorithm is proposed. First, ISAR imaging is cast as a minimum-L0-norm optimization problem. Second, a single-loop structure replaces the two loop layers of the SL0 algorithm, which increases the search density of the variable parameter to ensure recovery accuracy. Finally, a comparison step is added to keep the optimization moving along the steepest-descent gradient direction. Experimental results show that the proposed algorithm achieves better imaging performance.
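The baseline that this paper modifies is the classic two-loop SL0 scheme of Mohimani et al.: maximize a smooth sparsity surrogate over the feasible set while gradually shrinking the smoothing parameter. A sketch of that baseline (illustrative, not the proposed single-loop variant; parameter values are common defaults, not from the paper):

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Classic smoothed-L0 (SL0) sketch: maximize sum_i exp(-x_i^2 / (2 sigma^2))
    over {x : Ax = b}, decreasing sigma gradually (graduated non-convexity)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-l2 feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))  # ascent direction
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - b)                  # project onto Ax = b
        sigma *= sigma_decrease
    return x

# Underdetermined system whose sparsest solution is (0, 0, 2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
x = sl0(A, b)
```

The final projection keeps the iterate exactly feasible, while the shrinking sigma drives small entries toward zero.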
APA, Harvard, Vancouver, ISO, and other styles
6

Wei, Ziran, Jianlin Zhang, Zhiyong Xu, Yongmei Huang, Yong Liu, and Xiangsuo Fan. "Gradient Projection with Approximate L0 Norm Minimization for Sparse Reconstruction in Compressed Sensing." Sensors 18, no. 10 (October 9, 2018): 3373. http://dx.doi.org/10.3390/s18103373.

Full text
Abstract:
In the reconstruction of sparse signals in compressed sensing, the reconstruction algorithm is required to recover the sparsest form of the signal. To minimize the objective function, minimum-norm algorithms and greedy pursuit algorithms are most commonly used. The minimum-L1-norm algorithm has very high reconstruction accuracy, but this convex optimization cannot obtain the sparsest signal as the minimum-L0-norm algorithm can. However, because the L0-norm method is a nonconvex problem, it is difficult to obtain the global optimal solution, and the amount of computation required is huge. In this paper, a new algorithm is proposed to approximate the smooth L0 norm starting from an approximate L2 norm. First, we set up an approximation function model of the sparse term; then the minimum of the objective function is found by gradient projection, and the weight of the sparse term's function model in the objective function is adjusted adaptively according to the reconstruction error to reconstruct the sparse signal more accurately. Compared with the L2-norm pseudo-inverse and the L1-norm algorithm, the new algorithm has a lower reconstruction error in one-dimensional sparse signal reconstruction. In simulation experiments on two-dimensional image reconstruction, the new algorithm yields shorter reconstruction times and higher reconstruction accuracy than the commonly used greedy and minimum-norm algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Yuanqing, Andrzej Cichocki, and Shun-ichi Amari. "Analysis of Sparse Representation and Blind Source Separation." Neural Computation 16, no. 6 (June 1, 2004): 1193–234. http://dx.doi.org/10.1162/089976604773717586.

Full text
Abstract:
In this letter, we analyze a two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) in which fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coefficient matrix) with minimum l1 norm is unique with probability one and can be obtained using a standard linear programming algorithm. The equivalence of the l1-norm solution and the l0-norm solution is also analyzed within a probabilistic framework. If the obtained l1-norm solution is sufficiently sparse, then it is equal to the l0-norm solution with high probability. Furthermore, the l1-norm solution is robust to noise, but the l0-norm solution is not, showing that the l1 norm is a good sparsity measure. These results can be used as a recoverability analysis of BSS, as discussed. The basis matrix in this article is estimated using a clustering algorithm followed by normalization, in which the matrix columns are the cluster centers of normalized data column vectors. Zibulevsky, Pearlmutter, Bofill, and Kisilev (2000) used this kind of two-stage approach in underdetermined BSS. Our recoverability analysis shows that this approach can deal with the situation in which the sources are overlapped to some degree in the analyzed data matrix.
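The minimum-l1 solution via linear programming discussed in this abstract is the classic basis-pursuit LP; a small sketch using the standard split x = u − v with u, v ≥ 0 (`basis_pursuit` is an illustrative name, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, posed as an LP via x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Underdetermined system; the l1-minimal solution here is also the sparsest.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
x = basis_pursuit(A, b)
```

This recovers x = (0, 0, 2), matching the letter's point that a sufficiently sparse l1 solution coincides with the l0 solution.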
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Ming-Ming, Chun-Xi Dong, Yang-Yang Dong, and Guo-Qing Zhao. "Superresolution 2D DOA Estimation for a Rectangular Array via Reweighted Decoupled Atomic Norm Minimization." Mathematical Problems in Engineering 2019 (July 8, 2019): 1–13. http://dx.doi.org/10.1155/2019/6797168.

Full text
Abstract:
This paper proposes a superresolution two-dimensional (2D) direction of arrival (DOA) estimation algorithm for a rectangular array based on the optimization of the atomic l0 norm and a series of relaxation formulations. The atomic l0 norm of the array response describes the minimum number of sources, which is derived from the atomic norm minimization (ANM) problem. However, the resolution is restricted and high computational complexity is incurred by using ANM for 2D angle estimation. Although an improved algorithm named decoupled atomic norm minimization (DAM) has a reduced computational burden, the resolution is still relatively low in terms of angle estimation. To overcome these limitations, we propose the direct minimization of the atomic l0 norm, which is demonstrated to be equivalent to a decoupled rank optimization problem in the positive semidefinite (PSD) form. Our goal is to solve this rank minimization problem and recover two decoupled Toeplitz matrices in which the azimuth-elevation angles of interest are encoded. Since rank minimization is an NP-hard problem, a novel sparse surrogate function is further proposed to effectively approximate the two decoupled rank functions. Then, the new optimization problem obtained through the above relaxation can be implemented via the majorization-minimization (MM) method. The proposed algorithm offers greatly improved resolution while maintaining the same computational complexity as the DAM algorithm. Moreover, it is possible to use a single snapshot for angle estimation without prior information on the number of sources, and the algorithm is robust to noise due to its iterative nature. In addition, the proposed surrogate function can achieve local convergence faster than existing functions.
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Min, Yifei Liu, and Shixi Wang. "基于近似L0范数的电容层析成像敏感场优化算法 [Sensitive-Field Optimization Algorithm for Electrical Capacitance Tomography Based on an Approximate L0 Norm]." Laser & Optoelectronics Progress 58, no. 12 (2021): 1210025. http://dx.doi.org/10.3788/lop202158.1210025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Yujie, Benying Tan, Atsunori Kanemura, Shuxue Ding, and Wuhui Chen. "Analysis Sparse Representation for Nonnegative Signals Based on Determinant Measure by DC Programming." Complexity 2018 (April 24, 2018): 1–12. http://dx.doi.org/10.1155/2018/2685745.

Full text
Abstract:
Analysis sparse representation has recently emerged as an alternative to the synthesis sparse model. Most existing algorithms employ the l0-norm, whose minimization is generally NP-hard. Others employ the l1-norm to relax the l0-norm, which sometimes cannot promote adequate sparsity. Most of these existing algorithms focus on general signals and are not suitable for nonnegative signals. However, many signals are necessarily nonnegative, such as spectral data. In this paper, we present a novel and efficient analysis dictionary learning algorithm for nonnegative signals with a determinant-type sparsity measure that is convex and differentiable. The analysis sparse representation can be cast as three subproblems: sparse coding, dictionary update, and signal update. Because the determinant-type sparsity measure results in a complex nonconvex optimization problem that cannot easily be solved by standard convex optimization methods, the proposed algorithms use a difference of convex (DC) programming scheme to solve the nonconvex problem. According to our theoretical analysis and simulation study, the main advantage of the proposed algorithm is its greater dictionary learning efficiency, particularly compared with state-of-the-art algorithms. In addition, our proposed algorithm performs well in image denoising.
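The DC (difference-of-convex) scheme invoked here follows a generic pattern: write the objective as g(x) − h(x) with both g and h convex, then repeatedly linearize h at the current point and minimize the resulting convex surrogate. A toy sketch of one such DCA iteration (a didactic example, not the paper's dictionary-learning updates):

```python
import numpy as np

# Toy DC program: minimize f(x) = x^4 - 2x^2, decomposed as g(x) - h(x)
# with g(x) = x^4 and h(x) = 2x^2, both convex.
# DCA step: x_{k+1} = argmin_x g(x) - h'(x_k) * x.  Setting the derivative
# 4x^3 - 4x_k to zero gives the closed form x_{k+1} = cbrt(x_k).
x = 0.5
for _ in range(50):
    x = np.cbrt(x)
# The iterates converge to x = 1, a stationary point (here a global
# minimizer) of the nonconvex objective f.
```

Each step solves only a convex problem, which is the practical appeal of DC programming for nonconvex sparsity measures.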
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "L0-norm optimization"

1

Samarasinghe, Kasun M. "Sparse Signal Reconstruction Modeling for MEG Source Localization Using Non-convex Regularizers." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439304367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Soubies, Emmanuel. "Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4082/document.

Full text
Abstract:
This thesis is devoted to two problems encountered in signal and image processing. The first concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). Within this context, we propose to tackle the inverse problem with a variational approach and analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the model used. The proposed method has been shown to be able to reconstruct precisely a phantom sample of known geometry over a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and to observe known biological phenomena, all with an axial resolution of about 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least-squares criterion (l2-l0) within the context of exact continuous relaxations of this functional. First, we propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional that preserves its global minimizers and for which, from each local minimizer, a local minimizer of l2-l0 can be defined by simple thresholding. Moreover, we show that this relaxed functional eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used in various applications, showing the interest of minimizing the relaxation rather than directly minimizing the l2-l0 criterion. Finally, we propose a unified view of the continuous penalties of the literature within this exact reformulation framework.
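For reference, the CEL0 penalty has a closed form as commonly stated in the associated publications (with λ the l0 weight and aᵢ the norm of the i-th column of the observation matrix; notation may differ slightly from the thesis):

```latex
\Phi_{\mathrm{CEL0}}(x) = \sum_{i} \phi(a_i, \lambda; x_i), \qquad
\phi(a, \lambda; u) = \lambda - \frac{a^2}{2}\left(|u| - \frac{\sqrt{2\lambda}}{a}\right)^{2}
\mathbb{1}_{\left\{|u| \le \frac{\sqrt{2\lambda}}{a}\right\}}
```

The penalty equals the constant λ outside a neighborhood of zero, which is what preserves the global minimizers of the l2-l0 functional while smoothing it near the origin.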
APA, Harvard, Vancouver, ISO, and other styles
3

Ben Mhenni, Ramzi. "Méthodes de programmation en nombres mixtes pour l'optimisation parcimonieuse en traitement du signal." Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0008.

Full text
Abstract:
Sparse approximation aims to fit a linear model in a least-squares sense with a small number of non-zero components (the L0 "norm"). Due to its combinatorial nature, it is often addressed by suboptimal methods. It was recently shown, however, that exact resolution could be achieved through a mixed integer programming (MIP) reformulation solved by a generic solver implementing branch-and-bound techniques. This thesis addresses the L0-norm sparse approximation problem with tailored branch-and-bound resolution methods that exploit the mathematical structure of the problem. First, we show that each node evaluation amounts to solving an L1-norm problem, for which we propose dedicated methods. Then, we build an efficient exploration strategy exploiting the sparsity of the solution by activating the non-zero variables first in the tree search. The proposed method outperforms the CPLEX solver, reducing the computation time and making it possible to address larger problems. In the second part of the thesis, we propose and study MIP reformulations of the spectral unmixing problem with an L0-norm sparsity constraint and with more advanced structured sparsity constraints, which are usually addressed through relaxations in the literature. We show that, for problems of limited complexity (highly sparse solutions, good signal-to-noise ratio), such constraints can be accounted for exactly and improve the estimation quality over standard approaches.
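The combinatorial cost that branch-and-bound prunes can be made concrete with an exhaustive baseline; a toy sketch of exact L0-constrained least squares by support enumeration (illustrative only, not the thesis's algorithm; `best_subset` is a hypothetical helper name):

```python
import itertools
import numpy as np

def best_subset(A, b, k):
    """Exact L0-constrained least squares: enumerate every support of size
    <= k and solve the restricted least-squares problem on it.
    The cost is exponential in the number of columns, which is exactly
    what branch-and-bound strategies avoid exploring in full."""
    n = A.shape[1]
    best_x, best_err = np.zeros(n), np.linalg.norm(b)**2
    for size in range(1, k + 1):
        for support in itertools.combinations(range(n), size):
            cols = A[:, support]
            coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
            err = np.linalg.norm(b - cols @ coef)**2
            if err < best_err:
                best_err = err
                best_x = np.zeros(n)
                best_x[list(support)] = coef
    return best_x

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
x = best_subset(A, b, k=1)  # the 1-sparse solution uses only the third column
```

A branch-and-bound method reaches the same optimum while discarding most supports via bounds computed at each node.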
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "L0-norm optimization"

1

Kim, Hwa-Young, Rae-Hong Park, and Ji-Eun Lee. "Image Representation Using a Sparsely Sampled Codebook for Super-Resolution." In Research Developments in Computer Vision and Image Processing, 1–14. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4558-5.ch001.

Full text
Abstract:
In this chapter, the authors propose a Super-Resolution (SR) method using a vector quantization codebook and a filter dictionary. In the process of SR, the idea of compressive sensing is used to represent a sparsely sampled signal under the assumption that a combination of a small number of codewords can represent an image patch. A low-resolution image is obtained from an original high-resolution image degraded by blurring and down-sampling. The authors propose resolution enhancement using an alternative l1-norm minimization that handles the nonconvexity of the l0 norm and the sparsity of the l1 norm at the same time, where an iterative reweighted l1-norm minimization is used for optimization. After the reconstruction stage, because the optimization is performed on an image-patch basis, an additional deblurring or denoising step is used to enhance the image quality globally. Experimental results show that the proposed SR method provides highly efficient results.
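Iterative reweighted l1 minimization, as invoked here, typically alternates an l1 solve with a weight update of the commonly used form w_i = 1/(|x_i| + ε); a minimal sketch of that weight rule (the chapter's exact scheme may differ):

```python
import numpy as np

def reweight(x, eps=1e-2):
    """Reweighted-l1 weight update: w_i = 1 / (|x_i| + eps).
    Entries that are already large get small weights (penalized less),
    while near-zero entries get large weights, so the next weighted l1
    solve pushes them further toward zero -- mimicking the l0 penalty."""
    return 1.0 / (np.abs(x) + eps)

x = np.array([2.0, 0.001, -1.0])
w = reweight(x)
# The tiny middle entry receives by far the largest weight.
```

Each outer iteration thus sharpens the sparsity of the solution beyond what a single unweighted l1 solve achieves.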
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "L0-norm optimization"

1

Jiang, Aimin, Hon Keung Kwan, and Yanping Zhu. "Efficient design of FIR filters with minimum filter orders using l0-norm optimization." In 2014 International Conference on Digital Signal Processing (DSP). IEEE, 2014. http://dx.doi.org/10.1109/icdsp.2014.6900832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mhenni, Ramzi Ben, Sebastien Bourguignon, Marcel Mongeau, Jordan Ninin, and Herve Carfantan. "Sparse Branch and Bound for Exact Optimization of L0-Norm Penalized Least Squares." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053870.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Xinqi, Jun Wang, and Sam Kwong. "Sparse Nonnegative Matrix Factorization Based on a Hyperbolic Tangent Approximation of L0-Norm and Neurodynamic Optimization." In 2020 12th International Conference on Advanced Computational Intelligence (ICACI). IEEE, 2020. http://dx.doi.org/10.1109/icaci49185.2020.9177819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
