
Journal articles on the topic 'L0-norm optimization'

Consult the top 21 journal articles for your research on the topic 'L0-norm optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Jiehua, and Xiezhang Li. "A Smoothed l0-Norm and l1-Norm Regularization Algorithm for Computed Tomography." Journal of Applied Mathematics 2019 (June 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/8398035.

Abstract:
The nonmonotone alternating direction algorithm (NADA) was recently proposed for effectively solving a class of equality-constrained nonsmooth optimization problems and was applied to total variation minimization in image reconstruction, but the reconstructed images suffer from artifacts. Although l0-norm regularization can effectively preserve edges, the resulting problem is NP-hard. The smoothed l0-norm approximates the l0-norm as a limit of smooth convex functions and provides a smooth measure of sparsity in applications, and smoothed l0-norm regularization has become an attractive research topic in sparse image and signal recovery. In this paper, we present a combined smoothed l0-norm and l1-norm regularization algorithm using the NADA for image reconstruction in computed tomography, and we resolve the computational challenge resulting from the smoothed l0-norm minimization. Numerical experiments demonstrate that, compared with l1-norm regularization without the smoothed l0-norm, the proposed algorithm improves the quality of the reconstructed images at the same CPU-time cost and reduces the computation time significantly while maintaining the same image quality.
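For illustration, a minimal Python sketch of the smoothed l0-norm idea described above, using the common Gaussian surrogate exp(-x^2 / (2*sigma^2)); this is an assumed choice, since the paper's exact smoothing family is not given in the abstract:

import numpy as np

def smoothed_l0(x, sigma):
    # Gaussian-smoothed surrogate: n - sum_i exp(-x_i^2 / (2*sigma^2))
    # tends to the number of nonzero entries of x as sigma -> 0.
    x = np.asarray(x, dtype=float)
    return x.size - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 0.0, 3.0, -0.5, 0.0])        # true l0-norm is 2
for sigma in (1.0, 0.1, 0.01):
    print(sigma, smoothed_l0(x, sigma))         # approaches 2 as sigma shrinks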
2

Zhu, Jun, Changwei Chen, Shoubao Su, and Zinan Chang. "Compressive Sensing of Multichannel EEG Signals via lq Norm and Schatten-p Norm Regularization." Mathematical Problems in Engineering 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/2189563.

Abstract:
In Wireless Body Area Networks (WBAN), energy consumption is dominated by sensing and communication. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model has shown state-of-the-art performance in compressive sensing (CS) recovery of multichannel EEG signals. How to solve the resulting regularization problem, which involves the l0 norm and the rank function and is known to be NP-hard, is critical to the recovery results. SCLR uses the l1 norm and the nuclear norm as convex surrogates for the l0 norm and the rank function. However, the l1 norm and the nuclear norm cannot approximate the l0 norm and the rank well, because there are irreparable gaps between them. In this paper, an optimization model with the lq norm and the Schatten-p norm is proposed to enforce cosparsity and the low-rank property in the reconstructed multichannel EEG signals. An efficient iterative scheme is used to solve the resulting nonconvex optimization problem. Experimental results demonstrate that the proposed algorithm can significantly outperform existing state-of-the-art CS methods for compressive sensing of multichannel EEG signals.
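As background for the two nonconvex surrogates named here, a small Python sketch (illustrative only, not the paper's model) of the lq penalty on a vector and the Schatten-p penalty on a matrix:

import numpy as np

def lq_penalty(x, q):
    # sum_i |x_i|^q with 0 < q < 1; tends to the l0 norm (nonzero count) as q -> 0.
    return np.sum(np.abs(x) ** q)

def schatten_p_penalty(A, p):
    # sum_i sigma_i^p over the singular values, 0 < p < 1; tends to rank(A)
    # as p -> 0 and equals the nuclear norm when p = 1.
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p)

x = np.array([0.0, 2.0, 0.0, -1.0])
A = np.outer([1.0, 2.0], [3.0, 4.0]) + 1e-3 * np.eye(2)   # nearly rank-1 matrix
print(lq_penalty(x, 0.5), schatten_p_penalty(A, 0.5))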
3

Li, Xiezhang, Guocan Feng, and Jiehua Zhu. "An Algorithm of l1-Norm and l0-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection." International Journal of Biomedical Imaging 2020 (August 28, 2020): 1–6. http://dx.doi.org/10.1155/2020/8873865.

Abstract:
The l1-norm regularization has attracted attention for image reconstruction in computed tomography. The l0-norm of the gradients of an image provides a measure of the sparsity of the gradients of the image. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization problem effectively, using the nonmonotone alternating direction algorithm with a hard thresholding method. Numerical experiments indicate that the new algorithm achieves a marked improvement by incorporating l0-norm regularization.
4

Huang, Kaizhu, Danian Zheng, Irwin King, and Michael R. Lyu. "Arbitrary Norm Support Vector Machines." Neural Computation 21, no. 2 (February 2009): 560–82. http://dx.doi.org/10.1162/neco.2008.12-07-667.

Abstract:
Support vector machines (SVMs) are state-of-the-art classifiers. Typically, the L2-norm or the L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form; this builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors, 9.46% of the number on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity properties with a training speed over seven times faster.
5

Feng, Junjie, Yinan Sun, and XiuXia Ji. "High-Resolution ISAR Imaging Based on Improved Sparse Signal Recovery Algorithm." Wireless Communications and Mobile Computing 2021 (April 2, 2021): 1–7. http://dx.doi.org/10.1155/2021/5541116.

Abstract:
In order to solve the problem of high-resolution ISAR imaging with a limited number of pulses, an improved smoothed L0 norm (SL0) sparse signal reconstruction ISAR imaging algorithm is proposed. First, ISAR imaging is formulated as a minimum-L0-norm optimization problem. Second, a single-loop structure replaces the two nested loops of the SL0 algorithm, which increases the search density of the variable parameter to ensure recovery accuracy. Finally, a comparison step is added to keep the optimization solution moving along the steepest-descent gradient direction. Experimental results show that the proposed algorithm achieves a better imaging effect.
6

Wei, Ziran, Jianlin Zhang, Zhiyong Xu, Yongmei Huang, Yong Liu, and Xiangsuo Fan. "Gradient Projection with Approximate L0 Norm Minimization for Sparse Reconstruction in Compressed Sensing." Sensors 18, no. 10 (October 9, 2018): 3373. http://dx.doi.org/10.3390/s18103373.

Abstract:
In the reconstruction of sparse signals in compressed sensing, the reconstruction algorithm is required to recover the sparsest form of the signal. To minimize the objective function, minimum-norm algorithms and greedy pursuit algorithms are most commonly used. The minimum L1 norm algorithm has very high reconstruction accuracy, but this convex optimization algorithm cannot obtain the sparsest signal in the way the minimum L0 norm algorithm can. However, because the L0 norm method is a nonconvex problem, it is difficult to obtain the global optimal solution and the amount of computation required is huge. In this paper, a new algorithm is proposed that approximates the smoothed L0 norm starting from an approximate L2 norm. First, we set up an approximation function model of the sparse term; then the minimum of the objective function is found by gradient projection, and the weight of the sparse term in the objective function is adjusted adaptively according to the reconstruction error so that the sparse signal is reconstructed more accurately. Compared with the L2-norm pseudo-inverse and the L1 norm algorithm, the new algorithm has a lower reconstruction error in one-dimensional sparse signal reconstruction. In simulation experiments on two-dimensional image reconstruction, the new algorithm achieves a shorter reconstruction time and higher reconstruction accuracy than the commonly used greedy algorithms and minimum-norm algorithms.
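A generic gradient-projection sketch of this kind of smoothed-L0 recovery, in Python; the paper's specific approximation function and adaptive weighting are not reproduced here, and the Gaussian surrogate, step size, and sigma schedule below are assumptions:

import numpy as np

def smoothed_l0_recover(A, b, sigmas=(1.0, 0.5, 0.2, 0.1, 0.05), mu=1.0, inner=30):
    # Gradient step on a Gaussian-smoothed sparsity surrogate, then projection
    # back onto the measurement constraint {x : A x = b}.
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                                  # minimum-l2-norm starting point
    for sigma in sigmas:                            # gradually sharpen the surrogate
        for _ in range(inner):
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))   # shrink small entries
            x = x - A_pinv @ (A @ x - b)                        # project onto A x = b
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
x_hat = smoothed_l0_recover(A, A @ x_true)
print(np.round(x_hat[[3, 17, 41]], 2))              # typically close to the true nonzeros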
7

Li, Yuanqing, Andrzej Cichocki, and Shun-ichi Amari. "Analysis of Sparse Representation and Blind Source Separation." Neural Computation 16, no. 6 (June 1, 2004): 1193–234. http://dx.doi.org/10.1162/089976604773717586.

Abstract:
In this letter, we analyze a two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) in which fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coefficient matrix) with minimum l1-norm is unique with probability one, which can be obtained using a standard linear programming algorithm. The equivalence of the l1-norm solution and the l0-norm solution is also analyzed according to a probabilistic framework. If the obtained l1-norm solution is sufficiently sparse, then it is equal to the l0-norm solution with a high probability. Furthermore, the l1-norm solution is robust to noise, but the l0-norm solution is not, showing that the l1-norm is a good sparsity measure. These results can be used as a recoverability analysis of BSS, as discussed. The basis matrix in this article is estimated using a clustering algorithm followed by normalization, in which the matrix columns are the cluster centers of normalized data column vectors. Zibulevsky, Pearlmutter, Boll, and Kisilev (2000) used this kind of two-stage approach in underdetermined BSS. Our recoverability analysis shows that this approach can deal with the situation in which the sources are overlapped to some degree in the analyzed
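The linear-programming route to the minimum-l1-norm solution mentioned above can be sketched as follows; this is the standard basis-pursuit reformulation using SciPy's generic LP solver, illustrative rather than the authors' code:

import numpy as np
from scipy.optimize import linprog

def min_l1_solution(A, b):
    # Solve min ||x||_1 subject to A x = b via the split x = u - v with u, v >= 0,
    # so the objective becomes the linear function sum(u) + sum(v).
    n = A.shape[1]
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])                    # A (u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40); x_true[[2, 9, 30]] = [1.0, -2.0, 0.5]
x_hat = min_l1_solution(A, A @ x_true)
print(np.round(x_hat[[2, 9, 30]], 3))            # the sparse solution is typically recovered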
8

Liu, Ming-Ming, Chun-Xi Dong, Yang-Yang Dong, and Guo-Qing Zhao. "Superresolution 2D DOA Estimation for a Rectangular Array via Reweighted Decoupled Atomic Norm Minimization." Mathematical Problems in Engineering 2019 (July 8, 2019): 1–13. http://dx.doi.org/10.1155/2019/6797168.

Abstract:
This paper proposes a superresolution two-dimensional (2D) direction of arrival (DOA) estimation algorithm for a rectangular array based on the optimization of the atomic l0 norm and a series of relaxation formulations. The atomic l0 norm of the array response describes the minimum number of sources, which is derived from the atomic norm minimization (ANM) problem. However, the resolution is restricted and high computational complexity is incurred by using ANM for 2D angle estimation. Although an improved algorithm named decoupled atomic norm minimization (DAM) has a reduced computational burden, the resolution is still relatively low in terms of angle estimation. To overcome these limitations, we propose the direct minimization of the atomic l0 norm, which is demonstrated to be equivalent to a decoupled rank optimization problem in the positive semidefinite (PSD) form. Our goal is to solve this rank minimization problem and recover two decoupled Toeplitz matrices in which the azimuth-elevation angles of interest are encoded. Since rank minimization is an NP-hard problem, a novel sparse surrogate function is further proposed to effectively approximate the two decoupled rank functions. Then, the new optimization problem obtained through the above relaxation can be implemented via the majorization-minimization (MM) method. The proposed algorithm offers greatly improved resolution while maintaining the same computational complexity as the DAM algorithm. Moreover, it is possible to use a single snapshot for angle estimation without prior information on the number of sources, and the algorithm is robust to noise due to its iterative nature. In addition, the proposed surrogate function can achieve local convergence faster than existing functions.
9

Ma, Min, Yifei Liu, and Shixi Wang. "Sensitivity Field Optimization Algorithm for Electrical Capacitance Tomography Based on the Approximate L0 Norm" [基于近似L0范数的电容层析成像敏感场优化算法]. Laser & Optoelectronics Progress 58, no. 12 (2021): 1210025. http://dx.doi.org/10.3788/lop202158.1210025.

10

Li, Yujie, Benying Tan, Atsunori Kanemura, Shuxue Ding, and Wuhui Chen. "Analysis Sparse Representation for Nonnegative Signals Based on Determinant Measure by DC Programming." Complexity 2018 (April 24, 2018): 1–12. http://dx.doi.org/10.1155/2018/2685745.

Abstract:
Analysis sparse representation has recently emerged as an alternative approach to the synthesis sparse model. Most existing algorithms employ the l0-norm, which is generally NP-hard; others relax it with the l1-norm, which sometimes cannot promote adequate sparsity. Moreover, most of these algorithms focus on general signals and are not suitable for nonnegative signals, even though many signals, such as spectral data, are necessarily nonnegative. In this paper, we present a novel and efficient analysis dictionary learning algorithm for nonnegative signals with a determinant-type sparsity measure, which is convex and differentiable. The analysis sparse representation is cast as three subproblems: sparse coding, dictionary update, and signal update. Because the determinant-type sparsity measure results in a complex nonconvex optimization problem that cannot be easily solved by standard convex optimization methods, the proposed algorithm uses a difference of convex (DC) programming scheme to solve the nonconvex problem. According to our theoretical analysis and simulation study, the main advantage of the proposed algorithm is its greater dictionary learning efficiency, particularly compared with state-of-the-art algorithms. In addition, our proposed algorithm performs well in image denoising.
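For readers unfamiliar with DC programming, a toy Python sketch of the generic iteration, unrelated to the paper's specific dictionary-learning subproblems: to minimize f(x) = g(x) - h(x) with g and h convex, repeatedly linearize h and solve the resulting convex subproblem.

import numpy as np

def dc_iterate(grad_h, solve_linearized, x0, iters=50):
    # x_{k+1} = argmin_x  g(x) - <grad_h(x_k), x>
    x = x0
    for _ in range(iters):
        x = solve_linearized(grad_h(x))
    return x

# Toy instance: f(x) = (x - 2)^2 - 2|x|, i.e. g(x) = (x - 2)^2 and h(x) = 2|x|.
grad_h = lambda x: 2.0 * np.sign(x)              # a (sub)gradient of h
solve_linearized = lambda s: 2.0 + s / 2.0       # closed-form argmin of (x - 2)^2 - s*x
print(dc_iterate(grad_h, solve_linearized, x0=0.5))   # settles at x = 3, the minimizer of f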
11

Choi, Young-Seok. "A New Subband Adaptive Filtering Algorithm for Sparse System Identification with Impulsive Noise." Journal of Applied Mathematics 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/704231.

Abstract:
This paper presents a novel subband adaptive filter (SAF) for system identification where the impulse response is sparse and disturbed by impulsive noise. Benefiting from the use of l1-norm optimization and an l0-norm penalty on the weight vector in the cost function, the proposed l0-norm sign SAF (l0-SSAF) achieves both robustness against impulsive noise and remarkably improved convergence behavior compared with classical adaptive filters. Simulation results in the system identification scenario confirm that the proposed l0-norm SSAF is not only more robust but also faster and more accurate than its counterparts for sparse system identification in the presence of impulsive noise.
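The flavor of such an update can be sketched in Python with a simplified fullband (rather than subband) sign-error LMS filter carrying a smoothed-l0 zero-attraction term; the parameter names, the exp(-beta*|w|) attractor, and all constants below are illustrative assumptions, not the l0-SSAF design itself:

import numpy as np

def l0_sign_lms(x, d, taps=16, mu=0.01, kappa=1e-5, beta=10.0):
    # Sign of the error -> robustness to impulsive noise;
    # zero-attraction term (gradient of a smoothed l0 penalty) -> sparse weights.
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]            # current input regressor
        e = d[n] - w @ u                           # a priori error
        w += mu * np.sign(e) * u                   # sign-error LMS step
        w -= kappa * beta * np.sign(w) * np.exp(-beta * np.abs(w))   # pull small taps to 0
    return w

rng = np.random.default_rng(2)
x = rng.standard_normal(20000)
h = np.zeros(16); h[[1, 7]] = [0.9, -0.4]                          # sparse unknown system
d = np.convolve(x, h)[:len(x)] + 0.3 * rng.standard_t(df=2, size=len(x))  # impulsive noise
print(np.round(l0_sign_lms(x, d), 2))                              # estimate of h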
12

Paik, Ji Woong, Joon-Ho Lee, and Wooyoung Hong. "An Enhanced Smoothed L0-Norm Direction of Arrival Estimation Method Using Covariance Matrix." Sensors 21, no. 13 (June 27, 2021): 4403. http://dx.doi.org/10.3390/s21134403.

Abstract:
An enhanced smoothed l0-norm algorithm for the passive phased array system, which uses the covariance matrix of the received signal, is proposed in this paper. The SL0 (smoothed l0-norm) algorithm is a fast compressive-sensing-based DOA (direction-of-arrival) estimation algorithm that uses a single snapshot from the received signal. In the conventional SL0 algorithm, there are limitations in the resolution and the DOA estimation performance, since a single sample is used. If multiple snapshots are used, the conventional SL0 algorithm can improve performance in terms of the DOA estimation. In this paper, a covariance-fitting-based SL0 algorithm is proposed to further reduce the number of optimization variables when using multiple snapshots of the received signal. A cost function and a new null-space projection term of the sparse recovery for the proposed scheme are presented. In order to verify the performance of the proposed algorithm, we present the simulation results and the experimental results based on the measured data.
13

NAKACHI, Takayuki, and Hitoshi KIYA. "L0 Norm Optimization in Scrambled Sparse Representation Domain and Its Application to EtC System." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E103.A, no. 12 (December 1, 2020): 1589–98. http://dx.doi.org/10.1587/transfun.2020smp0027.

14

Yang, Jiaqi, Yi Cui, Fei Song, and Tao Lei. "Infrared Small Target Detection Based on Non-Overlapping Patch Model via l0-l1 Norm." Electronics 9, no. 9 (September 2, 2020): 1426. http://dx.doi.org/10.3390/electronics9091426.

Abstract:
Infrared small target detection technology has wide applications in many engineering fields, such as infrared early warning, infrared tracking, and infrared reconnaissance. Due to the tiny size of infrared small targets and the lack of shape and texture information, existing methods often leave residuals or miss the target. To address these issues, a novel method based on a non-overlapping patch (NOP) model with a joint l0-l1 norm is proposed, with the introduction of sparsity regularized principal component pursuit (SRPCP). The NOP model makes each patch lighter in the first place, reducing time consumption. The l0 norm enhances the sparsity of the target, while the l1 norm enhances the robustness of the algorithm under clutter. As an effective optimization method, SRPCP solves the NOP model well and achieves stable separation of low-rank and sparse components, thereby improving detection capability while efficiently suppressing the background. The proposed method ultimately yields favorable detection results. Extensive experiments demonstrate that the proposed method is competitive with state-of-the-art methods in terms of background suppression and true target detection. In addition, our method also reduces the computational time.
15

Huang, Yizhen, Yepeng Guan, and Jiawen Wang. "Matrix Completion with Ordering Relation Constraints and its Applications." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 04 (May 20, 2015): 1551003. http://dx.doi.org/10.1142/s0218001415510039.

Abstract:
We relax the equality constraints in the very general and well-known affine Schatten p-norm minimization problem into complete loss-function-based constraints. Owing to the imposed equality constraints, existing methods have only a limited degree of model flexibility in optimizing the objective energy function. With the proposed transformation, the decision variables in the objective function can directly achieve L0 norm minimization by enumerating the matrix rank (i.e., a matrix ordering constraint). We show that our new objective function is still reasonable, and that its minimum can be obtained by a more general form of the Fixed-Point Continuation framework with almost the same computational cost at each matrix-order enumeration. Experiments show that our algorithm performs well compared to its predecessor on several datasets and applications.
16

Feng, Junjie. "ISAR Imaging Based on Multiple Measurement Vector Model Sparse Signal Recovery Algorithm." Mathematical Problems in Engineering 2020 (July 13, 2020): 1–8. http://dx.doi.org/10.1155/2020/1743593.

Abstract:
An ISAR imaging algorithm based on a multiple measurement vector (MMV) model for block sparse signal recovery is proposed to improve ISAR imaging quality. First, the sparse imaging model is built, and the block sparse signal recovery algorithm based on the MMV model is applied to ISAR imaging. Then, a negative exponential function is proposed to approximate the block L0 norm, and the optimization solution of the smoothed function is obtained by constructing a decreasing sequence. Finally, correction steps are added to ensure that the block sparse signal solution moves along the steepest descent direction. Several simulations and experiments with real data verify that the proposed algorithm has advantages in imaging time and quality.
17

Xu, Peng, Yin Tian, Xu Lei, and Dezhong Yao. "Neuroelectric source imaging using 3SCO: A space coding algorithm based on particle swarm optimization and l0 norm constraint." NeuroImage 51, no. 1 (May 2010): 183–205. http://dx.doi.org/10.1016/j.neuroimage.2010.01.106.

18

Kim, Hojin, Young Kyung Lim, Youngmoon Goh, Chiyoung Jeong, Ui-Jung Hwang, Sang Hyoun Choi, Byungchul Cho, and Jungwon Kwak. "Plan optimization with L0-norm and group sparsity constraints for a new rotational, intensity-modulated brachytherapy for cervical cancer." PLOS ONE 15, no. 7 (July 28, 2020): e0236585. http://dx.doi.org/10.1371/journal.pone.0236585.

19

Chen, Yu, Dong Chen, and Xiufen Zou. "Inference of Biochemical S-Systems via Mixed-Variable Multiobjective Evolutionary Optimization." Computational and Mathematical Methods in Medicine 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/3020326.

Abstract:
Inference of biochemical systems (BSs) from experimental data is important for understanding how biochemical components interact with each other in vivo. However, it is not a trivial task, because BSs usually function with complex and nonlinear dynamics. As a popular ordinary differential equation (ODE) model, the S-System describes the dynamical properties of BSs by incorporating the power-law rule of biochemical reactions, but it is challenging to infer because it has many parameters to be determined. This work proposes a general method for the inference of S-Systems from experimental data, using a biobjective optimization (BOO) model and a specially designed mixed-variable multiobjective evolutionary algorithm (mv-MOEA). Since BSs are commonly sparse, we introduce binary variables indicating network connections to eliminate the difficulty of threshold presetting, and we take the data-fitting error and the L0-norm as the two objectives to be minimized in the BOO model. Then, a selection procedure that automatically trades off the two objectives is employed to choose the final inference results from the nondominated solutions obtained by the mv-MOEA. Inference results for the investigated networks demonstrate that our method can identify their dynamical properties well, although the automatic selection procedure sometimes ignores weak connections in BSs.
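For context, the S-System referred to here is the power-law ODE model dx_i/dt = alpha_i * prod_j x_j^(g_ij) - beta_i * prod_j x_j^(h_ij); a minimal Python simulation sketch with arbitrary illustrative parameters (not taken from the paper) is:

import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary 2-variable S-System: dx_i/dt = alpha_i*prod_j x_j**G[i,j] - beta_i*prod_j x_j**H[i,j]
alpha = np.array([2.0, 1.5]); beta = np.array([1.0, 1.0])
G = np.array([[0.0, -0.8], [0.5, 0.0]])   # production kinetic orders g_ij
H = np.array([[0.7, 0.0], [0.0, 0.6]])    # degradation kinetic orders h_ij

def s_system(t, x):
    production = alpha * np.prod(x ** G, axis=1)
    degradation = beta * np.prod(x ** H, axis=1)
    return production - degradation

sol = solve_ivp(s_system, (0.0, 20.0), [0.5, 0.5], t_eval=np.linspace(0.0, 20.0, 50))
print(np.round(sol.y[:, -1], 3))          # the two states settle toward a steady state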
20

Xiang, Jianhong, Huihui Yue, Xiangjun Yin, and Linyu Wang. "A New Smoothed L0 Regularization Approach for Sparse Signal Recovery." Mathematical Problems in Engineering 2019 (July 17, 2019): 1–12. http://dx.doi.org/10.1155/2019/1978154.

Abstract:
Sparse signal reconstruction, as the main link of compressive sensing (CS) theory, has attracted extensive attention in recent years. The essence of sparse signal reconstruction is how to recover the original signal accurately and effectively from an underdetermined linear system of equations (ULSE). For this problem, we propose a new algorithm called the regularization reweighted smoothed L0 norm minimization algorithm, abbreviated as the RRSL0 algorithm. Three innovations are made within the framework of this method: (1) a new smoothed function called the compound inverse proportional function (CIPF) is proposed; (2) a new reweighted function is proposed; and (3) a mixed conjugate gradient (MCG) method is proposed. In this algorithm, the reweighted function and the new smoothed function are combined as the sparsity-promoting objective, and the constraint condition ‖y − Φx‖₂² is taken as a deviation term. Together they constitute an unconstrained optimization problem under the Tikhonov regularization criterion, and the constructed MCG method is used to optimize the problem and achieve high-precision reconstruction of sparse signals under noisy conditions. Sparse signal recovery experiments on both simulated and real data show that the proposed RRSL0 algorithm performs better than other popular approaches and achieves state-of-the-art performance in signal and image processing.
21

Ishii, Yoshinao, Satoshi Koide, and Keiichiro Hayakawa. "Learning low-dimensional manifolds under the L0-norm constraint for unsupervised outlier detection." International Journal of Data Science and Analytics, July 10, 2021. http://dx.doi.org/10.1007/s41060-021-00269-x.

Abstract:
Unsupervised outlier detection without the need for clean data has attracted great attention because its low data collection cost makes it suitable for real-world problems. Reconstruction-based methods are popular approaches for unsupervised outlier detection. These methods decompose a data matrix into low-dimensional manifolds and an error matrix; samples with a large error are then detected as outliers. To achieve high outlier detection accuracy when the data are corrupted by large noise, the detection method should have the following two properties: (1) it should be able to decompose the data under an L0-norm constraint on the error matrix, and (2) it should be able to reflect the nonlinear features of the data in the manifolds. Despite significant efforts, no method with both of these properties exists. To address this issue, we propose a novel reconstruction-based method: "L0-norm constrained autoencoders (L0-AE)." L0-AE uses autoencoders to learn low-dimensional manifolds that capture the nonlinear features of the data and uses a novel optimization algorithm that can decompose the data under the L0-norm constraint on the error matrix. This novel L0-AE algorithm provably guarantees convergence of the optimization if the autoencoder is trained appropriately. The experimental results show that L0-AE is more robust, accurate, and stable than other unsupervised outlier detection methods, not only for artificial datasets with corrupted samples but also for artificial datasets with well-known outlier distributions and for real datasets. Additionally, the results show that the accuracy of L0-AE is moderately stable to changes in the parameter of the constraint term, and that for real datasets L0-AE achieves higher accuracy than the baseline non-robustified method for most parameter values.
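The core L0-constrained decomposition step can be illustrated with a much simpler stand-in for the autoencoder (a truncated SVD); everything below, including the alternation scheme and parameter choices, is an illustrative assumption rather than the L0-AE algorithm itself:

import numpy as np

def l0_constrained_decompose(X, rank=2, k=2, iters=10):
    # Alternate: (1) fit a low-dimensional reconstruction to X - S,
    #            (2) keep only the k largest reconstruction errors in S
    #                (an L0 constraint on the error matrix).
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # low-dimensional part
        R = X - L                                          # reconstruction errors
        S = np.zeros_like(X)
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-k:], X.shape)
        S[idx] = R[idx]
    return L, S

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 10))   # clean rank-2 data
X[5, 3] += 10.0; X[42, 7] -= 8.0                                   # two corrupted entries
L, S = l0_constrained_decompose(X)
print(np.argwhere(S != 0))                 # positions of the detected (corrupted) entries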