Academic literature on the topic 'L1 norm support vector machine'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'L1 norm support vector machine.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "L1 norm support vector machine"

1. Qin, Chuandong, Zhenxia Xue, Quanxi Feng, and Xiaoyang Huang. "Selecting Parameters of an Improved Doubly Regularized Support Vector Machine based on Chaotic Particle Swarm Optimization Algorithm." JUCS - Journal of Universal Computer Science 23, no. 7 (2017): 603–18. https://doi.org/10.3217/jucs-023-07-0603.

Abstract: Taking full advantage of the L1-norm and L2-norm support vector machines, a new, improved doubly regularized support vector machine is proposed to analyze datasets with few samples, high dimensionality, and high correlations among subsets of the variables. A smooth function is used to approximate the non-differentiable L1-norm, and the steepest descent method is used to solve the model. However, the model's parameters complicate obtaining accurate experimental results. Exploiting the characteristics of chaotic systems, we propose a chaotic particle swarm optimization algorithm to select the model's parameters. Experiments show that the improved model achieves better results.
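
To make the smoothing idea concrete, here is a minimal sketch (not the authors' exact model) of a doubly regularized linear SVM in which each |w_j| is replaced by the differentiable surrogate sqrt(w_j^2 + eps), so the whole objective can be minimized by steepest descent. The squared hinge data term, the step size, and the regularization weights lam1/lam2 are illustrative assumptions.

    import numpy as np

    def gradient(w, X, y, lam1, lam2, eps=1e-6):
        # Squared hinge loss keeps the data term differentiable too.
        margins = np.maximum(0.0, 1.0 - y * (X @ w))
        grad_loss = -2.0 * X.T @ (margins * y)
        grad_l2 = 2.0 * lam2 * w
        grad_l1 = lam1 * w / np.sqrt(w ** 2 + eps)  # gradient of sum_j sqrt(w_j^2 + eps)
        return grad_loss + grad_l2 + grad_l1

    def steepest_descent(X, y, lam1=0.1, lam2=0.1, lr=1e-3, iters=500):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            w -= lr * gradient(w, X, y, lam1, lam2)
        return w

The chaotic particle swarm step of the paper would then search over (lam1, lam2) rather than fixing them by hand.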

2. Wang, Chunyan, Qiaolin Ye, Peng Luo, Ning Ye, and Liyong Fu. "Robust capped L1-norm twin support vector machine." Neural Networks 114 (June 2019): 47–59. http://dx.doi.org/10.1016/j.neunet.2019.01.016.

3. Wei, Li Wei, Qiang Xiao, Ying Zhang, and Xiong Fei Ji. "Credit Risk Evaluation Using a New Classification Model: L1-LS-SVM." Applied Mechanics and Materials 321-324 (June 2013): 1917–20. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1917.

Abstract: The least squares support vector machine (LS-SVM) has the outstanding advantage of lower computational complexity than the standard support vector machine. Its shortcomings are the loss of sparseness and robustness, which usually result in slow testing speed and poor generalization performance. In this paper, a least squares support vector machine with an L1 penalty (L1-LS-SVM) is proposed to address these shortcomings. A 1-norm-based objective function is minimized to obtain a sparse and robust solution over the whole feasible region, following the idea of basis pursuit (BP). Several UCI datasets are used to demonstrate the effectiveness of this model. The experimental results show that L1-LS-SVM obtains a small number of support vectors and improves the generalization ability of LS-SVM.
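
The core idea, an L1 (basis pursuit style) penalty in place of the usual L2 penalty of LS-SVM, can be sketched with a generic proximal gradient (ISTA) loop. This illustrates the L1-penalized least-squares formulation only, not the paper's exact algorithm, and the bias term is omitted for brevity.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the L1 norm
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def l1_ls_svm(X, y, C=1.0, iters=2000):
        # Minimize ||w||_1 + C * sum_i (1 - y_i * x_i.w)^2 by proximal gradient.
        L = 2.0 * C * np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth term
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            r = 1.0 - y * (X @ w)                # LS-SVM-style residuals
            grad = -2.0 * C * X.T @ (y * r)      # gradient of the squared-error term
            w = soft_threshold(w - grad / L, 1.0 / L)
        return w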

4. Li, Chun-Na, Yuan-Hai Shao, and Nai-Yang Deng. "Robust L1-norm non-parallel proximal support vector machine." Optimization 65, no. 1 (2015): 169–83. http://dx.doi.org/10.1080/02331934.2014.994627.

5. Huang, Kaizhu, Danian Zheng, Irwin King, and Michael R. Lyu. "Arbitrary Norm Support Vector Machines." Neural Computation 21, no. 2 (2009): 560–82. http://dx.doi.org/10.1162/neco.2008.12-07-667.

Abstract: Support vector machines (SVMs) are state-of-the-art classifiers. Typically, the L2-norm or L1-norm is adopted as the regularization term in an SVM, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm is a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form, building a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors, roughly 9.46% of the number on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity properties, with a training speed over seven times faster.
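
The paper's Bayesian, SMO-based framework is beyond a short snippet, but a common way to approximate an Lp penalty (p < 2, approaching the L0 behavior as p shrinks) is iterative reweighting, where the Lp term is majorized by a per-coordinate weighted L2 term. The sketch below applies this to a least-squares (LS-SVM-style) classifier; it is a generic stand-in under those assumptions, not the authors' algorithm.

    import numpy as np

    def irls_lp_classifier(X, y, p=0.5, C=1.0, iters=20, eps=1e-8):
        # |w_j|^p is locally majorized by (p/2) * |w_j|^(p-2) * w_j^2,
        # so each step solves a weighted ridge problem in closed form.
        d = X.shape[1]
        w = np.ones(d)
        for _ in range(iters):
            weights = 0.5 * p / (np.abs(w) ** (2.0 - p) + eps)
            A = C * X.T @ X + np.diag(weights)
            w = np.linalg.solve(A, C * X.T @ y)
        return w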

6. Yang, Xiaohui, and Caixia Gao. "Optimality Conditions and Algorithms for Sparse Support Vector Machines with l1 Regular Terms." Frontiers of Chinese Pure Mathematics 2, no. 3 (2024): 16–24. http://dx.doi.org/10.48014/fcpm.20240411001.

Abstract: The support vector machine (SVM), one of the main methods of machine learning, is a popular learning tool for classification and regression tasks and has attracted much attention in image classification, pattern recognition, and disease diagnosis. In the SVM model, the 0/1 loss is considered the ideal loss function, and most existing loss functions act as surrogates for it. Since the l1 norm has good sparsity properties, redundant features can be removed through feature selection. In this paper, an l1-norm sparse support vector machine (called L0/1-SSVM) is proposed based on the L0/1 soft-margin loss model. The existence of a solution to the model is proved, the KKT points and P-stable points of the model are given, and the relationship between the global optimal solution and the KKT points is proved. An iterative ADMM framework is designed using the proximal operator of the l1 norm, and a convergence analysis proves that the algorithm converges to a P-stable point.
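
The ADMM building block the abstract refers to, the proximal operator of the l1 norm, is just elementwise soft-thresholding. The sketch below shows it inside a standard ADMM splitting for a generic l1-regularized least-squares problem; the paper's L0/1-loss subproblem is more involved and is not reproduced here.

    import numpy as np

    def prox_l1(v, t):
        # prox of t*||.||_1: elementwise soft-thresholding
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_l1(X, y, lam=0.1, rho=1.0, iters=200):
        # ADMM for 0.5*||Xw - y||^2 + lam*||z||_1  subject to  w = z
        d = X.shape[1]
        z = np.zeros(d)
        u = np.zeros(d)
        A_inv = np.linalg.inv(X.T @ X + rho * np.eye(d))
        Xty = X.T @ y
        for _ in range(iters):
            w = A_inv @ (Xty + rho * (z - u))  # smooth (ridge-like) update
            z = prox_l1(w + u, lam / rho)      # l1 proximal update
            u = u + w - z                      # dual ascent on the constraint
        return z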

7. Pan, Wei, Pei Jun Ma, and Xiao Hong Su. "Large Margin Feature Selection for Support Vector Machine." Applied Mechanics and Materials 274 (January 2013): 161–64. http://dx.doi.org/10.4028/www.scientific.net/amm.274.161.

Abstract: Feature selection is a preprocessing step in pattern analysis and machine learning. In this paper, we design an algorithm for feature subset selection. We use an L1-norm regularization technique to obtain sparse feature weights. A margin loss is introduced to evaluate features, and gradient descent is employed to search for the solution that maximizes the margin. The proposed technique is tested on UCI data sets. Compared with four margin-based loss functions for SVM, the proposed technique is effective and efficient.
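
As a practical stand-in for the approach described (the paper uses its own margin loss and gradient descent), scikit-learn's L1-penalized linear SVM already yields the sparse feature weights the abstract mentions. The data here are synthetic and purely illustrative.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.feature_selection import SelectFromModel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))              # 50 features, only 2 informative
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))

    # The L1 penalty drives irrelevant coefficients to zero ...
    clf = LinearSVC(C=1.0, penalty="l1", dual=False, max_iter=5000).fit(X, y)
    # ... and the surviving nonzero weights define the selected feature subset.
    X_sel = SelectFromModel(clf, prefit=True).transform(X)
    print(X_sel.shape)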

8. Chen, Wei-Jie, Chun-Na Li, Yuan-Hai Shao, Ju Zhang, and Nai-Yang Deng. "Robust L1-norm multi-weight vector projection support vector machine with efficient algorithm." Neurocomputing 315 (November 2018): 345–61. http://dx.doi.org/10.1016/j.neucom.2018.04.083.

9. Gu, Zhenfeng, Zhao Zhang, Jiabao Sun, and Bing Li. "Robust image recognition by L1-norm twin-projection support vector machine." Neurocomputing 223 (February 2017): 1–11. http://dx.doi.org/10.1016/j.neucom.2016.10.008.

10. Hua, Xiaopeng, Sen Xu, Jun Gao, and Shifei Ding. "L1-norm loss-based projection twin support vector machine for binary classification." Soft Computing 23, no. 21 (2019): 10649–59. http://dx.doi.org/10.1007/s00500-019-04002-6.


Dissertations / Theses on the topic "L1 norm support vector machine"

1. Hess, Eric. "Ramp Loss SVM with L1-Norm Regularization." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3538.

Abstract: The Support Vector Machine (SVM) classification method has recently gained popularity due to the ease of implementing non-linear separating surfaces. SVM is an optimization problem with two competing goals: minimizing misclassification on training data and maximizing the margin defined by the normal vector of the learned separating surface. We develop and implement new SVM models based on a previously conceived SVM with L1-norm regularization and ramp loss error terms. The goal is a new SVM model that is robust to outliers thanks to the ramp loss, easy to implement in open-source and off-the-shelf mathematical programming solvers, and relatively efficient in finding solutions due to the mixed-integer linear form of the model. To show the effectiveness of the models, we compare results of ramp loss SVM with L1-norm and L2-norm regularization on human organ microbial data and on simulated data sets with outliers.
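
For reference, the ramp loss that gives the model its robustness caps the penalty for badly misclassified points. A common parameterization (assumed here, with s = -1) writes it as the difference of two hinge functions:

    import numpy as np

    def ramp_loss(margins, s=-1.0):
        # R_s(z) = H_1(z) - H_s(z) with H_a(z) = max(0, a - z);
        # the loss saturates at 1 - s, so outliers cannot dominate.
        return np.maximum(0.0, 1.0 - margins) - np.maximum(0.0, s - margins)

    z = np.array([2.0, 0.5, -0.5, -3.0])  # margins y_i * f(x_i)
    print(ramp_loss(z))                   # [0.0, 0.5, 1.5, 2.0]; the outlier is capped

Modeling the flat region of this loss is what introduces the binary variables that make the L1-norm formulation a mixed-integer linear program.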

2. Guan, Wei. "New support vector machine formulations and algorithms with application to biomedical data analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41126.

Abstract: The Support Vector Machine (SVM) classifier seeks the separating hyperplane w·x = r that maximizes the margin distance 1/||w||_2^2. It can be formalized as an optimization problem that minimizes the hinge loss Σ_i (1 − y_i f(x_i))_+ plus the L2-norm of the weight vector. SVM is now a mainstay method of machine learning. The goal of this dissertation is to solve different biomedical data analysis problems efficiently using extensions of SVM, in which we augment the standard SVM formulation according to the application requirements. The biomedical applications we explore in this thesis include cancer diagnosis, biomarker discovery, and energy function learning for protein structure prediction. Ovarian cancer diagnosis is problematic because the disease is typically asymptomatic, especially at early stages of progression and/or recurrence. We investigate a sample set consisting of 44 women diagnosed with serous papillary ovarian cancer and 50 healthy women or women with benign conditions. We profile the relative metabolite levels in the patient sera using a high-throughput ambient ionization mass spectrometry technique, Direct Analysis in Real Time (DART). We then reduce the diagnostic classification on these metabolic profiles to a functional classification problem and solve it with the functional Support Vector Machine (fSVM) method. The assay distinguished between the cancer and control groups with an unprecedented 99% accuracy (100% sensitivity, 98% specificity) under leave-one-out cross-validation. This approach has significant clinical potential as a cancer diagnostic tool. High-throughput technologies provide simultaneous evaluation of thousands of potential biomarkers to distinguish different patient groups. To assist biomarker discovery from these low-sample-size, high-dimensional cancer data, we first explore a convex relaxation of the L0-SVM problem and solve it using mixed-integer programming techniques. We further propose a more efficient L0-SVM approximation, the fractional-norm SVM, obtained by replacing the L2 penalty with an Lq penalty (q in (0,1)) in the optimization formulation. We solve it through the Difference of Convex functions (DC) programming technique. Empirical studies on synthetic data sets as well as real-world biomedical data sets support the effectiveness of our proposed L0-SVM approximation methods over other commonly used sparse SVM methods such as the L1-SVM method. A critical open problem in ab initio protein folding is protein energy function design. We reduce the problem of learning an energy function for ab initio folding to a standard machine learning problem, learning-to-rank. Based on the application requirements, we constrain the reduced ranking problem to non-negative weights and develop two efficient algorithms for non-negativity-constrained SVM optimization. We conduct an empirical study on an energy data set for random conformations of 171 proteins that fall into the ab initio folding class. We compare our approach with the optimization approach used in the protein structure prediction tool TASSER. Numerical results indicate that our approach learns energy functions with improved rank statistics (evaluated by pairwise agreement) as well as improved correlation between the total energy and structural dissimilarity.
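
A minimal evaluation of the standard soft-margin objective named in the first two sentences of the abstract (the trade-off constant C and the 1/2 scaling are conventional choices, not the dissertation's exact notation):

    import numpy as np

    def svm_objective(w, X, y, C=1.0):
        # hinge loss sum_i (1 - y_i * (x_i . w))_+ plus the squared L2 norm of w
        hinge = np.maximum(0.0, 1.0 - y * (X @ w)).sum()
        return 0.5 * np.dot(w, w) + C * hinge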

Book chapters on the topic "L1 norm support vector machine"

1. Christmann, Andreas. "Classification Based on the Support Vector Machine and on Regression Depth." In Statistical Data Analysis Based on the L1-Norm and Related Methods. Birkhäuser Basel, 2002. http://dx.doi.org/10.1007/978-3-0348-8201-9_28.

2. Yan, Rui, Qiaolin Ye, Dong Zhang, Ning Ye, and Xiaoqian Li. "1-Norm Projection Twin Support Vector Machine." In Communications in Computer and Information Science. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3002-4_44.

3. Chien, Li-Jen, Yuh-Jye Lee, Zhi-Peng Kao, and Chih-Cheng Chang. "Robust 1-Norm Soft Margin Smooth Support Vector Machine." In Intelligent Data Engineering and Automated Learning – IDEAL 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15381-5_18.

4. Puthiyottil, Anagha, S. Balasundaram, and Yogendra Meena. "L1-Norm Support Vector Regression in Primal Based on Huber Loss Function." In Proceedings of ICETIT 2019. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30577-2_16.

5. Gupta, Umesh, and Deepak Gupta. "Lagrangian Twin-Bounded Support Vector Machine Based on L2-Norm." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1280-9_40.

6. Anguita, Davide, Alessandro Ghio, Luca Oneto, Jorge Luis Reyes-Ortiz, and Sandro Ridella. "A Novel Procedure for Training L1-L2 Support Vector Machine Classifiers." In Artificial Neural Networks and Machine Learning – ICANN 2013. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40728-4_55.

7. Zheng, Xiaohan, Li Zhang, and Leilei Yan. "Sample Reduction Using ℓ1-Norm Twin Bounded Support Vector Machine." In Neural Computing for Advanced Applications. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5188-5_11.

8. Yu, Shi, Léon-Charles Tranchevent, Bart De Moor, and Yves Moreau. "Ln-norm Multiple Kernel Learning and Least Squares Support Vector Machines." In Kernel-based Data Fusion for Machine Learning. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19406-1_3.

9. Nguyen, Hai Thanh, and Katrin Franke. "A General Lp-norm Support Vector Machine via Mixed 0-1 Programming." In Machine Learning and Data Mining in Pattern Recognition. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31537-4_4.

10. Borah, Parashjyoti, and Deepak Gupta. "A Two-Norm Squared Fuzzy-Based Least Squares Twin Parametric-Margin Support Vector Machine." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0923-6_11.


Conference papers on the topic "L1 norm support vector machine"

1. Bai, Fusheng, and Yongjia Yuan. "l1-norm Nonparallel Support Vector Machine for PU Learning." In 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP). IEEE, 2018. http://dx.doi.org/10.1109/icdsp.2018.8631791.

2. Li, Jianxin, Haoyi Zhou, Pengtao Xie, and Yingchun Zhang. "Improving the Generalization Performance of Multi-class SVM via Angular Regularization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/296.

Abstract: In multi-class support vector machines (MSVM) for classification, one core issue is regularizing the coefficient vectors to reduce overfitting. Various regularizers have been proposed, such as the L2, L1, and trace norms. In this paper, we introduce a new type of regularization approach, angular regularization, which encourages the coefficient vectors to have larger angles so that class regions can be widened to flexibly accommodate unseen samples. We propose a novel angular regularizer based on the singular values of the coefficient matrix, where uniformity of the singular values reduces the correlation among different classes and drives the angles between coefficient vectors to increase. In a generalization error analysis, we show that decreasing this regularizer effectively reduces the generalization error bound. On various datasets, we demonstrate the efficacy of the regularizer in reducing overfitting.
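
The paper defines its regularizer on the singular values of the coefficient matrix; as a loose illustration of the idea only, uniformity of the spectrum can be measured, for example, by the variance of the normalized singular values. This specific functional form is an assumption, not the paper's regularizer.

    import numpy as np

    def spectrum_nonuniformity(W):
        # W: K x d matrix stacking the K class coefficient vectors.
        # A flatter (more uniform) spectrum means less correlated classes,
        # i.e., larger angles between the coefficient vectors.
        s = np.linalg.svd(W, compute_uv=False)
        s = s / s.sum()
        return float(np.var(s))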

3. Xue, Hui, Yu Song, and Hai-Ming Xu. "Multiple Indefinite Kernel Learning for Feature Selection." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/448.

Abstract: Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore complex properties of features and performs better among embedded methods. However, the kernels in MKL-FS are generally limited to being positive definite. In fact, indefinite kernels often emerge in actual applications and can achieve better empirical performance. But due to the non-convexity of indefinite kernels, existing MKL-FS methods are usually inapplicable, and the corresponding research is relatively limited. In this paper, we propose a novel multiple indefinite kernel feature selection method (MIK-FS) based on the primal framework of the indefinite kernel support vector machine (IKSVM), which applies an indefinite base kernel to each feature and then imposes an l1-norm constraint on the kernel combination coefficients to select features automatically. A two-stage algorithm is further presented to optimize the coefficients of IKSVM and the kernel combination alternately. In the algorithm, we reformulate the non-convex optimization problem of the primal IKSVM as a difference of convex functions (DC) program and transform the non-convex problem into a convex one with an affine minorization approximation. Experiments on real-world datasets demonstrate that MIK-FS is superior to some related state-of-the-art methods in both feature selection and classification performance.
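
The feature-selection mechanism described, one base kernel per feature combined under an l1 constraint, can be sketched as follows. The simplex-style constraint on the coefficients mu is an assumption for illustration, and the paper's DC-programming optimization is not shown.

    import numpy as np

    def combine_kernels(base_kernels, mu):
        # K = sum_m mu_m * K_m with mu >= 0 and ||mu||_1 = 1; coefficients driven
        # to zero by the l1 constraint deselect the corresponding features.
        mu = np.asarray(mu, dtype=float)
        assert np.all(mu >= 0.0) and np.isclose(mu.sum(), 1.0)
        return sum(m * K for m, K in zip(mu, base_kernels))

    # e.g., three per-feature linear kernels on toy data:
    X = np.random.default_rng(1).normal(size=(5, 3))
    Ks = [np.outer(X[:, j], X[:, j]) for j in range(3)]
    K = combine_kernels(Ks, [0.7, 0.0, 0.3])   # feature 2 is deselected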

4. Cai, Xiao, Feiping Nie, Heng Huang, and Chris Ding. "Multi-Class L2,1-Norm Support Vector Machine." In 2011 IEEE 11th International Conference on Data Mining (ICDM). IEEE, 2011. http://dx.doi.org/10.1109/icdm.2011.105.

5. Huang, Yiheng, Wensheng Zhang, and Jue Wang. "1 and infinite norm support vector machine." In 2012 IEEE International Conference on Information Science and Technology (ICIST). IEEE, 2012. http://dx.doi.org/10.1109/icist.2012.6221693.

6. Tian, Yingjie, Jun Yu, and Wenjing Chen. "lp-norm support vector machine with CCCP." In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2010. http://dx.doi.org/10.1109/fskd.2010.5569345.

7. Ghorai, Santanu, Shaikh Jahangir Hossian, Anirban Mukherjee, and Pranab K. Dutta. "Unity norm twin support vector machine classifier." In 2010 Annual IEEE India Conference (INDICON). IEEE, 2010. http://dx.doi.org/10.1109/indcon.2010.5712721.

8. Wang, Lifeng, Xiaotong Shen, and Yuan Zheng. "On L1-Norm Multi-class Support Vector Machines." In 2006 5th International Conference on Machine Learning and Applications (ICMLA'06). IEEE, 2006. http://dx.doi.org/10.1109/icmla.2006.38.

9. Ma, Xu, Yingan Liu, and Qiaolin Ye. "P-Order L2-Norm Distance Twin Support Vector Machine." In 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2017. http://dx.doi.org/10.1109/acpr.2017.134.

10. Strack, Robert, and Vojislav Kecman. "Minimal Norm Support Vector Machines for Large Classification Tasks." In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.43.
