Academic literature on the topic 'Lasso (L1)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Lasso (L1).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Lasso (L1)"

1

Fearn, Tom. "The Lasso and L1 Shrinkage." NIR news 24, no. 5 (2013): 23–27. http://dx.doi.org/10.1255/nirn.1382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jiasheng. "A L1 Regularized Logistic Regression Model for Highdimensional Questionnaire Data Analysis." Journal of Physics: Conference Series 2078, no. 1 (2021): 012052. http://dx.doi.org/10.1088/1742-6596/2078/1/012052.

Full text
Abstract:
The L1 regularization method, or Lasso, is a technique for feature selection in high-dimensional statistical analysis. This method compresses the coefficients of the model by using the absolute values of the coefficients as a penalty term. By adding an L1 penalty to the log-likelihood function of the logistic model, a variable-screening method based on logistic regression can be realized. The process of variable selection via Lasso is illustrated in Figure 1. The purpose of the experiment is to identify the important factors that influence interviewees' subjective well-be
APA, Harvard, Vancouver, ISO, and other styles
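Several of the listed works, starting with Wang's questionnaire study above, rest on the same mechanism: adding an L1 penalty to the negative log-likelihood of a logistic model drives some coefficients to exactly zero. A minimal sketch of that idea using proximal gradient descent (ISTA) on synthetic data; the data, step size, and penalty level are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrinks entries toward zero
    # and sets those smaller than t exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_logistic(X, y, lam, step=0.1, n_iter=2000):
    # Proximal gradient (ISTA) on the L1-penalized negative
    # log-likelihood of a logistic regression model.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (p_hat - y) / n          # gradient of the NLL
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
# Only the first two of ten candidate features carry signal.
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

beta = lasso_logistic(X, y, lam=0.1)
print("nonzero coefficients:", int(np.sum(beta != 0)))
```

The soft-thresholding step is what distinguishes this from plain gradient descent: it produces exact zeros, realizing the variable screening the abstract describes.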
3

Yu, Haipeng, Guobin Chang, Shubi Zhang, Yuhua Zhu, and Yajie Yu. "Application of Sparse Regularization in Spherical Radial Basis Functions-Based Regional Geoid Modeling in Colorado." Remote Sensing 15, no. 19 (2023): 4870. http://dx.doi.org/10.3390/rs15194870.

Full text
Abstract:
Spherical radial basis function (SRBF) is an effective method for calculating regional gravity field models. Calculating gravity field models with high accuracy and resolution requires dense basis functions, resulting in complex models. This study investigated the application of sparse regularization in SRBFs-based regional gravity field modeling. L1-norm regularization, also known as the least absolute shrinkage and selection operator (LASSO), was employed in the parameter estimation procedure. LASSO differs from L2-norm regularization in that the solution obtained by LASSO is sparse, specificall
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Lu, Junheng Gao, Georgia Beasley, and Sin-Ho Jung. "LASSO and Elastic Net Tend to Over-Select Features." Mathematics 11, no. 17 (2023): 3738. http://dx.doi.org/10.3390/math11173738.

Full text
Abstract:
Machine learning methods have been a standard approach to select features that are associated with an outcome and to build a prediction model when the number of candidate features is large. LASSO is one of the most popular approaches to this end. The LASSO approach selects features with large regression estimates, rather than based on statistical significance, that are associated with the outcome by imposing an L1-norm penalty to overcome the high dimensionality of the candidate features. As a result, LASSO may select insignificant features while possibly missing significant ones. Furthermore,
APA, Harvard, Vancouver, ISO, and other styles
5

Saperas-Riera, Jordi, Glòria Mateu-Figueras, and Josep Antoni Martín-Fernández. "Lp-Norm for Compositional Data: Exploring the CoDa L1-Norm in Penalised Regression." Mathematics 12, no. 9 (2024): 1388. http://dx.doi.org/10.3390/math12091388.

Full text
Abstract:
The Least Absolute Shrinkage and Selection Operator (LASSO) regression technique has proven to be a valuable tool for fitting and reducing linear models. The trend of applying LASSO to compositional data is growing, thereby expanding its applicability to diverse scientific domains. This paper aims to contribute to this evolving landscape by undertaking a comprehensive exploration of the L1-norm for the penalty term of a LASSO regression in a compositional context. This implies first introducing a rigorous definition of the compositional Lp-norm, as the particular geometric structure of the com
APA, Harvard, Vancouver, ISO, and other styles
6

Boulesteix, Anne-Laure, Riccardo De Bin, Xiaoyu Jiang, and Mathias Fuchs. "IPF-LASSO: Integrative L1-Penalized Regression with Penalty Factors for Prediction Based on Multi-Omics Data." Computational and Mathematical Methods in Medicine 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/7691937.

Full text
Abstract:
As modern biotechnologies advance, it has become increasingly frequent that different modalities of high-dimensional molecular data (termed “omics” data in this paper), such as gene expression, methylation, and copy number, are collected from the same patient cohort to predict the clinical outcome. While prediction based on omics data has been widely studied in the last fifteen years, little has been done in the statistical literature on the integration of multiple omics modalities to select a subset of variables for prediction, which is a critical task in personalized medicine. In this paper,
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Huiyuan, Xiangxi Meng, Zhe Wang, Xin Zhou, Yang Liu, and Nan Li. "Predicting PD-L1 in Lung Adenocarcinoma Using 18F-FDG PET/CT Radiomic Features." Diagnostics 15, no. 5 (2025): 543. https://doi.org/10.3390/diagnostics15050543.

Full text
Abstract:
Background/Objectives: This study aims to retrospectively analyze the clinical and imaging data of 101 patients with lung adenocarcinoma who underwent [18F]FDG PET/CT examination and were pathologically confirmed in the Department of Nuclear Medicine at Peking University Cancer Hospital. This study explores the predictive value and important features of [18F]FDG PET/CT radiomics for PD-L1 expression levels in lung adenocarcinoma patients, assisting in screening patients who may benefit from immunotherapy. Methods: 101 patients with histologically confirmed lung adenocarcinoma who received pre-
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Lu, and Sin-Ho Jung. "Repeated Sieving for Prediction Model Building with High-Dimensional Data." Journal of Personalized Medicine 14, no. 7 (2024): 769. http://dx.doi.org/10.3390/jpm14070769.

Full text
Abstract:
Background: The prediction of patients’ outcomes is a key component in personalized medicine. Oftentimes, a prediction model is developed using a large number of candidate predictors, called high-dimensional data, including genomic data, lab tests, electronic health records, etc. Variable selection, also called dimension reduction, is a critical step in developing a prediction model using high-dimensional data. Methods: In this paper, we compare the variable selection and prediction performance of popular machine learning (ML) methods with our proposed method. LASSO is a popular ML method that
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Jessie. "The Proximal Bootstrap for Finite-Dimensional Regularized Estimators." AEA Papers and Proceedings 111 (May 1, 2021): 616–20. http://dx.doi.org/10.1257/pandp.20211036.

Full text
Abstract:
We propose a proximal bootstrap that can consistently estimate the limiting distribution of sqrt(n)-consistent estimators with nonstandard asymptotic distributions in a computationally efficient manner by formulating the proximal bootstrap estimator as the solution to a convex optimization problem, which can have a closed-form solution for certain designs. This paper considers the application to finite-dimensional regularized estimators, such as the lasso, l1-norm regularized quantile regression, l1-norm support vector regression, and trace regression via nuclear norm regularization.
APA, Harvard, Vancouver, ISO, and other styles
10

Lin, Yingxue, Yinhui Yao, Ying Wang, Lingdi Wang, and Haipeng Cui. "PD-L1 and Immune Infiltration of m6A RNA Methylation Regulators and Its miRNA Regulators in Hepatocellular Carcinoma." BioMed Research International 2021 (May 15, 2021): 1–16. http://dx.doi.org/10.1155/2021/5516100.

Full text
Abstract:
Background. The aim of this study was to systematically evaluate the relationship between the expression of m6A RNA methylation regulators and prognosis in HCC. Methods. We compared the expression of m6A methylation modulators and PD-L1 between HCC and normal in TCGA database. HCC samples were divided into two subtypes by consensus clustering of data from m6A RNA methylation regulators. The differences in PD-L1, immune infiltration, and prognosis between the two subtypes were further compared. The LASSO regression was used to build a risk score for m6A modulators. In addition, we identified mi
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Lasso (L1)"

1

Patnaik, Kaushik. "Adaptive learning in lasso models." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54353.

Full text
Abstract:
Regression with L1-regularization, Lasso, is a popular algorithm for recovering the sparsity pattern (also known as model selection) in linear models from observations contaminated by noise. We examine a scenario where a fraction of the zero co-variates are highly correlated with non-zero co-variates making sparsity recovery difficult. We propose two methods that adaptively increment the regularization parameter to prune the Lasso solution set. We prove that the algorithms achieve consistent model selection with high probability while using fewer samples than traditional Lasso. The algorithm c
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Qing. "Predictor Selection in Linear Regression: L1 regularization of a subset of parameters and Comparison of L1 regularization and stepwise selection." Link to electronic thesis, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-051107-154052/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shi, Shujing. "Tuning Parameter Selection in L1 Regularized Logistic Regression." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2940.

Full text
Abstract:
Variable selection is an important topic in regression analysis and is intended to select the best subset of predictors. Least absolute shrinkage and selection operator (Lasso) was introduced by Tibshirani in 1996. This method can serve as a tool for variable selection because it shrinks some coefficients to exact zero by a constraint on the sum of absolute values of regression coefficients. For logistic regression, Lasso modifies the traditional parameter estimation method, maximum log likelihood, by adding the L1 norm of the parameters to the negative log likelihood function, so it turns a
APA, Harvard, Vancouver, ISO, and other styles
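The role of the tuning parameter studied in this thesis can be seen most directly in the orthonormal-design case, where the lasso solution is simply the soft-thresholded least-squares estimate. A small sketch (the design, coefficients, and lambda grid are invented for the example); the number of selected variables shrinks as lambda grows:

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# With an orthonormal design (X^T X = I), the lasso solution is the
# soft-thresholded least-squares estimate, so the effect of the
# tuning parameter lambda can be read off directly.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 8)))
X = Q                                   # columns are orthonormal
beta_true = np.array([3.0, -2.0, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)

ols = X.T @ y                           # least-squares estimate
for lam in [0.0, 0.5, 1.0, 2.5]:
    n_selected = int(np.sum(soft_threshold(ols, lam) != 0))
    print(f"lambda={lam}: {n_selected} variables selected")
```

In practice, with a general design, lambda is usually chosen by cross-validation or an information criterion rather than inspected on a grid like this.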
4

Ohlsson, Henrik. "Regularization for Sparseness and Smoothness : Applications in System Identification and Signal Processing." Doctoral thesis, Linköpings universitet, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60531.

Full text
Abstract:
In system identification, the Akaike Information Criterion (AIC) is a well known method to balance the model fit against model complexity. Regularization here acts as a price on model complexity. In statistics and machine learning, regularization has gained popularity due to modeling methods such as Support Vector Machines (SVM), ridge regression and lasso. But also when using a Bayesian approach to modeling, regularization often implicitly shows up and can be associated with the prior knowledge. Regularization has also had a great impact on many applications, and very much so in clinical imag
APA, Harvard, Vancouver, ISO, and other styles
5

Tian, Ye. "Knowledge-fused Identification of Condition-specific Rewiring of Dependencies in Biological Networks." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/52557.

Full text
Abstract:
Gene network modeling is one of the major goals of systems biology research. Gene network modeling targets the middle layer of active biological systems that orchestrate the activities of genes and proteins. Gene network modeling can provide critical information to bridge the gap between causes and effects which is essential to explain the mechanisms underlying disease. Among the network construction tasks, the rewiring of relevant network structure plays critical roles in determining the behavior of diseases. To systematically characterize the selectively activated regulatory components and m
APA, Harvard, Vancouver, ISO, and other styles
6

Meynet, Caroline. "Sélection de variables pour la classification non supervisée en grande dimension." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00752613.

Full text
Abstract:
There are statistical modeling situations in which the classical problem of unsupervised classification (that is, clustering without prior information on the nature or number of classes to be formed) is coupled with the problem of identifying the variables that are genuinely relevant for determining the classification. This issue is all the more essential as so-called high-dimensional data, with far more variables than observations, have multiplied in recent years: gene expression data, curve classification... We propose a procedure
APA, Harvard, Vancouver, ISO, and other styles
7

Asif, Muhammad Salman. "Primal dual pursuit a homotopy based algorithm for the Dantzig selector /." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24693.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. Committee Chair: Romberg, Justin; Committee Member: McClellan, James; Committee Member: Mersereau, Russell
APA, Harvard, Vancouver, ISO, and other styles
8

Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.

Full text
Abstract:
Consider a Gaussian vector Y with distribution N(m, σ²Id_n) and an n × p matrix X, with Y observed, m unknown, and σ and X known. In the linear model framework, m is assumed to be a linear combination of the columns of X. In low dimension, when n ≥ p and ker(X) = 0, there exists a unique parameter β* such that m = Xβ*; Y can then be rewritten in the form Y = Xβ* + ε. In the low-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses β*i
APA, Harvard, Vancouver, ISO, and other styles
9

Huynh, Bao Tuyen. "Estimation and feature selection in high-dimensional mixtures-of-experts models." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC237.

Full text
Abstract:
This thesis deals with the modeling and estimation of high-dimensional mixtures-of-experts models, with a view to efficient density estimation, prediction, and classification of such complex data, which are heterogeneous and high-dimensional. We propose new strategies based on regularized maximum-likelihood estimation of the models to overcome the limitations of standard methods, including maximum-likelihood estimation with expectation-maximization (EM) algorithms, and to simultaneously perform the selection of relevant variables so as to encourage sparse solutions
APA, Harvard, Vancouver, ISO, and other styles
10

Bouř, Vojtěch. "Povýběrová Inference: Lasso & Skupinové Lasso." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-357198.

Full text
Abstract:
The lasso is a popular tool that can be used for variable selection and estimation; however, classical statistical inference cannot be applied to its estimates. In this thesis the classical and the group lasso are described together with efficient algorithms for the solution. The key part is dedicated to the post-selection inference for the lasso estimates, where we explain why the classical inference is not suitable. Three post-selection tests for the lasso are described and one test is proposed also for the group lasso. The tests are compared in simulations where finite sample properties
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Lasso (L1)"

1

Zhao, Yun-Bin. Sparse Optimization Theory and Methods. Taylor & Francis Group, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Yun-Bin. Sparse Optimization Theory and Methods. Taylor & Francis Group, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Lasso (L1)"

1

Gao, Junbin, Michael Antolovich, and Paul W. Kwan. "L1 LASSO Modeling and Its Bayesian Inference." In AI 2008: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89378-3_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yuan, Sen, Si Chen, Feng Zhang, and Wentao Huang. "L1-Norm and Trace Lasso Based Locality Correlation Projection." In Communications in Computer and Information Science. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2336-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Awasthi, Naimisha, and Prateek Raj Gautam. "Android ransomware network traffic detection using decision tree and L1 LASSO regularization feature selection." In Intelligent Computing and Communication Techniques. CRC Press, 2025. https://doi.org/10.1201/9781003530190-104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Zhiyuan, Sayed Ameenuddin Irfan, Christopher Teoh, and Priyanka Hriday Bhoyar. "Regularization." In Numerical Machine Learning. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815136982123010004.

Full text
Abstract:
This chapter delves into L1 and L2 regularization techniques within the context of linear regression, focusing on minimizing overfitting risks while maintaining a concise presentation of mathematical theories. We explore these techniques through a concrete numerical example with a small dataset for predicting house sale prices, providing a step-by-step walkthrough of the process. To further enhance comprehension, we supply sample codes and draw comparisons with the Lasso and Ridge models implemented in the scikit-learn library. By the end of this chapter, readers will acquire a well-rounded un
APA, Harvard, Vancouver, ISO, and other styles
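The L1-versus-L2 contrast that this chapter works through numerically can be reproduced in a few lines: ridge has a closed-form solution and shrinks every coefficient, while the lasso, solved here by cyclic coordinate descent, sets some coefficients exactly to zero. This is an independent sketch on synthetic data, not the chapter's own house-price example or the scikit-learn implementation:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for the lasso objective
    # (1 / 2n) * ||y - X b||^2 + lam * ||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual
            z = X[:, j] @ r / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

def ridge(X, y, alpha):
    # Closed-form ridge solution: (X^T X + alpha I)^{-1} X^T y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
X = rng.standard_normal((150, 6))
# Two of six features carry signal; the rest are pure noise.
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(150)

b_lasso = lasso_cd(X, y, lam=0.2)
b_ridge = ridge(X, y, alpha=1.0)
print("lasso zeros:", int(np.sum(b_lasso == 0)),
      "| ridge zeros:", int(np.sum(b_ridge == 0)))
```

The same qualitative behavior is what `sklearn.linear_model.Lasso` and `Ridge` exhibit: the lasso prunes the noise features, while ridge keeps all six coefficients small but nonzero.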

Conference papers on the topic "Lasso (L1)"

1

Wang, Cong, Xiaowen Yu, and Masayoshi Tomizuka. "Fast Modeling and Identification of Robot Dynamics Using the Lasso." In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3767.

Full text
Abstract:
This paper presents an approach for fast modeling and identification of robot dynamics. By using a data-driven machine learning approach, the process is simplified considerably from the conventional analytical method. Regressor selection using the Lasso (l1-norm penalized least squares regression) is used. The method is explained with a simple example of a two-link direct-drive robot. Further demonstration is given by applying the method to a three-link belt-driven robot. Promising result has been demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
2

Omer, Pareekhan. "Improving Prediction Accuracy of Lasso and Ridge Regression as an Alternative to LS Regression to Identify Variable Selection Problems." In 3rd International Conference of Mathematics and its Applications. Salahaddin University-Erbil, 2020. http://dx.doi.org/10.31972/ticma22.05.

Full text
Abstract:
This paper introduces the Lasso and Ridge Regression methods, which are two popular regularization approaches; they differ in how they penalize the coefficients. L1 regularization refers to Lasso linear regression, while L2 regularization refers to Ridge regression. As we all know, regression models serve two main purposes: explanation and prediction of scientific phenomena. Prediction accuracy is optimized by balancing the bias and variance of predictions, while explanation is gained by constructing interpretable regression models through variable s
APA, Harvard, Vancouver, ISO, and other styles
3

Kinoshita, Ikuo. "Application of Sparse Estimation for Best Estimate Plus Uncertainty Analysis of a Small Break LOCA in PWRs." In ASME 2023 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/imece2023-111094.

Full text
Abstract:
To reduce the computational demand in the Best Estimate Plus Uncertainty analysis, an accurate and inexpensive surrogate model is expected to be used to replace the RELAP5 code for rapid determination of the uncertainties on the figure of merit of interest. One of the problems associated with the application of a surrogate model is overfitting. To ensure confidence in the model prediction, it is crucial to assess the generalized performance of the surrogate model. In this study, a surrogate model was generated from a small sample of a RELAP5 code uncertainty analysis on peak cladding
APA, Harvard, Vancouver, ISO, and other styles