Journal articles on the topic 'Elastic-net regularization'


Consult the top 50 journal articles for your research on the topic 'Elastic-net regularization.'


1

Faiyaz, Chowdhury Abrar, Pabel Shahrear, Rakibul Alam Shamim, Thilo Strauss, and Taufiquar Khan. "Comparison of Different Radial Basis Function Networks for the Electrical Impedance Tomography (EIT) Inverse Problem." Algorithms 16, no. 10 (2023): 461. http://dx.doi.org/10.3390/a16100461.

Abstract:
This paper aims to determine whether regularization improves image reconstruction in electrical impedance tomography (EIT) using a radial basis network. The primary purpose is to investigate the effect of regularization to estimate the network parameters of the radial basis function network to solve the inverse problem in EIT. Our approach to studying the efficacy of the radial basis network with regularization is to compare the performance among several different regularizations, mainly Tikhonov, Lasso, and Elastic Net regularization. We vary the network parameters, including the fixed and variable widths for the Gaussian used for the network. We also perform a robustness study for comparison of the different regularizations used. Our results include (1) determining the optimal number of radial basis functions in the network to avoid overfitting; (2) comparison of fixed versus variable Gaussian width with or without regularization; (3) comparison of image reconstruction with or without regularization, in particular, no regularization, Tikhonov, Lasso, and Elastic Net; (4) comparison of both mean square and mean absolute error and the corresponding variance; and (5) comparison of robustness, in particular, the performance of the different methods concerning noise level. We conclude that by looking at the R2 score, one can determine the optimal number of radial basis functions. The fixed-width radial basis function network with regularization results in improved performance. The fixed-width Gaussian with Tikhonov regularization performs very well. The regularization helps reconstruct the images outside of the training data set. The regularization may cause the quality of the reconstruction to deteriorate; however, the stability is much improved. In terms of robustness, the RBF with Lasso and Elastic Net seem very robust compared to Tikhonov.
2

Zhao, Yu-long, and Yun-long Feng. "Learning performance of elastic-net regularization." Mathematical and Computer Modelling 57, no. 5-6 (2013): 1395–407. http://dx.doi.org/10.1016/j.mcm.2012.11.028.

3

De Mol, Christine, Ernesto De Vito, and Lorenzo Rosasco. "Elastic-net regularization in learning theory." Journal of Complexity 25, no. 2 (2009): 201–30. http://dx.doi.org/10.1016/j.jco.2009.01.002.

4

Chen, Delei, Cheng Wang, Xiongming Lai, Huizhen Zhang, Haibo Li, and Jianwei Chen. "Elastic-net regularization based multi-point vibration response prediction in situation of unknown uncorrelated multiple sources load." International Journal of Applied Electromagnetics and Mechanics 64, no. 1-4 (2020): 649–57. http://dx.doi.org/10.3233/jae-209375.

Abstract:
To reduce the influence of the ill-posed inverse problem on response prediction under unknown uncorrelated multi-source loads, a response prediction method based on elastic-net regularization in the frequency domain is proposed. The method uses the linear relationship between the known responses and the unknown responses, rather than the transfer function, to predict the response. Moreover, the elastic-net regularization model has two regularization parameters, combining ℓ1 and ℓ2 regularization to mitigate the ill-posedness of the inverse problem. Experimental results on data from acoustic and vibration sources on cylindrical shells show that elastic-net regularization yields more accurate response predictions than the transfer-function and ordinary least squares methods, predicts vibration responses effectively, and satisfies industrial requirements.
5

Guo, Lihua. "Extreme Learning Machine with Elastic Net Regularization." Intelligent Automation & Soft Computing 26, no. 3 (2020): 421–27. http://dx.doi.org/10.32604/iasc.2020.013918.

6

Dölker, E. M., R. Schmidt, S. Gorges, et al. "Elastic Net Regularization in Lorentz force evaluation." NDT & E International 99 (October 2018): 141–54. http://dx.doi.org/10.1016/j.ndteint.2018.07.002.

7

Kvyetnyy, R. N., and S. I. Borodkin. "Improved model of ELASTIC NET regularization for financial time series." Optoelectronic Information-Power Technologies 49, no. 1 (2025): 29–35. https://doi.org/10.31649/1681-7893-2025-49-1-29-35.

Abstract:
This paper proposes a modification of Elastic Net regression for short-term forecasting of financial time series by introducing Gaussian weight decay. The new approach is designed to smooth the abrupt “jumps” between the last historical observation and the first forecast—an issue typical of standard regularization. To assess its effectiveness, we formally derive the Elastic Net model with four weighting schemes (no decay, linear, exponential, and Gaussian) and conduct empirical experiments on the S&P 500, Dow Jones Industrial Average, and Nasdaq Composite indices over the period 2020–2025. The results demonstrate that Gaussian decay minimizes the transition gap and achieves the lowest RMSE and Deviation for the S&P 500 and Nasdaq Composite, whereas exponential decay proves optimal for the Dow Jones Industrial Average.
8

LI, HONG, NA CHEN, and LUOQING LI. "ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY." International Journal of Wavelets, Multiresolution and Information Processing 10, no. 05 (2012): 1250050. http://dx.doi.org/10.1142/s0219691312500506.

Abstract:
This paper considers the problem of recovering a low-rank matrix from a small number of measurements consisting of linear combinations of the matrix entries. We extend elastic-net regularization from compressive sensing to a more general setting, matrix recovery, and consider an elastic-net regularization scheme for matrix recovery. To investigate the statistical properties of this scheme, and in particular its convergence properties, we set up a suitable mathematical framework. We characterize some properties of the estimator and construct a natural iterative procedure to compute it. The convergence analysis shows that the sequence of iterates converges, which underlies successful applications of the matrix elastic-net regularization algorithm. In addition, error bounds of the proposed algorithm for low-rank and even full-rank matrices are presented.
9

Kereta, Zeljko, and Valeriya Naumova. "On an unsupervised method for parameter selection for the elastic net." Mathematics in Engineering 4, no. 6 (2022): 1–36. http://dx.doi.org/10.3934/mine.2022053.

Abstract:
Despite recent advances in regularization theory, the issue of parameter selection still remains a challenge for most applications. In a recent work the framework of statistical learning was used to approximate the optimal Tikhonov regularization parameter from noisy data. In this work, we improve their results and extend the analysis to the elastic net regularization. Furthermore, we design a data-driven, automated algorithm for the computation of an approximate regularization parameter. Our analysis combines statistical learning theory with insights from regularization theory. We compare our approach with state-of-the-art parameter selection criteria and show that it has superior accuracy.
10

Nakkiran, Arunadevi, and Vidyaa Thulasiraman. "Elastic net feature selected multivariate discriminant mapreduce classification." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (2022): 587–96. http://dx.doi.org/10.11591/ijeecs.v26.i1.pp587-596.

Abstract:
Analyzing big stream data and other valuable information is a significant task. Several conventional methods have been designed to analyze big stream data, but scheduling accuracy and time complexity remain significant issues. To resolve this, an elastic-net kernelized multivariate discriminant MapReduce classification (EKMDMC) is introduced, with the novelty of elastic-net regularization-based feature selection and a kernelized multivariate Fisher discriminant MapReduce classifier. Initially, the EKMDMC technique executes feature selection to improve prediction accuracy using the elastic-net regularization method, which selects relevant features such as central processing unit (CPU) time, memory, bandwidth, and energy based on a regression function. After selecting relevant features, the kernelized multivariate Fisher discriminant MapReduce classifier is used to schedule the tasks to optimize the processing unit. A kernel function is used to find higher similarity between stream-data tasks and the means of the available classes. Experimental evaluation of the proposed EKMDMC technique shows better performance in terms of resource-aware predictive scheduling efficiency, false positive rate, scheduling time, and memory consumption.
12

Liu, Xueyan, Shuo Dai, Mengyu Wang, and Yining Zhang. "Compressed Sensing Photoacoustic Imaging Reconstruction Using Elastic Net Approach." Molecular Imaging 2022 (December 20, 2022): 1–9. http://dx.doi.org/10.1155/2022/7877049.

Abstract:
Photoacoustic imaging involves reconstructing an estimate of the absorbed energy density distribution from measured ultrasound data. Reconstruction from incomplete and noisy experimental data is usually an ill-posed problem that requires regularization to obtain meaningful solutions. The purpose of this work is to propose an elastic net (EN) model to improve the quality of reconstructed photoacoustic images. To evaluate the performance of the proposed method, a series of numerical simulations and tissue-mimicking phantom experiments are performed. The experimental results indicate that, compared with L1-norm- and L2-norm-based regularization methods across different numerical phantoms, Gaussian noise levels of 10–50 dB, and different regularization parameters, the EN method with α = 0.5 achieves better image quality, faster computation, and stronger noise robustness.
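The EN reconstruction with α = 0.5 reported in this abstract can be imitated on a toy underdetermined linear system. The forward matrix and sparse "image" below are invented for illustration, not the paper's photoacoustic model; in scikit-learn the L1/L2 mixing weight is `l1_ratio`:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n_meas, n_pix = 60, 120                      # fewer measurements than unknowns: ill-posed
A = rng.normal(size=(n_meas, n_pix))
x_true = np.zeros(n_pix)
x_true[[5, 40, 90]] = [1.0, -0.5, 0.8]       # sparse stand-in for an absorption map
b = A @ x_true + 0.01 * rng.normal(size=n_meas)

# l1_ratio=0.5 mirrors the alpha = 0.5 mixing reported in the abstract
en = ElasticNet(alpha=0.01, l1_ratio=0.5, fit_intercept=False, max_iter=10_000)
en.fit(A, b)
x_hat = en.coef_
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

Despite having twice as many unknowns as measurements, the combined penalty recovers the sparse support while the ridge part keeps the problem well-conditioned.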
13

Li, Hailong, and Liang Ding. "Generalized conditional gradient method for elastic-net regularization." Journal of Computational and Applied Mathematics 403 (March 2022): 113872. http://dx.doi.org/10.1016/j.cam.2021.113872.

14

Zou, Hui, and Trevor Hastie. "Regularization and variable selection via the elastic net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, no. 2 (2005): 301–20. http://dx.doi.org/10.1111/j.1467-9868.2005.00503.x.

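This is the paper that introduced the elastic net, whose penalty blends the lasso's ℓ1 term (sparsity) with the ridge ℓ2 term (grouping of correlated predictors). As a minimal sketch (the synthetic data and parameter values are my own, not from the paper), scikit-learn's `ElasticNet` exposes this mix through `alpha` and `l1_ratio`:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic data with two nearly identical predictors (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)  # near-duplicate of column 0
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

# alpha scales the total penalty; l1_ratio mixes L1 (sparsity) and L2 (grouping)
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)
```

The grouping effect described in the paper shows up here: the weight of the true signal tends to be shared across the two correlated columns instead of being assigned arbitrarily to one of them, while the irrelevant columns are shrunk toward zero.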
15

Liu, Bo, Liping Jing, Jian Yu, and Jia Li. "Robust graph learning via constrained elastic-net regularization." Neurocomputing 171 (January 2016): 299–312. http://dx.doi.org/10.1016/j.neucom.2015.06.059.

16

Li, Hong, Na Chen, and Luoqing Li. "Error Analysis for Matrix Elastic-Net Regularization Algorithms." IEEE Transactions on Neural Networks and Learning Systems 23, no. 5 (2012): 737–48. http://dx.doi.org/10.1109/tnnls.2012.2188906.

17

Hamada, Afraa A. "Elastic Net Principal Component Regression With an Application." International Journal of Science and Mathematics Education 1, no. 3 (2024): 13–23. http://dx.doi.org/10.62951/ijsme.v1i3.25.

Abstract:
To overcome the difficulties of high-dimensional data, Elastic Net Principal Component Regression (ENPCR), a powerful statistical technique, combines Elastic Net regularization with principal component regression (PCR). This method is especially helpful when dealing with multicollinearity among predictors, because it enables efficient variable selection while preserving interpretability. In ENPCR, principal component analysis is first used to reduce the dataset's dimensionality by converting correlated variables into a set of uncorrelated principal components. The Elastic Net regression model then uses these components as inputs and penalizes the regression coefficients with both L1 and L2 penalties. By promoting sparsity, this dual regularization lessens overfitting and helps the model concentrate on its most important components. Simulation studies and real datasets are used to demonstrate the proposed method.
18

Feng, Yunlong, Shao-Gao Lv, Hanyuan Hang, and Johan A. K. Suykens. "Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery." Neural Computation 28, no. 3 (2016): 525–62. http://dx.doi.org/10.1162/neco_a_00812.

Abstract:
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
19

Wang, Wentao, Jiaxuan Liang, Rong Liu, Yunquan Song, and Min Zhang. "A Robust Variable Selection Method for Sparse Online Regression via the Elastic Net Penalty." Mathematics 10, no. 16 (2022): 2985. http://dx.doi.org/10.3390/math10162985.

Abstract:
Variable selection has been a hot topic, with popular methods including the lasso, SCAD, and the elastic net; these penalized regression algorithms remain sensitive to noisy data. Furthermore, "concept drift" fundamentally distinguishes streaming-data learning from batch learning. This article presents a method for noise-resistant regularization and variable selection in noisy data streams with multicollinearity, dubbed the canal-adaptive elastic net, which is similar to the elastic net and encourages grouping effects. In comparison to the lasso, the canal-adaptive elastic net is especially advantageous when the number of predictors (p) is significantly larger than the number of observations (n) and the data are multicollinear. Numerous simulation experiments have confirmed that the canal-adaptive elastic net has higher prediction accuracy than the lasso, ridge regression, and the elastic net on data with multicollinearity and noise.
20

Dai, Ronghuo, Cheng Yin, and Da Peng. "An Application of Elastic-Net Regularized Linear Inverse Problem in Seismic Data Inversion." Applied Sciences 13, no. 3 (2023): 1525. http://dx.doi.org/10.3390/app13031525.

Abstract:
In exploration geophysics, seismic impedance is a physical characteristic parameter of underground formations. It can mark rock characteristics and help stratigraphic analysis. Hence, seismic data inversion for impedance is a key technology in oil and gas reservoir prediction. To invert impedance from seismic data, one can perform reflectivity series inversion first. Then, under a simple exponential integration transformation, the inverted reflectivity series can give the final inverted impedance. The quality of the inverted reflectivity series directly affects the quality of impedance. Sparse-spike inversion is the most common method to obtain reflectivity series with high resolution. It adopts a sparse regularization to impose sparsity on the inverted reflectivity series. However, the high resolution of sparse-spike-like reflectivity series is obtained at the cost of sacrificing small reflectivity. This is the inherent problem of sparse regularization. In fact, the reflectivity series from the actual impedance well log is not strictly sparse. It contains not only the sparse major large reflectivity, but also small reflectivity between major reflectivity. That is to say, the large reflectivity is sparse, but the small reflectivity is dense. To combat this issue, we adopt elastic-net regularization to replace sparse regularization in seismic impedance inversion. The elastic net is a hybrid regularization that combines sparse regularization and dense regularization. The proposed inversion method was performed on a synthetic seismic trace, which is created from an actual well log. Then, a real seismic data profile was used to test the practice application. The inversion results showed that it provides an effective new alternative method to invert impedance.
21

Tak, Nihat, and Deniz İnan. "Type-1 fuzzy forecasting functions with elastic net regularization." Expert Systems with Applications 199 (August 2022): 116916. http://dx.doi.org/10.1016/j.eswa.2022.116916.

22

Zou, Hui, and Trevor Hastie. "Addendum: Regularization and variable selection via the elastic net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, no. 5 (2005): 768. http://dx.doi.org/10.1111/j.1467-9868.2005.00527.x.

23

Jin, Bangti, Dirk A. Lorenz, and Stefan Schiffler. "Elastic-net regularization: error estimates and active set methods." Inverse Problems 25, no. 11 (2009): 115022. http://dx.doi.org/10.1088/0266-5611/25/11/115022.

24

Xin, Hua, Yuhlong Lio, Hsien-Ching Chen, and Tzong-Ru Tsai. "Zero-Inflated Binary Classification Model with Elastic Net Regularization." Mathematics 12, no. 19 (2024): 2990. http://dx.doi.org/10.3390/math12192990.

Abstract:
Zero inflation and overfitting can reduce the accuracy of machine learning models for characterizing binary data sets. A zero-inflated Bernoulli (ZIBer) model can be the right model for zero-inflated binary data sets, but overcoming the overfitting problem when using it is still an open question. To mitigate overfitting in the ZIBer model, the negative log-likelihood function of the ZIBer model with an elastic net regularization penalty is proposed as the loss function. An estimation procedure to minimize the loss function is developed in this study using the gradient descent method (GDM) with a momentum term in the learning rate. The proposed estimation method has two advantages. First, it is a general method that simultaneously uses L1- and L2-norm penalty terms and includes the ridge and least absolute shrinkage and selection operator (lasso) methods as special cases. Second, the momentum learning rate accelerates the convergence of the GDM and enhances the computational efficiency of the proposed estimation procedure. The parameter selection strategy is studied, and the performance of the proposed method is evaluated using Monte Carlo simulations. A diabetes example is used as an illustration.
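The estimation idea in this abstract, gradient descent with a momentum term applied to an elastic-net penalized likelihood, can be sketched for a plain logistic loss as a stand-in for the ZIBer likelihood. The toy data, step size, and momentum factor below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def en_logistic_momentum(X, y, lam1=0.01, lam2=0.01, lr=0.1, beta=0.9, n_iter=2000):
    """Momentum gradient descent for a logistic loss with an elastic-net penalty.
    A subgradient is used for the non-smooth L1 term."""
    n, p = X.shape
    w = np.zeros(p)
    v = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))           # logistic predictions
        grad = X.T @ (prob - y) / n + lam1 * np.sign(w) + lam2 * w
        v = beta * v - lr * grad                        # momentum accumulates past gradients
        w = w + v
    return w

# Toy 1-D example: positive x goes with class 1, so the fitted weight should be positive
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = en_logistic_momentum(X, y)
print(w)
```

Without the penalty the weight on separable data would grow without bound; the L1 and L2 terms keep it finite, and the momentum term speeds up the otherwise slow flat-region convergence, which is the computational advantage the abstract claims.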
25

Chen, Weijian, Chen Xu, Bin Zou, Huidong Jin, and Jie Xu. "Kernelized Elastic Net Regularization based on Markov selective sampling." Knowledge-Based Systems 163 (January 2019): 57–68. http://dx.doi.org/10.1016/j.knosys.2018.08.013.

26

Cui, ZhenKai, Cheng Wang, Jianwei Chen, and Ting He. "Multipoint Vibration Response Prediction under Uncorrelated Multiple Sources Load Based on Elastic-Net Regularization in Frequency Domain." Shock and Vibration 2021 (March 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6614020.

Abstract:
To address the ill-conditioning at natural frequencies and the low prediction accuracy that arise when multivariate linear regression methods are used alone for vibration response prediction, an elastic-net regularization method is proposed. First, a multi-input, multi-output linear regression model of the multipoint frequency-domain vibration response is trained using historical data at each frequency point. Second, the trained model at each frequency point is improved by elastic-net regularization. Finally, the model is applied under working conditions. Vibration response prediction on an experimental dataset of cylindrical-shell acoustic vibration showed that improving the multivariate regression prediction model with elastic-net regularization improves accuracy and reduces the ill-conditioning at some frequencies.
27

Çiftsüren, Mehmet Nur, and Suna Akkol. "Prediction of internal egg quality characteristics and variable selection using regularization methods: ridge, LASSO and elastic net." Archives Animal Breeding 61, no. 3 (2018): 279–84. http://dx.doi.org/10.5194/aab-61-279-2018.

Abstract:
This study was conducted to determine the internal quality characteristics of eggs using external egg quality characteristics. Variables were selected in order to obtain the simplest model using the ridge, LASSO, and elastic net regularization methods. For this purpose, measurements of the internal and external characteristics of 117 Japanese quail eggs were made. The internal quality characteristics were egg yolk weight and albumen weight; the external quality characteristics were egg width, egg length, egg weight, shape index, and shell weight. An ordinary least squares method was applied to the data, and ridge, LASSO, and elastic net regularization were performed to remove the multicollinearity in the data. The regression estimating equations for internal egg quality were significant for all methods (P<0.01). The goodness of fit of the regression estimating equations for egg yolk weight was 58.34 %, 59.17 %, and 59.11 % for the ridge, LASSO, and elastic net methods, respectively; for egg albumen weight it was 75.60 %, 75.94 %, and 75.81 %, respectively. It was revealed that LASSO, including two predictors for both egg yolk weight and egg albumen weight, was the best model with regard to high predictive accuracy.
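The comparison this abstract describes, fitting ridge, LASSO, and elastic net on collinear predictors and comparing goodness of fit, can be mocked up as follows. The synthetic "egg" measurements and penalty strengths are invented stand-ins for the real quail data:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 117                                            # same sample size as the study
width = rng.normal(25.0, 1.0, n)
length = 1.3 * width + rng.normal(0.0, 0.5, n)     # strongly collinear with width
weight = 0.3 * width + 0.2 * length + rng.normal(0.0, 0.3, n)
X = np.column_stack([width, length, weight])
y = 0.5 * weight + rng.normal(0.0, 0.2, n)         # proxy for yolk weight

# Fit all three penalized regressions and record the goodness of fit (R^2)
scores = {}
for model in (Ridge(alpha=1.0), Lasso(alpha=0.01), ElasticNet(alpha=0.01, l1_ratio=0.5)):
    model.fit(X, y)
    scores[type(model).__name__] = r2_score(y, model.predict(X))
print(scores)
```

As in the study, the three penalties give similar fit on such data; the practical difference is that the L1-containing methods can zero out redundant collinear predictors, which is how LASSO ended up with the two-predictor models reported above.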
28

Ohyver, Margaretha, Purhadi, and Achmad Choiruddin. "Parameter Estimation of Geographically and Temporally Weighted Elastic Net Ordinal Logistic Regression." Mathematics 13, no. 8 (2025): 1345. https://doi.org/10.3390/math13081345.

Abstract:
Geographically and Temporally Weighted Elastic Net Ordinal Logistic Regression is a parsimonious ordinal logistic regression with consideration of the existence of spatial and temporal effects. This model has been developed with the following three considerations: the spatial effect, the temporal effect, and predictor selection. The last point prompted the use of Elastic Net regularization in choosing predictors while handling multicollinearity, which often arises when there are many predictors involved. The Elastic Net penalty combines ridge and LASSO penalties, leading to the determination of the appropriate λEN and αEN. Therefore, the objective of this study is to determine the parameter estimator using Maximum Likelihood Estimation. The estimation process comprises defining the likelihood function, determining the natural logarithm of the likelihood function, and maximizing the function using Berndt–Hall–Hall–Hausman. These steps continue until the estimator converges on the values that maximize the likelihood function. This study contributes by developing an estimation framework that integrates spatial and temporal effects with Elastic Net regularization, allowing for improved model interpretation and stability. The findings provide an advanced methodological approach for ordinal logistic regression models that incorporate spatial and temporal dependencies. This framework is particularly useful for applications in fields such as economic forecasting, epidemiology, and environmental studies, where ordinal responses exhibit spatial and temporal patterns.
29

Umanità, Veronica, and Silvia Villa. "Elastic-Net Regularization: Iterative Algorithms and Asymptotic Behavior of Solutions." Numerical Functional Analysis and Optimization 31, no. 12 (2010): 1406–32. http://dx.doi.org/10.1080/01630563.2010.513782.

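Iterative schemes of the kind analyzed in papers like this one typically alternate a gradient step on the smooth part (data fit plus the ℓ2 term) with soft-thresholding for the ℓ1 term. A minimal ISTA-style sketch (my own illustrative implementation, not the authors' algorithm):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ista(A, b, lam1=0.1, lam2=0.1, n_iter=500):
    """ISTA-style iteration for min_x 0.5||Ax - b||^2 + lam1||x||_1 + 0.5*lam2||x||^2."""
    # Step size from the Lipschitz constant of the smooth part (||A||^2 + lam2)
    L = np.linalg.norm(A, 2) ** 2 + lam2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam2 * x   # gradient of the smooth terms
        x = soft(x - grad / L, lam1 / L)      # prox step handles the L1 term
    return x
```

For a diagonal A this reproduces the closed-form elastic-net solution, e.g. for A = I the minimizer is x_i = soft(b_i, λ1) / (1 + λ2), which is a quick sanity check on the iteration.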
30

Zhao, Gen, Jun Hu, Jinliang He, and Shan X. Wang. "A Novel Current Reconstruction Method Based on Elastic Net Regularization." IEEE Transactions on Instrumentation and Measurement 69, no. 10 (2020): 7484–93. http://dx.doi.org/10.1109/tim.2020.2984819.

31

Khan, Christopher, and Brett Byram. "GENRE (GPU Elastic-Net REgression): A CUDA-Accelerated Package for Massively Parallel Linear Regression with Elastic-Net Regularization." Journal of Open Source Software 5, no. 54 (2020): 2644. http://dx.doi.org/10.21105/joss.02644.

32

Chen, De-Han, Bernd Hofmann, and Jun Zou. "Elastic-net regularization versus ℓ1-regularization for linear inverse problems with quasi-sparse solutions." Inverse Problems 33, no. 1 (2016): 015004. http://dx.doi.org/10.1088/1361-6420/33/1/015004.

33

Cai, Jia, Guanglong Xu, and Zhensheng Hu. "Sketch-based image retrieval via CAT loss with elastic net regularization." Mathematical Foundations of Computing 3, no. 4 (2020): 219–27. http://dx.doi.org/10.3934/mfc.2020013.

34

Li, Qiang, Bo Xie, Jane You, Wei Bian, and Dacheng Tao. "Correlated Logistic Model With Elastic Net Regularization for Multilabel Image Classification." IEEE Transactions on Image Processing 25, no. 8 (2016): 3801–13. http://dx.doi.org/10.1109/tip.2016.2577382.

35

De Vito, Ernesto, Veronica Umanità, and Silvia Villa. "A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization." Journal of Complexity 27, no. 2 (2011): 188–200. http://dx.doi.org/10.1016/j.jco.2011.01.003.

36

Sun, Shasha, Wenxing Bao, Kewen Qu, Wei Feng, Xuan Ma, and Xiaowu Zhang. "Hyperspectral-multispectral image fusion using subspace decomposition and Elastic Net Regularization." International Journal of Remote Sensing 45, no. 12 (2024): 3962–91. http://dx.doi.org/10.1080/01431161.2024.2357840.

37

Thippa, Thippa, Rutvij H. Jhaveri, Faisal Mohammed Alotaibi, and Thippa Reddy Gadekallu. "Integrating Clustering and Regularization for Robust LSTM-Based Stock Price Prediction." Fusion: Practice and Applications 18, no. 2 (2025): 251–61. https://doi.org/10.54216/fpa.180218.

Abstract:
Stock price forecasting has long interested researchers around the world. Predictions of the future depend largely on the data used to train the model. In general, historical data containing features of different types are used to train models, and not all of these features are necessarily helpful for making predictions. It is therefore crucial to select the features that are most useful for making precise predictions. This article proposes a feature selection approach based on the K-means clustering algorithm and elastic net regularization: K-means is used to cluster correlated features together, and elastic net regularization is applied to select the most predictive features within each cluster. The selected features are used to train an LSTM model that predicts a stock's closing price for the upcoming trading day. We evaluate the performance of the proposed approach in comparison to an existing approach and observe a performance improvement.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Xiaojun, Zhenqi Jiang, Xiao Han, Xiaolin Wang, and Xiaoying Tang. "Research of magnetic particle imaging reconstruction based on the elastic net regularization." Biomedical Signal Processing and Control 69 (August 2021): 102823. http://dx.doi.org/10.1016/j.bspc.2021.102823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Jing, Bo Han, and Wei Wang. "Elastic-net regularization for nonlinear electrical impedance tomography with a splitting approach." Applicable Analysis 98, no. 12 (2018): 2201–17. http://dx.doi.org/10.1080/00036811.2018.1451644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Cho, Seoae, Haseong Kim, Sohee Oh, Kyunga Kim, and Taesung Park. "Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis." BMC Proceedings 3, Suppl 7 (2009): S25. http://dx.doi.org/10.1186/1753-6561-3-s7-s25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Elmsili, Bilal, and Benaceur Outtaj. "Portfolio Selection Using Multiple Factors: A Machine Learning Approach." International Journal of Economics, Business and Management Research 06, no. 12 (2022): 81–103. http://dx.doi.org/10.51505/ijebmr.2022.61207.

Full text
Abstract:
In this paper we present a novel framework for portfolio selection using deep neural networks and elastic net regularization. At the beginning of each month T, we follow a three-step methodology. First, for each stock, we use the previous seven years of data to compute over 36 firm-specific factors. Second, we perform feature selection using elastic net regularization. Finally, we train a deep neural network to learn portfolio weights and hold this portfolio until the end of month T. Compared with momentum, long-term reversal, and short-term reversal strategies, our approach demonstrates superior performance in terms of the monthly rate of return (2% versus 1.22% for long-term reversal, 1.15% for momentum, and only 0.68% for short-term reversal), Sharpe ratio (21.67% versus 19.31% for momentum, 15.51% for long-term reversal, and 8.69% for short-term reversal), and the monthly risk-adjusted return (1.85% versus 0.74% for momentum, 0.72% for long-term reversal, and 0.31% for short-term reversal). The results of our approach are all statistically significant at the 1% level.
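The feature-selection step described here (keep the factors to which an elastic-net fit assigns nonzero coefficients) can be sketched with scikit-learn's `SelectFromModel`; the data, factor count, and penalty settings below are illustrative assumptions, not the paper's:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
X = rng.normal(size=(250, 36))          # 36 firm-specific factors per stock
beta = np.zeros(36)
beta[[0, 5, 9]] = [1.0, -0.8, 0.6]      # only three factors actually matter
y = X @ beta + 0.1 * rng.normal(size=250)

# Keep only the factors whose elastic-net coefficient is nonzero.
selector = SelectFromModel(ElasticNet(alpha=0.05, l1_ratio=0.5),
                           threshold=1e-8).fit(X, y)
X_kept = selector.transform(X)          # input for the downstream network
print(X_kept.shape[1], "of 36 factors kept")
```

The reduced matrix `X_kept` would then feed the deep network that learns portfolio weights.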
APA, Harvard, Vancouver, ISO, and other styles
42

Laria, Juan Carlos, Line H. Clemmensen, Bjarne K. Ersbøll, and David Delgado-Gómez. "A Generalized Linear Joint Trained Framework for Semi-Supervised Learning of Sparse Features." Mathematics 10, no. 16 (2022): 3001. http://dx.doi.org/10.3390/math10163001.

Full text
Abstract:
The elastic net is among the most widely used types of regularization algorithms, commonly associated with the problem of supervised generalized linear model estimation via penalized maximum likelihood. Its attractive properties, originated from a combination of ℓ1 and ℓ2 norms, endow this method with the ability to select variables, taking into account the correlations between them. In the last few years, semi-supervised approaches that use both labeled and unlabeled data have become an important component in statistical research. Despite this interest, few researchers have investigated semi-supervised elastic net extensions. This paper introduces a novel solution for semi-supervised learning of sparse features in the context of generalized linear model estimation: the generalized semi-supervised elastic net (s2net), which extends the supervised elastic net method, with a general mathematical formulation that covers, but is not limited to, both regression and classification problems. In addition, a flexible and fast implementation for s2net is provided. Its advantages are illustrated in different experiments using real and synthetic data sets. They show how s2net improves the performance of other techniques that have been proposed for both supervised and semi-supervised learning.
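The variable-selection behavior this abstract attributes to the combination of ℓ1 and ℓ2 norms (the "grouping effect" on correlated predictors) can be demonstrated in a few lines; this toy example, with synthetic data and arbitrary penalty strengths, is independent of the s2net implementation:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
z = rng.normal(size=300)
noise = rng.normal(size=(300, 3))
# Columns 0 and 1 are near-duplicates of the same latent signal z.
X = np.column_stack([z + 0.01 * rng.normal(size=300),
                     z + 0.01 * rng.normal(size=300),
                     noise])
y = z + 0.1 * rng.normal(size=300)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# The lasso tends to concentrate weight on one of the twin columns; the
# elastic net's l2 component spreads it across both (the grouping effect).
print("lasso:", lasso.coef_[:2].round(3))
print("enet: ", enet.coef_[:2].round(3))
```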
APA, Harvard, Vancouver, ISO, and other styles
43

De Leone, Renato, Nadaniela Egidi, and Lorella Fatone. "The use of grossone in elastic net regularization and sparse support vector machines." Soft Computing 24, no. 23 (2020): 17669–77. http://dx.doi.org/10.1007/s00500-020-05185-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yu, XiaoYuan, and Wei Xie. "Single Image Blind Deblurring Based on Salient Edge-Structures and Elastic-Net Regularization." Journal of Mathematical Imaging and Vision 62, no. 8 (2020): 1049–61. http://dx.doi.org/10.1007/s10851-020-00949-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, HuanLin, Jing Wu, WeiWei Zhang, and HongWei Ma. "Fractional-order elastic net regularization for identifying various types of unknown external forces." Mechanical Systems and Signal Processing 205 (December 2023): 110842. http://dx.doi.org/10.1016/j.ymssp.2023.110842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lukman, Adewale Folaranmi, Jeza Allohibi, Segun Light Jegede, Emmanuel Taiwo Adewuyi, Segun Oke, and Abdulmajeed Atiah Alharbi. "Kibria–Lukman-Type Estimator for Regularization and Variable Selection with Application to Cancer Data." Mathematics 11, no. 23 (2023): 4795. http://dx.doi.org/10.3390/math11234795.

Full text
Abstract:
Following the ideas behind the elastic-net and Liu-LASSO estimators, we proposed a new penalized estimator based on the Kibria–Lukman estimator with L1 norms to perform both regularization and variable selection. We defined the coordinate descent algorithm for the new estimator and compared its performance with those of some existing machine learning techniques, such as the least absolute shrinkage and selection operator (LASSO), the elastic-net, Liu-LASSO, the GO estimator and the ridge estimator, through simulation studies and real-life applications in terms of test mean squared error (TMSE), coefficient mean squared error (βMSE), false-positive (FP) coefficients and false-negative (FN) coefficients. Our results revealed that the new penalized estimator performs well for both the simulated low- and high-dimensional data. The two real-life results also show that the new method predicts the target variable better than the existing ones under the test RMSE metric.
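For readers unfamiliar with the coordinate descent algorithm mentioned here, the sketch below implements the standard textbook update for the plain elastic net (soft-thresholding plus an ℓ2 shrinkage term in the denominator), not the Kibria–Lukman penalty itself; all parameter values are illustrative:

```python
import numpy as np

def soft_threshold(rho, lam):
    """S(rho, lam) = sign(rho) * max(|rho| - lam, 0)."""
    return np.sign(rho) * np.maximum(np.abs(rho) - lam, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_iter=200):
    """Cyclic coordinate descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every feature's contribution except j's.
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid / n
            denom = X[:, j] @ X[:, j] / n + lam * (1 - alpha)
            beta[j] = soft_threshold(rho, lam * alpha) / denom
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ true + 0.05 * rng.normal(size=100)
print(elastic_net_cd(X, y).round(2))    # nonzero entries shrink toward zero
```

Note the two roles of the penalty: the soft-threshold zeroes small coefficients (selection), while the `lam * (1 - alpha)` term in the denominator shrinks the survivors (stabilization).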
APA, Harvard, Vancouver, ISO, and other styles
47

Singh, Devesh, and Maciej Turała. "Machine Learning and Regularization Technique to Determine Foreign Direct Investment in Hungarian Counties." DANUBE 13, no. 4 (2022): 269–91. http://dx.doi.org/10.2478/danb-2022-0017.

Full text
Abstract:
Recent studies show that regional factors play an important role in attracting Foreign Direct Investment (FDI) and that the performance of these factors varies within a country. It is therefore important to develop a measurement system to analyse these FDI factors in depth. In this study, we used the regularization method with machine learning to gain insight into the FDI determinants at the regional level. We used 18 years of post-socialist-period data at the county level from Hungary and applied a machine learning algorithm to different regression methods: linear, ridge, lasso, and elastic net. We analysed two dependent variables, the total amount of FDI inflow in a county and the disparity of FDI inflow across companies within the county, and used urbanization, GDP per capita, labour productivity, market share of the companies, agglomeration of industries, and growth rate of the companies as predictors. Our results show that the elastic net is the best method for determining the predictive performance of FDI at the regional level.
APA, Harvard, Vancouver, ISO, and other styles
48

Khamidah, Nur, Kusman Sadik, Agus M Soleh, and Gerry Alfa Dito. "Regularisasi model pembelajaran mesin dengan regresi terpenalti pada data yang mengandung multikolinearitas (Studi kasus prediksi Indeks Pembangunan Manusia di 34 provinsi di Indonesia)." Majalah Ilmiah Matematika dan Statistika 24, no. 1 (2024): 12. http://dx.doi.org/10.19184/mims.v24i1.40360.

Full text
Abstract:
This research models high-dimensional data containing multicollinearity with four machine-learning algorithms: Random Forest, K-Nearest Neighbor, XGBoost, and Regression Tree. Regularization was first carried out with penalized ridge regression, least absolute shrinkage and selection operator (LASSO) regression, and Elastic Net regression. A total of 100 predictor variables and 1 response variable, drawn from the 2022 Human Development Index data for the 34 provinces of Indonesia published by BPS, were used and standardized. The simulation was also applied to highly correlated data from two distributions, uniform and normal, with parameter values taken from the empirical data. The results show that the ridge regularization method is the best for producing accurate and stable predictions. Furthermore, there was no difference in the root mean square error (RMSE) between the data with and without standardization. Across all the data analyzed, the kNN model was better than the other models on the simulation data, and the Random Forest and XGBoost models were better than the other models on the empirical data. In addition, the Regression Tree model is not recommended according to the results of this study.
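A minimal sketch of the kind of comparison the abstract reports, fitting ridge, LASSO, and elastic net to deliberately multicollinear synthetic data and comparing cross-validated RMSE (the data and penalty values are assumptions, not the study's):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
base = rng.normal(size=(150, 4))
# A deliberately multicollinear design: five noisy copies of each base column.
X = np.hstack([base + 0.05 * rng.normal(size=base.shape) for _ in range(5)])
y = base @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=150)

results = {}
for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("lasso", Lasso(alpha=0.05)),
                    ("elastic net", ElasticNet(alpha=0.05, l1_ratio=0.5))]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    results[name] = float(np.sqrt(mse))
    print(f"{name}: RMSE = {results[name]:.3f}")
```

The same cross-validated-RMSE loop extends naturally to the tree-based learners the study compares.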
Keywords: regularization, multicollinearity, ridge, LASSO, elastic net. MSC2020: 62J07
APA, Harvard, Vancouver, ISO, and other styles
49

R. Rooba, and A. R. Karthekeyan. "Youtube Comment Feature Selection And Classification Using Fused Machine Learning." Tuijin Jishu/Journal of Propulsion Technology 44, no. 4 (2023): 1108–24. http://dx.doi.org/10.52783/tjjpt.v44.i4.982.

Full text
Abstract:
The exponential rise of internet platforms, notably YouTube, has resulted in a massive volume of user-generated material, including video comments. Analyzing and forecasting the sentiment of YouTube comments is essential for understanding audience feedback and enhancing user experience. This work provides a comprehensive method for YouTube comment prediction and sentiment classification that combines feature selection using Recursive Feature Elimination (RFE), Elastic Net Random Forest with Logistic Regression (RF with LR), and Principal Component Analysis (PCA). The first stage uses RFE, a common approach for choosing the most informative features from a given dataset. RFE aids in the elimination of unnecessary or redundant features, resulting in enhanced model performance and decreased computational complexity. The Elastic Net Random Forest with Logistic Regression (RF with LR) technique is then used to construct a robust sentiment classification model. Elastic Net regularization combines the advantages of both L1 (Lasso) and L2 (Ridge) regularization, allowing for improved feature selection and management of multicollinearity concerns. By integrating many decision trees, the Random Forest ensemble approach further improves the model's predictive power. We employ Principal Component Analysis (PCA) to increase the classification model's effectiveness and address the potential difficulties created by high-dimensional data. PCA reduces the dataset's complexity while retaining its fundamental qualities, resulting in a more manageable and efficient feature space for classification. Finally, we compare the performance of four prominent classifiers on the preprocessed dataset, Linear Support Vector Machine (LSVM), Gaussian Naive Bayes (GNB), Logistic Regression (LR), and Decision Tree (DT), and select the best-performing model for YouTube comment categorization.
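The RFE, then PCA, then elastic-net-penalized classifier chain described above can be approximated with a scikit-learn pipeline; everything below (dataset, feature counts, solver settings) is an illustrative assumption rather than the authors' configuration:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Stand-in for a vectorized comment dataset (30 features, binary sentiment).
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)

pipe = Pipeline([
    # Step 1: RFE drops the least useful features one at a time.
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)),
    # Step 2: PCA compresses the surviving features.
    ("pca", PCA(n_components=5)),
    # Step 3: an elastic-net-penalized logistic classifier.
    ("clf", LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                               solver="saga", max_iter=5000)),
])
pipe.fit(X, y)
print(f"training accuracy: {pipe.score(X, y):.2f}")
```

Swapping the final step for LSVM, GNB, or a decision tree reproduces the classifier comparison the abstract mentions.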
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Rongzhe, Tonglin Li, Shuai Zhou, and Xinhui Deng. "Joint MT and Gravity Inversion Using Structural Constraints: A Case Study from the Linjiang Copper Mining Area, Jilin, China." Minerals 9, no. 7 (2019): 407. http://dx.doi.org/10.3390/min9070407.

Full text
Abstract:
We present a joint 2D inversion approach for magnetotelluric (MT) and gravity data with elastic-net regularization and cross-gradient constraints. We describe the main features of the approach and verify the inversion results against a synthetic model. The results indicate that the best-fit solution under the L2 norm is overly smooth, while the best-fit solution under the L1 norm is too sparse. The elastic-net regularization method, a convex combination of the L2 and L1 norms, can enforce not only stability, preserving local smoothness, but also sparsity, preserving sharp boundaries. Cross-gradient constraints lead to models with close structural resemblance and improve the estimates of the resistivity and density of the synthetic dataset. We apply the novel approach to field datasets from a copper mining area in northeast China. Our results show that the method generates much more detail, sharper boundaries, and better depth resolution. Relative to the existing solution, the large-area divergence phenomenon beneath the anomalous bodies is eliminated, and fine anomalous-body boundaries appear in the smooth region. This method can provide important technical support for detecting deep concealed deposits.
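In generic notation (the symbols here are illustrative, not the authors' own), the convex combination of L1 and L2 penalties described in this abstract takes the form

```latex
\min_{m}\; \|F(m) - d\|_2^2
  \;+\; \lambda\bigl(\beta\,\|m\|_1 + (1-\beta)\,\|m\|_2^2\bigr),
  \qquad 0 \le \beta \le 1,
```

where $F$ is the forward operator, $d$ the observed data, and $\beta$ interpolates between the smooth Tikhonov-style solution ($\beta = 0$) and the sparse L1 solution ($\beta = 1$); the cross-gradient coupling term between the MT and gravity models is omitted from this sketch.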
APA, Harvard, Vancouver, ISO, and other styles
