Academic literature on the topic 'Regularized linear least-squares'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Regularized linear least-squares.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Regularized linear least-squares"

1

Xu, Yong-Li, and Di-Rong Chen. "Partially-Linear Least-Squares Regularized Regression for System Identification." IEEE Transactions on Automatic Control 54, no. 11 (November 2009): 2637–41. http://dx.doi.org/10.1109/tac.2009.2031566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Carp, Doina, Constantin Popa, Tobias Preclik, and Ulrich Rüde. "Iterative Solution of Weighted Linear Least Squares Problems." Analele Universitatii "Ovidius" Constanta - Seria Matematica 28, no. 2 (July 1, 2020): 53–65. http://dx.doi.org/10.2478/auom-2020-0019.

Full text
Abstract:
In this report we show that the iterated regularization scheme due to Riley and Golub, sometimes also called iterated Tikhonov regularization, can be generalized to damped least squares problems in which the weight matrix D is not necessarily the identity but a general symmetric positive definite matrix. We show that the iterative scheme approaches the same point as the unique solutions of the regularized problem as the regularization parameter goes to 0. Furthermore, this point can be characterized as the solution of a weighted minimum Euclidean norm problem. Finally, several numerical experiments performed in the field of rigid multibody dynamics support the theoretical claims.
APA, Harvard, Vancouver, ISO, and other styles
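The iterated scheme described in the abstract above can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' code; it assumes the usual fixed-point form (AᵀA + λD)x_{k+1} = Aᵀb + λD x_k with a symmetric positive definite weight matrix D:

```python
import numpy as np

def iterated_tikhonov(A, b, D, lam=1e-2, n_iter=50):
    """Iterated Tikhonov (Riley-Golub) scheme with a general SPD weight
    matrix D: solve (A^T A + lam*D) x_{k+1} = A^T b + lam*D x_k.
    Each sweep reduces the regularization bias of a single Tikhonov solve."""
    M = A.T @ A + lam * D
    rhs0 = A.T @ b
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.linalg.solve(M, rhs0 + lam * (D @ x))
    return x
```

On a consistent, full-rank system the iterates approach the ordinary least-squares solution, matching the limiting behavior the abstract describes as the regularization parameter goes to 0.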
3

Zhu, Wenwu, Yao Wang, N. P. Galatsanos, and Jun Zhang. "Regularized total least squares approach for nonconvolutional linear inverse problems." IEEE Transactions on Image Processing 8, no. 11 (1999): 1657–61. http://dx.doi.org/10.1109/83.799895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dehghani, Mohsen, Andrew Lambe, and Dominique Orban. "A regularized interior-point method for constrained linear least squares." INFOR: Information Systems and Operational Research 58, no. 2 (February 19, 2019): 202–24. http://dx.doi.org/10.1080/03155986.2018.1559428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tong, Hongzhi, and Michael Ng. "Analysis of regularized least squares for functional linear regression model." Journal of Complexity 49 (December 2018): 85–94. http://dx.doi.org/10.1016/j.jco.2018.08.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shimizu, Yusuke. "Moment convergence of regularized least-squares estimator for linear regression model." Annals of the Institute of Statistical Mathematics 69, no. 5 (August 9, 2016): 1141–54. http://dx.doi.org/10.1007/s10463-016-0577-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Xiaobo, Jian Yang, Qirong Mao, and Fei Han. "Regularized least squares fisher linear discriminant with applications to image recognition." Neurocomputing 122 (December 2013): 521–34. http://dx.doi.org/10.1016/j.neucom.2013.05.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fu, Zhengqing, and Lanlan Guo. "Tikhonov Regularized Variable Projection Algorithms for Separable Nonlinear Least Squares Problems." Complexity 2019 (November 25, 2019): 1–9. http://dx.doi.org/10.1155/2019/4861708.

Full text
Abstract:
This paper considers the classical separable nonlinear least squares problem. Such problems can be expressed as a linear combination of nonlinear functions, and both linear and nonlinear parameters are to be estimated. Among the existing results, ill-conditioned problems are less often considered. Hence, this paper focuses on an algorithm for ill-conditioned problems. In the proposed linear parameter estimation process, the sensitivity of the model to disturbance is reduced using Tikhonov regularisation. The Levenberg–Marquardt algorithm is used to estimate the nonlinear parameters. The Jacobian matrix required by LM is calculated by the Golub and Pereyra, Kaufman, and Ruano methods. Combining the nonlinear and linear parameter estimation methods, three estimation models are obtained and the feasibility and stability of the model estimation are demonstrated. The model is validated by simulation data and real data. The experimental results also illustrate the feasibility and stability of the model.
APA, Harvard, Vancouver, ISO, and other styles
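The separable structure described in the abstract above — a linear combination of nonlinear functions, with the linear coefficients eliminated by a Tikhonov-regularized solve — can be sketched on a hypothetical two-term exponential model. The model and parameter names below are illustrative assumptions, and the outer solver is SciPy's generic least_squares rather than the Levenberg–Marquardt variants the paper studies:

```python
import numpy as np
from scipy.optimize import least_squares

def ridge_coeffs(Phi, y, lam):
    # Tikhonov-regularized linear step: c = (Phi^T Phi + lam*I)^{-1} Phi^T y
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

def varpro_fit(t, y, theta0, lam=1e-6):
    """Separable NLS: y ~ c1*exp(-theta*t) + c2. The linear coefficients
    c are eliminated with a regularized least-squares solve (variable
    projection); only theta is optimized nonlinearly."""
    def residual(theta):
        Phi = np.column_stack([np.exp(-theta[0] * t), np.ones_like(t)])
        return y - Phi @ ridge_coeffs(Phi, y, lam)
    sol = least_squares(residual, x0=[theta0])
    theta = sol.x[0]
    Phi = np.column_stack([np.exp(-theta * t), np.ones_like(t)])
    return theta, ridge_coeffs(Phi, y, lam)
```

The small ridge term stabilizes the linear step when the basis columns are nearly collinear, which is exactly the ill-conditioned situation the paper targets.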
9

Zhou, Yang, and Yanan Kong. "Continuous Regularized Least Squares Polynomial Approximation on the Sphere." Mathematical Problems in Engineering 2020 (August 20, 2020): 1–9. http://dx.doi.org/10.1155/2020/9172385.

Full text
Abstract:
In this paper, we consider the problem of polynomial reconstruction of smooth functions on the sphere from their noisy values at discrete nodes on the two-sphere. The method considered in this paper is a weighted least squares form with a continuous regularization. Preliminary error bounds in terms of regularization parameter, noise scale, and smoothness are proposed under two assumptions: the mesh norm of the data point set and the perturbation bound of the weight. Condition numbers of the linear systems derived by the problem are discussed. We also show that spherical tϵ-designs, which can be seen as a generalization of spherical t-designs, are well applied to this model. Numerical results show that the method has good performance in view of both the computation time and the approximation quality.
APA, Harvard, Vancouver, ISO, and other styles
10

Duan, Peihu, Zhisheng Duan, Guanrong Chen, and Ling Shi. "Distributed state estimation for uncertain linear systems: A regularized least-squares approach." Automatica 117 (July 2020): 109007. http://dx.doi.org/10.1016/j.automatica.2020.109007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Regularized linear least-squares"

1

Slagel, Joseph Tanner. "The Sherman Morrison Iteration." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52966.

Full text
Abstract:
The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are discussed to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with an extremely underdetermined least squares problem. The performance of the Sherman Morrison iteration is compared to classic direct methods, as well as iterative methods, in a number of experiments. Specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab codes for the method available in the appendix.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
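The identity underlying the Sherman Morrison iteration is the rank-one inverse update. The sketch below illustrates only that identity in NumPy, not the thesis's pivoting and splitting machinery:

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Rank-one inverse update:
    (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u).
    Given a known A^{-1}, the update costs O(n^2) instead of the O(n^3)
    of refactorizing, which is what makes such iterations attractive."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)
```

The formula fails when 1 + vᵀA⁻¹u = 0, i.e. when the rank-one update makes the matrix singular; robust implementations must guard that denominator.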
2

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Full text
Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms of NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. 
We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
APA, Harvard, Vancouver, ISO, and other styles
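The NMF-to-NLS connection the abstract describes can be illustrated with a plain alternating scheme. This sketch uses SciPy's classical active-set nnls for each subproblem; the thesis's accelerated block principal pivoting method is a faster drop-in for that same step:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, r, n_iter=100, seed=0):
    """Alternating nonnegative least squares for X ~ W H with W, H >= 0.
    Each half-step is a batch of nonnegativity-constrained least squares
    problems, solved here column-by-column with scipy.optimize.nnls."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = np.zeros((r, n))
    for _ in range(n_iter):
        for j in range(n):      # update H: min ||W h_j - x_j||, h_j >= 0
            H[:, j], _ = nnls(W, X[:, j])
        for i in range(m):      # update W: min ||H^T w_i - x_i||, w_i >= 0
            W[i, :], _ = nnls(H.T, X[i, :])
    return W, H
```

Because every subproblem shares the same left-hand matrix, exploiting that common structure across columns is exactly where the block-pivoting acceleration in the thesis pays off.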
3

Suliman, Mohamed Abdalla Elhag. "Regularization Techniques for Linear Least-Squares Problems." Thesis, 2016. http://hdl.handle.net/10754/609160.

Full text
Abstract:
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. 
Furthermore, the second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very fast to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second proposed COPRA method is applied to a large set of different real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
APA, Harvard, Vancouver, ISO, and other styles
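For context on the criterion itself: the standard Tikhonov-regularized least-squares solution can be written through the SVD, which also makes the role of the regularization parameter visible. This is a textbook RLS sketch, not the COPRA selection rule, which the thesis derives from a nonlinear characteristic equation:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized LS solution via the SVD:
    x = sum_i  s_i/(s_i^2 + lam) * (u_i^T b) v_i.
    The filter factors s_i^2/(s_i^2 + lam) damp the contributions of
    small singular values, which is what stabilizes discrete
    ill-posed problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s**2 + lam)
    return Vt.T @ (f * (U.T @ b))
```

For fast-decaying singular values — the setting of the second COPRA variant — the choice of lam decides which of these filter factors are effectively zeroed out.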

Book chapters on the topic "Regularized linear least-squares"

1

Jiang, Kaifeng, Defeng Sun, and Kim-Chuan Toh. "Solving Nuclear Norm Regularized and Semidefinite Matrix Least Squares Problems with Linear Equality Constraints." In Discrete Geometry and Optimization, 133–62. Heidelberg: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00200-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nikolova, Mila. "Should We Search for a Global Minimizer of Least Squares Regularized with an ℓ0 Penalty to Get the Exact Solution of an under Determined Linear System?" In Lecture Notes in Computer Science, 508–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24785-9_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Regularized linear least-squares"

1

Rifkin, Ryan, Ken Schutte, Michelle Saad, Jake Bouvrie, and Jim Glass. "Noise Robust Phonetic Classification with Linear Regularized Least Squares and Second-Order Features." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.367211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rusch, Konstantin, Martin Siggel, and Richard-Gregor Becker. "Reproducing Existing Nacelle Geometries With the Free-Form Deformation Parametrization." In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-76445.

Full text
Abstract:
In the conceptual and preliminary aircraft design phase the Free-Form Deformation (FFD) is one of various parametrization schemes to define the geometry of an engine’s nacelle. This paper presents a method that is able to create a C2 continuous periodic approximation of existing reference nacelles with the B-spline based FFD, which is a generalization of the classical FFD. The basic principle of this method is to start with a rotationally symmetric B-spline approximation of the reference nacelle, which is subsequently deformed with an FFD grid that is placed around the initial geometry. A method is derived that computes the displacement of the FFD grid points such that the deformed nacelle approximates the reference nacelle with minimal deviations. As this turns out to be a linear inverse problem, it can be solved with a linear least squares fit. To avoid overfitting effects — like degenerate FFD grids which imply excessive local deformations — the inverse problem is regularized with the Tikhonov approach. The NASA CRM model and the IAE V2500 engine have been selected as reference geometries. Both resemble nacelles that are typically found on common aircraft models and both deviate sufficiently from rotational symmetry. It is demonstrated that the mean error of the approximation decreases with an increase in the number of FFD grid points, and how the regularization affects these results. Finally, the B-spline based FFD is compared with the classical Bernstein based FFD for both models. The results conceptually prove the usability of the FFD approach for the construction of nacelle geometries in the preliminary aircraft design phase.
APA, Harvard, Vancouver, ISO, and other styles
