Journal articles on the topic 'Regularized linear least-squares'

Consult the top 50 journal articles for your research on the topic 'Regularized linear least-squares.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Yong-Li, and Di-Rong Chen. "Partially-Linear Least-Squares Regularized Regression for System Identification." IEEE Transactions on Automatic Control 54, no. 11 (November 2009): 2637–41. http://dx.doi.org/10.1109/tac.2009.2031566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Carp, Doina, Constantin Popa, Tobias Preclik, and Ulrich Rüde. "Iterative Solution of Weighted Linear Least Squares Problems." Analele Universitatii "Ovidius" Constanta - Seria Matematica 28, no. 2 (July 1, 2020): 53–65. http://dx.doi.org/10.2478/auom-2020-0019.

Full text
Abstract:
In this report we show that the iterated regularization scheme due to Riley and Golub, sometimes also called iterated Tikhonov regularization, can be generalized to damped least squares problems where the weight matrix D is not necessarily the identity but a general symmetric and positive definite matrix. We show that the iterative scheme approaches the same point as the unique solutions of the regularized problem when the regularization parameter goes to 0. Furthermore, this point can be characterized as the solution of a weighted minimum Euclidean norm problem. Finally, several numerical experiments were performed in the field of rigid multibody dynamics, supporting the theoretical claims.
APA, Harvard, Vancouver, ISO, and other styles
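
The iterated Tikhonov (Riley–Golub) scheme summarized in the abstract above is straightforward to prototype. Below is a minimal NumPy sketch of the weighted damped least-squares iteration, where D is a symmetric positive definite weight matrix; the test data, the damping parameter alpha, and the iteration count are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def iterated_tikhonov(A, b, D, alpha=1e-2, n_iter=20):
    """Iterated Tikhonov (Riley-Golub) iteration for weighted damped least squares.

    Each step solves (A^T D A + alpha*I) dx = A^T D (b - A x) and updates x <- x + dx,
    so the iterates approach the weighted least-squares solution as iterations proceed.
    """
    n = A.shape[1]
    x = np.zeros(n)
    K = A.T @ D @ A + alpha * np.eye(n)   # regularized normal-equations matrix (fixed)
    for _ in range(n_iter):
        dx = np.linalg.solve(K, A.T @ D @ (b - A @ x))
        x = x + dx
    return x

# Tiny illustrative example (random data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)
D = np.diag(rng.uniform(0.5, 2.0, size=30))   # SPD weight matrix
x_hat = iterated_tikhonov(A, b, D)
print(np.linalg.norm(x_hat - x_true))
```

Each step reuses the same regularized normal-equations matrix, so the per-iteration cost is a single linear solve.
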
3

Zhu, Wenwu, Yao Wang, N. P. Galatsanos, and Jun Zhang. "Regularized total least squares approach for nonconvolutional linear inverse problems." IEEE Transactions on Image Processing 8, no. 11 (1999): 1657–61. http://dx.doi.org/10.1109/83.799895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dehghani, Mohsen, Andrew Lambe, and Dominique Orban. "A regularized interior-point method for constrained linear least squares." INFOR: Information Systems and Operational Research 58, no. 2 (February 19, 2019): 202–24. http://dx.doi.org/10.1080/03155986.2018.1559428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tong, Hongzhi, and Michael Ng. "Analysis of regularized least squares for functional linear regression model." Journal of Complexity 49 (December 2018): 85–94. http://dx.doi.org/10.1016/j.jco.2018.08.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shimizu, Yusuke. "Moment convergence of regularized least-squares estimator for linear regression model." Annals of the Institute of Statistical Mathematics 69, no. 5 (August 9, 2016): 1141–54. http://dx.doi.org/10.1007/s10463-016-0577-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Xiaobo, Jian Yang, Qirong Mao, and Fei Han. "Regularized least squares fisher linear discriminant with applications to image recognition." Neurocomputing 122 (December 2013): 521–34. http://dx.doi.org/10.1016/j.neucom.2013.05.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fu, Zhengqing, and Lanlan Guo. "Tikhonov Regularized Variable Projection Algorithms for Separable Nonlinear Least Squares Problems." Complexity 2019 (November 25, 2019): 1–9. http://dx.doi.org/10.1155/2019/4861708.

Full text
Abstract:
This paper considers the classical separable nonlinear least squares problem. Such problems can be expressed as a linear combination of nonlinear functions, and both linear and nonlinear parameters are to be estimated. Among the existing results, ill-conditioned problems are less often considered. Hence, this paper focuses on an algorithm for ill-conditioned problems. In the proposed linear parameter estimation process, the sensitivity of the model to disturbance is reduced using Tikhonov regularization. The Levenberg–Marquardt (LM) algorithm is used to estimate the nonlinear parameters. The Jacobian matrix required by LM is calculated by the Golub and Pereyra, Kaufman, and Ruano methods. Combining the nonlinear and linear parameter estimation methods, three estimation models are obtained and the feasibility and stability of the model estimation are demonstrated. The model is validated by simulation data and real data. The experimental results also illustrate the feasibility and stability of the model.
APA, Harvard, Vancouver, ISO, and other styles
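
The separable structure described above lends itself to a variable-projection prototype: the linear coefficients are eliminated with a Tikhonov-regularized solve, and only the nonlinear parameters are passed to a Levenberg–Marquardt solver. The sketch below assumes SciPy is available and uses a sum-of-decaying-exponentials model; the model, the regularization weight, and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y ~ Phi(t, theta) @ c, with nonlinear parameters theta
# and linear coefficients c.  Phi stacks decaying exponentials here.
def design_matrix(t, theta):
    return np.exp(-np.outer(t, theta))           # shape (len(t), len(theta))

def ridge_coeffs(Phi, y, lam):
    # Tikhonov-regularized estimate of the linear parameters.
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

def projected_residual(theta, t, y, lam):
    Phi = design_matrix(t, theta)
    c = ridge_coeffs(Phi, y, lam)                # eliminate the linear parameters
    return Phi @ c - y                           # residual seen by the outer solver

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 80)
theta_true = np.array([0.6, 2.5])
c_true = np.array([1.0, -0.7])
y = design_matrix(t, theta_true) @ c_true + 0.01 * rng.standard_normal(t.size)

# Levenberg-Marquardt on the reduced (projected) problem.
fit = least_squares(projected_residual, x0=np.array([1.0, 1.5]),
                    args=(t, y, 1e-6), method="lm")
theta_hat = fit.x
c_hat = ridge_coeffs(design_matrix(t, theta_hat), y, 1e-6)
print(theta_hat, c_hat)
```
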
9

Zhou, Yang, and Yanan Kong. "Continuous Regularized Least Squares Polynomial Approximation on the Sphere." Mathematical Problems in Engineering 2020 (August 20, 2020): 1–9. http://dx.doi.org/10.1155/2020/9172385.

Full text
Abstract:
In this paper, we consider the problem of polynomial reconstruction of smooth functions on the sphere from their noisy values at discrete nodes on the two-sphere. The method considered in this paper is a weighted least squares form with a continuous regularization. Preliminary error bounds in terms of regularization parameter, noise scale, and smoothness are proposed under two assumptions: the mesh norm of the data point set and the perturbation bound of the weight. Condition numbers of the linear systems derived by the problem are discussed. We also show that spherical tϵ-designs, which can be seen as a generalization of spherical t-designs, are well applied to this model. Numerical results show that the method has good performance in view of both the computation time and the approximation quality.
APA, Harvard, Vancouver, ISO, and other styles
10

Duan, Peihu, Zhisheng Duan, Guanrong Chen, and Ling Shi. "Distributed state estimation for uncertain linear systems: A regularized least-squares approach." Automatica 117 (July 2020): 109007. http://dx.doi.org/10.1016/j.automatica.2020.109007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Garrigos, Guillaume, Lorenzo Rosasco, and Silvia Villa. "Thresholding gradient methods in Hilbert spaces: support identification and linear convergence." ESAIM: Control, Optimisation and Calculus of Variations 26 (2020): 28. http://dx.doi.org/10.1051/cocv/2019011.

Full text
Abstract:
We study the ℓ1 regularized least squares optimization problem in a separable Hilbert space. We show that the iterative soft-thresholding algorithm (ISTA) converges linearly, without making any assumption on the linear operator at play or on the problem. The result is obtained by combining two key concepts: the notion of extended support, a finite set containing the support, and the notion of conditioning over finite-dimensional sets. We prove that ISTA identifies the extended support of the solution after a finite number of iterations, and we derive linear convergence from the conditioning property, which is always satisfied for ℓ1 regularized least squares problems. Our analysis extends to the entire class of thresholding gradient algorithms, for which we provide a conceptually new proof of strong convergence, as well as convergence rates.
APA, Harvard, Vancouver, ISO, and other styles
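
For reference, the ℓ1-regularized least-squares problem analyzed above is the setting of the classical ISTA iteration, sketched here in finite dimensions with NumPy; the step size 1/L and the toy data are illustrative choices, not part of the paper's Hilbert-space analysis.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    A step size of 1/L, with L an upper bound on the largest eigenvalue of A^T A,
    guarantees convergence.
    """
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)               # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-6))
```
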
12

Reginska, Teresa. "DISCREPANCY SETS FOR COMBINED LEAST SQUARES PROJECTION AND TIKHONOV REGULARIZATION." Mathematical Modelling and Analysis 22, no. 2 (March 18, 2017): 202–12. http://dx.doi.org/10.3846/13926292.2017.1289987.

Full text
Abstract:
To solve a linear ill-posed problem, a combination of the finite dimensional least squares projection method and the Tikhonov regularization is considered. The dimension of the projection is treated as the second parameter of regularization. A two-parameter discrepancy principle defines a discrepancy set for any data error bound. The aim of the paper is to describe this set and to indicate its subset such that for regularization parameters from this subset the related regularized solution has the same order of accuracy as the Tikhonov regularization with the standard discrepancy principle but without any discretization.
APA, Harvard, Vancouver, ISO, and other styles
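
A finite-dimensional caricature of the combined scheme discussed above: project onto the leading k singular vectors (first regularization parameter), apply Tikhonov with weight alpha (second parameter), and pick alpha by a simple discrepancy criterion. The test kernel, the noise level, and the parameter grid are illustrative assumptions, and the discrepancy rule shown is the standard single-parameter version rather than the paper's two-parameter set.

```python
import numpy as np

def projected_tikhonov(A, b, k, alpha):
    """Rank-k least-squares projection combined with Tikhonov regularization."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T
    coeffs = sk / (sk**2 + alpha) * (Uk.T @ b)    # Tikhonov-filtered projected solution
    return Vk @ coeffs

def discrepancy_alpha(A, b, k, delta, alphas):
    """Pick the largest alpha whose residual stays at or below the noise level delta."""
    for alpha in sorted(alphas, reverse=True):
        x = projected_tikhonov(A, b, k, alpha)
        if np.linalg.norm(A @ x - b) <= delta:
            return alpha, x
    a_min = min(alphas)
    return a_min, projected_tikhonov(A, b, k, a_min)

# Mildly ill-posed toy problem (a smoothing kernel), not from the paper.
n = 60
t = np.linspace(0, 1, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :])**2) / n
x_true = np.sin(2 * np.pi * t)
noise = 1e-3 * np.random.default_rng(3).standard_normal(n)
b = A @ x_true + noise
delta = 1.1 * np.linalg.norm(noise)
alpha, x_hat = discrepancy_alpha(A, b, k=20, delta=delta, alphas=np.logspace(-8, -1, 30))
print(alpha, np.linalg.norm(x_hat - x_true))
```
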
13

Yang, Wenyuan, Chan Li, and Hong Zhao. "Label Distribution Learning by Regularized Sample Self-Representation." Mathematical Problems in Engineering 2018 (2018): 1–11. http://dx.doi.org/10.1155/2018/1090565.

Full text
Abstract:
Multilabel learning that focuses on an instance of the corresponding related or unrelated label can solve many ambiguity problems. Label distribution learning (LDL) reflects the importance of the related label to an instance and offers a more general learning framework than multilabel learning. However, the current LDL algorithms ignore the linear relationship between the distribution of labels and the feature. In this paper, we propose a regularized sample self-representation (RSSR) approach for LDL. First, the label distribution problem is formalized by sample self-representation, whereby each label distribution can be represented as a linear combination of its relevant features. Second, the LDL problem is solved by L2-norm least-squares and L2,1-norm least-squares methods to reduce the effects of outliers and overfitting. The corresponding algorithms are named RSSR-LDL2 and RSSR-LDL21. Third, the proposed algorithms are compared with four state-of-the-art LDL algorithms using 12 public datasets and five evaluation metrics. The results demonstrate that the proposed algorithms can effectively identify the predictive label distribution and exhibit good performance in terms of distance and similarity evaluations.
APA, Harvard, Vancouver, ISO, and other styles
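
The self-representation idea above reduces to regularized least-squares fits of the label-distribution matrix on the feature matrix. The sketch below shows a ridge (L2) fit and an L2,1-style variant handled by iteratively reweighted ridge regression; the objective and the synthetic data are simplified assumptions for illustration and do not reproduce the exact RSSR-LDL2/RSSR-LDL21 formulations.

```python
import numpy as np

def rssr_l2(X, D, lam):
    """Ridge (L2) self-representation: W = argmin ||X W - D||_F^2 + lam*||W||_F^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ D)

def rssr_l21(X, D, lam, n_iter=50, eps=1e-8):
    """L2,1-style variant solved by iteratively reweighted ridge regression.

    The row-sparse penalty sum_i ||W_i||_2 is handled through the standard
    reweighting, replacing it by a quadratic term weighted by 1/(2*||W_i_old||_2).
    """
    W = rssr_l2(X, D, lam)
    for _ in range(n_iter):
        row_norms = np.linalg.norm(W, axis=1) + eps
        G = np.diag(0.5 / row_norms)              # reweighting matrix
        W = np.linalg.solve(X.T @ X + lam * G, X.T @ D)
    return W

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 20))                # features
W_true = np.zeros((20, 5))
W_true[:4] = rng.standard_normal((4, 5))          # only 4 features matter
E = np.exp(X @ W_true)
D = E / E.sum(axis=1, keepdims=True)              # synthetic label distributions (illustrative)
W_hat = rssr_l21(X, D, lam=1.0)
print(np.linalg.norm(W_hat, axis=1).round(2))     # row norms: roughly sparse pattern
```
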
14

Nikolova, Mila. "Solve exactly an underdetermined linear system by minimizing least squares regularized with an ℓ0 penalty." Comptes Rendus Mathematique 349, no. 21-22 (November 2011): 1145–50. http://dx.doi.org/10.1016/j.crma.2011.08.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lampe, J., and H. Voss. "GLOBAL CONVERGENCE OF RTLSQEP: A SOLVER OF REGULARIZED TOTAL LEAST SQUARES PROBLEMS VIA QUADRATIC EIGENPROBLEMS." Mathematical Modelling and Analysis 13, no. 1 (March 31, 2008): 55–66. http://dx.doi.org/10.3846/1392-6292.2008.13.55-66.

Full text
Abstract:
The total least squares (TLS) method is a successful approach for linear problems if both the matrix and the right hand side are contaminated by some noise. In a recent paper Sima, Van Huffel and Golub suggested an iterative method for solving regularized TLS problems, where in each iteration step a quadratic eigenproblem has to be solved. In this paper we prove its global convergence, and we present an efficient implementation using an iterative projection method with thick updates.
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Yang, and Sergey Fomel. "Seismic data interpolation beyond aliasing using regularized nonstationary autoregression." GEOPHYSICS 76, no. 5 (September 2011): V69–V77. http://dx.doi.org/10.1190/geo2010-0231.1.

Full text
Abstract:
Seismic data are often inadequately or irregularly sampled along spatial axes. Irregular sampling can produce artifacts in seismic imaging results. We have developed a new approach to interpolate aliased seismic data based on adaptive prediction-error filtering (PEF) and regularized nonstationary autoregression. Instead of cutting data into overlapping windows (patching), a popular method for handling nonstationarity, we obtain smoothly nonstationary PEF coefficients by solving a global regularized least-squares problem. We employ shaping regularization to control the smoothness of adaptive PEFs. Finding the interpolated traces can be treated as another linear least-squares problem, which solves for data values rather than filter coefficients. Compared with existing methods, the advantages of the proposed method include an intuitive selection of regularization parameters and fast iteration convergence. The technique was tested on benchmark synthetic and field data to prove it can successfully reconstruct data with decimated or missing traces.
APA, Harvard, Vancouver, ISO, and other styles
17

Hainmueller, Jens, and Chad Hazlett. "Kernel Regularized Least Squares: Reducing Misspecification Bias with a Flexible and Interpretable Machine Learning Approach." Political Analysis 22, no. 2 (2014): 143–68. http://dx.doi.org/10.1093/pan/mpt019.

Full text
Abstract:
We propose the use of Kernel Regularized Least Squares (KRLS) for social science modeling and inference problems. KRLS borrows from machine learning methods designed to solve regression and classification problems without relying on linearity or additivity assumptions. The method constructs a flexible hypothesis space that uses kernels as radial basis functions and finds the best-fitting surface in this space by minimizing a complexity-penalized least squares problem. We argue that the method is well-suited for social science inquiry because it avoids strong parametric assumptions, yet allows interpretation in ways analogous to generalized linear models while also permitting more complex interpretation to examine nonlinearities, interactions, and heterogeneous effects. We also extend the method in several directions to make it more effective for social inquiry, by (1) deriving estimators for the pointwise marginal effects and their variances, (2) establishing unbiasedness, consistency, and asymptotic normality of the KRLS estimator under fairly general conditions, (3) proposing a simple automated rule for choosing the kernel bandwidth, and (4) providing companion software. We illustrate the use of the method through simulations and empirical examples.
APA, Harvard, Vancouver, ISO, and other styles
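
At its core, the KRLS method described above solves a kernel ridge regression problem. A minimal NumPy sketch with a Gaussian kernel follows; the bandwidth sigma, the regularization weight lam, and the synthetic data are hand-picked assumptions, whereas KRLS as proposed would choose the bandwidth by an automated rule and add the marginal-effect and inference machinery.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krls_fit(X, y, lam, sigma):
    """Kernel regularized least squares: c = (K + lam*I)^{-1} y with a Gaussian kernel."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krls_predict(X_train, c, X_new, sigma):
    return gaussian_kernel(X_new, X_train, sigma) @ c

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)   # nonlinear target
c = krls_fit(X, y, lam=0.1, sigma=1.0)
X_test = np.linspace(-3, 3, 5)[:, None]
print(krls_predict(X, c, X_test, sigma=1.0).round(2))
```
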
18

Sundararajan, S., Shirish Shevade, and S. Sathiya Keerthi. "Fast Generalized Cross-Validation Algorithm for Sparse Model Learning." Neural Computation 19, no. 1 (January 2007): 283–301. http://dx.doi.org/10.1162/neco.2007.19.1.283.

Full text
Abstract:
We propose a fast, incremental algorithm for designing linear regression models. The proposed algorithm generates a sparse model by optimizing multiple smoothing parameters using the generalized cross-validation approach. The performances on synthetic and real-world data sets are compared with other incremental algorithms such as Tipping and Faul's fast relevance vector machine, Chen et al.'s orthogonal least squares, and Orr's regularized forward selection. The results demonstrate that the proposed algorithm is competitive.
APA, Harvard, Vancouver, ISO, and other styles
19

Aghamiry, H. S., A. Gholami, and S. Operto. "Full waveform inversion by proximal Newton method using adaptive regularization." Geophysical Journal International 224, no. 1 (September 11, 2020): 169–80. http://dx.doi.org/10.1093/gji/ggaa434.

Full text
Abstract:
Regularization is necessary for solving non-linear ill-posed inverse problems arising in different fields of geosciences. The base of a suitable regularization is the prior expressed by the regularizer, which can be non-adaptive or adaptive (data-driven), smooth or non-smooth, variational-based or not. Nevertheless, tailoring a suitable and easy-to-implement prior for describing geophysical models is a non-trivial task. In this paper, we propose two generic optimization algorithms to implement arbitrary regularization in non-linear inverse problems such as full-waveform inversion (FWI), where the regularization task is recast as a denoising problem. We assess these optimization algorithms with the plug-and-play block matching (BM3D) regularization algorithm, which determines empirical priors adaptively without any optimization formulation. The non-linear inverse problem is solved with a proximal Newton method, which generalizes the traditional Newton step in such a way as to involve the gradients/subgradients of a (possibly non-differentiable) regularization function through operator splitting and proximal mappings. Furthermore, it requires accounting for the Hessian matrix in the regularized least-squares optimization problem. We propose two different splitting algorithms for this task. In the first, we compute the Newton search direction with an iterative method based upon the first-order generalized iterative shrinkage-thresholding algorithm (ISTA), and hence Newton-ISTA (NISTA). The iterations require only Hessian-vector products to compute the gradient step of the quadratic approximation of the non-linear objective function. The second relies on the alternating direction method of multipliers (ADMM), and hence Newton-ADMM (NADMM), where the least-squares optimization subproblem and the regularization subproblem in the composite objective function are decoupled through an auxiliary variable and solved in an alternating mode. The least-squares subproblem can be solved with exact, inexact, or quasi-Newton methods. We compare NISTA and NADMM numerically by solving FWI with BM3D regularization. The tests show promising results obtained by both algorithms. However, NADMM shows a faster convergence rate than NISTA when using L-BFGS to solve the Newton system.
APA, Harvard, Vancouver, ISO, and other styles
20

Weng, Haolei, and Arian Maleki. "Low noise sensitivity analysis of ℓq-minimization in oversampled systems." Information and Inference: A Journal of the IMA 9, no. 1 (January 27, 2019): 113–55. http://dx.doi.org/10.1093/imaiai/iay024.

Full text
Abstract:
The class of $\ell_q$-regularized least squares (LQLS) estimators is considered for estimating $\beta \in \mathbb{R}^p$ from its $n$ noisy linear observations $y=X\beta + w$. The performance of these schemes is studied under the high-dimensional asymptotic setting in which the dimension of the signal grows linearly with the number of measurements. In this asymptotic setting, phase transition (PT) diagrams are often used for comparing the performance of different estimators. PT specifies the minimum number of observations required by a certain estimator to recover a structured signal, e.g., a sparse one, from its noiseless linear observations. Although PT analysis is shown to provide useful information for compressed sensing, the fact that it ignores the measurement noise not only limits its applicability in many application areas, but also may lead to misunderstandings. For instance, consider a linear regression problem in which $n>p$ and the signal is not exactly sparse. If the measurement noise is ignored in such systems, regularization techniques, such as LQLS, seem to be irrelevant since even the ordinary least squares (OLS) returns the exact solution. However, it is well known that if $n$ is not much larger than $p$, then the regularization techniques improve the performance of OLS. In response to this limitation of PT analysis, we consider the low-noise sensitivity analysis. We show that this analysis framework (i) reveals the advantage of LQLS over OLS, (ii) captures the difference between different LQLS estimators even when $n>p$, and (iii) provides a fair comparison among different estimators in high signal-to-noise ratios. As an application of this framework, we will show that under mild conditions LASSO outperforms other LQLS even when the signal is dense. Finally, by a simple transformation, we connect our low-noise sensitivity framework to the classical asymptotic regime in which $n/p \rightarrow \infty$, and characterize how and when regularization techniques offer improvements over ordinary least squares, and which regularizer gives the most improvement when the sample size is large.
APA, Harvard, Vancouver, ISO, and other styles
21

Xu, Fangfang, and Peng Pan. "A New Algorithm for Positive Semidefinite Matrix Completion." Journal of Applied Mathematics 2016 (2016): 1–5. http://dx.doi.org/10.1155/2016/1659019.

Full text
Abstract:
Positive semidefinite matrix completion (PSDMC) aims to recover positive semidefinite and low-rank matrices from a subset of entries of a matrix. It is widely applicable in many fields, such as statistical analysis and system control. This task can be conducted by solving the nuclear norm regularized linear least squares model with positive semidefinite constraints. We apply the widely used alternating direction method of multipliers to solve the model and get a novel algorithm. The applicability and efficiency of the new algorithm are demonstrated in numerical experiments. Recovery results show that our algorithm is helpful.
APA, Harvard, Vancouver, ISO, and other styles
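
One way to make the model above concrete is an ADMM splitting in which the data-fit and trace terms (on the PSD cone the nuclear norm equals the trace) are handled entry-wise and the PSD constraint is enforced by an eigenvalue projection. This is a generic splitting sketched under simplifying assumptions, not necessarily the algorithm of the paper; the penalty mu, the ADMM parameter rho, and the synthetic data are illustrative.

```python
import numpy as np

def psd_completion_admm(M_obs, mask, mu=0.1, rho=1.0, n_iter=300):
    """ADMM sketch for  min_X 0.5*||P_Omega(X - M)||_F^2 + mu*tr(X)  s.t. X PSD.

    X absorbs the data fit and the (linear) trace penalty entry-wise; Z is the
    projection onto the PSD cone; U is the scaled dual variable.
    """
    n = M_obs.shape[0]
    X = np.zeros((n, n)); Z = np.zeros((n, n)); U = np.zeros((n, n))
    diag = np.eye(n)
    for _ in range(n_iter):
        V = Z - U
        # Entry-wise closed-form X-update.
        X = (mask * M_obs + rho * V - mu * diag) / (mask + rho)
        X = 0.5 * (X + X.T)                        # keep symmetry
        # Z-update: project X + U onto the PSD cone via eigen-decomposition.
        w, Q = np.linalg.eigh(X + U)
        Z = (Q * np.maximum(w, 0.0)) @ Q.T
        U = U + X - Z
    return Z

rng = np.random.default_rng(6)
B = rng.standard_normal((20, 3))
M = B @ B.T                                        # low-rank PSD ground truth
mask = (rng.uniform(size=(20, 20)) < 0.6).astype(float)
mask = np.maximum(mask, mask.T)                    # symmetric observation pattern
M_hat = psd_completion_admm(M * mask, mask, mu=0.05)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```
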
22

Zhang, Yu, Guoxu Zhou, Jing Jin, Qibin Zhao, Xingyu Wang, and Andrzej Cichocki. "AGGREGATION OF SPARSE LINEAR DISCRIMINANT ANALYSES FOR EVENT-RELATED POTENTIAL CLASSIFICATION IN BRAIN-COMPUTER INTERFACE." International Journal of Neural Systems 24, no. 01 (December 18, 2013): 1450003. http://dx.doi.org/10.1142/s0129065714500038.

Full text
Abstract:
Two main issues for event-related potential (ERP) classification in brain–computer interface (BCI) applications are curse-of-dimensionality and bias-variance tradeoff, which may deteriorate classification performance, especially with insufficient training samples resulting from limited calibration time. This study introduces an aggregation of sparse linear discriminant analyses (ASLDA) to overcome these problems. In the ASLDA, multiple sparse discriminant vectors are learned from differently l1-regularized least-squares regressions by exploiting the equivalence between LDA and least-squares regression, and are subsequently aggregated to form an ensemble classifier, which could not only implement automatic feature selection for dimensionality reduction to alleviate curse-of-dimensionality, but also decrease the variance to improve generalization capacity for new test samples. Extensive investigation and comparison are carried out among the ASLDA, the ordinary LDA and other competing ERP classification algorithms, based on three different ERP datasets. Experimental results indicate that the ASLDA yields better overall performance for single-trial ERP classification when insufficient training samples are available. This suggests the proposed ASLDA is promising for ERP classification in the small-sample-size scenario to improve the practicability of BCI.
APA, Harvard, Vancouver, ISO, and other styles
23

Beiser-McGrath, Janina, and Liam F. Beiser-McGrath. "Problems with products? Control strategies for models with interaction and quadratic effects." Political Science Research and Methods 8, no. 4 (May 18, 2020): 707–30. http://dx.doi.org/10.1017/psrm.2020.17.

Full text
Abstract:
Models testing interactive and quadratic hypotheses are common in Political Science, but control strategies for these models have received little attention. Common practice is to simply include additive control variables, without relevant product terms, into models with interaction or quadratic terms. In this paper, we show in Monte Carlo simulations that interaction terms can absorb the effects of other un-modeled interaction and non-linear effects and, analogously, that included quadratic terms can reflect omitted interactions and non-linearities. This problem even occurs when included and omitted product terms do not share any constitutive terms. We show with Monte Carlo experiments that regularized estimators, the adaptive Lasso, Kernel Regularized Least Squares (KRLS), and Bayesian Additive Regression Trees (BART) can prevent the misattribution of interactive/quadratic effects, minimize the problems of efficiency loss and overfitting, and have low false-positive rates. We illustrate how inferences drawn can change when relevant product terms are used in the control strategy using a recent paper. Implementing the recommendations of this paper would increase the reliability of conditional and non-linear relationships estimated in many papers in the literature.
APA, Harvard, Vancouver, ISO, and other styles
24

Ajeel, Sherzad M., and Hussein A. Hashem. "Comparison Some Robust Regularization Methods in Linear Regression via Simulation Study." Academic Journal of Nawroz University 9, no. 2 (August 20, 2020): 244. http://dx.doi.org/10.25007/ajnu.v9n2a818.

Full text
Abstract:
In this paper, we reviewed some variable selection methods in the linear regression model. Conventional methodologies such as the Ordinary Least Squares (OLS) technique are among the most commonly used methods for estimating the parameters in linear regression. But the OLS estimates perform poorly when the dataset suffers from outliers or when the assumption of normality is violated, such as in the case of heavy-tailed errors. To address this problem, robust regularized regression methods like the Huber Lasso (Rosset and Zhu, 2007) and quantile regression (Koenker and Bassett, 1978) were proposed. This paper focuses on comparing the performance of seven methods: the quantile regression estimates, the Huber Lasso estimates, the adaptive Huber Lasso estimates, the adaptive LAD Lasso, the Gamma-divergence estimates, the Maximum Tangent Likelihood Lasso (MTE) estimates, and the Semismooth Newton Coordinate Descent Algorithm (SNCD) Huber loss estimates.
APA, Harvard, Vancouver, ISO, and other styles
25

Strahl, Jonathan, Jaakko Peltonen, Hiroshi Mamitsuka, and Samuel Kaski. "Scalable Probabilistic Matrix Factorization with Graph-Based Priors." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5851–58. http://dx.doi.org/10.1609/aaai.v34i04.6043.

Full text
Abstract:
In matrix factorization, available graph side-information may not be well suited for the matrix completion problem, having edges that disagree with the latent-feature relations learnt from the incomplete data matrix. We show that removing these contested edges improves prediction accuracy and scalability. We identify the contested edges through a highly-efficient graphical lasso approximation. The identification and removal of contested edges adds no computational complexity to state-of-the-art graph-regularized matrix factorization, remaining linear with respect to the number of non-zeros. Computational load even decreases proportional to the number of edges removed. Formulating a probabilistic generative model and using expectation maximization to extend graph-regularised alternating least squares (GRALS) guarantees convergence. Rich simulated experiments illustrate the desired properties of the resulting algorithm. On real data experiments we demonstrate improved prediction accuracy with fewer graph edges (empirical evidence that graph side-information is often inaccurate). A 300 thousand dimensional graph with three million edges (Yahoo music side-information) can be analyzed in under ten minutes on a standard laptop computer demonstrating the efficiency of our graph update.
APA, Harvard, Vancouver, ISO, and other styles
26

Bakushinsky, Anatoly, and Alexandra Smirnova. "A study of frozen iteratively regularized Gauss–Newton algorithm for nonlinear ill-posed problems under generalized normal solvability condition." Journal of Inverse and Ill-posed Problems 28, no. 2 (April 1, 2020): 275–86. http://dx.doi.org/10.1515/jiip-2019-0099.

Full text
Abstract:
A parameter identification inverse problem in the form of nonlinear least squares is considered. In the absence of stability, the frozen iteratively regularized Gauss–Newton (FIRGN) algorithm is proposed and its convergence is justified under what we call a generalized normal solvability condition. The penalty term is constructed based on a semi-norm generated by a linear operator yielding a greater flexibility in the use of qualitative and quantitative a priori information available for each particular model. Unlike previously known theoretical results on the FIRGN method, our convergence analysis does not rely on any nonlinearity conditions and it is applicable to a large class of nonlinear operators. In our study, we leverage the nature of ill-posedness in order to establish convergence in the noise-free case. For noise-contaminated data, we show that, at least theoretically, the process does not require a stopping rule and is no longer semi-convergent. Numerical simulations for a parameter estimation problem in epidemiology illustrate the efficiency of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Juefu, Henning Kuehl, and Mauricio D. Sacchi. "High-resolution wave-equation AVA imaging: Algorithm and tests with a data set from the Western Canadian Sedimentary Basin." GEOPHYSICS 70, no. 5 (September 2005): S91–S99. http://dx.doi.org/10.1190/1.2076748.

Full text
Abstract:
This paper presents a 3D least-squares wave-equation migration method that yields regularized common-image gathers (CIGs) for amplitude-versus-angle (AVA) analysis. In least-squares migration, we pose seismic imaging as a linear inverse problem; this provides at least two advantages. First, we are able to incorporate model-space weighting operators that improve the amplitude fidelity of CIGs. Second, the influence of improperly sampled data (footprint noise) can be diminished by incorporating data-space weighting operators. To investigate the viability of this class of methods for oil and gas exploration, we test the algorithm with a real-data example from the Western Canadian Sedimentary Basin. To make our problem computationally feasible, we utilize the 3D common-azimuth approximation in the migration algorithm. The inversion algorithm uses the method of conjugate gradients with the addition of a ray-parameter-dependent smoothing constraint that minimizes sampling and aperture artifacts. We show that more robust AVA attributes can be obtained by properly selecting the model and data-space regularization operators. The algorithm is implemented in conjunction with a preconditioning strategy to accelerate convergence. Posing the migration problem as an inverse problem leads to enhanced event continuity in CIGs and, hence, more reliable AVA estimates. The vertical resolution of the inverted image also improves as a consequence of increased coherence in CIGs and, in addition, by implicitly introducing migration deconvolution in the inversion.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhdanov, Michael S., and Ekaterina Tolstaya. "A novel approach to the model appraisal and resolution analysis of regularized geophysical inversion." GEOPHYSICS 71, no. 6 (November 2006): R79–R90. http://dx.doi.org/10.1190/1.2336347.

Full text
Abstract:
The existing techniques for appraisal of geophysical inverse images are based on calculating the model resolution and the model covariance matrices. In some applications, however, it becomes desirable to evaluate the upper bounds of the variations in the solution of the inverse problem. It is possible to use the Cauchy inequality for the regularized least-squares inversion to quantify the ability of an experiment to discriminate between two similar models in the presence of noise in the data. We present a new method for resolution analysis based on evaluating the spatial distribution of the upper bounds of the model variations and introduce a new characteristic of geophysical inversion, resolution density, as an inverse of these upper bounds. We derive an efficient numerical technique to compute the resolution density based on the spectral Lanczos decomposition method (SLDM). The methodology was tested on 3D synthetic linear and nonlinear electromagnetic (EM) data inversions, and also to interpret the helicopter-borne EM data collected by INCO Exploration in the Voisey’s Bay area of Canada.
APA, Harvard, Vancouver, ISO, and other styles
29

Jin, Kangkang, Jian Xu, Zichen Wang, Can Lu, Long Fan, Zhongzheng Li, and Jiaxin Zhou. "Deep Learning Convolutional Neural Network Applying for the Arctic Acoustic Tomography Current Inversion Accuracy Improvement." Journal of Marine Science and Engineering 9, no. 7 (July 8, 2021): 755. http://dx.doi.org/10.3390/jmse9070755.

Full text
Abstract:
Warm currents have a strong impact on the melting of sea ice, so clarifying the current features plays a very important role in Arctic sea ice coverage forecasting. Currently, Arctic acoustic tomography is the only feasible method for large-range current measurement under the Arctic sea ice. Furthermore, because of the strong Coriolis force at high latitudes, small-scale variability greatly affects the accuracy of Arctic acoustic tomography. However, small-scale variability cannot be captured by empirical parameters or resolved by Regularized Least Squares (RLS) in the inverse problem of Arctic acoustic tomography. In this paper, a convolutional neural network (CNN) is proposed to enhance the prediction accuracy in the Arctic, and, in particular, Gaussian noise is added to reflect the disturbance of the Arctic environment. First, we use the finite element method to build the background ocean model. Then, the deep learning CNN method constructs the non-linear mapping relationship between the acoustic data and the corresponding flow velocity. Finally, the simulation results show that applying the deep learning convolutional neural network method to Arctic acoustic tomography achieves a 45.87% accuracy improvement over the common RLS method in the current inversion.
APA, Harvard, Vancouver, ISO, and other styles
30

Schultz, G., and C. Ruppel. "Inversion of inductive electromagnetic data in highly conductive terrains." GEOPHYSICS 70, no. 1 (January 2005): G16–G28. http://dx.doi.org/10.1190/1.1852775.

Full text
Abstract:
Despite the increasing use of controlled-source frequency-domain EM data to characterize shallow subsurface structures, relatively few inversion algorithms have been widely applied to data from real-world settings, particularly in high-conductivity terrains. In this study, we develop robust and convergent regularized, least-squares inversion algorithms based on both linear and nonlinear formulations of mutual dipole induction for the forward problem. A modified version of the discrepancy principle based on a priori information is implemented to select optimal smoothing parameters that simultaneously guarantee the stability and best-fit criteria. To investigate the problems of resolution and equivalence, we consider typical layered-earth models in one and two dimensions using both synthetic and observed data. Synthetic examples show that inversions based on the nonlinear forward model more accurately resolve subsurface structure, and that inversions based on the linear forward model tend to drastically underpredict high conductivities at depth. Inversions of actual field data from well-characterized sites (e.g., National Geotechnical Experimentation Site; sand-dominated coastal aquifer in the Georgia Bight) are used to test the applicability of the model to terrains with different characteristic conductivity structure. A comparison of our inversion results with existing cone-penetrometer and downhole-conductivity data from these field sites demonstrates the ability of the inversions to constrain conductivity variations in practical applications.
APA, Harvard, Vancouver, ISO, and other styles
31

Xie, Wei, Jie-sheng Wang, Cheng Xing, Sha-Sha Guo, Meng-wei Guo, and Ling-feng Zhu. "Adaptive Hybrid Soft-Sensor Model of Grinding Process Based on Regularized Extreme Learning Machine and Least Squares Support Vector Machine Optimized by Golden Sine Harris Hawk Optimization Algorithm." Complexity 2020 (May 28, 2020): 1–26. http://dx.doi.org/10.1155/2020/6457517.

Full text
Abstract:
Soft-sensor technology plays a vital role in tracking and monitoring the key production indicators of the grinding and classifying process. Least squares support vector machine (LSSVM), as a soft-sensor model with strong generalization ability, can be used to predict key production indicators in complex grinding processes. The traditional crossvalidation method cannot obtain the ideal structure parameters of LSSVM. In order to improve the prediction accuracy of LSSVM, a golden sine Harris Hawk optimization (GSHHO) algorithm was proposed to optimize the structure parameters of LSSVM models with linear kernel, sigmoid kernel, polynomial kernel, and radial basis kernel, and the influences of GSHHO algorithm on the prediction accuracy under these LSSVM models were studied. In order to deal with the problem that the prediction accuracy of the model decreases due to changes of industrial status, this paper adopts moving window (MW) strategy to adaptively revise the LSSVM (MW-LSSVM), which greatly improves the prediction accuracy of the LSSVM. The prediction accuracy of the regularized extreme learning machine with MW strategy (MW-RELM) is higher than that of MW-LSSVM at some moments. Based on the training errors of LSSVM and RELM within the window, this paper proposes an adaptive hybrid soft-sensing model that switches between LSSVM and RELM. Compared with the previous MW-LSSVM, MW-neural network trained with extended Kalman filter(MW-KNN), and MW-RELM, the prediction accuracy of the hybrid model is further improved. Simulation results show that the proposed hybrid adaptive soft-sensor model has good generalization ability and prediction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Jing-Xian, Jun Ma, Ya-Nan Tang, Wei-Xiong Wu, Jin-Hua Shao, Wan-Ben Wu, Shu-Yun Wei, Yi-Fei Liu, Yuan-Chen Wang, and Hai-Qiang Guo. "Estimation of Sugarcane Yield Using a Machine Learning Approach Based on UAV-LiDAR Data." Remote Sensing 12, no. 17 (August 31, 2020): 2823. http://dx.doi.org/10.3390/rs12172823.

Full text
Abstract:
Sugarcane is a multifunctional crop mainly used for sugar and renewable bioenergy production. Accurate and timely estimation of the sugarcane yield before harvest plays a particularly important role in the management of agroecosystems. The rapid development of remote sensing technologies, especially Light Detection and Ranging (LiDAR), significantly enhances aboveground fresh weight (AFW) estimations. In our study, we evaluated the capability of LiDAR mounted on an Unmanned Aerial Vehicle (UAV) in estimating the sugarcane AFW in Fusui county, Chongzuo city of Guangxi province, China. We measured the height and the fresh weight of sugarcane plants in 105 sampling plots, and eight variables were extracted from the field-based measurements. Six regression algorithms were used to build the sugarcane AFW model: multiple linear regression (MLR), stepwise multiple regression (SMR), generalized linear model (GLM), generalized boosted model (GBM), kernel-based regularized least squares (KRLS), and random forest regression (RFR). The results demonstrate that RFR (R2 = 0.96, RMSE = 1.27 kg m−2) performs better than other models in terms of prediction accuracy. The final fitted sugarcane AFW distribution maps exhibited good agreement with the observed values (R2 = 0.97, RMSE = 1.33 kg m−2). Canopy cover, the distance to the road, and tillage methods all have an impact on sugarcane AFW. Our study provides guidance for calculating the optimum planting density, reducing the negative impact of human activities, and selecting suitable tillage methods in actual cultivation and production.
APA, Harvard, Vancouver, ISO, and other styles
33

Fedi, Maurizio, Per Christian Hansen, and Valeria Paoletti. "Analysis of depth resolution in potential-field inversion." GEOPHYSICS 70, no. 6 (November 2005): A1–A11. http://dx.doi.org/10.1190/1.2122408.

Full text
Abstract:
We study the inversion of potential fields and evaluate the degree of depth resolution achievable for a given problem. To this end, we introduce a powerful new tool: the depth-resolution plot (DRP). The DRP allows a theoretical study of how much the depth resolution in a potential-field inversion is influenced by the way the problem is discretized and regularized. The DRP also allows a careful study of the influence of various kinds of ambiguities, such as those from data errors or of a purely algebraic nature. The achievable depth resolution is related to the given discretization, regularization, and data noise level. We compute DRP by means of singular-value decomposition (SVD) or its generalization (GSVD), depending on the particular regularization method chosen. To illustrate the use of the DRP, we assume a source volume of specified depth and horizontal extent in which the solution is piecewise constant within a 3D grid of blocks. We consider various linear regularization terms in a Tikhonov (damped least-squares) formulation, some based on using higher-order derivatives in the objective function. DRPs are illustrated for both synthetic and real data. Our analysis shows that if the algebraic ambiguity is not too large and a suitable smoothing norm is used, some depth resolution can be obtained without resorting to any subjective choice of depth weighting.
APA, Harvard, Vancouver, ISO, and other styles
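
The damped least-squares machinery referenced above is conveniently expressed through the SVD: Tikhonov filter factors damp the poorly constrained singular directions, and the model resolution matrix shows how resolution degrades with depth. The kernel, the regularization weight, and the toy model below are illustrative assumptions, not the paper's depth-resolution plot construction.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov (damped least-squares) solution and resolution matrix via the SVD.

    Filter factors f_i = s_i^2 / (s_i^2 + alpha^2) damp poorly constrained
    directions; the diagonal of the model resolution matrix R = V F V^T shows
    how well each model cell is resolved.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha**2)                       # Tikhonov filter factors
    x = Vt.T @ (s / (s**2 + alpha**2) * (U.T @ b))     # regularized solution
    R = Vt.T @ (f[:, None] * Vt)                       # model resolution matrix
    return x, R

# Toy "potential-field" style kernel whose rows decay with the depth index.
n = 40
depth = np.arange(1, n + 1)
A = 1.0 / (depth[None, :] + depth[:, None]) ** 2
x_true = np.zeros(n)
x_true[10], x_true[25] = 1.0, -0.5
b = A @ x_true + 1e-6 * np.random.default_rng(7).standard_normal(n)
x_hat, R = tikhonov_svd(A, b, alpha=1e-4)
print(np.diag(R)[:30].round(2))                        # resolution falls off with depth
```
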
34

Jiang, Jian-Hui, Roumiana Tsenkova, Yuqing Wu, Ru-Qin Yu, and Yukihiro Ozaki. "Principal Discriminant Variate Method for Classification of Multicollinear Data: Applications to Near-Infrared Spectra of Cow Blood Samples." Applied Spectroscopy 56, no. 4 (April 2002): 488–501. http://dx.doi.org/10.1366/0003702021954944.

Full text
Abstract:
A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. This motivation furnishes the PDV method with improved stability in prediction without significant loss of separability. Different formulations for the PDV methods are suggested, and an effective computing procedure is proposed for a PDV method. Two sets of near-infrared (NIR) spectra data, one corresponding to the blood plasma samples from two cows and the other associated with the whole blood samples from mastitic and healthy cows, have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least-squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). Results obtained demonstrate that the NIR spectra of blood plasma samples from different classes are clearly discriminated by the PDV method, and the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences.
APA, Harvard, Vancouver, ISO, and other styles
35

Christiansen, Bo. "Reconstructing the NH Mean Temperature: Can Underestimation of Trends and Variability Be Avoided?" Journal of Climate 24, no. 3 (February 1, 2011): 674–92. http://dx.doi.org/10.1175/2010jcli3646.1.

Full text
Abstract:
There are indications that hemispheric-mean climate reconstructions seriously underestimate the amplitude of low-frequency variability and trends. Some of the theory of linear regression and error-in-variables models is reviewed to identify the sources of this problem. On the basis of the insight gained, a reconstruction method that is supposed to minimize the underestimation is formulated. The method consists of reconstructing the local temperatures at the geographical locations of the proxies, followed by calculating the hemispheric average. The method is tested by applying it to an ensemble of surrogate temperature fields based on two climate simulations covering the last 500 and 1000 yr. Compared to the regularized expectation maximization (RegEM) truncated total least squares (TTLS) method and a composite-plus-scale method—two methods recently used in the literature—the new method strongly improves the behavior regarding low-frequency variability and trends. The potential importance in real-world situations is demonstrated by applying the methods to a set of 14 decadally smoothed proxies. Here the new method shows much larger low-frequency variability and a much colder preindustrial temperature level than the other reconstruction methods. However, this should mainly be seen as a demonstration of the potential losses and gains of variability, as the reconstructions based on the 14 decadally smoothed proxies are not very robust.
APA, Harvard, Vancouver, ISO, and other styles
36

Kumar, Sawan, Varsha Sreenivasan, Partha Talukdar, Franco Pestilli, and Devarajan Sridharan. "ReAl-LiFE: Accelerating the Discovery of Individualized Brain Connectomes on GPUs." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 630–38. http://dx.doi.org/10.1609/aaai.v33i01.3301630.

Full text
Abstract:
Diffusion imaging and tractography enable mapping structural connections in the human brain, in-vivo. Linear Fascicle Evaluation (LiFE) is a state-of-the-art approach for pruning spurious connections in the estimated structural connectome, by optimizing its fit to the measured diffusion data. Yet, LiFE imposes heavy demands on computing time, precluding its use in analyses of large connectome databases. Here, we introduce a GPU-based implementation of LiFE that achieves 50-100x speedups over conventional CPU-based implementations for connectome sizes of up to several million fibers. Briefly, the algorithm accelerates generalized matrix multiplications on a compressed tensor through efficient GPU kernels, while ensuring favorable memory access patterns. Leveraging these speedups, we advance LiFE’s algorithm by imposing a regularization constraint on estimated fiber weights during connectome pruning. Our regularized, accelerated, LiFE algorithm (“ReAl-LiFE”) estimates sparser connectomes that also provide more accurate fits to the underlying diffusion signal. We demonstrate the utility of our approach by classifying pathological signatures of structural connectivity in patients with Alzheimer’s Disease (AD). We estimated million fiber whole-brain connectomes, followed by pruning with ReAl-LiFE, for 90 individuals (45 AD patients and 45 healthy controls). Linear classifiers, based on support vector machines, achieved over 80% accuracy in classifying AD patients from healthy controls based on their ReAl-LiFE pruned structural connectomes alone. Moreover, classification based on the ReAl-LiFE pruned connectome outperformed both the unpruned connectome, as well as the LiFE pruned connectome, in terms of accuracy. We propose our GPU-accelerated approach as a widely relevant tool for non-negative least squares optimization, across many domains.
APA, Harvard, Vancouver, ISO, and other styles
37

Mahmood, Mohammed Shuker, and D. Lesnic. "Identification of conductivity in inhomogeneous orthotropic media." International Journal of Numerical Methods for Heat & Fluid Flow 29, no. 1 (January 7, 2019): 165–83. http://dx.doi.org/10.1108/hff-11-2017-0469.

Full text
Abstract:
Purpose The purpose of this paper is to solve numerically the identification of the thermal conductivity of an inhomogeneous and possibly anisotropic medium from interior/internal temperature measurements. Design/methodology/approach The formulated coefficient identification problem is inverse and ill-posed, and therefore, to obtain a stable solution, a non-linear regularized least-squares approach is used. For the numerical discretization of the orthotropic heat equation, the finite-difference method is applied, while the non-linear minimization is performed using the MATLAB toolbox routine lsqnonlin. Findings Numerical results show the accuracy and stability of solution even in the presence of noise (modelling inexact measurements) in the input temperature data. Research limitations/implications The mathematical formulation uses temporal temperature measurements taken at many points inside the sample, and this may be too much information that is provided to identify a space-wise dependent only conductivity tensor. Practical implications As noisy data are inverted, the paper models real situations in which practical temperature measurements recorded using thermocouples are inherently contaminated with random noise. Social implications The identification of the conductivity of inhomogeneous and orthotropic media will be of great interest to the inverse problems community with applications in geophysics, groundwater flow and heat transfer. Originality/value The current investigation advances the field of coefficient identification problems by generalizing the conductivity to be anisotropic in addition of being heterogeneous. The originality lies in performing, for the first time, numerical simulations of inversion to find the orthotropic and inhomogeneous thermal conductivity from noisy temperature measurements. Further value and physical significance are brought in by determining the degree of cure in a resin transfer molding process, in addition to obtaining the inhomogeneous thermal conductivity of the tested material.
APA, Harvard, Vancouver, ISO, and other styles
38

Valenciano, Alejandro A., Biondo L. Biondi, and Robert G. Clapp. "Imaging by target-oriented wave-equation inversion." GEOPHYSICS 74, no. 6 (November 2009): WCA109–WCA120. http://dx.doi.org/10.1190/1.3250267.

Full text
Abstract:
Wave-equation inversion is a powerful technique able to build higher-resolution images with balanced amplitudes in complex subsurface areas relative to migration alone. Wave-equation inversion can be performed in image space without making velocity-model or acquisition-geometry approximations. Our method explicitly computes the least-squares Hessian matrix, defined from the modeling/migration operators, and uses a linear solver to find the solution of the resulting system of equations. One important advantage of the explicit computation of the Hessian, compared to iterative modeling/migration operations schemes, is that most of the work (precomputing the Hessian) is done up front; afterward, different inversion parameters or schemes can be tried at lower cost. Another advantage is that the method can handle 3D data in a target-oriented fashion. The inversion in the presence of a complex overburden leads to an ill-conditioned system of equations that must be regularized to obtain a stable numerical solution. Regularization can be implemented in the poststack-image domain (zero subsurface offset), where the options for a regularization operator are limited to a customary damping, or in the prestack-image domain (subsurface offset), where a physically inspired regularization operator (differential semblance) can be applied. Though the prestack-image-domain inversion is more expensive than the poststack-image-domain inversion, it can improve the reflectors' continuity into the shadow zones with an enhanced signal-to-noise ratio. Improved subsalt-sediment images in the Sigsbee2b synthetic model and a 3D Gulf of Mexico field data set confirm the benefits of the inversion.
APA, Harvard, Vancouver, ISO, and other styles
39

Riel, Bryan, Brent Minchew, and Ian Joughin. "Observing traveling waves in glaciers with remote sensing: new flexible time series methods and application to Sermeq Kujalleq (Jakobshavn Isbræ), Greenland." Cryosphere 15, no. 1 (January 28, 2021): 407–29. http://dx.doi.org/10.5194/tc-15-407-2021.

Full text
Abstract:
The recent influx of remote sensing data provides new opportunities for quantifying spatiotemporal variations in glacier surface velocity and elevation fields. Here, we introduce a flexible time series reconstruction and decomposition technique for forming continuous, time-dependent surface velocity and elevation fields from discontinuous data and partitioning these time series into short- and long-term variations. The time series reconstruction consists of a sparsity-regularized least-squares regression for modeling time series as a linear combination of generic basis functions of multiple temporal scales, allowing us to capture complex variations in the data using simple functions. We apply this method to the multitemporal evolution of Sermeq Kujalleq (Jakobshavn Isbræ), Greenland. Using 555 ice velocity maps generated by the Greenland Ice Mapping Project and covering the period 2009–2019, we show that the amplification in seasonal velocity variations in 2012–2016 was coincident with a longer-term speedup initiating in 2012. Similarly, the reduction in post-2017 seasonal velocity variations was coincident with a longer-term slowdown initiating around 2017. To understand how these perturbations propagate through the glacier, we introduce an approach for quantifying the spatially varying and frequency-dependent phase velocities and attenuation length scales of the resulting traveling waves. We hypothesize that these traveling waves are predominantly kinematic waves based on their long periods, coincident changes in surface velocity and elevation, and connection with variations in the terminus position. This ability to quantify wave propagation enables an entirely new framework for studying glacier dynamics using remote sensing data.
APA, Harvard, Vancouver, ISO, and other styles
40

Panagakis, Yannis, and Constantine Kotropoulos. "Telephone Handset Identification by Collaborative Representations." International Journal of Digital Crime and Forensics 5, no. 4 (October 2013): 1–14. http://dx.doi.org/10.4018/ijdcf.2013100101.

Full text
Abstract:
Recorded speech signals convey information not only about the speakers' identity and the spoken language, but also about the acquisition devices used for their recording. Therefore, it is reasonable to perform acquisition device identification by analyzing the recorded speech signal. To this end, recording-level spectral, cepstral, and fusion of spectral and cepstral features are employed as suitable representations for device identification. The feature vectors extracted from the training speech recordings are used to form overcomplete dictionaries for the devices. Each test feature vector is represented as a linear combination of all the dictionary columns (i.e., atoms). Since the dimensionality of the feature vectors is much smaller than the number of training speech recordings, there are infinitely many representations of each test feature vector with respect to the dictionary. These representations are referred to as collaborative representations in the sense that all the dictionary atoms collaboratively represent any test feature vector. By imposing the representation to be either sparse (i.e., to admit the minimum ℓ1 norm) or to have the minimum ℓ2 norm, unique collaborative representations are obtained. The classification is performed by assigning each test feature vector the device identity of the dictionary atoms yielding the minimum reconstruction error. This classification method is referred to as the sparse representation-based classifier (SRC) if the sparse collaborative representation is employed and as the least squares collaborative representation-based classifier (LSCRC) in the case that the minimum ℓ2 norm regularized collaborative representation is used for reconstructing the test sample. By employing the LSCRC, state-of-the-art identification accuracy of 97.67% is obtained on a set of 8 telephone handsets from the Lincoln-Labs Handset Database.
APA, Harvard, Vancouver, ISO, and other styles
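
The LSCRC decision rule described above amounts to a ridge-regularized (collaborative) representation of the test vector over the whole dictionary, followed by class-wise reconstruction residuals. The sketch below uses synthetic two-class "device" features; the feature construction, the regularization weight lam, and all data are illustrative assumptions rather than the spectral/cepstral features of the paper.

```python
import numpy as np

def lscrc_predict(D, labels, x, lam=0.01):
    """Least-squares (ridge) collaborative-representation classifier sketch.

    The test vector x is coded over the whole dictionary D (columns = training
    feature vectors) with an l2-regularized least-squares fit, and the class
    whose atoms give the smallest reconstruction error wins.
    """
    n_atoms = D.shape[1]
    coef = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ x)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        err = np.linalg.norm(x - D[:, idx] @ coef[idx])   # class-wise reconstruction
        if err < best_err:
            best, best_err = c, err
    return best

rng = np.random.default_rng(8)
# Two synthetic "devices" with different mean feature vectors.
means = {0: rng.standard_normal(30), 1: rng.standard_normal(30)}
D = np.column_stack([means[c] + 0.2 * rng.standard_normal(30)
                     for c in (0, 1) for _ in range(15)])
labels = np.repeat([0, 1], 15)
x = means[1] + 0.2 * rng.standard_normal(30)
print(lscrc_predict(D, labels, x))
```
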
41

Lednyts'kyy, O., C. von Savigny, K. U. Eichmann, and M. G. Mlynczak. "Atomic oxygen retrievals in the MLT region from SCIAMACHY nightglow limb measurements." Atmospheric Measurement Techniques Discussions 7, no. 10 (October 30, 2014): 10829–81. http://dx.doi.org/10.5194/amtd-7-10829-2014.

Full text
Abstract:
Vertical profiles of atomic oxygen concentration in the mesosphere and lower thermosphere (MLT) region were retrieved from sun-synchronous SCIAMACHY/Envisat limb observations of the oxygen 557.7 nm green line emission occurring in the terrestrial nightglow. A band pass filter with noise detection was applied to eliminate contributions from other emissions, the impact of noise and auroral activity. Assuming horizontal homogeneity of each atmospheric layer, and absence of absorption and scattering, vertical volume emission rate profiles were retrieved from integrated limb emission rate profiles. The radiative transfer problem was treated with a linear forward model and inverted using regularized total least squares minimization. Atomic oxygen concentration ([O]) profiles were retrieved at altitudes from 85 to 105 km with approximately 4 km vertical resolution for the period from August 2002 to April 2012 at a constant local time (LT) of approximately 22:00. The retrieval of [O] profiles was based on the generally accepted 2-step Barth transfer scheme including consideration of quenching processes and the use of different available sources of temperature and atmospheric density profiles. A sensitivity analysis was performed for the retrieved [O] profiles to estimate the maximum uncertainty, assuming independent contributions of uncertainty components. The retrieved [O] profiles were compared with reference [O] profiles measured by SABER/TIMED and modelled using NRLMSISE-00 and SD-WACCM4. A comparison of the retrieved [O] profiles with the reference [O] profiles enabled the selection of the most appropriate photochemical model accounting for quenching processes and the most appropriate source of temperature and density profiles for further application of our approach to the [O] profile retrieval. The obtained [O] profile time series show characteristic seasonal variations in agreement with atmospheric models and satellite observations based on analysis of OH Meinel band emissions. Furthermore, a pronounced 11 year solar cycle variation can be identified in the atomic oxygen concentration time series, which will be the subject of further studies.
APA, Harvard, Vancouver, ISO, and other styles
42

Sacchi, Mauricio D., and Tadeusz J. Ulrych. "High‐resolution velocity gathers and offset space reconstruction." GEOPHYSICS 60, no. 4 (July 1995): 1169–77. http://dx.doi.org/10.1190/1.1443845.

Full text
Abstract:
We present a high-resolution procedure to reconstruct common-midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact-free, aperture-compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite-aperture velocity gather. The latter is the velocity gather that would have been estimated with a simple conjugate operator designed from an infinite-aperture seismic array. This high-resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework, in other words, to assign a probability density function to our model. Second, we apply Bayes's rule to combine the a priori probability density function (pdf) with the pdf corresponding to the experimental uncertainties (the likelihood function) and thereby construct the a posteriori distribution of the unknown parameters. Finally, the model is estimated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg's definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg's entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data as constraints, in contrast with the classic maximum entropy spectral analysis approach where the autocorrelation function is the constraint. This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. The tradeoff is controlled by a single parameter that, under asymptotic conditions, reduces the method to a damped least-squares solution. Finally, the high-resolution or aperture-compensated velocity gather is used to extrapolate near- and far-offset traces.
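The damped least-squares limit mentioned at the end of the abstract is the minimum-norm ℓ2-regularized solution of the underdetermined system relating the velocity gather to the recorded offsets. A schematic NumPy version of that baseline (with a random matrix standing in for the actual offset-to-velocity operator; the authors' sparse, entropy-based inversion sharpens this further) is:

import numpy as np

rng = np.random.default_rng(2)
L_op = rng.normal(size=(30, 120))          # underdetermined operator (stand-in for the Radon-type forward map)
m_true = np.zeros(120)
m_true[[15, 60, 90]] = [1.0, -0.5, 0.8]    # a few isolated velocity-space events
d = L_op @ m_true + 0.01 * rng.normal(size=30)   # observed offset-space data

# Damped (minimum-norm, l2-regularized) least-squares solution:
mu = 0.1
m_dls = L_op.T @ np.linalg.solve(L_op @ L_op.T + mu * np.eye(30), d)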
APA, Harvard, Vancouver, ISO, and other styles
43

Wei, Wei. "Block Updating/Downdating Algorithms for Regularised Least Squares Problems and Applications to Linear Discriminant Analysis." East Asian Journal on Applied Mathematics 10, no. 4 (June 2020): 679–97. http://dx.doi.org/10.4208/eajam.171219.220220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bartlett, Peter L., David P. Helmbold, and Philip M. Long. "Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks." Neural Computation 31, no. 3 (March 2019): 477–502. http://dx.doi.org/10.1162/neco_a_01164.

Full text
Abstract:
We analyze algorithms for approximating a function f(x) = Φx mapping ℝ^d to ℝ^d using deep linear neural networks, that is, algorithms that learn a function h parameterized by matrices Θ_1, …, Θ_L and defined by h(x) = Θ_L Θ_{L−1} ⋯ Θ_1 x. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least-squares matrix Φ, in the case where the initial hypothesis Θ_1 = ⋯ = Θ_L = I has excess loss bounded by a small enough constant. We also show that gradient descent fails to converge for Φ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If Φ is symmetric positive definite, we show that an algorithm that initializes Θ_i = I learns an ε-approximation of f using a number of updates polynomial in L, the condition number of Φ, and log(d/ε). In contrast, we show that if the least-squares matrix Φ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that Φ satisfies x^T Φ x > 0 for all x but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant x^T Θ_L Θ_{L−1} ⋯ Θ_1 x > 0 for all x and the other that “balances” Θ_1, …, Θ_L so that they have the same singular values.
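A toy NumPy rendering of the analyzed setting, gradient descent on the population quadratic loss for a depth-L linear network initialized at the identity with a symmetric positive-definite target, is sketched below (dimensions, step size, and iteration count are arbitrary choices for illustration, not the constants from the paper's guarantees):

import numpy as np

d, L, eta, steps = 5, 4, 0.01, 2000
rng = np.random.default_rng(3)
M = rng.normal(size=(d, d))
Phi = M @ M.T / d + np.eye(d)              # symmetric positive-definite target

Theta = [np.eye(d) for _ in range(L)]      # identity initialization

def product(mats):
    out = np.eye(d)
    for T in mats:
        out = T @ out                       # builds Theta_L ... Theta_1
    return out

for _ in range(steps):
    W = product(Theta)
    E = W - Phi                             # gradient of 0.5*||W - Phi||_F^2 with respect to W
    grads = []
    for i in range(L):
        left = product(Theta[i + 1:])       # factors applied after layer i
        right = product(Theta[:i])          # factors applied before layer i
        grads.append(left.T @ E @ right.T)  # chain rule for the i-th layer
    Theta = [T - eta * g for T, g in zip(Theta, grads)]

print(np.linalg.norm(product(Theta) - Phi))  # residual should be small after training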
APA, Harvard, Vancouver, ISO, and other styles
45

Vallat, Laurent D., Corey A. Kemper, Yuhyun Park, Laura Z. Rassenti, John W. Fisher, and John G. Gribben. "Reprogramming the Transcriptional Response of Chronic Lymphocytic Leukemia (CLL) Cells: Influencing the Temporal Gene Regulatory Network." Blood 104, no. 11 (November 16, 2004): 1128. http://dx.doi.org/10.1182/blood.v104.11.1128.1128.

Full text
Abstract:
Abstract Inferring a temporal gene network from a crucial signaling pathway in leukemic cells is a leading problem in oncology. We built a temporal transcriptional network of B-cell receptor (BCR) crosslinking in CLL and healthy B-cells, a critical step toward understanding the dynamics of BCR gene expression and gene regulation at a system level. CLL cells have defects in apoptosis, and the BCR pathway appears crucial in this process, leading to differential signaling and cell response according to the Ig gene mutational status and zap70 expression. We built this network by analyzing the gene expression profile after BCR crosslinking in six mutated (M) and six unmutated (UM) CLL cells and six healthy B-cells. After a pilot study examining multiple time points, total RNA was purified at four time points (60 to 390 min) from stimulated (S) and unstimulated (US) cells, for a total of 170 HU133plus2.0 DNA chips analyzed. The logarithmic ratio data log(S/US) for each time point and each patient, and the linear combination over the four time points, were analyzed to score expression over time. This temporal clustering discriminates healthy B-cells from CLL cells, and also distinguishes two groups of patients, one mainly UM with higher zap70 protein levels. BCR engagement induces different gene expression in this group of aggressive CLL. We built a temporal model of gene expression for these three groups using two iterative steps. The first step is a modified K-means clustering approach applied to the log(S/US) temporal gene expression. This yields groups of genes with common temporal structure whose expression exhibits significant differences after BCR stimulation. The number of genes considered was then reduced, keeping a small number with the largest increase in expression within each group. Most of these genes are important in BCR transcription, including JUN, DUSP1 and NFkB, and most first-wave genes are transcription factors. The second step is to construct predictive models of gene expression, considering only causal linear predictive models. Specifically, the expression of an output gene at each time point is predicted using a weighted linear combination of the expression of another gene at past time points. The method groups pairs of genes by common predictive model. While paired genes may reside in different initial clusters, upon convergence they are clustered by which predictive models they use. The procedure first assigns random pairings of genes; we then iterate between two steps, computing the best predictive model using a regularized least squares algorithm that emphasizes sparse models, and computing optimal gene pairings using a modified Hungarian bipartite graph matching approach. In practice the method converges in a small number of iterations. To refine and test this model we use RNAi to silence genes in the first wave of transcription after BCR stimulation and study the impact on the model. From the global gene regulatory network, we aim to predict the minimal number of genes to silence in order to influence the global structure of the BCR regulatory network. Influencing the transcriptional structure of aggressive CLL toward that of indolent CLL and healthy B-cells is a first step toward reprogramming the transcriptional response of leukemic cells.
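The regularized least-squares step with a sparsity emphasis can be pictured as an ℓ1-penalized fit of one gene's expression from lagged values of another, solved with a few proximal-gradient (ISTA) iterations. The sketch below is only an illustration of that idea with invented toy series; it is not the authors' model-selection procedure:

import numpy as np

def fit_sparse_predictor(x, y, lags=2, lam=0.05, steps=500):
    """Predict y[t] from past values of x with an l1-regularized
    least-squares fit (a few ISTA iterations)."""
    T = len(y)
    X = np.array([[x[t - k] for k in range(1, lags + 1)] for t in range(lags, T)])
    Y = np.asarray(y[lags:], dtype=float)
    w = np.zeros(lags)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)      # safe gradient step size
    for _ in range(steps):
        w = w - step * X.T @ (X @ w - Y)                   # least-squares gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0)  # soft-thresholding promotes sparsity
    return w

# Toy usage with two short expression time series (illustrative values only).
x = np.array([0.1, 0.9, 1.4, 0.7, 0.3, 0.2])
y = np.array([0.0, 0.2, 1.0, 1.5, 0.8, 0.4])
print(fit_sparse_predictor(x, y))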
APA, Harvard, Vancouver, ISO, and other styles
46

Xia, Xiao-Lei, Suxiang Qian, Xueqin Liu, and Huanlai Xing. "Efficient Model Selection for Sparse Least-Square SVMs." Mathematical Problems in Engineering 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/712437.

Full text
Abstract:
The Forward Least-Squares Approximation (FLSA) SVM is a newly emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets show that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors while maintaining competitive generalization ability. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically shown to be faster than the FLSA-SVM, LS-SVM, and SVM algorithms.
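For contrast with the sparse FLSA solution, standard LS-SVM training reduces to a single regularized linear system in the dual variables. A minimal NumPy sketch of that baseline (RBF kernel, arbitrary hyperparameters; not the FLSA or RFLSA algorithm itself) is:

import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Standard LS-SVM: solve the KKT linear system for (bias, alphas)."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))              # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma               # ridge term from the least-squares loss
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                          # bias, dual coefficients

def lssvm_predict(Xtr, alphas, b, x, sigma=1.0):
    k = np.exp(-np.sum((Xtr - x) ** 2, axis=1) / (2 * sigma ** 2))
    return np.sign(k @ alphas + b)

# Toy usage: two classes in 2-D (labels +1 / -1).
Xtr = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
ytr = np.array([-1.0, -1.0, 1.0, 1.0])
b, alphas = lssvm_train(Xtr, ytr)
print(lssvm_predict(Xtr, alphas, b, np.array([0.9, 1.0])))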
APA, Harvard, Vancouver, ISO, and other styles
47

Geladi, Paul, and Harald Martens. "A Calibration Tutorial for Spectral Data. Part 1: Data Pretreatment and Principal Component Regression Using Matlab." Journal of Near Infrared Spectroscopy 4, no. 1 (January 1996): 225–42. http://dx.doi.org/10.1255/jnirs.93.

Full text
Abstract:
Regression and calibration play an important role in analytical chemistry. All analytical instrumentation depends on a calibration that uses some regression model for a set of calibration samples. The ordinary least squares (OLS) method of building a multivariate linear regression (MLR) model has strict limitations. Therefore, biased or regularised regression models have been introduced. Selected examples are ridge regression (RR), principal component regression (PCR) and partial least squares regression (PLS or PLSR). Artificial neural networks (ANN) based on back-propagation can also be used as regression models. Understanding regression models requires more than just a set of statistical parameters; a deeper understanding of the underlying chemistry and physics is always equally important. For spectral data this means that a basic understanding of spectra and their errors is useful, and that spectral representation should be included in judging the usefulness of the data treatment. A “constructed” spectrometric example is introduced. It consists of real spectrometric measurements in the range 408–1176 nm for 26 calibration samples and 10 test samples. The main response variable is litmus concentration, but other constituents such as bromocresol green and ZnO are added as interferents, and the pH is also changed. The example is introduced as a tutorial. All calculations are shown in detail in Matlab. This makes it easy for the reader to follow and understand the calculations, and it also makes the calculations completely traceable. The raw data are available as a file. In Part 1, the emphasis is on pretreatment of the data and on visualisation at different stages of the calculations. Part 1 ends with principal component regression calculations. Partial least squares calculations and some ANN results are presented in Part 2.
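Although the tutorial works in Matlab, the principal component regression step it ends with can be summarized in a few lines of NumPy: center the data, project onto the leading principal components, regress in score space, and map the coefficients back. This is a generic sketch under assumed variable names, not the tutorial's code:

import numpy as np

def pcr_fit(X, y, n_comp=3):
    """Principal component regression: center, project onto the leading PCs,
    then ordinary least squares in the reduced score space."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                       # loadings of the leading components
    T = Xc @ P                              # scores
    q, *_ = np.linalg.lstsq(T, yc, rcond=None)
    b = P @ q                               # coefficients back in the original variable space
    b0 = y_mean - x_mean @ b
    return b0, b

# Usage: predicted concentration = b0 + spectrum @ b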
APA, Harvard, Vancouver, ISO, and other styles
48

Lu, Lian, Guowei Tong, Ge Guo, and Shi Liu. "Split Bregman iteration based reconstruction algorithm for electrical capacitance tomography." Transactions of the Institute of Measurement and Control 41, no. 9 (October 25, 2018): 2389–99. http://dx.doi.org/10.1177/0142331218799841.

Full text
Abstract:
The electrical capacitance tomography (ECT) technique uses measured capacitance data to reconstruct the permittivity distribution in a specific measurement area, and the performance of the reconstruction algorithm plays a crucial role in the reliability of the measurement results. Building on the Tikhonov regularization technique, a new cost function combining the total least squares technique with an ℓ1-norm based regularizer is presented, in which measurement noise, model deviations and the influence of outliers in the measurement data are considered simultaneously. The split Bregman technique and the fast iterative shrinkage-thresholding method are combined into a new iterative scheme to solve the proposed cost function efficiently. Numerical experiments show that the proposed algorithm improves reconstruction precision; under the noise-free condition, the image errors for the imaging targets simulated in this paper, that is, 8.4%, 12.4%, 13.5% and 6.4%, are smaller than those of the linear backprojection (LBP) algorithm, the Tikhonov regularization (TR) algorithm, the truncated singular value decomposition (TSVD) algorithm, the Landweber algorithm and the algebraic reconstruction technique (ART).
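The ℓ1-regularized least-squares core of such a cost function can be illustrated with a generic FISTA loop (the fast iterative shrinkage-thresholding method mentioned above); the sketch below omits the total least squares term and the split Bregman splitting used in the paper, so it is only a simplified stand-in:

import numpy as np

def fista_l1(S, b, lam=0.01, iters=200):
    """FISTA for min_x 0.5*||S x - b||^2 + lam*||x||_1.
    Here S and b are stand-ins for a sensitivity matrix and measured data."""
    L = np.linalg.norm(S, 2) ** 2            # Lipschitz constant of the smooth term's gradient
    x = z = np.zeros(S.shape[1])
    t = 1.0
    for _ in range(iters):
        g = z - (S.T @ (S @ z - b)) / L      # gradient step at the extrapolated point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage (soft threshold)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                  # momentum extrapolation
        x, t = x_new, t_new
    return x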
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Lei, Charalampos Papachristou, Pankaj K. Choudhary, and Swati Biswas. "A Bayesian Hierarchical Framework for Pathway Analysis in Genome-Wide Association Studies." Human Heredity 84, no. 6 (2019): 240–55. http://dx.doi.org/10.1159/000508664.

Full text
Abstract:
Background: Pathway analysis allows joint consideration of multiple SNPs belonging to multiple genes, which in turn belong to a biologically defined pathway. This type of analysis is usually more powerful than single-SNP analyses for detecting joint effects of variants in a pathway. Methods: We develop a Bayesian hierarchical model by fully modeling the 3-level hierarchy, namely, SNP-gene-pathway, that is naturally inherent in the structure of the pathways, unlike the currently used ad hoc ways of combining such information. We model the effects at each level conditional on the effects of the levels preceding them within the generalized linear model framework. To deal with the high dimensionality, we regularize the regression coefficients through an appropriate choice of priors. The model is fit using a combination of iteratively weighted least squares and expectation-maximization algorithms to estimate the posterior modes and their standard errors. A normal approximation is used for inference. Results: We conduct simulations to study the proposed method and find that our method has higher power than some standard approaches in several settings for identifying pathways with multiple modest-sized variants. We illustrate the method by analyzing data from two genome-wide association studies on breast and renal cancers. Conclusion: Our method can be helpful in detecting pathway association.
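For a single level of the hierarchy, the posterior-mode computation reduces to an iteratively reweighted least squares update with a quadratic (normal-prior) penalty. The sketch below shows that generic ridge-penalized IRLS step for a logistic model; it is not the authors' full 3-level hierarchical fit, and the penalty value is an arbitrary illustration:

import numpy as np

def ridge_logistic_irls(X, y, lam=1.0, iters=25):
    """Posterior-mode estimate for logistic regression under a normal prior
    (ridge penalty), computed by iteratively reweighted least squares."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
        W = mu * (1.0 - mu)                        # IRLS weights
        z = eta + (y - mu) / np.maximum(W, 1e-10)  # working response
        A = X.T @ (W[:, None] * X) + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ (W * z))   # penalized weighted least squares update
    return beta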
APA, Harvard, Vancouver, ISO, and other styles
50

Bartoszewski, Zbigniew. "Remarks on the convergence of an iterative method of solution of generalized least squares problem." Demonstratio Mathematica 43, no. 4 (January 1, 2010). http://dx.doi.org/10.1515/dema-2013-0276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
