Academic literature on the topic 'Reweighted Least-Squares'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reweighted Least-Squares.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Reweighted Least-Squares"

1. Rontogiannis, Athanasios A., Paris V. Giampouras, and Konstantinos D. Koutroumbas. "Online Reweighted Least Squares Robust PCA." IEEE Signal Processing Letters 27 (2020): 1340–44. http://dx.doi.org/10.1109/lsp.2020.3011896.

2. Chen, Colin. "Distributed iteratively reweighted least squares and applications." Statistics and Its Interface 6, no. 4 (2013): 585–93. http://dx.doi.org/10.4310/sii.2013.v6.n4.a15.

3. Konishi, Katsumi. "Reweighted Least Squares Heuristic for SARX System Identification." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E95.A, no. 9 (2012): 1627–30. http://dx.doi.org/10.1587/transfun.e95.a.1627.

4. Debruyne, Michiel, Andreas Christmann, Mia Hubert, and Johan A. K. Suykens. "Robustness of reweighted Least Squares Kernel Based Regression." Journal of Multivariate Analysis 101, no. 2 (2010): 447–63. http://dx.doi.org/10.1016/j.jmva.2009.09.007.

5. Burrus, C. S., J. A. Barreto, and I. W. Selesnick. "Iterative reweighted least-squares design of FIR filters." IEEE Transactions on Signal Processing 42, no. 11 (1994): 2926–36. http://dx.doi.org/10.1109/78.330353.

6. O’Leary, Dianne P. "Robust Regression Computation Using Iteratively Reweighted Least Squares." SIAM Journal on Matrix Analysis and Applications 11, no. 3 (1990): 466–80. http://dx.doi.org/10.1137/0611032.

7. Ba, Demba, Behtash Babadi, Patrick L. Purdon, and Emery N. Brown. "Robust spectrotemporal decomposition by iteratively reweighted least squares." Proceedings of the National Academy of Sciences 111, no. 50 (2014): E5336–E5345. http://dx.doi.org/10.1073/pnas.1320637111.

8. Dollinger, Michael B., and Robert G. Staudte. "Influence Functions of Iteratively Reweighted Least Squares Estimators." Journal of the American Statistical Association 86, no. 415 (1991): 709–16. http://dx.doi.org/10.1080/01621459.1991.10475099.

9. Daubechies, Ingrid, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk. "Iteratively reweighted least squares minimization for sparse recovery." Communications on Pure and Applied Mathematics 63, no. 1 (2010): 1–38. http://dx.doi.org/10.1002/cpa.20303.

10. Sigl, Juliane. "Nonlinear residual minimization by iteratively reweighted least squares." Computational Optimization and Applications 64, no. 3 (2016): 755–92. http://dx.doi.org/10.1007/s10589-016-9829-x.

Dissertations / Theses on the topic "Reweighted Least-Squares"

1. Popov, Dmitriy. "Iteratively reweighted least squares minimization with prior information: a new approach." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4822.

Abstract:
Iteratively reweighted least squares (IRLS) algorithms provide an alternative to the more standard ℓ1-minimization approach in compressive sensing. Daubechies et al. introduced a particularly stable version of an IRLS algorithm and rigorously proved its convergence in 2010. They did not, however, consider the case in which prior information on the support of the sparse domain of the solution is available. In 2009, Miosso et al. proposed an IRLS algorithm that makes use of this information to further reduce the number of measurements required to recover the solution with specified accuracy. Although Miosso et al. obtained a number of simulation results strongly confirming the utility of their approach, they did not rigorously establish the convergence properties of their algorithm. In this paper, we introduce prior information on the support of the sparse domain of the solution into the algorithm of Daubechies et al. We then provide a rigorous proof of the convergence of the resulting algorithm.
(A minimal sketch of the generic IRLS iteration appears after this list.)

2. Sigl, Juliane. "Iteratively Reweighted Least Squares - Nonlinear Regression and Low-Dimensional Structure Learning for Big Data." Doctoral thesis, Technische Universität München, 2018. http://d-nb.info/1160034850/34.

3. Palkki, Ryan D. "Chemical identification under a Poisson model for Raman spectroscopy." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45935.

Abstract:
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramer-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.

4. Guo, Mengmeng. "Generalized quantile regression." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2012. http://dx.doi.org/10.18452/16569.

Abstract:
Generalized quantile regressions, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We denote $v_n(x)$ as the kernel smoothing estimator of the expectile curves. We prove the strong uniform consistency rate of $v_n(x)$ under general conditions. Moreover, using strong approximations of the empirical process and extreme value theory, we consider the asymptotic maximal deviation $\sup_{0 \leqslant x \leqslant 1}|v_n(x)-v(x)|$. According to the asymptotic theory, we construct simultaneous confidence bands around the estimated expectile function. We develop a functional data analysis approach to jointly estimate a family of generalized quantile regressions. Our approach assumes that the generalized quantiles share some common features that can be summarized by a small number of principal component functions. The principal components are modeled as spline functions and are estimated by minimizing a penalized asymmetric loss measure. An iteratively reweighted least squares algorithm is developed for computation. While separate estimation of individual generalized quantile regressions usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 150 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations.

5. Barreto, Jose Antonio. "L(p)-approximation by the iteratively reweighted least squares method and the design of digital FIR filters in one dimension." Thesis, 1993. http://hdl.handle.net/1911/13689.

Abstract:
In this thesis a new and simple-to-program approach is proposed in order to obtain an L_p approximation, 2 < p < ∞, based on the Iteratively Reweighted Least Squares (IRLS) method, for designing a linear phase digital finite impulse response (FIR) filter. This technique, interesting in its own right, can also be used as an intermediate design method between the least squared error and the minimum Chebyshev error criteria. Various IRLS algorithms are evaluated through comparison of the number of iterations required for convergence. It is shown that Kahng's (or Fletcher et al.'s) method with a modified acceleration technique developed in this work performs better, for most practical cases, than the other algorithms examined. A filter design method which allows different norms in different bands is proposed and implemented. An important extension of this method also considers the case of different p's (or different norms) in the stopband.

6. "Synthetic Aperture Radar Image Formation Via Sparse Decomposition." Master's thesis, 2011. http://hdl.handle.net/2286/R.I.9211.

Abstract:
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.

7. Masák, Tomáš. "Velká data - extrakce klíčových informací pomocí metod matematické statistiky a strojového učení" [Big data - extraction of key information using methods of mathematical statistics and machine learning]. Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-357228.

Abstract:
This thesis is concerned with data analysis, especially with principal component analysis and its sparse modification (SPCA), which is NP-hard to solve. The SPCA problem can be recast into the regression framework, in which sparsity is usually induced with an ℓ1-penalty. In the thesis, we propose to use an iteratively reweighted ℓ2-penalty instead of the aforementioned ℓ1-approach. We compare the resulting algorithm with several well-known approaches to SPCA using both a simulation study and an interesting practical example in which we analyze voting records of the Parliament of the Czech Republic. We show experimentally that the proposed algorithm outperforms the other considered algorithms. We also prove convergence of both the proposed algorithm and the original regression-based approach to PCA.
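Several of the theses above (notably entries 1 and 5) build on the same basic IRLS idea: replace a non-quadratic objective by a sequence of weighted least-squares problems whose weights are recomputed from the current iterate. The sketch below illustrates only that generic scheme for the equality-constrained sparse-recovery setting; it is not the algorithm of any specific work listed here. The function name, the fixed smoothing parameter eps, and the toy test problem are assumptions made for the example, and published algorithms such as Daubechies et al. (2010) drive eps to zero with data-driven rules.

```python
import numpy as np

def irls_sparse_recovery(A, b, p=1.0, n_iter=100, eps=1e-3):
    """Illustrative IRLS for sparse recovery: seek a sparse x with A @ x = b.

    Each pass solves  min_x  sum_i w_i * x_i**2  subject to  A @ x = b,
    whose closed form is  x = D @ A.T @ inv(A @ D @ A.T) @ b  with
    D = diag(1 / w_i).  The weights w_i = (x_i**2 + eps**2)**(p/2 - 1)
    emulate an l_p objective; eps is held fixed here for simplicity.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm least-squares start
    for _ in range(n_iter):
        d = (x**2 + eps**2) ** (1.0 - p / 2.0)    # d_i = 1 / w_i
        ADAt = (A * d) @ A.T                      # A @ D @ A.T  (m x m)
        x = d * (A.T @ np.linalg.solve(ADAt, b))  # weighted LS solution with A @ x = b
    return x

# Toy usage: recover a 3-sparse vector of length 100 from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
x_hat = irls_sparse_recovery(A, A @ x_true)
print(np.round(x_hat[[5, 37, 80]], 2))            # should be close to the true values
```

Variants of this update, with different weight rules, underlie many of the journal articles, book chapters, and conference papers listed on this page.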

Book chapters on the topic "Reweighted Least-Squares"

1. Dollinger, Michael B., and Robert G. Staudte. "Efficiency of Reweighted Least Squares Iterates." In Directions in Robust Statistics and Diagnostics. Springer New York, 1991. http://dx.doi.org/10.1007/978-1-4615-6861-2_6.

2. Marx, Brian D. "Iterative Reweighted Partial Least Squares Estimation for GLMs." In Statistical Modelling. Springer New York, 1995. http://dx.doi.org/10.1007/978-1-4612-0789-4_21.

3. Samejima, Masaki, and Yasuyuki Matsushita. "Fast General Norm Approximation via Iteratively Reweighted Least Squares." In Computer Vision – ACCV 2016 Workshops. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54427-4_16.

4. Knight, Keith. "A Continuous-Time Iteratively Reweighted Least Squares Algorithm for L∞ Estimation." In Springer Proceedings in Mathematics & Statistics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28665-1_4.

5. "Iteratively Reweighted Least Squares." In Probability Methods for Cost Uncertainty Analysis. Chapman and Hall/CRC, 2015. http://dx.doi.org/10.1201/b19143-23.

6. Gill, Jeff, and Kenneth J. Meier. "The Theory and Application of Generalized Substantively Reweighted Least Squares." In What Works. Routledge, 2018. http://dx.doi.org/10.4324/9780429503108-3.

Conference papers on the topic "Reweighted Least-Squares"

1. Mohan, Karthik, and Maryam Fazel. "Iterative reweighted least squares for matrix rank minimization." In 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2010. http://dx.doi.org/10.1109/allerton.2010.5706969.

2. Li, Shuang, Qiuwei Li, Gang Li, Xiongxiong He, and Liping Chang. "Iteratively reweighted least squares for block-sparse recovery." In 2014 IEEE 9th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2014. http://dx.doi.org/10.1109/iciea.2014.6931321.

3. Rihan, Mohamed, Maha Elsabrouty, Said Elnouby, Hossam Shalaby, Osamu Muta, and Hiroshi Furukawa. "Iterative Reweighted Least Squares approach to interference alignment." In 2013 IFIP Wireless Days (WD). IEEE, 2013. http://dx.doi.org/10.1109/wd.2013.6686521.

4. Liu, Kaihui, Liangtian Wan, and Feiyu Wang. "Fast Iteratively Reweighted Least Squares Minimization for Sparse Recovery." In 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP). IEEE, 2018. http://dx.doi.org/10.1109/icdsp.2018.8631827.

5. Millikan, Brian, Aritra Dutta, Nazanin Rahnavard, Qiyu Sun, and Hassan Foroosh. "Initialized iterative reweighted least squares for automatic target recognition." In MILCOM 2015 - 2015 IEEE Military Communications Conference. IEEE, 2015. http://dx.doi.org/10.1109/milcom.2015.7357493.

6. Ince, Taner, Nurdal Watsuji, and Arif Nacaroglu. "Iteratively reweighted least squares minimization for sparsely corrupted measurements." In 2011 IEEE 19th Signal Processing and Communications Applications Conference (SIU). IEEE, 2011. http://dx.doi.org/10.1109/siu.2011.5929657.

7. Schlittgen, Rainer, Christian M. Ringle, Marko Sarstedt, and Jan-Michael Becker. "Segmentation of PLS path models by iterative reweighted regressions." In 2nd International Symposium on Partial Least Squares Path Modeling - The Conference for PLS Users. University of Twente, 2015. http://dx.doi.org/10.3990/2.344.

8. Zhang, Xinyue, Xudong Zhang, and Bin Zhou. "Fast iterative reweighted least squares algorithm for sparse signals recovery." In 2016 IEEE International Conference on Digital Signal Processing (DSP). IEEE, 2016. http://dx.doi.org/10.1109/icdsp.2016.7868547.

9. Shuang, Li. "Sparse Representation of Hardy Function by Iteratively Reweighted Least Squares." In 2020 International Symposium on Computer Engineering and Intelligent Communications (ISCEIC). IEEE, 2020. http://dx.doi.org/10.1109/isceic51027.2020.00020.

10. Ramani, Sathish, and Jeffrey A. Fessler. "An accelerated iterative reweighted least squares algorithm for compressed sensing MRI." In 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE, 2010. http://dx.doi.org/10.1109/isbi.2010.5490364.

Reports on the topic "Reweighted Least-Squares"

1. Wohlberg, Brendt, and Paul Rodriguez. Sparse Representations with Data Fidelity Term via an Iteratively Reweighted Least Squares Algorithm. Office of Scientific and Technical Information (OSTI), 2007. http://dx.doi.org/10.2172/1000493.