Dissertations / Theses on the topic 'Linear Algorithms'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Linear Algorithms.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Wilbanks, John W. (John Winston). "Linear Unification." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc500971/.
Full text
Rettes, Julio Alberto Sibaja. "Robust algorithms for linear regression and locally linear embedding." Repositório Institucional da UFC, 2017. http://www.repositorio.ufc.br/handle/riufc/22445.
Full text
A very large quantity of data now flows through our digital society, and there is growing interest in converting it into valuable and useful information. Machine learning plays an essential role in this transformation of data into knowledge. However, the probability of outliers in the data is too high to dismiss the importance of robust algorithms, so several models of outliers are studied. In this work, several robust estimators within the generalized linear regression framework are discussed and analyzed: the M-estimator, the S-estimator, the MM-estimator, RANSAC and the Theil-Sen estimator. This choice is motivated by the need to examine algorithms with different working principles. In particular, the M-, S- and MM-estimators are based on a modification of the least-squares criterion, whereas RANSAC is based on finding the smallest subset of points that guarantees a predefined model accuracy. The Theil-Sen estimator, on the other hand, estimates using the median of least-squares models. The performance of the estimators under a wide range of experimental conditions is compared and analyzed. In addition to the linear regression problem, the dimensionality reduction problem is considered; more specifically, locally linear embedding (LLE), principal component analysis (PCA) and some robust variants of them are treated. Motivated by adding robustness to the LLE algorithm, the RALLE algorithm is proposed. Its main idea is to use neighborhoods of different sizes to construct the weights of the points; to achieve this, RAPCA is executed on each set of neighbors and the risky points are discarded from the corresponding neighborhood. The performance of LLE, RLLE and RALLE on several datasets is evaluated.
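To make the Theil-Sen idea in the abstract above concrete, here is a minimal sketch (not code from the thesis) that estimates a simple-regression line as the median of pairwise slopes; the toy data and every name in it are illustrative.

```python
import numpy as np

def theil_sen_line(x, y):
    """Median-of-pairwise-slopes estimate for y ~ a + b*x (robust to outliers)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:                      # skip pairs with equal abscissae
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    b = np.median(slopes)                         # robust slope
    a = np.median(y - b * x)                      # robust intercept
    return a, b

# toy usage: a clean line with two gross outliers
rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 2.0 + 0.5 * x + 0.1 * rng.standard_normal(20)
y[3], y[15] = 30.0, -20.0                         # outliers
print(theil_sen_line(x, y))                       # close to (2.0, 0.5)
```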
Li, Zhentao. "Tree decompositions and linear time algorithms." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107654.
Full text
This thesis deals with tree decompositions. Trees are among the best understood classes of graphs, and the tree decomposition of a graph improves our understanding of it. For example, thanks to the work of Robertson and Seymour on graph minors, we know that for problems that are NP-hard in general there exists a linear-time algorithm on graphs admitting a certain tree decomposition. We classify the known tree decompositions and determine the properties that make each decomposition unique. As a first result, we give a linear-time algorithm to construct a tree decomposition of a graph with no minor of the complete graph K_5. Our second result is a modification of this algorithm that yields another linear-time algorithm, which constructs a tree decomposition of a graph containing no two vertex-disjoint paths between two given pairs of vertices (s_1, t_1) and (s_2, t_2). We use these two decompositions to improve the running time of existing algorithms, and we modify algorithms for planar graphs so that they can take K_5-minor-free graphs as input.
Lee, Richard. "3D non-linear image restoration algorithms." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338227.
Full textTORTORELLI, MARCUS MAGNO FERNANDES. "CENTRAL PATH ALGORITHMS FOR LINEAR PROGRAMMING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9405@1.
Full text
We study the interior point algorithms for linear programming, developed after Karmarkar's algorithm, which follow the central path. Both primal and primal-dual algorithms are considered, and the efficiency of applying a bidirectional search procedure is verified. These methods were implemented and tested by solving a set of randomly generated problems; comparing these results, we analyze the performance of the methodologies.
Yodpinyanee, Anak. "Sub-linear algorithms for graph problems." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120411.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 189-199).
In the face of massive data sets, classical algorithmic models, where the algorithm reads the entire input, performs a full computation, then reports the entire output, are rendered infeasible. To handle these data sets, alternative algorithmic models are suggested to solve problems under restricted, namely sub-linear, resources such as time, memory or randomness. This thesis aims at addressing these limitations on graph problems and combinatorial optimization problems through a number of different models. First, we consider the graph spanner problem in the local computation algorithm (LCA) model. A graph spanner is a subgraph of the input graph that preserves all pairwise distances up to a small multiplicative stretch. Given a query edge from the input graph, the LCA explores a sub-linear portion of the input graph, then decides whether to include this edge in its spanner or not; the answers to all edge queries constitute the output of the LCA. We provide the first LCA constructions for 3- and 5-spanners of general graphs with almost optimal trade-offs between spanner sizes and stretches, and for fixed-stretch spanners of low maximum-degree graphs. Next, we study the set cover problem in the oracle access model. The algorithm accesses a sub-linear portion of the input set system by probing for elements in a set, and for sets containing an element, then computes an approximate minimum set cover: a collection of an approximately-minimum number of sets whose union includes all elements. We provide probe-efficient algorithms for set cover, and complement our results with almost tight lower bound constructions. We further extend our study to the LP-relaxation variants and to the streaming setting, obtaining the first streaming results for the fractional set cover problem. Lastly, we design local-access generators for a collection of fundamental random graph models. We demonstrate how to generate graphs according to the desired probability distribution in an on-the-fly fashion. Our algorithms receive probes about arbitrary parts of the input graph, then construct just enough of the graph to answer these probes, using only polylogarithmic time, additional space and random bits per probe. We also provide the first implementation of random neighbor probes, which is a basic algorithmic building block with applications in various huge graph models.
by Anak Yodpinyanee.
Ph. D.
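The local-access generator part of the abstract above lends itself to a small illustration. The sketch below is a generic rendition of the idea rather than the thesis's construction: it answers adjacency probes to an Erdős-Rényi G(n, p) graph on the fly, deciding each edge only when it is first queried and memoizing the outcome so answers stay consistent. It supports only adjacency probes, not the random-neighbor probes the thesis implements, and all names are illustrative.

```python
import random

class LocalGnp:
    """Answer adjacency probes to G(n, p) lazily; each edge is fixed on first probe."""
    def __init__(self, n, p, seed=0):
        self.n, self.p = n, p
        self.rng = random.Random(seed)
        self.decided = {}                  # memo: frozenset({u, v}) -> bool

    def adjacent(self, u, v):
        if u == v:
            return False
        key = frozenset((u, v))
        if key not in self.decided:        # decide this edge only when first asked
            self.decided[key] = self.rng.random() < self.p
        return self.decided[key]

g = LocalGnp(n=10**9, p=1e-9, seed=42)     # far too large to ever materialize
print(g.adjacent(3, 7), g.adjacent(7, 3))  # consistent answers for the same edge
```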
Kong, Seunghyun. "Linear programming algorithms using least-squares method." Diss., Georgia Institute of Technology, 2007. Available online: http://etd.gatech.edu/theses/available/etd-04012007-010244/.
Full text
Martin Savelsbergh, Committee Member; Joel Sokol, Committee Member; Earl Barnes, Committee Co-Chair; Ellis L. Johnson, Committee Chair; Prasad Tetali, Committee Member.
Jamieson, Alan C. "Linear-time algorithms for edge-based problems." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1193079463/.
Full text
Amir-Azizi, Siamak. "Linear filtering algorithms for Monte Carlo simulations." Thesis, University of Southampton, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280859.
Full text
Pullan, Malcolm Craig. "Separated continuous linear programs: theory and algorithms." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260693.
Full text
Luo, Xiaodong. "Continuous linear programming: theory, algorithms and applications." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/10591.
Full text
Axiotis, Kyriakos. "Algorithms for Subset Sum using linear sketching." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122750.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 41-43).
Given n positive integers, the Modular Subset Sum problem asks if a subset adds up to a given target t modulo a given integer m. This is a natural generalization of the Subset Sum problem (where m = +∞) with ties to additive combinatorics and cryptography. The non-modular case was long known to be NP-complete but to admit pseudo-polynomial time algorithms and, recently, algorithms running in near-linear pseudo-polynomial time were developed [9, 21]. For the modular case, however, the best known algorithm, by Koiliaris and Xu [21], runs in time Õ(m⁵/⁴). In this thesis we tackle this problem by devising a faster algorithm for Modular Subset Sum, running in Õ(m) randomized time, which matches a recent conditional lower bound of [1] based on the Strong Exponential Time Hypothesis. Interestingly, in contrast to most previous results on Subset Sum, our algorithm does not use the Fast Fourier Transform. Instead, it is able to simulate the "textbook" dynamic programming algorithm much faster, using ideas from linear sketching. This is one of the first applications of sketching-based techniques to obtain fast algorithms for exact combinatorial problems in an offline setting.
by Kyriakos Axiotis.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
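For orientation alongside the abstract above, this is the "textbook" dynamic program that the thesis accelerates with linear sketching: it maintains the set of residues attainable modulo m and runs in O(nm) time in the worst case, not the Õ(m) randomized time achieved in the thesis. The example numbers are made up.

```python
def modular_subset_sum(nums, m, t):
    """Return True iff some subset of nums sums to t modulo m (O(n*m) textbook DP)."""
    reachable = {0}                               # residues attainable so far (empty subset)
    for a in nums:
        reachable |= {(r + a) % m for r in reachable}
        if len(reachable) == m:                   # every residue is reachable; stop early
            break
    return t % m in reachable

print(modular_subset_sum([3, 5, 9], 11, 6))       # True: 3 + 5 + 9 = 17 ≡ 6 (mod 11)
```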
Silva, Jair da. "Uma familia de algoritmos para programação linear baseada no algoritmo de Von Neumann." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306741.
Full text
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In this work, we present a new family of algorithms to solve linear programming problems. The advantages of this family of algorithms lie in its simplicity, the possibility of exploiting the sparsity of the original problem data, and its usually fast initial rate of convergence. The family arose from generalizing the idea presented by João Gonçalves, Robert Storer and Jacek Gondzio to develop the optimal pair adjustment algorithm, which was in turn based on Von Neumann's algorithm. Von Neumann's algorithm has interesting properties, such as simplicity and fast initial convergence, but it is not very practical for solving linear problems, since its overall convergence is very slow. From the computational point of view, our suggestion is not to use the family of algorithms to solve linear programming problems to optimality, but to exploit its simplicity and fast initial convergence by using it together with an infeasible primal-dual interior point method to improve the latter's efficiency. Numerical experiments show that using this family of algorithms with an infeasible primal-dual interior point method improves its performance on some classes of large-scale linear programming problems.
Doctorate
Doctor in Applied Mathematics
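As background for the abstract above, the following is a sketch of the classical Von Neumann algorithm on which the optimal pair adjustment idea builds; it is not the new family of algorithms proposed in the thesis. It assumes the feasibility problem has already been put in the standard form Px = 0, eᵀx = 1, x ≥ 0 with unit-norm columns, and the tolerances and toy example are illustrative.

```python
import numpy as np

def von_neumann(P, max_iter=10000, tol=1e-6):
    """Von Neumann's algorithm: seek x >= 0, sum(x) = 1 with P @ x ≈ 0.

    Columns of P are assumed to be scaled to unit Euclidean norm.
    Returns (x, residual_norm), or (None, None) if a separating direction
    shows that 0 is not in the convex hull of the columns.
    """
    m, n = P.shape
    x = np.full(n, 1.0 / n)          # start at the barycentre of the simplex
    b = P @ x                        # current residual
    for _ in range(max_iter):
        if np.linalg.norm(b) <= tol:
            break
        s = int(np.argmin(P.T @ b))  # column with the smallest inner product with b
        v = P[:, s] @ b
        if v > 0:                    # b separates 0 from the columns: infeasible
            return None, None
        lam = (b @ b - v) / (b @ b - 2.0 * v + 1.0)   # exact line search on the segment
        x *= (1.0 - lam); x[s] += lam
        b = (1.0 - lam) * b + lam * P[:, s]
    return x, float(np.linalg.norm(b))

# toy instance: four unit columns whose convex hull contains the origin
P = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, -1.0]])
x, res = von_neumann(P)
print(x, res)                        # a convex combination with P @ x ≈ 0
```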
Wang, Yanhui. "Affine scaling algorithms for linear programs and linearly constrained convex and concave programs." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24919.
Full text
Beauchamp, Gerson. "Algorithms for singular systems." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15368.
Full text
Borie, Richard Bryan. "Recursively constructed graph families: membership and linear algorithms." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/8140.
Full text
Horton, Steven Bradish. "The optimal linear arrangement problem: algorithms and approximation." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/30878.
Full text
Mehadhebi, Karim. "Linear expected time algorithms for nearest neighbor problems." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22774.
Full text
Peng, Wei. "Non-linear detection algorithms for MIMO multiplexing systems." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39558563.
Full text
Nwana, Vincent Lebga. "Parallel algorithms for solving mixed integer linear programs." Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368540.
Full text
朱紫君 and Chi-kwan Chu. "Polynomial time algorithms for linear and integer programming." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224301.
Full text
Peng, Wei, and 彭薇. "Non-linear detection algorithms for MIMO multiplexing systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39558563.
Full text
Chu, Chi-kwan. "Polynomial time algorithms for linear and integer programming." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22718710.
Full text
MANRIQUE, ANDRÉ ROBERT FLORES. "DATA-SELECTIVE ADAPTIVE LINEAR AND KERNEL-BASED ALGORITHMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30566@1.
Full text
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
BOLSA NOTA 10
In this dissertation, several data-selective adaptive signal processing algorithms are derived and investigated for solving two different problems. The first one involves scenarios with sparse systems, where we introduce a framework in which a general penalty function is incorporated into the cost function to exploit the sparsity of the model. Within this scope, we propose three algorithms with an adjustable penalty function: the first, based on the l1-norm, is termed the zero-attracting SM-NLMS with adjustable penalty function (ZA-SM-NLMS-ADP); the second is based on the log-sum penalty function and the third on the l0-norm, named the reweighted ZA-SM-NLMS (RZA-SM-NLMS-ADP) and the exponential ZA-SM-NLMS (EZA-SM-NLMS-ADP), respectively. We also carry out a statistical analysis of the sparsity-aware SM-NLMS algorithms with a general penalty function, arriving at mathematical expressions for the steady-state mean-square error. The second problem addressed considers nonlinear adaptive algorithms based on kernel functions. In this context, we develop two data-selective algorithms, the Set-Membership Normalized Kernel Least Mean Squares (SM-NKLMS) algorithm and the Set-Membership Kernel Affine Projection (SM-KAP) algorithm, which have the capability of naturally limiting the growing structure created by the kernels, dealing with one of the major problems that arise when working with kernel algorithms. The kernel algorithms developed have been tested on a time series prediction task. A statistical analysis of the proposed SM-NKLMS algorithm is also developed. Simulation results show that the proposed algorithms outperform standard linear and nonlinear adaptive algorithms in both convergence rate and steady-state performance.
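For context on the set-membership (data-selective) updates discussed in the abstract above, here is a sketch of the standard linear SM-NLMS recursion that the sparsity-aware and kernel variants in the dissertation build upon; the filter order, error bound gamma and the toy identification setup are illustrative choices, not taken from the thesis.

```python
import numpy as np

def sm_nlms(x_seq, d_seq, order=4, gamma=0.1, delta=1e-6):
    """Set-membership NLMS sketch: update the filter only when |error| exceeds gamma."""
    w = np.zeros(order)
    updates = 0
    for k in range(order - 1, len(x_seq)):
        x = x_seq[k - order + 1:k + 1][::-1]        # regressor, most recent sample first
        e = d_seq[k] - w @ x                        # a-priori error
        if abs(e) > gamma:                          # data-selective update
            mu = 1.0 - gamma / abs(e)
            w += mu * e * x / (delta + x @ x)
            updates += 1
    return w, updates

# toy system identification: d = h * x + small noise
rng = np.random.default_rng(1)
h = np.array([0.8, -0.4, 0.2, 0.1])
x_seq = rng.standard_normal(2000)
d_seq = np.convolve(x_seq, h)[:2000] + 0.01 * rng.standard_normal(2000)
w, updates = sm_nlms(x_seq, d_seq)
print(np.round(w, 2), updates)                      # w ≈ h, with relatively few updates
```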
Lammoglia, Bruna. "Sobre minimização de quadraticas em caixas." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306045.
Full text
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The focus of this work was the minimization of quadratic functions with box constraints. We were mainly concerned with the steepest descent and conjugate gradient methods, as well as a more recent approach called the spectral gradient method. GENCAN, an algorithm that minimizes functions over a box, was studied in detail, particularly as applied to quadratics. The objective was to analyze the efficiency of GENCAN, comparing it to classical algorithms such as LANCELOT and QUACAN. We executed numerical experiments in order to investigate the efficiency of the GENCAN versions with and without preconditioning. Evaluating the results, we concluded that preconditioning the conjugate gradient method makes GENCAN work considerably better, even though the preconditioner used here made the computation more expensive. Comparing GENCAN, LANCELOT and QUACAN, we can state that GENCAN is competitive.
Master's
Optimization
Master in Applied Mathematics
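To illustrate the kind of box-constrained quadratic minimization discussed in the abstract above, here is a bare-bones projected spectral-gradient sketch; it omits the nonmonotone line search and active-set machinery of GENCAN-style methods, and the matrix, bounds and safeguards are illustrative.

```python
import numpy as np

def box_quadratic_spg(A, b, lo, hi, x0, max_iter=500, tol=1e-8):
    """Projected spectral-gradient sketch for min 0.5 x'Ax + b'x s.t. lo <= x <= hi."""
    proj = lambda z: np.clip(z, lo, hi)             # projection onto the box
    x = proj(np.asarray(x0, float))
    g = A @ x + b                                   # gradient of the quadratic
    alpha = 1.0
    for _ in range(max_iter):
        x_new = proj(x - alpha * g)
        s = x_new - x
        if np.linalg.norm(s) <= tol:                # projected-gradient step is (almost) null
            break
        g_new = A @ x_new + b
        y = g_new - g
        sy = s @ y
        alpha = np.clip((s @ s) / sy, 1e-4, 1e4) if sy > 0 else 1.0  # Barzilai-Borwein step
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
print(box_quadratic_spg(A, b, lo=0.0, hi=0.5, x0=np.zeros(2)))  # about (0.125, 0.5)
```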
Lopez, Mario A., and Shlomo Reisner. "Linear Time Approximation of 3D Convex Polytopes." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1005.ps.
Full text
Tam, Siu-lung. "Linear-size indexes for approximate pattern matching and dictionary matching." Click to view the E-thesis via HKUTO, 2010. http://sunzi.lib.hku.hk/hkuto/record/B44205326.
Full text
Tam, Siu-lung, and 譚小龍. "Linear-size indexes for approximate pattern matching and dictionary matching." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44205326.
Full text
Ajlouni, Naim. "Genetic algorithms for control system design." Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308088.
Full text
Ben, Daya Mohamed. "Barrier function algorithms for linear and convex quadratic programming." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/25502.
Full text
Weihrauch, Christian. "Analysis of Monte Carlo algorithms for linear algebra problems." Thesis, University of Reading, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515747.
Full text
Yagoob, Muhammad Moeen. "Computationally efficient algorithms for non-linear target tracking problems." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499109.
Full text
Chen, Fei, and 陳飛. "Linear programming techniques for algorithms with applications in economics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206329.
Full text
Computer Science
Doctoral
Doctor of Philosophy
Tsaig, Yaakov. "Sparse solution of underdetermined linear systems: algorithms and applications." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Full text
Costa, da Silva Marco Aurelio. "Applications and algorithms for two-stage robust linear optimization." Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0229/document.
Full text
The research scope of this thesis is two-stage robust linear optimization. We are interested in investigating algorithms that can exploit its structure and in adding alternatives to mitigate the conservatism inherent in a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- or large-scale problem instances. In doing so, we take a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization and adaptive robust optimization. We apply these algorithms to the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem and the airline fleet assignment problem, and we show how the algorithms developed improve performance compared with previous implementations.
Nicolae, Maria-Irina. "Learning similarities for linear classification : theoretical foundations and algorithms." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES062/document.
Full text
The notion of metric plays a key role in machine learning problems such as classification, clustering and ranking. Learning metrics from training data in order to adapt them to the task at hand has attracted growing interest in the past years. This research field, known as metric learning, usually aims at finding the best parameters for a given metric under some constraints from the data. The learned metric is used in a machine learning algorithm in the hope of improving performance. Most metric learning algorithms focus on learning the parameters of Mahalanobis distances for feature vectors. Current state-of-the-art methods scale well for datasets of significant size. On the other hand, the more complex topic of multivariate time series has received only limited attention, despite the omnipresence of this type of data in applications. An important part of the research on time series is based on dynamic time warping (DTW), which computes the optimal alignment between two time series. The current state of metric learning suffers from some significant limitations which we aim to address in this thesis. The most important one is probably the lack of theoretical guarantees for the learned metric and its classification performance. The theory of (ε, γ, τ)-good similarity functions was one of the first results relating the properties of a similarity to its classification performance. A second limitation in metric learning comes from the fact that most methods work with metrics that enforce distance properties, which are computationally expensive and often not justified. In this thesis, we address these limitations through two main contributions. The first one is a novel general framework for jointly learning a similarity function and a linear classifier. This formulation is inspired by the (ε, γ, τ)-good theory, providing a link between the similarity and the linear classifier. It is also convex for a broad range of similarity functions and regularizers. We derive two equivalent generalization bounds through the frameworks of algorithmic robustness and uniform convergence using the Rademacher complexity, proving the good theoretical properties of our framework. Our second contribution is a method for learning similarity functions based on DTW for multivariate time series classification. The formulation is convex and makes use of the (ε, γ, τ)-good framework to relate the performance of the metric to that of its associated linear classifier. Using uniform stability arguments, we prove the consistency of the learned similarity, leading to the derivation of a generalization bound.
Zheng, Gan. "Optimization in linear multiuser MIMO systems." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39557923.
Full textZheng, Gan, and 鄭淦. "Optimization in linear multiuser MIMO systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39557923.
Full textZhao, Jianmin. "Optimal Clustering: Genetic Constrained K-Means and Linear Programming Algorithms." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1583.
Full textZhao, Zhanlue. "Performance Appraisal of Estimation Algorithms and Application of Estimation Algorithms to Target Tracking." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/394.
Full textSilva, Magno Teófilo Madeira da. "Equalização não-linear de canais de comunicação." Universidade de São Paulo, 2001. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-03072001-162729/.
Full text
Equalization of communication channels using neural networks is investigated by considering three kinds of networks: MLP (multilayer perceptron), RBF (radial basis function) and RNN (recurrent neural network). The performance of the nonlinear equalizers based on these networks is compared with the linear transversal equalizer and with the optimal equalizers given by the Bayesian and maximum-likelihood criteria. Binary and quaternary alphabets are transmitted over finite impulse response channel models. Decision feedback is considered whenever it is worthwhile. The training of these equalizers is considered in supervised form, and a comparison of several training algorithms is performed; in this scope, a new algorithm based on parameter acceleration is introduced for the training of MLP networks. Moreover, a hybrid equalizer composed of a linear transversal equalizer and an RNN is proposed. It is a simple and flexible nonlinear structure making use of decision feedback, and simulation results show that it may be advantageously used to equalize linear and nonlinear channels.
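As a baseline for the comparisons described in the abstract above, the sketch below trains a linear transversal equalizer with LMS on a toy BPSK sequence and a made-up channel; it is only the linear reference structure, not the MLP, RBF or RNN equalizers proposed in the thesis, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, delay, mu = 5000, 11, 5, 0.01
symbols = rng.choice([-1.0, 1.0], size=n)                # BPSK training sequence
channel = np.array([0.3, 0.9, 0.3])                      # toy dispersive FIR channel
received = np.convolve(symbols, channel)[:n] + 0.05 * rng.standard_normal(n)

w = np.zeros(taps)                                       # transversal equalizer weights
wrong = total = 0
for k in range(taps - 1, n):
    x = received[k - taps + 1:k + 1][::-1]               # equalizer input vector
    d = symbols[k - delay]                               # delayed reference symbol
    y = w @ x                                            # equalizer output
    e = d - y                                            # a-priori error
    w += mu * e * x                                      # LMS update
    if k > n // 2:                                       # count decisions after adaptation
        wrong += int(np.sign(y) != d)
        total += 1
print("decision error rate after adaptation:", wrong / total)
```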
Sanders, Daniel Preston. "Linear algorithms for graphs of tree-width at most four." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/30061.
Full textDawkins, Ian. "Development of practical evolution Galerkin algorithms on unstructured meshes." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390460.
Full textPoire, Xavier Corvera. "Model generation and sampling algorithms for dynamic stochastic programming." Thesis, University of Essex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294674.
Full textCastillo, Ileana. "Some properties of the affine scaling algorithm." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/25459.
Full textBurer, Samuel A. "New algorithmic approaches for semidefinite programming with applications to combinatorial optimization." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/30268.
Full textLevin, Matthew D. "Parallel algorithms for SIMD and MIMD computers." Thesis, Loughborough University, 1990. https://dspace.lboro.ac.uk/2134/32962.
Full textRahmouni, M. K. "The conversion of linear programmes to network flow problems." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380042.
Full textLopez, Erazo Tulio Emiro. "Metodos derivative-free para resolver um problema de programação não linear com restrições lineares." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306666.
Full text
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In the present work we study numerical methods that solve a nonlinear programming problem with linear inequality and equality constraints and that make explicit use neither of the gradient of the objective function nor of approximations to it. A sufficient-decrease method and a simple-decrease method are studied. The first one searches for better values of the objective function along a set of directions that positively generate the convex polyhedral cone at the current point. The second method seeks to improve the value of the objective function along a set of directions that, depending on the current point, either positively generate the whole space Rn or positively generate the convex polyhedral cone at that point. Algorithms for the methods, comments on the implementations and numerical tests of these implementations on problems from the Hock-Schittkowski collection are presented at the end of the work.
Master's
Master in Applied Mathematics
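To give a flavour of the derivative-free, sufficient-decrease idea described in the abstract above, here is a minimal compass-search sketch that polls the coordinate directions ±e_i (a positive spanning set of R^n) and accepts a step only under sufficient decrease. It handles the unconstrained case only, unlike the linearly constrained methods studied in the dissertation, and the test function and parameters are illustrative.

```python
import numpy as np

def compass_search(f, x0, alpha=1.0, c=1e-4, alpha_min=1e-8, max_iter=10000):
    """Derivative-free compass search: poll +/- coordinate directions and accept
    a trial point only if it achieves sufficient (not merely simple) decrease."""
    x = np.asarray(x0, float)
    n = len(x)
    dirs = np.vstack([np.eye(n), -np.eye(n)])     # positive spanning set of R^n
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in dirs:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - c * alpha ** 2:          # sufficient-decrease test
                x, fx, improved = trial, ft, True
                break
        if not improved:
            alpha *= 0.5                          # unsuccessful poll: shrink the step
            if alpha < alpha_min:
                break
    return x, fx

# toy run on a separable convex quadratic with minimizer (1, -0.5)
quad = lambda z: (z[0] - 1.0) ** 2 + 2.0 * (z[1] + 0.5) ** 2
print(compass_search(quad, [3.0, 2.0]))
```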
Iakymchuk, Roman [author]. "Performance modeling and prediction for linear algebra algorithms / Roman Iakymchuk." Aachen: Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2012. http://d-nb.info/1026308690/34.
Full text