
Dissertations / Theses on the topic 'Large-scale optimization methods'

Consult the top 50 dissertations / theses for your research on the topic 'Large-scale optimization methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Barclay, Alexander. "SQP methods for large-scale optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9936873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Hongchao. "Gradient methods for large-scale nonlinear optimization." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013703.

3

Erway, Jennifer B. "Iterative methods for large-scale unconstrained optimization." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3222051.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006. Title from first page of PDF file (viewed September 20, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 146-149).
4

Fountoulakis, Kimon. "Higher-order methods for large-scale optimization." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15797.

Abstract:
There has been increased interest in optimization for the analysis of large-scale data sets which require gigabytes or terabytes of data to be stored. A variety of applications originate from the fields of signal processing, machine learning and statistics. Seven representative applications are described below.
- Magnetic Resonance Imaging (MRI): a medical imaging tool used to scan the anatomy and the physiology of a body.
- Image inpainting: a technique for reconstructing degraded parts of an image.
- Image deblurring: an image-processing tool for removing the blurriness of a photo caused by
5

Griffin, Joshua D. "Interior-point methods for large-scale nonconvex optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3167839.

6

Lu, Haihao Ph D. Massachusetts Institute of Technology. "Large-scale optimization Methods for data-science applications." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122272.

Abstract:
Thesis: Ph. D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Department of Mathematics, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 203-211). In this thesis, we present several contributions to large-scale optimization methods with applications in data science and machine learning. In the first part, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. We consider a general convex optimization problem, where we presume
7

Marcia, Roummel F. "Primal-dual interior-point methods for large-scale optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3044769.

8

Van, Mai Vien. "Large-Scale Optimization With Machine Learning Applications." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.

Abstract:
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue problems and elastic-net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proxi
9

Ghadimi, Euhanna. "Accelerating Convergence of Large-scale Optimization Algorithms." Doctoral thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162377.

Abstract:
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, i
10

Becker, Adrian Bernard Druke. "Decomposition methods for large scale stochastic and robust optimization problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68969.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 107-112). We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real-world applications. We introduce a new type of Markov decision problem, named the Generalized Restless Bandits Problem, that encompasses a broad generalization of the restless bandit problem. For this class of stoch
11

Wright, Stephen E. "Convergence and approximation for primal-dual methods in large-scale optimization /." Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/5751.

12

Zikrin, Spartak. "Large-Scale Optimization Methods with Application to Design of Filter Networks." Doctoral thesis, Linköpings universitet, Optimeringslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-103646.

Abstract:
Nowadays, large-scale optimization problems are among the most challenging. Any progress in developing methods for large-scale optimization results in solving important applied problems more effectively. Limited-memory methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited-memory methods are usually combined with a line search. We develop new limited memory trust-r
13

Bui-Thanh, Tan. "Model-constrained optimization methods for reduction of parameterized large-scale systems." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40305.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 143-158). Most model reduction techniques employ a projection framework that utilizes a reduced-space basis. The basis is usually formed as the span of a set of solutions of the large-scale system, which are computed for selected values (samples) of input parameters and forcing inputs. In existing mode
14

Parisini, Fabio <1981>. "Hybrid constraint programming and metaheuristic methods for large scale optimization problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3709/.

Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree-search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic that has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is
15

Ortiz, Diaz Camilo. "Block-decomposition and accelerated gradient methods for large-scale convex optimization." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53438.

Abstract:
In this thesis, we develop block-decomposition (BD) methods and variants of accelerated gradient methods for large-scale conic programming and convex optimization, respectively. The BD methods, discussed in the first two parts of this thesis, are inexact versions of proximal-point methods applied to two-block-structured inclusion problems. The adaptive accelerated methods, presented in the last part of this thesis, can be viewed as new variants of Nesterov's optimal method. In an effort to improve their practical performance, these methods incorporate important speed-up refinements motivated
16

HomChaudhuri, Baisravan. "Price-Based Distributed Optimization in Large-Scale Networked Systems." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377868426.

17

Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.

Abstract:
Modern machine learning systems pose several new statistical, scalability, privacy and ethical challenges. With the advent of massive datasets and increasingly complex tasks, scalability has especially become a critical issue in these systems. In this thesis, we focus on fundamental challenges related to scalability, such as computational and communication efficiency, in modern machine learning applications. The underlying central message of this thesis is that classical statistical thinking leads to highly effective optimization methods for modern big data applications. The first part of the
18

Wang, Mengdi. "Stochastic methods for large-scale linear problems, variational inequalities, and convex optimization." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82367.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 201-207). This thesis considers stochastic methods for large-scale linear systems, variational inequalities, and convex optimization problems. I focus on special structures that lend themselves to sampling, such as when the linear/nonlinear mapping or the objective function is an expected value or is the sum of a large number of terms, and/or the constraint is the intersection of a large number
19

Scott, Drew. "Decomposition Methods for Routing and Planning of Large-Scale Aerospace Systems." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617108065278479.

20

Kilinc-Karzan, Fatma. "Tractable relaxations and efficient algorithmic techniques for large-scale optimization." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41141.

Abstract:
In this thesis, we develop tractable relaxations and efficient algorithms for large-scale optimization. Our developments are motivated by a recent paradigm, Compressed Sensing (CS), which consists of acquiring directly low-dimensional linear projections of signals, possibly corrupted with noise, and then using sophisticated recovery procedures for signal reconstruction. We start by analyzing how to utilize a priori information given in the form of sign restrictions on part of the entries. We propose necessary and sufficient conditions on the sensing matrix for exact recovery of sparse signals, utilize th
21

Silveti, Falls Antonio. "First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.

Abstract:
In this work, we develop and examine two new first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are at the heart of many scientific and engineering fields, in particular data science and imaging. Our work focuses on relaxing the Lipschitz-smoothness assumptions usually required by first-order splitting algorithms, by replacing the Euclidean energy with a Bregman divergence. These developments
22

Lindell, Hugo. "Methods for optimizing large scale thermal imaging camera placement problems." Thesis, Linköpings universitet, Optimeringslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161946.

Abstract:
The objective of this thesis is to model and solve the problem of placing thermal imaging cameras for monitoring piles of combustible bio-fuels. The cameras, of different models, can be mounted at discrete heights on poles at fixed positions and at discrete angles, and one seeks camera-model and mounting combinations that monitor as much of the piles as possible at as low a cost as possible. Since monitoring all piles may not be possible or desired, due to budget or customer constraints, the solution to the problem is a set of compromises between coverage and cost. We denote such a set of compromi
23

Kervazo, Christophe. "Optimization framework for large-scale sparse blind source separation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS354/document.

Abstract:
Over the last decades, Blind Source Separation (BSS) has become a leading tool for processing multi-valued data. The goal of this PhD is, however, to study the large-scale case, for which most classical algorithms suffer degraded performance. This document is organized in four parts, each addressing one aspect of the problem: i) the introduction of robust sparse-BSS algorithms that require only a single run (despite a delicate choice of hyper-parameters) and are strongly supported mathematically; ii) the proposal of
24

Ananduta, Wayan Wicak. "Non-centralized optimization-based control schemes for large-scale energy systems." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2019. http://hdl.handle.net/10803/669263.

Abstract:
Non-centralized control schemes for large-scale systems, including energy networks, are more flexible, scalable, and reliable than their centralized counterpart. These benefits are obtained by having a set of local controllers, each of which is responsible for a partition of the system, instead of one central entity that controls the whole system. Furthermore, in some cases, employing a non-centralized control structure might be necessary due to the intractability of the centralized method. Thus, this thesis is devoted to the study of non-centralized optimization-based control approaches
25

DENG, HAOYANG. "Fast Optimization Methods for Model Predictive Control via Parallelization and Sparsity Exploitation." Kyoto University, 2020. http://hdl.handle.net/2433/259076.

26

Cho, Taewon. "Computational Advancements for Solving Large-scale Inverse Problems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103772.

Abstract:
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems and medical imaging problems such as greenhouse gas tracking and computational tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection m
27

Fitiwi, Desta Zahlay. "Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems." Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192363.

Abstract:
Driven by a number of factors, the electric power industry is expected to undergo a paradigm shift with a considerably increased level of variable energy sources. A significant integration of such sources requires heavy transmission investments over geographically wide and large-scale networks. However, the stochastic nature of such sources, along with the sheer size of network systems, results in problems that may become intractable. Thus, the challenge addressed in this work is to design efficient and reasonably accurate models, strategies and tools that can solve large-scale TEP problems un
28

Wagner, Julian [Verfasser], Ralf [Gutachter] Münnich, Ekkehard [Gutachter] Sachs, Göran [Gutachter] Kauermann, and Volker [Gutachter] Schulz. "Optimization Methods and Large-Scale Algorithms in Small Area Estimation / Julian Wagner ; Gutachter: Ralf Münnich, Ekkehard Sachs, Göran Kauermann, Volker Schulz." Trier : Universität Trier, 2019. http://d-nb.info/1197808493/34.

29

de, la Lama Zubiran Paula. "Solving large-scale two-stage stochastic optimization problems by specialized interior point method." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671488.

Abstract:
Two-stage stochastic optimization models give rise to very large linear problems (LPs). Several approaches have been devised for solving them efficiently, among which are interior-point methods (IPMs). With IPMs, however, the linking columns associated with first-stage decisions cause excessive fill-in in the solutions of the normal equations, making the procedure computationally expensive. We have taken a step toward a better solution by reformulating the LP through a variable-splitting technique, which has significantly reduced the solution time. This work prese
30

Chen, W. C. T. "The modified barrier method for large scale nonlinear steady state and dynamic optimization." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597538.

Abstract:
Equality constraints are dealt with by including them directly in the inner optimization problem of the MBF method. Exact Hessian and gradient information is used throughout all implementations. The MBF, as implemented, consists of a two-stage approach: an outer cycle where the Lagrange multipliers for simple bound constraints of the variables are updated and an inner cycle, where the resulting equality-only constrained nonlinear optimization problem is solved. At present, inequalities in the problem are converted to equalities with the addition of bounded slack variables, the subsequently sol
31

Jolivet, Frederic. "Approches "problèmes inverses" régularisées pour l'imagerie sans lentille et la microscopie holographique en ligne." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES012/document.

Abstract:
In digital imaging, regularized inverse-problems approaches reconstruct information of interest from measurements and an image-formation model. Since the inversion problem is ill-posed and ill-conditioned, and the image-formation model used is weakly constrained, priors must be introduced to restrict the ambiguity of the inversion. This guides the reconstruction toward a satisfactory solution. The work in this thesis concerned the development of digital-hologram reconstruction algorithms based on optimization methods
32

Dai, Wei. "Learning with Staleness." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1209.

Abstract:
A fundamental assumption behind most machine learning (ML) algorithms and analyses is sequential execution. That is, any update to the ML model can be immediately applied and the new model is always available for the next algorithmic step. This basic assumption, however, can be costly to realize when the computation is carried out across multiple machines, linked by commodity networks that are usually 10^4 times slower than memory speed due to fundamental hardware limitations. As a result, concurrent ML computation in distributed settings often needs to handle delayed updates and p
33

Hellman, Fredrik. "Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems." Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-130013.

Abstract:
This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: First, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are preferable. Second, how the performance of the considered solution methods scales with network size. Third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to ans
34

Rauški, Sonja [Verfasser], Christof [Akademischer Betreuer] Büskens, and Matthias [Akademischer Betreuer] Gerdts. "Limited Memory BFGS method for Sparse and Large-Scale Nonlinear Optimization / Sonja Rauški. Gutachter: Christof Büskens ; Matthias Gerdts. Betreuer: Christof Büskens." Bremen : Staats- und Universitätsbibliothek Bremen, 2014. http://d-nb.info/1072226332/34.

35

Culioli, Jean-Christophe. "Algorithmes de decomposition/coordination en optimisation stochastique." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0059.

Abstract:
The systems considered, which are often complex to model and/or to optimize, may consist of heterogeneous subsystems for which a global solution technique is not necessarily appropriate or possible, even when they are equivalent and few in number.
36

Fan, Yu. "Multi-scale approaches for the vibration and energy flow through piezoelectric waveguides : simulation strategies, control mechanisms and circuits optimization." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC019/document.

Abstract:
This thesis deals with controlling the flow of mechanical energy in periodic structures. The structural-dynamics problems considered in this thesis are approached from a wave-based description: the forced response of a system is computed as a superposition of waves in the structure, while the eigenmodes are interpreted as standing waves. One advantage of the wave approach is that it considerably reduces the size of the dynamic problems. This proves particularly useful in the domain of high and
37

Degbey, Octavien. "Optimisation statique hiérarchisée des systèmes de grandes dimensions : Application à l'équilibrage de bilans de mesures." Nancy 1, 1987. http://www.theses.fr/1987NAN10158.

Abstract:
Hierarchical static optimization of industrial systems; hierarchical treatment of the equilibration of measurement balances for static systems described by homogeneous multilinear model equations with unknown parameters.
38

Aybat, Necdet Serhat. "First Order Methods for Large-Scale Sparse Optimization." Thesis, 2011. https://doi.org/10.7916/D8V69RJJ.

Abstract:
In today's digital world, improvements in acquisition and storage technology are allowing us to acquire more accurate and finer application-specific data, whether it be tick-by-tick price data from the stock market or frame-by-frame high resolution images and videos from surveillance systems, remote sensing satellites and biomedical imaging systems. Many important large-scale applications can be modeled as optimization problems with millions of decision variables. Very often, the desired solution is sparse in some form, either because the optimal solution is indeed sparse, or because a sparse
39

"Structural Decomposition Methods for Sparse Large-Scale Optimization." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.62718.

Abstract:
This dissertation focuses on three large-scale optimization problems and devising algorithms to solve them. In addition to the societal impact of each problem’s solution, this dissertation contributes to the optimization literature a set of decomposition algorithms for problems whose optimal solution is sparse. These algorithms exploit problem-specific properties and use tailored strategies based on iterative refinement (outer-approximations). The proposed algorithms are not rooted in duality theory, providing an alternative to existing methods based on linear programming relaxations
40

Jain, Prateek. "Large scale optimization methods for metric and kernel learning." Thesis, 2009. http://hdl.handle.net/2152/27132.

Abstract:
A large number of machine learning algorithms are critically dependent on the underlying distance/metric/similarity function. Learning an appropriate distance function is therefore crucial to the success of many methods. The class of distance functions that can be learned accurately is characterized by the amount and type of supervision available to the particular application. In this thesis, we explore a variety of such distance learning problems using different amounts/types of supervision and provide efficient and scalable algorithms to learn appropriate distance functions for each of these
41

Binhomaid, Omar. "Comparison between Optimization and Heuristic Methods for Large-Scale Infrastructure Rehabilitation Programs." Thesis, 2012. http://hdl.handle.net/10012/7043.

Abstract:
Civil infrastructure systems are the foundation of economic growth and prosperity in all nations. In recent years, infrastructure rehabilitation has been a focus of attention in North America and around the world. A large percentage of existing infrastructure assets is deteriorating due to harsh environmental conditions, insufficient capacity, and age. Ideally, an assets management system would include functions such as condition assessment, deterioration modeling, repair modeling, life-cycle cost analysis, and asset prioritization for repair along a planning horizon. While many asset manageme
42

Lee, Cheng-Yu. "A Comparison of Optimization Methods for Large-scale L1-regularized Logistic Regression." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2207200801295600.

43

Yuan, Guo-Xun, and 袁國訓. "A Comparison of Optimization Methods for Large-scale L1-regularized Linear Classification." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/89306444290029333316.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 97 (ROC calendar). A large-scale linear classifier is useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability causes more difficulties in training. Various optimization methods have been proposed in recent years, but no serious comparison among them has been made. In this paper, we carefully address implementation issues of some representative methods and conduct a comprehensive comparison. Results show that coordinate descent type methods may be the most suitable in general situat
44

Lee, Cheng-Yu, and 李振宇. "A Comparison of Optimization Methods for Large-scale L1-regularized Logistic Regression." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/60103621896267974383.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 96 (ROC calendar). Large-scale logistic regression is useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability causes more difficulties in training. Various optimization methods have been proposed in recent years, but no serious comparison between them has been made. In this thesis we propose a trust-region Newton method and compare several existing methods. Results show that our method is competitive with some state-of-the-art L1-regularized logistic regression solvers. To investigate the ap
45

Ridzal, Denis. "Trust-region SQP methods with inexact linear system solves for large-scale optimization." Thesis, 2006. http://hdl.handle.net/1911/18962.

Abstract:
This thesis extends the design and the global convergence analysis of a class of trust-region sequential quadratic programming (SQP) algorithms for smooth nonlinear optimization to allow for an efficient integration of inexact linear system solvers. Each iteration within an SQP method requires the solution of several linear systems, whose system matrix/operator involves the linearized constraints. Most existing implementations of SQP algorithms use direct linear algebra methods to solve these systems. For many optimization problems in science and engineering this is infeasible, because the sys
46

Esmailzadeh, Ali. "Novel opposition-based sampling methods for efficiently solving challenging optimization problems." Thesis, 2011. http://hdl.handle.net/10155/150.

Abstract:
In solving noise-free and noisy optimization problems, candidate initialization and sampling play a key role, but are not deeply investigated. It is of interest to know if the entire search space has the same quality for candidate-solutions during solving different type of optimization problems. In this thesis, a comprehensive investigation is conducted in order to clear those doubts, and to examine the effects of variant sampling methods on solving challenging optimization problems, such as large-scale, noisy, and multi-modal problems. As a result, the search space is segmented by using seven
47

Kontogiorgis, Spyridon A. "Alternating directions methods for the parallel solution of large-scale block-structured optimization problems." 1994. http://catalog.hathitrust.org/api/volumes/oclc/32179058.html.

Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1994. Typescript. Includes bibliographical references (leaves 161-173).
48

(6760907), Jose S. Rodriguez. "Solution of Large-scale Structured Optimization Problems with Schur-complement and Augmented Lagrangian Decomposition Methods." Thesis, 2019.

Abstract:
In this dissertation we develop numerical algorithms and software tools to facilitate parallel solutions of nonlinear programming (NLP) problems. In particular, we address large-scale, block-structured problems with an intrinsic decomposable configuration. These problems arise in a great number of engineering applications, including parameter estimation, optimal control, network optimization, and stochastic programming. The structure of these problems can be leveraged by optimization solvers to accelerate solutions and overcome memory limitations, and we propose variants to two classes of
49

Albersmeyer, Jan [Verfasser]. "Adjoint-based algorithms and numerical methods for sensitivity generation and optimization of large scale dynamic systems / vorgelegt von Jan Albersmeyer." 2010. http://d-nb.info/101109617X/34.

50

Ghobadi, Kimia. "Optimization Methods for Patient Positioning in Leksell Gamma Knife Perfexion." Thesis, 2014. http://hdl.handle.net/1807/65662.

Abstract:
We study inverse treatment planning approaches for stereotactic radiosurgery using Leksell Gamma Knife Perfexion (PFX, Elekta, Stockholm, Sweden) to treat brain cancer and tumour patients. PFX is a dedicated head-and-neck radiation delivery device that is commonly used in clinics. In a PFX treatment, the patient lies on a couch and the radiation beams are emitted from eight banks of radioactive sources around the patient's head that are focused at a single spot, called an isocentre. The radiation delivery in PFX follows a step-and-shoot manner, i.e., the couch is stationary while the radiati