Dissertations / Theses on the topic 'Large Scale Probleme'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Large Scale Probleme.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Falter, Daniela [Verfasser], and Bruno [Akademischer Betreuer] Merz. "A novel approach for large-scale flood risk assessments : continuous and long-term simulation of the full flood risk chain / Daniela Falter ; Betreuer: Bruno Merz." Potsdam : Universität Potsdam, 2016. http://d-nb.info/1218400412/34.
Brunner, Carl. "Pairwise Classification and Pairwise Support Vector Machines." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-87820.
There are various approaches for using binary classifiers for multi-class classification, for example the one-against-all technique, the one-against-one technique, or directed acyclic graphs. Pairwise classification is a newer approach to multi-class classification. It is based on using two input examples instead of one and decides whether the two examples belong to the same class or to different classes. A support vector machine (SVM) used for pairwise classification tasks is called a pairwise SVM. Face recognition problems, for example, are posed as pairwise classification tasks: one set of images is used for training and another set for testing. Often one is interested in interclass generalization, meaning that no person depicted in at least one image of the training set appears in any image of the test set. Of all the multi-class classification techniques mentioned, only pairwise classification yields meaningful results for interclass generalization. The decision of a pairwise classifier should not depend on the order of the two input examples. This symmetry is often ensured by using special kernels. Relations between such kernels and certain projections are derived, and it is shown that these projections can lead to a loss of information. For pairwise SVMs, symmetrizing the training sets is a further approach to ensuring symmetry: if the pair (a,b) of input examples belongs to the training set, then the pair (b,a) must belong to the training set as well. It is proven that for certain parameters both approaches lead to the same decision function. Empirical measurements show that the approach using special kernels is three to four times faster. To achieve good interclass generalization, pairwise SVMs require training sets with several million pairs. A technique is introduced that speeds up the training of pairwise SVMs by a factor of up to 130 and thus makes it possible to use training sets with several million pairs. Selecting good parameters for pairwise SVMs is also, in general, very time-consuming. Even with the described speed-ups, a grid search over the parameter space is very expensive. Therefore, a model selection technique that requires considerably less effort is introduced. In machine learning, the training set and the test set are produced by a data generation process. Starting from a non-pairwise data generation process, different pairwise data generation processes are derived and their advantages and disadvantages are assessed. Pairwise Bayes classifiers are introduced and their properties are discussed. It is shown that these Bayes classifiers generally differ between interclass generalization tasks and interexample generalization tasks. In face recognition, interexample generalization means that every person depicted in an image of the test set also appears in at least one image of the training set, while the intersection of the set of training images and the set of test images is empty. Pairwise SVMs are tested on four synthetic and two real-world databases.
One of the real-world databases used is the Labeled Faces in the Wild (LFW) database; the other was provided by Cognitec Systems GmbH. The assumptions of the model selection technique, the discussion of the information loss, and the presented speed-up techniques are supported by empirical measurements on the synthetic databases. These databases are also used to show that classifiers obtained from pairwise SVMs achieve results as good as those of pairwise Bayes classifiers. For the LFW database, a pairwise classifier is determined that achieves an average equal error rate (EER) of 0.0947 with a standard error of the mean (SEM) of 0.0057. This result is better than that of the current state-of-the-art classifier, the combined probabilistic linear discriminant analysis classifier, which achieves an average EER of 0.0993 with an SEM of 0.0051.
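The order-invariance requirement described in this abstract is easy to make concrete. The following is a minimal sketch (our illustration, not code from the thesis) of one standard symmetric pairwise kernel built from a base RBF kernel, K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c); all names and parameters here are made up for the example.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Base RBF kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def pairwise_kernel(a, b, c, d, gamma=1.0):
    """Symmetric pairwise kernel: invariant to swapping the examples in a pair.

    K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)
    """
    return (rbf(a, c, gamma) * rbf(b, d, gamma)
            + rbf(a, d, gamma) * rbf(b, c, gamma))

# The decision of a pairwise classifier built on this kernel cannot
# depend on the order of the two input examples:
rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=(4, 5))
assert np.isclose(pairwise_kernel(a, b, c, d), pairwise_kernel(b, a, c, d))
```

Because the expression is unchanged when a and b are swapped, any decision function built on this kernel is automatically symmetric in the two input examples, which is the alternative to symmetrizing the training set discussed above.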
Brinkel, Johanna [Verfasser]. "A user-centred evaluation of a mobile phone-based interactive voice response system to support infectious disease surveillance and access to healthcare for sick children in Ghana: users’ experiences, challenges and opportunities for large-scale application. Part of a concept and pilot study for mobile phone-based Electronic Health Information and Surveillance System (eHISS) for Africa / Johanna Brinkel." Bielefeld : Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1204561826/34.
Tran, Van-Hoai. "Solving large scale crew pairing problems." [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975292714.
Grigoleit, Mark Ted. "Optimisation of large scale network problems." Curtin University of Technology, Department of Mathematics and Statistics, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=115092.
We then use this information to constrain the network along a bisecting meridian. The combination of Lagrange Relaxation (LR) and a heuristic for filtering along the meridian provides an aggressive method for finding near-optimal solutions in a short time. Two network problems are studied in this work. The first is a Submarine Transit Path problem in which the transit field contains four sonar detectors at known locations, each with the same detection profile. The side constraint is the total transit time, with the submarine capable of 2 speeds. For the single-speed case, the initial LR duality gap may be as high as 30%. The first hybrid method uses a single centre meridian to constrain the network based on the unused time resource, and is able to produce solutions that are generally within 1% of optimal and always below 3%. Using the computation time for the initial Lagrange Relaxation as a baseline, the average computation time for the first hybrid method is about 30% to 50% higher, and the worst case CPU times are 2 to 4 times higher. The second problem is a random valued network from the literature. Edge costs, times, and lengths are uniform, randomly generated integers in a given range. Since the values given in the literature problems do not yield problems with a high duality gap, the values are varied and from a population of approximately 100,000 problems only the worst 200 from each set are chosen for study. These problems have an initial LR duality gap as high as 40%. A second hybrid method is developed, using values for the unused time resource and the lower bound values computed by Dijkstra's algorithm as part of the LR method. The computed values are then used to position multiple constraining meridians in order to allow LR to find better solutions.
This second hybrid method is able to produce solutions that are generally within 0.1% of optimal, with computation times that are on average 2 times the initial Lagrange Relaxation time, and in the worst case only about 5 times higher. The best method for solving the Constrained Shortest Path Problem reported in the literature thus far is the LRE-A method of Carlyle et al. (2007), which uses Lagrange Relaxation for preprocessing followed by a bounded search using aggregate constraints. We replace Lagrange Relaxation with the second hybrid method and show that optimal solutions are produced for both network problems with computation times that are between one and two orders of magnitude faster than LRE-A. In addition, these hybrid methods combined with the bounded search are up to 2 orders of magnitude faster than the commercial CPlex package using a straightforward MILP formulation of the problem. Finally, the second hybrid method is used as a preprocessing step on both network problems, prior to running CPlex. This preprocessing reduces the network size sufficiently to allow CPlex to solve all cases to optimality up to 3 orders of magnitude faster than without this preprocessing, and up to an order of magnitude faster than using Lagrange Relaxation for preprocessing. Chapter 1 provides a review of the thesis and some terminology used. Chapter 2 reviews previous approaches to the CSPP, in particular the two current best methods. Chapter 3 applies Lagrange Relaxation to the Submarine Transit Path problem with 2 speeds, to provide a baseline for comparison. The problem is reduced to a single speed, which demonstrates the large duality gap problem possible with Lagrange Relaxation, and the first hybrid method is introduced.
Chapter 4 examines a grid network problem using randomly generated edge costs and weights, and introduces the second hybrid method. Chapter 5 then applies the second hybrid method to both network problems as a preprocessing step, using both CPlex and a bounded search method from the literature to solve to optimality. The conclusion of this thesis and directions for future work are discussed in Chapter 6.
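The Lagrange Relaxation baseline that both hybrid methods start from can be sketched compactly: dualize the time constraint into the edge costs, solve the resulting unconstrained shortest path problem, and adjust the multiplier with subgradient steps. The toy instance, step size, and iteration count below are our own assumptions for illustration, not the thesis's meridian-based procedure.

```python
import heapq

# Toy constrained-shortest-path instance: edges[(u, v)] = (cost, time).
edges = {("s", "a"): (1.0, 8.0), ("a", "t"): (1.0, 8.0),
         ("s", "b"): (4.0, 3.0), ("b", "t"): (4.0, 3.0)}
adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}

def shortest_path(weight):
    """Plain Dijkstra from 's' to 't' under the given edge-weight function."""
    dist, prev, heap = {"s": 0.0}, {}, [(0.0, "s")]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weight(*edges[(u, v)])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = ["t"]
    while path[-1] != "s":
        path.append(prev[path[-1]])
    return path[::-1]

def totals(path):
    pairs = list(zip(path, path[1:]))
    return (sum(edges[e][0] for e in pairs), sum(edges[e][1] for e in pairs))

T = 10.0                                     # time budget (the side constraint)
lam, best, bound = 0.0, None, float("-inf")
for _ in range(50):                          # subgradient loop on the multiplier
    path = shortest_path(lambda c, t: c + lam * t)
    c, t = totals(path)
    bound = max(bound, c + lam * (t - T))    # Lagrangian lower bound L(lam)
    if t <= T and (best is None or c < best[0]):
        best = (c, path)                     # best feasible path found so far
    lam = max(0.0, lam + 0.1 * (t - T))      # raise lam while time-infeasible
print("lower bound:", bound, "incumbent:", best)
```

The spread between the printed lower bound and the feasible incumbent is the duality gap that the abstract reports reaching 30-40% on hard instances.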
Shim, Sangho. "Large scale group network optimization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31737.
Committee Chair: Ellis L. Johnson; Committee Member: Brady Hunsaker; Committee Member: George Nemhauser; Committee Member: Jozef Siran; Committee Member: Shabbir Ahmed; Committee Member: William Cook. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Bulin, Johannes. "Large-scale time parallelization for molecular dynamics problems." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129301.
Modern supercomputers use a large number of processors to achieve high performance, so programs must be parallelized efficiently. When solving differential equations, one usually parallelizes the computation of a single time point. The speedup of such programs is often limited, for example by the size of the problem. By additionally parallelizing in time, better scalability can be achieved. This thesis presents two well-known algorithms for time parallelization: waveform relaxation and parareal. These methods are used to solve a molecular dynamics problem in which the time domain is large compared to the number of unknowns. Finally, some improvements that enable large-scale computations are investigated.
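For reference, the parareal iteration mentioned in the abstract can be sketched in a few lines. This is the generic textbook scheme (coarse propagator G, fine propagator F, correction U_{n+1} = G(U_n^new) + F(U_n^old) - G(U_n^old)), not the thesis's molecular dynamics setup; the Euler propagators, step counts, and test equation are our own choices.

```python
import numpy as np

def euler(f, y0, t0, t1, steps):
    """Explicit Euler integrator, used as both coarse and fine propagator."""
    y, h = y0, (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * f(y)
    return y

def parareal(f, y0, t_grid, coarse_steps=1, fine_steps=100, iters=5):
    """Parareal: the fine solves inside one iteration are independent across
    time slices, which is where the parallel speedup would come from."""
    N = len(t_grid) - 1
    U = np.empty(N + 1)
    U[0] = y0
    for n in range(N):                       # initial coarse sweep
        U[n + 1] = euler(f, U[n], t_grid[n], t_grid[n + 1], coarse_steps)
    for _ in range(iters):
        F = [euler(f, U[n], t_grid[n], t_grid[n + 1], fine_steps) for n in range(N)]
        G_old = [euler(f, U[n], t_grid[n], t_grid[n + 1], coarse_steps) for n in range(N)]
        for n in range(N):                   # cheap sequential correction sweep
            G_new = euler(f, U[n], t_grid[n], t_grid[n + 1], coarse_steps)
            U[n + 1] = G_new + F[n] - G_old[n]
    return U

t = np.linspace(0.0, 2.0, 9)
U = parareal(lambda y: -y, 1.0, t)           # test on y' = -y
print(np.max(np.abs(U - np.exp(-t))))        # converges toward the fine solution
```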
Bacarella, Daniele. "Distributed clustering algorithm for large scale clustering problems." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212089.
Futamura, Natsuhiko. "Algorithms for large-scale problems in computational biology." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2002. http://wwwlib.umi.com/cr/syr/main.
Sohrabi, Babak. "Solving large scale distribution problems using heuristic algorithms." Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369654.
Shylo, Oleg V. "New tools for large-scale combinatorial optimization problems." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024719.
Rosas, José Humberto Ablanedo. "Algorithms for very large scale set covering problems /." Full text available from ProQuest UM Digital Dissertations, 2007. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1609001671&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1244747021&clientId=22256.
Cho, Taewon. "Computational Advancements for Solving Large-scale Inverse Problems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103772.
Doctor of Philosophy
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems such as greenhouse gas tracking, where estimating the amount of greenhouse gas added to or removed from the atmosphere becomes increasingly difficult. The number of observations has grown with improvements in measurement technologies (e.g., satellites), so the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply our developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of CO2 from living organisms.
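For the linear Gaussian case, the Bayesian treatment described above reduces to a regularized least-squares (Tikhonov / MAP) problem. Here is a minimal sketch with a made-up underdetermined system, purely as an illustration of the idea rather than any method from the dissertation.

```python
import numpy as np

# Underdetermined linear inverse problem b = A x + noise (fewer
# measurements than unknowns, as in the tomography example above).
rng = np.random.default_rng(1)
n, m = 50, 20                      # unknowns, measurements
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[::5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

# A Gaussian prior x ~ N(0, (1/alpha) I) gives the Tikhonov / MAP estimate
#   min_x ||A x - b||^2 + alpha ||x||^2,
# solved here via the regularized normal equations.
alpha = 0.1
x_map = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```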
Wilkinson, Stephen James. "Aggregate formulations for large-scale process scheduling problems." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/7255.
Mitchell, David Riach. "Modelling environments for large scale process system problems." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/15408.
MacLeod, Donald James. "A parallel algorithm for large scale electronic structure calculations." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/17023.
Fu, Yuhong. "Rapid solution of large-scale three-dimensional micromechanics problems /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.
Attia, Ahmed Mohamed Mohamed. "Advanced Sampling Methods for Solving Large-Scale Inverse Problems." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73683.
Ph. D.
Liu, Yida. "SECOUT: Parallel Secure Outsourcing of Large-scale Optimization Problems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586271073206561.
Dimitriadis, Andreas Dimitriou. "Algorithms for the solution of large-scale scheduling problems." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/8047.
Henneman, Richard Lewis. "Human problem solving in complex hierarchical large scale systems." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/25432.
Sabbir, Tarikul Alam Khan. "Topology sensitive algorithms for large scale uncapacitated covering problem." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2011, 2011. http://hdl.handle.net/10133/3235.
ix, 89 leaves : ill. ; 29 cm
Yost, Kirk A. "Solution of large-scale allocation problems with partially observable outcomes." Diss., Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355529.
"September 1998." Dissertation supervisor(s): Alan R. Washburn. Includes bibliographical references (p. 165-160). Also Available online.
Bängtsson, Erik. "Robust preconditioned iterative solution methods for large-scale nonsymmetric problems /." Uppsala : Department of Information Technology, Uppsala University, 2005. http://www.it.uu.se/research/reports/lic/2005-006/.
Jha, Krishna Chandra. "Very large-scale neighborhood search heuristics for combinatorial optimization problems." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0004352.
Trukhanov, Svyatoslav. "Novel approaches for solving large-scale optimization problems on graphs." [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2986.
Mišić, Velibor V. "Data, models and decisions for large-scale stochastic optimization problems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105003.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 204-209).
Modern business decisions exceed human decision making ability: often, they are of a large scale, their outcomes are uncertain, and they are made in multiple stages. At the same time, firms have increasing access to data and models. Faced with such complex decisions and increasing access to data and models, how do we transform data and models into effective decisions? In this thesis, we address this question in the context of four important problems: the dynamic control of large-scale stochastic systems, the design of product lines under uncertainty, the selection of an assortment from historical transaction data and the design of a personalized assortment policy from data. In the first chapter, we propose a new solution method for a general class of Markov decision processes (MDPs) called decomposable MDPs. We propose a novel linear optimization formulation that exploits the decomposable nature of the problem data to obtain a heuristic for the true problem. We show that the formulation is theoretically stronger than alternative proposals and provide numerical evidence for its strength in multi-armed bandit problems. In the second chapter, we consider to how to make strategic product line decisions under uncertainty in the underlying choice model. We propose a method based on robust optimization for addressing both parameter uncertainty and structural uncertainty. We show using a real conjoint data set the benefits of our approach over the traditional approach that assumes both the model structure and the model parameters are known precisely. In the third chapter, we propose a new two-step method for transforming limited customer transaction data into effective assortment decisions. The approach involves estimating a ranking-based choice model by solving a large-scale linear optimization problem, and solving a mixed-integer optimization problem to obtain a decision. Using synthetic data, we show that the approach is scalable, leads to accurate predictions and effective decisions that outperform alternative parametric and non-parametric approaches. In the last chapter, we consider how to leverage auxiliary customer data to make personalized assortment decisions. We develop a simple method based on recursive partitioning that segments customers using their attributes and show that it improves on a "uniform" approach that ignores auxiliary customer information.
by Velibor V. Mišić.
Ph. D.
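The ranking-based choice model used in the third chapter of the thesis above admits a compact illustration: each customer type is a preference ranking over the products plus a no-purchase option, and a customer buys the highest-ranked offered product. The toy types, probabilities, and revenues below are invented for the example; the thesis estimates such models from transaction data via a large-scale LP, which is not shown here.

```python
import itertools

# Customer types: preference rankings over products 0..2 (None = no purchase),
# with the fraction of the population belonging to each type.
types = [((0, 1, None), 0.5),      # prefers 0, then 1, else leaves
         ((2, None), 0.3),         # buys 2 or nothing
         ((1, 2, 0, None), 0.2)]
revenue = {0: 10.0, 1: 6.0, 2: 8.0}

def expected_revenue(assortment):
    """Each type buys its highest-ranked offered product, if any."""
    total = 0.0
    for ranking, prob in types:
        choice = next(p for p in ranking if p is None or p in assortment)
        if choice is not None:
            total += prob * revenue[choice]
    return total

# Brute-force the best assortment on this tiny instance (the thesis uses
# mixed-integer optimization for realistic sizes).
best = max((frozenset(s) for r in range(1, 4)
            for s in itertools.combinations(range(3), r)),
           key=expected_revenue)
print(sorted(best), expected_revenue(best))
```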
Becker, Adrian Bernard Druke. "Decomposition methods for large scale stochastic and robust optimization problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68969.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 107-112).
We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real world application. We introduce a new type of a Markov decision problem named the Generalized Restless Bandits Problem that encompasses a broad generalization of the restless bandit problem. For this class of stochastic optimization problems, we develop a nested policy heuristic which iteratively solves a series of sub-problems operating on smaller bandit systems. We also develop linear-optimization based bounds for the Generalized Restless Bandit problem and demonstrate promising computational performance of the nested policy heuristic on a large-scale real world application of search term selection for sponsored search advertising. We further study the distributionally robust optimization problem with known mean, covariance and support. These optimization models are attractive in their real world applications as they require the model consumer to only rely on those statistics of uncertainty that are known with relative confidence rather than making arbitrary assumptions about the exact dynamics of the underlying distribution of uncertainty. Known to be NP-hard, these problems are currently approached with tractable but often weak relaxations for real-world applications. We develop a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty and provides a sequence of bounds on the value of the derived policy. In the development of this method, we prove that non-convex quadratic optimization in n dimensions over a box in two dimensions is efficiently solvable. We also show that this same decomposition method yields a promising heuristic for the MAXCUT problem. We then provide promising computational results in the context of a real world fixed income portfolio optimization problem. The decomposition methods developed in this thesis recursively derive sub-policies on projected dimensions of the master problem. These sub-policies are optimal on relaxations which admit "tight" projections of the master problem; that is, the projection of the feasible region for the relaxation is equivalent to the projection of that of the master problem along the dimensions of the sub-policy. Additionally, these decomposition strategies provide a hierarchical solution structure that aids in solving large-scale problems.
by Adrian Bernard Druke Becker.
Ph.D.
Chen, Yujie. "Optimisation for large-scale maintenance, scheduling and vehicle routing problems." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/16107/.
Liu, Chiun-Ming. "Special versus standard algorithms for large-scale harvest scheduling problems." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/43047.
This thesis is concerned with structure exploitation and the design of algorithms for solving large-scale harvest scheduling problems. We discover that the harvest scheduling problem involving area constraints possesses a network structure. In Model I-Form 1, the network constraints form a separable block diagonal structure, which permits one to solve for the decision variables belonging to each individual area constraint as independent knapsack problems. In Model II-Form 1, the network constraints constitute a longest path problem, and a Longest Path Algorithm is developed to solve this problem in closed form. The computational time for this scheme is greatly reduced over that for the revised simplex method. The Dantzig-Wolfe algorithm is coded and tuned to solve general Model II problems, taking advantage of the Longest Path Algorithm in the subproblem step, and using the revised simplex method to solve the master problems. Computational results show that the algorithm solves problems to within one percent accuracy far more efficiently than the revised simplex method using MPS III. Both the CPU time and the number of iterations for the Dantzig-Wolfe algorithm are less than those for MPS III, depending on the problem size. Results also suggest that the Dantzig-Wolfe algorithm makes rapid advances in the initial iterations, but has a slow convergence rate in the final iterations. A Primal-Dual Conjugate Subgradient Algorithm is also coded and tuned to solve general Model II problems. Results show that the computational effort is greatly affected by the number of side constraints. If the number of side constraints is restricted, the Primal-Dual Conjugate Subgradient Algorithm can give a more efficient algorithm for solving harvest scheduling problems. Overall, from a storage requirement viewpoint, the Primal-Dual Conjugate Subgradient Algorithm is best, followed by the Dantzig-Wolfe algorithm and then the revised simplex method. From a computational efficiency viewpoint, if the optimality criterion is suitably selected, the Dantzig-Wolfe algorithm is best, provided that the number of side constraints is not too large, followed by the revised simplex method and then the Primal-Dual Conjugate Subgradient Algorithm.
Master of Science
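The longest-path structure noted in the abstract for Model II can be illustrated with the standard dynamic program for longest paths in an acyclic network (a generic textbook sketch, not the thesis's closed-form algorithm): process nodes in topological order and relax every outgoing edge once.

```python
# Longest path in a DAG by one pass in topological order -- the kind of
# network subproblem that a Model II formulation reduces to.
edges = {"s": [("a", 3.0), ("b", 2.0)],
         "a": [("t", 4.0)],
         "b": [("a", 1.0), ("t", 6.0)],
         "t": []}
topo = ["s", "b", "a", "t"]          # any topological order of the DAG

value = {v: float("-inf") for v in topo}
value["s"] = 0.0
parent = {}
for u in topo:                        # relax all edges out of u exactly once
    for v, w in edges[u]:
        if value[u] + w > value[v]:
            value[v] = value[u] + w
            parent[v] = u

path = ["t"]
while path[-1] != "s":                # recover the optimal path
    path.append(parent[path[-1]])
print(path[::-1], value["t"])         # ['s', 'b', 't'] with value 8.0
```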
Lindell, Hugo. "Methods for optimizing large scale thermal imaging camera placement problems." Thesis, Linköpings universitet, Optimeringslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161946.
The aim of this thesis is to model and solve the camera placement problem that arises when thermal (IR) cameras are used for fire surveillance of solid-fuel storage piles. The problem is, given a set of camera models and mounting poles, to determine the combinations of placements and models such that the coverage of the piles is maximal at every possible cost level. The first part of the thesis presents a model of this camera placement problem. The model uses a discrete formulation in which the area to be monitored is represented by a grid and the possible camera choices by a discrete set of candidate placements. Ray casting is used to determine which grid cells a given camera placement covers. From the set of candidate placements, an optimization model with two objective functions can be formulated: the first objective minimizes the cost of the surveillance, the second maximizes the size of the monitored area. Based on this model, a number of solution algorithms are presented: Greedy Search, Random Greedy Search, Fear Search, Unique Search, Meta-RaPS, and Weighted Linear Neighbourhood Search. The algorithms are evaluated on two artificial test problems and a number of problems from real solid-fuel storage sites. The evaluation is based on solution fronts (graphs of the non-dominated solutions with the best combinations of cost and coverage) and on a number of performance measures such as running time, lowest cost of a solution with full coverage, and so on. The evaluation showed that on the artificial instances none of the heuristics performed comparably to a standard solver, neither in solution quality nor in running time. The heuristics that performed best on these problems were above all Fear Search and Greedy Search. On the smaller instances from existing solid-fuel storage sites, the standard solver also found optimal solution fronts and a solution with full coverage, but the running time was several times longer than for some of the heuristics. In a hundredth or a tenth of the time, the Greedy Search or Random Greedy Search heuristics can find a solution front comparable to the standard solver's up to 70-80% coverage. For the largest problem instances, the running time of the standard solver is so long that solving the problems is in many cases practically infeasible, both for generating the front and for finding a solution with full coverage. In these cases heuristics are usually the only viable alternative. We found that Greedy Search and Random Greedy Search were, as on the smaller instances, the heuristics that generated the best solution fronts, although a better full-coverage solution could often be found with Fear Search or Unique Search.
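The Greedy Search heuristic that performs well above is, at its core, greedy maximum coverage: repeatedly add the candidate placement with the best ratio of newly covered grid cells to cost. A minimal sketch with invented cells, costs, and budgets (not the thesis's implementation):

```python
# Candidate placements: covered grid cells (e.g., from ray casting) and cost.
candidates = {"poleA_cam1": ({1, 2, 3, 4}, 2.0),
              "poleA_cam2": ({3, 4, 5, 6, 7}, 3.0),
              "poleB_cam1": ({6, 7, 8}, 1.0),
              "poleB_cam2": ({8, 9}, 1.0)}

def greedy_placements(budget):
    """Pick placements maximizing newly covered cells per unit cost."""
    covered, chosen, cost = set(), [], 0.0
    while True:
        best, best_gain = None, 0.0
        for name, (cells, c) in candidates.items():
            if name in chosen or cost + c > budget:
                continue
            gain = len(cells - covered) / c
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:                 # nothing affordable improves coverage
            return chosen, covered, cost
        chosen.append(best)
        covered |= candidates[best][0]
        cost += candidates[best][1]

for budget in (1.0, 2.0, 4.0, 7.0):      # sweep the budget
    chosen, covered, cost = greedy_placements(budget)
    print(budget, sorted(chosen), len(covered), cost)
```

Sweeping the budget, as in the final loop, traces out exactly the kind of cost/coverage solution front used in the evaluation above.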
Bängtsson, Erik. "Robust preconditioned iterative solution methods for large-scale nonsymmetric problems." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86353.
Roosta-Khorasani, Farbod. "Randomized algorithms for solving large scale nonlinear least squares problems." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52663.
Science, Faculty of
Computer Science, Department of
Graduate
Guertler, Siegfried. "Large scale computer-simulations of many-body Bose and Fermi systems at low temperature." Thesis, University of Hong Kong (HKUTO), 2008. http://sunzi.lib.hku.hk/hkuto/record/B40887741.
Ding, Jian. "Fast Boundary Element Method Solutions For Three Dimensional Large Scale Problems." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6830.
Ding, Jian. "Fast boundary element method solutions for three dimensional large scale problems." Available online, Georgia Institute of Technology, 2005, 2004. http://etd.gatech.edu/theses/available/etd-01102005-174227/unrestricted/ding%5Fjian%5F200505%5Fphd.pdf.
Mucha, Peter, Committee Member ; Qu, Jianmin, Committee Member ; Ye, Wenjing, Committee Chair ; Hesketh, Peter, Committee Member ; Gray, Leonard J., Committee Member. Vita. Includes bibliographical references.
Solomon, P. J. "Some problems in the statistical analysis of large scale clinical trials." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37860.
Cohn, Amy Ellen Mainville 1969. "Composite-variable modeling for large-scale problems in transportation and logistics." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8529.
Includes bibliographical references (p. 137-142).
Numerous important real-world problems are found in the areas of transportation and logistics. Many of these problems pose tremendous challenges due to characteristics such as complex networks, tightly constrained resources, and very large numbers of heavily inter-connected decisions. As a result, mathematical models can be critical in solving these problems. These models, however, can be computationally challenging or even intractable. In this thesis we discuss how greater tractability can sometimes be achieved with composite-variable models - models in which individual binary variables encompass multiple decisions. In Part I, we discuss common challenges found in solving large-scale transportation and logistics problems. We introduce the idea of composite variables and discuss the potential benefits of composite-variable models. We also note some of the drawbacks of these models and discuss approaches to addressing these drawbacks. In Parts II and III, we demonstrate these ideas using two real-world examples, one from airline planning and the other from service parts logistics. We build on our experience from these two applications in Part IV, providing some broader insights for composite-variable modeling. We focus in particular on the dominance property seen in the service parts logistics example and on the fact that we can relax the integrality of the composite variables in the airline planning example. In both cases, we introduce broader classes of problems in which these properties can also be found. We offer conclusions in Part V.
(cont.) The contributions of the thesis are three-fold. First, we provide a new model and solution approach for an important real-world problem from the airline industry. Second, we provide a framework for addressing challenging problems in service parts logistics. Third, we provide insights into how to construct composite-variable models for greater tractability. These insights can be useful not only in solving large-scale problems, but also in integrating multiple stages within a planning environment, developing better heuristics for solving large problems in real time, and providing users with greater control in trading off solution time and quality.
by Amy Ellen Mainville Cohn.
Ph.D.
Parisini, Fabio <1981>. "Hybrid constraint programming and metaheuristic methods for large scale optimization problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3709/.
Madabushi, Ananth R. "Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36833.
Master of Science
Romero, Alcalde Eloy. "Parallel implementation of Davidson-type methods for large-scale eigenvalue problems." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/15188.
Full textRomero Alcalde, E. (2012). Parallel implementation of Davidson-type methods for large-scale eigenvalue problems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15188
Palancia
Dan, Hiroshige. "Studies on algorithms for large-scale nonlinear optimization and related problems." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/145312.
Da, Silva Curt. "Large-scale optimization algorithms for missing data completion and inverse problems." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62968.
Science, Faculty of
Mathematics, Department of
Graduate
Ma, Yanting. "Solving Large-Scale Inverse Problems via Approximate Message Passing and Optimization." Thesis, North Carolina State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10758823.
This work studies the problem of reconstructing a signal from measurements obtained by a sensing system, where the measurement model that characterizes the sensing system may be linear or nonlinear.
We first consider linear measurement models. In particular, we study the popular low-complexity iterative linear inverse algorithm, approximate message passing (AMP), in a probabilistic setting, meaning that the signal is assumed to be generated from some probability distribution, though the distribution may be unknown to the algorithm. The existing rigorous performance analysis of AMP only allows using a separable or block-wise separable estimation function at each iteration of AMP, and therefore cannot capture sophisticated dependency structures in the signal. This work studies the case when the signal has a Markov random field (MRF) prior, which is commonly used in image applications. We provide rigorous performance analysis of AMP with a class of non-separable sliding-window estimation functions, which is suitable to capture local dependencies in an MRF prior.
In addition, we design AMP-based algorithms with non-separable estimation functions for hyperspectral imaging and universal compressed sensing (imaging), and compare our algorithms to state-of-the-art algorithms with extensive numerical examples. For fast computation in large-scale problems, we study a multiprocessor implementation of AMP and provide its performance analysis. Additionally, we propose a two-part reconstruction scheme where Part 1 detects zero-valued entries in the signal using a simple and fast algorithm, and Part 2 solves for the remaining entries using a high-fidelity algorithm. Such a two-part scheme naturally leads to a trade-off analysis of speed and reconstruction quality.
Finally, we study diffractive imaging, where the electric permittivity distribution of an object is reconstructed from scattered wave measurements. When the object is strongly scattering, a nonlinear measurement model is needed to characterize the relationship between the permittivity and the scattered wave. We propose an inverse method for nonlinear diffractive imaging. Our method is based on a nonconvex optimization formulation. The nonconvex solver used in the proposed method is our new variant of the popular convex solver, the fast iterative shrinkage/thresholding algorithm (FISTA). We provide a fast and memory-efficient implementation of our new FISTA variant and prove that it reliably converges for our nonconvex optimization problem. Hence, our new FISTA variant may be of interest on its own as a general nonconvex solver. In addition, we systematically compare our method to state-of-the-art methods on simulated as well as experimentally measured data in both 2D and 3D (vectorial field) settings.
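For reference, the convex baseline behind the proposed solver is the standard FISTA of Beck and Teboulle for l1-regularized least squares. The sketch below is that generic textbook algorithm with made-up data, not the dissertation's nonconvex variant.

```python
import numpy as np

def fista(A, b, lam, iters=200):
    """FISTA for min_x 0.5 ||A x - b||^2 + lam ||x||_1 (Beck & Teboulle)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - b)              # gradient step at the momentum point
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))             # compressed-sensing style system
x_true = np.zeros(100)
x_true[:5] = 3.0                           # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = fista(A, b, lam=0.1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```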
Agarwal, Richa. "Composite very large-scale neighborhood structure for the vehicle-routing problem." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001111.
Figueras, Anthony L. "A hierarchical approach for solving the large-scale traveling salesman problem." FIU Digital Commons, 1994. https://digitalcommons.fiu.edu/etd/3321.
Silva, Carla Taviane Lucke da. "Otimização de processos acoplados: programação da produção e corte de estoque." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13022009-102119/.
In many manufacturing industries (e.g., paper, furniture, steel, textile), lot-sizing decisions generally arise together with other production planning decisions, such as distribution, cutting, scheduling and others. Usually, however, these decisions are dealt with separately, which reduces the solution space, breaks dependencies between decisions, and increases the total costs. In this thesis, we study the production process that arises in small-scale furniture industries, which consists basically of cutting large plates available in stock into several thicknesses to obtain the different types of pieces required to manufacture lots of ordered products. The cutting and drilling machines are potential bottlenecks and their capacities have to be taken into account. The lot-sizing and cutting stock problems are coupled with each other in a large-scale linear integer optimization model, whose objective function consists of simultaneously minimizing different costs: production, inventory, raw material waste and setup costs. The proposed model captures the tradeoff between making inventory and reducing losses. The impact of the uncertainty of the demand (composed of ordered and forecast products) was smoothed down by a rolling-horizon strategy and by new decision variables that represent extra production to meet forecast demand at the best moment, aiming at total cost minimization. Two heuristic methods are proposed to solve relaxations of the mathematical model. Randomly generated instances based on real-world data were used in the computational experiments for empirical analyses of the model and the proposed solution methods.
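The cutting-stock half of such a coupled model is classically attacked by Gilmore-Gomory column generation, in which the pricing step is an unbounded knapsack over piece widths. The sketch below shows only that generic pricing step with invented widths and dual prices; it illustrates the standard technique, not this thesis's model. A generated pattern enters the master LP when its dual value exceeds 1, i.e., when its reduced cost is negative.

```python
# Column-generation pricing for cutting stock: given dual prices on the
# piece demands, find the cutting pattern of maximum dual value that
# fits on one plate (an unbounded knapsack over piece widths).
def price_pattern(widths, duals, plate_width):
    best = [0.0] * (plate_width + 1)     # best[w] = max dual value in width w
    choice = [None] * (plate_width + 1)
    for w in range(1, plate_width + 1):
        for i, (wi, yi) in enumerate(zip(widths, duals)):
            if wi <= w and best[w - wi] + yi > best[w]:
                best[w] = best[w - wi] + yi
                choice[w] = i
    pattern, w = [0] * len(widths), plate_width
    while choice[w] is not None:         # recover how many of each piece to cut
        pattern[choice[w]] += 1
        w -= widths[choice[w]]
    return pattern, best[plate_width]    # enters the master LP if value > 1

widths = [45, 36, 31, 14]                # piece widths, plate width 100
duals = [0.5, 0.4, 0.35, 0.15]           # duals from the master LP (made up)
print(price_pattern(widths, duals, 100))
```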
Hellman, Fredrik. "Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems." Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-130013.
This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: First, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are preferable. Second, how the performance of the considered solution methods scales with network size. Third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to answer these questions is made through a series of numerical experiments.
The traffic system is modeled using Wardrop's principle for user behavior and separable cost functions of BPR and TU71 type. Elastic demand is also considered for some problem instances.
Two previously developed approaches are considered: implicit programming and a cutting constraint algorithm. For the implicit programming approach, several methods, both local and global, are applied, and for the traffic assignment problem an implementation of the disaggregate simplicial decomposition (DSD) method is used. Regarding the first question concerning local and global methods, our results do not give a clear answer.
The results from numerical experiments with both approaches on networks of different sizes show that the implicit programming approach has the potential to solve large-scale problems, while the cutting constraint algorithm scales worse with network size.
Also for the stochastic extension of the network design problem, the numerical experiments indicate that implicit programming is a good approach to the problem.
Further, a number of theorems providing sufficient conditions for strong regularity of the traffic assignment solution mapping for OD connectors and BPR cost functions are given.
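Wardrop's principle combined with BPR cost functions (used throughout the experiments above) can be illustrated on the smallest possible instance: two parallel routes with fixed demand, where at equilibrium both used routes have equal travel times. The sketch below (our own toy example, not from the thesis) finds the equilibrium split by bisection.

```python
# Two parallel routes, fixed demand; BPR travel time t(v) = t0*(1 + 0.15*(v/c)^4).
# At a Wardrop equilibrium with both routes used, travel times are equal.
def bpr(t0, cap):
    return lambda v: t0 * (1.0 + 0.15 * (v / cap) ** 4)

t1, t2 = bpr(10.0, 100.0), bpr(20.0, 150.0)
demand = 200.0

lo, hi = 0.0, demand            # flow on route 1
for _ in range(60):             # bisect: t1(x) - t2(d - x) is increasing in x
    mid = 0.5 * (lo + hi)
    if t1(mid) > t2(demand - mid):
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)
print(f"route flows: {x:.2f}, {demand - x:.2f}; "
      f"times: {t1(x):.3f}, {t2(demand - x):.3f}")
```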
Bredström, David. "Models and solution methods for large-scale industrial mixed integer programming problems /." Linköping : Division of Optimization, Department of Mathematics, Linköpings universitet, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1071s.pdf.
Sharkawy, Mohamed Hassan Al. "Iterative multi-region technique for the analysis of large scale electromagnetic problems /." Full text available from ProQuest UM Digital Dissertations, 2006. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1394652571&SrchMode=1&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1216839065&clientId=22256.