
Dissertations / Theses on the topic 'Large Scale Probleme'

Consult the top 50 dissertations / theses for your research on the topic 'Large Scale Probleme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Falter, Daniela [Verfasser], and Bruno [Akademischer Betreuer] Merz. "A novel approach for large-scale flood risk assessments : continuous and long-term simulation of the full flood risk chain / Daniela Falter ; Betreuer: Bruno Merz." Potsdam : Universität Potsdam, 2016. http://d-nb.info/1218400412/34.

2

Brunner, Carl. "Pairwise Classification and Pairwise Support Vector Machines." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-87820.

Abstract:
Several modifications have been suggested to extend binary classifiers to multiclass classification, for instance the One Against All technique, the One Against One technique, or Directed Acyclic Graphs. A recent approach for multiclass classification is pairwise classification, which relies on two input examples instead of one and predicts whether the two input examples belong to the same class or to different classes. A Support Vector Machine (SVM) that is able to handle pairwise classification tasks is called a pairwise SVM. A common pairwise classification task is face recognition. In this area, a set of images is given for training and another set of images is given for testing. Often, one is interested in the interclass setting, which means that no person represented by an image in the training set is represented by any image in the test set. Of the multiclass classification techniques mentioned, only pairwise classification provides meaningful results in the interclass setting. For a pairwise classifier, the order of the two examples should not influence the classification result. A common approach to enforce this symmetry is the use of selected kernels. Relations between such kernels and certain projections are provided, and it is shown that those projections can lead to a loss of information. For pairwise SVMs, another approach for enforcing symmetry is the symmetrization of the training sets; in other words, if the pair (a,b) of examples is a training pair, then (b,a) is a training pair, too. It is proven that both approaches lead to the same decision function for selected parameters. Empirical tests show that the approach using selected kernels is three to four times faster. For good interclass generalization, pairwise SVMs need training sets with several million training pairs. A technique is presented which further speeds up the training time of pairwise SVMs by a factor of up to 130 and thus enables learning from training sets with several million pairs. Another element affecting time is the need to select several parameters: even with the applied speed-up techniques, a grid search over the set of parameters would be very expensive. Therefore, a model selection technique is introduced that is much less computationally expensive. In machine learning, the training set and the test set are created by some data generating process. Several pairwise data generating processes are derived from a given non-pairwise data generating process, and the advantages and disadvantages of the different pairwise data generating processes are evaluated. Pairwise Bayes' classifiers are introduced and their properties are discussed. It is shown that pairwise Bayes' classifiers for interclass generalization tasks can differ from pairwise Bayes' classifiers for interexample generalization tasks. In face recognition, the interexample task implies that each person who is represented by an image in the test set is also represented by at least one image in the training set; moreover, the set of images of the training set and the set of images of the test set are disjoint. Pairwise SVMs are applied to four synthetic and two real-world datasets. One of the real-world datasets is the Labeled Faces in the Wild (LFW) database, while the other is provided by Cognitec Systems GmbH.
Empirical evidence for the presented model selection heuristic, for the discussion about the loss of information, and for the provided speed-up techniques is given using the synthetic databases, and it is shown that pairwise SVM classifiers reach a quality similar to that of pairwise Bayes' classifiers. Additionally, a pairwise classifier is identified for the LFW database which leads to an average equal error rate (EER) of 0.0947 with a standard error of the mean (SEM) of 0.0057. This result is better than that of the current state-of-the-art classifier, namely the combined probabilistic linear discriminant analysis classifier, which leads to an average EER of 0.0993 and an SEM of 0.0051.
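As a small illustration of the kernel-based symmetrization discussed in this abstract, the sketch below builds an order-invariant pairwise kernel from an ordinary example kernel. This construction is a common one in the pairwise-kernel literature and is an assumed, simplified stand-in for the kernels studied in the thesis.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Ordinary (non-pairwise) RBF kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def pairwise_kernel(pair1, pair2, k=rbf):
    """Symmetric pairwise kernel: summing over both matchings makes the
    value invariant to swapping the two examples inside either pair."""
    a, b = pair1
    c, d = pair2
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Quick check of the symmetry property on random data.
rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=(4, 5))
assert np.isclose(pairwise_kernel((a, b), (c, d)),
                  pairwise_kernel((b, a), (c, d)))
```

Swapping the examples inside either pair leaves the kernel value unchanged, which is exactly the symmetry requirement stated in the abstract.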
3

Brinkel, Johanna [Verfasser]. "A user-centred evaluation of a mobile phone-based interactive voice response system to support infectious disease surveillance and access to healthcare for sick children in Ghana: users’ experiences, challenges and opportunities for large-scale application. Part of a concept and pilot study for mobile phone-based Electronic Health Information and Surveillance System (eHISS) for Africa / Johanna Brinkel." Bielefeld : Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1204561826/34.

4

Tran, Van-Hoai. "Solving large scale crew pairing problems." [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975292714.

5

Grigoleit, Mark Ted. "Optimisation of large scale network problems." Curtin University of Technology, Department of Mathematics and Statistics, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=115092.

Abstract:
The Constrained Shortest Path Problem (CSPP) consists of finding the shortest path in a graph or network that satisfies one or more resource constraints. Without these constraints, the shortest path problem can be solved in polynomial time; with them, the CSPP is NP-hard and thus far no polynomial-time algorithms exist for solving it optimally. The problem arises in a number of practical situations. In the case of vehicle path planning, the vehicle may be an aircraft flying through a region with obstacles such as mountains or radar detectors, with an upper bound on the fuel consumption, the travel time or the risk of attack. The vehicle may be a submarine travelling through a region with sonar detectors, with a time or risk budget. These problems all involve a network which is a discrete model of the physical domain. Another example would be the routing of voice and data information in a communications network such as a mobile phone network, where the constraints may include maximum call delays or relay node capacities. This is a problem of current economic importance, and one for which time-sensitive solutions are not always available, especially if the networks are large. We consider the simplest form of the problem, large grid networks with a single side constraint, which have been studied in the literature. This thesis explores the application of Constraint Programming combined with Lagrange Relaxation to achieve optimal or near-optimal solutions of the CSPP. The following is a brief outline of the contribution of this thesis. Lagrange Relaxation may or may not achieve optimal or near-optimal results on its own. Often, large duality gaps are present. We make a simple modification to Dijkstra’s algorithm that does not involve any additional computational work in order to generate an estimate of path time at every node.
We then use this information to constrain the network along a bisecting meridian. The combination of Lagrange Relaxation (LR) and a heuristic for filtering along the meridian provides an aggressive method for finding near-optimal solutions in a short time. Two network problems are studied in this work. The first is a Submarine Transit Path problem in which the transit field contains four sonar detectors at known locations, each with the same detection profile. The side constraint is the total transit time, with the submarine capable of 2 speeds. For the single-speed case, the initial LR duality gap may be as high as 30%. The first hybrid method uses a single centre meridian to constrain the network based on the unused time resource, and is able to produce solutions that are generally within 1% of optimal and always below 3%. Using the computation time for the initial Lagrange Relaxation as a baseline, the average computation time for the first hybrid method is about 30% to 50% higher, and the worst case CPU times are 2 to 4 times higher. The second problem is a random valued network from the literature. Edge costs, times, and lengths are uniform, randomly generated integers in a given range. Since the values given in the literature problems do not yield problems with a high duality gap, the values are varied and, from a population of approximately 100,000 problems, only the worst 200 from each set are chosen for study. These problems have an initial LR duality gap as high as 40%. A second hybrid method is developed, using values for the unused time resource and the lower bound values computed by Dijkstra’s algorithm as part of the LR method. The computed values are then used to position multiple constraining meridians in order to allow LR to find better solutions.
This second hybrid method is able to produce solutions that are generally within 0.1% of optimal, with computation times that are on average 2 times the initial Lagrange Relaxation time, and in the worst case only about 5 times higher. The best method for solving the Constrained Shortest Path Problem reported in the literature thus far is the LRE-A method of Carlyle et al. (2007), which uses Lagrange Relaxation for preprocessing followed by a bounded search using aggregate constraints. We replace Lagrange Relaxation with the second hybrid method and show that optimal solutions are produced for both network problems with computation times that are between one and two orders of magnitude faster than LRE-A. In addition, these hybrid methods combined with the bounded search are up to 2 orders of magnitude faster than the commercial CPlex package using a straightforward MILP formulation of the problem. Finally, the second hybrid method is used as a preprocessing step on both network problems, prior to running CPlex. This preprocessing reduces the network size sufficiently to allow CPlex to solve all cases to optimality up to 3 orders of magnitude faster than without this preprocessing, and up to an order of magnitude faster than using Lagrange Relaxation for preprocessing. Chapter 1 provides a review of the thesis and some terminology used. Chapter 2 reviews previous approaches to the CSPP, in particular the two current best methods. Chapter 3 applies Lagrange Relaxation to the Submarine Transit Path problem with 2 speeds, to provide a baseline for comparison. The problem is reduced to a single speed, which demonstrates the large duality gap problem possible with Lagrange Relaxation, and the first hybrid method is introduced.
Chapter 4 examines a grid network problem using randomly generated edge costs and weights, and introduces the second hybrid method. Chapter 5 then applies the second hybrid method to both network problems as a preprocessing step, using both CPlex and a bounded search method from the literature to solve to optimality. The conclusion of this thesis and directions for future work are discussed in Chapter 6.
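The core Lagrangian-relaxation step that this abstract builds on, dualising the single side constraint and repeatedly solving an unconstrained shortest path, can be sketched as follows. The bisection on the multiplier and the toy graph format are assumptions for illustration, and the thesis's meridian-filtering hybrids are deliberately omitted.

```python
import heapq

def lagrangian_dijkstra(adj, source, target, lam):
    """Shortest path w.r.t. the Lagrangian edge weight cost + lam * time.
    adj: {u: [(v, cost, time), ...]}. Returns (weight, cost, time, path),
    assuming target is reachable from source."""
    best = {source: (0.0, 0.0, 0.0, [source])}
    pq = [(0.0, source)]
    done = set()
    while pq:
        w, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return best[u]
        for v, c, t in adj.get(u, ()):
            nw = w + c + lam * t
            if v not in best or nw < best[v][0]:
                _, uc, ut, up = best[u]
                best[v] = (nw, uc + c, ut + t, up + [v])
                heapq.heappush(pq, (nw, v))
    raise ValueError("target unreachable")

def lagrangian_csp(adj, source, target, time_budget, iters=50):
    """Bisection on the multiplier of the dualised time constraint;
    returns the best feasible (cost, path) found, or None."""
    lo, hi, best = 0.0, 1.0, None
    while lagrangian_dijkstra(adj, source, target, hi)[2] > time_budget:
        hi *= 2.0
        if hi > 1e9:          # even heavy penalties cannot meet the budget
            return None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, cost, time, path = lagrangian_dijkstra(adj, source, target, lam)
        if time <= time_budget:
            hi = lam          # feasible: candidate solution, relax penalty
            if best is None or cost < best[0]:
                best = (cost, path)
        else:
            lo = lam          # infeasible: penalise time more heavily
    return best

# Tiny example: s -> t directly (cheap but slow) or via m (fast but dear).
adj = {"s": [("t", 1.0, 10.0), ("m", 3.0, 2.0)], "m": [("t", 3.0, 2.0)]}
print(lagrangian_csp(adj, "s", "t", time_budget=5.0))  # (6.0, ['s','m','t'])
```

Because of the duality gap the abstract mentions, this plain LR loop returns a good feasible path but not necessarily the optimum; closing that gap is what the thesis's hybrid methods address.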
6

Shim, Sangho. "Large scale group network optimization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31737.

Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Ellis L. Johnson; Committee Member: Brady Hunsaker; Committee Member: George Nemhauser; Committee Member: Jozef Siran; Committee Member: Shabbir Ahmed; Committee Member: William Cook. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Bulin, Johannes. "Large-scale time parallelization for molecular dynamics problems." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129301.

Abstract:
As modern supercomputers draw their power from the sheer number of cores, an efficient parallelization of programs is crucial for achieving good performance. When one tries to solve differential equations in parallel, this is usually done by parallelizing the computation of one single time step. As the speedup of such parallelization schemes is usually limited, e.g. by the spatial size of the problem, additional parallelization in time may be useful to achieve better scalability. This thesis introduces two well-known schemes for time-parallelization, namely the waveform relaxation method and the parareal algorithm. These methods are then applied to a molecular dynamics problem, which is a useful test example as the number of required time steps is high while the number of unknowns is relatively low. Afterwards, it is investigated how these methods can be adapted to large-scale computations.
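The parareal algorithm mentioned in this abstract admits a compact sketch: a cheap coarse propagator sweeps sequentially, while the accurate fine propagator (the part that would run in parallel across time slices) enters a predictor-corrector update. The function names and the serial evaluation of the fine solver here are illustrative assumptions only.

```python
import numpy as np

def parareal(coarse, fine, u0, t0, t1, n_slices, n_iter):
    """Parareal iteration for du/dt = f(u) on [t0, t1].
    coarse/fine: propagators mapping u(t_a) to u(t_b); the list
    comprehension over `fine` is the step that parallelizes."""
    ts = np.linspace(t0, t1, n_slices + 1)
    u = [np.asarray(u0, dtype=float)]
    for n in range(n_slices):                  # initial coarse sweep
        u.append(coarse(u[n], ts[n], ts[n + 1]))
    for _ in range(n_iter):
        f_vals = [fine(u[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        new = [u[0]]
        for n in range(n_slices):              # sequential correction
            new.append(coarse(new[n], ts[n], ts[n + 1])
                       + f_vals[n] - coarse(u[n], ts[n], ts[n + 1]))
        u = new
    return u                                   # solution at each slice end

# Toy check on du/dt = -u: coarse = one Euler step, fine = many.
def euler(u, ta, tb, steps):
    dt = (tb - ta) / steps
    for _ in range(steps):
        u = u + dt * (-u)
    return u

sol = parareal(lambda u, a, b: euler(u, a, b, 1),
               lambda u, a, b: euler(u, a, b, 100),
               u0=1.0, t0=0.0, t1=2.0, n_slices=10, n_iter=3)
print(sol[-1], np.exp(-2.0))                   # parareal vs exact solution
```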
8

Bacarella, Daniele. "Distributed clustering algorithm for large scale clustering problems." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212089.

Abstract:
Clustering is a task which has received much attention in data mining. The task of finding subsets of objects sharing some sort of common attributes is applied in various fields such as biology, medicine, business and computer science. A document search engine, for instance, takes advantage of the information obtained by clustering the document database to return results with information relevant to the query. Two main factors that make clustering a challenging task are the size of the dataset and the dimensionality of the objects to cluster. Sometimes the character of the object makes it difficult to identify its attributes. This is the case for image clustering. A common approach is comparing two images using their visual features, like the colors or shapes they contain. However, images sometimes come along with textual information that claims to be sufficiently descriptive of the content (e.g. tags on web images). The purpose of this thesis work is to propose a text-based image clustering algorithm through the combined application of two techniques, namely MinHash Locality Sensitive Hashing (MinHash LSH) and Frequent Itemset Mining.
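As a rough sketch of the MinHash half of that combination (the LSH banding that groups similar signatures into buckets is omitted), tag sets can be compared through short signatures as below. The salted use of Python's built-in hash as a stand-in for random permutations is an assumption for illustration.

```python
import random

def minhash_signature(tags, num_hashes=64, seed=7):
    """MinHash signature of a tag set; for ideal random hash functions,
    two sets agree in each signature position with probability equal
    to their Jaccard similarity."""
    rnd = random.Random(seed)
    # One salted hash per position stands in for a random permutation.
    salts = [rnd.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, tag)) for tag in tags) for salt in salts]

def estimated_jaccard(sig1, sig2):
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

s1 = minhash_signature({"beach", "sea", "sunset", "sand"})
s2 = minhash_signature({"beach", "sea", "sunset", "palm"})
print(estimated_jaccard(s1, s2))   # close to the true Jaccard 3/5
```

Signatures of similar tag sets agree in roughly a Jaccard-similarity fraction of positions, which is what makes bucketing by signature bands effective at large scale.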
9

Futamura, Natsuhiko. "Algorithms for large-scale problems in computational biology." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2002. http://wwwlib.umi.com/cr/syr/main.

10

Sohrabi, Babak. "Solving large scale distribution problems using heuristic algorithms." Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369654.

11

Shylo, Oleg V. "New tools for large-scale combinatorial optimization problems." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024719.

12

Rosas, José Humberto Ablanedo. "Algorithms for very large scale set covering problems /." Full text available from ProQuest UM Digital Dissertations, 2007. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1609001671&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1244747021&clientId=22256.

13

Cho, Taewon. "Computational Advancements for Solving Large-scale Inverse Problems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103772.

Abstract:
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems and medical imaging problems, such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimates by incorporating prior information about the unknowns via Bayesian inference. However, by combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimates in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimates can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification.
Doctor of Philosophy
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems such as greenhouse gas tracking, where the problem of estimating the amount of greenhouse gas added to or removed from the atmosphere becomes more difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites). Therefore, the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply the developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of the CO2 production of living organisms.
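The role the prior plays in compensating for missing measurements can be seen in a minimal example (not from the dissertation): with fewer measurements than unknowns, a zero-mean Gaussian prior turns the estimation into a standard Tikhonov-regularized least-squares problem. The dissertation's hybrid projection methods solve such problems iteratively on projected subspaces rather than via the dense normal equations of this toy sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 200                      # fewer measurements than unknowns
A = rng.normal(size=(m, n))         # forward operator
x_true = np.zeros(n)
x_true[::20] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)    # noisy data

lam = 0.1                           # regularization weight (prior precision)
# MAP estimate under a Gaussian prior x ~ N(0, lam^-2 I):
#   min_x ||A x - b||^2 + lam^2 ||x||^2
x_map = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
print(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```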
14

Wilkinson, Stephen James. "Aggregate formulations for large-scale process scheduling problems." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/7255.

15

Mitchell, David Riach. "Modelling environments for large scale process system problems." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/15408.

Abstract:
This thesis presents a novel modelling environment for large-scale process systems problems. Traditional modelling environments attempt to provide maximal functionality within a fixed modelling language. The intention of such systems is to provide the user with a complete package that requires no further development or coding on their part. This approach limits the user to the functionality provided within the package but requires little or no programming experience on the part of the user. The environment presented here provides sufficient capability for the user to describe the model in terms of a variable set and a set of methods with which to manipulate the variables. Many of these methods will describe equations, but there is no restriction limiting methods to representing equations. These methods can act as agents, linking the modelling environment to external systems such as physical property databanks and non-JFMS format models. Separating the description of the model from its processing allows the complexities to be dealt with in a full programming language (external functions are written in Fortran90 or C). The behaviour of the system is tailored by the user, the modelling environment existing solely to store the model structure and provide the interface layer between the external systems.
16

MacLeod, Donald James. "A parallel algorithm for large scale electronic structure calculations." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/17023.

17

Fu, Yuhong. "Rapid solution of large-scale three-dimensional micromechanics problems /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

18

Attia, Ahmed Mohamed Mohamed. "Advanced Sampling Methods for Solving Large-Scale Inverse Problems." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73683.

Abstract:
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The work therefore proceeds by developing computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions of the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable.
In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. Here, the Gaussian prior assumption in the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility for configuring data assimilation studies.
Ph. D.
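At the heart of the HMC machinery described in this abstract is a leapfrog proposal with a Metropolis accept/reject step. The minimal sketch below samples a generic log-posterior and is only a textbook illustration; the dissertation embeds this step inside ensemble filtering and smoothing, which is not shown here.

```python
import numpy as np

def hmc_step(x, log_post, grad_log_post, step=0.1, n_leapfrog=20, rng=None):
    """One HMC proposal: leapfrog integration of Hamiltonian dynamics
    followed by a Metropolis accept/reject; targets exp(log_post)."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape)                  # auxiliary momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(x_new)    # half kick
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                     # drift
        p_new += step * grad_log_post(x_new)      # full kick
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_post(x_new)    # final half kick
    h_old = -log_post(x) + 0.5 * p @ p
    h_new = -log_post(x_new) + 0.5 * p_new @ p_new
    accept = rng.uniform() < np.exp(min(0.0, h_old - h_new))
    return x_new if accept else x

# Toy target: standard 2-D Gaussian, log p(z) = -0.5 z'z (up to a constant).
x = np.zeros(2)
draws = []
for _ in range(2000):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z)
    draws.append(x)
print(np.mean(draws, axis=0))      # close to the true mean (0, 0)
```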
19

Liu, Yida. "SECOUT: Parallel Secure Outsourcing of Large-scale Optimization Problems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586271073206561.

20

Dimitriadis, Andreas Dimitriou. "Algorithms for the solution of large-scale scheduling problems." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/8047.

Abstract:
Modern multipurpose plants play a key role within the overall current climate of business globalisation, aiming to produce highly diversified products that can address the needs and demands of customers spread over wide geographical areas. The inherent size and diversity of this process give rise to planning and scheduling problems for large-scale combined production and distribution operations. In recent years, it has become possible to model combined production and distribution processes mathematically to a relatively high degree of detail. This modelling usually results in optimisation problems involving both discrete and continuous decisions. Despite much progress in numerical solution algorithms, the size and complexity of problems of industrial interest often significantly exceed those that can be tackled directly using standard algorithms and codes. This thesis is, therefore, primarily concerned with algorithms that exploit the structure of the underlying mathematical formulations to permit the practical solution of such problems. The Resource-Task Network (RTN) process representation is a general framework that has been used successfully for modelling and solving relatively small process scheduling problems. This work identifies and addresses the limitations that arise when RTNs are used for modelling large-scale production planning and scheduling problems. A number of modifications are suggested in order to make the representation more efficient at capturing partial resource equivalence without losing any modelling detail. The length of the time horizon under consideration is a key factor affecting the complexity of the resulting scheduling problem. In view of this, this thesis presents two time-based decomposition approaches that attempt to solve the scheduling problem by considering only part of the time horizon in detail at any one step. The first time-based decomposition scheme is a rigorous algorithm that is guaranteed to derive optimal detailed schedules. The second scheme is a family of rolling horizon algorithms that can obtain good, albeit not necessarily optimal, detailed solutions to medium-term scheduling problems within reasonable computational times. The complexity of the process under consideration, and in particular the large numbers of interacting tasks and resources, is another factor that directly affects the difficulty of the resulting scheduling problem. Consequently, a task-based decomposition algorithm for complex RTNs is proposed, exploiting the fact that some tasks (e.g. those associated with transportation activities)
21

Henneman, Richard Lewis. "Human problem solving in complex hierarchical large scale systems." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/25432.

22

Sabbir, Tarikul Alam Khan. "Topology sensitive algorithms for large scale uncapacitated covering problem." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2011, 2011. http://hdl.handle.net/10133/3235.

Abstract:
Solving NP-hard facility location problems is a common scenario in wireless network planning. In our research, we study the Covering problem, a well-known facility location problem with applications in wireless network deployment. We focus on networks with a sparse structure. First, we analyze two heuristics for building a Tree Decomposition, based on vertex separators and perfect elimination orders. We extend the vertex separator heuristic to improve its time performance. Second, we propose a dynamic programming algorithm based on the Tree Decomposition to solve the Covering problem optimally on the network. We develop several heuristic techniques to speed up the algorithm. Experimental results show that one variant of the dynamic programming algorithm surpasses the performance of state-of-the-art commercial mathematical optimization software on several occasions.
ix, 89 leaves : ill. ; 29 cm
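The dynamic programming flavour referred to in this abstract can be illustrated in a drastically simplified setting: a tree (i.e. treewidth 1) and the minimum vertex cover variant of covering, rather than the thesis's full tree-decomposition algorithm. Each vertex keeps one table entry per local state, and tables are combined bottom-up.

```python
import sys

def min_vertex_cover_tree(adj, root=0):
    """Bottom-up DP over a tree: for each vertex u, compute the best
    cover size when u is taken into the cover and when it is skipped.
    The treewidth-k generalisation keeps one entry per bag state."""
    sys.setrecursionlimit(100000)

    def solve(u, parent):
        take, skip = 1, 0                  # u in cover / u not in cover
        for v in adj[u]:
            if v == parent:
                continue
            t, s = solve(v, u)
            take += min(t, s)              # child is free to choose
            skip += t                      # edge (u, v) must be covered by v
        return take, skip

    return min(solve(root, -1))

# Path 0-1-2-3: an optimal cover is {1, 2}, of size 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(min_vertex_cover_tree(adj))          # -> 2
```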
23

Yost, Kirk A. "Solution of large-scale allocation problems with partially observable outcomes." Diss., Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355529.

Abstract:
Dissertation (Ph.D. in Operations Research) Naval Postgraduate School, September 1998.
"September 1998." Dissertation supervisor(s): Alan R. Washburn. Includes bibliographical references (p. 165-160). Also Available online.
24

Bängtsson, Erik. "Robust preconditioned iterative solution methods for large-scale nonsymmetric problems /." Uppsala : Department of Information Technology, Uppsala University, 2005. http://www.it.uu.se/research/reports/lic/2005-006/.

25

Jha, Krishna Chandra. "Very large-scale neighborhood search heuristics for combinatorial optimization problems." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0004352.

26

Trukhanov, Svyatoslav. "Novel approaches for solving large-scale optimization problems on graphs." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2986.

27

Mišić, Velibor V. "Data, models and decisions for large-scale stochastic optimization problems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105003.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 204-209).
Modern business decisions exceed human decision-making ability: often, they are of a large scale, their outcomes are uncertain, and they are made in multiple stages. At the same time, firms have increasing access to data and models. Faced with such complex decisions and increasing access to data and models, how do we transform data and models into effective decisions? In this thesis, we address this question in the context of four important problems: the dynamic control of large-scale stochastic systems, the design of product lines under uncertainty, the selection of an assortment from historical transaction data, and the design of a personalized assortment policy from data. In the first chapter, we propose a new solution method for a general class of Markov decision processes (MDPs) called decomposable MDPs. We propose a novel linear optimization formulation that exploits the decomposable nature of the problem data to obtain a heuristic for the true problem. We show that the formulation is theoretically stronger than alternative proposals and provide numerical evidence for its strength in multi-armed bandit problems. In the second chapter, we consider how to make strategic product line decisions under uncertainty in the underlying choice model. We propose a method based on robust optimization for addressing both parameter uncertainty and structural uncertainty. Using a real conjoint data set, we show the benefits of our approach over the traditional approach that assumes both the model structure and the model parameters are known precisely. In the third chapter, we propose a new two-step method for transforming limited customer transaction data into effective assortment decisions. The approach involves estimating a ranking-based choice model by solving a large-scale linear optimization problem, and solving a mixed-integer optimization problem to obtain a decision. Using synthetic data, we show that the approach is scalable and leads to accurate predictions and effective decisions that outperform alternative parametric and non-parametric approaches. In the last chapter, we consider how to leverage auxiliary customer data to make personalized assortment decisions. We develop a simple method based on recursive partitioning that segments customers using their attributes and show that it improves on a "uniform" approach that ignores auxiliary customer information.
by Velibor V. Mišić.
Ph. D.
28

Becker, Adrian Bernard Druke. "Decomposition methods for large scale stochastic and robust optimization problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68969.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 107-112).
We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real-world applications. We introduce a new type of Markov decision problem, named the Generalized Restless Bandits Problem, that encompasses a broad generalization of the restless bandit problem. For this class of stochastic optimization problems, we develop a nested policy heuristic which iteratively solves a series of sub-problems operating on smaller bandit systems. We also develop linear-optimization-based bounds for the Generalized Restless Bandit problem and demonstrate promising computational performance of the nested policy heuristic on a large-scale real-world application of search term selection for sponsored search advertising. We further study the distributionally robust optimization problem with known mean, covariance and support. These optimization models are attractive in their real-world applications as they require the model consumer to rely only on those statistics of uncertainty that are known with relative confidence, rather than making arbitrary assumptions about the exact dynamics of the underlying distribution of uncertainty. The problem is known to be NP-hard, and current approaches invoke tractable but often weak relaxations for real-world applications. We develop a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty and provides a sequence of bounds on the value of the derived policy. In the development of this method, we prove that non-convex quadratic optimization in n dimensions over a box in two dimensions is efficiently solvable. We also show that this same decomposition method yields a promising heuristic for the MAXCUT problem. We then provide promising computational results in the context of a real-world fixed income portfolio optimization problem. The decomposition methods developed in this thesis recursively derive sub-policies on projected dimensions of the master problem. These sub-policies are optimal on relaxations which admit "tight" projections of the master problem; that is, the projection of the feasible region for the relaxation is equivalent to the projection of that of the master problem along the dimensions of the sub-policy. Additionally, these decomposition strategies provide a hierarchical solution structure that aids in solving large-scale problems.
by Adrian Bernard Druke Becker.
Ph.D.
29

Chen, Yujie. "Optimisation for large-scale maintenance, scheduling and vehicle routing problems." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/16107/.

Abstract:
Solving real-world combinatorial problems arises in many industry fields as a way to minimise operational cost, to maximise profit, or both. Along with continuous growth in computing power, many asset management decision-making processes that were originally carried out by hand now tend to be based on big data analysis. Larger-scale problems can be solved and more detailed operation instructions can be delivered. In this thesis, we investigate models and algorithms to solve large-scale Geographically Distributed asset Maintenance Problems (GDMP). Our study of the problem was motivated by our business partner, Gaist Solutions Ltd., to optimise the scheduling of maintenance actions for a drainage system in an urban area. The models and solution methods proposed in the thesis can be applied to many similar issues arising in other industry fields. The thesis contains three parts. We first build a risk-driven model that considers vehicle routing problems and asset degradation information. A hyperheuristic method embedded with customised low-level heuristics is employed to solve our real-world drainage maintenance problem in Blackpool (a sketch of the general hyperheuristic loop is given after this abstract). Computational results show that our hyperheuristic approach can, within reasonable CPU time, produce much higher quality solutions than the scheduling strategy currently implemented by Blackpool council. We then attempt to develop more efficient solution approaches to tackle our GDMP. We study various hyperheuristics and propose efficient local search strategies in part II. We present computational results on standard periodic vehicle routing problem instances and our GDMP instances. Based on extensive experimental evidence, we summarise the principles of designing heuristic-based solution approaches to solve combinatorial problems. Last but not least, we investigate a related decision-making problem from highway maintenance, which is again of interest to Gaist Solutions Ltd. We aim to make a strategic decision on a cost-effective method of delivering road inspection at a national scale. We build the analysis on the Chinese Postman Problem and theoretically prove the modelling feasibility in real-world road inspection situations. We also propose a novel graph reduction process to allow effective computation over very large data sets.
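The select-apply-accept loop of a simple selection hyperheuristic, of the general kind this abstract describes, can be sketched on a toy problem as follows. The roulette-wheel scoring, the acceptance rule, and the bit-flip low-level heuristics are illustrative assumptions, not the thesis's customised heuristics.

```python
import random

def hyperheuristic(objective, x0, low_level_heuristics, iters=1000, seed=0):
    """Selection hyperheuristic: pick a low-level heuristic by roulette
    wheel over past success, apply it, and accept non-worsening moves."""
    rnd = random.Random(seed)
    x, fx = x0, objective(x0)
    scores = [1.0] * len(low_level_heuristics)
    for _ in range(iters):
        i = rnd.choices(range(len(low_level_heuristics)), scores)[0]
        y = low_level_heuristics[i](x, rnd)
        fy = objective(y)
        if fy <= fx:                       # accept non-worsening moves
            x, fx = y, fy
            scores[i] += 1.0               # reward the chosen heuristic
    return x, fx

# Toy problem: minimise the number of 1-bits in a 32-bit string.
flip_one = lambda x, r: x ^ (1 << r.randrange(32))
flip_two = lambda x, r: x ^ (1 << r.randrange(32)) ^ (1 << r.randrange(32))
best, val = hyperheuristic(lambda x: bin(x).count("1"),
                           x0=(1 << 32) - 1,
                           low_level_heuristics=[flip_one, flip_two])
print(val)   # typically 0
```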
30

Liu, Chiun-Ming. "Special versus standard algorithms for large-scale harvest scheduling problems." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/43047.

Abstract:

This thesis is concerned with structure exploitation and the design of algorithms for solving large-scale harvest scheduling problems. We discover that the harvest scheduling problem involving area constraints possesses a network structure. In Model I-Form 1, the network constraints form a separable block diagonal structure, which permits one to solve for the decision variables belonging to each individual area constraint as independent knapsack problems. In Model II-Form 1, the network constraints constitute a longest path problem, and a Longest Path Algorithm is developed to solve this problem in closed form. The computational time for this scheme is greatly reduced over that for the revised simplex method. The Dantzig-Wolfe algorithm is coded and tuned to solve general Model II problems, taking advantage of the Longest Path Algorithm in the subproblem step and using the revised simplex method to solve the master problems. Computational results show that the algorithm solves problems to within one percent accuracy far more efficiently than the revised simplex method using MPS III. Both the CPU time and the number of iterations for the Dantzig-Wolfe algorithm are less than those for MPS III, depending on the problem size. Results also suggest that the Dantzig-Wolfe algorithm makes rapid advances in the initial iterations but has a slow convergence rate in the final iterations. A Primal-Dual Conjugate Subgradient Algorithm is also coded and tuned to solve general Model II problems. Results show that the computational effort is greatly affected by the number of side constraints. If the number of side constraints is restricted, the Primal-Dual Conjugate Subgradient Algorithm can give a more efficient algorithm for solving harvest scheduling problems. Overall, from a storage requirement viewpoint, the Primal-Dual Conjugate Subgradient Algorithm is best, followed by the Dantzig-Wolfe algorithm and then the revised simplex method. From a computational efficiency viewpoint, if the optimality criterion is suitably selected, the Dantzig-Wolfe algorithm is best, provided that the number of side constraints is not too large, followed by the revised simplex method and then the Primal-Dual Conjugate Subgradient Algorithm.


Master of Science
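The longest path structure mentioned in this abstract is easy to exploit because the underlying network is acyclic: one dynamic-programming pass in topological order suffices. The sketch below shows that standard DP; the toy state graph is an assumption for illustration and is not the thesis's Model II formulation.

```python
def longest_path(nodes, edges, source):
    """Longest-path DP on a DAG, scanning nodes in topological order.
    edges: {u: [(v, weight), ...]}; returns distances from source."""
    dist = {u: float("-inf") for u in nodes}
    dist[source] = 0.0
    for u in nodes:                       # topological order assumed
        if dist[u] == float("-inf"):
            continue                      # unreachable so far
        for v, w in edges.get(u, ()):
            dist[v] = max(dist[v], dist[u] + w)
    return dist

# Tiny example: two routes from s to t through intermediate states.
nodes = ["s", "a", "b", "t"]
edges = {"s": [("a", 3.0), ("b", 5.0)],
         "a": [("t", 4.0)],
         "b": [("t", 1.0)]}
print(longest_path(nodes, edges, "s")["t"])   # -> 7.0 (via a)
```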
31

Lindell, Hugo. "Methods for optimizing large scale thermal imaging camera placement problems." Thesis, Linköpings universitet, Optimeringslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161946.

Abstract:
The objective of this thesis is to model and solve the problem of placing thermal imaging cameras for monitoring piles of combustible bio-fuels. The cameras, of different models, can be mounted at discrete heights on poles at fixed positions and at discrete angles, and one seeks camera model and mounting combinations that monitor as much of the piles as possible at as low a cost as possible. Since monitoring all piles may not be possible or desired, due to budget or customer constraints, the solution to the problem is a set of compromises between coverage and cost. We call such a set of compromises a frontier. In the first part of the thesis, a way of modelling the problem is presented. The model uses a discrete formulation in which the area to monitor is partitioned into a grid of cells. Further, a pool of candidate camera placements is formed, containing all combinations of camera models and mounting positions. For each camera in this pool, all monitored cells are deduced using ray-casting. Finally, an optimization model is formulated, based on the pool of candidate cameras and their monitoring of the grid. The optimization model has the two objectives of minimizing the cost while maximizing the number of covered cells. In the second part, a number of heuristic optimization algorithms to solve the problem are presented: Greedy Search, Random Greedy Search, Fear Search, Unique Search, Meta-RaPS and Weighted Linear Neighbourhood Search. The performance of these heuristics is evaluated on a couple of test cases from existing real-world depots and a few artificial test instances. Evaluation is made by comparing the solution frontiers using various result metrics and graphs. Whenever practically possible, frontiers containing all optimal cost and coverage combinations are calculated using a state-of-the-art solver. Our findings indicate that for the artificial test instances, the state-of-the-art solver is unmatched in solution quality and uses execution times similar to the heuristics. Among the heuristics, Fear Search and Greedy Search were the strongest performers. For the smaller real-world instances, the state-of-the-art solver was still unmatched in terms of solution quality, but generating the frontiers in this way was fairly time consuming. By generating the frontiers using Greedy Search or Random Greedy Search, we obtained solutions of quality similar to the state-of-the-art solver up to 70-80% coverage, using one hundredth and one tenth of the time, respectively. For the larger real-world problem instances, generating the frontier using the state-of-the-art solver was extremely time consuming and thus sometimes impracticable. Hence the use of heuristics is often necessary. As for the smaller instances, Greedy Search and Random Greedy Search generated the frontiers with the best quality. Often even better full-coverage solutions could be found by the more time-consuming Fear Search or Unique Search.
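A minimal sketch of the Greedy Search idea evaluated in this thesis is shown below: repeatedly pick the candidate camera with the best ratio of newly covered cells to cost until the budget runs out. The candidate format and the cost-benefit rule are assumptions for illustration; sweeping the budget and recording the resulting (cost, coverage) pairs traces out an approximate frontier.

```python
def greedy_camera_selection(candidates, budget):
    """Greedy coverage: add the affordable candidate with the best
    (newly covered cells) / cost ratio until nothing helpful fits.
    candidates: list of (cost, set_of_covered_cells)."""
    covered, chosen, spent = set(), [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, (cost, cells) in enumerate(candidates):
            if i in chosen or spent + cost > budget:
                continue
            gain = len(cells - covered)          # only new cells count
            if gain and gain / cost > best_ratio:
                best, best_ratio = i, gain / cost
        if best is None:
            return chosen, covered
        cost, cells = candidates[best]
        chosen.append(best)
        covered |= cells
        spent += cost

cams = [(10.0, {1, 2, 3}), (4.0, {3, 4}), (6.0, {5, 6, 7, 8})]
print(greedy_camera_selection(cams, budget=12.0))   # ([2, 1], {3,...,8})
```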
32

Bängtsson, Erik. "Robust preconditioned iterative solution methods for large-scale nonsymmetric problems." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86353.

Full text
Abstract:
We study robust, preconditioned, iterative solution methods for large-scale linear systems of equations arising from different applications in geophysics and geotechnics. The first type of linear systems studied here, which are dense, arise from a boundary element type of discretization of crack propagation in brittle material. Numerical experiments show that simple algebraic preconditioning strategies result in iterative schemes that are highly competitive with a direct solution method. The second type of algebraic systems are nonsymmetric and indefinite and arise from finite element discretization of the partial differential equations describing the elastic part of glacial rebound processes. An equal-order finite element discretization is analyzed and an optimal stabilization parameter is derived. The indefinite algebraic systems are of 2-by-2 block form, and therefore block preconditioners of block-factorized or block-triangular form are used when solving the indefinite algebraic system. There, the required Schur complement is approximated in various ways and the quality of these approximations is compared numerically. When the block preconditioners are constructed from incomplete factorizations of the diagonal blocks, the iterative scheme shows a growth in iteration count with increasing problem size. This growth is stabilized by replacing the incomplete factors with an inner iterative scheme with a (nearly) optimal order multilevel preconditioner.
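As an illustration of the block-triangular preconditioning idea described in the abstract, the following sketch assembles a hypothetical 2-by-2 block saddle-point system and accelerates GMRES with a block lower-triangular preconditioner. The Schur complement is formed exactly here for simplicity, whereas the thesis studies cheaper approximations; all sizes and densities are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 100
rng = np.random.default_rng(0)
R = sp.random(n, n, density=0.01, random_state=rng)
M = 4.0 * sp.identity(n) + R + R.T          # symmetric, diagonally dominant block
B = sp.random(m, n, density=0.02, random_state=rng)
C = 0.1 * sp.identity(m)
K = sp.bmat([[M, B.T], [B, -C]], format="csr")   # indefinite 2-by-2 block system

M_lu = spla.splu(sp.csc_matrix(M))
# Exact Schur complement for illustration; in practice it is only approximated.
S = -(C + B @ spla.spsolve(sp.csc_matrix(M), sp.csc_matrix(B.T)))
S_lu = spla.splu(sp.csc_matrix(S))

def block_triangular_solve(r):
    """Apply the block lower-triangular preconditioner [[M, 0], [B, S]]^{-1}."""
    y1 = M_lu.solve(r[:n])
    y2 = S_lu.solve(r[n:] - B @ y1)
    return np.concatenate([y1, y2])

P = spla.LinearOperator(K.shape, matvec=block_triangular_solve)
b = np.ones(n + m)
x, info = spla.gmres(K, b, M=P)
print(info, np.linalg.norm(K @ x - b))      # info == 0 signals convergence
```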
APA, Harvard, Vancouver, ISO, and other styles
33

Roosta-Khorasani, Farbod. "Randomized algorithms for solving large scale nonlinear least squares problems." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52663.

Full text
Abstract:
This thesis presents key contributions towards devising highly efficient stochastic reconstruction algorithms for solving large-scale inverse problems, where a large data set is available and the underlying physical system is complex, e.g., modeled by partial differential equations (PDEs). We begin by developing stochastic and deterministic dimensionality reduction methods to transform the original high-dimensional data set into one with much smaller dimensions for which the computations are more manageable. We then incorporate such methods in our efficient stochastic reconstruction algorithms. In the presence of corrupted or missing data, many such dimensionality reduction methods cannot be efficiently used. To alleviate this issue, in the context of PDE inverse problems, we develop and mathematically justify new techniques for replacing (or filling) the corrupted (or missing) parts of the data set. Our data replacement/completion methods are motivated by theory in Sobolev spaces regarding the properties of weak solutions along the domain boundary. All of the stochastic dimensionality reduction techniques can be reformulated as Monte-Carlo (MC) methods for estimating the trace of a symmetric positive semi-definite (SPSD) matrix. In the next part of the thesis, we present probabilistic analysis of such randomized trace estimators and prove various computable and informative conditions on the sample size required for such Monte-Carlo methods to achieve a prescribed probabilistic relative accuracy. Although computationally efficient, a major drawback of any (randomized) approximation algorithm is the introduction of “uncertainty” in the overall procedure, which could cast doubt on the credibility of the obtained results. The last part of this thesis consists of uncertainty quantification of the stochastic steps of our approximation algorithms presented earlier. As a result, we present highly efficient variants of our original algorithms where the degree of uncertainty can easily be quantified and adjusted, if needed. The uncertainty quantification presented in the last part of the thesis is an application of our novel results regarding the maximal and minimal tail probabilities of non-negative linear combinations of gamma random variables, which can be considered independently of the rest of this thesis.
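The randomized trace estimation at the core of these methods is easy to state. Below is a minimal sketch of the Hutchinson-type Monte-Carlo estimator for an SPSD matrix; the sample size and the test matrix are illustrative choices, not taken from the thesis.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, rng=None):
    """Monte-Carlo trace estimate: for Rademacher vectors w, E[w^T A w] = tr(A),
    so averaging quadratic forms over random probes estimates the trace."""
    rng = rng or np.random.default_rng()
    est = 0.0
    for _ in range(num_samples):
        w = rng.choice([-1.0, 1.0], size=n)
        est += w @ matvec(w)
    return est / num_samples

rng = np.random.default_rng(0)
G = rng.standard_normal((100, 100))
A = G @ G.T                                  # an SPSD test matrix
print(hutchinson_trace(lambda v: A @ v, 100, rng=rng), np.trace(A))
```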
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
34

Guertler, Siegfried. "Large scale computer-simulations of many-body Bose and Fermi systems at low temperature." Thesis, Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40887741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ding, Jian. "Fast Boundary Element Method Solutions For Three Dimensional Large Scale Problems." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6830.

Full text
Abstract:
Efficiency is one of the key issues in numerical simulation of large-scale problems with complex 3-D geometry. Traditional domain based methods, such as finite element methods, may not be suitable for these problems due to, for example, the complexity of mesh generation. The Boundary Element Method (BEM), based on boundary integral formulations (BIE), offers one possible solution to this issue by discretizing only the surface of the domain. However, to date, successful applications of the BEM are mostly limited to linear and continuum problems. The challenges in the extension of the BEM to nonlinear problems or problems with non-continuum boundary conditions (BC) include, but are not limited to, the lack of appropriate BIE and the difficulties in the treatment of the volume integrals that result from the nonlinear terms. In this thesis work, new approaches and techniques based on the BEM have been developed for 3-D nonlinear problems and Stokes problems with slip BC. For nonlinear problems, a major difficulty in applying the BEM is the treatment of the volume integrals in the BIE. An efficient approach, based on the precorrected-FFT technique, is developed to evaluate the volume integrals. In this approach, the 3-D uniform grid constructed initially to accelerate surface integration is used as the baseline mesh to evaluate volume integrals. The cubes enclosing part of the boundary are partitioned using surface panels. No volume discretization of the interior cubes is necessary. This grid is also used to accelerate volume integration. Based on this approach, accelerated BEM solvers for non-homogeneous and nonlinear problems are developed and tested. Good agreement is achieved between simulation results and analytical results. Qualitative comparison is made with current approaches. Stokes problems with slip BC are of particular importance in micro gas flows such as those encountered in MEMS devices. An efficient approach based on the BEM combined with the precorrected-FFT technique has been proposed and various techniques have been developed to solve these problems. As applications of the developed method, drag forces on oscillating objects immersed in an unbounded slip flow are calculated and validated against either analytical solutions or experimental results.
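The grid-based acceleration underlying the precorrected-FFT technique can be illustrated in miniature: when sources sit on a regular grid, all pairwise 1/(4πr) interactions amount to a discrete convolution, which a zero-padded FFT evaluates in O(N log N) instead of O(N²). The sketch below is an illustrative toy, not the thesis solver; the grid size, the random charges, and zeroing the r = 0 self term are arbitrary assumptions.

```python
import numpy as np

n, h = 32, 1.0 / 32                          # grid points per axis, grid spacing
rng = np.random.default_rng(0)
Q = np.zeros((n, n, n))
ix, iy, iz = (rng.integers(0, n, 50) for _ in range(3))
Q[ix, iy, iz] = 1.0                          # 50 unit point charges on the grid

m = 2 * n                                    # zero padding removes circular wrap-around
d = np.minimum(np.arange(m), m - np.arange(m))
dx, dy, dz = np.meshgrid(d, d, d, indexing="ij")
r = h * np.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
# free-space kernel 1/(4*pi*r); the singular self term is simply set to zero here
K = np.where(r > 0, 1.0 / (4.0 * np.pi * np.maximum(r, 1e-300)), 0.0)

Qpad = np.zeros((m, m, m))
Qpad[:n, :n, :n] = Q
phi = np.fft.irfftn(np.fft.rfftn(Qpad) * np.fft.rfftn(K), s=(m, m, m))[:n, :n, :n]
print(phi.shape)                             # grid potentials from one FFT convolution
```

In the actual precorrected-FFT scheme this grid convolution handles the far field, while nearby interactions are corrected directly.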
APA, Harvard, Vancouver, ISO, and other styles
36

Ding, Jian. "Fast boundary element method solutions for three dimensional large scale problems." Available online, Georgia Institute of Technology, 2005, 2004. http://etd.gatech.edu/theses/available/etd-01102005-174227/unrestricted/ding%5Fjian%5F200505%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2005.
Mucha, Peter, Committee Member ; Qu, Jianmin, Committee Member ; Ye, Wenjing, Committee Chair ; Hesketh, Peter, Committee Member ; Gray, Leonard J., Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
37

Solomon, P. J. "Some problems in the statistical analysis of large scale clinical trials." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cohn, Amy Ellen Mainville 1969. "Composite-variable modeling for large-scale problems in transportation and logistics." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8529.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, 2002.
Includes bibliographical references (p. 137-142).
Numerous important real-world problems are found in the areas of transportation and logistics. Many of these problems pose tremendous challenges due to characteristics such as complex networks, tightly constrained resources, and very large numbers of heavily inter-connected decisions. As a result, mathematical models can be critical in solving these problems. These models, however, can be computationally challenging or even intractable. In this thesis we discuss how greater tractability can sometimes be achieved with composite-variable models - models in which individual binary variables encompass multiple decisions. In Part I, we discuss common challenges found in solving large-scale transportation and logistics problems. We introduce the idea of composite variables and discuss the potential benefits of composite-variable models. We also note some of the drawbacks of these models and discuss approaches to addressing these drawbacks. In Parts II and III, we demonstrate these ideas using two real-world examples, one from airline planning and the other from service parts logistics. We build on our experience from these two applications in Part IV, providing some broader insights for composite-variable modeling. We focus in particular on the dominance property seen in the service parts logistics example and on the fact that we can relax the integrality of the composite variables in the airline planning example. In both cases, we introduce broader classes of problems in which these properties can also be found. We offer conclusions in Part V.
The contributions of the thesis are three-fold. First, we provide a new model and solution approach for an important real-world problem from the airline industry. Second, we provide a framework for addressing challenging problems in service parts logistics. Third, we provide insights into how to construct composite-variable models for greater tractability. These insights can be useful not only in solving large-scale problems, but also in integrating multiple stages within a planning environment, developing better heuristics for solving large problems in real time, and providing users with greater control in trading off solution time and quality.
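A toy example of the composite-variable idea: each binary column bundles several decisions at once (for instance, a crew together with the set of flights it covers), and the master problem becomes a set-partitioning over these columns. The sketch below enumerates a hypothetical four-task instance by brute force; the composites and costs are invented for illustration.

```python
from itertools import combinations

tasks = {0, 1, 2, 3}
# Each composite variable is one binary column: (covered task set, cost).
composites = [({0, 1}, 5.0), ({2, 3}, 6.0), ({0, 2}, 4.0),
              ({1, 3}, 4.5), ({0, 1, 2, 3}, 10.5)]

best = (float("inf"), None)
for k in range(1, len(composites) + 1):
    for sel in combinations(range(len(composites)), k):
        covered = set().union(*(composites[i][0] for i in sel))
        disjoint = sum(len(composites[i][0]) for i in sel) == len(covered)
        if covered == tasks and disjoint:          # exact partition of the tasks
            cost = sum(composites[i][1] for i in sel)
            best = min(best, (cost, sel))
print(best)   # -> (8.5, (2, 3)): composites {0, 2} and {1, 3}
```

In a realistic instance the columns would be priced out or enumerated selectively, but the structure, one binary variable per bundle of decisions, is the same.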
by Amy Ellen Mainville Cohn.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
39

Parisini, Fabio <1981&gt. "Hybrid constraint programming and metaheuristic methods for large scale optimization problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3709/.

Full text
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed in the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the common aspects of different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows one to easily model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS) that iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and borrows concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
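For orientation, the Local Branching idea mentioned above rests on a single soft-fixing constraint. In sketch form, with generic notation not taken from the thesis: given an incumbent 0-1 solution x̄ with support S̄, the neighborhood of radius k is carved out by bounding the Hamming distance to the incumbent,

```latex
\Delta(x,\bar{x}) \;=\; \sum_{j \in \bar{S}} (1 - x_j) \;+\; \sum_{j \notin \bar{S}} x_j \;\le\; k,
\qquad \bar{S} \;=\; \{\, j : \bar{x}_j = 1 \,\},
```

and the search alternates between exploring this ball and its complement (the same constraint with Δ ≥ k + 1). In a CP setting the constraint can be posted directly on the binary decision variables of the model.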
APA, Harvard, Vancouver, ISO, and other styles
40

Madabushi, Ananth R. "Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36833.

Full text
Abstract:
This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems such as discrete linear or polynomial, and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case, and using polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point type of procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian Relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions. The proposed primal-dual approach is comprised of three stages. Stage I deals with solving the Lagrangian dual problem by using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined based on the process of projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction. This projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then ascertain whether to increase or decrease the step-length value based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using some convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near-feasible and optimal solution. We present computational experience using a set of randomly generated, structured, linear programming problems of the type that might typically arise in the context of discrete optimization.
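The Stage I dual ascent can be sketched compactly. The toy below, with invented random data, maximizes the Lagrangian dual of min cᵀx subject to Ax ≥ b, 0 ≤ x ≤ 1 by projected subgradient steps with an Average-Direction-Strategy-style deflection; the simple 1/k step lengths stand in for the line searches studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
mm, nn = 20, 50
A = rng.random((mm, nn))
b = A @ (0.5 * rng.random(nn))              # ensures a feasible primal, bounded dual
c = rng.random(nn)

lam, d_prev, best = np.zeros(mm), None, -np.inf
for k in range(1, 201):
    red = c - A.T @ lam                     # reduced costs of the Lagrangian subproblem
    x = (red < 0).astype(float)             # box-constrained minimizer over [0, 1]^n
    best = max(best, lam @ b + red[red < 0].sum())   # dual value L(lam)
    g = b - A @ x                           # a subgradient of L at lam
    if np.linalg.norm(g) == 0:
        break                               # lam is dual optimal
    d = g / np.linalg.norm(g)
    if d_prev is not None:                  # ADS-style deflection of the direction
        d += d_prev / np.linalg.norm(d_prev)
    lam = np.maximum(0.0, lam + d / k)      # projected ascent step
    d_prev = d
print("best dual bound:", best)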
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
41

Romero, Alcalde Eloy. "Parallel implementation of Davidson-type methods for large-scale eigenvalue problems." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/15188.

Full text
Abstract:
The eigenvalue problem appears in diverse scientific tasks through the solution of differential equations, the analysis of models, and the computation of matrix functions, among many other applications. If the problems are of moderate size (smaller than 10^6), they can be tackled by so-called direct methods, such as the iterative QR algorithm or the divide-and-conquer method. However, if the problem is large and only a few solutions (compared with the size of the problem) are required, and with a certain degree of approximation, iterative methods can be more efficient. In addition, iterative methods can offer better performance on high-performance architectures, such as distributed-memory ones, in which a number of computational nodes have their own memory spaces and can only share information and synchronize through message passing. This thesis addresses the implementation of Davidson-type methods, notably Generalized Davidson and Jacobi-Davidson, a class of iterative methods that can be competitive in especially difficult cases, such as computing eigenvalues in the interior of the spectrum, or when matrix factorization is prohibitive or inefficient and only an approximate factorization is possible. The implementation is developed in SLEPc (Scalable Library for Eigenvalue Problem Computations), a prominent free library for the solution of large-scale eigenvalue problems, quadratic eigenvalue problems, and singular value problems, among others. In turn, SLEPc is developed within the framework of PETSc (Portable, Extensible Toolkit for Scientific Computation), which offers efficient implementations of basic linear algebra operations, such as matrix and vector operations, approximate solution of linear systems, and exact and approximate matrix factorizations.
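A bare-bones, dense sketch of the Generalized Davidson iteration follows; the real implementation lives in SLEPc, works matrix-free, and runs in parallel, so everything here (the diagonal Jacobi-style preconditioner, tolerances, and the test matrix) is an illustrative assumption.

```python
import numpy as np

def generalized_davidson(A, tol=1e-8, max_iter=100):
    """Minimal Generalized Davidson sketch for the smallest eigenpair of a
    symmetric matrix A, using a diagonal (Jacobi) preconditioner."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v))[:, None]     # search subspace basis
    D = A.diagonal()
    theta, u = 0.0, v
    for _ in range(max_iter):
        H = V.T @ (A @ V)                    # Rayleigh-Ritz projection
        theta, S = np.linalg.eigh(H)
        theta, u = theta[0], V @ S[:, 0]
        r = A @ u - theta * u                # residual of the Ritz pair
        if np.linalg.norm(r) < tol or V.shape[1] >= n:
            break
        denom = theta - D
        denom[np.abs(denom) < 1e-12] = 1e-12 # guard the preconditioner
        t = r / denom                        # Davidson correction vector
        t -= V @ (V.T @ t)                   # orthogonalize against the basis
        nt = np.linalg.norm(t)
        if nt < 1e-14:
            break
        V = np.hstack([V, (t / nt)[:, None]])
    return theta, u

R = np.random.default_rng(1).standard_normal((100, 100))
A = np.diag(np.arange(1.0, 101.0)) + 0.01 * (R + R.T)
print(generalized_davidson(A)[0])            # close to the smallest eigenvalue (~1)
```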
Romero Alcalde, E. (2012). Parallel implementation of Davidson-type methods for large-scale eigenvalue problems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15188
Palancia
APA, Harvard, Vancouver, ISO, and other styles
42

Dan, Hiroshige. "Studies on algorithms for large-scale nonlinear optimization and related problems." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/145312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Da, Silva Curt. "Large-scale optimization algorithms for missing data completion and inverse problems." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62968.

Full text
Abstract:
Inverse problems are an important class of problems found in many areas of science and engineering. In these problems, one aims to estimate unknown parameters of a physical system through indirect multi-experiment measurements. Inverse problems arise in a number of fields including seismology, medical imaging, and astronomy, among others. An important aspect of inverse problems is the quality of the acquired data itself. Real-world data acquisition restrictions, such as time and budget constraints, often result in measured data with missing entries. Many inversion algorithms assume that the input data is fully sampled and relatively noise free and produce poor results when these assumptions are violated. Given the multidimensional nature of real-world data, we propose a new low-rank optimization method on the smooth manifold of Hierarchical Tucker tensors. Tensors that exhibit this low-rank structure can be recovered by solving this non-convex program in an efficient manner. We successfully interpolate realistically sized seismic data volumes using this approach. If our low-rank tensor is corrupted with non-Gaussian noise, the resulting optimization program can be formulated as a convex-composite problem. This class of problems involves minimizing a non-smooth but convex objective composed with a nonlinear smooth mapping. In this thesis, we develop a level set method for solving convex-composite problems and prove that the resulting subproblems converge linearly. We demonstrate that this method is competitive when applied to examples in noisy tensor completion, analysis-based compressed sensing, audio declipping, total-variation deblurring and denoising, and one-bit compressed sensing. With respect to solving the inverse problem itself, we introduce a new software design framework that manages the cognitive complexity of the various components involved. Our framework is modular by design, which enables us to easily integrate and replace components such as linear solvers, finite difference stencils, preconditioners, and parallelization schemes. As a result, a researcher using this framework can formulate her algorithms with respect to high-level components such as objective functions and Hessian operators. We showcase the ease with which one can prototype such algorithms in a 2D test problem and, with little code modification, apply the same method to large-scale 3D problems.
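The level-set idea for convex-composite problems can be sketched on the classic basis-pursuit-denoise instance: bisect on the l1-ball radius τ until the value function v(τ) = min_{||x||_1 ≤ τ} ||Ax − b||_2 meets a misfit target σ. Everything below (the projected-gradient inner solver, the data, and the tolerances) is an illustrative assumption, not the thesis code.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau (Duchi et al. style)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def value(A, b, tau, iters=500):
    """v(tau) = min ||Ax - b||_2 over the l1 ball, by projected gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1(x - step * (A.T @ (A @ x - b)), tau)
    return np.linalg.norm(A @ x - b)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100); x0[:5] = 1.0
b = A @ x0 + 0.01 * rng.standard_normal(40)
sigma = 0.1

# Root-find v(tau) = sigma by bisection on the nonincreasing value function.
lo, hi = 0.0, 2.0 * np.abs(x0).sum()
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if value(A, b, mid) > sigma:
        lo = mid
    else:
        hi = mid
print("tau* ~", hi, "misfit:", value(A, b, hi))
```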
Science, Faculty of
Mathematics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
44

Ma, Yanting. "Solving Large-Scale Inverse Problems via Approximate Message Passing and Optimization." Thesis, North Carolina State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10758823.

Full text
Abstract:

This work studies the problem of reconstructing a signal from measurements obtained by a sensing system, where the measurement model that characterizes the sensing system may be linear or nonlinear.

We first consider linear measurement models. In particular, we study the popular low-complexity iterative linear inverse algorithm, approximate message passing (AMP), in a probabilistic setting, meaning that the signal is assumed to be generated from some probability distribution, though the distribution may be unknown to the algorithm. The existing rigorous performance analysis of AMP only allows using a separable or block-wise separable estimation function at each iteration of AMP, and therefore cannot capture sophisticated dependency structures in the signal. This work studies the case when the signal has a Markov random field (MRF) prior, which is commonly used in image applications. We provide rigorous performance analysis of AMP with a class of non-separable sliding-window estimation functions, which is suitable to capture local dependencies in an MRF prior.
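For reference, the basic AMP recursion with a separable soft-threshold denoiser looks as follows; the sliding-window, MRF-aware estimation functions analyzed in this work would replace the `soft` call. The problem sizes and threshold rule are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def soft(v, t):
    """Separable soft-threshold denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 250, 500
A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. entries of variance 1/m
x_true = np.zeros(n)
x_true[rng.choice(n, 25, replace=False)] = rng.standard_normal(25)
y = A @ x_true

x, z = np.zeros(n), y.copy()
for _ in range(30):
    sigma_hat = np.sqrt(np.mean(z ** 2))        # effective-noise estimate
    x = soft(x + A.T @ z, 2.0 * sigma_hat)      # denoise the pseudo-data
    z = y - A @ x + z * (np.count_nonzero(x) / m)   # residual with Onsager term
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The Onsager correction term is what distinguishes AMP from plain iterative thresholding and underlies its sharp performance analysis.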

In addition, we design AMP-based algorithms with non-separable estimation functions for hyperspectral imaging and universal compressed sensing (imaging), and compare our algorithms to state-of-the-art algorithms with extensive numerical examples. For fast computation in large-scale problems, we study a multiprocessor implementation of AMP and provide its performance analysis. Additionally, we propose a two-part reconstruction scheme where Part 1 detects zero-valued entries in the signal using a simple and fast algorithm, and Part 2 solves for the remaining entries using a high-fidelity algorithm. Such a two-part scheme naturally leads to a trade-off analysis of speed and reconstruction quality.

Finally, we study diffractive imaging, where the electric permittivity distribution of an object is reconstructed from scattered wave measurements. When the object is strongly scattering, a nonlinear measurement model is needed to characterize the relationship between the permittivity and the scattered wave. We propose an inverse method for nonlinear diffractive imaging. Our method is based on a nonconvex optimization formulation. The nonconvex solver used in the proposed method is our new variant of the popular convex solver, the fast iterative shrinkage/thresholding algorithm (FISTA). We provide a fast and memory-efficient implementation of our new FISTA variant and prove that it reliably converges for our nonconvex optimization problem. Hence, our new FISTA variant may be of interest on its own as a general nonconvex solver. In addition, we systematically compare our method to state-of-the-art methods on simulated as well as experimentally measured data in both 2D and 3D (vectorial field) settings.
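The baseline FISTA that the proposed variant modifies can be sketched for the convex lasso problem as follows; this is a textbook version (Beck and Teboulle style), not the thesis variant.

```python
import numpy as np

def fista_lasso(A, b, lam, iters=200):
    """Plain FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = w = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = w - A.T @ (A @ w - b) / L           # gradient step at the extrapolated point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        w = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200); x0[:5] = 1.0
x_hat = fista_lasso(A, A @ x0, lam=0.1)
print(np.linalg.norm(x_hat - x0))
```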

APA, Harvard, Vancouver, ISO, and other styles
45

Agarwal, Richa. "Composite very large-scale neighborhood structure for the vehicle-routing problem." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Figueras, Anthony L. "A hierarchical approach for solving the large-scale traveling salesman problem." FIU Digital Commons, 1994. https://digitalcommons.fiu.edu/etd/3321.

Full text
Abstract:
An algorithm for solving the large-scale Traveling Salesman Problem is presented. A review of past work on the use of Hopfield neural networks for the Traveling Salesman Problem yielded design ideas that have been incorporated into this work. The algorithm consists of an unsupervised learning algorithm and a recursive Hopfield neural network. The unsupervised learning algorithm is used to decompose the problem into clusters. The recursive Hopfield neural network is applied first to the centroids of the clusters, then to the cities in each cluster, in order to find an optimal path. The proposed algorithm shows an improvement in both computation speed and solution accuracy over the straight use of the Hopfield neural network.
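The decompose-then-route structure can be sketched as follows. For brevity this toy uses a crude k-means step and a nearest-neighbour tour as a stand-in for the recursive Hopfield network, so it illustrates only the hierarchy; all data and parameters are invented.

```python
import numpy as np

def nn_tour(pts, pool):
    """Greedy nearest-neighbour tour over the indices in pool (a stand-in for
    the recursive Hopfield network used in the thesis)."""
    pool = list(pool)
    tour = [pool.pop(0)]
    while pool:
        last = pts[tour[-1]]
        nxt = min(pool, key=lambda i: np.linalg.norm(pts[i] - last))
        pool.remove(nxt)
        tour.append(nxt)
    return tour

rng = np.random.default_rng(0)
cities = rng.random((200, 2))
k = 8

# Crude k-means as the unsupervised decomposition step.
centroids = cities[rng.choice(len(cities), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((cities[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([cities[labels == j].mean(0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

order = nn_tour(centroids, range(k))                 # route the cluster centroids
tour = [i for j in order if np.any(labels == j)      # then the cities per cluster
        for i in nn_tour(cities, np.flatnonzero(labels == j))]
print("closed tour length:",
      sum(np.linalg.norm(cities[tour[i]] - cities[tour[i - 1]])
          for i in range(len(tour))))
```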
APA, Harvard, Vancouver, ISO, and other styles
47

Silva, Carla Taviane Lucke da. "Otimização de processos acoplados: programação da produção e corte de estoque." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13022009-102119/.

Full text
Abstract:
In several manufacturing industries (for example, paper, furniture, metallurgical, textile), lot-sizing decisions interact with other production planning and scheduling decisions, such as distribution and the cutting process, among others. Usually, however, these decisions are treated in isolation, reducing the solution space and the interdependence between decisions, and thus raising total costs. In this thesis, we study the production process of small-scale furniture plants, which consists of cutting large plates available in stock to obtain several types of pieces that are subsequently processed in other stages on equipment with limited capacities to finally compose the demanded products. The lot-sizing and cutting-stock problems are coupled in an integer linear optimization model whose objective is to minimize the costs of production, product inventory, machine setups, and raw-material waste. This model exhibits the trade-off between anticipating or not the manufacture of certain products, increasing inventory costs but reducing raw-material waste by obtaining better combinations of pieces. The impact of demand uncertainty (composed of the order book plus an estimated extra quantity) was damped by a rolling planning horizon strategy and by decision variables that represent extra production for the expected demand at the best moment, aiming at the minimization of total costs. Two heuristic methods are developed to solve a simplification of the proposed mathematical model, which has a high degree of complexity. Computational experiments with instances generated from real data collected at a small furniture plant, an analysis of the results, conclusions, and perspectives for this work are presented.
In many manufacturing industries (e.g., paper, furniture, steel, textile), lot-sizing decisions generally arise together with other production planning decisions, such as distribution, cutting, and scheduling. Usually, however, these decisions are dealt with separately, which reduces the solution space and ignores the interdependence between decisions, increasing total costs. In this thesis, we study the production process that arises in small-scale furniture industries, which consists basically of cutting large plates available in stock into several thicknesses to obtain different types of pieces required to manufacture lots of ordered products. The cutting and drilling machines are possible bottlenecks and their capacities have to be taken into account. The lot-sizing and cutting stock problems are coupled with each other in a large-scale linear integer optimization model, whose objective function consists in minimizing several costs simultaneously: production, inventory, raw-material waste, and setup costs. The proposed model captures the trade-off between holding inventory and reducing losses. The impact of demand uncertainty (where demand is composed of ordered products plus forecast demand) was smoothed by a rolling horizon strategy and by new decision variables that represent extra production to meet forecast demand at the best moment, aiming at total cost minimization. Two heuristic methods are proposed to solve a relaxation of the mathematical model. Randomly generated instances based on real-world data were used in the computational experiments for empirical analyses of the model and the proposed solution methods.
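In sketch form, and with generic notation not taken from the thesis, the coupled objective combines the four cost terms named in the abstract. Here x_{it}, I_{it}, and y_{it} are the production quantity, inventory, and binary setup of product i in period t, z_{jt} counts the plates cut with pattern j in period t, l_j is the trim loss of pattern j, and c_i, h_i, s_i, w are the corresponding unit costs:

```latex
\min \; \sum_{t} \Bigg(
      \sum_i c_i\, x_{it}
    + \sum_i h_i\, I_{it}
    + \sum_i s_i\, y_{it}
    + \sum_j w\, l_j\, z_{jt} \Bigg)
```

The two subproblems are linked because the pieces produced by the chosen cutting patterns z_{jt} must supply the piece requirements of the production lots x_{it}, alongside the usual inventory balance and machine capacity constraints.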
APA, Harvard, Vancouver, ISO, and other styles
48

Hellman, Fredrik. "Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems." Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-130013.

Full text
Abstract:

This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: first, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are to be preferred; second, how the performance of the considered solution methods scales with network size; third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to answer these questions is made through a series of numerical experiments.

The traffic system is modeled using Wardrop's principle for user behavior and separable cost functions of BPR and TU71 type. Elastic demand is also considered for some problem instances.

Two previously developed method approaches are considered: implicit programming and a cutting constraint algorithm. For the implicit programming approach, several methods, both local and global, are applied, and for the traffic assignment problem an implementation of the disaggregate simplicial decomposition (DSD) method is used. Regarding the first question concerning local and global methods, our results do not give a clear answer.

The results from numerical experiments with both approaches on networks of different sizes show that the implicit programming approach has the potential to solve large-scale problems, while the cutting constraint algorithm scales worse with network size.

Also for the stochastic extension of the network design problem, the numerical experiments indicate that implicit programming is a good approach to the problem.

Further, a number of theorems providing sufficient conditions for strong regularity of the traffic assignment solution mapping for OD connectors and BPR cost functions are given.
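The MPEC structure described above can be sketched as a bilevel program, in generic notation: the design variables τ (tolls or capacities) are chosen to minimize a system objective F, subject to the link flows x being a Wardrop user equilibrium, which by the Beckmann transformation solves a convex program parameterized by τ:

```latex
\min_{\tau \in T,\; x} \; F(\tau, x)
\quad \text{s.t.} \quad
x \in \arg\min_{x' \in X} \; \sum_{a} \int_0^{x'_a} c_a(s;\, \tau)\, \mathrm{d}s ,
```

where X is the set of demand-feasible link flows and c_a is the (separable) travel cost on link a. The implicit programming approach treats the lower-level equilibrium flows as an implicit function x(τ) and applies descent methods to the resulting single-level problem.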

APA, Harvard, Vancouver, ISO, and other styles
49

Bredström, David. "Models and solution methods for large-scale industrial mixed integer programming problems /." Linköping : Division of Optimization, Department of Mathematics, Linköpings universitet, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1071s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Sharkawy, Mohamed Hassan Al. "Iterative multi-region technique for the analysis of large scale electromagnetic problems /." Full text available from ProQuest UM Digital Dissertations, 2006. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1394652571&SrchMode=1&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1216839065&clientId=22256.

Full text
APA, Harvard, Vancouver, ISO, and other styles