Dissertations / Theses on the topic 'Bayesian Optimization'
Consult the top 50 dissertations / theses for your research on the topic 'Bayesian Optimization.'
Klein, Aaron [author], and Frank [academic supervisor] Hutter. "Efficient Bayesian hyperparameter optimization." Freiburg: Universität, 2020. http://d-nb.info/1214592961/34.
Mahendran, Nimalan. "Bayesian optimization for adaptive MCMC." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/30636.
Gelbart, Michael Adam. "Constrained Bayesian Optimization and Applications." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467236.
Biophysics
Gaudrie, David. "High-Dimensional Bayesian Multi-Objective Optimization." Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02356349.
This thesis focuses on the simultaneous optimization of expensive-to-evaluate functions that depend on a high number of parameters. This situation is frequently encountered in fields such as design engineering through numerical simulation. Bayesian optimization relying on surrogate models (Gaussian Processes) is particularly well suited to this context. The first part of this thesis is devoted to the development of new surrogate-assisted multi-objective optimization methods. To improve the attainment of Pareto optimal solutions, an infill criterion is tailored to direct the search towards a user-desired region of the objective space or, in its absence, towards the Pareto front center introduced in our work. Besides targeting a well-chosen part of the Pareto front, the method also considers the optimization budget in order to provide as wide a range of optimal solutions as possible within the limit of the available resources. Next, inspired by shape optimization problems, an optimization method with dimension reduction is proposed to tackle the curse of dimensionality. The approach hinges on the construction of hierarchized, problem-related auxiliary variables that can describe all candidates globally, through a principal component analysis of potential solutions. Few of these variables suffice to approach any solution, and the most influential ones are selected and prioritized inside an additive Gaussian Process. This variable categorization is then further exploited in the Bayesian optimization algorithm, which operates in reduced dimension.
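For orientation, the core surrogate-assisted loop that abstracts like this one build upon can be sketched in a few lines: fit a Gaussian process to the evaluations so far, score candidates with an acquisition function such as expected improvement, and evaluate the best-scoring point. The sketch below is illustrative only; the objective `f`, the bounds, and the random candidate pool are placeholders, not taken from the thesis:

```python
# Minimal Bayesian optimization sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):  # placeholder expensive black-box objective
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))            # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    cand = rng.uniform(-2, 2, size=(1000, 1))  # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)].reshape(1, -1)
    X, y = np.vstack([X, x_next]), np.append(y, f(x_next).ravel())

print("best x:", X[np.argmin(y)], "best f:", y.min())
```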
Scotto Di Perrotolo, Alexandre. "A Theoretical Framework for Bayesian Optimization Convergence." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-225129.
Bayesian optimization is a well-known class of derivative-free global optimization algorithms, mainly used for optimizing expensive black-box functions. Despite their relative efficiency, these algorithms suffer from the lack of a rigorous convergence criterion, which makes them more likely to be used as modeling tools than as optimization tools. This report proposes, analyzes, and tests a globally convergent framework (in a sense described further on) for Bayesian optimization algorithms that inherits their global search properties for locating minima while being carefully monitored so as to converge.
Wang, Ziyu. "Practical and theoretical advances in Bayesian optimization." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:9612d870-015e-4236-8c8d-0419670172fb.
Zinberg, Ben (Ben I.). "Bayesian optimization as a probabilistic meta-program." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106374.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 50).
This thesis answers two questions: 1. How should probabilistic programming languages incorporate Gaussian processes? and 2. Is it possible to write a probabilistic meta-program for Bayesian optimization, a probabilistic meta-algorithm that can combine regression frameworks such as Gaussian processes with a broad class of parameter estimation and optimization techniques? We answer both questions affirmatively, presenting both an implementation and informal semantics for Gaussian process models in probabilistic programming systems, and a probabilistic meta-program for Bayesian optimization. The meta-program exposes modularity common to a wide range of Bayesian optimization methods in a way that is not apparent from their usual treatment in statistics.
by Ben Zinberg.
M. Eng.
Wang, Zheng S. M. Massachusetts Institute of Technology. "An optimization based algorithm for Bayesian inference." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98815.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-76).
In the Bayesian statistical paradigm, uncertainty in the parameters of a physical system is characterized by a probability distribution. Information from observations is incorporated by updating this distribution from prior to posterior. Quantities of interest, such as credible regions, event probabilities, and other expectations can then be obtained from the posterior distribution. One major task in Bayesian inference is then to characterize the posterior distribution, for example, through sampling. Markov chain Monte Carlo (MCMC) algorithms are often used to sample from posterior distributions using only unnormalized evaluations of the posterior density. However, high dimensional Bayesian inference problems are challenging for MCMC-type sampling algorithms, because accurate proposal distributions are needed in order for the sampling to be efficient. One method to obtain efficient proposal samples is an optimization-based algorithm titled 'Randomize-then-Optimize' (RTO). We build upon RTO by developing a new geometric interpretation that describes the samples as projections of Gaussian-distributed points, in the joint data and parameter space, onto a nonlinear manifold defined by the forward model. This interpretation reveals generalizations of RTO that can be used. We use this interpretation to draw connections between RTO and two other sampling techniques, transport map based MCMC and implicit sampling. In addition, we motivate and propose an adaptive version of RTO designed to be more robust and efficient. Finally, we introduce a variable transformation to apply RTO to problems with non-Gaussian priors, such as Bayesian inverse problems with L1-type priors. We demonstrate several orders of magnitude in computational savings from this strategy on a high-dimensional inverse problem.
by Zheng Wang.
S.M.
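As background for the RTO idea described above: in the special case of a linear forward model with Gaussian prior and noise, randomize-then-optimize reduces to solving one perturbed least-squares problem per sample, which yields exact posterior draws. A minimal illustrative sketch, where the forward model, dimensions, and noise level are placeholders rather than the thesis's setup:

```python
# Randomize-then-optimize in the linear-Gaussian special case:
# each posterior sample solves a least-squares problem with
# independently perturbed prior and data terms.
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma = 10, 20, 0.1
A = rng.normal(size=(m, n))                 # linear forward model (placeholder)
theta_true = rng.normal(size=n)
y = A @ theta_true + sigma * rng.normal(size=m)

def rto_sample():
    zeta = rng.normal(size=n)               # perturb the prior term
    eps = sigma * rng.normal(size=m)        # perturb the data term
    # minimize ||theta - zeta||^2 + ||(y + eps) - A theta||^2 / sigma^2
    M = np.vstack([np.eye(n), A / sigma])
    b = np.concatenate([zeta, (y + eps) / sigma])
    return np.linalg.lstsq(M, b, rcond=None)[0]

samples = np.array([rto_sample() for _ in range(2000)])
print("posterior mean estimate:", samples.mean(axis=0)[:3])
```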
Carstens, Herman. "A Bayesian approach to energy monitoring optimization." Thesis, University of Pretoria, 2017. http://hdl.handle.net/2263/63791.
This thesis develops methods for reducing the cost of energy monitoring and verification (M&V) through Bayesian statistics. M&V determines the savings achieved by energy-efficiency and demand-side management projects. This is done by comparing the energy use in a given period with what it would have been had no intervention taken place. A large-scale lighting retrofit study, in which incandescent lamps are replaced by compact fluorescent lamps, serves as a case study. Such projects usually have to be monitored over many years at a prescribed statistical accuracy, which can make M&V expensive. Two related uncertainty components need to be addressed in M&V lighting projects, and they form the foundation of this thesis. First, there is the uncertainty in the annual energy use of the average lamp. Second, there is the persistence of the savings over multiple years, determined by the number of lamps that survive to a given year. For longitudinal projects, these two components have to be quantified over multiple years. This thesis addresses the problem within a Bayesian paradigm. Bayesian statistics is still relatively unknown in M&V and offers an opportunity to increase the efficiency of statistical analyses, especially for projects such as these. The thesis begins with a thorough literature review, particularly with regard to measurement uncertainty in M&V. An introduction to Bayesian statistics for M&V follows, and three methods are developed. These methods address the three main sources of uncertainty in M&V: measurement, sampling, and modelling. The first method is a low-cost energy meter calibration technique. The second is a Dynamic Linear Model (DLM) with Bayesian forecasting, with which metering sample sizes can be determined. The third is a Dynamic Generalised Linear Model (DGLM), with which population survival survey sample sizes can be determined. By law, M&V energy meters have to be calibrated regularly by accredited laboratories. This can be expensive and inconvenient, especially when the plant has to be shut down for meter removal and installation. Some jurisdictions also require meters to be calibrated in situ, in their operating environments. Yet it is shown that measurement uncertainty makes up only a small part of the overall M&V uncertainty, especially when sampling is done, which calls the cost-benefit of laboratory calibration into question. The proposed technique uses a second commercial-accuracy-grade meter (which itself contains a non-negligible measurement error) to achieve the calibration in situ. This is done by reducing the measurement error through SIMulation EXtrapolation (SIMEX). The SIMEX result is then improved by Bayesian statistics, achieving acceptable error bounds and accurate parameter estimates. The second technique uses a DLM with Bayesian forecasting to estimate the uncertainty in measuring a sample drawn from the overall population. A Genetic Algorithm (GA) is then applied to find cost-efficient sample sizes. Bayesian statistics is especially useful here, since it can use previous years' results to inform current estimates. It also allows the exact quantification of uncertainty, which standard confidence-interval techniques do not. Results show a cost saving of up to 66%.
The study then investigates the robustness of cost-efficient sampling plans in the presence of forecasting error. It is found that cost-efficient sample sizes are too small 50% of the time, owing to the lack of statistical power in the standard M&V formulae. The third technique uses a DGLM in the same way as the DLM, except that population survival survey sample sizes are investigated. The convolution of binomial survey outcomes within the GA poses a problem, and instead of a Monte Carlo simulation, the relatively new Mellin Transform Moment Calculation is applied. The technique is then extended to find stratified sampling designs for heterogeneous populations. The results show a 17-40% cost reduction, depending on the cost scheme. Finally, the DLM and DGLM are combined to design an efficient overall M&V plan in which metering and sampling costs are traded off against each other, for both simple and stratified sampling designs. Monitoring costs are reduced by 26-40%, depending on the assumed cost scheme. The results demonstrate the power and flexibility of Bayesian statistics for M&V applications, both in quantifying uncertainty exactly and in raising the efficiency of data use and thereby lowering monitoring costs.
Thesis (PhD)--University of Pretoria, 2017.
National Research Foundation
Department of Science and Technology
National Hub for the Postgraduate Programme in Energy Efficiency and Demand Side Management
Electrical, Electronic and Computer Engineering
PhD
Unrestricted
Taheri, Sona. "Learning Bayesian networks based on optimization approaches." Thesis, University of Ballarat, 2012. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/36051.
Doctor of Philosophy
PEREGO, RICCARDO. "Automated Deep Learning through Constrained Bayesian Optimization." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/314922.
In an increasingly technological and interconnected world, the amount of data is continuously growing, and decision-making algorithms are continually evolving to adapt to it. One of the major sources of this vast amount of data is the Internet of Things, in which billions of sensors exchange information over the network to perform various types of activities such as industrial and medical monitoring. In recent years, technological development has made it possible to define new high-performance hardware architectures for sensors, called Microcontrollers, which enabled a new kind of decentralized computing named Edge Computing. This computing paradigm allows sensors to run decision-making algorithms at the edge, taking immediate, local decisions instead of transferring the data to a central server for processing. To support Edge Computing, the research community started developing new techniques to efficiently manage the limited resources on these devices so that the most advanced Machine Learning models, especially Deep Neural Networks, can be applied. Automated Machine Learning is a branch of Machine Learning aimed at disclosing the power of Machine Learning to non-experts as well as efficiently supporting data scientists in designing their own data analysis pipelines. Its adoption has made it possible to develop increasingly high-performance models almost automatically. With the advent of Edge Computing, however, a specialization of Machine Learning has been emerging, known as Tiny Machine Learning (TinyML): the application of Machine Learning algorithms on devices with limited hardware resources. This thesis mainly addresses the applicability of Automated Machine Learning to generate accurate models which must also be deployable on tiny devices, specifically Microcontroller Units. More specifically, the proposed approach aims at maximizing the performance of Deep Neural Networks while satisfying the constraints associated with the limited hardware resources, including batteries, of Microcontrollers. Thanks to a close collaboration with STMicroelectronics, a leading company in the design, production and sale of microcontrollers, it was possible to develop a novel Automated Machine Learning framework that deals with the black-box constraints related to the deployability of a Deep Neural Network on these tiny devices, widely adopted in IoT applications. The application to two real-life use cases provided by STMicroelectronics (i.e., Human Activity Recognition and Image Recognition) proved that the proposed approach can efficiently find configurations for accurate and deployable Deep Neural Networks, increasing their accuracy over baseline models while drastically reducing the hardware required to run them on a microcontroller (a reduction of more than 90%). The approach was also compared against one of the state-of-the-art AutoML solutions in order to evaluate its capability to overcome the issues which currently limit the wide application of AutoML in the TinyML field. Finally, this PhD thesis suggests interesting and challenging research directions to further increase the applicability of the proposed approach by integrating recent and innovative research results (e.g., weakly defined search spaces, Meta-Learning, Multi-objective and Multi-Information Source optimization).
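For context, one standard way to fold black-box feasibility constraints (such as deployability on a resource-limited device) into Bayesian optimization is to weight expected improvement by the surrogate's probability of feasibility. The sketch below is a hedged illustration of that general construction, not the thesis's exact acquisition function; all numbers and names are invented:

```python
# Constrained expected improvement: EI on the objective times the GP
# probability that the constraint surrogate stays within its limit.
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, best_feasible, mu_c, sd_c, limit):
    z = (best_feasible - mu_f) / np.maximum(sd_f, 1e-12)
    ei = (best_feasible - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)
    p_feas = norm.cdf((limit - mu_c) / np.maximum(sd_c, 1e-12))  # P[c(x) <= limit]
    return ei * p_feas

# Example: score two candidates (illustrative numbers; limit = RAM budget in kB).
print(constrained_ei(np.array([0.2, 0.4]), np.array([0.1, 0.3]),
                     best_feasible=0.5,
                     mu_c=np.array([80.0, 120.0]), sd_c=np.array([10.0, 10.0]),
                     limit=100.0))
```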
Fowkes, Jaroslav Mrazek. "Bayesian numerical analysis : global optimization and other applications." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:ab268fe7-f757-459e-b1fe-a4a9083c1cba.
Lorenz, Romy. "Neuroadaptive Bayesian optimization : implications for the cognitive sciences." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/51419.
Fu, Stefan Xueyan. "Finding Optimal Jetting Waveform Parameters with Bayesian Optimization." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231374.
Jet printing is a method for depositing solder paste or other electronic materials onto circuit boards in surface-mount electronics production. The solder paste is ejected onto the boards by a piston driven by a piezoelectric unit. The quality of the jetted result can be affected by a number of factors, for example the waveform of the signal used to actuate the piezo unit. In theory any waveform is possible, but in practice a waveform defined by seven parameters is used. Finding optimal values for these parameters is an optimization problem that cannot be solved with derivative-based methods, since the objective is a black-box function accessible only through noisy and time-consuming evaluations. The current method for optimizing the parameters is a modified grid search over the two most important parameters, with the remaining five held fixed. Bayesian optimization is a heuristic, model-based search method for data-efficient optimization of noisy functions for which derivatives cannot be computed. An implementation of Bayesian optimization was adapted to waveform parameter optimization and used to optimize a number of combinations of the parameters. All runs gave similar values for the two known parameters, with differences within the uncertainty due to measurement noise. The results for the remaining five parameters were contradictory, but a closer inspection of the model hyperparameters showed that this was because those five parameters have only a minimal influence on the jetted result; the contradictory results can therefore be explained entirely by differences due to measurement noise. Based on the results, Bayesian optimization appears to be a suitable and efficient method for waveform parameter optimization. Finally, some possibilities for further development of the method are suggested.
Lévesque, Julien-Charles. "Bayesian hyperparameter optimization : overfitting, ensembles and conditional spaces." Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28364.
In this thesis, we consider the analysis and extension of Bayesian hyperparameter optimization methodology to various problems related to supervised machine learning. The contributions of the thesis concern 1) the overestimation of the generalization accuracy of hyperparameters and models resulting from Bayesian optimization, 2) an application of Bayesian optimization to ensemble learning, and 3) the optimization of spaces with a conditional structure such as found in automatic machine learning (AutoML) problems. Generally, machine learning algorithms have some free parameters, called hyperparameters, allowing to regulate or modify these algorithms' behaviour. For the longest time, hyperparameters were tuned by hand or with exhaustive search algorithms. Recent work highlighted the conceptual advantages of optimizing hyperparameters with more rational methods, such as Bayesian optimization. Bayesian optimization is a very versatile framework for the optimization of unknown and non-derivable functions, grounded strongly in probabilistic modelling and uncertainty estimation, and we adopt it for the work in this thesis. We first briefly introduce Bayesian optimization with Gaussian processes (GP) and describe its application to hyperparameter optimization. Next, original contributions are presented on the dangers of overfitting during hyperparameter optimization, where the optimization ends up learning the validation folds. We show that there is indeed overfitting during the optimization of hyperparameters, even with cross-validation strategies, and that it can be reduced by methods such as a reshuffling of the training and validation splits at every iteration of the optimization. Another promising method is demonstrated in the use of a GP's posterior mean for the selection of final hyperparameters, rather than directly returning the model with the minimal cross-validation error. Both suggested approaches are demonstrated to deliver significant improvements in the generalization accuracy of the final selected model on a benchmark of 118 datasets. The next contributions are provided by an application of Bayesian hyperparameter optimization for ensemble learning. Stacking methods have been exploited for some time to combine multiple classifiers in a meta-classifier system; these can be applied to the end result of a Bayesian hyperparameter optimization pipeline by keeping the best classifiers and combining them at the end. Our Bayesian ensemble optimization method consists in a modification of the Bayesian optimization pipeline to search for the best hyperparameters to use for an ensemble, which is different from optimizing hyperparameters for the performance of a single model. The approach has the advantage of not requiring the training of more models than a regular Bayesian hyperparameter optimization. Experiments show the potential of the suggested approach on three different search spaces and many datasets. The last contributions are related to the optimization of more complex hyperparameter spaces, namely spaces that contain a structure of conditionality. Conditions arise naturally in hyperparameter optimization when one defines a model with multiple components – certain hyperparameters then only need to be specified if their parent component is activated. One example of such a space is the combined algorithm selection and hyperparameter optimization, now better known as AutoML, where the objective is to choose the base model and optimize its hyperparameters.
We thus highlight techniques and propose new kernels for GPs that handle structure in such spaces in a principled way. Contributions are also supported by experimental evaluation on many datasets. Overall, the thesis regroups several works directly related to Bayesian hyperparameter optimization. The thesis showcases novel ways to apply Bayesian optimization for ensemble learning, as well as methodologies to reduce overfitting or optimize more complex spaces.
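The posterior-mean selection idea from the first contribution can be illustrated in a few lines: instead of returning the configuration with the smallest observed cross-validation error (which may chase validation noise), return the one minimizing the GP posterior mean. A hedged sketch; the configurations and errors below are invented for illustration:

```python
# Select final hyperparameters by the GP posterior mean over evaluated
# configurations rather than by the raw minimal cross-validation error.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

X_evals = np.array([[0.1], [0.5], [1.0], [2.0]])   # configs already tried
cv_err = np.array([0.31, 0.24, 0.26, 0.35])        # noisy CV errors observed

gp = GaussianProcessRegressor(alpha=1e-2, normalize_y=True).fit(X_evals, cv_err)
naive_pick = X_evals[np.argmin(cv_err)]            # may chase validation noise
posterior_pick = X_evals[np.argmin(gp.predict(X_evals))]
print(naive_pick, posterior_pick)
```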
Kawaguchi, Kenji Ph D. Massachusetts Institute of Technology. "Towards practical theory : Bayesian optimization and optimal exploration." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/103670.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 83-87).
This thesis presents novel principles to improve the theoretical analyses of a class of methods, aiming to provide theoretically driven yet practically useful methods. The thesis focuses on a class of methods, called bound-based search, which includes several planning algorithms (e.g., the A* algorithm and the UCT algorithm), several optimization methods (e.g., Bayesian optimization and Lipschitz optimization), and some learning algorithms (e.g., PAC-MDP algorithms). For Bayesian optimization, this work solves an open problem and achieves an exponential convergence rate. For learning algorithms, this thesis proposes a new analysis framework, called PAC-RMDP, and improves the previous theoretical bounds. The PAC-RMDP framework also provides a unifying view of some previous near-Bayes optimal and PAC-MDP algorithms. All proposed algorithms derived on the basis of the new principles produced competitive results in our numerical experiments with standard benchmark tests.
by Kenji Kawaguchi.
S.M.
Petit, Sébastien. "Improved Gaussian process modeling : Application to Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG063.
This manuscript focuses on Bayesian modeling of unknown functions with Gaussian processes. This task arises notably in industrial design, with numerical simulators whose computation time can reach several hours. Our work focuses on the problem of model selection and validation and goes in two directions. The first part studies empirically the current practices for stationary Gaussian process modeling. Several issues in Gaussian process parameter selection are tackled. A study of parameter selection criteria is the core of this part. It concludes that the choice of a family of models is more important than that of the selection criterion. More specifically, the study shows that the regularity parameter of the Matérn covariance function is more important than the choice of a likelihood or cross-validation criterion. Moreover, the analysis of the numerical results shows that this parameter can be selected satisfactorily by the criteria, which leads to a practical recommendation. Then, particular attention is given to the numerical optimization of the likelihood criterion. Observing important inconsistencies between the different libraries available for Gaussian process modeling, as also reported by Erickson et al. (2018), we propose elementary numerical recipes making it possible to obtain significant gains both in terms of likelihood and model accuracy. Finally, the analytical formulas for computing cross-validation criteria are revisited from a new angle and enriched with similar formulas for the gradients. This last contribution aligns the computational cost of a class of cross-validation criteria with that of the likelihood. The second part presents a goal-oriented methodology, designed to improve the accuracy of the model in an (output) range of interest. The approach consists in relaxing the interpolation constraints on a relaxation range disjoint from the range of interest. We also propose an approach for automatically selecting the relaxation range. This new method can implicitly manage potentially complex regions of interest in the input space with few parameters. Outside, it learns non-parametrically a transformation improving the predictions on the range of interest. Numerical simulations show the benefits of the approach for Bayesian optimization, where one is interested in low values in the minimization framework. Moreover, the theoretical convergence of the method is established under some assumptions.
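For reference, the classical closed-form leave-one-out identities that this line of work revisits (see Rasmussen and Williams, Sec. 5.4.2) can be computed directly from the inverse covariance. A minimal sketch for a zero-mean GP, assuming the noise variance is already folded into K; this only illustrates the known identities, not the thesis's extensions:

```python
# Analytic leave-one-out predictions for a zero-mean GP.
import numpy as np

def loo_mean_var(K, y):
    """K: (n, n) covariance of the observations (noise included); y: (n,).
    Returns LOO predictive means and variances without refitting n models."""
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    d = np.diag(Kinv)
    return y - alpha / d, 1.0 / d
```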
Gaul, Nicholas John. "Modified Bayesian Kriging for noisy response problems and Bayesian confidence-based reliability-based design optimization." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1322.
Lee, Chung Hyun. "Bayesian collaborative sampling: adaptive learning for multidisciplinary design." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42894.
Full textConjeevaram, Krishnakumar Naveen Kartik. "A Bayesian approach to feed reconstruction." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82414.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 83-86).
In this thesis, we developed a Bayesian approach to estimate the detailed composition of an unknown feedstock in a chemical plant by combining information from a few bulk measurements of the feedstock in the plant along with some detailed composition information of a similar feedstock that was measured in a laboratory. The complexity of the Bayesian model combined with the simplex-type constraints on the weight fractions makes it difficult to sample from the resulting high-dimensional posterior distribution. We reviewed and implemented different algorithms to generate samples from this posterior that satisfy the given constraints. We tested our approach on a data set from a plant.
by Naveen Kartik Conjeevaram Krishnakumar.
S.M.
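The simplex constraint on the weight fractions mentioned in this abstract is exactly what makes standard samplers awkward; one common building block is to propose moves in an unconstrained space and map them to the simplex, e.g. via a softmax transformation. A hedged illustration, not the thesis's algorithm:

```python
# Map an unconstrained vector to positive weight fractions summing to one.
import numpy as np

def to_simplex(z):
    """z in R^{d-1} -> d weights on the probability simplex (softmax with
    the last coordinate anchored at zero for identifiability)."""
    e = np.exp(np.append(z, 0.0))
    return e / e.sum()

print(to_simplex(np.array([0.3, -1.2, 0.5])))  # four fractions, sum = 1
```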
Horiguchi, Akira. "Bayesian Additive Regression Trees: Sensitivity Analysis and Multiobjective Optimization." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1606841319315633.
Matosevic, Antonio. "On Bayesian optimization and its application to hyperparameter tuning." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74962.
Full textKrishnaswami, Sreedhar Bharathwaj. "Bayesian Optimization for Neural Architecture Search using Graph Kernels." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291219.
Neural architecture search is a popular method for automating architecture design. Bayesian optimization is a common approach to hyperparameter optimization and can estimate a function from a limited number of samples. However, Bayesian optimization methods are not a natural fit for architecture search, since they expect vector inputs whereas architectures are high-dimensional graph data. This thesis presents a Bayesian approach with Gaussian process priors that uses graph kernels specifically designed to operate in the higher-dimensional graph space. We implement three different graph kernels and show that, on NAS-Bench-101 data, even an untrained graph convolutional network kernel outperforms previous methods in terms of the best network found and the number of samples required to find it. We follow the AutoML guidelines to make this work reproducible.
Pelamatti, Julien. "Mixed-variable Bayesian optimization : application to aerospace system design." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I003.
Within the framework of complex system design, such as aircraft and launch vehicles, the presence of computationally intensive objective and/or constraint functions (e.g., finite element models and multidisciplinary analyses), coupled with the dependence on discrete and unordered technological design choices, results in challenging optimization problems. Furthermore, part of these technological choices is associated with a number of specific continuous and discrete design variables which must be taken into consideration only if specific technological and/or architectural choices are made. As a result, the optimization problem which must be solved in order to determine the optimal system design presents a dynamically varying search space and feasibility domain.
The few existing algorithms which allow solving this particular type of problem tend to require a large number of function evaluations in order to converge to the feasible optimum, and are therefore inadequate when dealing with the computationally intensive problems often encountered within the design of complex systems. For this reason, this thesis explores the possibility of performing constrained mixed-variable and variable-size design space optimization by relying on surrogate model-based design optimization performed with the help of Gaussian processes, also known as Bayesian optimization. More specifically, three main axes are discussed. First, Gaussian process surrogate modeling of mixed continuous/discrete functions and the associated challenges are extensively discussed. A unifying formalism is proposed in order to facilitate the description and comparison of the existing kernels that adapt Gaussian processes to the presence of discrete unordered variables. Furthermore, the actual modeling performance of these various kernels is tested and compared on a set of analytical and design-related benchmarks with different characteristics and parameterizations.
In the second part of the thesis, the possibility of extending the mixed continuous/discrete surrogate modeling to a context of Bayesian optimization is discussed. The theoretical feasibility of said extension in terms of objective/constraint function modeling as well as acquisition function definition and optimization is shown. Different possible alternatives are considered and described. Finally, the performance of the proposed optimization algorithm, with various kernel parameterizations and different initializations, is tested on a number of analytical and design-related test cases and compared to reference algorithms.
In the last part of this manuscript, two alternative ways of adapting the previously discussed mixed continuous/discrete Bayesian optimization algorithms to solve variable-size design space problems (i.e., problems characterized by a dynamically varying design space) are proposed. The first adaptation is based on the parallel optimization of several sub-problems coupled with a computational budget allocation based on the information provided by the surrogate models. The second adaptation, instead, is based on the definition of a kernel allowing to compute the covariance between samples belonging to partially different search spaces, based on the hierarchical grouping of design variables. Finally, the two alternatives are tested and compared on a set of analytical and design-related benchmarks. Overall, it is shown that the proposed optimization methods converge to the various constrained problem optimum neighborhoods considerably faster than the reference methods, thus representing a promising tool for the design of complex systems.
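As a concrete illustration of the kind of kernel such a unifying formalism covers, one common construction multiplies a continuous RBF kernel by an exchangeable categorical kernel that discounts disagreeing levels. A hedged sketch with illustrative parameter values and category names, not the thesis's specific kernels:

```python
# Product kernel for mixed continuous/categorical inputs: RBF on the
# continuous part times an exchangeable "Hamming" kernel on categories.
import numpy as np

def mixed_kernel(x1, x2, c1, c2, ls=1.0, theta=0.5):
    cont = np.exp(-0.5 * np.sum((x1 - x2) ** 2) / ls ** 2)  # RBF, continuous dims
    cat = np.prod(np.where(np.asarray(c1) == np.asarray(c2), 1.0, theta))
    return cont * cat   # valid: product of two positive-definite kernels

k = mixed_kernel(np.array([0.2, 1.0]), np.array([0.3, 0.9]),
                 c1=["alu", "bolted"], c2=["steel", "bolted"])
print(k)
```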
Fischer, Christopher Corey. "Bayesian Inspired Multi-Fidelity Optimization with Aerodynamic Design Application." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1621948051637597.
Bade, Alexander. "Bayesian portfolio optimization from a static and dynamic perspective." Münster: Verl.-Haus Monsenstein und Vannerdat, 2009. http://d-nb.info/996985085/04.
Full textShahriari, Bobak. "Practical Bayesian optimization with application to tuning machine learning algorithms." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59104.
Full textScience, Faculty of
Computer Science, Department of
Graduate
Brochu, Eric. "Interactive Bayesian optimization : learning user preferences for graphics and animation." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/30519.
Chen, Zhaozhong. "Visual-Inertial SLAM Extrinsic Parameter Calibration Based on Bayesian Optimization." Thesis, University of Colorado at Boulder, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10789260.
VI-SLAM (Visual-Inertial Simultaneous Localization and Mapping) is a popular approach to robot navigation and tracking. With the help of sensor fusion between the IMU and the camera, VI-SLAM can give a more accurate solution for navigation. One important problem that needs to be solved in VI-SLAM is that we need to know the accurate relative pose between the camera and the IMU, called the extrinsic parameter. However, our measurement of the rotation and translation between the IMU and the camera is noisy. If the measurement is slightly off, the result of the SLAM system will drift further from the ground truth after a long run, so optimization is necessary. This thesis uses a global optimization method called Bayesian Optimization to optimize the relative pose between the IMU and the camera based on the sliding-window residual output from VI-SLAM. The advantage of using Bayesian Optimization is that we can get an accurate pose estimate between the IMU and the camera over a large search range. What's more, thanks to the Gaussian process or T process underlying Bayesian Optimization, we obtain a result with a known uncertainty, which many optimization solutions cannot provide.
Shende, Sourabh. "Bayesian Topology Optimization for Efficient Design of Origami Folding Structures." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592170569337763.
Full textParno, Matthew David. "A multiscale framework for Bayesian inference in elliptic problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65322.
Page 118 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 112-117).
The Bayesian approach to inference problems provides a systematic way of updating prior knowledge with data. A likelihood function involving a forward model of the problem is used to incorporate data into a posterior distribution. The standard method of sampling this distribution is Markov chain Monte Carlo which can become inefficient in high dimensions, wasting many evaluations of the likelihood function. In many applications the likelihood function involves the solution of a partial differential equation so the large number of evaluations required by Markov chain Monte Carlo can quickly become computationally intractable. This work aims to reduce the computational cost of sampling the posterior by introducing a multiscale framework for inference problems involving elliptic forward problems. Through the construction of a low dimensional prior on a coarse scale and the use of iterative conditioning technique the scales are decouples and efficient inference can proceed. This work considers nonlinear mappings from a fine scale to a coarse scale based on the Multiscale Finite Element Method. Permeability characterization is the primary focus but a discussion of other applications is also provided. After some theoretical justification, several test problems are shown that demonstrate the efficiency of the multiscale framework.
by Matthew David Parno.
S.M.
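For context, the baseline whose high-dimensional inefficiency motivates the multiscale framework is a random-walk Metropolis-Hastings sampler; a minimal sketch, with a placeholder log-density standing in for a PDE-based posterior:

```python
# Random-walk Metropolis-Hastings: accept a proposal with probability
# min(1, posterior ratio); each step costs one log-density evaluation.
import numpy as np

rng = np.random.default_rng(3)
logpost = lambda x: -0.5 * np.sum(x**2)   # placeholder unnormalized log-density

x = np.zeros(10)
chain = []
for _ in range(5000):
    prop = x + 0.5 * rng.normal(size=x.size)
    if np.log(rng.uniform()) < logpost(prop) - logpost(x):
        x = prop                           # accept; otherwise keep current x
    chain.append(x)
print("sample mean:", np.mean(chain, axis=0)[:3])
```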
Pelikan, Martin. "Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms." Berlin [u.a.]: Springer, 2005. http://www.loc.gov/catdir/toc/fy053/2004116659.html.
Jeggle, Kai. "Scalable Hyperparameter Optimization: Combining Asynchronous Bayesian Optimization With Efficient Budget Allocation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280340.
Automated hyperparameter tuning has become an integral part of optimizing machine learning (ML) workflows. Sequential model-based optimization algorithms, such as Bayesian Optimization (BO), have proven to be data-efficient with strong final performance. However, the growing complexity and training times of ML models demand a transition from sequential to asynchronous, distributed hyperparameter tuning. The literature offers various strategies for modifying BO to work in an asynchronous setting. By combining asynchronous BO with budget allocation strategies, poor trials can be stopped early to free expensive resources for other trials to explore the search space, further improving efficient resource use and thereby scalability. Maggy is an open-source asynchronous hyperparameter optimization framework built on Spark that transparently schedules and manages hyperparameter trials. In this thesis we present new support for a plug-and-play API that combines asynchronous Bayesian optimization algorithms with budget allocation strategies such as Hyperband or Median Early Stopping. This combines the best of both worlds, providing high scalability through efficient resource use together with strong final performance. We experimentally evaluate different combinations of asynchronous Bayesian Optimization with budget allocation algorithms and demonstrate their competitive performance and capability.
Feng, Chi S. M. Massachusetts Institute of Technology. "Optimal Bayesian experimental design in the presence of model error." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97790.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 87-90).
The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction. We propose an information theoretic framework and algorithms for robust optimal experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters, particularly in situations where experiments are costly. Our framework employs a Bayesian statistical setting, which naturally incorporates heterogeneous sources of information. An objective function reflects expected information gain from proposed experimental designs. Monte Carlo sampling is used to evaluate the expected information gain, and stochastic approximation algorithms make optimization feasible for computationally intensive and high-dimensional problems. A key aspect of our framework is the introduction of model calibration discrepancy terms that are used to "relax" the model so that proposed optimal experiments are more robust to model error or inadequacy. We illustrate the approach via several model problems and misspecification scenarios. In particular, we show how optimal designs are modified by allowing for model error, and we evaluate the performance of various designs by simulating "real-world" data from models not considered explicitly in the optimization objective.
by Chi Feng.
S.M.
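The expected-information-gain objective described in this abstract is commonly estimated with a nested Monte Carlo construction; a hedged sketch with a deliberately simple placeholder model, not the thesis's simulators:

```python
# Nested Monte Carlo estimator of expected information gain for a design d:
# EIG(d) ~ (1/N) sum_i [ log p(y_i|theta_i,d) - log (1/M) sum_j p(y_i|theta_j,d) ]
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, d):                      # forward model + noise (placeholder)
    return theta * d + 0.1 * rng.normal()

def loglik(y, theta, d, s=0.1):
    return -0.5 * ((y - theta * d) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

def eig(d, N=500, M=500):
    thetas = rng.normal(size=N)              # outer prior draws
    inner = rng.normal(size=M)               # inner draws for the evidence term
    total = 0.0
    for th in thetas:
        y = simulate(th, d)
        total += loglik(y, th, d) - np.log(
            np.mean(np.exp([loglik(y, tj, d) for tj in inner])))
    return total / N

print(eig(1.0), eig(0.1))   # in this toy model, larger |d| is more informative
```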
Lieberman, Chad Eric. "Parameter and state model reduction for Bayesian statistical inverse problems." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54213.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 113-118).
Decisions based on single-point estimates of uncertain parameters neglect regions of significant probability. We consider a paradigm based on decision-making under uncertainty including three steps: identification of parametric probability by solution of the statistical inverse problem, propagation of that uncertainty through complex models, and solution of the resulting stochastic or robust mathematical programs. In this thesis we consider the first of these steps, solution of the statistical inverse problem, for partial differential equations (PDEs) parameterized by field quantities. When these field variables and forward models are discretized, the resulting system is high-dimensional in both parameter and state space. The system is therefore expensive to solve. The statistical inverse problem is one of Bayesian inference. With assumption on prior belief about the form of the parameter and an assignment of normal error in sensor measurements, we derive the solution to the statistical inverse problem analytically, up to a constant of proportionality. The parametric probability density, or posterior, depends implicitly on the parameter through the forward model. In order to understand the distribution in parameter space, we must sample. Markov chain Monte Carlo (MCMC) sampling provides a method by which a random walk is constructed through parameter space. By following a few simple rules, the random walk converges to the posterior distribution and the resulting samples represent draws from that distribution. This set of samples from the posterior can be used to approximate its moments.
(cont.) In the multi-query setting, it is computationally intractable to utilize the full-order forward model to perform the posterior evaluations required in the MCMC sampling process. Instead, we implement a novel reduced-order model which reduces in both parameter and state. The reduced bases are generated by greedy sampling. We iteratively sample the field in parameter space which maximizes the error between full-order and current reduced-order model outputs. The parameter is added to its basis, and then a high-fidelity forward model is solved for the state, which is then added to the state basis. The reduction in state accelerates posterior evaluation, while the reduction in parameter allows the MCMC sampling to be conducted with a simpler, non-adaptive Metropolis-Hastings algorithm. In contrast, the full-order parameter space is high-dimensional and requires more expensive adaptive methods. We demonstrate for the groundwater inverse problem in 1-D and 2-D that the reduced-order implementation produces accurate results with a factor of three speed-up even for model problems of dimension N ≈ 500. Our complexity analysis demonstrates that the same approach applied to the large-scale models of interest (e.g. N > 10⁴) results in a speed-up of three orders of magnitude.
by Chad Eric Lieberman.
S.M.
Ramadan, Saleem Z. "Bayesian Multi-objective Design of Reliability Testing." Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1298474937.
Bahg, Giwon. "Adaptive Design Optimization in Functional MRI Experiments." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531836392551605.
Full textLam, Remi Roger Alain Paul. "Scaling Bayesian optimization for engineering design : lookahead approaches and multifidelity dimension reduction." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119289.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-111).
The objective functions and constraints that arise in engineering design problems are often non-convex, multi-modal and do not have closed-form expressions. Evaluation of these functions can be expensive, requiring a time-consuming computation (e.g., solving a set of partial differential equations) or a costly experiment (e.g., conducting wind-tunnel measurements). Accordingly, whether the task is formal optimization or just design space exploration, there is often a finite budget specifying the maximum number of evaluations of the objectives and constraints allowed. Bayesian optimization (BO) has become a popular global optimization technique for solving problems governed by such expensive functions. BO iteratively updates a statistical model and uses it to quantify the expected benefits of evaluating a given design under consideration. The next design to evaluate can be selected in order to maximize such benefits. Most existing BO algorithms are greedy strategies, making decisions to maximize the immediate benefits, without planning over several steps. This is typically a suboptimal approach. In the first part of this thesis, we develop a novel BO algorithm with planning capabilities. This algorithm selects the next design to evaluate in order to maximize the long-term expected benefit obtained at the end of the optimization. This lookahead approach requires tools to quantify the effects a decision has over several steps in the future. To do so, we use Gaussian processes as generative models and combine them with dynamic programming to formulate the optimal planning strategy. We first illustrate the proposed algorithm on unconstrained optimization problems. In the second part, we demonstrate how the proposed lookahead BO algorithm can be extended to handle non-linear expensive inequality constraints, a ubiquitous situation in engineering design. We illustrate the proposed lookahead constrained BO algorithm on a reacting flow optimization problem. In the last part of this thesis, we develop techniques to scale BO to high dimension by exploiting a special structure arising when the objective function varies only in a low-dimensional subspace. Such a subspace can be detected using the (randomized) method of Active Subspaces. We propose a multifidelity active subspace algorithm that reduces the computational cost by leveraging a cheap-to-evaluate approximation of the objective function. We analyze the number of evaluations sufficient to control the error incurred, both in expectation and with high probability. We illustrate the proposed algorithm on an ONERA M6 wing shape-optimization problem.
by Remi Roger Alain Paul Lam.
Ph. D.
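The lookahead formulation described in this abstract can be stated generically as a finite-horizon dynamic program; the notation below is illustrative rather than the thesis's exact formulation:

```latex
% S_k: GP state after k evaluations; r: one-step reward; N: evaluation budget.
\[
  V_k(S_k) \;=\; \max_{x}\; \mathbb{E}_{\,y \mid S_k,\, x}
  \Big[ r(S_k, x, y) \;+\; V_{k+1}\big(S_k \cup \{(x, y)\}\big) \Big],
  \qquad V_N \equiv 0.
\]
```

Greedy acquisition functions correspond to truncating this recursion after a single step, which is why planning over several steps can outperform them under a finite budget.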
Liu, Jeffrey Ph D. Massachusetts Institute of Technology. "On the effect and value of information in Bayesian routing games." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106962.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 52-54).
We consider the problem of estimating the individual and social value of information in routing games. We propose a Bayesian congestion game that accounts for the heterogeneity in the commuters' access to information about traffic incidents. The model divides the population of commuters into two sub-populations, or types, based on their information about incidents. Types H and L have high and low information about incidents, respectively. Each population routes its demand on an incident-prone, parallel-route network. The cost function for each route is affine in its usage level, and its slope increases with the route's incident state. Both populations (player types) know the demand of each type, the route cost functions, and the incident probability. In addition, in our model, the commuters in the type-H population receive private information on the true realization of the incident state. We analyze both the individual cost for each population and the aggregate (social) cost as the type-H population size increases. We observe that, in equilibrium, both these costs are non-monotonic and non-linear as the fraction of the total demand that is type-H increases. Our main results are as follows: First, information improves individual welfare (i.e., when a commuter shifts from the type-L population to the type-H population), but the value of information is zero beyond a certain threshold fraction. Second, there exists another threshold (lower than the first) after which increasing the relative fraction of type-H commuters does not reduce the aggregate social cost.
by Jeffrey Liu.
S.M.
Wogrin, Sonja. "Model reduction for dynamic sensor steering : a Bayesian approach to inverse problems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43739.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 97-101).
In many settings, distributed sensors provide dynamic measurements over a specified time horizon that can be used to reconstruct information such as parameters, states or initial conditions. This estimation task can be posed formally as an inverse problem: given a model and a set of measurements, estimate the parameters of interest. We consider the specific problem of computing in real-time the prediction of a contamination event, based on measurements obtained by mobile sensors. The spread of the contamination is modeled by the convection-diffusion equation. A Bayesian approach to the inverse problem yields an estimate of the probability density function of the initial contaminant concentration, which can then be propagated through the forward model to determine the predicted contaminant field at some future time and its associated uncertainty distribution. Sensor steering is effected by formulating and solving an optimization problem that seeks the sensor locations that minimize the uncertainty in this prediction. An important aspect of this Dynamic Sensor Steering Algorithm is the ability to execute in real-time. We achieve this through reduced-order modeling, which (for our two-dimensional examples) yields models that can be solved two orders of magnitude faster than the original system, but only incur average relative errors of magnitude O(10⁻³). The methodology is demonstrated on the contaminant transport problem, but is applicable to a broad class of problems where we wish to observe certain phenomena whose location or features are not known a priori.
by Sonja Wogrin.
S.M.
Tohme, Tony. "The Bayesian validation metric : a framework for probabilistic model calibration and validation." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126919.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 109-114).
In model development, model calibration and validation play complementary roles toward learning reliable models. In this thesis, we propose and develop the "Bayesian Validation Metric" (BVM) as a general model validation and testing tool. We show that the BVM can represent all the standard validation metrics - square error, reliability, probability of agreement, frequentist, area, probability density comparison, statistical hypothesis testing, and Bayesian model testing - as special cases while improving, generalizing and further quantifying their uncertainties. In addition, the BVM assists users and analysts in designing and selecting their models by allowing them to specify their own validation conditions and requirements. Further, we expand the BVM framework to a general calibration and validation framework by inverting the validation mathematics into a method for generalized Bayesian regression and model learning. We perform Bayesian regression based on a user's definition of model-data agreement. This allows for model selection on any type of data distribution, unlike Bayesian and standard regression techniques, that "fail" in some cases. We show that our tool is capable of representing and combining Bayesian regression, standard regression, and likelihood-based calibration techniques in a single framework while being able to generalize aspects of these methods. This tool also offers new insights into the interpretation of the predictive envelopes in Bayesian regression, standard regression, and likelihood-based methods while giving the analyst more control over these envelopes.
by Tony Tohme.
S.M.
S.M. Massachusetts Institute of Technology, Computation for Design and Optimization Program
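As a toy illustration of the kind of agreement probability such validation metrics quantify, one can Monte Carlo estimate the probability that model and data outputs agree to within a tolerance; all distributions below are invented placeholders, not the BVM itself:

```python
# Monte Carlo estimate of P(|model output - data output| < eps).
import numpy as np

rng = np.random.default_rng(4)
model_draws = rng.normal(1.00, 0.05, 10_000)  # predictive samples (placeholder)
data_draws = rng.normal(1.02, 0.04, 10_000)   # measurement samples (placeholder)
eps = 0.1                                      # user-chosen agreement tolerance
print(np.mean(np.abs(model_draws - data_draws) < eps))
```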
Monson, Christopher Kenneth. "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1292.pdf.
Xue, Liang. "Multi-Model Bayesian Analysis of Data Worth and Optimization of Sampling Scheme Design." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/203432.
Francis, Gilad. "Autonomous Exploration over Continuous Domains." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/20443.
Full textSong, Mingzhou. "Integrated surface model optimization from images and prior shape knowledge /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6115.
Full textCarroll, James Lamond. "A Bayesian Decision Theoretical Approach to Supervised Learning, Selective Sampling, and Empirical Function Optimization." Diss., CLICK HERE for online access, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3413.pdf.
Full textMashamba, Able. "Bayesian optimization and uncertainty analysis of complex environmental models, with applications in watershed management." Diss., Montana State University, 2010. http://etd.lib.montana.edu/etd/2010/mashamba/MashambaA1210.pdf.
Full textBasak, Subhasish. "Multipathogen quantitative risk assessment in raw milk soft cheese : monotone integration and Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG021.
This manuscript focuses on Bayesian optimization of a quantitative microbiological risk assessment (QMRA) model, in the context of the European project ArtiSaneFood, supported by the PRIMA program. The primary goal is to establish efficient bio-intervention strategies for cheese producers in France. This work is divided into three broad directions: 1) development and implementation of a multipathogen QMRA model for raw milk soft cheese, 2) studying monotone integration methods for estimating outputs of the QMRA model, and 3) designing a Bayesian optimization algorithm tailored for a stochastic and computationally expensive simulator. In the first part we propose a multipathogen QMRA model, built upon existing studies in the literature (see, e.g., Bonifait et al., 2021, Perrin et al., 2014, Sanaa et al., 2004, Strickland et al., 2023). This model estimates the public health impact of foodborne illness caused by pathogenic STEC, Salmonella and Listeria monocytogenes, which can potentially be present in raw milk soft cheese. This farm-to-fork model also implements the intervention strategies related to milk and cheese testing, which allows the cost of intervention to be estimated. An implementation of the QMRA model for STEC is provided in R and in the FSKX framework (Basak et al., under review). The second part of this manuscript investigates the potential application of sequential integration methods, leveraging the monotonicity and boundedness properties of the simulator outputs. We conduct a comprehensive literature review on existing integration methods (see, e.g., Kiefer, 1957, Novak, 1992), and delve into the theoretical findings regarding their convergence. Our contribution includes proposing enhancements to these methods and a discussion of the challenges associated with their application in the QMRA domain. In the final part of this manuscript, we propose a Bayesian multiobjective optimization algorithm for estimating the Pareto-optimal inputs of a stochastic and computationally expensive simulator. The proposed approach is motivated by the principle of Stepwise Uncertainty Reduction (SUR) (see, e.g., Vazquez and Bect, 2009, Vazquez and Martinez, 2006, Villemonteix et al., 2007), with a weighted integrated mean squared error (w-IMSE) based sampling criterion focused on the estimation of the Pareto front. A numerical benchmark is presented, comparing the proposed algorithm with PALS (Pareto Active Learning for Stochastic simulators) (Barracosa et al., 2021), over a set of bi-objective test problems. We also propose an extension (Basak et al., 2022a) of the PALS algorithm, tailored to the QMRA application case.
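As a small illustration of the bi-objective setting targeted by PALS-type algorithms, the Pareto-optimal subset of a batch of noise-free objective evaluations (minimization) can be extracted as follows; this helper is illustrative and not part of the thesis's SUR machinery:

```python
# Extract the non-dominated (Pareto-optimal) rows of an objective matrix.
import numpy as np

def pareto_mask(F):
    """F: (n, k) array of objective vectors; True where no other row
    dominates (<= in every objective, < in at least one)."""
    mask = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominated = np.any(np.all(F <= F[i], axis=1) &
                           np.any(F < F[i], axis=1))
        mask[i] = not dominated
    return mask

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(F[pareto_mask(F)])   # keeps (1,4), (2,2), (4,1); drops dominated (3,3)
```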
Shayegh, Soheil. "Learning in integrated optimization models of climate change and economy." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54012.
Full textZhu, Zhanxing. "Integrating local information for inference and optimization in machine learning." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20980.