Academic literature on the topic 'Bayesian Optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian Optimization.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
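For example, clicking the button next to the first journal article below would produce, among others, the following renderings (shown here for illustration):

APA: Nguyen, T. D., Gupta, S., Rana, S., & Venkatesh, S. (2018). Stable Bayesian optimization. International Journal of Data Science and Analytics, 6(4), 327–339. https://doi.org/10.1007/s41060-018-0119-9

MLA: Nguyen, Thanh Dai, et al. "Stable Bayesian Optimization." International Journal of Data Science and Analytics, vol. 6, no. 4, 2018, pp. 327–39.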

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bayesian Optimization"

1. Nguyen, Thanh Dai, Sunil Gupta, Santu Rana, and Svetha Venkatesh. "Stable Bayesian optimization." International Journal of Data Science and Analytics 6, no. 4 (April 9, 2018): 327–39. http://dx.doi.org/10.1007/s41060-018-0119-9.

2. Ahmed, Mohamed Osama, Sharan Vaswani, and Mark Schmidt. "Combining Bayesian optimization and Lipschitz optimization." Machine Learning 109, no. 1 (January 2020): 79–102. http://dx.doi.org/10.1007/s10994-019-05833-y.

3. Motoyama, Yuichi, Ryo Tamura, Kazuyoshi Yoshimi, Kei Terayama, Tsuyoshi Ueno, and Koji Tsuda. "Bayesian optimization package: PHYSBO." Computer Physics Communications 278 (September 2022): 108405. http://dx.doi.org/10.1016/j.cpc.2022.108405.

4. Ewerhart, Christian. "Bayesian optimization and genericity." Operations Research Letters 21, no. 5 (January 1997): 243–48. http://dx.doi.org/10.1016/s0167-6377(97)00050-3.

5. Cochran, James J., Martin S. Levy, and Jeffrey D. Camm. "Bayesian coverage optimization models." Journal of Combinatorial Optimization 19, no. 2 (June 29, 2008): 158–73. http://dx.doi.org/10.1007/s10878-008-9172-y.

6. Shapiro, Alexander, Enlu Zhou, and Yifan Lin. "Bayesian Distributionally Robust Optimization." SIAM Journal on Optimization 33, no. 2 (June 26, 2023): 1279–304. http://dx.doi.org/10.1137/21m1465548.
7. Klepac, Goran. "Particle Swarm Optimization Algorithm as a Tool for Profile Optimization." International Journal of Natural Computing Research 5, no. 4 (October 2015): 1–23. http://dx.doi.org/10.4018/ijncr.2015100101.
Abstract:
A complex analytical environment is a challenging setting for finding customer profiles. Where a predictive model such as a Bayesian network already exists, the challenge becomes even bigger because of combinatorial explosion. The complexity can arise from multiple modalities of the output variable, from the fact that each node of a Bayesian network can potentially be the target variable for profiling, and from big-data settings that add complexity through sheer data volume. As an illustration of the presented concept, a particle swarm optimization algorithm is used as a tool for finding profiles from a developed Bayesian network predictive model. This paper shows how the particle swarm optimization algorithm can be a powerful tool for finding optimal customer profiles given target conditions as evidence within Bayesian networks.
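To make the idea concrete, here is a minimal, self-contained particle swarm optimization loop; the objective is a toy stand-in, not a Bayesian network profile score, and the swarm size, inertia, and acceleration coefficients are illustrative assumptions rather than the paper's configuration.

```python
# Minimal particle swarm optimization (PSO) loop, sketched to illustrate the
# kind of search the abstract describes. The objective is a toy stand-in, not
# a Bayesian network profile score; swarm size, inertia (0.7), and the
# acceleration coefficients (1.5) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def objective(p):
    # Stand-in "profile score"; lower is better.
    return float(np.sum((p - 0.3) ** 2))

n_particles, dim = 20, 5
pos = rng.uniform(0.0, 1.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)           # keep particles in the unit box
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val                  # update personal bests
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]            # update the global best

print("best profile found:", gbest)
```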
8. Hickish, Bob, David I. Fletcher, and Robert F. Harrison. "Investigating Bayesian Optimization for rail network optimization." International Journal of Rail Transportation 8, no. 4 (October 14, 2019): 307–23. http://dx.doi.org/10.1080/23248378.2019.1669500.
9. Dogan, Vedat, and Steven Prestwich. "Multi-Objective BiLevel Optimization by Bayesian Optimization." Algorithms 17, no. 4 (March 30, 2024): 146. http://dx.doi.org/10.3390/a17040146.
Abstract:
In a multi-objective optimization problem, a decision maker has more than one objective to optimize. In a bilevel optimization problem, there are the following two decision-makers in a hierarchy: a leader who makes the first decision and a follower who reacts, each aiming to optimize their own objective. Many real-world decision-making processes have various objectives to optimize at the same time while considering how the decision-makers affect each other. When both features are combined, we have a multi-objective bilevel optimization problem, which arises in manufacturing, logistics, environmental economics, defence applications and many other areas. Many exact and approximation-based techniques have been proposed, but because of the intrinsic nonconvexity and conflicting multiple objectives, their computational cost is high. We propose a hybrid algorithm based on batch Bayesian optimization to approximate the upper-level Pareto-optimal solution set. We also extend our approach to handle uncertainty in the leader’s objectives via a hypervolume improvement-based acquisition function. Experiments show that our algorithm is more efficient than other current methods while successfully approximating Pareto-fronts.
10. Muzayanah, Rini, Dwika Ananda Agustina Pertiwi, Muazam Ali, and Much Aziz Muslim. "Comparison of gridsearchcv and bayesian hyperparameter optimization in random forest algorithm for diabetes prediction." Journal of Soft Computing Exploration 5, no. 1 (April 2, 2024): 86–91. http://dx.doi.org/10.52465/joscex.v5i1.308.
Abstract:
Diabetes Mellitus (DM) is a chronic disease whose complications have a significant impact on patients and the wider community. In its early stages, diabetes mellitus usually does not cause significant symptoms, but if it is detected too late and not handled properly, it can cause serious health problems. Diabetes detection is one solution to this problem. In this research, diabetes detection was carried out using Random Forest with GridSearchCV and Bayesian hyperparameter optimization. The research proceeded through literature study, model development using a Kaggle Notebook, model testing, and results analysis. This study aims to compare GridSearchCV and Bayesian hyperparameter optimization, then analyze the advantages and disadvantages of each optimization when applied to diabetes prediction using the Random Forest algorithm. The research found that each method has its own advantages and disadvantages: GridSearchCV achieves better accuracy, 0.74, although it takes longer, at 338.416 seconds, while Bayesian hyperparameter optimization has an accuracy 0.01 lower than GridSearchCV, at 0.73, and takes less time, at 177.085 seconds.
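The comparison in this abstract can be reproduced in outline with scikit-learn and scikit-optimize. The sketch below is illustrative only: the dataset and the parameter grid are stand-ins, not the Kaggle diabetes data or the search space used in the paper.

```python
# A minimal sketch of the comparison described in the abstract, assuming
# scikit-learn and scikit-optimize (skopt) are installed. The dataset and the
# parameter grid are stand-ins for those used in the paper.
import time

from sklearn.datasets import load_breast_cancer   # stand-in dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from skopt import BayesSearchCV

X, y = load_breast_cancer(return_X_y=True)
params = {"n_estimators": [50, 100, 200], "max_depth": [4, 8, 16]}

searches = [
    ("GridSearchCV", GridSearchCV(
        RandomForestClassifier(random_state=0), params, cv=3)),
    ("BayesSearchCV", BayesSearchCV(
        RandomForestClassifier(random_state=0), params, n_iter=9, cv=3,
        random_state=0)),
]
for name, search in searches:
    start = time.time()
    search.fit(X, y)   # exhaustive grid search vs. sequential Bayesian search
    print(f"{name}: accuracy={search.best_score_:.3f}, "
          f"time={time.time() - start:.1f}s")
```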

Dissertations / Theses on the topic "Bayesian Optimization"

1. Klein, Aaron. "Efficient bayesian hyperparameter optimization." Thesis, Universität Freiburg, 2020. Supervised by Frank Hutter. http://d-nb.info/1214592961/34.
2. Mahendran, Nimalan. "Bayesian optimization for adaptive MCMC." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/30636.
Abstract:
A new randomized strategy for adaptive Markov chain Monte Carlo (MCMC) using Bayesian optimization, called Bayesian-optimized MCMC, is proposed. This approach can handle non-differentiable objective functions and trades off exploration and exploitation to reduce the number of function evaluations. Bayesian-optimized MCMC is applied to the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains. It is found that Bayesian-optimized MCMC is able to match or surpass manual tuning of the proposal mechanism by a domain expert.
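The core idea, using Bayesian optimization to tune an MCMC proposal, can be sketched in a few lines. The following is an illustration under assumed choices (a standard normal target, expected squared jump distance as the tuning objective, and skopt's gp_minimize), not the thesis's actual setup.

```python
# Illustration only: Bayesian optimization tuning a Metropolis-Hastings
# proposal scale, in the spirit of Bayesian-optimized MCMC. The target,
# objective, and skopt usage here are assumptions.
import numpy as np
from skopt import gp_minimize

rng = np.random.default_rng(0)

def negative_jump_distance(params):
    """Run a short MH chain on a N(0, 1) target and score its mixing."""
    scale = params[0]
    x, jumps = 0.0, []
    for _ in range(2000):
        prop = x + scale * rng.normal()
        # log acceptance ratio for a standard normal target
        if np.log(rng.random()) < 0.5 * (x**2 - prop**2):
            jumps.append((prop - x) ** 2)
            x = prop
        else:
            jumps.append(0.0)
    return -float(np.mean(jumps))   # BO minimizes, so negate the jump distance

result = gp_minimize(negative_jump_distance, [(0.01, 10.0)],
                     n_calls=25, random_state=0)
print("tuned proposal scale:", result.x[0])
```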
3. Gelbart, Michael Adam. "Constrained Bayesian Optimization and Applications." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467236.
Abstract:
Bayesian optimization is an approach for globally optimizing black-box functions that are expensive to evaluate, non-convex, and possibly noisy. Recently, Bayesian optimization has been used with great effectiveness for applications like tuning the hyperparameters of machine learning algorithms and automatic A/B testing for websites. This thesis considers Bayesian optimization in the presence of black-box constraints. Prior work on constrained Bayesian optimization consists of a variety of methods that can be used with some efficacy in specific contexts. Here, by forming a connection with multi-task Bayesian optimization, we formulate a more general class of constrained Bayesian optimization problems that we call Bayesian optimization with decoupled constraints. In this general framework, the objective and constraint functions are divided into tasks that can be evaluated independently of each other, and resources with which these tasks can be performed. We then present two methods for solving problems in this general class. The first method, an extension to a constrained variant of expected improvement, is fast and straightforward to implement but performs poorly in some circumstances and is not sufficiently flexible to address all varieties of decoupled problems. The second method, Predictive Entropy Search with Constraints (PESC), is highly effective and sufficiently flexible to address all problems in the general class of decoupled problems without any ad hoc modifications. The two weaknesses of PESC are its implementation difficulty and slow execution time. We address these issues by, respectively, providing a publicly available implementation within the popular Bayesian optimization software Spearmint, and developing an extension to PESC that achieves greater speed without significant performance losses. We demonstrate the effectiveness of these methods on real-world machine learning meta-optimization problems.
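A common baseline the thesis builds on is the constrained variant of expected improvement, where EI is weighted by the modeled probability of feasibility. Below is a minimal sketch of that baseline (not PESC); the objective, constraint, design points, and kernel defaults are all illustrative assumptions.

```python
# Minimal sketch of constrained expected improvement: weight EI under the
# objective GP by the probability of feasibility under a separate constraint
# GP. All functions and settings here are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + x**2          # toy black-box objective

def constraint(x):
    return np.cos(3 * x)                 # feasible where constraint(x) <= 0

X = np.array([[-1.5], [-0.5], [0.3], [1.2]])       # points evaluated so far
gp_f = GaussianProcessRegressor().fit(X, objective(X).ravel())
gp_c = GaussianProcessRegressor().fit(X, constraint(X).ravel())

grid = np.linspace(-2, 2, 400).reshape(-1, 1)
mu, sd = gp_f.predict(grid, return_std=True)
mu_c, sd_c = gp_c.predict(grid, return_std=True)

best = objective(X).min()                # incumbent value (feasibility assumed)
z = (best - mu) / np.maximum(sd, 1e-9)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
p_feas = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))   # P(constraint <= 0)

print("next point to evaluate:", grid[np.argmax(ei * p_feas)])
```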
4. Gaudrie, David. "High-Dimensional Bayesian Multi-Objective Optimization." Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02356349.
Abstract:
This thesis focuses on the simultaneous optimization of expensive-to-evaluate functions that depend on a high number of parameters, a situation frequently encountered in fields such as design engineering through numerical simulation. Bayesian optimization relying on surrogate models (Gaussian processes) is particularly adapted to this context. The first part of this thesis is devoted to the development of new surrogate-assisted multi-objective optimization methods. To improve the attainment of Pareto-optimal solutions, an infill criterion is tailored to direct the search towards a user-desired region of the objective space or, in its absence, towards the center of the Pareto front introduced in our work. Besides targeting a well-chosen part of the Pareto front, the method also takes the optimization budget into account in order to provide as wide a range of optimal solutions as possible within the limit of the available resources. Next, inspired by shape optimization problems, an optimization method with dimension reduction is proposed to tackle the curse of dimensionality. The approach hinges on the construction, through a principal component analysis of candidate solutions, of hierarchized, problem-related auxiliary variables that can describe all candidates globally. Few of these variables suffice to approach any solution, and the most influential ones are selected and prioritized inside an additive Gaussian process. This variable categorization is then further exploited in the Bayesian optimization algorithm, which operates in reduced dimension.
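The dimension-reduction idea in the second part can be sketched as follows: run PCA over candidate designs, then optimize over the few leading components. The candidate set, objective, bounds, and number of components below are all illustrative assumptions.

```python
# Sketch of PCA-based dimension reduction for Bayesian optimization: build
# auxiliary variables from candidate designs, then optimize over a few
# leading components only. Everything here is a toy stand-in.
import numpy as np
from sklearn.decomposition import PCA
from skopt import gp_minimize

rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 50))     # stand-in candidate designs
pca = PCA(n_components=3).fit(candidates)   # few components suffice (assumed)

def objective_full(x):
    # Toy stand-in for an expensive simulation over the full design vector.
    return float(np.sum((x - candidates.mean(axis=0)) ** 2))

def objective_reduced(z):
    # Map the reduced variables back to a full design, then evaluate.
    x = pca.inverse_transform(np.asarray(z).reshape(1, -1))[0]
    return objective_full(x)

result = gp_minimize(objective_reduced, [(-3.0, 3.0)] * 3,
                     n_calls=25, random_state=0)
print("best value found in reduced space:", result.fun)
```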
5. Scotto Di Perrotolo, Alexandre. "A Theoretical Framework for Bayesian Optimization Convergence." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-225129.
Abstract:
Bayesian optimization is a well-known class of derivative-free optimization algorithms mainly used for expensive black-box objective functions. Despite their efficiency, they suffer from the lack of a rigorous convergence criterion, which makes them more prone to be used as modeling tools rather than optimization tools. This master thesis proposes, analyzes, and tests a globally convergent framework (that is to say, one converging to a stationary point regardless of the initial sample) for Bayesian optimization algorithms. The framework is designed to preserve the global search characteristics for the minimum while being rigorously monitored to converge.
6. Wang, Ziyu. "Practical and theoretical advances in Bayesian optimization." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:9612d870-015e-4236-8c8d-0419670172fb.
Abstract:
Bayesian optimization forms a set of powerful tools that allows efficient black-box optimization and has general applications in a large variety of fields. In this work we seek to advance Bayesian optimization on both the theoretical and the practical fronts, as well as apply Bayesian optimization to novel and difficult problems, in order to advance the state of the art. Chapter 1 gives a broad overview of Bayesian optimization. We start by covering the published applications of Bayesian optimization. The chapter then proceeds to introduce the essential ingredients of Bayesian optimization in depth. After going through some practical considerations and the theory and history of Bayesian optimization, we end the chapter with a discussion of the latest extensions and open problems. In Chapters 2-4, we solve three outstanding problems in the Bayesian optimization literature. Traditional Bayesian optimization approaches need to solve an auxiliary non-convex global optimization problem in the inner loop. The difficulties in solving this auxiliary optimization problem not only break the assumptions of most theoretical works in this area but also lead to computationally inefficient solutions. In Chapter 2, we propose the first algorithm in Bayesian optimization that does not need to solve auxiliary optimization problems, and prove its convergence. In Bayesian optimization, it is often important to tune the hyper-parameters of the underlying Gaussian processes. No previous theoretical results allowed noisy observations and, at the same time, varying hyper-parameters; Chapter 3 proves the first such result. Bayesian optimization is very effective when the dimensionality of the problem is low. Scaling Bayesian optimization to high dimensionality, however, has been a long-standing open problem of the field. In Chapter 4, we develop an algorithm that extends Bayesian optimization to very high dimensionalities where the underlying problems have low intrinsic dimensionality. We also prove theoretical guarantees for the proposed algorithm. In Chapter 5, we turn our attention to improving an essential component of Bayesian optimization: acquisition functions. Acquisition functions form a critical component of Bayesian optimization, and yet there does not exist an optimal acquisition function that is easily computable. Instead of relying on one acquisition function, we develop a new information-theoretic portfolio of acquisition functions. We show empirically that our approach is more effective than any single acquisition function in the portfolio. Last but not least, in Chapter 6 we adapt Bayesian optimization to derive an adaptive Hamiltonian Monte Carlo sampler. Hamiltonian Monte Carlo is one of the most effective MCMC algorithms; it is, however, notoriously difficult to tune. In this chapter, we follow the approach of adapting Markov chains in order to improve their convergence, where our adaptive strategy is based on Bayesian optimization. We provide theoretical analysis as well as a comprehensive set of experiments demonstrating the effectiveness of the proposed algorithm.
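The high-dimensional theme of Chapter 4 is often illustrated with a random-embedding construction: run BO over a low-dimensional variable z and evaluate f(Az) in the full space. The sketch below assumes toy dimensions, bounds, and objective; it is an illustration of the idea, not the thesis's algorithm.

```python
# Sketch of the random-embedding construction for high-dimensional Bayesian
# optimization with low intrinsic dimensionality: optimize over z in a
# low-dimensional space and evaluate f(Az) in the ambient space.
import numpy as np
from skopt import gp_minimize

D, d = 100, 2                                # ambient vs. intrinsic dimension
rng = np.random.default_rng(0)
A = rng.normal(size=(D, d))                  # random embedding matrix

def f_high_dim(x):
    # Toy objective that depends on only two of the 100 coordinates.
    return float((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2)

def f_embedded(z):
    x = np.clip(A @ np.asarray(z), -2.0, 2.0)   # map back, clip to the box
    return f_high_dim(x)

result = gp_minimize(f_embedded, [(-2.0, 2.0)] * d, n_calls=30, random_state=0)
print("best value found:", result.fun)
```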
7. Zinberg, Ben. "Bayesian optimization as a probabilistic meta-program." Thesis (M. Eng.), Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106374.
Abstract:
This thesis answers two questions: 1. How should probabilistic programming languages incorporate Gaussian processes? and 2. Is it possible to write a probabilistic meta-program for Bayesian optimization, a probabilistic meta-algorithm that can combine regression frameworks such as Gaussian processes with a broad class of parameter estimation and optimization techniques? We answer both questions affirmatively, presenting both an implementation and informal semantics for Gaussian process models in probabilistic programming systems, and a probabilistic meta-program for Bayesian optimization. The meta-program exposes modularity common to a wide range of Bayesian optimization methods in a way that is not apparent from their usual treatment in statistics.
8. Wang, Zheng. "An optimization based algorithm for Bayesian inference." Thesis (S.M.), Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98815.
Abstract:
In the Bayesian statistical paradigm, uncertainty in the parameters of a physical system is characterized by a probability distribution. Information from observations is incorporated by updating this distribution from prior to posterior. Quantities of interest, such as credible regions, event probabilities, and other expectations, can then be obtained from the posterior distribution. One major task in Bayesian inference is therefore to characterize the posterior distribution, for example, through sampling. Markov chain Monte Carlo (MCMC) algorithms are often used to sample from posterior distributions using only unnormalized evaluations of the posterior density. However, high-dimensional Bayesian inference problems are challenging for MCMC-type sampling algorithms, because accurate proposal distributions are needed in order for the sampling to be efficient. One method to obtain efficient proposal samples is an optimization-based algorithm titled 'Randomize-then-Optimize' (RTO). We build upon RTO by developing a new geometric interpretation that describes the samples as projections of Gaussian-distributed points, in the joint data and parameter space, onto a nonlinear manifold defined by the forward model. This interpretation reveals generalizations of RTO that can be used. We use this interpretation to draw connections between RTO and two other sampling techniques, transport-map-based MCMC and implicit sampling. In addition, we motivate and propose an adaptive version of RTO designed to be more robust and efficient. Finally, we introduce a variable transformation to apply RTO to problems with non-Gaussian priors, such as Bayesian inverse problems with L1-type priors. We demonstrate several orders of magnitude in computational savings from this strategy on a high-dimensional inverse problem.
9. Carstens, Herman. "A Bayesian approach to energy monitoring optimization." Thesis, University of Pretoria, 2017. http://hdl.handle.net/2263/63791.
Abstract:
This thesis develops methods for reducing energy Measurement and Verification (M&V) costs through the use of Bayesian statistics. M&V quantifies the savings of energy efficiency and demand side projects by comparing the energy use in a given period to what that use would have been, had no interventions taken place. The case of a large-scale lighting retrofit study, where incandescent lamps are replaced by Compact Fluorescent Lamps (CFLs), is considered. These projects often need to be monitored over a number of years with a predetermined level of statistical rigour, making M&V very expensive. M&V lighting retrofit projects have two interrelated uncertainty components that need to be addressed, and which form the basis of this thesis. The first is the uncertainty in the annual energy use of the average lamp, and the second the persistence of the savings over multiple years, determined by the number of lamps that are still functioning in a given year. For longitudinal projects, the results from these two aspects need to be obtained for multiple years. This thesis addresses these problems by using the Bayesian statistical paradigm. Bayesian statistics is still relatively unknown in M&V, and presents an opportunity for increasing the efficiency of statistical analyses, especially for such projects. After a thorough literature review, especially of measurement uncertainty in M&V, and an introduction to Bayesian statistics for M&V, three methods are developed. These methods address the three types of uncertainty in M&V: measurement, sampling, and modelling. The first method is a low-cost energy meter calibration technique. The second method is a Dynamic Linear Model (DLM) with Bayesian Forecasting for determining the size of the metering sample that needs to be taken in a given year. The third method is a Dynamic Generalised Linear Model (DGLM) for determining the size of the population survival survey sample. It is often required by law that M&V energy meters be calibrated periodically by accredited laboratories. This can be expensive and inconvenient, especially if the facility needs to be shut down for meter installation or removal. Some jurisdictions also require meters to be calibrated in-situ; in their operating environments. However, it is shown that metering uncertainty makes a relatively small impact to overall M&V uncertainty in the presence of sampling, and therefore the costs of such laboratory calibration may outweigh the benefits. The proposed technique uses another commercial-grade meter (which also measures with error) to achieve this calibration in-situ. This is done by accounting for the mismeasurement effect through a mathematical technique called Simulation Extrapolation (SIMEX). The SIMEX result is refined using Bayesian statistics, and achieves acceptably low error rates and accurate parameter estimates. The second technique uses a DLM with Bayesian forecasting to quantify the uncertainty in metering only a sample of the total population of lighting circuits. A Genetic Algorithm (GA) is then applied to determine an efficient sampling plan. Bayesian statistics is especially useful in this case because it allows the results from previous years to inform the planning of future samples. It also allows for exact uncertainty quantification, where current confidence interval techniques do not always do so. Results show a cost reduction of up to 66%, but this depends on the costing scheme used. 
The study then explores the robustness of the efficient sampling plans to forecast error, and finds a 50% chance of undersampling for such plans, due to the standard M&V sampling formula which lacks statistical power. The third technique uses a DGLM in the same way as the DLM, except for population survival survey samples and persistence studies, not metering samples. Convolving the binomial survey result distributions inside a GA is problematic, and instead of Monte Carlo simulation, a relatively new technique called Mellin Transform Moment Calculation is applied to the problem. The technique is then expanded to model stratified sampling designs for heterogeneous populations. Results show a cost reduction of 17-40%, although this depends on the costing scheme used. Finally the DLM and DGLM are combined into an efficient overall M&V plan where metering and survey costs are traded off over multiple years, while still adhering to statistical precision constraints. This is done for simple random sampling and stratified designs. Monitoring costs are reduced by 26-40% for the costing scheme assumed. The results demonstrate the power and flexibility of Bayesian statistics for M&V applications, both in terms of exact uncertainty quantification, and by increasing the efficiency of the study and reducing monitoring costs.
10. Taheri, Sona. "Learning Bayesian networks based on optimization approaches." Thesis, University of Ballarat, 2012. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/36051.
Abstract:
Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph-based query processing algorithms. This representation is defined by two components: structure learning and parameter learning. The structure of this model is a directed acyclic graph. The nodes in the graph correspond to the feature variables in the domain, and the arcs (edges) show the causal relationships between feature variables. A directed edge relates the variables so that the variable corresponding to the terminal node (child) will be conditioned on the variable corresponding to the initial node (parent). Parameter learning represents probabilities and conditional probabilities based on prior information or past experience. The set of probabilities is represented in the conditional probability table. Once the network structure is constructed, the probabilistic inferences are readily calculated, and can be performed to predict the outcome of some variables based on the observations of others. However, structure learning is a complex problem since the number of candidate structures grows exponentially as the number of feature variables increases. This thesis is devoted to the development of methods for learning the structures and parameters of Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network. These models also consider the improvement of the Naive Bayes structure by developing new algorithms to alleviate the independence assumptions. We present various models to learn the parameters of Bayesian Networks; in particular, we propose optimization models for the Naive Bayes and the Tree Augmented Naive Bayes classifiers by considering different objective functions. To solve the corresponding optimization problems in Bayesian Networks, we develop new optimization algorithms. Local optimization methods are introduced based on the combination of the gradient and Newton methods. It is proved that the proposed methods are globally convergent and have superlinear convergence rates. As a global search we use the global optimization method AGOP, implemented in the open software library GANSO, and we apply the proposed local methods in combination with AGOP. The main contributions of this thesis therefore include (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with given structures; and (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments across a number of real-world problems. The print version is available at: http://library.federation.edu.au/record=b1804607~S4

Books on the topic "Bayesian Optimization"

1. Liu, Peng. Bayesian Optimization. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7.

2. Pelikan, Martin. Hierarchical Bayesian Optimization Algorithm. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b10910.

3. Archetti, Francesco, and Antonio Candelieri. Bayesian Optimization and Data Science. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24494-1.

4. Mockus, Jonas. Bayesian Approach to Global Optimization. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-009-0909-0.

5. Packwood, Daniel. Bayesian Optimization for Materials Science. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6781-5.

6. Zhigljavsky, Anatoly, and Antanas Žilinskas. Bayesian and High-Dimensional Global Optimization. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64712-4.

7. Colosimo, Bianca M., and Enrique Del Castillo, eds. Bayesian Process Monitoring, Control and Optimization. Boca Raton: Chapman and Hall/CRC, 2007.

8. Pourmohamad, Tony, and Herbert K. H. Lee. Bayesian Optimization with Application to Computer Experiments. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82458-7.

9. Mockus, Jonas, William Eddy, Audris Mockus, Linas Mockus, and Gintaras Reklaitis. Bayesian Heuristic Approach to Discrete and Global Optimization. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-2627-5.

Book chapters on the topic "Bayesian Optimization"

1. Liu, Peng. "Gaussian Process Regression with GPyTorch." In Bayesian Optimization, 101–30. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_4.

2. Liu, Peng. "Case Study: Tuning CNN Learning Rate with BoTorch." In Bayesian Optimization, 185–223. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_7.

3. Liu, Peng. "Knowledge Gradient: Nested Optimization vs. One-Shot Learning." In Bayesian Optimization, 155–84. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_6.

4. Liu, Peng. "Monte Carlo Acquisition Function with Sobol Sequences and Random Restart." In Bayesian Optimization, 131–54. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_5.

5. Liu, Peng. "Gaussian Processes." In Bayesian Optimization, 33–67. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_2.

6. Liu, Peng. "Bayesian Optimization Overview." In Bayesian Optimization, 1–32. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_1.

7. Liu, Peng. "Bayesian Decision Theory and Expected Improvement." In Bayesian Optimization, 69–99. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9063-7_3.

8. Wang, Hao, and Kaifeng Yang. "Bayesian Optimization." In Natural Computing Series, 271–97. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25263-1_10.

9. Agrawal, Tanay. "Bayesian Optimization." In Hyperparameter Optimization in Machine Learning, 81–108. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6579-6_4.

10. Archetti, Francesco, and Antonio Candelieri. "Exotic Bayesian Optimization." In SpringerBriefs in Optimization, 73–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24494-1_5.

Conference papers on the topic "Bayesian Optimization"

1. Juneja, Namit, Varun Chandola, Jaroslaw Zola, Olga Wodo, and Parth Desai. "Resource Efficient Bayesian Optimization." In 2024 IEEE 17th International Conference on Cloud Computing (CLOUD), 12–19. IEEE, 2024. http://dx.doi.org/10.1109/cloud62652.2024.00012.

2. Bilal, Ahmad, Abdul Hadee, Yash H. Shah, Sohom Bhattacharjee, and Choon Sik Cho. "RCS Minimization using Bayesian Optimization." In 2024 54th European Microwave Conference (EuMC), 740–43. IEEE, 2024. http://dx.doi.org/10.23919/eumc61614.2024.10732618.

3. Nambiraja, Shyam Sundar, and Giulia Pedrielli. "Multi Agent Rollout for Bayesian Optimization." In 2024 Winter Simulation Conference (WSC), 3518–29. IEEE, 2024. https://doi.org/10.1109/wsc63780.2024.10838839.

4. Turan, Mehmet, and Ahmed H. Akgiriray. "Bayesian Optimization of Passive RF Circuits." In 2024 8th International Symposium on Innovative Approaches in Smart Technologies (ISAS), 1–6. IEEE, 2024. https://doi.org/10.1109/isas64331.2024.10845681.

5. Macé, Maxime, Tassadit Amghar, Paul Richard, and Emmanuelle Ménétrier. "Renyi Entropy Search for Bayesian Optimization." In 2024 IEEE 36th International Conference on Tools with Artificial Intelligence (ICTAI), 782–89. IEEE, 2024. https://doi.org/10.1109/ictai62512.2024.00115.

6. Kato, Masahiro, Kentaro Baba, Hibiki Kaibuchi, and Ryo Inokuchi. "Bayesian Portfolio Optimization by Predictive Synthesis." In 2024 16th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI), 523–28. IEEE, 2024. http://dx.doi.org/10.1109/iiai-aai63651.2024.00100.

7. Yahya, Abdelmajid Ben, Robbe De Laet, Santiago Ramos Garces, Nick Van Oosterwyck, Ivan De Boi, Annie Cuyt, and Stijn Derammelaere. "Geometric Optimization through CAD-Based Bayesian Optimization with unknown constraint." In 2024 12th International Conference on Control, Mechatronics and Automation (ICCMA), 394–402. IEEE, 2024. https://doi.org/10.1109/iccma63715.2024.10843905.

8. Al Kontar, Raed. "Collaborative and Federated Black-box Optimization: A Bayesian Optimization Perspective." In 2024 IEEE International Conference on Big Data (BigData), 7854–59. IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825753.

9. Habibeh, Mohammad, and Jeff Eldred. "Bayesian Optimization For Accelerator Tuning." US DOE, 2024. http://dx.doi.org/10.2172/2427359.

10. Couckuyt, Ivo, Sebastian Rojas Gonzalez, and Juergen Branke. "Bayesian optimization." In GECCO '22: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3520304.3533654.

Reports on the topic "Bayesian Optimization"

1. Brown, Jesse, Goran Arbanas, Dorothea Wiarda, and Andrew Holcomb. Bayesian Optimization Framework for Imperfect Data or Models. Office of Scientific and Technical Information (OSTI), June 2022. http://dx.doi.org/10.2172/1874643.

2. Willcox, Karen, and Youssef Marzouk. Large-Scale Optimization for Bayesian Inference in Complex Systems. Office of Scientific and Technical Information (OSTI), November 2013. http://dx.doi.org/10.2172/1104917.

3. Biros, George. Large-Scale Optimization for Bayesian Inference in Complex Systems. Office of Scientific and Technical Information (OSTI), August 2014. http://dx.doi.org/10.2172/1234919.

4. Day, Amber, Sinead Williamson, and Natalie Klein. Utilizing Bayesian Optimization for Efficient Dispersion Curve Feature Acquisition. Office of Scientific and Technical Information (OSTI), April 2024. http://dx.doi.org/10.2172/2335729.

5. Ghattas, Omar. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems. Office of Scientific and Technical Information (OSTI), October 2013. http://dx.doi.org/10.2172/1113343.

6. Day, Amber. Complex-Valued Signal Denoising and Bayesian Optimization for Detection of Synthetic Opioids. Office of Scientific and Technical Information (OSTI), November 2022. http://dx.doi.org/10.2172/1897402.

7. Candy, J. V. Model-Based Localization in a Shallow Ocean Environment: A Sequential Bayesian/Optimization Approach. Office of Scientific and Technical Information (OSTI), March 2019. http://dx.doi.org/10.2172/1544957.

8. Catanach, Thomas, and Kevin Monogue. Analysis and Optimization of Seismo-Acoustic Monitoring Networks with Bayesian Optimal Experimental Design. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1815356.
9. Darling, Arthur H., and William J. Vaughan. The Optimal Sample Size for Contingent Valuation Surveys: Applications to Project Analysis. Inter-American Development Bank, April 2000. http://dx.doi.org/10.18235/0008824.
Abstract:
One of the first questions that has to be answered in the survey design process is "How many subjects should be interviewed?" The answer can have significant implications for the cost of project preparation, since in Latin America and the Caribbean costs per interview can range from US$20 to US$100. Traditionally, the sample size question has been answered in an unsatisfactory way by either dividing an exogenously fixed survey budget by the cost per interview or by employing some variant of a standard statistical tolerance interval formula. The answer is not to be found in the environmental economics literature. But, it can be developed by adapting a Bayesian decision analysis approach from business statistics. The paper explains and illustrates, with a worked example, the rationale for and mechanics of a sequential Bayesian optimization technique, which is only applicable when there is some monetary payoff to alternative courses of action that can be linked to the sample data.
10. Qi, Fei, Zhaohui Xia, Gaoyang Tang, Hang Yang, Yu Song, Guangrui Qian, Xiong An, Chunhuan Lin, and Guangming Shi. A Graph-based Evolutionary Algorithm for Automated Machine Learning. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ser.v1i2.77.
Abstract:
As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a larger search space than tree-based and stacking-based architectures. Based on this, an evolutionary algorithm is proposed to search for the best architecture, where the mutation and heredity operators are the key to architecture evolution. With Bayesian hyper-parameter optimization, the proposed approach can automate the machine learning workflow. On the PMLB dataset, the proposed approach shows state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that would be difficult to obtain by manual design.
