
Dissertations / Theses on the topic 'Varianty montáže'


Consult the top 50 dissertations / theses for your research on the topic 'Varianty montáže.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Juřicová, Vendula. "Koncept montážní linky pro montáž centrální části systému termoregulace motoru." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232183.

Abstract:
This diploma thesis focuses on the design of assembly-unit technology for mounting the central part of an engine thermoregulation system. The current state is described first, followed by an analysis of the mounting options, the specification of assembly times, the determination of the assembly-line cycle, and the development of logistic support. The resulting layout should be ergonomically acceptable, cost-effective and efficient.
2

Jůzl, Martin. "Výrobní hala LD Seating - stavebně technologický projekt." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-371963.

Abstract:
The aim of the diploma thesis is a construction-technology project for the production and assembly hall of LD Seating in Boskovice near Brno. The design is constrained by an existing hall on the investor's site, where the new building will stand, and appropriate measures are chosen on that basis. The project puts emphasis on off-site transport and on the technological process of assembling the prefabricated reinforced-concrete hall, which is analysed from the financial and scheduling points of view. The workflow is shaped by the assembly machinery, the site facilities, and the wider transport relationships. The project also treats the site as a whole: all building objects are taken into account, and an object schedule and budget are developed on that basis.
3

Kánová, Eliška. "Zhodnocení běžných účtů metodami operačního výzkumu." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-194229.

Abstract:
The aim of this diploma thesis is the selection of the best current account. The thesis describes how the banking system functions in the Czech Republic and presents the characteristics of the banking institutions that provide current accounts, as well as of the compared accounts themselves. The Monte Carlo method and methods of multi-criteria assessment of variants are described theoretically; the selection itself is based on methods of operational research. The first part of the practical chapter focuses on the Monte Carlo method, which selects the best account by charges for different types of clients. Methods of multi-criteria assessment of variants, specifically TOPSIS and PROMETHEE, are also applied. The calculations are carried out using the SANNA add-in for MS Excel. In conclusion, both approaches are compared and a current account is recommended according to the preferences of each client type.
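Since the thesis leans on multi-criteria ranking, a minimal sketch of the TOPSIS step may help orient the reader; the account data, criteria, and weights below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).

    scores  -- (alternatives x criteria) raw decision matrix
    weights -- criteria weights summing to 1
    benefit -- True where larger is better, False where smaller is better
    """
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
    return d_minus / (d_plus + d_minus)              # higher is better

# Hypothetical accounts scored on monthly fee (lower better) and ATM coverage.
scores = np.array([[55.0, 0.80],
                   [ 0.0, 0.55],
                   [29.0, 0.70]])
closeness = topsis(scores, np.array([0.6, 0.4]), np.array([False, True]))
print(closeness.argsort()[::-1])   # account indices ranked best-first
```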
4

Wilhelm, Pavel. "Návrh variant racionalizace operace vkládání skel v montážní lince Škoda Auto." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231414.

Abstract:
This thesis focuses on the windscreen mounting options on the assembly line at Škoda Auto in Mladá Boleslav. The aim is to describe the possible assembly options and to determine the optimal variant. In the first part the key theoretical information is presented, followed by an analysis of the current state. After the introduction of two new proposals, the variants are compared with each other in terms of cost and overall suitability using a multi-criteria method.
5

Rowland, Kelly L. "Advanced Quadrature Selection for Monte Carlo Variance Reduction." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817512.

Abstract:

Neutral particle radiation transport simulations are critical for radiation shielding and deep penetration applications. Arriving at a solution for a given response of interest can be computationally difficult because of the magnitude of particle attenuation often seen in these shielding problems. Hybrid methods, which aim to synergize the individual favorable aspects of deterministic and stochastic solution methods for solving the steady-state neutron transport equation, are commonly used in radiation shielding applications to achieve statistically meaningful results in a reduced amount of computational time and effort. The current state of the art in hybrid calculations is the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods, which generate Monte Carlo variance reduction parameters based on deterministically-calculated scalar flux solutions. For certain types of radiation shielding problems, however, results produced using these methods suffer from unphysical oscillations in scalar flux solutions that are a product of angular discretization. These aberrations are termed “ray effects”.

The Lagrange Discrete Ordinates (LDO) equations retain the formal structure of the traditional discrete ordinates formulation of the neutron transport equation and mitigate ray effects at high angular resolution. In this work, the LDO equations have been implemented in the Exnihilo parallel neutral particle radiation transport framework, with the deterministic scalar flux solutions passed to the Automated Variance Reduction Generator (ADVANTG) software and the resultant Monte Carlo variance reduction parameters’ efficacy assessed based on results from MCNP5. Studies were conducted in both the CADIS and FW-CADIS contexts, with the LDO equations’ variance reduction parameters seeing their best performance in the FW-CADIS method, especially for photon transport.

6

Kozelský, Aleš. "Realizace montážní linky ventilů AdBlue." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229424.

Abstract:
This diploma thesis concerns the design and realization of an assembly line for a 2/2 seat valve for the commercial-vehicle sector. The design was created in Autodesk Inventor. The thesis describes the phases and goals of project management, in this case the management of a technology/manufacturing transfer.
7

Aghedo, Maurice Enoghayinagbon. "Variance reduction in Monte Carlo methods of estimating distribution functions." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37385.

8

Péraud, Jean-Philippe M. (Jean-Philippe Michel). "Low variance methods for Monte Carlo simulation of phonon transport." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/69799.

Abstract:
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing the theory of phonons, we use scientific literature to write a Monte Carlo code solving the Boltzmann Transport Equation for phonons. As a first improvement to the particle method presented, we choose to use the Boltzmann Equation in terms of energy as a more convenient and accurate formulation to develop such a code. Then, we use the concept of control variates in order to introduce the notion of deviational particles. Noticing that a thermalized system at equilibrium is inherently a solution of the Boltzmann Transport Equation, we take advantage of this deterministic piece of information: we only simulate the deviation from a nearby equilibrium, which removes a great part of the statistical uncertainty. Doing so, the standard deviation of the result that we obtain is proportional to the deviation from equilibrium. In other words, we are able to simulate signals of arbitrarily low amplitude with no additional computational cost. After exploring two other variants based on the idea of control variates, we validate our code on a few theoretical results derived from the Boltzmann equation. Finally, we present a few applications of the methods.
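The deviational control-variate idea described above can be illustrated outside phonon transport with a toy estimator; the functions, the 'equilibrium' baseline, and the perturbation size below are invented for illustration and are not taken from the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analogue of the deviational idea: only sample the deviation from
# a known "equilibrium" quantity whose mean is available exactly.
def f(x, eps=1e-3):              # "non-equilibrium" quantity of interest
    return np.cos(x) + eps * np.sin(3.0 * x)

def g(x):                        # "equilibrium" quantity with known mean
    return np.cos(x)

g_mean = np.sin(1.0)             # E[cos(U)] = sin(1) for U ~ Uniform(0, 1)

x = rng.uniform(0.0, 1.0, 100_000)
crude = f(x).mean()
deviational = g_mean + (f(x) - g(x)).mean()

# The deviational estimator's statistical noise scales with the size of
# the perturbation, so arbitrarily small signals stay resolvable.
print(crude, deviational)
```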
9

Whittle, Joss. "Quality assessment and variance reduction in Monte Carlo rendering algorithms." Thesis, Swansea University, 2018. https://cronfa.swan.ac.uk/Record/cronfa40271.

Abstract:
Over the past few decades much work has focused on physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models, formulating the problem as complex high-dimensional integrals [Kaj86] that do not have a closed-form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms that converge to the true rendering solution as the amount of computation is increased in the limit. When a finite amount of computation is used to approximate the rendering solution, images contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is having a means of accurately comparing and contrasting the relative performance gains between different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed in the context of image datasets containing natural images (photographs) coupled with their synthetically distorted versions, with quality assessment scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop them are representative of the natural distortions that will be seen in images from the domain being assessed. In images generated through stochastic rendering processes, the structure of the visible distortions present in un-converged images is highly complex and spatially varying, depending on lighting and scene composition. In this domain, the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions of the rendering process, which raises the question of how robust IQA measures are when applied to physically based rendered images. In this thesis we summarize the classical and recent work in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments, and use high-performance computing to generate a dataset of Monte Carlo images. From this we provide a study of the effectiveness of modern and classical IQA measures, and of their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No-Reference IQA problem, in which we wish to assess the quality of a rendered image without knowing its true value.
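As a concrete instance of the classical full-reference IQA measures discussed above, here is a minimal PSNR computation on synthetic data; using Gaussian noise as a stand-in for Monte Carlo render noise is a simplification, not the thesis's dataset.

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio, a classical full-reference IQA score."""
    mse = np.mean((reference - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.uniform(size=(64, 64))       # stands in for a converged render
noisy = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
print(psnr(ref, noisy))                # higher PSNR = closer to reference
```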
10

Frendin, Carl, and Andreas Sjöroos. "Go Go! - Evaluating Different Variants of Monte Carlo Tree Search for Playing Go." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157520.

Abstract:
Monte Carlo Tree Search (MCTS) is a search algorithm used in many recent strong Go-playing agents. In this report we test and compare different algorithms related to Monte Carlo simulation, seeing how well they do against each other under different time constraints on consumer hardware. This is done with our own implementation of Go rules and algorithms written in Java. The All Moves As First (AMAF) algorithm had the best performance in the performed tests.
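A minimal sketch of the AMAF bookkeeping mentioned above, for a generic game interface; the playout format and move labels are hypothetical and do not reproduce the thesis's Java Go implementation.

```python
from collections import defaultdict

# Minimal All-Moves-As-First (AMAF) statistics for generic playouts.
wins = defaultdict(float)
visits = defaultdict(int)

def amaf_update(moves_by_player, winner):
    """Credit every move a player made anywhere in the playout as if
    it had been that player's first move from the root."""
    for player, moves in moves_by_player.items():
        reward = 1.0 if player == winner else 0.0
        for move in set(moves):              # count each move once
            visits[(player, move)] += 1
            wins[(player, move)] += reward

def amaf_value(player, move):
    n = visits[(player, move)]
    return wins[(player, move)] / n if n else 0.5   # prior for unseen moves

# One simulated playout: black played D4 and Q16, white played Q4.
amaf_update({"black": ["D4", "Q16"], "white": ["Q4"]}, winner="black")
print(amaf_value("black", "D4"))                    # 1.0 after one win
```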
11

Singh, Gurprit. "Sampling and Variance Analysis for Monte Carlo Integration in Spherical Domain." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10121/document.

Abstract:
This dissertation introduces a theoretical framework to study different sampling patterns in the spherical domain and their effects on the evaluation of global illumination integrals. Evaluating illumination (light transport) is one of the most essential aspects of image synthesis: achieving realism involves solving multi-dimensional integrals, and Monte Carlo based numerical integration schemes are heavily employed to solve them. One of the most important aspects of any numerical integration method is sampling: the way samples are distributed over the integration domain can greatly affect the final result. In images, for example, the effects of various sampling patterns appear as either structured artifacts or completely unstructured noise, and in many cases completely false (biased) results are obtained because of the sampling pattern used. The distribution of a sampling pattern can be characterized by its Fourier power spectrum. It is also possible to use a Fourier power spectrum as input and generate a corresponding sample distribution, which gives spectral control over sample distributions. Since this spectral control allows tailoring new sampling patterns directly from an input power spectrum, it can be used to improve integration error; however, a direct relation between the error of Monte Carlo integration and the sampling power spectrum has been missing. In this work, we propose a variance formulation that establishes a direct link between the variance of Monte Carlo integration and the power spectra of both the sampling pattern and the integrand. To derive our closed-form variance formulation, we use the notion of homogeneous sample distributions, which allows the error of Monte Carlo integration to be expressed purely as variance. Based on this formulation, we develop an analysis tool that can be used to derive the theoretical variance convergence rates of various state-of-the-art sampling patterns. Our analysis yields design principles that can be used to tailor new sampling patterns based on the integrand.
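Schematically, the closed-form link described in the abstract can be written as below; this paraphrase, up to normalization conventions, is offered for orientation only and is not the thesis's exact statement.

```latex
% N homogeneous samples, expected sampling power spectrum P_S,
% integrand power spectrum P_F:
\[
  \operatorname{Var}\!\Big(\frac{1}{N}\sum_{k=1}^{N} f(x_k)\Big)
  \;\approx\; \frac{1}{N}\int \langle P_S(\omega)\rangle\,
              P_F(\omega)\,\mathrm{d}\omega .
\]
```

Sampling spectra that place little energy where the integrand's spectrum is large therefore converge faster, which is the design principle the analysis tool makes precise.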
12

Höök, Lars Josef. "Variance reduction methods for numerical solution of plasma kinetic diffusion." Licentiate thesis, KTH, Fusionsplasmafysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91332.

Abstract:
Performing detailed simulations of plasma kinetic diffusion is a challenging task and currently requires the largest computational facilities in the world. The reason for this is that the physics in a confined heated plasma occurs on a broad range of temporal and spatial scales. It is therefore of interest to improve the computational algorithms together with the development of more powerful computational resources. Kinetic diffusion processes in plasmas are commonly simulated with the Monte Carlo method, where a discrete set of particles is sampled from a distribution function and advanced in a Lagrangian frame according to a set of stochastic differential equations. The Monte Carlo method introduces computational error in the form of statistical random noise produced by a finite number of particles (or markers) N, and the error scales as αN^(−β), where β = 1/2 for the standard Monte Carlo method. This requires a large number of simulated particles in order to obtain a sufficiently low numerical noise level, so it is essential to use techniques that reduce the numerical noise; such methods are commonly called variance reduction methods. In this thesis, we have developed new variance reduction methods with application to plasma kinetic diffusion. The methods are suitable for simulation of RF heating and transport, but are not limited to these types of problems. We have derived a novel variance reduction method that minimizes the number of required particles through an optimization model. This implicitly reduces the variance when calculating the expected value of the distribution, since, for a fixed error, the optimization model ensures that a minimal number of particles is needed. Techniques that reduce the noise by improving the order of convergence have also been considered. Two different methods have been tested on a neutral beam injection scenario: the scrambled Brownian bridge method and a method here called the sorting and mixing method of Lécot and Khettabi [1999]. Both methods converge faster than the standard Monte Carlo method for a modest number of time steps, but fail to converge correctly for a large number of time steps, a range required for detailed plasma kinetic simulations. Different techniques are discussed that have the potential of improving the convergence in this range of time steps.
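For orientation, a small sketch of the (unscrambled) Brownian bridge construction named above; the grid layout and seed are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def brownian_bridge_path(normals):
    """Build a Brownian path on [0, 1] by midpoint (Brownian bridge)
    construction from i.i.d. standard normals.

    `normals` must have length 2**m; entry 0 fixes the endpoint W(1),
    later entries refine midpoints at finer scales. The construction
    concentrates the path's large-scale variance in the first inputs,
    which is what makes it attractive for quasi-random inputs.
    """
    n = len(normals)
    w = np.zeros(n + 1)                  # W on the grid t = k/n
    w[n] = normals[0]                    # W(1) ~ N(0, 1)
    h, idx = n, 1
    while h > 1:
        half = h // 2
        for left in range(0, n, h):
            right = left + h
            mid = left + half
            mean = 0.5 * (w[left] + w[right])
            # Bridge midpoint variance: (t_m - t_l)(t_r - t_m)/(t_r - t_l)
            std = np.sqrt(half * (h - half) / h / n)
            w[mid] = mean + std * normals[idx]
            idx += 1
        h = half
    return w

path = brownian_bridge_path(np.random.default_rng(1).standard_normal(8))
print(path)
```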
13

Sun, Na. "Control variate approach for multi-user estimation via Monte Carlo simulation." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12857.

Abstract:
Monte Carlo (MC) simulation is a very flexible and widely used computational method employed in many areas of science and engineering. The focus of this research is the variance reduction technique of Control Variates (CV), a statistical approach used to improve the efficiency of MC simulation. We consider parametric estimation problems encountered in analysing stochastic systems, where the stochastic system performance or its sensitivity depends on some model or decision parameter. Furthermore, we assume that the estimation is performed by one or more users at one or several parameter values. A store-and-reuse setting is introduced, in which some information is gathered computationally and stored at a set-up stage; the stored information is then used at the estimation phase by users to help with their estimation problems. Three problems in this setting are addressed. (i) An analysis of the user's choices at the estimation phase is provided. The information generated at the set-up phase is stored in the form of information about a set of random variables that can be used as control variates, and users need to decide whether, and if so how, to use the stored information. A so-called cost-adjusted mean squared error is used as a measure of the cost of the available estimators, and the user's decision is formulated as a constrained minimization problem. (ii) A recent approach to defining generic control variates in parametric estimation problems is generalized in two distinct directions: the first involves considering an alternative parametrization of the original problem through a change of probability measure, a parametrization particularly relevant to sensitivity estimation problems with respect to model and decision parameters. In the second, for problems where the quantities of interest are defined on sample paths of stochastic processes that model the underlying stochastic dynamics, systematic control variate selection based on approximate dynamics is proposed. (iii) When common random inputs are used, parametric estimation variables become statistically dependent. This dependence is explicitly modelled as a random field, and conditions are derived that imply the effectiveness of estimation variables as control variates. Comparisons are provided with the metamodeling approach of Kriging, and the recently proposed Stochastic Kriging, which use similar input data to predict the mean of the estimation variable.
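The control-variate mechanism at the heart of this research, in its simplest one-dimensional form; the toy integrand and control below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal control-variate (CV) estimator: reduce the variance of a
# Monte Carlo mean of Y using a correlated variable C with known mean.
# Toy choice: Y = exp(U), C = U with E[C] = 0.5 for U ~ Uniform(0, 1).
u = rng.uniform(size=100_000)
y = np.exp(u)
c = u
c_mean = 0.5

# Optimal coefficient beta* = Cov(Y, C) / Var(C), estimated from data.
beta = np.cov(y, c)[0, 1] / np.var(c)
y_cv = y - beta * (c - c_mean)

print(y.mean(), y.std())        # crude estimator (true mean is e - 1)
print(y_cv.mean(), y_cv.std())  # same target, much smaller spread
```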
14

Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.

Abstract:
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced in the applied mathematics literature as a variance reduction scheme for the Monte Carlo simulation of Markov chains. This Ph.D. work implements this adaptive variance reduction method in the particle-transport Monte Carlo code TRIPOLI-4, which is dedicated, among other uses, to radiation shielding and nuclear instrumentation studies. Such studies are characterized by strong attenuation of radiation in matter, so they fall within the scope of rare-event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for AMS and then tested and validated. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of AMS to branching processes, which are common in radiation shielding simulations; in coupled neutron-photon simulations, for example, the neutrons must be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle-flux attenuations of more than 10 orders of magnitude), highlighting the promising advantages of AMS over existing variance reduction techniques.
15

Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.

Abstract:
In this thesis, the creditworthiness of a company is modelled using a stochastic process. Two credit models are considered: Merton's model, which models the value of a firm's assets using geometric Brownian motion, and the distance-to-default model, which is driven by a two-factor jump diffusion process. The probability of default and the default time are simulated using Monte Carlo, and the number of scenarios needed to obtain convergence in the simulations is investigated. The simulations are performed using the probability matrix method (PMM), in which a transition probability matrix describing the process is created and used for the simulations. Besides this, two variance reduction techniques are investigated: importance sampling and antithetic variates.
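A minimal sketch of one of the two techniques named above, antithetic variates, applied to a Merton-style terminal default probability; all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Antithetic variates for a default-probability-style estimate:
# P(X_T < barrier) where X_T follows geometric Brownian motion.
x0, mu, sigma, T, barrier, n = 100.0, 0.05, 0.3, 1.0, 60.0, 50_000

z = rng.standard_normal(n)
def terminal(zs):
    return x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * zs)

hit = (terminal(z) < barrier).astype(float)
hit_anti = (terminal(-z) < barrier).astype(float)   # reuse -z: antithetic

plain = hit.mean()
antithetic = 0.5 * (hit + hit_anti).mean()
print(plain, antithetic)   # same target, lower variance per random draw
```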
16

Zaidi, Nikki. "Hidden Variance in Multiple Mini-Interview Scores." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427797882.

17

Burešová, Jana. "Oceňování derivátů pomocí Monte Carlo simulací." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-11476.

Abstract:
Pricing of more complex derivatives is very often based on Monte Carlo simulations. Estimates given by these simulations are derived from thousands of scenarios for the development of the underlying asset price. These estimates can be made more precise by increasing the number of scenarios or by the simulation modifications discussed in this master thesis. The first part of the thesis gives a theoretical description of variance reduction techniques; the second part implements all of these techniques in pricing a barrier option and compares them. We conclude the thesis with two statements: the first says that the usefulness of each technique depends on the specifics of the simulation, and the second recommends using MC simulations even in cases where a closed-form formula has been derived.
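For orientation, a crude Monte Carlo estimator for a discretely monitored barrier option of the kind priced in the thesis; the contract and market parameters are illustrative, and this unmodified estimator is exactly the baseline that variance reduction techniques would improve.

```python
import numpy as np

rng = np.random.default_rng(3)

# Crude Monte Carlo price of a down-and-out call under GBM, monitored
# on a discrete grid; all parameter values are illustrative.
s0, k, barrier, r, sigma, T = 100.0, 100.0, 80.0, 0.02, 0.25, 1.0
steps, paths = 252, 20_000

dt = T / steps
z = rng.standard_normal((paths, steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * z, axis=1)
s = s0 * np.exp(log_paths)

alive = (s.min(axis=1) > barrier)            # knocked out if barrier hit
payoff = np.where(alive, np.maximum(s[:, -1] - k, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(price)
```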
18

Yue, Rong-xian. "Applications of quasi-Monte Carlo methods in model-robust response surface designs." HKBU Institutional Repository, 1997. http://repository.hkbu.edu.hk/etd_ra/178.

19

Nowak, Michel. "Accelerating Monte Carlo particle transport with adaptively generated importance maps." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS403/document.

Abstract:
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events and needs to be tackled with variance reduction methods, which require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation itself. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France. The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated with the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reuse of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the current adjoint score. Finally, this work concludes with the smoothing of the importance map by machine learning algorithms, with a special focus on kernel density estimators.
20

Maire, Sylvain. "Réduction de variance pour l'intégration numérique et pour le calcul critique en transport neutronique." Toulon, 2001. http://www.theses.fr/2001TOUL0013.

Abstract:
This work deals with Monte Carlo methods and is especially devoted to variance reduction. In the first part, we study a probabilistic algorithm, based on iterated control variates, which enables the computation of mean-square approximations. Using it with a periodized Fourier basis and with the Legendre and Chebyshev orthogonal polynomial bases, we obtain Monte Carlo estimators with an increased convergence rate for one-dimensional regular functions. The method is then extended to the multidimensional case, attenuating the dimensional effect through a careful choice of the basis functions. The numerical validation is carried out on numerous examples and applications. The second part deals with criticality in neutron transport theory. We develop a numerical method that computes the principal eigenvalue of the neutron transport operator by combining the asymptotic expansion of the solution of the associated evolution problem with the Monte Carlo computation of its probabilistic representation. Various variance reduction techniques are tested on both homogeneous and inhomogeneous models. A probabilistic interpretation of the principal eigenvalue is given for a particular homogeneous model.
21

Stockbridge, Rebecca. "Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301665.

Abstract:
Stochastic programming combines ideas from deterministic optimization with probability and statistics to produce more accurate models of optimization problems involving uncertainty. However, due to their size, stochastic programming problems can be extremely difficult to solve and instead approximate solutions are used. Therefore, there is a need for methods that can accurately identify optimal or near optimal solutions. In this dissertation, we focus on improving Monte-Carlo sampling-based methods that assess the quality of potential solutions to stochastic programs by estimating optimality gaps. In particular, we aim to reduce the bias and/or variance of these estimators. We first propose a technique to reduce the bias of optimality gap estimators which is based on probability metrics and stability results in stochastic programming. This method, which requires the solution of a minimum-weight perfect matching problem, can be run in polynomial time in sample size. We establish asymptotic properties and present computational results. We then investigate the use of sampling schemes to reduce the variance of optimality gap estimators, and in particular focus on antithetic variates and Latin hypercube sampling. We also combine these methods with the bias reduction technique discussed above. Asymptotic properties of the resultant estimators are presented, and computational results on a range of test problems are discussed. Finally, we apply methods of assessing solution quality using antithetic variates and Latin hypercube sampling to a sequential sampling procedure to solve stochastic programs. In this setting, we use Latin hypercube sampling when generating a sequence of candidate solutions that is input to the procedure. We prove that these procedures produce a high-quality solution with high probability, asymptotically, and terminate in a finite number of iterations. Computational results are presented.
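A minimal sketch of Latin hypercube sampling, one of the variance reduction schemes investigated above; the dimensions and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def latin_hypercube(n, d, rng):
    """One LHS design: each of the d margins is stratified into n equal
    cells, with exactly one point per cell, in a random order."""
    # A uniform draw inside each stratum, strata shuffled per dimension.
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
         + rng.uniform(size=(n, d))) / n
    return u

pts = latin_hypercube(1000, 2, rng)
# Each margin now has exactly one point in each interval [i/n, (i+1)/n),
# which typically reduces estimator variance versus i.i.d. sampling.
print(np.sort((pts[:, 0] * 1000).astype(int))[:5])   # 0, 1, 2, 3, 4
```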
22

Landon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.

Abstract:
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights, which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared with good agreement to simple analytical solutions, and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored, and our final simulation method, (VR)²-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
23

Ramström, Alexander. "Pricing of European and Asian options with Monte Carlo simulations : Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Nationalekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-145942.

Abstract:
This thesis evaluates the accuracy of different models for option pricing by Monte Carlo simulation as parameter values and the number of simulations change. By simulating the asset movements thousands of times and using well-established theory, one can approximate the price of one-year financial options and, for the European options, also compare them with the price from the Black-Scholes exact pricing formula. The models in this thesis comprise two direct variance-reduction models, a low-discrepancy model, and the standard model. The results show that the models that control the generation of random numbers estimate the price best, with Quasi-MC performing better than the others. A hybrid model was constructed from two established models, and it proved accurate in estimating the option price, even beating Quasi-MC most of the time.
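A sketch of the low-discrepancy approach referred to above, pricing a one-year European call with scrambled Sobol' points through SciPy's qmc module; the market parameters are illustrative and none of the thesis's concrete models are reproduced.

```python
import numpy as np
from scipy.stats import norm, qmc

# Quasi-Monte Carlo price of a 1-year European call: replace pseudo-
# random normals with inverse-transformed scrambled Sobol' points.
s0, k, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0

sampler = qmc.Sobol(d=1, scramble=True, seed=2024)
u = sampler.random(2**16)                     # low-discrepancy uniforms
z = norm.ppf(u[:, 0])                         # map to standard normals

st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price = np.exp(-r * T) * np.maximum(st - k, 0.0).mean()
print(price)   # compare against the Black-Scholes closed form
```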
24

Solomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.

Abstract:
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D three-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
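For readers outside the Monte Carlo transport community, the figure of merit behind the cost definition above is the standard one:

```latex
% sigma^2 is the sample variance of the estimate and T the calculation
% time, so cost = 1/FOM:
\[
  \mathrm{FOM} = \frac{1}{\sigma^{2} T},
  \qquad
  \text{cost} = \mathrm{FOM}^{-1} = \sigma^{2} T .
\]
```

Variance reduction therefore pays off only while the decrease in the variance outweighs the extra time spent per history.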
25

Shaw, Benjamin Stuard. "Structure and Variability of the North Atlantic Meridional Overturning Circulation from Observations and Numerical Models." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/74.

Abstract:
This study presents an analysis of observed Atlantic Meridional Overturning Circulation (AMOC) variability at 26.5°N on submonthly to interannual time scales, compared to the variability characteristics produced by a selection of five high- and low-resolution, synoptically and climatologically forced OGCMs. The focus of the analysis is the relative contributions of ocean mesoscale eddies and synoptic atmospheric forcing to the overall AMOC variability. Observations used in this study were collected within the framework of the joint U.K.-U.S. Rapid Climate Change (RAPID)-Meridional Overturning Circulation & Heat Flux Array (MOCHA) Program. The RAPID-MOCHA array has now been in place for nearly 6 years, of which 4 years of data (2004-2007) are analyzed in this study. At 26.5°N, the MOC strength measured by the RAPID-MOCHA array is 18.5 Sv. Overall, the models tend to produce a realistic, though slightly underestimated, MOC. With the exception of one of the high-resolution, synoptically forced models, standard deviations of model-produced MOC are lower than the observed standard deviation by 1.5 to 2 Sv. A comparison of the MOC spectra at 26.5°N shows that model variability is weaker than observed variability at periods longer than 100 days. Of the five models investigated in this study, two were selected for a more in-depth examination. One model is forced by a monthly climatology derived from 6-hourly NCEP/NCAR winds (OFES-CLIM), whereas the other is forced by NCEP/NCAR reanalysis daily winds and fluxes (OFES-NCEP). They are identically configured, presenting an opportunity to explain differences in their MOCs by their differences in forcing. Both of these models were produced by the OGCM for the Earth Simulator (OFES), operated by the Japan Agency for Marine-Earth Science & Technology (JAMSTEC). The effects of Ekman transport on the strength, variability, and meridional decorrelation scale are investigated for the OFES models. This study finds that AMOC variance due to Ekman forcing is distributed nearly evenly between the submonthly, intraseasonal, and seasonal period bands. When Ekman forcing is removed, the remaining variance is the result of geostrophic motions. In the intraseasonal period band this geostrophic AMOC variance is dominated by eddy activity, and variance in the submonthly period band is dominated by forced geostrophic motions such as Rossby and Kelvin waves. It is also found that MOC variability is coherent over a meridional distance of ~8° throughout the study region, and that this coherence scale is intrinsic to both Ekman and geostrophic motions. A Monte Carlo-style evaluation of the 27-year-long OFES-NCEP timeseries is used to investigate the ability of a four-year MOC strength timeseries to represent the characteristics of lengthier timeseries. It is found that a randomly selected four-year timeseries will fall within ~1 Sv of the true mean 95% of the time, but long-term trends cannot be accurately calculated from a four-year timeseries. Errors in the calculated trend are noticeably reduced for each additional year until the timeseries reaches ~11 years in length. For timeseries longer than 11 years, the trend's 95% confidence interval asymptotes to 2 Sv/decade.
26

Xu, Yushun. "Asymptotique suramortie de la dynamique de Langevin et réduction de variance par repondération." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2024/document.

Abstract:
This dissertation is devoted to studying two different problems: the overdamped asymptotics of Langevin dynamics, and a new variance reduction technique based on an optimal reweighting of samples. In the first problem, the convergence in distribution of Langevin processes in the overdamped asymptotic is proven. The proof relies on the classical perturbed test function (or corrector) method, which is used (i) to show tightness in path space, and (ii) to identify the extracted limit with a martingale problem. The result holds assuming only the continuity of the gradient of the potential energy and a mild control of the initial kinetic energy. In the second problem, we devise variance reduction methods for the Monte Carlo estimation of an expectation of the type E[φ(X, Y)] when the distribution of X is exactly known. The key idea is to give each individual sample a weight, so that the resulting weighted empirical distribution has a marginal with respect to the variable X as close as possible to its target. We prove several theoretical results on the method, identifying settings where the variance reduction is guaranteed, and illustrate the use of the weighting method on a Langevin stochastic differential equation. We also perform numerical tests comparing the methods and demonstrating their efficiency.
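A deliberately simplified stand-in for the reweighting idea described above, using post-stratification weights on a known marginal of X; the thesis's optimal weights are more general than this binned sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch: when the law of X is known exactly, weight each sample so
# that the weighted empirical marginal of X matches its target, here
# via simple post-stratification on bins.
n, bins = 10_000, 20
x = rng.uniform(size=n)                  # X ~ Uniform(0, 1), known law
y = rng.normal(loc=np.sin(2 * np.pi * x), scale=0.5)

edges = np.linspace(0.0, 1.0, bins + 1)
which = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
counts = np.bincount(which, minlength=bins)
target = n / bins                        # exact expected count per bin
weights = target / counts[which]         # equalize bin occupancy

crude = y.mean()
reweighted = np.average(y, weights=weights)
print(crude, reweighted)                 # both estimate E[phi(X, Y)]
```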
27

Dehaye, Benjamin. "Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112332/document.

Abstract:
Fields such as criticality studies require the computation of certain neutron-physics quantities of interest. Two kinds of codes may be used: deterministic ones and stochastic ones. The stochastic codes do not require approximations and are thus more exact; however, they may need a very long computation time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient strategy for accelerating criticality convergence in the TRIPOLI-4® code. We wish to implement the zero-variance game, which requires the adjoint flux to be computed. The originality of this thesis is to compute the adjoint flux directly from a forward Monte Carlo simulation, without resorting to an external code, thanks to the fission matrix method. This adjoint flux is then used as an importance map to accelerate the convergence of the simulation.
28

Chen, Jinsong. "Variance analysis for kernel smoothing of a varying-coefficient model with longitudinal data /." Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/chenj/jinsongchen.pdf.

29

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

30

Müller, Armin. "Variance reduced Monte-Carlo Simulations of Stochastic Differential Equations." FernUniversität in Hagen, 2017. http://d-nb.info/1177893266/34.

31

Moreni, Nicola. "Méthodes de Monte Carlo et valorisation d' options." Paris 6, 2005. http://www.theses.fr/2005PA066626.

32

Arouna, Bouhari. "Algorithmes stochastiques et méthodes de Monte Carlo." PhD thesis, Ecole des Ponts ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00001269.

Abstract:
In this thesis, we propose new variance reduction techniques for Monte Carlo simulations. Through a simple change of variable, we modify the simulation law in a parametric way. The idea is then to use a suitably projected version of the Robbins-Monro algorithm to determine the optimal parameter that "minimizes" the variance of the estimate. We first developed a sequential implementation in which the variance is reduced dynamically over the course of the Monte Carlo iterations. Finally, in the last part of our work, the main idea was to interpret variance reduction in terms of minimizing the relative entropy between a given optimal probability measure and a parametric family of probability measures. We proved general theoretical results that define a rigorous framework for the use of these methods, and then carried out several experiments in finance and reliability that demonstrate their practical efficiency.
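A minimal sketch of the approach described above in its simplest setting, a Gaussian mean shift tuned by a projected Robbins-Monro recursion; the payoff, step sizes, and projection box are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# Estimate E[f(G)], G ~ N(0, 1), by importance sampling with a mean
# shift theta tuned online by descending the estimator's second moment
# M(theta) = E[f(G)^2 exp(theta^2/2 - theta G)].
f = lambda x: np.maximum(np.exp(x) - 3.0, 0.0)   # rare-ish payoff

theta, a, box = 0.0, 0.05, 5.0
for k in range(1, 20_001):
    g = rng.standard_normal()
    # Unbiased sample of dM/dtheta at the current theta.
    grad = f(g) ** 2 * np.exp(-theta * g + 0.5 * theta**2) * (theta - g)
    theta = np.clip(theta - (a / k) * grad, -box, box)   # projection step

# Final importance-sampling estimate with the adapted shift.
g = rng.standard_normal(200_000)
est = np.mean(f(g + theta) * np.exp(-theta * g - 0.5 * theta**2))
print(theta, est)
```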
33

Lescot, Numa. "Réduction de variance pour les sensibilités : application aux produits sur taux d'intérêt." Paris 6, 2012. http://www.theses.fr/2012PA066102.

Abstract:
This thesis studies variance reduction techniques for the problem of approximating functionals of diffusion processes, motivated by applications in computational finance to derivatives pricing and hedging. The main tool is Malliavin's stochastic calculus of variations, which yields simulatable representations of both sensitivities and the optimal strategy for variance reduction. In the first part we present a unified view of the control variate and importance sampling methodologies, and give a practical factorization of the optimal strategies. We introduce a parametric importance sampling algorithm and study it in detail. To solve the corresponding optimization problem, we validate two procedures based respectively on stochastic approximation and on minimizing an empirical counterpart. Several numerical examples highlight the method's potential. In the second part we combine integration by parts with a Girsanov transform to obtain several stochastic representations of sensitivities. Going beyond a strictly elliptic framework, we show on a class of HJM models with stochastic volatility how to efficiently construct a covering vector field in the sense of Malliavin-Thalmaier. The last chapter, of a more applied nature, deals with a practical case of pricing and hedging exotic rate options.
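As a reminder of what the control-variate half of this toolbox buys, here is a generic Black-Scholes sketch (a standard textbook example, not the Malliavin-based construction of the thesis): the discounted terminal price, whose expectation equals the spot, serves as control variate for a call payoff. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
s0, K, r, sigma, T, n = 100.0, 110.0, 0.03, 0.2, 1.0, 200_000

z = rng.standard_normal(n)
sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoff = np.exp(-r * T) * np.maximum(sT - K, 0.0)

# Control variate: discounted terminal price, known expectation s0
cv = np.exp(-r * T) * sT
C = np.cov(payoff, cv)                       # empirical optimal coefficient
adjusted = payoff - (C[0, 1] / C[1, 1]) * (cv - s0)

print(f"crude MC : {payoff.mean():.4f} ± {payoff.std() / np.sqrt(n):.4f}")
print(f"with CV  : {adjusted.mean():.4f} ± {adjusted.std() / np.sqrt(n):.4f}")
```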
APA, Harvard, Vancouver, ISO, and other styles
34

Pierre-Louis, Péguy. "Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/228433.

Full text
Abstract:
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework such that a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, then the algorithm stops and outputs the candidate solution along with an approximate (1 - α) confidence interval on its optimality gap. The first set of algorithms, which we refer to as the fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants. The fully sequential procedures use deterministic, non-decreasing sample size schedules, whereas in the other variant, the sample size at the next iteration is determined using current statistical estimates. We establish desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution within the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
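The sequential stopping logic can be sketched on a one-dimensional toy program (the quadratic recourse function, batch count and constants below are our illustrative assumptions, not the dissertation's test problems): batches of gap estimates combine a candidate evaluation with an SAA lower-bound statistic, and the sample size grows until the one-sided confidence bound on the gap drops below the tolerance.

```python
import numpy as np

rng = np.random.default_rng(2)

def saa(sample):
    # Toy program min_x E[(x - xi)^2]: SAA optimum and value in closed form
    xbar = sample.mean()
    return xbar, ((sample - xbar) ** 2).mean()

x_hat = saa(rng.standard_normal(500))[0]     # candidate from an initial SAA

eps, z95, m, n = 0.01, 1.645, 30, 100        # tolerance, CI constant, batches
while True:
    gaps = []
    for _ in range(m):
        xi = rng.standard_normal(n)
        upper = ((x_hat - xi) ** 2).mean()   # objective estimate at candidate
        lower = saa(xi)[1]                   # SAA optimal value (lower-bound statistic)
        gaps.append(upper - lower)
    gaps = np.asarray(gaps)
    ci = gaps.mean() + z95 * gaps.std(ddof=1) / np.sqrt(m)
    if ci <= eps:                            # gap confidently below tolerance
        break
    n *= 2                                   # deterministic sample-size schedule

print(f"accepted x_hat = {x_hat:.4f} with gap bound {ci:.5f} at n = {n}")
```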
APA, Harvard, Vancouver, ISO, and other styles
35

Felcman, Adam. "Value at Risk: Historická simulace, variančně kovarianční metoda a Monte Carlo simulace." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-124888.

Full text
Abstract:
The diploma thesis "Value at Risk: Historical simulation, variance covariance method and Monte Carlo" aims to quantify the risk borne by a real bond portfolio. The thesis is divided into two major parts: a theoretical and a practical chapter. The first presents the theory of VaR and conditional VaR, including their advantages and disadvantages. It also describes three basic methods to calculate VaR and CVaR, with adjustments to each method that increase the reliability of the results. The last chapter presents the results of the VaR and CVaR computations; graphs, tables and images are added to the results section to make the outputs clear and well arranged.
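The three computation routes compared in the thesis condense into a few lines (a sketch on synthetic returns; the portfolio value, confidence level and return model are our own placeholder assumptions, not the thesis data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.normal(0.0002, 0.01, 1000)   # placeholder daily return series
value, alpha = 1_000_000.0, 0.99

# 1) Historical simulation: empirical quantile of past returns
var_hist = -np.quantile(returns, 1 - alpha) * value

# 2) Variance-covariance: normality assumption, analytic quantile
var_par = -(returns.mean() + norm.ppf(1 - alpha) * returns.std()) * value

# 3) Monte Carlo: simulate from a fitted model, then take the quantile
sims = rng.normal(returns.mean(), returns.std(), 100_000)
var_mc = -np.quantile(sims, 1 - alpha) * value

# Conditional VaR (expected shortfall) from the simulated tail
cvar_mc = -sims[sims <= np.quantile(sims, 1 - alpha)].mean() * value

print(f"historical {var_hist:,.0f} | parametric {var_par:,.0f} | "
      f"Monte Carlo {var_mc:,.0f} | CVaR {cvar_mc:,.0f}")
```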
APA, Harvard, Vancouver, ISO, and other styles
36

Sampson, Andrew. "Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3063.

Full text
Abstract:
This dissertation describes the application of two principled variance reduction strategies to increase efficiency in two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied to two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between highly correlated dose calculations in homogeneous and heterogeneous geometries. The particle transport in the heterogeneous geometry assumed a purely homogeneous environment, and altered particle weights accounted for the bias. Average gains of 37 to 60 are reported from using CMC, relative to un-correlated Monte Carlo (UMC) calculations, for the prostate and breast CTVs, respectively. To further increase the efficiency up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC computes the dose using CMC on a low-resolution (LR) spatial grid, followed by interpolation to a high-resolution (HR) voxel grid. The interpolated HR dose difference is then summed with a pre-computed, HR, homogeneous dose map. ICMC computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 s and 0.39 s, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution of the adjoint Boltzmann transport equation computed either via the discrete ordinates method (DOM) or a MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested for a single elliptical water cylinder using a primary point source (PPS) and a phase-space source (PSS). The best gains were found by using the PSS, yielding average efficiency gains of 250 relative to non-weight-windowed MC utilizing the PPS. Furthermore, computing 360 projections on a 40 by 30 pixel grid requires only 48 min on a single CPU core, allowing clinical use via parallel processing techniques.
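The core idea of correlated Monte Carlo, scoring a difference with common random numbers so that most of the noise cancels, fits in a few lines. The sketch below is a one-dimensional stand-in for the homogeneous/heterogeneous dose pair, not the dissertation's transport code:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Two "geometries": a baseline model f0 and a perturbed model f1
f0 = lambda x: np.exp(-x)                 # homogeneous stand-in
f1 = lambda x: np.exp(-1.05 * x)          # heterogeneous stand-in

# Uncorrelated MC: independent samples for each model
d_umc = f1(rng.random(n)) - f0(rng.random(n))

# Correlated MC: the same random numbers drive both models
u = rng.random(n)
d_cmc = f1(u) - f0(u)

gain = d_umc.var() / d_cmc.var()
print(f"difference ≈ {d_cmc.mean():.5f}, variance gain ≈ {gain:.0f}x")
```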
APA, Harvard, Vancouver, ISO, and other styles
37

Fakhereddine, Rana. "Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numérique." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM047/document.

Full text
Abstract:
Monte Carlo (MC) methods are numerical methods that use random numbers to solve, on computers, problems from applied sciences and techniques. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0; 1)^s is divided into N subcubes having the same measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of the evaluation of the measure of a subset of I^s is examined in particular detail. The variance of the MCS method may be bounded by O(1/N^(1+1/s)). The results of numerical experiments in dimensions 2, 3, and 4 show that the upper bounds are tight. We next propose a hybrid method between MCS and LHS that has the properties of both approaches, with one random point in each subcube and such that the projections of the points on each coordinate axis are also evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0; 1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is bounded by O(1/N^(1+1/s)); the order of the bound is validated through the results of numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques previously analyzed. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. One uses particles sampled from the initial distribution, which are subject to a Gaussian move in each time step. The particles are renumbered according to their positions in every step, and the random numbers which give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation; this equation models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides if coalescence occurs. If the particles are ordered by increasing size in every time step and if the random numbers are replaced by stratified points, a variance reduction is observed when compared to the results of the usual MC algorithm.
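The simple stratification (MCS) described above is easy to reproduce for the measure-of-a-subset problem in dimension s = 2 (a minimal sketch of our own; the quarter-disk test set and grid size are illustrative choices). The stratified estimator's variance decays roughly like N^(-1-1/s), versus N^(-1) for crude MC:

```python
import numpy as np

rng = np.random.default_rng(5)

def mcs_points(k):
    # Simple stratification: one uniform point in each of N = k*k subsquares
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    return np.stack([(i.ravel() + rng.random(k * k)) / k,
                     (j.ravel() + rng.random(k * k)) / k], axis=1)

inside = lambda p: (p ** 2).sum(axis=1) <= 1.0   # quarter-disk, measure pi/4

k, reps = 32, 200                                 # N = 1024 points per estimate
est_mcs = [inside(mcs_points(k)).mean() for _ in range(reps)]
est_mc = [inside(rng.random((k * k, 2))).mean() for _ in range(reps)]

print(f"pi/4 = {np.pi / 4:.5f}")
print(f"crude MC var {np.var(est_mc):.2e}")
print(f"MCS      var {np.var(est_mcs):.2e}")
```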
APA, Harvard, Vancouver, ISO, and other styles
38

Rogers, Catherine Jane. "Power comparisons of four post-MANOVA tests under variance-covariance heterogeneity and non-normality in the two group case." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Haseeb, Hayat, and Rahul Duggal. "Valuation of exotic options under the Constant Elasticity of Variance model by exact Monte Carlo simulation: A MATLAB GUI application." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26141.

Full text
Abstract:
Diffusions are broadly used in mathematical finance for modeling asset prices. We consider the exact path sampling of a constant elasticity of variance diffusion model obtained from a squared Bessel process. We have created a MATLAB GUI (Graphical User Interface) program to evaluate exotic options under the constant elasticity of variance model by exact Monte Carlo simulation. The procedure and the mathematical background behind constructing the program are described and explained. We show the results obtained and analyse them on the basis of the MATLAB program.
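The exact-sampling building block here is the squared Bessel transition, which is a scaled noncentral chi-square; a CEV path then follows by the usual power transformation. Below is a minimal Python sketch of that transition (the parameters and sanity check are our own illustrative choices; the thesis's actual implementation is in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(6)

def besq_path(x0, delta, times):
    # Exact squared-Bessel transitions: X_{t+dt} | X_t = x is dt times a
    # noncentral chi-square with df = delta and noncentrality x / dt.
    x, out = x0, [x0]
    for dt in np.diff(times):
        x = dt * rng.noncentral_chisquare(delta, x / dt)
        out.append(x)
    return np.asarray(out)

# Sanity check against the known mean E[X_t] = x0 + delta * t
ends = [besq_path(4.0, 3.0, [0.0, 0.5, 1.0])[-1] for _ in range(20_000)]
print(f"E[X_1]: exact {4.0 + 3.0:.3f}, sampled {np.mean(ends):.3f}")
```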
APA, Harvard, Vancouver, ISO, and other styles
40

Depinay, Jean-Marc. "Automatisation de méthodes de réduction de variance pour la résolution de l'équation de transport." Phd thesis, Ecole des Ponts ParisTech, 2000. http://tel.archives-ouvertes.fr/tel-00005592.

Full text
Abstract:
Monte Carlo methods are often used to solve neutron transport problems; the high dimension of the problem and the complexity of real geometries make traditional numerical methods difficult to implement. Monte Carlo methods are relatively easy to implement, but they have the drawback of converging slowly, the accuracy of the computation being of order 1/√n, where n is the number of simulations.
Many studies have sought to accelerate the convergence of this type of algorithm. This work follows that line and aims to identify and describe convergence acceleration techniques that are easy to implement and to automate. In this thesis, we are interested in importance sampling methods. These classical techniques for transport equations use parameters that are usually set empirically by specialists. The main originality of our work is to propose methods that are easily automated. The originality of the algorithm lies, on the one hand, in the use of importance sampling on the angular variable (angular biasing), applied in addition to the sampling of the position variable, and, on the other hand, in the description of a technique for explicitly computing all the parameters of the variance reduction. This last point allows a near-complete automation of the variance reduction procedure.
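The flavour of such biasing schemes can be conveyed by the simplest transport example there is: mono-energetic transmission through a slab, with a stretched flight-length density and an explicit likelihood ratio (a deliberately one-dimensional stand-in of our own, not the thesis's angular-biasing algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_t, L, n = 1.0, 10.0, 200_000           # cross-section, slab width, samples
exact = np.exp(-sigma_t * L)                 # uncollided transmission, ~4.5e-5

# Analogue MC: indicator that the sampled flight length crosses the slab
analogue = (rng.exponential(1.0 / sigma_t, n) > L).astype(float)

# Path-length biasing: stretch the flight-length density, carry weights
sigma_b = 0.12                               # biased cross-section (tuning knob)
ell = rng.exponential(1.0 / sigma_b, n)
w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * ell)   # likelihood ratio
biased = np.where(ell > L, w, 0.0)

for name, s in [("analogue", analogue), ("biased", biased)]:
    print(f"{name:9s} mean {s.mean():.2e} (exact {exact:.2e}) "
          f"rel. err {s.std() / np.sqrt(n) / exact:.3f}")
```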
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Hangcheng. "Comparing Welch's ANOVA, a Kruskal-Wallis test and traditional ANOVA in case of Heterogeneity of Variance." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3985.

Full text
Abstract:
Analysis of variance (ANOVA) is robust against violations of the normality assumption, but it may be inappropriate when the assumption of homogeneity of variance is violated. Welch's ANOVA and the Kruskal-Wallis test (a non-parametric method) are applicable in this case. In this study we compare the three methods in terms of empirical type I error rate and power when heterogeneity of variance occurs, and determine which method is most suitable in which cases, including balanced/unbalanced designs, small/large sample sizes, and normal/non-normal distributions.
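Such a comparison is straightforward to reproduce. The sketch below implements Welch's statistic from the textbook formulas (rather than relying on any particular library's API) and estimates empirical type I error rates under one heteroscedastic scenario of our own choosing, with the larger variances paired with the smaller groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def welch_anova(*groups):
    # Welch's heteroscedastic one-way ANOVA, textbook formulas
    k = len(groups)
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v
    mw = (w * m).sum() / w.sum()
    lam = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    f = ((w * (m - mw) ** 2).sum() / (k - 1)) / (1 + 2 * (k - 2) / (k**2 - 1) * lam)
    df2 = (k**2 - 1) / (3 * lam)
    return f, stats.f.sf(f, k - 1, df2)

# Equal means, unequal variances: compare empirical type I error rates
reject = np.zeros(3)
for _ in range(2000):
    g = [rng.normal(0, s, n) for s, n in [(1, 20), (3, 10), (6, 10)]]
    reject[0] += stats.f_oneway(*g).pvalue < 0.05
    reject[1] += welch_anova(*g)[1] < 0.05
    reject[2] += stats.kruskal(*g).pvalue < 0.05
print("type I error (ANOVA / Welch / Kruskal-Wallis):", reject / 2000)
```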
APA, Harvard, Vancouver, ISO, and other styles
42

Champciaux, Valentin. "Calcul accéléré de la dose périphérique en radiothérapie." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASP001.

Full text
Abstract:
Radiotherapy is currently one of the main treatment modalities for cancer. However, it is not without risk: the ionising radiation used to destroy the tumour may promote the appearance of long-term side effects when particles reach areas far from the treated zone. To date, there is no efficient method for predicting this peripheral dose, which is therefore not included in treatment planning. In particular, the Monte Carlo simulation method, known for its precision in dosimetry studies, is heavily limited by the computing time required. This work focuses on the study and implementation, in a Monte Carlo simulation code, of a variance reduction method designed to speed up peripheral dose estimates. This method, known as "pseudo-deterministic transport", allows the creation of new particles that are artificially brought to the region of interest where the dose is estimated, thus reducing the variance of the estimate. It is implemented in the simulation code Phoebe, along with other simple techniques that compensate for some of its drawbacks. The tool thus created is then validated numerically and experimentally, and applied to a concrete case of modelling in radiotherapy.
APA, Harvard, Vancouver, ISO, and other styles
43

Åberg, K. Magnus. "Variance Reduction in Analytical Chemistry : New Numerical Methods in Chemometrics and Molecular Simulation." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.

Full text
Abstract:

This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods.

Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules.

Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments.

Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV.

Paper V addresses a generic problem in classification; namely, how to measure the goodness of different data representations, so that the best classifier may be constructed.

Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).

APA, Harvard, Vancouver, ISO, and other styles
44

Elazhar, Halima. "Dosimétrie neutron en radiothérapie : étude expérimentale et développement d'un outil personnalisé de calcul de dose Monte Carlo." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAE013/document.

Full text
Abstract:
Treatment optimization in radiotherapy aims at increasing the accuracy of cancer cell irradiation while sparing the surrounding healthy organs as much as possible. However, the peripheral dose deposited in healthy tissues far away from the tumour is currently not calculated by treatment planning systems, even though it can be responsible for radiation-induced secondary cancers. Among the different components, neutrons produced through photo-nuclear processes are the secondary particles for which there is an important lack of dosimetric data. An experimental and Monte Carlo simulation study of secondary neutron production in radiotherapy led us to develop an algorithm that uses the precision of Monte Carlo calculation to estimate the 3D neutron dose delivered to the patient. Such a tool will allow the generation of dosimetric databases ready to be used to improve "dose-risk" mathematical models specific to the low-dose irradiation of peripheral organs occurring in radiotherapy.
APA, Harvard, Vancouver, ISO, and other styles
45

Hoover, Jared Stephen. "Monte Carlo Modeling of a Varian 2100C 18 MV Megavoltage Photon Beam and Subsequent Dose Delivery using MCNP5." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16245.

Full text
Abstract:
A Varian 2100C 18 MV photon beam has been modeled in this work using the MCNP5 Monte Carlo particle transport code. The simulated beam was also delivered to a water phantom and benchmarked against experimentally measured depth dose data. The model presented in this work establishes the foundation on which further tuning of the beam characteristics is required in order to realistically model the beam mentioned above. It has been determined in this work that the initial electron beam energy of this beam model is sufficiently close to the electron beam energy of the linear accelerator used to obtain the benchmark depth dose data.
APA, Harvard, Vancouver, ISO, and other styles
46

Farah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112183/document.

Full text
Abstract:
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were put into equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity distribution in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions; the contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements, thus providing a more precise estimate of the activity distribution in the case of an internal contamination.
APA, Harvard, Vancouver, ISO, and other styles
47

Maire, Sylvain. "Quelques Techniques de Couplage entre Méthodes Numériques Déterministes et Méthodes de Monte-Carlo." Habilitation à diriger des recherches, Université du Sud Toulon Var, 2007. http://tel.archives-ouvertes.fr/tel-00579977.

Full text
Abstract:
The works presented here fall within the framework of variance reduction for Monte Carlo methods and, more generally, of the optimization of numerical methods through couplings between deterministic and probabilistic methods. Three main topics are addressed with these techniques: numerical integration over a hypercube, the solution of linear partial differential equations, and the computation of the principal eigenelements (eigenvalue and eigenvector) of certain linear operators.
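A minimal example of such a deterministic/Monte Carlo coupling for integration (a generic sketch of our own, not one of the author's schemes): a cheap polynomial approximation of the integrand is integrated exactly, and Monte Carlo handles only the small residual, acting as a control variate.

```python
import numpy as np

rng = np.random.default_rng(9)
f = lambda x: np.exp(x)                     # integrand on [0, 1], exact e - 1

# Deterministic part: degree-2 least-squares fit, integrated in closed form
xs = np.linspace(0.0, 1.0, 9)
c = np.polyfit(xs, f(xs), 2)                # c[0] x^2 + c[1] x + c[2]
exact_poly = c[0] / 3 + c[1] / 2 + c[2]

# Monte Carlo only on the (small) residual f - p
u = rng.random(20_000)
resid = f(u) - np.polyval(c, u)
estimate = exact_poly + resid.mean()

crude = f(rng.random(20_000))
print(f"coupled  {estimate:.6f} ± {resid.std() / np.sqrt(20_000):.1e}")
print(f"crude MC {crude.mean():.6f} ± {crude.std() / np.sqrt(20_000):.1e}")
print(f"exact    {np.e - 1:.6f}")
```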
APA, Harvard, Vancouver, ISO, and other styles
48

Karawatzki, Roman, Josef Leydold, and Klaus Pötzelberger. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1400/1/document.pdf.

Full text
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with the dimension. An implementation of these algorithms in C is available from the authors. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
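The Hit-and-Run kernel at the heart of this proposal is only a few lines. The sketch below samples uniformly from a Euclidean ball, where the chord endpoints are available in closed form (an illustration of the kernel alone, without the Ratio-of-Uniforms layer of the paper; dimension and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(10)

def hit_and_run_ball(dim, steps, radius=1.0):
    # Hit-and-Run for the uniform law on a ball: draw a random direction,
    # then a uniform point on the chord through the current state.
    x = np.zeros(dim)
    out = np.empty((steps, dim))
    for s in range(steps):
        d = rng.standard_normal(dim)
        d /= np.linalg.norm(d)
        # Solve |x + t d| = radius for the chord endpoints t- < t+
        b, c = x @ d, x @ x - radius**2
        disc = np.sqrt(b * b - c)
        x = x + rng.uniform(-b - disc, -b + disc) * d
        out[s] = x
    return out

pts = hit_and_run_ball(dim=5, steps=50_000)
# Sanity check: E[|X|^2] for the uniform unit ball in dimension d is d/(d+2)
print(f"E|X|^2 sampled {np.mean((pts ** 2).sum(axis=1)):.3f}, exact {5 / 7:.3f}")
```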
APA, Harvard, Vancouver, ISO, and other styles
49

Ho, Kwok Wah. "RJMCMC algorithm for multivariate Gaussian mixtures with applications in linear mixed-effects models /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202005%20HO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

An, Qian. "A Monte Carlo study of several alpha-adjustment procedures used in testing multiple hypotheses in factorial ANOVA." Ohio: Ohio University, 2010. http://www.ohiolink.edu/etd/view.cgi?ohiou1269439475.

Full text
APA, Harvard, Vancouver, ISO, and other styles