Dissertations / Theses on the topic 'Varianty montáže'
Consult the top 50 dissertations / theses for your research on the topic 'Varianty montáže.'
Juřicová, Vendula. "Koncept montážní linky pro montáž centrální části systému termoregulace motoru." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232183.
Jůzl, Martin. "Výrobní hala LD Seating - stavebně technologický projekt." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-371963.
Kánová, Eliška. "Zhodnocení běžných účtů metodami operačního výzkumu." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-194229.
Wilhelm, Pavel. "Návrh variant racionalizace operace vkládání skel v montážní lince Škoda Auto." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231414.
Rowland, Kelly L. "Advanced Quadrature Selection for Monte Carlo Variance Reduction." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817512.
Neutral particle radiation transport simulations are critical for radiation shielding and deep penetration applications. Arriving at a solution for a given response of interest can be computationally difficult because of the magnitude of particle attenuation often seen in these shielding problems. Hybrid methods, which aim to synergize the individual favorable aspects of deterministic and stochastic solution methods for solving the steady-state neutron transport equation, are commonly used in radiation shielding applications to achieve statistically meaningful results in a reduced amount of computational time and effort. The current state of the art in hybrid calculations comprises the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods, which generate Monte Carlo variance reduction parameters based on deterministically calculated scalar flux solutions. For certain types of radiation shielding problems, however, results produced using these methods suffer from unphysical oscillations in scalar flux solutions that are a product of angular discretization. These aberrations are termed “ray effects”.
The Lagrange Discrete Ordinates (LDO) equations retain the formal structure of the traditional discrete ordinates formulation of the neutron transport equation and mitigate ray effects at high angular resolution. In this work, the LDO equations have been implemented in the Exnihilo parallel neutral particle radiation transport framework, with the deterministic scalar flux solutions passed to the Automated Variance Reduction Generator (ADVANTG) software and the resultant Monte Carlo variance reduction parameters’ efficacy assessed based on results from MCNP5. Studies were conducted in both the CADIS and FW-CADIS contexts, with the LDO equations’ variance reduction parameters seeing their best performance in the FW-CADIS method, especially for photon transport.
Kozelský, Aleš. "Realizace montážní linky ventilů AdBlue." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229424.
Aghedo, Maurice Enoghayinagbon. "Variance reduction in Monte Carlo methods of estimating distribution functions." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37385.
Péraud, Jean-Philippe M. (Jean-Philippe Michel). "Low variance methods for Monte Carlo simulation of phonon transport." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/69799.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 95-97).
Computational studies in kinetic transport are of great use in micro- and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing the theory of phonons, we draw on the scientific literature to write a Monte Carlo code solving the Boltzmann Transport Equation for phonons. As a first improvement to the particle method presented, we choose to use the Boltzmann Equation in terms of energy as a more convenient and accurate formulation for developing such a code. Then, we use the concept of control variates to introduce the notion of deviational particles. Noticing that a thermalized system at equilibrium is inherently a solution of the Boltzmann Transport Equation, we take advantage of this deterministic piece of information: we only simulate the deviation from a nearby equilibrium, which removes a great part of the statistical uncertainty. In doing so, the standard deviation of the result that we obtain is proportional to the deviation from equilibrium. In other words, we are able to simulate signals of arbitrarily low amplitude with no additional computational cost. After exploring two other variants based on the idea of control variates, we validate our code on a few theoretical results derived from the Boltzmann equation. Finally, we present a few applications of the methods.
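The deviational idea summarized above is a form of control variates: only the small departure from a known equilibrium is sampled, so the statistical noise scales with the signal itself. As a loose illustration of the underlying control-variate principle (a hypothetical toy example in Python, not the author's phonon code), consider estimating a simple expectation with and without a correlated quantity whose mean is known exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.uniform(0.0, 1.0, N)

f = np.exp(x)          # quantity of interest; exact mean is e - 1
h = 1.0 + x            # control variate; exact mean is 1.5
h_mean = 1.5

plain = f.mean()                          # crude Monte Carlo estimate

beta = np.cov(f, h)[0, 1] / h.var()       # (estimated) optimal coefficient
cv = (f - beta * (h - h_mean)).mean()     # control-variate estimate

# Empirical variances of the two estimators (smaller is better)
print(plain, cv)
print(f.var() / N, (f - beta * (h - h_mean)).var() / N)
```

Because exp(x) and 1 + x are strongly correlated on [0, 1], the control-variate estimator's variance drops by orders of magnitude; the deviational phonon method exploits the same mechanism, with the equilibrium distribution playing the role of the analytically known part.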
Whittle, Joss. "Quality assessment and variance reduction in Monte Carlo rendering algorithms." Thesis, Swansea University, 2018. https://cronfa.swan.ac.uk/Record/cronfa40271.
Full textFrendin, Carl, and Andreas Sjöroos. "Go Go! - Evaluating Different Variants of Monte Carlo Tree Search for Playing Go." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157520.
Full textSingh, Gurprit. "Sampling and Variance Analysis for Monte Carlo Integration in Spherical Domain." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10121/document.
This dissertation introduces a theoretical framework to study different sampling patterns in the spherical domain and their effects on the evaluation of global illumination integrals. Evaluating illumination (light transport) is one of the most essential aspects of image synthesis for achieving realism, and it involves solving multi-dimensional integrals. Monte Carlo based numerical integration schemes are heavily employed to solve these high-dimensional integrals. One of the most important aspects of any numerical integration method is sampling: the way samples are distributed on an integration domain can greatly affect the final result. For example, in images, the effects of various sampling patterns appear in the form of either structural artifacts or completely unstructured noise. In many cases, we may get completely false (biased) results due to the sampling pattern used in integration. The distribution of sampling patterns can be characterized using their Fourier power spectra. It is also possible to use the Fourier power spectrum as input to generate the corresponding sample distribution, which further allows spectral control over the sample distributions. Since this spectral control allows tailoring new sampling patterns directly from the input Fourier power spectrum, it can be used to reduce integration error. However, a direct relation between the error in Monte Carlo integration and the sampling power spectrum has been missing. In this work, we propose a variance formulation that establishes a direct link between the variance in Monte Carlo integration and the power spectra of both the sampling pattern and the integrand involved. To derive our closed-form variance formulation, we use the notion of homogeneous sample distributions, which allows the error in Monte Carlo integration to be expressed entirely in the form of variance. Based on our variance formulation, we develop an analysis tool that can be used to derive theoretical variance convergence rates of various state-of-the-art sampling patterns. Our analysis gives insights into design principles that can be used to tailor new sampling patterns based on the integrand.
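For orientation, the baseline against which such spectral analyses are framed is the standard Monte Carlo estimator and its O(1/N) variance (a generic textbook statement, not the spherical-domain formulation derived in the thesis):

\[
\hat{I}_N = \frac{1}{N}\sum_{k=1}^{N}\frac{f(x_k)}{p(x_k)}, \qquad
\operatorname{Var}\big(\hat{I}_N\big) = \frac{1}{N}\operatorname{Var}\!\left(\frac{f(X)}{p(X)}\right) = \mathcal{O}(N^{-1})
\]

for independent samples x_k drawn from the density p. Correlated samplers (stratified, low-discrepancy, blue-noise) can converge faster than O(1/N); quantifying that improvement through the sampling and integrand power spectra is what the variance formulation described above provides.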
Höök, Lars Josef. "Variance reduction methods for numerical solution of plasma kinetic diffusion." Licentiate thesis, KTH, Fusionsplasmafysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91332.
Full textQC 20120314
Sun, Na. "Control variate approach for multi-user estimation via Monte Carlo simulation." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12857.
Monte Carlo (MC) simulation forms a very flexible and widely used computational method employed in many areas of science and engineering. The focus of this research is on the variance reduction technique of Control Variates (CV), which is a statistical approach used to improve the efficiency of MC simulation. We consider parametric estimation problems encountered in analysing stochastic systems where the stochastic system performance or its sensitivity depends on some model or decision parameter. Furthermore, we assume that the estimation is performed by one or more users at one or several parameter values. A store-and-reuse setting is introduced in which, at a set-up stage, some information is gathered computationally and stored. The stored information is then used at the estimation phase by users to help with their estimation problems. Three problems in this setting are addressed. (i) An analysis of the user's choices at the estimation phase is provided. The information generated at the set-up phase is stored in the form of information about a set of random variables that can be used as control variates. Users need to decide whether, and if so how, to use the stored information. A so-called cost-adjusted mean squared error is used as a measure of the cost of the available estimators, and the user's decision is formulated as a constrained minimization problem. (ii) A recent approach to defining generic control variates in parametric estimation problems is generalized in two distinct directions: the first involves considering an alternative parametrization of the original problem through a change of probability measure. This parametrization is particularly relevant to sensitivity estimation problems with respect to model and decision parameters. In the second, for problems where the quantities of interest are defined on sample paths of stochastic processes that model the underlying stochastic dynamics, systematic control variate selection based on approximate dynamics is proposed. (iii) When common random inputs are used, parametric estimation variables become statistically dependent. This dependence is explicitly modelled as a random field, and conditions are derived that imply the effectiveness of estimation variables as control variates. Comparisons are provided with the metamodeling approach of Kriging and the recently proposed Stochastic Kriging, which use similar input data to predict the mean of the estimation variable.
Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for the Monte Carlo simulation of Markov chains. This Ph.D. work intends to implement this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated, among other applications, to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so that they fall within the scope of rare-event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations. For example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.
In this thesis, the creditworthiness of a company is modelled using a stochastic process. Two credit models are considered: Merton's model, which models the value of a company's assets with geometric Brownian motion, and "distance to default", which is driven by a two-dimensional stochastic process with both diffusion and jumps. The probability of default and the expected time of default are simulated using Monte Carlo, and the number of scenarios needed for convergence of the simulations is investigated. The simulation uses the "probability matrix method", in which a transition probability matrix describing the process is employed. In addition, two variance reduction methods are examined: importance sampling and antithetic variates.
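The two variance reduction techniques mentioned here are standard Monte Carlo tools. As a loose illustration, a minimal Python sketch of antithetic variates on a toy expectation (purely illustrative and unrelated to the credit models studied in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Toy target: E[exp(Z)] for Z ~ N(0, 1), whose exact value is exp(0.5).
z = rng.standard_normal(N)

plain = np.exp(z)                      # crude Monte Carlo samples

# Antithetic variates: pair each draw z with its mirror image -z.
# exp(z) and exp(-z) are negatively correlated, so the pair average has
# lower variance than two independent draws would.
anti = 0.5 * (np.exp(z) + np.exp(-z))

print(np.exp(0.5), plain.mean(), anti.mean())
# Compare variances at equal cost: each antithetic pair uses two evaluations.
print(plain.var() / (2 * N), anti.var() / N)
```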
Zaidi, Nikki. "Hidden Variance in Multiple Mini-Interview Scores." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427797882.
Full textBurešová, Jana. "Oceňování derivátů pomocí Monte Carlo simulací." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-11476.
Full textYue, Rong-xian. "Applications of quasi-Monte Carlo methods in model-robust response surface designs." HKBU Institutional Repository, 1997. http://repository.hkbu.edu.hk/etd_ra/178.
Full textNowak, Michel. "Accelerating Monte Carlo particle transport with adaptively generated importance maps." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS403/document.
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events and needs to be tackled with variance reduction methods. These methods require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France. The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated with the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reintegration of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the use of the current adjoint score. Finally, the smoothing of the importance map with machine learning algorithms concludes this work, with a special focus on kernel density estimators.
Maire, Sylvain. "Réduction de variance pour l'intégration numérique et pour le calcul critique en transport neutronique." Toulon, 2001. http://www.theses.fr/2001TOUL0013.
This work deals with Monte Carlo methods and is especially devoted to variance reduction. In the first part, we study a probabilistic algorithm, based on iterated control variates, which enables the computation of mean-square approximations. Using it with a periodized Fourier basis and with Legendre and Chebyshev polynomial bases, we obtain Monte Carlo estimators with an increased convergence rate for regular one-dimensional functions. The approach is then extended to the multidimensional case, trying to attenuate the dimensional effect through a good choice of basis functions. Various numerical examples and applications are studied. The second part deals with criticality in neutron transport theory. We develop a numerical method to compute the principal eigenvalue of the neutron transport operator by combining the Monte Carlo computation of the solution of the associated Cauchy problem with its formal eigenfunction expansion. Various variance reduction methods are tested on both homogeneous and inhomogeneous models. The stochastic representation of the principal eigenvalue is obtained for a particular homogeneous model.
Stockbridge, Rebecca. "Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301665.
Full textLandon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights, which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared, with good agreement, to simple analytical solutions and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored, and our final simulation method, (VR)2-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
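The central trick here, estimating a small signal as the difference between a non-equilibrium simulation and a correlated equilibrium one, can be illustrated outside DSMC with a toy model driven by common random numbers (a hypothetical sketch, not the (VR)2-BGK-DSMC algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
eps = 1e-3                        # "low signal": a tiny perturbation of the parameter

def response(theta, noise):
    # Toy stochastic model: a noisy observation of a smooth function of theta.
    return np.sin(theta) * (1.0 + 0.5 * noise)

# Independent runs: the noise of both simulations adds up in the difference.
d_indep = (response(1.0 + eps, rng.standard_normal(N))
           - response(1.0, rng.standard_normal(N)))

# Correlated runs: reusing the same random inputs makes most of the noise cancel,
# leaving a statistical error proportional to the (small) signal itself.
common = rng.standard_normal(N)
d_corr = response(1.0 + eps, common) - response(1.0, common)

print(np.sin(1.0 + eps) - np.sin(1.0))       # exact difference
print(d_indep.mean(), d_corr.mean())         # both are unbiased estimates
print(d_indep.var() / N, d_corr.var() / N)   # the correlated estimator's variance collapses
```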
Ramström, Alexander. "Pricing of European and Asian options with Monte Carlo simulations : Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Nationalekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-145942.
Full textSolomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.
Full textDepartment of Mechanical and Nuclear Engineering
J. Kenneth Shultis
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D three-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
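The cost being minimized here is the inverse of the usual Monte Carlo figure of merit, which in the standard MCNP convention reads

\[
\mathrm{FOM} = \frac{1}{R^{2}\,T}, \qquad \text{cost} = \mathrm{FOM}^{-1} = R^{2}\,T,
\]

where R is the estimated relative error of the tally and T the computing time. Since R^2 scales like 1/N and T like N for N histories, the FOM is roughly independent of run length for a fixed set of variance reduction parameters, which is what makes it (and its inverse, the cost) a meaningful optimization target.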
Shaw, Benjamin Stuard. "Structure and Variability of the North Atlantic Meridional Overturning Circulation from Observations and Numerical Models." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/74.
Xu, Yushun. "Asymptotique suramortie de la dynamique de Langevin et réduction de variance par repondération." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2024/document.
This dissertation is devoted to studying two different problems: the over-damped asymptotics of Langevin dynamics and a new variance reduction technique based on an optimal reweighting of samples. In the first problem, the convergence in distribution of Langevin processes in the over-damped asymptotics is proven. The proof relies on the classical perturbed test function (or corrector) method, which is used (i) to show tightness in path space, and (ii) to identify the extracted limit with a martingale problem. The result holds assuming the continuity of the gradient of the potential energy and a mild control of the initial kinetic energy. In the second problem, we devise methods of variance reduction for the Monte Carlo estimation of an expectation of the type E[φ(X, Y)], when the distribution of X is exactly known. The key general idea is to give each individual sample a weight, so that the resulting weighted empirical distribution has a marginal with respect to the variable X as close as possible to its target. We prove several theoretical results on the method, identifying settings where the variance reduction is guaranteed, and also illustrate the use of the weighting method on a Langevin stochastic differential equation. We perform numerical tests comparing the methods and demonstrating their efficiency.
Dehaye, Benjamin. "Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112332/document.
Fields such as criticality studies require computing certain quantities of interest in neutron physics. Two kinds of codes may be used: deterministic ones and stochastic ones. Stochastic codes do not require approximations and are thus more exact; however, they may need a lot of time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient acceleration strategy in the TRIPOLI-4® code. We wish to implement the zero-variance game. To do so, the method requires computing the adjoint flux. The originality of this work is to compute the adjoint flux directly from a Monte Carlo simulation, without using external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation.
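The "zero-variance game" mentioned here refers to the classical ideal of importance sampling: for a non-negative integrand, sampling in exact proportion to each point's contribution yields an estimator with no variance at all. In generic notation (standard background, not specific to TRIPOLI-4®),

\[
I = \int f(x)\,p(x)\,\mathrm{d}x, \qquad
q^{*}(x) = \frac{f(x)\,p(x)}{I}, \qquad
\frac{f(X)\,p(X)}{q^{*}(X)} = I \quad \text{for every } X \sim q^{*}.
\]

The optimal density q* cannot be used directly, since it requires the unknown answer I, but it can be approached by biasing the transport with an approximate adjoint (importance) function, here estimated via the fission matrix.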
Chen, Jinsong. "Variance analysis for kernel smoothing of a varying-coefficient model with longitudinal data /." Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/chenj/jinsongchen.pdf.
Full textGuha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.
Full textMüller, Armin [Verfasser], Hermann [Akademischer Betreuer] Singer, and Hermann [Gutachter] Singer. "Variance reduced Monte-Carlo Simulations of Stochastic Differential Equations / Armin Müller ; Gutachter: Hermann Singer ; Betreuer: Hermann Singer." Hagen : FernUniversität in Hagen, 2017. http://d-nb.info/1177893266/34.
Moreni, Nicola. "Méthodes de Monte Carlo et valorisation d'options." Paris 6, 2005. http://www.theses.fr/2005PA066626.
Arouna, Bouhari. "Algorithmes stochastiques et méthodes de Monte Carlo." Phd thesis, Ecole des Ponts ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00001269.
Lescot, Numa. "Réduction de variance pour les sensibilités : application aux produits sur taux d'intérêt." Paris 6, 2012. http://www.theses.fr/2012PA066102.
This thesis is devoted to variance reduction techniques for approximating functionals of diffusion processes, motivated by the pricing and hedging of derivative products in financial mathematics. Our main tool is Malliavin calculus, which provides simulable expressions for the sensitivities and for the optimal variance reduction strategy. In a first part, we give a unified presentation of control variate and importance sampling methods, together with an operator factorization of the optimal strategies. We introduce a parametric importance sampling algorithm and study it in detail. To solve the associated optimization problem, we validate two procedures based respectively on stochastic approximation and on minimization of the empirical criterion. Several numerical examples illustrate the scope of the method. In a second part, we combine integration by parts with a Girsanov transformation to propose several stochastic representations of the sensitivities. Beyond the strictly elliptic setting, we show, for an HJM model with stochastic volatility, an efficient construction of a covering vector in the sense of Malliavin-Thalmaier. The last chapter, more applied in nature, presents a real case of pricing and hedging exotic interest rate options.
Pierre-Louis, Péguy. "Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/228433.
Full textFelcman, Adam. "Value at Risk: Historická simulace, variančně kovarianční metoda a Monte Carlo simulace." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-124888.
Full textSampson, Andrew. "Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3063.
Full textFakhereddine, Rana. "Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numériques." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM047/document.
Monte Carlo (MC) methods are numerical methods that use random numbers to solve, on computers, problems from applied science and engineering. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0, 1)^s is divided into N subcubes having the same measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of the evaluation of the measure of a subset of I^s is treated in particular detail. The variance of the MCS method may be bounded by O(1/N^(1+1/s)). The results of numerical experiments in dimensions 2, 3, and 4 show that the upper bounds are tight. We next propose a hybrid method between MCS and LHS that has properties of both approaches, with one random point in each subcube and such that the projections of the points on each coordinate axis are also evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0, 1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is also bounded by O(1/N^(1+1/s)); the order of the bound is validated through the results of numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques previously analyzed. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. Particles are sampled from the initial distribution and are subject to a Gaussian move in each time step. The particles are renumbered according to their positions in every step, and the random numbers which give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation; this equation models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides whether coalescence occurs. If the particles are ordered by increasing size in every time step and if the random numbers are replaced by stratified points, a variance reduction is observed compared with the results of the usual MC algorithm.
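A minimal Python sketch of the simple stratification (MCS) described above, one uniform point per subcube of a regular partition of the unit square, compared with crude Monte Carlo on a toy integrand (illustrative only, not the code used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 64                     # strata per axis
N = m * m                  # total number of sample points

def f(x, y):
    # Indicator of the quarter disc; its measure in [0, 1)^2 is pi/4.
    return (x * x + y * y < 1.0).astype(float)

# Crude Monte Carlo: N i.i.d. uniform points in the unit square.
xc, yc = rng.random(N), rng.random(N)
crude = f(xc, yc).mean()

# Simple stratification (MCS): one uniform point in each of the N = m^2 subsquares.
i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
xs = (i.ravel() + rng.random(N)) / m
ys = (j.ravel() + rng.random(N)) / m
strat = f(xs, ys).mean()

print(np.pi / 4, crude, strat)
```

For such indicator functions, only the subcubes cut by the boundary contribute to the stratified estimator's variance, which is the mechanism behind the O(1/N^(1+1/s)) bound quoted above.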
Rogers, Catherine Jane. "Power comparisons of four post-MANOVA tests under variance-covariance heterogeneity and non-normality in the two group case." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40171.
Full textHaseeb, Hayat, and Rahul Duggal. "Valuation of exotic options under the Constant Elasticity of Variance model by exact Monte Carlo simulation : A MATLAB GUI application." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26141.
Full textDepinay, Jean-Marc. "Automatisation de méthodes de réduction de variance pour la résolution de l'équation de transport." Phd thesis, Ecole des Ponts ParisTech, 2000. http://tel.archives-ouvertes.fr/tel-00005592.
Many studies have been carried out to accelerate the convergence of this type of algorithm. This work follows that line of research and aims to identify and describe convergence acceleration techniques that are easy to implement and to automate. In this thesis, we focus on importance sampling methods. These classical techniques for transport equations use parameters that are usually set empirically by specialists. The main originality of our work is to propose methods that can be automated easily. The originality of the algorithm lies, on the one hand, in the use of importance sampling on the angular variable (angular biasing), applied in addition to the sampling of the position variable, and on the other hand in the description of a technique for explicitly computing all the parameters of the variance reduction. This last point allows near-complete automation of the variance reduction procedure.
Liu, Hangcheng. "Comparing Welch's ANOVA, a Kruskal-Wallis test and traditional ANOVA in case of Heterogeneity of Variance." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3985.
Full textChampciaux, Valentin. "Calcul accéléré de la dose périphérique en radiothérapie." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASP001.
Radiotherapy is currently one of the main treatment modalities for cancer. However, it is not without risk: the ionising radiation used to destroy tumors may promote the appearance of long-term side effects when particles reach areas far from the treatment beam. To date, there is no efficient method for predicting this peripheral dose of radiation, which is therefore not included in treatment planning. The Monte Carlo simulation method, known for its precision in dosimetry studies, is heavily limited by the computing time required. This work focuses on the study and then the implementation of a variance reduction method designed to speed up peripheral dose estimates. This method, known as "pseudo-deterministic transport", allows the creation of new particles that are artificially brought to the area of interest where the dose is estimated, thus reducing the variance of the estimate. The method is implemented in the simulation code Phoebe, along with some other simple techniques that mitigate its drawbacks. The resulting tool is validated by comparison with results obtained without variance reduction, and then applied to a concrete case of radiotherapy modeling.
Åberg, K. Magnus. "Variance Reduction in Analytical Chemistry : New Numerical Methods in Chemometrics and Molecular Simulation." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.
Full textThis thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods.
Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules.
Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments.
Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV.
Paper V addresses a generic problem in classification; namely, how to measure the goodness of different data representations, so that the best classifier may be constructed.
Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects on variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data are affected by the method of data analysis (papers III-V).
Elazhar, Halima. "Dosimétrie neutron en radiothérapie : étude expérimentale et développement d'un outil personnalisé de calcul de dose Monte Carlo." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAE013/document.
Treatment optimization in radiotherapy aims at increasing the accuracy of cancer cell irradiation while sparing the surrounding healthy organs. However, the peripheral dose deposited in healthy tissues far away from the tumour is currently not calculated by treatment planning systems, even though it can be responsible for radiation-induced secondary cancers. Among the different components, neutrons produced through photo-nuclear processes suffer from an important lack of dosimetric data. An experimental and Monte Carlo simulation study of secondary neutron production in radiotherapy led us to develop an algorithm that uses Monte Carlo calculation precision to estimate the 3D neutron dose delivered to the patient. Such a tool will allow the generation of dosimetric databases ready to be used for the improvement of "dose-risk" mathematical models specific to the low-dose irradiation of peripheral organs occurring in radiotherapy.
Hoover, Jared Stephen. "Monte Carlo Modeling of a Varian 2100C 18 MV Megavoltage Photon Beam and Subsequent Dose Delivery using MCNP5." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16245.
Full textFarah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112183/document.
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to realize a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of realizing several spectrometry measurements with different detector positioning. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements realized at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination.
Maire, Sylvain. "Quelques Techniques de Couplage entre Méthodes Numériques Déterministes et Méthodes de Monte-Carlo." Habilitation à diriger des recherches, Université du Sud Toulon Var, 2007. http://tel.archives-ouvertes.fr/tel-00579977.
Karawatzki, Roman, Josef Leydold, and Klaus Pötzelberger. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1400/1/document.pdf.
Full textSeries: Research Report Series / Department of Statistics and Mathematics
Ho, Kwok Wah. "RJMCMC algorithm for multivariate Gaussian mixtures with applications in linear mixed-effects models /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202005%20HO.
Full textAn, Qian. "A Monte Carlo study of several alpha-adjustment procedures using a testing multiple hypotheses in factorial anova." Ohio : Ohio University, 2010. http://www.ohiolink.edu/etd/view.cgi?ohiou1269439475.
Full text