
Dissertations / Theses on the topic 'Particle Monte Carlo'



Consult the top 50 dissertations / theses for your research on the topic 'Particle Monte Carlo.'




1

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, nonlinear non-Gaussian state-space models, and protein folding.
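The particle MCMC idea described here can be illustrated with a small sketch: an SMC (bootstrap particle filter) estimate of the likelihood is plugged into a Metropolis-Hastings acceptance ratio, giving a particle marginal Metropolis-Hastings sampler. The toy AR(1) state-space model, the parameter names and the tuning constants below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_data(T=50, phi=0.9, q=1.0, r=0.5):
    """Simulate a toy state-space model x_t = phi*x_{t-1} + N(0,q), y_t = x_t + N(0,r)."""
    x, ys = 0.0, []
    for _ in range(T):
        x = phi * x + rng.normal(0, np.sqrt(q))
        ys.append(x + rng.normal(0, np.sqrt(r)))
    return np.array(ys)

def log_likelihood_smc(y, phi, q, r, N=100):
    """Bootstrap particle filter estimate of log p(y | phi, q, r)."""
    x = rng.normal(0, 1, N)                                    # initial particles
    logZ = 0.0
    for t in range(len(y)):
        x = phi * x + rng.normal(0, np.sqrt(q), N)             # propagate
        logw = -0.5 * np.log(2 * np.pi * r) - 0.5 * (y[t] - x) ** 2 / r
        m = logw.max()
        w = np.exp(logw - m)
        logZ += m + np.log(w.mean())                           # running SMC log-likelihood estimate
        x = x[rng.choice(N, N, p=w / w.sum())]                 # multinomial resampling
    return logZ

def pmmh(y, n_iter=1000, step=0.1):
    """Particle marginal Metropolis-Hastings for the AR coefficient phi."""
    phi = 0.5
    ll = log_likelihood_smc(y, phi, 1.0, 0.5)
    chain = []
    for _ in range(n_iter):
        phi_prop = phi + step * rng.normal()
        if abs(phi_prop) < 1.0:                                # uniform prior on (-1, 1)
            ll_prop = log_likelihood_smc(y, phi_prop, 1.0, 0.5)
            if np.log(rng.uniform()) < ll_prop - ll:
                phi, ll = phi_prop, ll_prop
        chain.append(phi)
    return np.array(chain)

y = simulate_data()
print("posterior mean of phi ~", pmmh(y).mean())
```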
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Lulu Ph D. Massachusetts Institute of Technology. "Acceleration methods for Monte Carlo particle transport simulations." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112521.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 166-175).
Performing nuclear reactor core physics analysis is a crucial step in the process of both designing and understanding nuclear power reactors. Advancements in the nuclear industry demand more accurate and detailed results from reactor analysis. Monte Carlo (MC) eigenvalue neutron transport methods are uniquely qualified to provide these results, due to their accurate treatment of space, angle, and energy dependencies of neutron distributions. Monte Carlo eigenvalue simulations are, however, challenging, because they must resolve the fission source distribution and accumulate sufficient tally statistics, resulting in prohibitive run times. This thesis proposes the Low Order Operator (LOO) acceleration method to reduce the run time challenge, and provides analyses to support its use for full-scale reactor simulations. LOO is implemented in the continuous energy Monte Carlo code, OpenMC, and tested in 2D PWR benchmarks. The Low Order Operator (LOO) acceleration method is a deterministic transport method based on the Method of Characteristics. Similar to Coarse Mesh Finite Difference (CMFD), the other acceleration method evaluated in this thesis, LOO parameters are constructed from Monte Carlo tallies. The solutions to the LOO equations are then used to update Monte Carlo fission sources. This thesis deploys independent simulations to rigorously assess LOO, CMFD, and unaccelerated Monte Carlo, simulating up to a quarter of a trillion neutron histories for each simulation. Analysis and performance models are developed to address two aspects of the Monte Carlo run time challenge. First, this thesis demonstrates that acceleration methods can reduce the vast number of neutron histories required to converge the fission source distribution before tallies can be accumulated. Second, the slow convergence of tally statistics is improved with the acceleration methods for the earlier active cycles. A theoretical model is developed to explain the observed behaviors and predict convergence rates. Finally, numerical results and theoretical models shed light on the selection of optimal simulation parameters such that a desired statistical uncertainty can be achieved with minimum neutron histories. This thesis demonstrates that the conventional wisdom (e.g., maximizing the number of cycles rather than the number of neutrons per cycle) in performing unaccelerated MC simulations can be improved simply by using more optimal parameters. LOO acceleration provides reduction of a factor of at least 2.2 in neutron histories, compared to the unaccelerated Monte Carlo scheme, and the CPU time and memory overhead associated with LOO are small.
by Lulu Li.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Torfeh, Eva. "Monte Carlo microdosimetry of charged-particle microbeam irradiations." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0159/document.

Full text
Abstract:
The interaction of charged particles with matter leads to very localized energy deposits in sub-micrometric tracks. This unique property makes this type of ionizing radiation particularly interesting for deciphering radiation-induced molecular mechanisms at the cell scale. Charged particle microbeams (CPMs) provide the ability to target a given cell compartment at the micrometer scale with a controlled dose down to a single particle. My work focused on irradiations carried out with the CPM at the AIFIRA facility (Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine) at CENBG. This microbeam delivers protons and alpha particles and is dedicated to targeted irradiation in vitro (human cells) and in vivo (C. elegans). In addition to their interest for experimental studies, the energy deposits and the interactions of charged particles with matter can be modeled precisely along their trajectory using track structure codes based on Monte Carlo methods. These simulation tools allow a precise characterization of the micro-dosimetry of the irradiations, from the detailed description of the physical interactions at the nanoscale to the prediction of the number of DNA damage sites, their complexity and their distribution in space. During my thesis, I developed micro-dosimetric models based on the Geant4-DNA modeling toolkit in two cases. The first concerns the simulation of the energy distribution deposited in a cell nucleus and the calculation of the number of different types of DNA damage (single and double strand breaks) at the nanometric and micrometric scales, for different types and numbers of delivered particles. These simulations are compared with experimental measurements of the kinetics of GFP-labeled (Green Fluorescent Protein) DNA repair proteins in human cells. The second is the dosimetry of the irradiation of a multicellular organism, to study genetic instability in a living organism during development (C. elegans). I simulated the distribution of the energy deposited in different compartments of a realistic 3D model of a C. elegans embryo following proton irradiations. Finally, and in parallel with these two studies, I developed a protocol to characterize the AIFIRA microbeam using fluorescent nuclear track detectors (FNTD) for proton and alpha particle irradiations. This type of detector makes it possible to visualize the incident particle tracks in 3D with a resolution of about 200 nm and to examine the quality of the cellular irradiations carried out by the CPM.
APA, Harvard, Vancouver, ISO, and other styles
4

Miryusupov, Shohruh. "Particle methods in finance." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E069.

Full text
Abstract:
The thesis introduces simulation techniques based on particle methods and consists of two parts, namely rare event simulation and a homotopy transport for stochastic volatility model estimation. Particle methods, which generalize hidden Markov models, are widely used in different fields such as signal processing, biology, rare event estimation, finance, etc. There are a number of approaches based on Monte Carlo methods that allow one to approximate a target density, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). We apply SMC algorithms to estimate default probabilities in a stable-process-based intensity process to compute a credit value adjustment (CVA) with wrong way risk (WWR). We propose a novel approach to estimate rare events, based on the generation of Markov chains by simulating the Hamiltonian system. We demonstrate the properties that allow us to obtain an ergodic Markov chain and show the performance of our approach on an example that we encounter in option pricing. In the second part, we aim at numerically estimating a stochastic volatility model, and consider it in the context of a transportation problem, where we would like to find "an optimal transport map" that pushes forward the measure. In a filtering context, we understand it as the transportation of particles from a prior to a posterior distribution in pseudo-time. We also propose to reweight the transported particles, so that we can move toward the area where particles with high weights are concentrated. We show the application of our method on the example of option pricing with the Stein-Stein stochastic volatility model and illustrate the bias and the variance.
APA, Harvard, Vancouver, ISO, and other styles
5

Persing, Adam. "Some contributions to particle Markov chain Monte Carlo algorithms." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23277.

Full text
Abstract:
Hidden Markov models (HMMs) (Cappe et al., 2005) and discrete time stopped Markov processes (Del Moral, 2004, Section 2.2.3) are used to model phenomena in a wide range of fields. However, as practitioners develop more intricate models, analytical Bayesian inference becomes very difficult. In light of this issue, this work focuses on sampling from the posteriors of HMMs and stopped Markov processes using sequential Monte Carlo (SMC) (Doucet et al. 2008, Doucet et al. 2001, Gordon et al. 1993) and, more importantly, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al., 2010). The thesis consists of three major contributions, which enhance the performance of PMCMC. The first work focuses on HMMs, and it begins by introducing a new SMC smoothing (Briers et al. 2010, Fearnhead et al. 2010) estimate of the HMM's normalising constant; we prove the estimate's unbiasedness and a central limit theorem. We use this estimate to develop new PMCMC algorithms that, under certain algorithmic settings, require less computational time than the algorithms of Andrieu et al. (2010). Our new estimate also leads to the discovery of an optimal setting for the smoothers of Briers et al. (2010) and Fearnhead et al. (2010). As this setting is not available for the general class of HMMs, we develop three algorithms for approximating it. The second major work builds from Jasra et al. (2013) and Whiteley et al. (2012) to develop new SMC and PMCMC algorithms that draw from HMMs whose observations have intractable density functions. While these types of algorithms have appeared before (see Jasra et al. 2013, Jasra et al. 2012, and Martin et al. 2012), this work uses twisted proposals as in Whiteley et al. (2012) to reduce the variance of SMC estimates of the normalising constant to improve the convergence of PMCMC in some scenarios. Finally, the third project is concerned with inferring the unknown parameters of stopped Markov processes that are only observed upon reaching their terminal sets. Bayesian inference has not been attempted on this class of problems before. The parameters are inferred through two new adaptive and non-adaptive PMCMC algorithms.
APA, Harvard, Vancouver, ISO, and other styles
6

Lyberatos, Andreas. "Monte Carlo simulations of interaction effects in fine particle ferromagnets." Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nowak, Michel. "Accelerating Monte Carlo particle transport with adaptively generated importance maps." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS403/document.

Full text
Abstract:
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events and needs to be tackled with variance reduction methods. These methods require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation itself. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France. The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated with the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reintegration of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the use of the current adjoint score. Finally, the smoothing of the importance map with machine learning algorithms concludes this work, with a special focus on kernel density estimators.
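As a rough illustration of that final step, the sketch below smooths a noisy, tally-based importance (adjoint) score over a 1-D mesh with a Gaussian kernel; a Nadaraya-Watson style kernel smoother stands in here for the kernel density estimators studied in the thesis. The slab geometry, the synthetic scores and the bandwidth are assumptions for the example, not TRIPOLI-4® data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D slab: cell centres and a noisy Monte Carlo estimate of the
# adjoint score (importance) in each cell, roughly exponential in depth.
cells = np.linspace(0.5, 49.5, 50)
true_importance = np.exp(0.2 * cells)
noisy_score = true_importance * rng.lognormal(0.0, 0.5, cells.size)

def kernel_smooth(x_eval, x_cells, scores, bandwidth=2.0):
    """Gaussian-kernel (Nadaraya-Watson) smoothing of per-cell scores."""
    d = (x_eval[:, None] - x_cells[None, :]) / bandwidth
    k = np.exp(-0.5 * d ** 2)                  # kernel weight of each cell for each point
    return (k * scores).sum(axis=1) / k.sum(axis=1)

smoothed = kernel_smooth(cells, cells, noisy_score)
rel_err_raw = np.abs(noisy_score - true_importance) / true_importance
rel_err_smooth = np.abs(smoothed - true_importance) / true_importance
print("mean relative error: raw %.2f, smoothed %.2f"
      % (rel_err_raw.mean(), rel_err_smooth.mean()))
```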
APA, Harvard, Vancouver, ISO, and other styles
8

Norris, Michael K. "INCORPORATING HISTOGRAMS OF ORIENTED GRADIENTS INTO MONTE CARLO LOCALIZATION." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1629.

Full text
Abstract:
This work presents improvements to Monte Carlo Localization (MCL) for a mobile robot using computer vision. Solutions to the localization problem aim to provide fine resolution on location approximation, and also to be resistant to changes in the environment. One such environment change is the kidnapped/teleported robot problem, where a robot is suddenly transported to a new location and must re-localize. The standard method of "Augmented MCL" uses particle filtering combined with the addition of random particles under certain conditions to solve the kidnapped robot problem. This solution is robust, but not always fast. This work combines Histogram of Oriented Gradients (HOG) computer vision with particle filtering to speed up the localization process. The major slowdown in Augmented MCL is the conditional addition of random particles, which depends on the ratio of a short-term and a long-term average of particle weights. This ratio does not change quickly when a robot is kidnapped, leading the robot to believe it is in the wrong location for a period of time. This work replaces this average-based conditional with a comparison of the HOG image directly in front of the robot with a cached version. This resulted in a speedup ranging from 25.3% to 80.7% (depending on parameters used) in localization time over the baseline Augmented MCL.
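A minimal sketch of the Augmented MCL injection rule referred to above: random particles are added when the short-term average of the measurement weights drops below the long-term average. The decay rates, world size and the stubbed-out likelihoods are illustrative assumptions, not the parameters used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def augmented_mcl_step(particles, weights, w_slow, w_fast,
                       alpha_slow=0.05, alpha_fast=0.5, world_size=10.0):
    """One resampling step of Augmented MCL.

    `particles` is an (N, d) array of poses and `weights` the (unnormalised)
    measurement likelihoods already computed for the current scan.
    """
    n = len(particles)
    w_avg = weights.mean()
    w_slow += alpha_slow * (w_avg - w_slow)      # long-term average
    w_fast += alpha_fast * (w_avg - w_fast)      # short-term average
    p_random = max(0.0, 1.0 - w_fast / w_slow)   # fraction of random particles

    probs = weights / weights.sum()
    new = np.empty_like(particles)
    for i in range(n):
        if rng.uniform() < p_random:
            # kidnapped-robot recovery: draw a pose uniformly over the map
            new[i] = rng.uniform(0.0, world_size, particles.shape[1])
        else:
            new[i] = particles[rng.choice(n, p=probs)]
    return new, w_slow, w_fast

# toy usage: 500 particles in a 2-D world, weights from a fake likelihood
particles = rng.uniform(0, 10, (500, 2))
weights = rng.uniform(0.0, 1.0, 500)
particles, w_slow, w_fast = augmented_mcl_step(particles, weights, w_slow=0.5, w_fast=0.5)
print("particles after resampling:", len(particles))
```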
APA, Harvard, Vancouver, ISO, and other styles
9

Horelik, Nicholas E. (Nicholas Edward). "Domain decomposition for Monte Carlo particle transport simulations of nuclear reactors." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97859.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 151-158).
Monte Carlo (MC) neutral particle transport methods have long been considered the gold-standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards higher-fidelity nuclear reactor analyses the method has become competitive with traditional deterministic transport algorithms for the same level of accuracy, especially considering the inherent parallelism of the method and the ever-increasing concurrency of modern high performance computers. Yet before such analysis can be practical, several algorithmic challenges must be addressed, particularly in regards to the memory requirements of the method. In this thesis, a robust domain decomposition algorithm is proposed to alleviate this, along with models and analysis to support its use for full-scale reactor analysis. Algorithms were implemented in the full-physics Monte Carlo code OpenMC, and tested for a highly-detailed PWR benchmark: BEAVRS. The proposed domain decomposition implementation incorporates efficient algorithms for scalable inter-domain particle communication in a manner that is reproducible with any pseudo-random number seed. Algorithms are also proposed to scalably manage material and tally data with on-the-fly allocation during simulation, along with numerous optimizations required for scalability as the domain mesh is refined and divided among thousands of compute processes. The algorithms were tested on two supercomputers, namely the Mira Blue Gene/Q and the Titan XK7, demonstrating good performance with realistic tallies and materials requiring over a terabyte of aggregate memory. Performance models were also developed to more accurately predict the network and load imbalance penalties that arise from communicating particles between distributed compute nodes tracking different spatial domains. These were evaluated using machine properties and tallied particle movement characteristics, and empirically validated with observed timing results from the new implementation. Network penalties were shown to be almost negligible with per-process particle counts as low as 1000, and load imbalance penalties higher than a factor of four were not observed or predicted for finer domain meshes relevant to reactor analysis. Load balancing strategies were also explored, and intra-domain replication was shown to be very effective at improving parallel efficiencies without adding significant complexity to the algorithm or burden to the user. Performance of the strategy was quantified with a performance model, and shown to agree well with observed timings. Imbalances were shown to be almost completely removed for the finest domain meshes. Finally, full-core studies were carried out to demonstrate the efficacy of domain-decomposed Monte Carlo in tackling the full scope of the problem. A detailed mesh required for a robust depletion treatment was used, and good performance was demonstrated for depletion tallies with 206 nuclides. The largest runs scored six reaction rates for each nuclide in 51M regions for a total aggregate memory requirement of 1.4TB, and particle tracking rates were consistent with those observed for smaller non-domain-decomposed runs with equivalent tally complexity. These types of runs were previously not achievable with traditional Monte Carlo methods, and can be accomplished with domain decomposition with between 1.4x and 1.75x overhead with simple load balancing.
by Nicholas Edward Horelik.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Romano, Paul K. (Paul Kollath). "Parallel algorithms for Monte Carlo particle transport simulation on exascale computing architectures." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/80415.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 191-199).
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes - in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
by Paul Kollath Romano.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.

Full text
Abstract:
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for Monte Carlo simulation of Markov chains. This Ph.D. work implements this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated among others to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so that they fall within the scope of rare events analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations; for example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
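A minimal sketch of the Adaptive Multilevel Splitting idea, applied to a toy rare event (a Gaussian random walk exceeding a high threshold) rather than to particle transport; the walk, the threshold and the kill-one-replica-per-iteration variant are illustrative assumptions, not the TRIPOLI-4 implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

T, N, LEVEL = 30, 100, 20.0     # walk length, number of replicas, rare-event threshold

def walk_from(start_value, steps):
    """Gaussian random walk of the given length, started at start_value."""
    return start_value + np.cumsum(rng.normal(0.0, 1.0, steps))

def ams_estimate():
    # initial ensemble of full trajectories, each started at 0
    paths = np.array([walk_from(0.0, T) for _ in range(N)])
    scores = paths.max(axis=1)                      # importance of each replica
    prob = 1.0
    while scores.min() < LEVEL:
        worst = scores.argmin()
        level = scores[worst]                       # current splitting level
        prob *= (N - 1) / N                         # one replica killed out of N
        # rebranch: clone a surviving replica up to its first crossing of `level`,
        # then resimulate the remainder of the walk
        donor = rng.choice([i for i in range(N) if i != worst])
        cross = np.argmax(paths[donor] >= level)    # first index above the level
        new_path = paths[donor].copy()
        if cross + 1 < T:
            new_path[cross + 1:] = walk_from(new_path[cross], T - cross - 1)
        paths[worst] = new_path
        scores[worst] = new_path.max()
        if prob < 1e-12:                            # safety stop for the sketch
            break
    return prob

est = np.mean([ams_estimate() for _ in range(10)])
print("AMS estimate of P(max of the walk >= %.0f): %.3e" % (LEVEL, est))
```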
APA, Harvard, Vancouver, ISO, and other styles
12

Arnold, Andrea. "Sequential Monte Carlo Parameter Estimation for Differential Equations." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396617699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Full text
Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I want to watch or listen to next? These three problems are examples of questions where statistical models can be useful in providing guidance and support for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to assess, for instance, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these, and many others, make statistical models important for many parts of society. One way of building statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one already has good insight into the model, or has access to only a small amount of historical data with which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and then compare these against the historical observations at hand. One can then average over the variants that gave the best results in order to arrive at a final model. It can therefore sometimes take days or weeks to build a model. The problem becomes especially severe when more advanced models are used that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to facilitate or improve these simulations. We propose, for example, taking more insights about the system into account and thereby reducing the number of model variants that need to be examined. We can thus already rule out certain models because we have a good idea of roughly what a good model should look like. We can also change the simulation so that it moves more easily between different types of models. In this way the space of all possible models is explored more efficiently. We propose a number of different combinations and modifications of existing methods to speed up the fitting of the model to the observations. We show that the computation time in some cases can be reduced from several days to about an hour. Hopefully this will in the future make it possible in practice to use more advanced models, which in turn will result in better forecasts and decisions.
APA, Harvard, Vancouver, ISO, and other styles
14

Kundu, Ashoke. "Monte Carlo simulation of gas-filled radiation detectors." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/987/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Taghavi, Ehsan, Fredrik Lindsten, Lennart Svensson, and Thomas B. Schön. "Adaptive stopping for fast particle smoothing." Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93461.

Full text
Abstract:
Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS.
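A minimal sketch of the hybrid backward-simulation draw: rejection sampling (as in RS-FFBS) is attempted a limited number of times and, if no proposal is accepted, the sampler falls back to the exact O(N) categorical draw of plain FFBS. The AR(1) transition density, the fake forward-filter output and the fixed cap on attempts (standing in for the adaptive stopping rule of the paper) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

PHI, Q = 0.9, 1.0                      # toy AR(1) transition x_t = PHI*x_{t-1} + N(0, Q)

def trans_logpdf(x_t, x_prev):
    return -0.5 * np.log(2 * np.pi * Q) - 0.5 * (x_t - PHI * x_prev) ** 2 / Q

def backward_draw(x_t, particles_prev, weights_prev, max_tries=20):
    """Sample one smoothing ancestor for x_t from the forward particles at t-1.

    First try rejection sampling (cheap per attempt); if `max_tries` attempts
    all fail, fall back to the exact categorical draw used by plain FFBS.
    """
    n = len(particles_prev)
    probs = weights_prev / weights_prev.sum()
    log_bound = trans_logpdf(x_t, x_t / PHI)     # maximum of the Gaussian transition density
    for _ in range(max_tries):
        j = rng.choice(n, p=probs)               # propose an index from the filter weights
        if np.log(rng.uniform()) < trans_logpdf(x_t, particles_prev[j]) - log_bound:
            return j                             # accepted by rejection sampling
    # fallback: exact FFBS draw, evaluating all N transition densities
    logw = np.log(probs) + trans_logpdf(x_t, particles_prev)
    w = np.exp(logw - logw.max())
    return rng.choice(n, p=w / w.sum())

# toy usage with fake forward-filter output at time t-1
particles_prev = rng.normal(0.0, 1.0, 500)
weights_prev = rng.uniform(0.0, 1.0, 500)
idx = backward_draw(x_t=0.3, particles_prev=particles_prev, weights_prev=weights_prev)
print("sampled ancestor index:", idx)
```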
CNDM
CADICS
APA, Harvard, Vancouver, ISO, and other styles
16

Peucelle, Cécile. "Spatial fractionation of the dose in charged particle therapy." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS363/document.

Full text
Abstract:
Despite recent breakthroughs, radiotherapy (RT) treatments remain unsatisfactory: the tolerance of normal tissues to radiation still limits the possibility of delivering high (potentially curative) doses to the tumour. To overcome these difficulties, new RT approaches using distinct dose delivery methods are being explored. Among them, the synchrotron minibeam radiation therapy (MBRT) technique has been shown to lead to a remarkable normal tissue resistance to very high doses, and a significant tumour growth delay. MBRT combines sub-millimetric beams with a spatial fractionation of the dose. Coupling the more selective energy deposition of charged particles (and their biological selectivity) with the well-established normal tissue sparing of MBRT could lead to a further gain in normal tissue sparing. This innovative strategy was explored in this Ph.D. thesis. In particular, two new avenues were studied: proton MBRT (pMBRT) and very heavy ion MBRT. First, the experimental proof of concept of pMBRT was performed at a clinical facility (Institut Curie, Orsay, France). In addition, the pMBRT setup and minibeam generation were optimised by means of Monte Carlo (MC) simulations. In the second part of this work, a potential renewed use of very heavy ions (neon and heavier) for therapy was evaluated in a MC study. Combining such ions with a spatial fractionation of the dose could allow profiting from their high efficiency in the treatment of hypoxic radioresistant tumours, one of the main challenges in RT, while minimizing their side effects. The promising results obtained in this thesis support further exploration of these two novel avenues. The dosimetry knowledge acquired will serve to guide the forthcoming biological experiments.
APA, Harvard, Vancouver, ISO, and other styles
17

Vale, Rodrigo Telles da Silva. "Localização de Monte Carlo aplicada a robôs submarinos." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-26082015-162614/.

Full text
Abstract:
The task of navigating a remotely operated underwater vehicle (ROV) during the inspection of man-made structures, such as hydroelectric power plant conduits, is performed mostly by visual references and occasionally a magnetic compass. Yet some environments present a combination of low visibility and ferromagnetic anomalies that negates this approach. This work, motivated by the development of a ROV designed to operate in such environments, proposes a navigation method for this kind of vehicle. The proposed system uses prior knowledge of the environment's dimensions and corrects the vehicle state by correlating this map with the data from a 2D imaging sonar. Since the system model is nonlinear, the method uses a particle filter to represent the vehicle state, a nonparametric implementation of the Bayes filter that performs state estimation based on sequential Monte Carlo methods. The drawback of this kind of sensor fusion is its high computational cost, which generally prevents it from being used in real-time applications. To make it possible to use this filter in real time, a parallel implementation using an NVIDIA graphics processing unit (GPU) and the CUDA architecture is proposed. This work also includes a study of two sensor configurations for the proposed navigation system.
APA, Harvard, Vancouver, ISO, and other styles
18

Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208999423010-50333.

Full text
Abstract:
This thesis is concerned with numerical methods for the theoretical description of high energy particle scattering experiments. It focuses on fixed-order perturbative calculations, i.e. on matrix elements and scattering cross sections at leading and next-to-leading order. For the leading order, a number of algorithms for matrix element generation and numerical integration over the phase space are studied and implemented in a computer code, which allows the current limits on the complexity of the final state and on the precision to be pushed. For next-to-leading order calculations, necessary steps towards a fully automated treatment are performed. A subtraction method that allows a process-independent regularization of the divergent virtual and real corrections is implemented, and a new approach for a semi-numerical evaluation of one-loop amplitudes is investigated.
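A minimal sketch of the kind of phase-space importance sampling such calculations rely on: a toy integrand with a Breit-Wigner resonance is integrated once with flat sampling and once with a resonance-adapted mapping. The masses, widths and integration range are illustrative assumptions and unrelated to the code developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(10)

M, GAMMA = 91.2, 2.5                  # illustrative resonance mass and width (GeV)
S_MIN, S_MAX = 60.0 ** 2, 120.0 ** 2  # integration range in s

def integrand(s):
    """Toy 'matrix element': a Breit-Wigner peak times a slowly varying factor."""
    return (1.0 + s / S_MAX) / ((s - M ** 2) ** 2 + (M * GAMMA) ** 2)

def flat_weights(n):
    """Uniform phase-space sampling: the weights fluctuate wildly near the peak."""
    s = rng.uniform(S_MIN, S_MAX, n)
    return integrand(s) * (S_MAX - S_MIN)

def breit_wigner_weights(n):
    """Importance sampling via the Breit-Wigner mapping s = M^2 + M*Gamma*tan(y)."""
    y_min = np.arctan((S_MIN - M ** 2) / (M * GAMMA))
    y_max = np.arctan((S_MAX - M ** 2) / (M * GAMMA))
    y = rng.uniform(y_min, y_max, n)
    s = M ** 2 + M * GAMMA * np.tan(y)
    pdf = (M * GAMMA) / ((y_max - y_min) * ((s - M ** 2) ** 2 + (M * GAMMA) ** 2))
    return integrand(s) / pdf

for name, w in [("flat", flat_weights(100_000)), ("Breit-Wigner", breit_wigner_weights(100_000))]:
    err = w.std() / np.sqrt(len(w))
    print("%-12s integral = %.4e +- %.1e" % (name, w.mean(), err))
```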
APA, Harvard, Vancouver, ISO, and other styles
19

Stedman, Mark Laurence. "Ground state and finite-temperature quantum Monte Carlo simulations of many particle systems." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23900.

Full text
Abstract:
This thesis is concerned with numerical methods for the theoretical description of high energy particle scattering experiments. It focuses on fixed-order perturbative calculations, i.e. on matrix elements and scattering cross sections at leading and next-to-leading order. For the leading order, a number of algorithms for matrix element generation and numerical integration over the phase space are studied and implemented in a computer code, which allows the current limits on the complexity of the final state and on the precision to be pushed. For next-to-leading order calculations, necessary steps towards a fully automated treatment are performed. A subtraction method that allows a process-independent regularization of the divergent virtual and real corrections is implemented, and a new approach for a semi-numerical evaluation of one-loop amplitudes is investigated.
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Eugene. "The measurement and modeling of large particle transport in the atmosphere /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/10188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Gebart, Joakim. "GPU Implementation of the Particle Filter." Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94190.

Full text
Abstract:
This thesis work analyses the obstacles faced when adapting the particle filtering algorithm to run on massively parallel compute architectures. Graphics processing units are one example of massively parallel compute architectures which allow for the developer to distribute computational load over hundreds or thousands of processor cores. This thesis studies an implementation written for NVIDIA GeForce GPUs, yielding varying speed ups, up to 3000% in some cases, when compared to the equivalent algorithm performed on CPU. The particle filter, also known in the literature as sequential Monte-Carlo methods, is an algorithm used for signal processing when the system generating the signals has a highly nonlinear behaviour or non-Gaussian noise distributions where a Kalman filter and its extended variants are not effective. The particle filter was chosen as a good candidate for parallelisation because of its inherently parallel nature. There are, however, several steps of the classic formulation where computations are dependent on other computations in the same step which requires them to be run in sequence instead of in parallel. To avoid these difficulties alternative ways of computing the results must be used, such as parallel scan operations and scatter/gather methods. Another area where parallel programming still is not widespread is the area of pseudo-random number generation. Pseudo-random numbers are required by the algorithm to simulate the process noise as well as for avoiding the particle depletion problem using a resampling step. In this thesis a recently published counter-based pseudo-random number generator is used.
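A minimal sketch of the scan-friendly resampling formulation alluded to above: systematic resampling written as a cumulative sum (an inclusive scan) followed by a searchsorted/gather lookup, both of which map directly onto parallel GPU primitives. The numpy code below is a CPU stand-in for those primitives, not the CUDA implementation of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

def systematic_resample(weights):
    """Systematic resampling expressed with scan + gather primitives.

    cumsum is an inclusive scan and searchsorted is a gather-style lookup,
    so both steps parallelise well on a GPU; here numpy stands in for them.
    """
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n      # single uniform offset shared by all strata
    cdf = np.cumsum(weights / weights.sum())             # inclusive scan of the normalised weights
    cdf[-1] = 1.0                                        # guard against round-off
    return np.searchsorted(cdf, positions)               # ancestor index for each offspring

# toy usage: resample 8 particles whose weight mass sits on two of them
w = np.array([0.01, 0.02, 0.45, 0.01, 0.02, 0.45, 0.02, 0.02])
print("ancestor indices:", systematic_resample(w))
```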
APA, Harvard, Vancouver, ISO, and other styles
23

Lindsten, Fredrik. "Particle filters and Markov chains for learning of dynamical systems." Doctoral thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97692.

Full text
Abstract:
Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state-trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentistic parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than what is provided otherwise. For the filtering problem, this can be done by using the well known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
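A minimal sketch of the ancestor-sampling step that distinguishes PGAS from plain particle Gibbs: at each time step the ancestor of the conditioned (reference) particle is redrawn with probability proportional to the forward weights times the transition density to the conditioned state. The AR(1) transition and the particle arrays below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

PHI, Q = 0.7, 0.5      # toy AR(1) transition x_t = PHI*x_{t-1} + N(0, Q)

def sample_ancestor(x_ref_t, particles_prev, logw_prev):
    """PGAS ancestor-sampling step for the conditioned (reference) particle.

    The ancestor index is drawn with probability proportional to
    w_{t-1}^i * f(x_ref_t | x_{t-1}^i), which lets the reference path mix
    with the freshly generated particles instead of staying frozen.
    """
    logp = logw_prev - 0.5 * (x_ref_t - PHI * particles_prev) ** 2 / Q
    p = np.exp(logp - logp.max())
    return rng.choice(len(particles_prev), p=p / p.sum())

# toy usage: forward particles and log-weights at time t-1, conditioned state at t
particles_prev = rng.normal(0.0, 1.0, 200)
logw_prev = rng.normal(0.0, 0.3, 200)
print("new ancestor of the reference path:", sample_ancestor(0.4, particles_prev, logw_prev))
```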
CNDM
CADICS
APA, Harvard, Vancouver, ISO, and other styles
24

Sebastian, Ahlberg. "A Monte Carlo study of the particle mobility in crowded nearly one-dimensional systems." Thesis, Umeå universitet, Institutionen för fysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-92769.

Full text
Abstract:
The study of crowding effects on particle diffusion is a large subject with implications in many scientific areas. The studies span from purely theoretical calculations to experiments actually measuring the movement of proteins diffusing in a cell. Even though the subject is important and has been studied heavily, there are still aspects that are not fully understood. This report describes a Monte Carlo simulation approach (the Gillespie algorithm) to study the effects of crowding on particle diffusion in a quasi one-dimensional system, quasi meaning that the particles diffuse on a one-dimensional lattice but can dissociate from the lattice and rebind at a later stage. Different binding strategies are considered: rebinding to the same location and randomly choosing the binding location. The focus of the study is how these strategies affect the mobility (diffusion coefficient) of a tracer particle. The main result of this thesis is a graph showing the diffusion coefficient as a function of the binding rate for different binding strategies and particle densities. We provide analytical estimates for the diffusion coefficient in the unbinding-rate limits, which show good agreement with the simulations.
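A minimal sketch of the kind of Gillespie (kinetic Monte Carlo) simulation described above: hard-core particles hop on a periodic 1-D lattice, crowders can unbind and later rebind either at the site they left or at a random empty site, and the squared displacement of a tracer is recorded. The rates, lattice size and densities are illustrative assumptions, and only the crowders unbind in this simplified version.

```python
import numpy as np

L, N_CROWD = 100, 40                 # lattice sites and number of crowding particles
HOP, K_OFF, K_ON = 1.0, 0.1, 1.0     # hop, unbinding and rebinding rates
T_MAX = 200.0
REBIND_SAME_SITE = True              # False: rebind at a uniformly chosen empty site

def run(seed):
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=bool)
    tracer = L // 2
    occ[tracer] = True
    crowders = rng.choice(np.flatnonzero(~occ), N_CROWD, replace=False)
    occ[crowders] = True
    bound = np.ones(N_CROWD, dtype=bool)         # which crowders sit on the lattice
    t, disp = 0.0, 0
    while t < T_MAX:
        n_b = bound.sum()
        rates = np.array([HOP, HOP * n_b, K_OFF * n_b, K_ON * (N_CROWD - n_b)])
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # Gillespie waiting time
        event = rng.choice(4, p=rates / total)   # which event fires
        if event == 0:                           # tracer hop attempt (blocked if occupied)
            step = rng.choice([-1, 1])
            target = (tracer + step) % L
            if not occ[target]:
                occ[tracer], occ[target] = False, True
                tracer, disp = target, disp + step
        elif event == 1:                         # crowder hop attempt
            i = rng.choice(np.flatnonzero(bound))
            target = (crowders[i] + rng.choice([-1, 1])) % L
            if not occ[target]:
                occ[crowders[i]], occ[target] = False, True
                crowders[i] = target
        elif event == 2:                         # crowder unbinds from the lattice
            i = rng.choice(np.flatnonzero(bound))
            occ[crowders[i]], bound[i] = False, False
        else:                                    # crowder rebinds, if the chosen site is free
            i = rng.choice(np.flatnonzero(~bound))
            site = crowders[i] if REBIND_SAME_SITE else rng.choice(np.flatnonzero(~occ))
            if not occ[site]:
                occ[site], bound[i], crowders[i] = True, True, site
    return disp ** 2

msd = np.mean([run(s) for s in range(20)])
print("tracer mean squared displacement at t = %.0f: %.1f" % (T_MAX, msd))
```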
APA, Harvard, Vancouver, ISO, and other styles
25

REIS, T. M. "Bases gaussianas geradas com os métodos Monte Carlo Simulated Annealing e Particle Swarm Optimization." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/7378.

Full text
Abstract:
The Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were used to generate adapted Gaussian basis sets for the atoms from H to Ar in their ground states. A study of the efficiency and reliability of each method was carried out. To assess the reliability of the proposed methods, a specific study was performed on a test set of 15 atoms, namely N, Mg, Al, Cl, Ti, Ni, Br, Sr, Ru, Pd, Sb, Cs, Ir, Tl, and At. Initially, the Improved Generator Coordinate Hartree-Fock method was applied to generate adapted basis sets used as the starting point for the generation of new Gaussian basis sets. The Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were then developed in parallel studies following the same procedure, so that they could be compared at the end of the study. Before being applied in earnest, both methods were calibrated to determine the best parameters for the algorithms; cooling schedules (for Monte Carlo Simulated Annealing), swarm sizes (for Particle Swarm Optimization), and the total number of algorithm steps were studied. After this calibration stage, the two methods were applied, together with the variational principle, to the Hartree-Fock wave function to obtain fully optimized Gaussian basis sets. The basis sets were then contracted so as to minimize the observed loss of energy, favoring the contraction of the innermost exponents. The last two stages of the basis-set generation procedure were the inclusion of polarization functions and of diffuse functions, respectively; these steps were carried out with the methods developed in this work through calculations at the MP2 level. The basis sets generated in this work were used in practical calculations on atomic and molecular systems, and the results were compared with those obtained from similar, relevant basis sets from the literature. We found that, at the same level of computational efficiency, there is a small difference in effectiveness between the Monte Carlo Simulated Annealing and Particle Swarm Optimization methods, with Monte Carlo Simulated Annealing giving slightly better results for the calculations performed. Comparing the results obtained in this work with the corresponding ones from the literature, we observe numerically comparable values for the studied properties, yet the methods proposed here are significantly more efficient, making it possible to establish a single set of algorithm steps for different atomic systems. We also verified that the specific optimization stage proposed in this work is effective at locating the global minimum of the atomic functions at the HF level of theory. More detailed studies are needed to establish the true relationship behind the effectiveness observed for the two proposed methods, since the Particle Swarm Optimization method has a number of parameters whose influence was not examined in this work.
The fact that the methods developed in this work were built on Double Zeta basis sets does not restrict their generality; they are readily applicable to the development of Gaussian basis sets of varied quality in the atomic environment.
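To make the annealing step concrete, here is a minimal Python sketch of simulated annealing applied to a set of Gaussian exponents. It is purely illustrative and not the code developed in the thesis; the objective `hf_energy` is a hypothetical stand-in for an actual atomic Hartree-Fock energy evaluation, and the geometric cooling schedule is one of the choices that would need calibration.

```python
import math
import random

def hf_energy(exponents):
    # Hypothetical stand-in for the Hartree-Fock energy of a basis set;
    # a real implementation would call an electronic-structure code.
    return sum((math.log(a) - 1.0) ** 2 for a in exponents)

def simulated_annealing(exponents, t0=1.0, cooling=0.95, steps_per_t=50, t_min=1e-4):
    current = list(exponents)
    e_current = hf_energy(current)
    best, e_best = list(current), e_current
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            # Propose a small multiplicative perturbation of every exponent.
            trial = [a * math.exp(random.uniform(-0.1, 0.1)) for a in current]
            e_trial = hf_energy(trial)
            # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
            if e_trial < e_current or random.random() < math.exp(-(e_trial - e_current) / t):
                current, e_current = trial, e_trial
                if e_current < e_best:
                    best, e_best = list(current), e_current
        t *= cooling  # geometric cooling schedule
    return best, e_best

print(simulated_annealing([0.1, 1.0, 10.0]))
```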
APA, Harvard, Vancouver, ISO, and other styles
26

Larmier, Coline. "Stochastic particle transport in disordered media : beyond the Boltzmann equation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS388/document.

Full text
Abstract:
Heterogeneous and disordered media emerge in several applications in nuclear science and engineering, especially in relation to neutron and photon propagation. Examples are widespread and concern for instance the double heterogeneity of the fuel elements in pebble-bed reactors, or the assessment of the re-criticality probability due to the random arrangement of fuel resulting from severe accidents. In this thesis, we will investigate linear particle transport in random media. In the first part, we will focus on some mathematical models that can be used for the description of random media. Special emphasis will be given to stochastic tessellations, where a domain is partitioned into convex polyhedra by sampling random hyperplanes according to a given probability. Stochastic inclusions of spheres into a matrix will also be briefly introduced. A computer code will be developed in order to explicitly construct such geometries by Monte Carlo methods. In the second part, we will then assess the general features of particle transport within random media. For this purpose, we will consider some benchmark problems that are simple enough to allow for a thorough understanding of the effects of the random geometries on particle trajectories and yet retain the key properties of linear transport. Transport calculations will be realized by using the Monte Carlo particle transport code Tripoli4, developed at SERMA. The cases of quenched and annealed disorder models will be considered separately. In the former, an ensemble of geometries will be generated by using our computer code, and the transport problem will be solved for each configuration: ensemble averages will then be taken for the observables of interest. In the latter, an effective transport model capable of reproducing the effects of disorder in a single realization will be investigated. The approximations of the annealed disorder models will be elucidated, and significant improvements will be proposed.
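As a toy illustration of the quenched-disorder workflow described above (generate an ensemble of random geometries, solve the transport problem per realization, then ensemble-average), the following Python sketch scores transmission through a 1D purely absorbing Markovian binary mixture. It is an invented example, not Tripoli4 or the thesis code, and all cross sections and chord lengths are arbitrary.

```python
import random

def sample_geometry(length, chord_a, chord_b, sigma_a, sigma_b):
    """Markovian binary mixture: alternating segments with exponential chord lengths."""
    segments, x, material = [], 0.0, random.choice([0, 1])
    while x < length:
        chord = random.expovariate(1.0 / (chord_a if material == 0 else chord_b))
        segments.append((min(x + chord, length) - x, sigma_a if material == 0 else sigma_b))
        x += chord
        material = 1 - material
    return segments

def transmission(segments, n_particles=1000):
    """Analog Monte Carlo transmission through one purely absorbing slab realization."""
    transmitted = 0
    for _ in range(n_particles):
        alive = True
        for width, sigma in segments:
            if random.expovariate(sigma) < width:  # collision (absorption) inside the segment
                alive = False
                break
        transmitted += alive
    return transmitted / n_particles

# Quenched disorder: solve the transport problem per realization, then ensemble-average.
estimates = [transmission(sample_geometry(10.0, 1.0, 2.0, 0.5, 0.1)) for _ in range(100)]
print(sum(estimates) / len(estimates))
```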
APA, Harvard, Vancouver, ISO, and other styles
27

Dahlgren, David. "Monte Carlo simulations of Linear Energy Transfer distributions in radiation therapy." Thesis, Uppsala universitet, Högenergifysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446550.

Full text
Abstract:
In radiotherapy, a quantity asked for by clinics when calculating a treatment plan, along with dose, is linear energy transfer. Linear energy transfer is defined as the absorbed energy in tissue per particle track length and has been shown to increase with relative biological effectiveness until the overkilling effect. In this master thesis, the dose-averaged linear energy transfer from proton and carbon-ion beams was simulated using the FLUKA multi-purpose Monte Carlo code. The simulated distributions were compared to algorithms from RaySearch Laboratories AB in order to investigate the agreement between the computation methods. For the proton computation algorithm, improvements to the current scoring algorithm were also implemented, and a first version of the linear energy transfer validation code was constructed. Scoring of linear energy transfer in the RaySearch algorithms was done with the proton Monte Carlo dose engine and the carbon pencil beam dose engine. The results indicated that the dose-averaged linear energy transfer from RaySearch Laboratories agreed well at low energies for both proton and carbon beams. At higher energies, shape differences were noted for both small and large field sizes. For protons, the RaySearch algorithm initially overestimates the linear energy transfer, which could result from fluence differences between FLUKA and the RaySearch algorithm. For carbon ions, the difference could stem from some loss of information in the tables used to calculate the linear energy transfer in the RaySearch algorithm. In validation γ-tests, the proton linear energy transfer passed at (3%/3mm) and (1%/1mm) with no voxels out of tolerance; γ-tests for the carbon linear energy transfer passed with no voxels out of tolerance at (5%/5mm) and with a fail rate of 2.92% at (3%/3mm).
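For reference, the quantity being scored, dose-averaged linear energy transfer, weights each contribution's LET by its dose. The short Python sketch below, with made-up per-event values (it is not the FLUKA or RaySearch scoring code), illustrates the definition.

```python
def dose_averaged_let(doses, lets):
    """LET_d = sum(d_i * LET_i) / sum(d_i) over the contributions to one voxel."""
    assert len(doses) == len(lets)
    return sum(d * l for d, l in zip(doses, lets)) / sum(doses)

# Hypothetical per-event contributions in a single voxel (dose in Gy, LET in keV/um).
doses = [0.02, 0.01, 0.03]
lets = [2.5, 7.0, 1.8]
print(dose_averaged_let(doses, lets))
```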
APA, Harvard, Vancouver, ISO, and other styles
28

Hosseini, Navid. "Geant4 Based Monte Carlo Simulation For Carbon Fragmentation In Nuclear Emulsion." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614479/index.pdf.

Full text
Abstract:
The study is mainly focused on Monte Carlo simulation of carbon fragmentation in nuclear emulsion. The carbon ion is a remarkable candidate for cancer therapy because of its high efficiency in depositing the majority of its energy in a narrow region called the Bragg peak. On the other hand, the main side effect of heavy-ion therapy is the radiation dose beyond the Bragg peak, which damages healthy tissue. The use of heavy ions in cancer therapy therefore requires an accurate understanding of ion-matter interactions, which result in the production of secondary particles. A Geant4-based simulation of carbon fragmentation was carried out for a 400 MeV/n carbon beam directed at a detector made of nuclear emulsion films interleaved with lexan layers. Four different models in Geant4 are compared with recent experimental data; among them, the Binary Cascade model (BIC) shows the best agreement.
APA, Harvard, Vancouver, ISO, and other styles
29

Zierenberg, Johannes. "From Particle Condensation to Polymer Aggregation: Phase Transitions and Structural Phases in Mesoscopic Systems." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-197255.

Full text
Abstract:
This thesis is concerned with the equilibrium properties and phase transitions of dilute particle and polymer systems, with a focus on particle condensation and polymer aggregation. Both analytical arguments and advanced Monte Carlo simulations are employed. In order to simulate the system sizes reached in this work, a parallel version of the multicanonical method was developed; its performance is demonstrated on several relevant examples. To better understand particle condensation and polymer aggregation in finite systems and under geometric confinement, the influence of various parameters on the respective transitions is investigated, including the system size and density and, specifically for semiflexible polymers, their stiffness. Both canonical observables (energy, droplet or aggregate size, etc.), with the associated transition temperature and transition width, and a microcanonical analysis as well as free-energy barriers are considered. For semiflexible polymers, particular attention is paid to the influence of stiffness on the resulting aggregate structures, which range from amorphous globules for flexible polymers to twisted bundles for stiffer polymers. A further focus is the correspondence between the generic mechanisms of condensation and aggregation: the transition between a homogeneous phase and an inhomogeneous (mixed) phase. At this level, polymer aggregation can be understood as the condensation of extended objects. This is reflected above all in the scaling behavior of canonical and microcanonical observables, in particular in an unexpected but consistent regime for intermediate (mesoscopic) system sizes.
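For readers unfamiliar with the multicanonical method mentioned above, the following single-process Python sketch shows the basic flat-histogram weight recursion on a toy two-state model; the parallel version developed in the thesis distributes the sampling over independent walkers, and none of this is the author's code.

```python
import math
import random
from collections import defaultdict

N = 20  # toy system: N two-state units, "energy" = number that are up

def sample_with_weights(log_w, sweeps=20000):
    """One estimation run: sample with the current multicanonical weights, return the histogram."""
    state = [random.randint(0, 1) for _ in range(N)]
    energy = sum(state)
    hist = defaultdict(int)
    for _ in range(sweeps):
        i = random.randrange(N)
        new_energy = energy + (1 - 2 * state[i])
        # Multicanonical acceptance: min(1, W(E_new) / W(E_old)).
        if random.random() < math.exp(min(0.0, log_w[new_energy] - log_w[energy])):
            state[i] ^= 1
            energy = new_energy
        hist[energy] += 1
    return hist

log_w = [0.0] * (N + 1)
for _ in range(10):
    hist = sample_with_weights(log_w)
    # Standard recursion: W_new(E) = W_old(E) / H(E) for every visited energy bin.
    for e, h in hist.items():
        log_w[e] -= math.log(h)
print([round(x, 1) for x in log_w])  # converges towards minus the log density of states
```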
APA, Harvard, Vancouver, ISO, and other styles
30

Böhlen, Till Tobias. "Monte Carlo particle transport codes for ion beam therapy treatment planning : Validation, development and applications." Doctoral thesis, Stockholms universitet, Fysikum, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-81111.

Full text
Abstract:
External radiotherapy with proton and ion beams needs accurate tools for the dosimetric characterization of treatment fields. Monte Carlo (MC) particle transport codes, such as FLUKA and GEANT4, can be a valuable method to increase accuracy of dose calculations and to support various aspects of ion beam therapy (IBT), such as treatment planning and monitoring. One of the prerequisites for such applications is however that the MC codes are able to model reliably and accurately the relevant physics processes. As a first focus of this thesis work, physics models of MC codes with importance for IBT are developed and validated with experimental data. As a result suitable models and code configurations for applications in IBT are established. The accuracy of FLUKA and GEANT4 in describing nuclear fragmentation processes and the production of secondary charged nuclear fragments is investigated for carbon ion therapy. As a complementary approach to evaluate the capability of FLUKA to describe the characteristics of mixed radiation fields created by ion beams, simulated microdosimetric quantities are compared with experimental data. The correct description of microdosimetric quantities is also important when they are used to predict values of relative biological effectiveness (RBE). Furthermore, two models describing Compton scattering and the acollinearity of two-quanta positron annihilation at rest in media were developed, validated and integrated in FLUKA. The detailed description of these processes is important for an accurate simulation of positron emission tomography (PET) and prompt-γ imaging. Both techniques are candidates to be used in clinical routine to monitor dose administration during cancer treatments with IBT. The second objective of this thesis is to contribute to the development of a MC-based treatment planning tool for protons and ions with atomic number Z ≤ 8 using FLUKA. In contrast to previous clinical FLUKA-based MC implementations for IBT which only re-calculate a given treatment plan, the developed prototype features inverse optimization of absorbed dose and RBE-weighted dose for single fields and simultaneous multiple-field optimization for realistic treatment conditions. In a study using this newly-developed tool, the robustness of IBT treatment fields to uncertainties in the prediction of RBE values is investigated, while comparing different optimization strategies.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 5: Submitted. Paper 6: Manuscript.

APA, Harvard, Vancouver, ISO, and other styles
31

Oertli, David Bernhardt. "Proton dose assessment to the human eye using Monte Carlo n-particle transport code (MCNPX)." [College Station, Tex.]: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tran, Binh Phuoc. "Modeling of Ion Thruster Discharge Chamber Using 3D Particle-In-Cell Monte-Carlo-Collision Method." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33510.

Full text
Abstract:
This thesis is aimed at developing a method to simulate ion thruster discharge chambers in a full three-dimensional environment and at studying the effect of discharge chamber size on ion thruster performance. The study focuses solely on ring-cusped thrusters that use xenon as propellant and a discharge cathode assembly as the means of propellant ionization. Commercial software is used in both the setup and analysis phases. Numerical simulation is handled by a 3D Particle-In-Cell Monte-Carlo-Collision method. Simulation results are analyzed and compared with other works; it is concluded that the simulation methodology is validated and can be used to simulate different cases. Simulations of varying chamber sizes were therefore performed and the results used to develop a performance curve. This plot suggests that the most efficient case is the 30 cm thruster. The result further validates the simulation process, since the operating parameters used for all of the cases are taken from a 30 cm thruster experiment. One obvious application of such a simulation process is to determine a set of the most efficient operating parameters for a thruster of a given size before actual fabrication and laboratory testing.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Papež, Milan. "Monte Carlo identifikační strategie pro stavové modely." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400416.

Full text
Abstract:
State-space models are extraordinarily useful in many engineering and scientific fields. Their appeal stems mainly from the fact that they provide a general tool for describing a wide range of real-world dynamical systems. However, owing to their generality, the associated parameter and state inference tasks are intractable in most practical situations. This dissertation considers two particularly important classes of nonlinear and non-Gaussian state-space models: conditionally conjugate state-space models and Markov-switching nonlinear models. The main feature of these models is that, despite their intractability, they contain a tractable substructure. The intractable part requires the use of approximation techniques, and Monte Carlo computational methods are a theoretically and practically well-established tool for addressing this problem. The advantage of these models is that the tractable part can be exploited to increase the efficiency of Monte Carlo methods by resorting to Rao-Blackwellization. Specifically, this dissertation proposes two Rao-Blackwellized particle filters for identifying either static or time-varying parameters in conditionally conjugate state-space models. In addition, it adopts the recent particle Markov chain Monte Carlo methodology to design Rao-Blackwellized particle Gibbs kernels for state smoothing in Markov-switching nonlinear models. These kernels are then used for maximum-likelihood parameter inference in the considered models. The experiments demonstrate that the proposed algorithms outperform related techniques in terms of estimation accuracy and computational time.
APA, Harvard, Vancouver, ISO, and other styles
34

Timmins, Benjamin H. "Automatic Particle Image Velocimetry Uncertainty Quantification." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/884.

Full text
Abstract:
The uncertainty of any measurement is the interval in which one believes the actual error lies. Particle Image Velocimetry (PIV) measurement error depends on the PIV algorithm used, a wide range of user inputs, flow characteristics, and the experimental setup. Since these factors vary in time and space, they lead to nonuniform error throughout the flow field. As such, a universal PIV uncertainty estimate is not adequate and can be misleading. This is of particular interest when PIV data are used for comparison with computational or experimental data. A method to estimate the uncertainty due to the PIV calculation of each individual velocity measurement is presented. The relationship between four error sources and their contribution to PIV error is first determined. The sources, or parameters, considered are particle image diameter, particle density, particle displacement, and velocity gradient, although this choice in parameters is arbitrary and may not be complete. This information provides a four-dimensional "uncertainty surface" for the PIV algorithm used. After PIV processing, our code "measures" the value of each of these parameters and estimates the velocity uncertainty for each vector in the flow field. The reliability of the methodology is validated using known flow fields so the actual error can be determined. Analysis shows that, for most flows, the uncertainty distribution obtained using this method fits the confidence interval. The method is general and can be adapted to any PIV analysis.
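The "uncertainty surface" described above is essentially a lookup table over the measured parameters. The sketch below uses SciPy's grid interpolator on an invented two-parameter table (particle image diameter and velocity gradient only) to show how a per-vector uncertainty could be read off; it is not the authors' code and the numbers are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical calibration table: uncertainty (pixels) vs particle image diameter and
# velocity gradient, built beforehand from synthetic images with known displacements.
diameters = np.array([1.0, 2.0, 3.0, 4.0])      # px
gradients = np.array([0.0, 0.05, 0.1, 0.2])     # px/px
uncertainty_table = np.array([
    [0.10, 0.12, 0.15, 0.25],
    [0.05, 0.06, 0.08, 0.15],
    [0.04, 0.05, 0.07, 0.12],
    [0.05, 0.06, 0.08, 0.14],
])

surface = RegularGridInterpolator((diameters, gradients), uncertainty_table)

# After PIV processing, "measure" the parameters for each vector and look up its uncertainty.
measured = np.array([[2.4, 0.03], [3.1, 0.12]])
print(surface(measured))
```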
APA, Harvard, Vancouver, ISO, and other styles
35

Gillespie, Timothy James. "A computer model of beta particle dose distributions in lithium fluoride and tissue." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/16952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Chao. "ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lee, Anthony. "Towards smooth particle filters for likelihood estimation with multivariate latent variables." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1547.

Full text
Abstract:
In parametrized continuous state-space models, one can obtain estimates of the likelihood of the data for fixed parameters via the Sequential Monte Carlo methodology. Unfortunately, even if the likelihood is continuous in the parameters, the estimates produced by practical particle filters are not, even when common random numbers are used for each filter. This is because the same resampling step which drastically reduces the variance of the estimates also introduces discontinuities in the particles that are selected across filters when the parameters change. When the state variables are univariate, a method exists that gives an estimator of the log-likelihood that is continuous in the parameters. We present a non-trivial generalization of this method using tree-based o(N²) (and as low as O(N log N)) resampling schemes that induce significant correlation amongst the selected particles across filters. In turn, this reduces the variance of the difference between the likelihood evaluated for different values of the parameters and the resulting estimator is considerably smoother than naively running the filters with common random numbers. Importantly, in practice our methods require only a change to the resample operation in the SMC framework without the addition of any extra parameters and can therefore be used for any application in which particle filters are already used. In addition, excepting the optional use of interpolation in the schemes, there are no regularity conditions for their use although certain conditions make them more advantageous. In this thesis, we first introduce the relevant aspects of the SMC methodology to the task of likelihood estimation in continuous state-space models and present an overview of work related to the task of smooth likelihood estimation. Following this, we introduce theoretically correct resampling schemes that cannot be implemented and the practical tree-based resampling schemes that were developed instead. After presenting the performance of our schemes in various applications, we show that two of the schemes are asymptotically consistent with the theoretically correct but unimplementable methods introduced earlier. Finally, we conclude the thesis with a discussion.
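To see why common random numbers alone do not give a smooth likelihood estimate, the following Python sketch runs a bootstrap particle filter with a fixed seed on a toy linear-Gaussian model: the resampling step still reselects different ancestors as the parameter changes, which is the discontinuity discussed above. This illustrates the problem only, not the tree-based schemes proposed in the thesis.

```python
import numpy as np

def pf_loglik(theta, ys, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood (up to an additive constant) for the toy model
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + e_t,  v_t, e_t ~ N(0, 1),
    evaluated with common random numbers (same seed for every theta)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    loglik = 0.0
    for y in ys:
        x = theta * x + rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * (y - x) ** 2                          # log N(y; x, 1) up to a constant
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Multinomial resampling: the selected ancestors change abruptly as theta varies,
        # which makes this estimator non-smooth in theta despite the common seed.
        x = rng.choice(x, size=n_particles, p=w / w.sum())
    return loglik

rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.standard_normal()
    ys.append(x + rng.standard_normal())

for theta in (0.89, 0.90, 0.91):
    print(theta, round(pf_loglik(theta, ys), 3))
```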
APA, Harvard, Vancouver, ISO, and other styles
38

Rosencranz, Daniela Necsoiu. "Monte Carlo simulation and experimental studies of the production of neutron-rich medical isotopes using a particle accelerator." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3077/.

Full text
Abstract:
Developments in nuclear medicine lead to an increasing demand for the production of radioisotopes with suitable nuclear and chemical properties. Furthermore, it is evident from the literature that the production of radioisotopes using charged-particle accelerators instead of nuclear reactors is gaining popularity. The main advantages of producing medical isotopes with accelerators are carrier-free radionuclides of short-lived isotopes, improved handling, reduction of the radioactive waste, and lower cost of isotope fabrication. Proton-rich isotopes are the result of nuclear interactions between enriched stable isotopes and energetic protons. An interesting observation is that, during the production of proton-rich isotopes, fast and intermediately fast neutrons from nuclear reactions such as (p,xn) are also produced as a by-product. This observation suggests that it is perhaps possible to use these neutrons to activate secondary targets for the production of neutron-rich isotopes. The study of secondary radioisotope production with fast neutrons from (p,xn) reactions using a particle accelerator is the main goal of the research in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
39

Landon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared with good agreement to simple analytical solutions, and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored and our final simulation method, (VR)2-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
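The control-variate idea can be illustrated outside of DSMC: estimate a weak deviation from a known equilibrium by simulating only the difference with common random numbers. The toy Python sketch below does this for a trivial scalar observable and is not the BGK-DSMC implementation described above.

```python
import random
import statistics

def noisy_observable(signal, rng):
    """Toy 'simulation': an observable whose mean is `signal` but whose noise dwarfs it."""
    return signal + rng.gauss(0.0, 1.0)

def naive_estimate(signal, n):
    rng = random.Random(0)
    return statistics.mean(noisy_observable(signal, rng) for _ in range(n))

def variance_reduced_estimate(signal, n):
    # The equilibrium mean is known exactly (0 here); simulate only the *difference*
    # between the non-equilibrium and equilibrium runs using common random numbers.
    # In this toy the correlation is perfect, so the difference has zero variance.
    rng_a, rng_b = random.Random(1), random.Random(1)
    diffs = [noisy_observable(signal, rng_a) - noisy_observable(0.0, rng_b) for _ in range(n)]
    return 0.0 + statistics.mean(diffs)

weak_signal = 1e-4
print(naive_estimate(weak_signal, 10000))             # swamped by statistical noise
print(variance_reduced_estimate(weak_signal, 10000))  # recovers the low signal
```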
by Colin Donald Landon.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
40

Solomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.

Full text
Abstract:
Doctor of Philosophy
Department of Mechanical and Nuclear Engineering
J. Kenneth Shultis
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance-reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D 3-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
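For context, the "calculation cost" minimized here is the reciprocal of the usual Monte Carlo figure of merit, FOM = 1/(R^2 T), with R the relative error and T the computing time. A small helper (with invented numbers) shows how an efficiency gain of about a factor of 2 would be read off.

```python
def figure_of_merit(relative_error, minutes):
    """FOM = 1 / (R^2 * T); the cost minimized in the optimization is 1 / FOM."""
    return 1.0 / (relative_error ** 2 * minutes)

baseline = figure_of_merit(0.02, 60.0)    # standard weight windows (hypothetical numbers)
optimized = figure_of_merit(0.014, 60.0)  # optimized weight windows, same run time
print(f"efficiency gain: {optimized / baseline:.2f}x")
```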
APA, Harvard, Vancouver, ISO, and other styles
41

Hol, Jeroen D. "Resampling in particle filters." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2366.

Full text
Abstract:

In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and computational complexity.
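Since systematic resampling comes out on top in this comparison, a minimal NumPy version of the textbook algorithm is sketched below; it is a generic formulation and not necessarily the exact variant benchmarked in the report.

```python
import numpy as np

def systematic_resample(weights, u=None):
    """Return resampled particle indices using a single uniform offset u in [0, 1)."""
    n = len(weights)
    if u is None:
        u = np.random.default_rng().random()
    positions = (u + np.arange(n)) / n       # one stratified grid of n points
    cumulative = np.cumsum(weights) / np.sum(weights)
    cumulative[-1] = 1.0                     # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

weights = np.array([0.1, 0.4, 0.3, 0.2])
print(systematic_resample(weights, u=0.5))
```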

APA, Harvard, Vancouver, ISO, and other styles
42

Nes, Elena. "Derivation of photon energy spectra from transmission measurements using large fields: a dissertation." San Antonio: UTHSC, 2006. http://proquest.umi.com/pqdweb?did=1324388751&sid=1&Fmt=2&clientId=70986&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Schmidt, Daniel. "Kinetic Monte Carlo Methods for Computing First Capture Time Distributions in Models of Diffusive Absorption." Scholarship @ Claremont, 2017. https://scholarship.claremont.edu/hmc_theses/97.

Full text
Abstract:
In this paper, we consider the capture dynamics of a particle undergoing a random walk above a sheet of absorbing traps. In particular, we seek to characterize the distribution in time from when the particle is released to when it is absorbed. This problem is motivated by the study of lymphocytes in the human blood stream; for a particle near the surface of a lymphocyte, how long will it take for the particle to be captured? We model this problem as a diffusive process with a mixture of reflecting and absorbing boundary conditions. The model is analyzed from two approaches. The first is a numerical simulation using a Kinetic Monte Carlo (KMC) method that exploits exact solutions to accelerate a particle-based simulation of the capture time. A notable advantage of KMC is that run time is independent of how far from the traps one begins. We compare our results to the second approach, which is asymptotic approximations of the FPT distribution for particles that start far from the traps. Our goal is to validate the efficacy of homogenizing the surface boundary conditions, replacing the reflecting (Neumann) and absorbing (Dirichlet) boundary conditions with a mixed (Robin) boundary condition.
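A brute-force stand-in for the problem described above, assuming the trapping plane has already been homogenized into a partially absorbing (Robin-like) boundary, can be written in a few lines of Python. The accelerated KMC method of the thesis avoids exactly this kind of small-step time stepping, so the sketch only illustrates the quantity being computed, with invented parameters.

```python
import numpy as np

def first_capture_time(z0=0.5, p_absorb=0.5, dt=1e-3, max_steps=200_000, rng=None):
    """Toy homogenized model: a 1D Brownian particle (D = 1) above a plane that reflects it
    but absorbs it with probability p_absorb at each contact, a discrete stand-in for the
    mixed (Robin) boundary condition."""
    rng = rng or np.random.default_rng()
    z, step_sd = z0, np.sqrt(2.0 * dt)
    for step in range(max_steps):
        z += step_sd * rng.standard_normal()
        if z <= 0.0:
            if rng.random() < p_absorb:
                return (step + 1) * dt   # captured
            z = -z                       # reflected back into the half-space
    return np.inf                        # not captured within the time budget

rng = np.random.default_rng(0)
times = np.array([first_capture_time(rng=rng) for _ in range(300)])
captured = np.isfinite(times)
print("capture fraction:", captured.mean(), "median time:", np.median(times[captured]))
```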
APA, Harvard, Vancouver, ISO, and other styles
44

Rousset, Mathias. "Méthodes de 'Population Monte-Carlo' en temps continu et physique numérique." Toulouse 3, 2006. http://www.theses.fr/2006TOU30251.

Full text
Abstract:
In this dissertation, we focus on stochastic numerical methods of Population Monte-Carlo type, in the continuous time setting. These PMC methods resort to the sequential computation of averages of weighted Markovian paths. The practical implementation then relies on the time evolution of the empirical distribution of a system of N interacting walkers. We prove the long-time convergence (towards Schrödinger groundstates) of the variance and bias of this method with the expected 1/N rate. Next, we consider the problem of sequential sampling of a continuous flow of Boltzmann measures. For this purpose, starting with any Markovian dynamics, we associate a second dynamics in reversed time whose law (weighted by a computable Feynman-Kac path average) recovers the original dynamics as well as the target Boltzmann measure. Finally, we generalize the latter problem to the case where the dynamics is caused by evolving rigid constraints on the positions of the process. We compute exactly the associated weights, which involve the local curvature of the manifold defined by the constraints.
APA, Harvard, Vancouver, ISO, and other styles
45

Lindsten, Fredrik, Pete Bunch, Simon J. Godsill, and Thomas B. Schön. "Rao-Blackwellized particle smoothers for mixed linear/nonlinear state-space models." Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93460.

Full text
Abstract:
We consider the smoothing problem for a class of conditionally linear Gaussian state-space (CLGSS) models, referred to as mixed linear/nonlinear models. In contrast to the better studied hierarchical CLGSS models, these allow for an intricate cross dependence between the linear and the nonlinear parts of the state vector. We derive a Rao-Blackwellized particle smoother (RBPS) for this model class by exploiting its tractable substructure. The smoother is of the forward filtering/backward simulation type. A key feature of the proposed method is that, unlike existing RBPS for this model class, the linear part of the state vector is marginalized out in both the forward direction and in the backward direction.
CNDM
CADICS
APA, Harvard, Vancouver, ISO, and other styles
46

Barghouthi, Imad Ahmad. "A Monte Carlo Simulation of Coulomb Collisions and Wave-Particle Interactions in Space Plasma at High Lattitudes." DigitalCommons@USU, 1994. https://digitalcommons.usu.edu/etd/2272.

Full text
Abstract:
Four studies were considered to simulate ion behavior in the auroral region and the polar wind. In study I, a Monte Carlo simulation was used to investigate the behavior of O+ ions that are E × B-drifting through a background of neutral O, with the effect of O+ (Coulomb) self-collisions included. Wide ranges of the ion-to-neutral density ratio ni/nn and the electrostatic field E were considered in order to investigate the change of ion behavior with respect to the solar cycle and altitude. For low altitudes and/or solar minimum (ni/nn ≤ 10⁻⁵), the effect of self-collisions is negligible. For higher values of ni/nn, the effect of self-collisions becomes significant and, hence, the non-Maxwellian features of the O+ distributions are reduced. In study II, the steady-state flow of the polar-wind protons through a background of O+ ions was studied. Special attention was given to using an accurate collision model: the Fokker-Planck expression was used to represent H+-O+ Coulomb collisions. The transition layer between the collision-dominated and the collisionless regions plays a pivotal role in the behavior of the H+ flow. In the transition region, the shape of the H+ distribution changes in a complicated manner from Maxwellian to "kidney bean". The flow also changes from subsonic to supersonic within the transition region. The heat fluxes of parallel and perpendicular energies change rapidly from their maximum (positive) to their minimum (negative) values within the same transition region. In study III, a Monte Carlo simulation was developed in order to study the effect of wave-particle interactions (WPI) on O+ and H+ ion outflow in the polar wind. The simulation also considered the other mechanisms included in classical polar wind studies, such as gravity, the polarization electrostatic field, and the divergence of geomagnetic field lines, and an altitude-dependent wave spectral density was adopted. The main conclusions are: (1) the O+ velocity distribution develops conic features at high altitudes; (2) the O+ ions are preferentially energized; (3) the escape flux of O+ increased by a factor of 40, while the escape flux of H+ remained constant; (4) including the effect of a finite ion Larmor radius produced toroidal features for the O+ and H+ distributions at higher altitudes. In study IV, a comparison between the effect of WPI on H+ and O+ ion outflow in the polar wind and in the auroral region was made. It was concluded that: (1) O+ is preferentially energized in both regions; (2) both ions (H+ and O+) are more energetic in the auroral region at most altitudes; (3) in the auroral region, ion conics formed at lower altitudes, at 1.6 R for O+ and 2.5 R for H+, whereas in the polar wind H+ did not form conics and O+ formed conics only at high altitudes; (4) the effects of body forces are more important in the polar wind than in the auroral region, and for O+ than for H+.
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Saadony, Muhannad. "Bayesian stochastic differential equation modelling with application to finance." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1530.

Full text
Abstract:
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek Interest Rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek Interest Rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model, by replacing the Brownian motions that drive the underlying stochastic differential equations by fractional Brownian motions, so allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology to simulated and real financial data with success. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
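As background for the first of the models named above, the Vasicek short rate follows dr_t = kappa (theta - r_t) dt + sigma dW_t, which can be simulated with a simple Euler-Maruyama scheme. The parameter values in the sketch are illustrative only; such simulated paths are the kind of synthetic data on which a particle filter would then be run.

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, dt, n_steps, rng=None):
    """Euler-Maruyama discretization of dr = kappa*(theta - r)*dt + sigma*dW."""
    rng = rng or np.random.default_rng(0)
    rates = np.empty(n_steps + 1)
    rates[0] = r0
    for t in range(n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)
        rates[t + 1] = rates[t] + kappa * (theta - rates[t]) * dt + sigma * dw
    return rates

# Illustrative parameters: mean reversion to 5% at speed 0.8, 2% volatility, daily steps.
path = simulate_vasicek(r0=0.03, kappa=0.8, theta=0.05, sigma=0.02, dt=1 / 252, n_steps=252)
print(path[-1])
```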
APA, Harvard, Vancouver, ISO, and other styles
48

Zheng, Dongqin. "Evaluation and development of data assimilation in atmospheric dispersion models for use in nuclear emergencies." The University of Hong Kong, 2007. http://sunzi.lib.hku.hk/hkuto/record/B39346031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zheng, Dongqin, and 鄭冬琴. "Evaluation and development of data assimilation in atmospheric dispersion models for use in nuclear emergencies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39346031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Parham, Jonathan Brent. "Physically consistent boundary conditions for free-molecular satellite aerodynamics." Thesis, Boston University, 2014. https://hdl.handle.net/2144/21230.

Full text
Abstract:
Thesis (M.Sc.Eng.)
To determine satellite trajectories in low earth orbit, engineers need to adequately estimate aerodynamic forces. But to this day, such a task suffers from inexact values of drag forces acting on complicated shapes that form modern spacecraft. While some of the complications arise from the uncertainty in the upper atmosphere, this work focuses on the problems in modeling the flow interaction with the satellite geometry. The only numerical approach that accurately captures effects in this flow regime, like self-shadowing and multiple molecular reflections, is known as Test Particle Monte Carlo. This method executes a ray-tracing algorithm to follow particles that pass through a control volume containing the spacecraft and accumulates the momentum transfer to the body surfaces. Statistical fluctuations inherent in the approach demand particle numbers on the order of millions, often making this scheme too costly to be practical. This work presents a parallel Test Particle Monte Carlo method that takes advantage of both graphics processing units and multi-core central processing units. The speed at which this model can run with millions of particles enabled the exploration of regimes where a flaw was revealed in the model's initial particle seeding. A new model introduces an analytical fix to this flaw, consisting of initial position distributions at the boundary of a spherical control volume and an integral for the correct number flux, which is used to seed the calculation. This thesis includes validation of the proposed model using analytical solutions for several simple geometries and demonstrates uses of the method for the aero-stabilization of the Phobos-Grunt Martian probe and pose-estimation for the ICESat mission.
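The "integral for the correct number flux" referred to above has, for a drifting Maxwellian free stream, a standard closed form. The helper below evaluates the textbook expression (which may differ in notation from the thesis), with illustrative low-Earth-orbit-like numbers.

```python
import math

def inward_number_flux(n, c_mp, s_n):
    """Number flux of a drifting Maxwellian through a surface element.

    n    : free-stream number density [1/m^3]
    c_mp : most probable thermal speed sqrt(2 k T / m) [m/s]
    s_n  : speed ratio component along the inward surface normal, (U . n_hat) / c_mp
    """
    return n * c_mp / (2.0 * math.sqrt(math.pi)) * (
        math.exp(-s_n ** 2) + math.sqrt(math.pi) * s_n * (1.0 + math.erf(s_n))
    )

# Illustrative values: n = 1e13 m^-3, thermal speed ~1 km/s, ~7.5 km/s orbital drift.
print(inward_number_flux(n=1e13, c_mp=1.0e3, s_n=7.5))
```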
2031-01-01
APA, Harvard, Vancouver, ISO, and other styles