Dissertations / Theses on the topic 'Particle Monte Carlo'
The top 50 dissertations and theses on the topic 'Particle Monte Carlo', with abstracts where available in the records' metadata.
Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.
Li, Lulu. "Acceleration methods for Monte Carlo particle transport simulations." Thesis (Ph. D.), Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112521.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 166-175).
Performing nuclear reactor core physics analysis is a crucial step in the process of both designing and understanding nuclear power reactors. Advancements in the nuclear industry demand more accurate and detailed results from reactor analysis. Monte Carlo (MC) eigenvalue neutron transport methods are uniquely qualified to provide these results, due to their accurate treatment of the space, angle, and energy dependencies of neutron distributions. Monte Carlo eigenvalue simulations are, however, challenging, because they must resolve the fission source distribution and accumulate sufficient tally statistics, resulting in prohibitive run times. This thesis proposes the Low Order Operator (LOO) acceleration method to address the run time challenge, and provides analyses to support its use for full-scale reactor simulations. LOO is implemented in the continuous-energy Monte Carlo code OpenMC and tested in 2D PWR benchmarks. The Low Order Operator (LOO) acceleration method is a deterministic transport method based on the Method of Characteristics. Similar to Coarse Mesh Finite Difference (CMFD), the other acceleration method evaluated in this thesis, LOO parameters are constructed from Monte Carlo tallies. The solutions to the LOO equations are then used to update Monte Carlo fission sources. This thesis deploys independent simulations to rigorously assess LOO, CMFD, and unaccelerated Monte Carlo, simulating up to a quarter of a trillion neutron histories for each simulation. Analysis and performance models are developed to address two aspects of the Monte Carlo run time challenge. First, this thesis demonstrates that acceleration methods can reduce the vast number of neutron histories required to converge the fission source distribution before tallies can be accumulated. Second, the slow convergence of tally statistics is improved by the acceleration methods in the earlier active cycles. A theoretical model is developed to explain the observed behaviors and predict convergence rates. Finally, numerical results and theoretical models shed light on the selection of optimal simulation parameters such that a desired statistical uncertainty can be achieved with the minimum number of neutron histories. This thesis demonstrates that the conventional wisdom in performing unaccelerated MC simulations (e.g., maximizing the number of cycles rather than the number of neutrons per cycle) can be improved simply by using more optimal parameters. LOO acceleration reduces the number of neutron histories required by a factor of at least 2.2 compared to the unaccelerated Monte Carlo scheme, and the CPU time and memory overhead associated with LOO are small.
by Lulu Li.
Ph. D.
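The source-update mechanic that LOO shares with CMFD can be illustrated compactly. The sketch below is a toy, not OpenMC's or the thesis's implementation: a hand-built 1-D "migration" operator and fission matrix stand in for quantities that would really be assembled from coarse-mesh tallies, and the fission bank is reweighted so that its coarse-mesh distribution matches the low-order source. All names and numbers are illustrative assumptions.

```python
import numpy as np

def low_order_source(M, F, n_iter=500):
    """Power iteration for the coarse eigenproblem M*phi = (1/k) F*phi."""
    phi, k = np.ones(M.shape[0]), 1.0
    for _ in range(n_iter):
        s_old = F @ phi
        phi = np.linalg.solve(M, s_old / k)
        s_new = F @ phi
        k *= s_new.sum() / s_old.sum()
    src = F @ phi
    return src / src.sum(), k

def reweight_bank(positions, weights, target, edges):
    """Scale site weights so each coarse cell carries the target source."""
    cell = np.digitize(positions, edges) - 1
    current = np.bincount(cell, weights=weights, minlength=target.size)
    current /= weights.sum()
    factor = np.where(current > 0, target / np.maximum(current, 1e-12), 1.0)
    return weights * factor[cell]

# toy 4-cell problem: tridiagonal "migration" operator, diagonal fission matrix
M = np.diag(np.full(4, 2.2)) + np.diag(np.full(3, -1.0), 1) + np.diag(np.full(3, -1.0), -1)
F = np.diag([0.9, 1.1, 1.1, 0.9])
target, k = low_order_source(M, F)

rng = np.random.default_rng(0)
sites = rng.uniform(0, 4, 10_000)            # fission site positions
weights = reweight_bank(sites, np.ones(sites.size), target, edges=np.arange(5.0))
```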
Torfeh, Eva. "Monte Carlo microdosimetry of charged-particle microbeam irradiations." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0159/document.
The interaction of charged particles with matter leads to very localized energy deposits along sub-micrometric tracks. This unique property makes this type of ionizing radiation particularly interesting for deciphering radiation-induced molecular mechanisms at the cell scale. Charged particle microbeams (CPMs) provide the ability to target a given cell compartment at the micrometer scale with a controlled dose, down to a single particle. My work focused on irradiations carried out with the CPM at the AIFIRA facility in the CENBG (Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine). This microbeam delivers protons and alpha particles and is dedicated to targeted irradiation in vitro (human cells) and in vivo (C. elegans). In addition to their interest for experimental studies, the energy deposits and the interactions of charged particles with matter can be modeled precisely along their trajectory using track structure codes based on Monte Carlo methods. These simulation tools allow a precise characterization of the microdosimetry of the irradiations, from the detailed description of the physical interactions at the nanoscale to the prediction of the number of DNA damage events, their complexity, and their distribution in space. During my thesis, I developed microdosimetric models based on the Geant4-DNA modeling toolkit in two cases. The first concerns the simulation of the energy distribution deposited in a cell nucleus and the calculation of the number of different types of DNA damage (single and double strand breaks) at the nanometric and micrometric scales, for different types and numbers of delivered particles. These simulations are compared with experimental measurements of the kinetics of GFP-labeled (Green Fluorescent Protein) DNA repair proteins in human cells. The second is the dosimetry of irradiation of a multicellular organism to study genetic instability in a living organism during development (C. elegans). I simulated the distribution of the energy deposited in different compartments of a realistic 3D model of a C. elegans embryo following proton irradiations. Finally, and in parallel with these two studies, I developed a protocol to characterize the AIFIRA microbeam using fluorescent nuclear track detectors (FNTD) for proton and alpha particle irradiations. This type of detector makes it possible to visualize incident particle tracks in 3D with a resolution of about 200 nm and to examine the quality of the cellular irradiations carried out by the CPM.
Miryusupov, Shohruh. "Particle methods in finance." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E069.
The thesis introduces simulation techniques based on particle methods, and it consists of two parts, namely rare event simulation and a homotopy transport for stochastic volatility model estimation. Particle methods, which generalize hidden Markov models, are widely used in different fields such as signal processing, biology, rare event estimation, and finance. A number of approaches based on Monte Carlo methods allow a target density to be approximated, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). We apply SMC algorithms to estimate default probabilities in an intensity process based on a stable process, in order to compute a credit value adjustment (CVA) with wrong way risk (WWR). We propose a novel approach to estimate rare events, based on the generation of Markov chains by simulating a Hamiltonian system. We demonstrate the properties that allow us to obtain an ergodic Markov chain, and show the performance of our approach on an example encountered in option pricing. In the second part, we aim at numerically estimating a stochastic volatility model, and consider it in the context of a transportation problem, where we would like to find "an optimal transport map" that pushes forward the measure. In a filtering context, we understand it as the transportation of particles from a prior to a posterior distribution in pseudotime. We also propose reweighting the transported particles, so that we can steer them towards the region where particles with high weights are concentrated. We show the application of our method on an option pricing example with the Stein-Stein stochastic volatility model and illustrate the bias and variance.
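As a minimal illustration of how sequential Monte Carlo attacks a rare event, the sketch below uses fixed-level splitting to estimate the probability that a Gaussian random walk climbs above a barrier before returning below zero. The walk, the levels, and the population size are illustrative assumptions, not the thesis's credit-risk model.

```python
import numpy as np

rng = np.random.default_rng(1)

def advance(x, level):
    """Run the walk from x until it reaches `level` or falls below 0."""
    while 0.0 <= x < level:
        x += rng.normal(0.0, 0.25)
    return x

def splitting_estimate(levels=(1.0, 2.0, 3.0, 4.0), n=5_000):
    prob, xs = 1.0, np.zeros(n)
    for level in levels:
        xs = np.array([advance(x, level) for x in xs])
        hit = xs >= level
        if not hit.any():
            return 0.0
        prob *= hit.mean()                 # conditional level-crossing probability
        xs = rng.choice(xs[hit], size=n)   # resample the survivors
    return prob

print(splitting_estimate())   # crude Monte Carlo would almost never see this event
```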
Persing, Adam. "Some contributions to particle Markov chain Monte Carlo algorithms." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23277.
Lyberatos, Andreas. "Monte Carlo simulations of interaction effects in fine particle ferromagnets." Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38088.
Nowak, Michel. "Accelerating Monte Carlo particle transport with adaptively generated importance maps." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS403/document.
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events and needs to be tackled with variance reduction methods. These methods require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France. The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated with the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reintegration of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the use of the current adjoint score. Finally, the smoothing of the importance map with machine learning algorithms concludes this work, with a special focus on kernel density estimators.
Norris, Michael K. "Incorporating Histograms of Oriented Gradients into Monte Carlo Localization." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1629.
Full textHorelik, Nicholas E. (Nicholas Edward). "Domain decomposition for Monte Carlo particle transport simulations of nuclear reactors." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97859.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 151-158).
Monte Carlo (MC) neutral particle transport methods have long been considered the gold standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards higher-fidelity nuclear reactor analyses, the method has become competitive with traditional deterministic transport algorithms for the same level of accuracy, especially considering the inherent parallelism of the method and the ever-increasing concurrency of modern high performance computers. Yet before such analysis can be practical, several algorithmic challenges must be addressed, particularly with regard to the memory requirements of the method. In this thesis, a robust domain decomposition algorithm is proposed to alleviate this, along with models and analysis to support its use for full-scale reactor analysis. Algorithms were implemented in the full-physics Monte Carlo code OpenMC, and tested for a highly-detailed PWR benchmark: BEAVRS. The proposed domain decomposition implementation incorporates efficient algorithms for scalable inter-domain particle communication in a manner that is reproducible with any pseudo-random number seed. Algorithms are also proposed to scalably manage material and tally data with on-the-fly allocation during simulation, along with numerous optimizations required for scalability as the domain mesh is refined and divided among thousands of compute processes. The algorithms were tested on two supercomputers, namely the Mira Blue Gene/Q and the Titan XK7, demonstrating good performance with realistic tallies and materials requiring over a terabyte of aggregate memory. Performance models were also developed to more accurately predict the network and load imbalance penalties that arise from communicating particles between distributed compute nodes tracking different spatial domains. These were evaluated using machine properties and tallied particle movement characteristics, and empirically validated with observed timing results from the new implementation. Network penalties were shown to be almost negligible with per-process particle counts as low as 1000, and load imbalance penalties higher than a factor of four were not observed or predicted for the finer domain meshes relevant to reactor analysis. Load balancing strategies were also explored, and intra-domain replication was shown to be very effective at improving parallel efficiencies without adding significant complexity to the algorithm or burden to the user. Performance of the strategy was quantified with a performance model, and shown to agree well with observed timings. Imbalances were shown to be almost completely removed for the finest domain meshes. Finally, full-core studies were carried out to demonstrate the efficacy of domain-decomposed Monte Carlo in tackling the full scope of the problem. A detailed mesh required for a robust depletion treatment was used, and good performance was demonstrated for depletion tallies with 206 nuclides. The largest runs scored six reaction rates for each nuclide in 51M regions for a total aggregate memory requirement of 1.4TB, and particle tracking rates were consistent with those observed for smaller non-domain-decomposed runs with equivalent tally complexity. These types of runs were previously not achievable with traditional Monte Carlo methods, and can be accomplished with domain decomposition with between 1.4x and 1.75x overhead with simple load balancing.
by Nicholas Edward Horelik.
Ph. D.
Romano, Paul K. (Paul Kollath). "Parallel algorithms for Monte Carlo particle transport simulation on exascale computing architectures." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/80415.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 191-199).
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
by Paul Kollath Romano.
Ph.D.
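The O(√N) character of the nearest-neighbor fission bank can be made concrete with a serial toy model (not the thesis's MPI implementation): each rank produces a random number of fission sites, ranks own contiguous slices of the concatenated bank, and the sites a rank must ship to a neighbour are proxied by the mismatch between its current and ideal slice boundaries. All counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_neighbour_traffic(n_ranks=64, sites_per_rank=10_000, trials=50):
    traffic = []
    for _ in range(trials):
        produced = rng.poisson(sites_per_rank, n_ranks)
        ends = np.cumsum(produced)                      # current slice boundaries
        ideal = ends[-1] * np.arange(1, n_ranks + 1) / n_ranks
        traffic.append(np.abs(ends - ideal).mean())     # sites crossing a boundary
    return np.mean(traffic)

print(mean_neighbour_traffic(sites_per_rank=10_000))
print(mean_neighbour_traffic(sites_per_rank=40_000))    # roughly 2x, not 4x: sqrt scaling
```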
Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for Monte Carlo Markov chain simulation. This Ph.D. work implements this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated, among other applications, to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so they fall within the scope of rare event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations. For example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
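For readers unfamiliar with AMS, the sketch below shows its skeleton on a toy rare event, the probability that a short random walk exceeds a level: the lowest-scoring particle is repeatedly killed and replaced by branching a surviving path at its first crossing of the killing level, and each iteration multiplies the estimator by (1 - 1/N). TRIPOLI-4's version scores transport histories; the walk and all sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def walk(start=0.0, steps=200):
    return start + np.cumsum(rng.normal(0.0, 0.1, steps))

def ams(n=100, target=3.0, steps=200):
    paths = [walk(steps=steps) for _ in range(n)]
    scores = np.array([p.max() for p in paths])
    estimate = 1.0
    while scores.min() < target:
        worst = int(scores.argmin())
        level = scores[worst]
        donors = np.flatnonzero(scores > level)
        if donors.size == 0:
            return 0.0
        estimate *= 1.0 - 1.0 / n               # one particle killed this round
        d = paths[int(rng.choice(donors))]
        split = int(np.argmax(d > level))       # first crossing of the level
        tail = walk(start=d[split], steps=steps - split - 1)
        paths[worst] = np.concatenate([d[:split + 1], tail])
        scores[worst] = paths[worst].max()
    return estimate                             # estimate of P(max > target)

print(ams())
```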
Arnold, Andrea. "Sequential Monte Carlo Parameter Estimation for Differential Equations." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396617699.
Full textDahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing guidance and support for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these and many others make statistical models important for many parts of society. One way of building statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model, or access to only a small amount of historical data from which to build it. A drawback of Bayesian statistics is that the computations required to update the model with new information are often very complicated. In such situations, one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to obtain a final model. It can therefore sometimes take days or weeks to build a model. The problem becomes particularly severe when more advanced models are used that could give better forecasts but take too long to build. In this thesis, we use a number of strategies to facilitate or improve these simulations. We propose, for example, taking more insights about the system into account and thereby reducing the number of model variants that need to be examined. We can thus rule out certain models from the start, since we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models. In this way, the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations. We show that the computation time can in some cases be reduced from days to about an hour. Hopefully this will in the future make it possible to use more advanced models in practice, which in turn will lead to better forecasts and decisions.
Kundu, Ashoke. "Monte Carlo simulation of gas-filled radiation detectors." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/987/.
Full textTaghavi, Ehsan, Fredrik Lindsten, Lennart Svensson, and Thomas B. Schön. "Adaptive stopping for fast particle smoothing." Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93461.
Peucelle, Cécile. "Spatial fractionation of the dose in charged particle therapy." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS363/document.
Despite recent breakthroughs, radiotherapy (RT) treatments remain unsatisfactory: the tolerance of normal tissues to radiation still limits the possibility of delivering high (potentially curative) doses to the tumour. To overcome these difficulties, new RT approaches using distinct dose delivery methods are being explored. Among them, the synchrotron minibeam radiation therapy (MBRT) technique has been shown to lead to a remarkable normal tissue resistance to very high doses, and a significant tumour growth delay. MBRT combines sub-millimetric beams with a spatial fractionation of the dose. Adding the more selective energy deposition of charged particles (and their biological selectivity) to the well-established normal tissue sparing of MBRT could lead to a further gain in normal tissue sparing. This innovative strategy was explored in this Ph.D. thesis. In particular, two new avenues were studied: proton MBRT (pMBRT) and very heavy ion MBRT. First, the experimental proof of concept of pMBRT was performed at a clinical facility (Institut Curie, Orsay, France). In addition, the pMBRT setup and minibeam generation were optimised by means of Monte Carlo (MC) simulations. In the second part of this work, a potential renewed use of very heavy ions (neon and heavier) for therapy was evaluated in an MC study. Combining such ions with a spatial fractionation could make it possible to profit from their high efficiency in the treatment of hypoxic radioresistant tumours, one of the main challenges in RT, while minimising their side effects. The promising results obtained in this thesis support further exploration of these two novel avenues. The dosimetry knowledge acquired will serve to guide the biological experiments.
Vale, Rodrigo Telles da Silva. "Localização de Monte Carlo aplicada a robôs submarinos." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-26082015-162614/.
The task of navigating a Remotely Operated underwater Vehicle (ROV) during inspection of man-made structures is performed mostly by visual references and occasionally a magnetic compass. Yet some environments present a combination of low visibility and ferromagnetic anomalies that defeats this approach. This work, motivated by the development of a ROV designed to operate in such environments, proposes a navigation method for this kind of vehicle. As the modeling of the system is nonlinear, the proposed method uses a particle filter to represent the vehicle state, a nonparametric implementation of the Bayes filter. This method needs a priori knowledge of the environment map, and a 2D imaging sonar is used to make the data association with this map. The drawback of the sensor fusion used in this work is its high computational cost, which generally prevents it from being used in real-time applications. To make it possible to use this filter in real time, this work proposes a parallel implementation using a graphics processing unit (GPU) from NVIDIA and the CUDA architecture. A study of two types of sensor configuration for the proposed navigation system is also presented.
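The particle filter at the heart of Monte Carlo localization follows a predict-weight-resample cycle. A bare-bones 1-D version is sketched below; the motion model, the wall-range measurement, and all noise levels are toy assumptions standing in for the thesis's sonar-based models.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcl_step(particles, weights, control, z, wall=10.0, sigma=0.5):
    # 1) predict: push every particle through the noisy motion model
    particles = particles + control + rng.normal(0.0, 0.1, particles.size)
    # 2) update: reweight by the likelihood of the range measurement z
    expected = wall - particles               # predicted distance to the wall
    weights = weights * np.exp(-0.5 * ((z - expected) / sigma) ** 2)
    weights = weights / weights.sum()
    # 3) resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
        keep = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[keep]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles = rng.uniform(0.0, 10.0, 5_000)     # unknown starting position
weights = np.full(particles.size, 1.0 / particles.size)
particles, weights = mcl_step(particles, weights, control=0.5, z=7.2)
```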
Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208999423010-50333.
Stedman, Mark Laurence. "Ground state and finite-temperature quantum Monte Carlo simulations of many particle systems." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391825.
Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23900.
Full textKim, Eugene. "The measurement and modeling of large particle transport in the atmosphere /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/10188.
Full textGebart, Joakim. "GPU Implementation of the Particle Filter." Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94190.
Full textLindsten, Fredrik. "Particle filters and Markov chains for learning of dynamical systems." Doctoral thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97692.
Ahlberg, Sebastian. "A Monte Carlo study of the particle mobility in crowded nearly one-dimensional systems." Thesis, Umeå universitet, Institutionen för fysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-92769.
How "crowding" (e.g., molecular crowding) affects diffusion processes is important in many different scientific fields. Current research ranges from purely theoretical calculations to experiments in which the motion of individual proteins in a cell can be followed. Even though the subject is important and well studied, there are still many aspects that are not fully understood. This thesis describes a Monte Carlo method (the Gillespie algorithm) for studying how crowding affects a particle diffusing in a nearly one-dimensional system. It is nearly one-dimensional in the sense that the particles diffuse on a lattice but can unbind from it and rebind at a later stage. Different ways in which the particles rebind to the lattice are investigated: rebinding at the unbinding site, and rebinding at a randomly chosen site. The focus is on explaining how these affect the mobility (diffusion constant) of a tracer particle. The result is a graph showing the tracer particle's diffusion constant as a function of the unbinding rate for different rebinding strategies and particle densities. We also give analytical results in the limits of high and low unbinding rates, which agree well with the simulations.
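A minimal version of the simulation loop, assuming a single tagged particle (crowding omitted) on a periodic lattice, might look as follows; it shows the two rebinding strategies compared in the thesis. All rates and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def gillespie(t_end=1_000.0, hop=1.0, off=0.1, L=100, random_rebind=True):
    t, x = 0.0, 0
    total = 2.0 * hop + off                  # constant total event rate
    while t < t_end:
        t += rng.exponential(1.0 / total)    # exponential waiting time
        u = rng.random() * total
        if u < hop:
            x = (x - 1) % L                  # hop left
        elif u < 2.0 * hop:
            x = (x + 1) % L                  # hop right
        elif random_rebind:
            x = int(rng.integers(L))         # unbind, rebind at a random site
        # else: unbind and rebind at the same site (position unchanged)
    return x

final_positions = [gillespie() for _ in range(100)]
```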
REIS, T. M. "Bases gaussianas geradas com os métodos Monte Carlo Simulated Annealing e Particle Swarm Optimization." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/7378.
The Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were used to generate adapted Gaussian basis sets for the atoms from H to Ar in the ground state. A study of the efficiency and reliability of each method was carried out. To analyze the reliability of the proposed methods, a specific study was performed on a test set of 15 atoms, namely: N, Mg, Al, Cl, Ti, Ni, Br, Sr, Ru, Pd, Sb, Cs, Ir, Tl, At. Initially, the Improved Generator Coordinate Hartree-Fock method was applied to generate adapted basis sets used as a starting point for the generation of new Gaussian basis sets. Subsequently, the Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were developed in parallel studies, following the same procedure, so that they could be compared at the end of the study. Prior to the effective application of the developed methods, both were calibrated in order to define the best parameters for the algorithms; studies were made of cooling schedules (for the Monte Carlo Simulated Annealing method), of the number of particles in the swarm (for the Particle Swarm Optimization method), and of the total number of steps of the algorithms. After this calibration stage, the two methods were applied, together with the variational principle, to the Hartree-Fock wave function to obtain fully optimized Gaussian basis sets. The basis sets were then contracted with a view to the smallest observed loss of energy, favoring the contraction of the innermost exponents. The last two stages of the basis-generation procedure were the inclusion of polarization functions and diffuse functions, respectively. These steps were carried out using the methods developed in this work through calculations at the MP2 level. The basis sets generated in this work were used in practical calculations on atomic and molecular systems, and the results were compared with those obtained from relevant similar basis sets in the literature. We found that, at the same level of computational efficiency, there is a small difference in effectiveness between the Monte Carlo Simulated Annealing and Particle Swarm Optimization methods, with Monte Carlo Simulated Annealing giving slightly better results for the calculations performed. Comparing the results obtained in this work with the corresponding ones found in the literature, we observe numerically comparable values for the properties studied; however, the methods proposed in this work are significantly more efficient, making it possible to establish a single set of algorithm steps for different atomic systems. Furthermore, we found that the specific optimization step proposed in this work is effective at locating the global minimum of the atomic functions at the HF level of theory. More detailed studies are needed to establish the real relationship underlying the effectiveness observed for the two methods proposed in this work. The Particle Swarm Optimization method has a number of parameters whose influence was not checked in this work.
The fact that the methods developed in this work were built on double-zeta basis sets does not restrict their generality, so these methods are readily applicable to the development of Gaussian basis sets in the atomic environment for basis sets of varied quality.
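The Metropolis-style skeleton of the simulated annealing search over exponents is easy to sketch. The objective below is a stand-in surrogate, not a Hartree-Fock code; only the accept/reject rule and the geometric cooling schedule carry over to the thesis's setting.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(exponents):
    # toy surrogate objective: rewards exponents spread geometrically
    z = np.sort(np.log(exponents))
    return float(np.sum((z - np.linspace(-1.0, 3.0, z.size)) ** 2))

def anneal(x0, t0=1.0, cooling=0.999, steps=20_000):
    x, e = x0.copy(), energy(x0)
    best_x, best_e, t = x.copy(), e, t0
    for _ in range(steps):
        trial = x * np.exp(rng.normal(0.0, 0.05, x.size))   # log-space move
        e_trial = energy(trial)
        # Metropolis acceptance: always downhill, sometimes uphill
        if e_trial < e or rng.random() < np.exp((e - e_trial) / t):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= cooling                                        # geometric cooling
    return best_x, best_e

exponents, value = anneal(rng.uniform(0.1, 10.0, 8))
```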
Larmier, Coline. "Stochastic particle transport in disordered media : beyond the Boltzmann equation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS388/document.
Heterogeneous and disordered media emerge in several applications in nuclear science and engineering, especially in relation to neutron and photon propagation. Examples are widespread and concern, for instance, the double heterogeneity of the fuel elements in pebble-bed reactors, or the assessment of re-criticality probability due to the random arrangement of fuel resulting from severe accidents. In this Thesis, we will investigate linear particle transport in random media. In the first part, we will focus on some mathematical models that can be used for the description of random media. Special emphasis will be given to stochastic tessellations, where a domain is partitioned into convex polyhedra by sampling random hyperplanes according to a given probability. Stochastic inclusions of spheres into a matrix will also be briefly introduced. A computer code will be developed in order to explicitly construct such geometries by Monte Carlo methods. In the second part, we will then assess the general features of particle transport within random media. For this purpose, we will consider some benchmark problems that are simple enough to allow for a thorough understanding of the effects of the random geometries on particle trajectories and yet retain the key properties of linear transport. Transport calculations will be realized by using the Monte Carlo particle transport code Tripoli4, developed at SERMA. The cases of quenched and annealed disorder models will be separately considered. In the former, an ensemble of geometries will be generated by using our computer code, and the transport problem will be solved for each configuration; ensemble averages will then be taken for the observables of interest. In the latter, effective transport models capable of reproducing the effects of disorder in a single realization will be investigated. The approximations of the annealed disorder models will be elucidated, and significant improvements will be proposed.
Dahlgren, David. "Monte Carlo simulations of Linear Energy Transfer distributions in radiation therapy." Thesis, Uppsala universitet, Högenergifysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446550.
Full textHosseini, Navid. "Geant4 Based Monte Carlo Simulation For Carbon Fragmentation In Nuclear Emulsion." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614479/index.pdf.
Full textZierenberg, Johannes. "From Particle Condensation to Polymer Aggregation: Phase Transitions and Structural Phases in Mesoscopic Systems." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-197255.
Full textBöhlen, Till Tobias. "Monte Carlo particle transport codes for ion beam therapy treatment planning : Validation, development and applications." Doctoral thesis, Stockholms universitet, Fysikum, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-81111.
Full textAt the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 5: Submitted. Paper 6: Manuscript.
Oertli, David Bernhardt. "Proton dose assessment to the human eye using Monte Carlo n-particle transport code (MCNPX)." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1024.
Full textTran, Binh Phuoc. "Modeling of Ion Thruster Discharge Chamber Using 3D Particle-In-Cell Monte-Carlo-Collision Method." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33510.
Full textMaster of Science
Papež, Milan. "Monte Carlo identifikační strategie pro stavové modely." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400416.
Full textTimmins, Benjamin H. "Automatic Particle Image Velocimetry Uncertainty Quantification." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/884.
Full textGillespie, Timothy James. "A computer model of beta particle dose distributions in lithium fluoride and tissue." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/16952.
Full textYang, Chao. "ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.
Full textLee, Anthony. "Towards smooth particle filters for likelihood estimation with multivariate latent variables." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1547.
Full textRosencranz, Daniela Necsoiu. "Monte Carlo simulation and experimental studies of the production of neutron-rich medical isotopes using a particle accelerator." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3077/.
Full textLandon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared, with good agreement, to simple analytical solutions, and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored, and our final simulation method, (VR)2-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
by Colin Donald Landon.
S.M.
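The central control-variate idea, estimating a low-signal quantity as the difference between a non-equilibrium sample and a correlated equilibrium sample, can be demonstrated without any collision mechanics. In the sketch below the two "simulations" are reduced to correlated draws from a Maxwellian with and without a tiny drift u; the likelihood weights and BGK relaxation of (VR)2-BGK-DSMC are omitted, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

u, n = 1e-3, 200_000
v_eq = rng.normal(0.0, 1.0, n)          # equilibrium velocities
v_ne = v_eq + u                         # correlated non-equilibrium velocities

# heat-flux-like moment: E[v^3] = 3u + u^3 for the shifted Maxwellian
naive = (v_ne ** 3).mean()
reduced = (v_ne ** 3 - v_eq ** 3).mean()        # equilibrium part cancels

se = lambda s: s.std() / np.sqrt(n)             # standard error of the mean
print(f"naive:   {naive:+.2e} +/- {se(v_ne ** 3):.1e}")
print(f"reduced: {reduced:+.2e} +/- {se(v_ne ** 3 - v_eq ** 3):.1e}")
```

The correlated difference carries orders of magnitude less variance than the naive estimate, which is exactly the low-signal regime the thesis targets.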
Solomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.
Department of Mechanical and Nuclear Engineering
J. Kenneth Shultis
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for the optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D 3-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
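The weight-window game being optimized is itself simple: particles that arrive above the window are split, particles below it play Russian roulette. A hedged sketch follows, with illustrative (unoptimized) bounds; both branches preserve the expected weight, which is what keeps the technique unbiased.

```python
import numpy as np

rng = np.random.default_rng(8)

def apply_weight_window(weight, w_low=0.5, w_up=2.0):
    survival = 0.5 * (w_low + w_up)
    if weight > w_up:                        # split into m lighter copies
        m = int(np.ceil(weight / w_up))
        return [weight / m] * m
    if weight < w_low:                       # roulette: kill or promote
        if rng.random() < weight / survival:
            return [survival]
        return []
    return [weight]                          # inside the window: unchanged

print(apply_weight_window(5.3))   # e.g. three copies of ~1.77
print(apply_weight_window(0.1))   # usually [], occasionally [1.25]
```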
Hol, Jeroen D. "Resampling in particle filters." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2366.
In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations, the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and in computational complexity.
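A minimal implementation of the favoured scheme, systematic resampling, is shown below: a single uniform offset stratifies [0, 1) into N slots, which is what gives the method low resampling noise at O(N) cost. The function name and the guard against floating-point rounding are this sketch's own choices.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    positions = (rng.random() + np.arange(w.size)) / w.size
    cumulative = np.cumsum(w)
    cumulative[-1] = 1.0                 # guard against rounding
    return np.searchsorted(cumulative, positions)

idx = systematic_resample([0.1, 0.1, 0.6, 0.2])   # indices of particles to keep
```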
Nes, Elena. "Derivation of photon energy spectra from transmission measurements using large fields: a dissertation." San Antonio: UTHSC, 2006. http://proquest.umi.com/pqdweb?did=1324388751&sid=1&Fmt=2&clientId=70986&RQT=309&VName=PQD.
Full textSchmidt, Daniel. "Kinetic Monte Carlo Methods for Computing First Capture Time Distributions in Models of Diffusive Absorption." Scholarship @ Claremont, 2017. https://scholarship.claremont.edu/hmc_theses/97.
Full textRousset, Mathias. "Méthodes de "Population Monte-Carlo'' en temps continu est physique numérique." Toulouse 3, 2006. http://www.theses.fr/2006TOU30251.
In this dissertation, we focus on stochastic numerical methods of Population Monte-Carlo type in the continuous time setting. These PMC methods resort to the sequential computation of averages of weighted Markovian paths. The practical implementation then relies on the time evolution of the empirical distribution of a system of N interacting walkers. We prove the long time convergence (towards Schrödinger groundstates) of the variance and bias of this method with the expected 1/N rate. Next, we consider the problem of sequentially sampling a continuous flow of Boltzmann measures. For this purpose, starting with any Markovian dynamics, we associate a second dynamics in reversed time whose law (weighted by a computable Feynman-Kac path average) yields the original dynamics as well as the target Boltzmann measure. Finally, we generalize the latter problem to the case where the dynamics is driven by evolving rigid constraints on the positions of the process. We compute the associated weights exactly, which involves the local curvature of the manifold defined by the constraints.
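A toy diffusion Monte Carlo version of such weighted-walker systems is sketched below for the harmonic oscillator, whose ground-state energy of 1/2 the walkers should recover. The time step, feedback gain, and population size are illustrative choices, not the thesis's scheme.

```python
import numpy as np

rng = np.random.default_rng(9)

def dmc(n=2_000, dt=0.01, steps=4_000):
    x = rng.normal(0.0, 1.0, n)                      # initial walker positions
    e_ref, trace = 0.0, []
    for step in range(steps):
        x = x + rng.normal(0.0, np.sqrt(dt), n)      # free diffusion move
        w = np.exp(-dt * (0.5 * x ** 2 - e_ref))     # Feynman-Kac weights, V(x)=x^2/2
        e_ref -= 0.1 * np.log(w.mean()) / dt         # population-control feedback
        x = x[rng.choice(n, n, p=w / w.sum())]       # resample walkers by weight
        if step > steps // 2:
            trace.append(e_ref)
    return np.mean(trace)

print(dmc())   # should fluctuate around the exact ground-state energy 0.5
```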
Lindsten, Fredrik, Pete Bunch, Simon J. Godsill, and Thomas B. Schön. "Rao-Blackwellized particle smoothers for mixed linear/nonlinear state-space models." Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93460.
Barghouthi, Imad Ahmad. "A Monte Carlo Simulation of Coulomb Collisions and Wave-Particle Interactions in Space Plasma at High Latitudes." DigitalCommons@USU, 1994. https://digitalcommons.usu.edu/etd/2272.
Full textAl-Saadony, Muhannad. "Bayesian stochastic differential equation modelling with application to finance." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1530.
Full textZheng, Dongqin. "Evaluation and development of data assimilation in atmospheric dispersion models for use in nuclear emergencies." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B39346031.
Full textZheng, Dongqin, and 鄭冬琴. "Evaluation and development of data assimilation in atmospheric dispersion models for use in nuclear emergencies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39346031.
Full textParham, Jonathan Brent. "Physically consistent boundary conditions for free-molecular satellite aerodynamics." Thesis, Boston University, 2014. https://hdl.handle.net/2144/21230.
To determine satellite trajectories in low earth orbit, engineers need to adequately estimate aerodynamic forces. But to this day, such a task suffers from inexact values of the drag forces acting on the complicated shapes that form modern spacecraft. While some of the complications arise from the uncertainty in the upper atmosphere, this work focuses on the problems in modeling the flow interaction with the satellite geometry. The only numerical approach that accurately captures the effects in this flow regime, like self-shadowing and multiple molecular reflections, is known as Test Particle Monte Carlo. This method executes a ray-tracing algorithm to follow particles that pass through a control volume containing the spacecraft and accumulates the momentum transfer to the body surfaces. Statistical fluctuations inherent in the approach demand particle numbers on the order of millions, often making this scheme too costly to be practical. This work presents a parallel Test Particle Monte Carlo method that takes advantage of both graphics processing units and multi-core central processing units. The speed at which this model can run with millions of particles enabled the exploration of regimes where a flaw was revealed in the model's initial particle seeding. A new model introduces an analytical fix to this flaw, consisting of initial position distributions at the boundary of a spherical control volume and an integral for the correct number flux, which is used to seed the calculation. This thesis includes validation of the proposed model using analytical solutions for several simple geometries and demonstrates uses of the method for the aero-stabilization of the Phobos-Grunt Martian probe and pose estimation for the ICESat mission.
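The corrected seeding can be sketched for the simplest case of an isotropic incoming flux: positions uniform on the spherical control volume, inward directions cosine-weighted about the local normal. The drifting-Maxwellian case actually derived in the thesis (including the number-flux integral) is omitted here, and the function name is this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(10)

def seed_particles(n, radius=1.0):
    # uniform positions on the sphere surface
    p = rng.normal(size=(n, 3))
    p *= radius / np.linalg.norm(p, axis=1, keepdims=True)
    inward = -p / radius
    # cosine-weighted polar angle about the inward normal: cos(t) = sqrt(U)
    u, phi = rng.random(n), 2.0 * np.pi * rng.random(n)
    ct, st = np.sqrt(u), np.sqrt(1.0 - u)
    # local tangent frame (t1, t2, inward) at each seed point
    helper = np.where(np.abs(inward[:, :1]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    t1 = np.cross(inward, helper)
    t1 /= np.linalg.norm(t1, axis=1, keepdims=True)
    t2 = np.cross(inward, t1)
    d = ((st * np.cos(phi))[:, None] * t1
         + (st * np.sin(phi))[:, None] * t2
         + ct[:, None] * inward)
    return p, d

positions, directions = seed_particles(1_000)
```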