Dissertations / Theses on the topic 'Monte Carlo event generators'
Nail, Graeme. "Quantum chromodynamics : simulation in Monte Carlo event generators." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/quantum-chromodynamics-simulation-in-monte-carlo-event-generators(46dc6f2e-1552-4dfa-b435-9608932a3261).html.
Schälicke, Andreas. "Event generation at hadron colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1122466458074-11492.
This work deals with the accurate simulation of high-energy hadron-hadron collision experiments, as they are currently performed at the Fermilab Tevatron or as they are expected at the Large Hadron Collider at CERN. For a precise description of these experiments, an algorithm is investigated which enables the inclusion of exact multi-jet matrix elements in the simulation. The implementation of this algorithm in the event generator "SHERPA" and the extension of its parton shower are the main topics of this work. The results are compared with those of other simulation programs and with experimental data.
Siódmok, Andrzej. "Theoretical predictions for the Drell-Yan process through a Monte Carlo event generator." Paris 6, 2010. http://www.theses.fr/2010PA066666.
This work concerns the study of the Drell-Yan (DY) process in hadronic collisions, a very important process for the experimental program at the Large Hadron Collider (LHC). In the first chapter we present the calculation of multiphoton radiation effects in leptonic Z-boson decays in the framework of the Yennie-Frautschi-Suura exclusive exponentiation. This calculation is implemented in the ZINHAC program, written in C++, which is a dedicated Monte Carlo event generator for the precision description of the neutral-current DY process, i.e. Z/gamma production with leptonic decays in hadronic collisions. In the second chapter we concentrate on the QCD corrections to the transverse momentum (pT) spectrum of vector bosons in DY processes. We present a new model of non-perturbative gluon emission in an initial-state parton shower. This model gives a good description of the pT spectrum of the Z boson for data taken in previous experiments over a wide range of CM energies. The model's prediction for the pT distribution of Z bosons at the LHC is also presented and compared with other approaches. In the third chapter we focus our attention on the measurement of the W-boson mass (MW). The result of this investigation shows that several important sources of error have been neglected in all previous analyses performed by the LHC experimental collaborations. For the first time, the precision of MW is evaluated in the presence of these effects. This evaluation shows that in order to reach the desired precision of MW at the LHC, novel measurement strategies must be developed; we provide two examples of such strategies.
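To make the generator's basic sampling step concrete, here is a minimal, illustrative sketch — not ZINHAC's actual algorithm — that draws dilepton invariant masses from a truncated (non-relativistic) Breit-Wigner line shape around the Z pole by inverse-CDF sampling. Only the PDG pole mass and width are real inputs; the mass window and sample size are invented for illustration.

```python
import math
import random

MZ, GZ = 91.1876, 2.4952  # PDG Z boson mass and width, in GeV

def sample_breit_wigner(n, lo=66.0, hi=116.0):
    """Sample n masses in [lo, hi] from a (non-relativistic) Breit-Wigner."""
    # CDF of the Cauchy/Breit-Wigner line shape restricted to [lo, hi]
    a = math.atan(2.0 * (lo - MZ) / GZ)
    b = math.atan(2.0 * (hi - MZ) / GZ)
    samples = []
    for _ in range(n):
        u = random.uniform(a, b)          # uniform in the transformed variable
        m = MZ + 0.5 * GZ * math.tan(u)   # invert the truncated CDF
        samples.append(m)
    return samples

masses = sample_breit_wigner(100000)
frac_peak = sum(abs(m - MZ) < GZ for m in masses) / len(masses)
print(f"fraction within one width of the pole: {frac_peak:.3f}")
```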
Kuhn, Ralf. "Event generation at lepton colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2002. http://nbn-resolving.de/urn:nbn:de:swb:14-1033454996140-50042.
The Monte Carlo simulation package APACIC++/AMEGIC++ is able to describe electron-positron annihilation experiments such as those performed at LEP at CERN and those planned for a future linear collider, e.g. TESLA at DESY. APACIC++ is responsible for the generation of the complete event, while AMEGIC++ is a dedicated matrix-element generator. The development of both programs was the main topic of this dissertation.
Winter, Jan-Christopher. "QCD jet evolution at high and low scales." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208912443778-27732.
Winter, Jan-Christopher. "QCD jet evolution at high and low scales." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23602.
Popov, Dmitry [Verfasser], Michael [Akademischer Betreuer] Schmelling, and Werner [Gutachter] Hofmann. "Development of the Monte Carlo event generator tuning software package Lagrange and its application to tune the PYTHIA model to the LHCb data / Dmitry Popov. Betreuer: Michael Schmelling. Gutachter: Werner Hofmann." Dortmund : Universitätsbibliothek Dortmund, 2014. http://d-nb.info/1107051576/34.
Ramnath, Andrecia. "Exclusive J/Ψ Vector-Meson production in high-energy nuclear collisions: a cross-section determination in the Colour Glass Condensate effective field theory and a feasibility study using the STARlight Monte Carlo event generator." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/9214.
The cross-section calculation for exclusive J/Ψ vector-meson production in ultra-peripheral heavy-ion collisions is approached in two ways. First, the setup for a theoretical calculation is done in the context of the Colour Glass Condensate effective field theory. Rapidity-averaged n-point correlators are used to describe the strong-interaction part of this process. The JIMWLK equation can be used to predict the energy evolution of a correlator. In order to facilitate practical calculations, an approximation scheme must be employed. The Gaussian Truncation is one such method, which approximates correlators in terms of new 2-point functions. This work takes the first step beyond this truncation scheme by considering higher-order n-point functions in the approximation. An expression for the cross-section is written which takes parametrised 2- and 4-point correlators as input. This expression can be used as the basis for a full cross-section calculation. The second part of the thesis is a feasibility study using Monte Carlo simulations done by the STARlight event generator. A prediction is made for how many exclusive J/Ψ vector-mesons are expected to be detected by ATLAS in a data set corresponding to 160 μb−1 total integrated luminosity. It is found that the muon reconstruction efficiency for low-pT muons in ATLAS is too poor to do this analysis effectively. On the order of 150 candidate events are expected from all the Pb-Pb collision data collected in 2011. The feasibility study acts as a preliminary investigation for a full cross-section measurement using ATLAS collision data. Once this is completed, it can be compared with the theoretical prediction for the cross-section.
Siegert, Frank. "Monte-Carlo event generation for the LHC." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/484/.
Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.
Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.
Full textQC 20141216
Tolley, Emma Elizabeth. "Monte Carlo event reconstruction implemented with artificial neural networks." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65535.
I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct events. This simulation registers the passage of particles as 'hits' in the detector elements, which can be used to determine event parameters such as momentum and direction. However, these hits are often obscured by noise. Using Geant4 and ROOT, I wrote a program that uses artificial neural networks to separate track hits from noise and reconstruct event parameters. The classification network successfully discriminates between track hits and noise for 97.48% of events. The reconstruction networks determine the various event parameters to within 2-3%.
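The abstract does not spell out the network architecture, so the following stand-in sketch illustrates only the supervised hit-versus-noise separation idea — not the OLYMPUS networks themselves — using a logistic-regression "classifier" on two invented hit features (pulse amplitude and residual distance from a fitted track).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labelled hits: track hits have high amplitude and small residual,
# noise hits the opposite. Both feature distributions are invented.
n = 2000
track = rng.normal(loc=[1.0, 0.2], scale=0.3, size=(n, 2))
noise = rng.normal(loc=[0.3, 1.0], scale=0.3, size=(n, 2))
X = np.vstack([track, noise])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain gradient descent on the logistic log-loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(f"training accuracy: {np.mean(pred == y):.3f}")
```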
Gudmundsson, Thorbjörn. "Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings." Licentiate thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134624.
Docker, Jones Mykalin Ann. "Monte Carlo Simulation of High Energy Neutrino Event Transport in Cerenkov Detectors." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1327.
Robbiano, Vincent P. "Simulations of Organic Solar Cells with an Event-Driven Monte Carlo Algorithm." University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1311812696.
Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208999423010-50333.
Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23900.
Jegourel, Cyrille. "Rare event simulation for statistical model checking." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.
In this thesis, we consider two problems with which statistical model checking must cope. The first concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second concerns rare properties, which are difficult to observe and therefore to quantify. On the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows recourse to the stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present a state-of-the-art algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different: the first tackles the problem through the model, the second through the properties.
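The core Monte Carlo loop behind statistical model checking can be sketched in a few lines. The Chernoff-Hoeffding bound below is the standard way to fix the number of runs for a target precision epsilon and confidence 1-delta; the "system" here is a dummy stand-in, not an SBIP model.

```python
import math
import random

def chernoff_runs(eps, delta):
    # N >= ln(2/delta) / (2 * eps^2) guarantees |estimate - p| <= eps
    # with probability at least 1 - delta (Chernoff-Hoeffding bound).
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def run_once():
    # Placeholder stochastic system: the property holds with unknown
    # probability 0.37; a real checker would simulate a model trace here.
    return random.random() < 0.37

eps, delta = 0.01, 0.05
n = chernoff_runs(eps, delta)   # 18445 runs for (0.01, 0.05)
estimate = sum(run_once() for _ in range(n)) / n
print(f"N = {n}, P(property) ~= {estimate:.4f} +/- {eps} at confidence {1 - delta}")
```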
Regali, Christopher Ralph [Verfasser], and Horst [Akademischer Betreuer] Fischer. "Exclusive event generation for the COMPASS-II experiment at CERN and improvements for the Monte-Carlo chain." Freiburg : Universität, 2016. http://d-nb.info/1122831862/34.
Bedair, Khaled Farag Emam. "Statistical Methods for Multi-type Recurrent Event Data Based on Monte Carlo EM Algorithms and Copula Frailties." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64981.
Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.
In this thesis, the creditworthiness of a company is modelled using a stochastic process. Two credit models are considered: Merton's model, which models the value of a company's assets using geometric Brownian motion, and a distance-to-default model, driven by a two-dimensional stochastic process with both diffusion and jumps. The probability of default and the expected time of default are simulated using Monte Carlo, and the number of scenarios needed for convergence of the simulations is examined. The simulations use the probability matrix method, in which a transition probability matrix describing the process is employed. In addition, two variance reduction techniques are investigated: importance sampling and antithetic variates.
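A minimal sketch of the Merton part of this setting, with invented firm parameters: default occurs when the geometric-Brownian-motion firm value ends below the debt level, and antithetic variates (one of the two techniques the abstract names) pair each Gaussian draw with its negation to reduce variance.

```python
import math
import random

# Invented parameters: initial firm value, debt level, drift, volatility, horizon
V0, D, mu, sigma, T = 100.0, 70.0, 0.05, 0.25, 1.0

def terminal_value(z):
    # Geometric Brownian motion at time T driven by standard normal z
    return V0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)

n_pairs = 100000
hits = 0.0
for _ in range(n_pairs):
    z = random.gauss(0.0, 1.0)
    # Average the default indicator over the antithetic pair (z, -z)
    hits += 0.5 * ((terminal_value(z) < D) + (terminal_value(-z) < D))

print(f"P(default) ~= {hits / n_pairs:.4f}")
```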
Forssman, Niklas. "Monte Carlo simulation study of the e+e- → Lambda Lambda-Bar reaction with the BESIII experiment." Thesis, Uppsala universitet, Kärnfysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297851.
Studying reactions in which electrons and positrons collide and annihilate, so that hadrons can be formed from the energy, is of great help in understanding the Standard Model and its forces, in particular the strong force, which can be studied in such reactions. Precise measurements of hadron-production cross-sections yield the electromagnetic form factors GE and GM, which describe the inner structure of hadrons. Hadrons are composite systems of quarks, and the strong force binds these quarks. During my thesis project I used data from the BESIII detector at BEPC-II (the Beijing Electron-Positron Collider) in China; Uppsala University has several researchers working on the BESIII experiment. The goal was to validate the earlier analysis of the reaction e+e- → Lambda Lambda-Bar at 2.396 GeV. I began with Monte Carlo simulations: the reactions were generated with two different generators, ConExc and PHSP, each used for a different purpose, and the passage of the generated particles through the detector was then simulated, producing data of the same type as obtained from experiment. I analysed these simulated data to find a method that filters out background while retaining the interesting events, using selection criteria developed by Dr. Cui Li. My algorithm gave a total efficiency of 14%, in good agreement with Cui Li's earlier algorithm, which also had an efficiency of 14%. This gives confidence in my algorithm and also strengthens Cui Li's results.
Bradley, Randolph L. (Randolph Lewis). "Evaluating inventory segmentation strategies for aftermarket service parts in heavy industry using linked discrete-event and Monte Carlo simulations." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77459.
Heavy industries operate equipment having a long life to generate revenue or perform a mission. These industries must invest in the specialized service parts needed to maintain their equipment, because unlike in other industries such as automotive, there is often no aftermarket supplier. If parts are not on the shelf when needed, equipment sits idle while replacements are manufactured. Stock levels are often set to achieve an off-the-shelf fill rate goal using commercial inventory optimization tools, while supply chain performance is instead measured against a speed of service metric such as order fulfillment lead time, the time from order placement to customer receipt. When some parts are more important than others, and shipping delays are accounted for, there is ostensibly little correlation between these two metrics and setting stock levels devolves into an inefficient and expensive guessing game. This thesis resolves the disconnect between stock levels and service metrics performance by linking an existing discrete-event simulation of warehouse operations to a new Monte Carlo demand categorization and metrics simulation, predicting tomorrow's supply chain performance from today's logistics data. The insights gained here through evaluating an industry representative dataset apply generally to supply chains for aftermarket service parts. The simulation predicts that the stocking policy recommended by a simple strategy for inventory segmentation for consumable parts will not achieve the desired service metrics. An internal review board that meets monthly, and a quarterly customer acquisition policy, each degrade performance by imposing a periodic review policy on stock levels developed assuming a continuous review policy. This thesis compares the simple strategy to a sophisticated strategy for inventory segmentation, using simulation to demonstrate that with the latter, metrics can be achieved in one year, inventory investment lowered 20%, and buys for parts in low annual usage categories automated.
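As a toy version of the kind of question the linked simulations answer, the sketch below Monte Carlo-estimates the off-the-shelf fill rate of a continuous-review (s, Q) stocking policy under Poisson demand with a fixed lead time. All parameter values are invented; the thesis' discrete-event warehouse model is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)
s, Q, lead_time, lam, days = 5, 10, 14, 0.3, 365 * 20  # invented policy/demand

on_hand = s + Q
orders = []                    # outstanding replenishments: (arrival_day, qty)
filled = demanded = 0
for day in range(days):
    on_hand += sum(q for (d, q) in orders if d == day)  # receive due orders
    orders = [(d, q) for (d, q) in orders if d != day]
    demand = rng.poisson(lam)                           # today's part requests
    served = min(demand, on_hand)                       # fill from the shelf
    filled += served
    demanded += demand
    on_hand -= served
    position = on_hand + sum(q for (_, q) in orders)    # stock plus on-order
    if position <= s:                                   # continuous review
        orders.append((day + lead_time, Q))

print(f"simulated off-the-shelf fill rate: {filled / max(demanded, 1):.3f}")
```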
Uznanski, Slawosz. "Monte-Carlo simulation and contribution to understanding of Single-Event-Upset (SEU) mechanisms in CMOS technologies down to 20nm technological node." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10222/document.
Aggressive integrated circuit density increase and power supply scaling have propelled Single Event Effects to the forefront of reliability concerns in ground-based and space-bound electronic systems. This study focuses on the modeling of Single Event physical phenomena. To enable reliability assessment, a complete simulation platform named Tool suIte for rAdiation Reliability Assessment (TIARA) has been developed that allows sensitivity prediction for different digital circuits (SRAM, Flip-Flops, etc.) in different radiation environments and at different operating conditions (power supply voltage, altitude, etc.). TIARA has been extensively validated with experimental data for space and terrestrial radiation environments using different test vehicles manufactured by STMicroelectronics. Finally, the platform has been used during rad-hard digital circuit design and to provide insights into radiation-induced upset mechanisms down to the CMOS 20nm technological node.
Mirletz, Brian Tietz. "Adaptive Central Pattern Generators for Control of Tensegrity Spines with Many Degrees of Freedom." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1438865567.
Bauer, Pavol. "Parallelism in Event-Based Computations with Applications in Biology." Doctoral thesis, Uppsala universitet, Tillämpad beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332009.
Lavarenne, Jean. "Modelling framework for assessing nuclear regulatory effectiveness." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277145.
Calvez, Steven. "Development of reconstruction tools and sensitivity of the SuperNEMO demonstrator." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS285/document.
SuperNEMO is an experiment looking for the neutrinoless double beta decay in an effort to unveil the Majorana nature of the neutrino. The first module, called the demonstrator, is under construction and commissioning in the Laboratoire Souterrain de Modane. Its unique design combines tracking and calorimetry techniques. The demonstrator can study 7 kg of ⁸²Se, shaped in thin source foils. These source foils are surrounded by a wire chamber, thus allowing a 3-dimensional reconstruction of charged-particle tracks. The individual particle energies are then measured by a segmented calorimeter, composed of plastic scintillators coupled with photomultipliers. A magnetic field can be applied to the tracking volume in order to identify the charge of the particles. SuperNEMO is thus able to perform a full reconstruction of the event kinematics and to identify the nature of the particles involved: electrons, positrons, α particles or γ particles. In practice, the particle and event reconstruction relies on a variety of algorithms, implemented in the dedicated SuperNEMO simulation and reconstruction software. The γ reconstruction is particularly challenging since γ particles do not leave tracks in the wire chamber and are only detected by the calorimeter, sometimes multiple times. Several γ reconstruction approaches were explored during this thesis. This work led to the creation of a new algorithm optimizing the γ reconstruction efficiency and improving the γ energy reconstruction. Other programs allowing the particle identification and performing the topological measurements relevant to an event were also developed. The value of the magnetic field was optimized for the 0νββ decay search, based on Monte Carlo simulations. The magnetic shielding performances and their impact on the shape of the magnetic field were estimated with measurements performed on small-scale magnetic coils. The SuperNEMO demonstrator is able to measure its own background contamination thanks to dedicated analysis channels. At the end of the first 2.5-year data-taking phase, the main background target activities should be measured accurately. The ⁸²Se 2νββ half-life should be known with a 0.3% total uncertainty. Unlike other double beta decay experiments relying solely on the two-electron energy sum, SuperNEMO has access to the full event kinematics and thus to more topological information. A multivariate analysis based on Boosted Decision Trees was shown to guarantee at least a 10% increase of the sensitivity of the 0νββ decay search. After 2.5 years, and if no excess of 0νββ events is observed, the SuperNEMO demonstrator should be able to set a limit on the 0νββ half-life of T > 5.85 × 10²⁴ y, translating into a limit on the effective Majorana neutrino mass mββ < 0.2−0.55 eV. Extrapolating this result to the full-scale SuperNEMO experiment, i.e. 500 kg·y, the sensitivity would be raised to T > 10²⁶ y, or mββ < 40−110 meV.
Estecahandy, Maïder. "Méthodes accélérées de Monte-Carlo pour la simulation d'événements rares. Applications aux Réseaux de Petri." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3008/document.
The dependability analysis of safety instrumented systems is an important industrial concern. To be able to carry out such safety studies, TOTAL has been developing the dependability software GRIF since the eighties. To take into account the increasing complexity of the operating context of its safety equipment, TOTAL is more frequently led to use the engine MOCA-RP of the GRIF Simulation package. Indeed, MOCA-RP allows the estimation of quantities associated with complex aging systems modeled in Petri nets thanks to standard Monte Carlo (MC) simulation. Nevertheless, deriving accurate estimators, such as the system unavailability, on very reliable systems involves rare-event simulation, which requires very long computing times with MC. The common fast Monte Carlo methods do not seem appropriate to address this issue: many of them were originally designed to improve only the estimate of the unreliability and/or are well suited only for Markovian processes. Therefore, the work accomplished in this thesis pertains to the development of acceleration methods adapted to the problem of performing safety studies modeled in Petri nets and estimating in particular the unavailability. More specifically, we propose the extension of the "Méthode de Conditionnement Temporel" to accelerate the individual failure of the components, and we introduce the Dissociation Method as well as the Truncated Fixed Effort Method to increase the occurrence of their simultaneous failures. Then, we combine the first technique with the two other ones, and we also associate them with the Randomized Quasi-Monte Carlo method. Through different sensitivity studies and benchmark experiments, we assess the performance of the acceleration methods and observe a significant improvement of the results compared with MC. Furthermore, we discuss the choice of the confidence interval method to be used when considering rare-event simulation, an unfamiliar topic in the field of dependability. Last, an application to an industrial case illustrates the potential of our solution methodology.
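A quick worked calculation shows why crude Monte Carlo is so slow here: the CLT half-width z·sqrt(p(1-p)/N) must not exceed the target relative error times p, which forces roughly N ≈ z²/(re²·p) independent replications — about 384/p at 10% relative precision and 95% confidence.

```python
import math

def runs_needed(p, rel_err=0.1, z=1.96):
    # Solve z * sqrt(p * (1 - p) / N) <= rel_err * p for N
    return math.ceil(z ** 2 * (1.0 - p) / (rel_err ** 2 * p))

for p in (1e-3, 1e-6, 1e-9):
    print(f"p = {p:.0e}: about {runs_needed(p):.2e} crude MC runs")
```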
Weulersse, Cécile. "Développement et validation d’outils Monte-Carlo pour la prédiction des basculements logiques induits par les radiations dans les mémoires Sram très largement submicroniques." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20221.
Particles from the natural radiation environment can cause malfunctions in electronic systems. In the case of critical applications involving very high reliability, it is crucial to fulfill the requirements of dependability. To ensure this and, if necessary, to adequately design mitigations, it is important to have tools for the sensitivity assessment of electronics towards radiation. The purpose of this work is the development of prediction tools for radiation-induced soft errors, primarily intended for end users. In a first step, nuclear reaction databases were built using the Geant4 toolkit. These databases were then used by a pre-existing Monte Carlo tool whose predictions were compared with experimental results obtained on 90 and 65 nm SRAM devices. Finally, simplified criteria enabled us to propose an engineering tool for the prediction of the proton or neutron sensitivity from heavy-ion data. This method was validated on deep submicron devices and allows the user to estimate multiple events, which are a crucial issue in space, avionic and ground applications.
Walter, Clément. "Using Poisson processes for rare event simulation." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC304/document.
This thesis addresses the issue of extreme event simulation. Starting from an original understanding of splitting methods, a new theoretical framework is proposed, independent of any particular algorithm. This framework is based on a point process associated with any real-valued random variable and lets one define probability, quantile and moment estimators without any hypothesis on this random variable. The artificial selection of thresholds in splitting vanishes, and the estimator of the probability of exceeding a threshold is in fact an estimator of the whole cumulative distribution function up to the given threshold. These estimators are based on the simulation of independent and identically distributed replicas of the point process, so they allow for the use of massively parallel computer clusters. Suitable practical algorithms are thus proposed. Finally, it can happen that these advanced statistics still require too many samples. In this context the computer code is considered as a random process with known distribution. The point process framework makes it possible to handle this additional source of uncertainty and to estimate easily the conditional expectation and variance of the resulting random variable. It also defines new SUR enrichment criteria designed for extreme event probability estimation.
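The point-process idea admits a compact toy implementation, sometimes called the "last particle" algorithm: repeatedly kill the lowest of N particles and resample it conditioned on exceeding its old value, counting the moves M needed before the minimum passes the threshold; the estimator is (1-1/N)^M. The sketch below uses exponential variables, whose memorylessness makes the conditional resampling exact; real applications replace that line with MCMC moves, and all numbers here are chosen purely for illustration.

```python
import math
import random

random.seed(2)
N, q = 200, 10.0     # target P(X > q) = exp(-10) ~ 4.5e-5 for X ~ Exp(1)
particles = [random.expovariate(1.0) for _ in range(N)]
moves = 0
while True:
    m = min(particles)
    if m >= q:
        break
    # Kill the lowest particle and resample it conditioned on exceeding m;
    # for exponentials the memoryless property makes this exact: m + Exp(1).
    particles[particles.index(m)] = m + random.expovariate(1.0)
    moves += 1

p_hat = (1.0 - 1.0 / N) ** moves
print(f"point-process estimate {p_hat:.2e} vs exact {math.exp(-q):.2e}")
```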
Petersson, Hannah. "TWIST : Twelve Bar Intelligently Simulated Tensegrity." Thesis, Luleå tekniska universitet, Rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-72455.
Martinez, Homero. "Measurement of the Z boson differential cross-section in transverse momentum in the electron-positron channel with the ATLAS detector at LHC." PhD thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00952940.
Camacho Valle, Alfredo. "Credit risk modeling in a semi-Markov process environment." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/credit-risk-modeling-in-a-semimarkov-process-environment(ad56ed0b-047f-44df-be68-6accef7544ff).html.
Dewji, Shaheen Azim. "Assessing internal contamination after a radiological dispersion device event using a 2x2-inch sodium-iodide detector." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28092.
Reinhardt, Aleks. "Computer simulation of the homogeneous nucleation of ice." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:9ec0828b-df99-42e1-8694-14786d7578b9.
Bignami, Luca. "Le tecniche di Risk Management per i progetti di costruzione: il caso studio della riqualificazione e recupero funzionale dell'ex-Manifattura Tabacchi per la realizzazione del Tecnopolo di Bologna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Malherbe, Victor. "Multi-scale modeling of radiation effects for emerging space electronics : from transistors to chips in orbit." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0753/document.
The effects of cosmic radiation on electronics have been studied since the early days of space exploration, given the severe reliability constraints arising from harsh space environments. However, recent evolutions in the space industry landscape are changing radiation-effects practices and methodologies, with mainstream technologies becoming increasingly attractive for radiation-hardened integrated circuits. Due to their high operating frequencies, new transistor architectures, and short rad-hard development times, chips manufactured in the latest CMOS processes pose a variety of challenges, both from an experimental standpoint and from a modeling perspective. This work thus focuses on simulating single-event upsets and transients in advanced FD-SOI and bulk silicon processes. The soft-error response of 28 nm FD-SOI transistors is first investigated through TCAD simulations, allowing the development of two innovative models for radiation-induced currents in FD-SOI. One of them is mainly behavioral, while the other captures complex phenomena, such as parasitic bipolar amplification and circuit feedback effects, from first semiconductor principles and in agreement with detailed TCAD simulations. These compact models are then interfaced to a complete Monte Carlo Soft-Error Rate (SER) simulation platform, leading to extensive validation against experimental data collected on several test vehicles under accelerated particle beams. Finally, predictive simulation studies are presented on bit-cells and sequential and combinational logic gates in 28 nm FD-SOI and 65 nm bulk Si, providing insights into the mechanisms that contribute to the SER of modern integrated circuits in orbit.
Weydert, Carole. "Recherche d'un boson de Higgs chargé avec le détecteur ATLAS : de la théorie à l'expérience." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00629349.
Mathonat, Romain. "Rule discovery in labeled sequential data : Application to game analytics." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI080.
It is extremely useful to exploit labeled datasets not only to learn models and perform predictive analytics but also to improve our understanding of a domain and its available target classes. The subgroup discovery task has been considered for more than two decades. It concerns the discovery of rules covering sets of objects having interesting properties, e.g., characterizing a given target class. Though many subgroup discovery algorithms have been proposed for both transactional and numerical data, discovering rules within labeled sequential data has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications and we have to look for heuristic approaches. In this thesis, we propose to apply bandit models and Monte Carlo Tree Search to explore the search space of possible rules using an exploration-exploitation trade-off, on different data types such as sequences of itemsets or time series. For a given budget, they find a collection of the top-k best rules in the search space w.r.t. a chosen quality measure. They require a light configuration and are independent of the quality measure used for pattern scoring. To the best of our knowledge, this is the first time that the Monte Carlo Tree Search framework has been exploited in a sequential data mining setting. We have conducted thorough and comprehensive evaluations of our algorithms on several datasets to illustrate their added value, and we discuss their qualitative and quantitative results. To assess the added value of one of our algorithms, we propose a use case of game analytics, more precisely Rocket League match analysis. Discovering interesting rules in sequences of actions performed by players and using them in a supervised classification model shows the efficiency and the relevance of our approach in the difficult and realistic context of high-dimensional data. It supports the automatic discovery of skills and can be used to create new game modes, to improve the ranking system, to help e-sport commentators, or to better analyse opponent teams, for example.
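As a hedged illustration of the bandit machinery mentioned above — not the thesis' own MCTS code — the sketch below runs UCB1 over a handful of candidate "rules" with invented hidden qualities; in the thesis the reward would instead be a pattern quality measure evaluated on the dataset.

```python
import math
import random

random.seed(3)
true_quality = [0.2, 0.5, 0.45, 0.8, 0.3]   # hidden mean reward per "rule" (invented)
counts = [0] * len(true_quality)
sums = [0.0] * len(true_quality)

def ucb1(t):
    # Play every arm once, then pick the arm maximizing mean + exploration bonus
    for i, c in enumerate(counts):
        if c == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: sums[i] / counts[i]
               + math.sqrt(2.0 * math.log(t) / counts[i]))

for t in range(1, 5001):
    arm = ucb1(t)
    reward = random.random() < true_quality[arm]   # Bernoulli feedback
    counts[arm] += 1
    sums[arm] += reward

print("pulls per arm:", counts)   # the best arm ends up dominating
```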
Doan, Thi Kieu Oanh. "Mesure de la section efficace différentielle de production du boson Z se désintégrant en paires électron-positon, dans l'expérience ATLAS." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00846877.
Lagnoux, Agnès. "Analyse des modeles de branchement avec duplication des trajectoires pour l'étude des événements rares." Toulouse 3, 2006. http://www.theses.fr/2006TOU30231.
This thesis deals with the splitting method, first introduced in rare-event analysis in order to speed up simulation. In this technique, the sample paths are split into R multiple copies at various stages during the simulation. Given the cost, the optimization of the algorithm suggests taking the transition probabilities between stages equal to some constant and resampling the inverse of that constant subtrials, a number which may be non-integer and even unknown, but estimated. First, we study the sensitivity of the relative error between the probability of interest P(A) and its estimator depending on the strategy that makes the resampling numbers integers. Then, since in practice the transition probabilities are generally unknown (and so are the optimal resampling numbers), we propose a two-step algorithm to face that problem. Several numerical applications and comparisons with other models are proposed.
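A compact fixed-splitting example, with all parameters invented for illustration: a ±1 random walk with up-probability 0.4 starts at 1, the rare event is reaching 15 before 0 (exact probability about 1.14 × 10⁻³ from the gambler's-ruin formula), and each trajectory that reaches an intermediate level is split into R copies.

```python
import random

random.seed(4)
p_up, levels, R, n0 = 0.4, [5, 10, 15], 8, 2000

def reaches(start, target):
    """Run the +1/-1 walk from start until absorption at 0 or at target."""
    x = start
    while 0 < x < target:
        x += 1 if random.random() < p_up else -1
    return x == target

walkers = [1] * n0                             # every path starts at state 1
for i, level in enumerate(levels):
    survivors = sum(reaches(x, level) for x in walkers)
    if i < len(levels) - 1:
        walkers = [level] * (survivors * R)    # split each survivor R ways

p_hat = survivors / (n0 * R ** (len(levels) - 1))
print(f"splitting estimate: {p_hat:.2e} (exact ~1.14e-3)")
```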
Zu, Xiaomin. "Monte-Carlo event generators at next-to-leading order." 2003. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-343/index.html.
Full textKolář, Karel. "Zkoumání závislosti výpočtů v konečném řádu poruchové QCD na faktorizačním schématu." Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-312193.
Parker, Jason. "Extensions and optimizations to the scalable, parallel random number generators library." 2003. http://etd.lib.fsu.edu/theses/available/etd-11182003-021957.
Full textAdvisor: Michael Mascagni, Florida State University, College of Arts and Sciences, Dept. of Computer Science. Title and description from dissertation home page (viewed Mar. 2, 2004). Includes bibliographical references.
"Development of an event oriented Monte Carlo simulation and applications to problems of diffusion and reaction." Tulane University, 1988.
Nahrvar, Shayan. "Discrete Event Simulation in the Preliminary Estimation Phase of Mega Projects: A Case Study of the Central Waterfront Revitalization Project." Thesis, 2010. http://hdl.handle.net/1807/24612.
Gleisberg, Tanju [Verfasser]. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders / vorgelegt von Tanju Gleisberg." 2008. http://d-nb.info/989878171/34.
Sadeghi, Naimeh. "Combined Fuzzy and Probabilistic Simulation for Construction Management." Master's thesis, 2009. http://hdl.handle.net/10048/731.
Μαρμίδης, Γρηγόριος. "Πολυκριτηριακή βελτιστοποίηση της εκμετάλλευσης του αιολικού δυναμικού για τη σύνδεση αιολικών συστημάτων/πάρκων στα δίκτυα υψηλών τάσεων." Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/4785.
This dissertation analyses and suggests efficient solutions for the problem of optimizing the available power in a wind system. The thesis is divided into two parts. In the first part, a new methodology is proposed for optimizing the exploitation of the wind in a given area, based on the optimal placement of wind turbines in wind parks. In the second part, the aim is to maximize the power extracted from the wind through the development of new control techniques. In both cases, the analysis uses detailed nonlinear models developed in the Matlab computational environment. The models lead to a series of simulations that confirm the theoretical analysis and yield original solutions and improved results in comparison to known ones. Specifically, the first part of the thesis concentrates on the optimal selection of the number and positioning of wind turbines in a given geographic region, with maximum power production at minimum cost as the optimization criterion. A novel procedure based on the Monte Carlo simulation method is introduced. As in previous studies, the same optimization criterion and conditions are used so that the results are comparable; the results presented are markedly better as far as maximum power production and the optimization factor are concerned. Furthermore, important conclusions are drawn about the maximum number of wind turbines and their locations, confirming the proposed method as an efficient tool for the geographic distribution of wind turbines in a wind park. In the second part of the dissertation, the goal is to design simple, highly efficient controllers that achieve maximum power production for wind turbine systems. A variable-speed wind generation system is considered, consisting of a squirrel-cage induction generator connected to the grid by a fully controlled ac/dc/ac IGBT converter. First, the vector control method, used today for the maximization and control of the active power, is analysed. Based on this analysis, alternative solutions are proposed that are less complicated, more accurate and easier to implement, as proved through theoretical analysis. Suitable nonlinear models are developed for a wind turbine system operating at variable, adjustable speed. Comprehensive mathematical analysis proves that the original nonlinear system has the fundamental property of Euler-Lagrange systems, namely passivity, meaning that the system is damped in both its electrical and mechanical parts. An analysis of the existing vector-control-based schemes is then conducted. By suitably modifying the PI controllers used on the generator side and decisively simplifying the vector control scheme, two different control schemes are proposed that are free from the field-orientation requirement of vector control while maintaining and increasing the damping of the initial system. The first control scheme is directly comparable with existing vector control applications, while the second introduces an innovative technique that substantially improves the system response.
The detailed simulations carried out are based on a realistic operation scenario, in which the speed must be adjusted to its optimal value at the same time as the power and torque supplied by the wind change. The simulation results confirm the theoretical analysis and show that both proposed control schemes achieve optimal power production in an efficient manner and within the expected response time.
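As a loose illustration of the Monte Carlo placement idea in the first part — not the dissertation's actual model, which includes detailed wind-park physics and cost terms — the sketch below samples random turbine layouts on a grid and keeps the one maximizing an invented power proxy that penalizes closely spaced turbines.

```python
import random

random.seed(5)
GRID, N_TURBINES, N_TRIALS, MIN_DIST2 = 10, 12, 20000, 8  # all values invented

def layout_power(cells):
    # Each turbine contributes unit power, reduced to 0.6 if another turbine
    # sits within sqrt(MIN_DIST2) cells (a crude stand-in for wake losses).
    power = 0.0
    for (x1, y1) in cells:
        crowded = any((x1 - x2) ** 2 + (y1 - y2) ** 2 < MIN_DIST2
                      for (x2, y2) in cells if (x2, y2) != (x1, y1))
        power += 0.6 if crowded else 1.0
    return power

all_cells = [(x, y) for x in range(GRID) for y in range(GRID)]
best = max((random.sample(all_cells, N_TURBINES) for _ in range(N_TRIALS)),
           key=layout_power)
print(f"best layout power proxy: {layout_power(best):.1f} / {N_TURBINES}")
```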