
Dissertations / Theses on the topic 'Monte Carlo event generators'



Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo event generators.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Nail, Graeme. "Quantum chromodynamics : simulation in Monte Carlo event generators." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/quantum-chromodynamics-simulation-in-monte-carlo-event-generators(46dc6f2e-1552-4dfa-b435-9608932a3261).html.

Full text
Abstract:
This thesis presents two recent developments in the Herwig general-purpose event generator. Firstly, results from a new implementation of the KrkNLO method in the Herwig event generator are presented. This method enables the generation of matched next-to-leading-order plus parton-shower events through the application of simple positive weights to showered leading-order events. This simplicity is achieved by the construction of Monte Carlo-scheme parton distribution functions. The implementation contains the necessary components to simulate Drell-Yan production as well as Higgs production via gluon fusion, and is used to generate the first differential Higgs results obtained with this method. The results from this implementation are shown to be comparable with predictions from the well-established POWHEG and MC@NLO approaches; the KrkNLO predictions are found to closely resemble those of the original POWHEG configuration. Secondly, a benchmark study focusing on the sources of perturbative uncertainty in parton showers is presented. The study employs leading-order plus parton-shower simulations as a starting point in order to establish a baseline set of controllable uncertainties, the aim being to build an understanding of the uncertainties associated with a full simulation that includes higher-order corrections and interplay with non-perturbative models. Uncertainty estimates for a number of benchmark processes are presented. The requirement that these estimates be consistent across the two distinct parton-shower implementations in Herwig provides an important measure of their quality. The choice of profile scale is seen to be an important consideration, with the power and hfact profiles displaying inconsistencies between the showers, while the resummation profile scale is shown to deliver consistent predictions for the central value and the uncertainty bands.
APA, Harvard, Vancouver, ISO, and other styles
2

Schälicke, Andreas. "Event generation at hadron colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1122466458074-11492.

Full text
Abstract:
This work deals with the accurate simulation of high-energy hadron-hadron collision experiments, as they are currently performed at the Fermilab Tevatron or expected at the Large Hadron Collider (LHC) at CERN. For a precise description of these experiments, an algorithm is investigated which enables the inclusion of exact tree-level multi-jet matrix elements in the simulation and thereby significantly improves the quality of the predictions. The implementation of this algorithm in the event generator SHERPA and the extension of its parton shower is the main topic of this work. The results are compared with those of other simulation programs and with experimental data.
APA, Harvard, Vancouver, ISO, and other styles
3

Siódmok, Andrzej. "Theoretical predictions for the Drell-Yan process through a Monte Carlo event generator." Paris 6, 2010. http://www.theses.fr/2010PA066666.

Full text
Abstract:
This work concerns the study of the Drell-Yan (DY) process in hadronic collisions, which is a very important process for the experimental program at the Large Hadron Collider (LHC). In the first chapter we present the calculation of multiphoton radiation effects in leptonic Z-boson decays in the framework of the Yennie-Frautschi-Suura exclusive exponentiation. This calculation is implemented in the ZINHAC program, written in C++, which is a dedicated Monte Carlo event generator for the precision description of the neutral-current DY process, i.e. Z/gamma production with leptonic decays in hadronic collisions. In the second chapter we concentrate on the QCD corrections to the transverse-momentum (pT) spectrum of vector bosons in DY processes. We present a new model of non-perturbative gluon emission in an initial-state parton shower. This model gives a good description of the pT spectrum of the Z boson for data taken in previous experiments over a wide range of CM energies. The model's prediction for the pT distribution of the Z boson at the LHC is also presented and compared with other approaches. In the third chapter we focus our attention on the measurement of the W-boson mass (MW). The result of this investigation shows that several important sources of error have been neglected in all previous analyses performed by the LHC experimental collaborations. For the first time, the precision of MW is evaluated in the presence of these effects. This evaluation shows that, in order to reach the desired precision on MW at the LHC, novel measurement strategies must be developed. We provide two examples of such strategies.
APA, Harvard, Vancouver, ISO, and other styles
4

Kuhn, Ralf. "Event generation at lepton colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2002. http://nbn-resolving.de/urn:nbn:de:swb:14-1033454996140-50042.

Full text
Abstract:
The Monte-Carlo simulation package APACIC++/AMEGIC++ is able to describe current and future electron-positron annihilation experiments, namely the LEP collider at CERN and the future TESLA collider at DESY. APACIC++ is responsible for the complete generation of an event, while AMEGIC++ is a dedicated generator for the exact calculation of matrix elements. The development of both programs was the major task of my thesis.
APA, Harvard, Vancouver, ISO, and other styles
5

Winter, Jan-Christopher. "QCD jet evolution at high and low scales." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208912443778-27732.

Full text
Abstract:
This thesis deals with a broad range of aspects concerning the simulation of QCD jet physics by Monte Carlo event generators. Phenomenological work is presented validating the CKKW approach for merging tree-level matrix elements and parton showers. In the second part, the main project is documented, comprising the definition, realization and verification of a new QCD colour-dipole cascade. Finally, a new cluster-hadronization model is introduced.
APA, Harvard, Vancouver, ISO, and other styles
6

Winter, Jan-Christopher. "QCD jet evolution at high and low scales." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23602.

Full text
Abstract:
This thesis deals with a broad range of aspects concerning the simulation of QCD jet physics by Monte Carlo event generators. Phenomenological work is presented validating the CKKW approach for merging tree-level matrix elements and parton showers. In the second part, the main project is documented, comprising the definition, realization and verification of a new QCD colour-dipole cascade. Finally, a new cluster-hadronization model is introduced.
APA, Harvard, Vancouver, ISO, and other styles
7

Popov, Dmitry [Verfasser], Michael [Akademischer Betreuer] Schmelling, and Werner [Gutachter] Hofmann. "Development of the Monte Carlo event generator tuning software package Lagrange and its application to tune the PYTHIA model to the LHCb data / Dmitry Popov. Betreuer: Michael Schmelling. Gutachter: Werner Hofmann." Dortmund : Universitätsbibliothek Dortmund, 2014. http://d-nb.info/1107051576/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ramnath, Andrecia. "Exclusive J/Ψ Vector-Meson production in high-energy nuclear collisions: a cross-section determination in the Colour Glass Condensate effective field theory and a feasibility study using the STARlight Monte Carlo event generator." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/9214.

Full text
Abstract:
Includes bibliographical references.
The cross-section calculation for exclusive J/Ψ vector-meson production in ultra-peripheral heavy ion collisions is approached in two ways. First, the setup for a theoretical calculation is done in the context of the Colour Glass Condensate effective field theory. Rapidity-averaged n-point correlators are used to describe the strong-interaction part of this process. The JIMWLK equation can be used to predict the energy evolution of a correlator. In order to facilitate practical calculations, an approximation scheme must be employed. The Gaussian Truncation is one such method, which approximates correlators in terms of new 2-point functions. This work takes the first step beyond this truncation scheme by considering higher-order n-point functions in the approximation. An expression for the cross-section is written, which takes parametrised 2- and 4-point correlators as input. This expression can be used as the basis for a full cross-section calculation. The second part of the thesis is a feasibility study using Monte Carlo simulations done by the STARlight event generator. A prediction is made for how many exclusive J/Ψ vector-mesons are expected to be detected by ATLAS in a data set corresponding to 160 μb⁻¹ total integrated luminosity. It is found that the muon reconstruction efficiency for low-pT muons in ATLAS is too poor to do this analysis effectively. On the order of 150 candidate events are expected from all the Pb-Pb collision data collected in 2011. The feasibility study acts as a preliminary investigation for a full cross-section measurement using ATLAS collision data. Once this is completed, it can be compared with the theoretical prediction for the cross-section.
APA, Harvard, Vancouver, ISO, and other styles
9

Siegert, Frank. "Monte-Carlo event generation for the LHC." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/484/.

Full text
Abstract:
This thesis discusses recent developments for the simulation of particle physics in the light of the start-up of the Large Hadron Collider. Simulation programs for fully exclusive events, dubbed Monte-Carlo event generators, are improved in areas related to the perturbative as well as non-perturbative regions of strong interactions. A short introduction to the main principles of event generation is given to serve as a basis for the following discussion. An existing algorithm for the correction of parton-shower emissions with the help of exact tree-level matrix elements is revisited and significantly improved, as attested by first results. In a next step, an automated implementation of the POWHEG method is presented. It allows for the combination of parton showers with full next-to-leading-order QCD calculations and has been tested in several processes. These two methods are then combined into a more powerful framework which makes it possible to correct a parton shower with full next-to-leading-order matrix elements and higher-order tree-level matrix elements at the same time. Turning to the non-perturbative aspects of event generation, a tuning of the Pythia event generator within the Monte-Carlo working group of the ATLAS experiment is presented. It is based on early ATLAS minimum-bias measurements obtained with minimal model dependence. The parts of the detector relevant for these measurements are briefly explained. Throughout the thesis, results obtained with the improvements are compared to experimental measurements.
APA, Harvard, Vancouver, ISO, and other styles
10

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Full text
Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic behaviour of the sum and of the maximum of a heavy-tailed random sum is identical. Therefore we compare the random sum and the maximum value for various distributions, to investigate from which point the asymptotic approximation can be used. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
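The heavy-tail heuristic referred to above — the tail of the random sum behaving like the tail of its largest increment — can be illustrated with a crude Monte Carlo experiment. The sketch below is my own toy setup, not code from the thesis; Pareto-distributed increments and a Poisson number of summands are assumptions chosen purely for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto(alpha, size):
    """Pareto(alpha) samples on [1, inf): survival function x**(-alpha)."""
    return rng.uniform(size=size) ** (-1.0 / alpha)

def tail_probs(x, alpha=1.5, lam=10.0, n_sim=100_000):
    """Crude Monte Carlo estimates of P(S_N > x) and P(max_i Y_i > x) for the
    random sum S_N = Y_1 + ... + Y_N, with N ~ Poisson(lam) independent of the Y_i."""
    hit_sum = hit_max = 0
    for _ in range(n_sim):
        y = pareto(alpha, rng.poisson(lam))
        hit_sum += y.sum() > x                 # tail of the random sum
        hit_max += y.max(initial=0.0) > x      # tail of the largest increment
    return hit_sum / n_sim, hit_max / n_sim

for x in (20.0, 50.0, 100.0):
    p_sum, p_max = tail_probs(x)
    print(f"x={x:6.0f}   P(S_N > x)={p_sum:.5f}   P(max > x)={p_max:.5f}")
```

As the threshold x grows, the two estimates approach each other, which is the regime where the asymptotic approximation discussed in the abstract becomes usable.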
APA, Harvard, Vancouver, ISO, and other styles
11

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Full text
Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process, given that the rare event occurs, has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . . The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of general expectations of their sample paths related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first-passage probabilities. An unbiased estimator of the reciprocal probability, with efficient rare-event properties, is constructed for each corresponding problem. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
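The central idea summarised above — run a Markov chain whose invariant distribution is the law conditioned on the rare event and recover the normalising constant (the rare-event probability) from its trajectory — can be sketched for the first problem, a heavy-tailed random walk exceeding a high threshold. The sketch below is an illustration only, not the author's code: it assumes Pareto increments, a plain Gibbs sweep as the MCMC kernel, and a simple auxiliary density supported on the event set.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, c = 1.5, 5, 200.0          # Pareto(alpha) increments, walk length, threshold

def pareto_tail(t):
    """Survival function P(Y > t) of a Pareto(alpha) variable on [1, inf)."""
    return 1.0 if t <= 1.0 else t ** (-alpha)

def sample_pareto_above(t):
    """Draw Y ~ Pareto(alpha) conditioned on Y > max(t, 1), by inversion."""
    lo = max(t, 1.0)
    return lo * rng.uniform() ** (-1.0 / alpha)

def mcmc_rare_prob(n_iter=100_000, burn=10_000):
    """Estimate p = P(Y1 + ... + Yn > c) via a Gibbs sampler whose invariant
    distribution is the increment law conditioned on the rare event {sum > c}."""
    y = np.ones(n)
    y[0] = c + 1.0                            # start inside the rare event
    hits = 0
    for it in range(n_iter):
        for k in range(n):
            bound = c - (y.sum() - y[k])      # keep the total above c
            y[k] = sample_pareto_above(bound)
        if it >= burn:
            hits += y[0] > c                  # auxiliary density supported on {y0 > c}
    inv_p = (hits / (n_iter - burn)) / pareto_tail(c)   # estimates 1/p
    return 1.0 / inv_p

print("MCMC estimate       :", mcmc_rare_prob())
print("crude Monte Carlo   :",
      np.mean([(rng.uniform(size=n) ** (-1.0 / alpha)).sum() > c for _ in range(200_000)]))
print("asymptotic n*P(Y>c) :", n * pareto_tail(c))
```

The ratio of hits to trajectory length estimates the reciprocal probability once divided by the tail of the auxiliary coordinate, mirroring the reciprocal-probability estimator mentioned in the abstract.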


APA, Harvard, Vancouver, ISO, and other styles
12

Tolley, Emma Elizabeth. "Monte Carlo event reconstruction implemented with artificial neural networks." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65535.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 41).
I implemented event reconstruction of a Monte Carlo simulation using neural networks. The OLYMPUS Collaboration is using a Monte Carlo simulation of the OLYMPUS particle detector to evaluate systematics and reconstruct events. This simulation registers the passage of particles as 'hits' in the detector elements, which can be used to determine event parameters such as momentum and direction. However, these hits are often obscured by noise. Using Geant4 and ROOT, I wrote a program that uses artificial neural networks to separate track hits from noise and reconstruct event parameters. The classification network successfully discriminates between track hits and noise for 97.48% of events. The reconstruction networks determine the various event parameters to within 2-3%.
by Emma Elizabeth Tolley.
S.B.
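For illustration only, here is a minimal hit/noise classifier in the spirit described in this abstract, trained on synthetic two-feature toy data; the features, network size and library choice are assumptions of this sketch and do not reflect the actual OLYMPUS wire-chamber geometry or the Geant4/ROOT pipeline used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

# Toy features per hit: [distance to a fitted track candidate, drift-time residual].
track = np.column_stack([rng.normal(0.0, 0.3, n), rng.normal(0.0, 0.2, n)])
noise = np.column_stack([rng.uniform(-5, 5, n), rng.uniform(-3, 3, n)])
X = np.vstack([track, noise])
y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = track hit, 0 = noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("hit/noise classification accuracy:", clf.score(X_te, y_te))
```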
APA, Harvard, Vancouver, ISO, and other styles
13

Gudmundsson, Thorbjörn. "Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings." Licentiate thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134624.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Docker, Jones Mykalin Ann. "Monte Carlo Simulation of High Energy Neutrino Event Transport in Cerenkov Detectors." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1327.

Full text
Abstract:
This body of work was aimed at the development of an application for simulating high energy neutrino events (up to 100 GeV) in GEANT4. While various Monte Carlo techniques for neutrino transport exist, this was motivated by current interest in expanding GEANT4's application to neutrino physics and its handling of neutrino-related processes. Multiple neutrino observatories detect astrophysical neutrinos by measuring the optical Cerenkov light produced in interactions; this application may be modified for a variety of experiments. The need for the computationally inexpensive model discussed in the paper arose from neutrino oscillation predictions for high energy cosmic neutrinos. A predicted flavour ratio of approximately 1:1:1 for nu_e:nu_mu:nu_tau within the IceCube array motivates work towards differentiating neutrino flavour across detected events. Simulation techniques can be implemented to provide information that may be used when designing data analysis algorithms. Tau neutrino transport simulation was explored in detail, utilizing the developed application to probe properties of 100 GeV nu_tau events as a preliminary investigation into its many uses. Results from 10,000 nu_tau events showed agreement with preliminary calculations relating the time of flight of the optical photons produced at the neutrino-interaction and tau-decay vertices to their contribution to the signal over time.
APA, Harvard, Vancouver, ISO, and other styles
15

Robbiano, Vincent P. "Simulations of Organic Solar Cells with an Event-Driven Monte Carlo Algorithm." University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1311812696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1208999423010-50333.

Full text
Abstract:
This thesis is concerned with numerical methods for the theoretical description of high-energy particle scattering experiments. It focuses on fixed-order perturbative calculations, i.e. on matrix elements and scattering cross sections at leading and next-to-leading order. At leading order, a number of algorithms for matrix-element generation and for the numerical integration over phase space are studied and implemented in a computer code, which makes it possible to push the current limits on the complexity of the final state and on the precision. For next-to-leading-order calculations, the necessary steps towards a fully automated treatment are performed. A subtraction method that allows a process-independent regularization of the divergent virtual and real corrections is implemented, and a new approach for the semi-numerical evaluation of one-loop amplitudes is investigated.
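As a toy illustration of the leading-order task mentioned above — numerically integrating a matrix element over phase space by Monte Carlo — the sketch below integrates the textbook e+e- → mu+mu- angular distribution (photon exchange only) over the solid angle. The process, energy and flat sampling are illustrative assumptions and are unrelated to the actual algorithms implemented in this thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_em = 1.0 / 137.036
sqrt_s = 29.0                       # centre-of-mass energy in GeV (toy choice)
s = sqrt_s ** 2
GEV2_TO_NB = 0.3894e6               # 1 GeV^-2 = 0.3894 mb = 3.894e5 nb

def dsigma_domega(cos_theta):
    """Leading-order QED dsigma/dOmega for e+e- -> mu+mu- (photon exchange only)."""
    return alpha_em ** 2 / (4.0 * s) * (1.0 + cos_theta ** 2)

n = 1_000_000
cos_theta = rng.uniform(-1.0, 1.0, n)
# Flat phase-space weight: integrand is phi-independent, so the volume factor is 4*pi.
weights = dsigma_domega(cos_theta) * 4.0 * np.pi

sigma_mc = weights.mean() * GEV2_TO_NB
sigma_err = weights.std(ddof=1) / np.sqrt(n) * GEV2_TO_NB
sigma_exact = 4.0 * np.pi * alpha_em ** 2 / (3.0 * s) * GEV2_TO_NB
print(f"MC:    {sigma_mc:.4f} +- {sigma_err:.4f} nb")
print(f"exact: {sigma_exact:.4f} nb")
```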
APA, Harvard, Vancouver, ISO, and other styles
17

Gleisberg, Tanju. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23900.

Full text
Abstract:
This thesis is concerned with numerical methods for the theoretical description of high-energy particle scattering experiments. It focuses on fixed-order perturbative calculations, i.e. on matrix elements and scattering cross sections at leading and next-to-leading order. At leading order, a number of algorithms for matrix-element generation and for the numerical integration over phase space are studied and implemented in a computer code, which makes it possible to push the current limits on the complexity of the final state and on the precision. For next-to-leading-order calculations, the necessary steps towards a fully automated treatment are performed. A subtraction method that allows a process-independent regularization of the divergent virtual and real corrections is implemented, and a new approach for the semi-numerical evaluation of one-loop amplitudes is investigated.
APA, Harvard, Vancouver, ISO, and other styles
18

Jegourel, Cyrille. "Rare event simulation for statistical model checking." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.

Full text
Abstract:
In this thesis, we consider two problems that statistical model checking must cope with. The first concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second concerns rare properties, which are difficult to observe and therefore to quantify. On the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows the stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present a state-of-the-art importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
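A minimal sketch of importance splitting, one of the two variance-reduction ideas mentioned above, applied to a toy rare event: a negative-drift random walk reaching a high level before ruin. The model, the level set and the fixed-effort variant used here are illustrative assumptions, not the optimal splitting algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
drift, sd, x0, b = -0.2, 1.0, 1.0, 20.0      # random walk with negative drift
levels = np.linspace(2.0, b, 10)             # intermediate levels up to the rare level b

def run_until(x, target):
    """Advance the walk from x until it reaches >= target (success) or <= 0 (failure)."""
    while 0.0 < x < target:
        x += rng.normal(drift, sd)
    return x

def fixed_effort_splitting(effort=2_000):
    """Estimate P(walk hits b before 0) as a product of conditional level-crossing fractions."""
    starts = [x0] * effort
    estimate = 1.0
    for lv in levels:
        ends = [run_until(rng.choice(starts), lv) for _ in range(effort)]
        hits = [x for x in ends if x >= lv]
        if not hits:
            return 0.0
        estimate *= len(hits) / effort
        starts = hits
    return estimate

print("splitting estimate:", fixed_effort_splitting())
# Brownian-motion approximation of the same probability, for a rough sanity check:
num = 1.0 - np.exp(-2.0 * drift * x0 / sd ** 2)
den = 1.0 - np.exp(-2.0 * drift * b / sd ** 2)
print("diffusion approx  :", num / den)
```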
APA, Harvard, Vancouver, ISO, and other styles
19

Regali, Christopher Ralph [Verfasser], and Horst [Akademischer Betreuer] Fischer. "Exclusive event generation for the COMPASS-II experiment at CERN and improvements for the Monte-Carlo chain." Freiburg : Universität, 2016. http://d-nb.info/1122831862/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Bedair, Khaled Farag Emam. "Statistical Methods for Multi-type Recurrent Event Data Based on Monte Carlo EM Algorithms and Copula Frailties." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64981.

Full text
Abstract:
In this dissertation, we are interested in studying processes which generate events repeatedly over the follow-up time of a given subject. Such processes are called recurrent event processes and the data they provide are referred to as recurrent event data. Examples include cancer recurrences, recurrent infections or disease episodes, hospital readmissions, the filing of warranty claims, and insurance claims for policy holders. In particular, we focus on multi-type recurrent event times, which usually arise when two or more different kinds of events may occur repeatedly over a period of observation. Our main objectives are to describe features of each marginal process simultaneously and to study the dependence among different types of events. We present applications to a real dataset collected from the Nutritional Prevention of Cancer Trial. The objective of the clinical trial was to evaluate the efficacy of Selenium in preventing the recurrence of several types of skin cancer among 1312 residents of the Eastern United States. The dissertation comprises four chapters. Chapter 1 introduces a brief background to the statistical techniques used to develop the proposed methodology. We cover some concepts and useful functions related to survival data analysis and present a short introduction to frailty distributions. The Monte Carlo expectation maximization (MCEM) algorithm and copula functions for multivariate variables are also presented in this chapter. Chapter 2 develops a multi-type recurrent events model with multivariate Gaussian random effects (frailties) for the intensity functions. In this chapter, we present nonparametric baseline intensity functions and a multivariate Gaussian distribution for the multivariate correlated random effects. An MCEM algorithm with MCMC routines in the E-step is adopted for the partial likelihood to estimate model parameters. Equations for the variances of the estimates are derived, and the variances are computed by Louis' formula. Predictions of the individual random effects are obtained because in some applications the magnitude of the random effects is of interest for a better understanding and interpretation of the variability in the data. The performance of the proposed methodology is evaluated by simulation studies, and the developed model is applied to the skin cancer dataset. Chapter 3 presents copula-based semiparametric multivariate frailty models for multi-type recurrent event data, with applications to the skin cancer data. In this chapter, we generalize the multivariate Gaussian assumption on the frailty terms and allow the frailty distributions to have more features than the symmetric, unimodal properties of the Gaussian density. More flexible approaches to modeling the correlated frailty, referred to as copula functions, are introduced. Copula functions provide tremendous flexibility, especially in allowing one to take advantage of a variety of choices for the marginal distributions and correlation structures. Semiparametric intensity models for multi-type recurrent events, based on a combination of the MCEM approach with MCMC sampling methods and copula functions, are introduced. The combination of the MCEM approach and copula functions is flexible and is a generally applicable approach for obtaining inferences on the unknown parameters of high-dimensional frailty models.
Estimation procedures for the fixed effects, nonparametric baseline intensity functions, copula parameters, and predictions of the subject-specific multivariate frailties and random effects are obtained. Variance estimates based on Louis' formula are derived and calculated. We investigate the impact of the specification of the frailty and random-effect models on the inference of covariate effects, cumulative baseline intensity functions, prediction of random effects and frailties, and the estimation of the variance-covariance components. The performance of the proposed models is evaluated by simulation studies. Applications are illustrated through the dataset collected from the clinical trial of patients with skin cancer. Conclusions and some remarks for future work are presented in Chapter 4.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.

Full text
Abstract:
In this thesis, the creditworthiness of a company is modelled using a stochastic process. Two credit models are considered: Merton's model, which models the value of a firm's assets using geometric Brownian motion, and the distance-to-default model, which is driven by a two-factor jump-diffusion process. The probability of default and the default time are simulated using Monte Carlo, and the number of scenarios needed to obtain convergence in the simulations is investigated. The simulations are performed using the probability matrix method (PMM), in which a transition probability matrix describing the process is created and used for the simulations. Besides this, two variance reduction techniques are investigated: importance sampling and antithetic variates.
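A small sketch of the first ingredient described above: the default probability in Merton's model estimated by Monte Carlo with antithetic variates, under hypothetical firm parameters. The probability matrix method and the jump-diffusion distance-to-default model studied in the thesis are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical firm parameters (illustrative only).
V0, D, mu, sigma, T = 100.0, 70.0, 0.05, 0.25, 1.0

def default_prob_antithetic(n_pairs=500_000):
    """P(V_T < D) under Merton's model, estimated with antithetic normal draws."""
    z = rng.standard_normal(n_pairs)
    drift = (mu - 0.5 * sigma ** 2) * T
    vol = sigma * np.sqrt(T)
    vt_plus = V0 * np.exp(drift + vol * z)
    vt_minus = V0 * np.exp(drift - vol * z)            # antithetic path
    pair_mean = 0.5 * ((vt_plus < D) + (vt_minus < D)) # average the paired indicators
    return pair_mean.mean(), pair_mean.std(ddof=1) / np.sqrt(n_pairs)

d2 = (np.log(V0 / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
print("analytic P(default):", norm.cdf(-d2))
print("antithetic MC      :", default_prob_antithetic())
```

Averaging each indicator with its antithetic counterpart leaves the estimator unbiased while reducing its variance, which is the point of the technique investigated in the thesis.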
APA, Harvard, Vancouver, ISO, and other styles
22

Forssman, Niklas. "Monte Carlo simulation study of the e+e- → Lambda Lambda-Bar reaction with the BESIII experiment." Thesis, Uppsala universitet, Kärnfysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297851.

Full text
Abstract:
Studying reactions where electrons and positrons collide and annihilate so that hadrons can be formed from their energy is an excellent tool for improving our understanding of the standard model. Hadrons are composite quark systems held together by the strong force. By performing precise measurements of the so-called cross section of the hadron production generated during the annihilation, one can obtain information about the electromagnetic form factors, GE and GM, which describe the inner electromagnetic structure of hadrons. This will give us a better understanding of the strong force and the standard model. During my bachelor degree project I have been using data from the BESIII detector located at the Beijing Electron-Positron Collider (BEPC-II) in China. Uppsala University has several scientists working on the BESIII experiment. My task was to perform a quality assurance of previous results for the reaction e+e- → Lambda Lambda-Bar at a center-of-mass energy of 2.396 GeV. During a major part of the project I worked with Monte Carlo data. The reactions were generated with two generators, ConExc and PHSP, which were used for different purposes. I analyzed the simulated data to find a method of filtering out the background noise in order to extract a clean signal. Dr Cui Li of the hadron physics group at Uppsala University has developed several selection criteria to extract these signals. The total efficiency of Cui Li's analysis was 14%, and for my analysis I also obtained a total efficiency of 14%. This gives me confidence that my analysis has been implemented correctly and can now be applied to real data. It is also reassuring for Cui Li and the rest of the group that her analysis has been verified by an independently implemented selection algorithm.
APA, Harvard, Vancouver, ISO, and other styles
23

Bradley, Randolph L. (Randolph Lewis). "Evaluating inventory segmentation strategies for aftermarket service parts in heavy industry using linked discrete-event and Monte Carlo simulations." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77459.

Full text
Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2012.
Vita. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 104-106).
Heavy industries operate equipment having a long life to generate revenue or perform a mission. These industries must invest in the specialized service parts needed to maintain their equipment, because unlike in other industries such as automotive, there is often no aftermarket supplier. If parts are not on the shelf when needed, equipment sits idle while replacements are manufactured. Stock levels are often set to achieve an off-the-shelf fill rate goal using commercial inventory optimization tools, while supply chain performance is instead measured against a speed of service metric such as order fulfillment lead time, the time from order placement to customer receipt. When some parts are more important than others, and shipping delays are accounted for, there is ostensibly little correlation between these two metrics and setting stock levels devolves into an inefficient and expensive guessing game. This thesis resolves the disconnect between stock levels and service metrics performance by linking an existing discrete-event simulation of warehouse operations to a new Monte Carlo demand categorization and metrics simulation, predicting tomorrow's supply chain performance from today's logistics data. The insights gained here through evaluating an industry representative dataset apply generally to supply chains for aftermarket service parts. The simulation predicts that the stocking policy recommended by a simple strategy for inventory segmentation for consumable parts will not achieve the desired service metrics. An internal review board that meets monthly, and a quarterly customer acquisition policy, each degrade performance by imposing a periodic review policy on stock levels developed assuming a continuous review policy. This thesis compares the simple strategy to a sophisticated strategy for inventory segmentation, using simulation to demonstrate that with the latter, metrics can be achieved in one year, inventory investment lowered 20%, and buys for parts in low annual usage categories automated.
by Randolph L. Bradley.
M.Eng.in Logistics
APA, Harvard, Vancouver, ISO, and other styles
24

Uznanski, Slawosz. "Monte-Carlo simulation and contribution to understanding of Single-Event-Upset (SEU) mechanisms in CMOS technologies down to 20nm technological node." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10222/document.

Full text
Abstract:
Aggressive integrated-circuit density increase and power-supply scaling have propelled Single Event Effects to the forefront of reliability concerns in ground-based and space-bound electronic systems. This study focuses on the modeling of Single Event physical phenomena. To enable reliability assessment, a complete simulation platform named Tool suIte for rAdiation Reliability Assessment (TIARA) has been developed, which allows the sensitivity prediction of different digital circuits (SRAM, Flip-Flops, etc.) in different radiation environments and at different operating conditions (power supply voltage, altitude, etc.). TIARA has been extensively validated against experimental data for space and terrestrial radiation environments using different test vehicles manufactured by STMicroelectronics. Finally, the platform has been used during the design of rad-hard digital circuits and to provide insights into radiation-induced upset mechanisms down to the CMOS 20 nm technological node.
APA, Harvard, Vancouver, ISO, and other styles
25

Mirletz, Brian Tietz. "Adaptive Central Pattern Generators for Control of Tensegrity Spines with Many Degrees of Freedom." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1438865567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Bauer, Pavol. "Parallelism in Event-Based Computations with Applications in Biology." Doctoral thesis, Uppsala universitet, Tillämpad beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332009.

Full text
Abstract:
Event-based models find frequent usage in fields such as computational physics and biology, as they may contain both continuous and discrete state variables and may incorporate both deterministic and stochastic state transitions. If the state transitions are stochastic, computer-generated random numbers are used to obtain the model solution. This type of event-based computation is also known as Monte-Carlo simulation. In this thesis, I study different approaches to executing event-based computations on parallel computers. This ultimately allows users to retrieve their simulation results in a fraction of the original computation time. As system sizes grow continuously or models have to be simulated at longer time scales, this is a necessary approach for current computational tasks. More specifically, I propose several ways to asynchronously simulate such models on parallel shared-memory computers, for example using parallel discrete-event simulation or task-based computing. The particular event-based models studied herein find applications in systems biology, computational epidemiology and computational neuroscience. In the presented studies, the proposed methods allow for high efficiency of the parallel simulation, typically scaling well with the number of computer cores used. As the scaling typically depends on individual model properties, the studies also investigate which quantities have the greatest impact on the simulation performance. Finally, the presented studies include other insights into event-based computations, such as methods to estimate parameter sensitivity in stochastic models and to simulate models that include both deterministic and stochastic state transitions.
APA, Harvard, Vancouver, ISO, and other styles
27

Lavarenne, Jean. "Modelling framework for assessing nuclear regulatory effectiveness." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277145.

Full text
Abstract:
This thesis contributes to the effort launched after the Fukushima-Daiichi disaster to improve the robustness of national institutions involved in nuclear safety, given the role that the failing nuclear regulator played in the accident. The driving idea is to investigate how engineering techniques used in high-risk industries can be applied to institutions involved in nuclear safety to improve their robustness. The thesis focuses specifically on the Office for Nuclear Regulation (ONR), the British nuclear regulator, and its process for structured inspections. The first part of the thesis demonstrates that the hazard and operability (HAZOP) technique, used in the nuclear industry to identify hazards associated with an activity, can be adapted to qualitatively assess the robustness of organisational processes. The HAZOP method was applied to the ONR inspection process and led to the identification of five significant failures or errors. These are: failure to focus on an area/topic deserving regulatory attention; failure to evaluate an area/topic of interest; failure to identify a non-compliance; failure to identify the underlying issue, its full extent and/or safety significance; and failure to adequately share inspection findings. In addition, the study identified the main causal chains leading to each failure. The safeguards of the process, i.e. the mechanisms in place to prevent, detect, resolve and mitigate possible failures, were then analysed to assess the robustness of the inspection process. The principal safeguard found is the superintending inspector, who reviews inspection reports and debriefs inspectors after inspections. It was concluded that the inspection process is robust provided recruitment and training are excellent. However, given the predominant role of the superintending inspector, the robustness of the process could be improved by increasing the diversity of safeguards. Finally, suggestions for improvement were made, such as establishing a formal handover procedure between former and new site inspectors, formalising and generalising the shadowing scheme between inspectors, and setting minimum standards for inspection debriefs. These results were shared with ONR, which had reached the same conclusions independently, thus validating the new application of the HAZOP method. The second part of the thesis demonstrates that computational modelling techniques can be used to build digital twins of institutions involved in safety, which can then be used to assess their effectiveness. The knowledge gained from the HAZOP study was used in association with computational modelling techniques to build a digital twin of the ONR and its structured inspection process, along with a simple model of a nuclear plant. The model was validated using face-validity and predictive-validation processes, which respectively involved an experienced ONR inspector checking the validity of the model's procedures and decision-making processes, and comparing the model's output for oversight work done with data provided by the ONR. The effectiveness of the ONR was then evaluated using a scenario where a hypothetical, newly discovered phenomenon threatens the integrity of the plant, with ONR inspectors gradually learning and sharing new information about it. Monte-Carlo simulation was used to estimate the cost of regulatory oversight and the probability that the ONR model detects and resolves the issue introduced before it causes an accident.
Different arrangements were tested, in particular one with a superintending inspector reviewing inspection reports and one with a formal information-sharing process. For this scenario, these two improvements were found to have a similar impact on the success probability. However, the former achieves it at only half the cost.
APA, Harvard, Vancouver, ISO, and other styles
28

Calvez, Steven. "Development of reconstruction tools and sensitivity of the SuperNEMO demonstrator." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS285/document.

Full text
Abstract:
SuperNEMO is an experiment looking for the neutrinoless double beta decay in an effort to unveil the Majorana nature of the neutrino. The first module, called the demonstrator, is under construction and commissioning in the Laboratoire Souterrain de Modane. Its unique design combines tracking and calorimetry techniques. The demonstrator can study 7 kg of ⁸²Se, shaped into thin source foils. These source foils are surrounded by a wire chamber, allowing a three-dimensional reconstruction of the charged-particle tracks. The individual particle energies are then measured by a segmented calorimeter, composed of plastic scintillators coupled to photomultipliers. A magnetic field can be applied to the tracking volume in order to identify the charge of the particles. SuperNEMO is thus able to perform a full reconstruction of the event kinematics and to identify the nature of the particles involved: electrons, positrons, α particles or γ particles. In practice, the particle and event reconstruction relies on a variety of algorithms implemented in the dedicated SuperNEMO simulation and reconstruction software. The γ reconstruction is particularly challenging since γ particles do not leave tracks in the wire chamber and are only detected by the calorimeter, sometimes multiple times. Several γ-reconstruction approaches were explored during this thesis. This work led to the creation of a new algorithm optimizing the γ-reconstruction efficiency and improving the γ energy reconstruction. Other programs performing particle identification and the topological measurements relevant to an event were also developed. The value of the magnetic field was optimized for the 0νββ decay search, based on Monte-Carlo simulations. The performance of the magnetic shieldings and their impact on the shape of the magnetic field were estimated with measurements performed on small-scale magnetic coils. The SuperNEMO demonstrator is able to measure its own background contamination thanks to dedicated analysis channels. At the end of the first 2.5-year data-taking phase, the target activities of the main backgrounds should be measured accurately, and the ⁸²Se 2νββ half-life should be known with a 0.3 % total uncertainty. Unlike other double beta decay experiments relying solely on the two-electron energy sum, SuperNEMO has access to the full event kinematics and thus to more topological information. A multivariate analysis based on Boosted Decision Trees was shown to guarantee at least a 10 % increase in the sensitivity of the 0νββ decay search. After 2.5 years, and if no excess of 0νββ events is observed, the SuperNEMO demonstrator should be able to set a limit on the 0νββ half-life of T > 5.85 × 10²⁴ y, translating into a limit on the effective Majorana neutrino mass of mββ < 0.2 − 0.55 eV. Extrapolating this result to the full-scale SuperNEMO experiment, i.e. an exposure of 500 kg·y, the sensitivity would be raised to T > 10²⁶ y, or mββ < 40 − 110 meV.
APA, Harvard, Vancouver, ISO, and other styles
29

Estecahandy, Maïder. "Méthodes accélérées de Monte-Carlo pour la simulation d'événements rares. Applications aux Réseaux de Petri." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3008/document.

Full text
Abstract:
Les études de Sûreté de Fonctionnement (SdF) sur les barrières instrumentées de sécurité représentent un enjeu important dans de nombreux domaines industriels. Afin de pouvoir réaliser ce type d'études, TOTAL développe depuis les années 80 le logiciel GRIF. Pour prendre en compte la complexité croissante du contexte opératoire de ses équipements de sécurité, TOTAL est de plus en plus fréquemment amené à utiliser le moteur de calcul MOCA-RP du package Simulation. MOCA-RP permet d'analyser grâce à la simulation de Monte-Carlo (MC) les performances d'équipements complexes modélisés à l'aide de Réseaux de Petri (RP). Néanmoins, obtenir des estimateurs précis avec MC sur des équipements très fiables, tels que l'indisponibilité, revient à faire de la simulation d'événements rares, ce qui peut s'avérer être coûteux en temps de calcul. Les méthodes standard d'accélération de la simulation de Monte-Carlo, initialement développées pour répondre à cette problématique, ne semblent pas adaptées à notre contexte. La majorité d'entre elles ont été définies pour améliorer l'estimation de la défiabilité et/ou pour les processus de Markov. Par conséquent, le travail accompli dans cette thèse se rapporte au développement de méthodes d'accélération de MC adaptées à la problématique des études de sécurité se modélisant en RP et estimant notamment l'indisponibilité. D'une part, nous proposons l'Extension de la Méthode de Conditionnement Temporel visant à accélérer la défaillance individuelle des composants. D'autre part, la méthode de Dissociation ainsi que la méthode de ``Truncated Fixed Effort'' ont été introduites pour accroitre l'occurrence de leurs défaillances simultanées. Ensuite, nous combinons la première technique avec les deux autres, et nous les associons à la méthode de Quasi-Monte-Carlo randomisée. Au travers de diverses études de sensibilité et expériences numériques, nous évaluons leur performance, et observons une amélioration significative des résultats par rapport à MC. Par ailleurs, nous discutons d'un sujet peu familier à la SdF, à savoir le choix de la méthode à utiliser pour déterminer les intervalles de confiance dans le cas de la simulation d'événements rares. Enfin, nous illustrons la faisabilité et le potentiel de nos méthodes sur la base d'une application à un cas industriel
The dependability analysis of safety instrumented systems is an important industrial concern. To be able to carry out such safety studies, TOTAL has been developing the dependability software GRIF since the eighties. To take into account the increasing complexity of the operating context of its safety equipment, TOTAL is more frequently led to use the engine MOCA-RP of the GRIF Simulation package. Indeed, MOCA-RP makes it possible to estimate quantities associated with complex aging systems modeled in Petri nets thanks to standard Monte Carlo (MC) simulation. Nevertheless, deriving accurate estimators, such as the system unavailability, on very reliable systems involves rare event simulation, which requires very long computing times with MC. To address this issue, the common fast Monte Carlo methods do not seem appropriate: many of them were originally defined to improve only the estimate of the unreliability and/or are well suited only for Markovian processes. Therefore, the work accomplished in this thesis pertains to the development of acceleration methods adapted to the problem of performing safety studies modeled in Petri nets and estimating, in particular, the unavailability. More specifically, we propose the Extension of the "Méthode de Conditionnement Temporel" to accelerate the individual failure of the components, and we introduce the Dissociation Method as well as the Truncated Fixed Effort Method to increase the occurrence of their simultaneous failures. Then, we combine the first technique with the two other ones, and we also associate them with the Randomized Quasi-Monte Carlo method. Through different sensitivity studies and benchmark experiments, we assess the performance of the acceleration methods and observe a significant improvement of the results compared with MC. Furthermore, we discuss the choice of the confidence interval method to be used when considering rare event simulation, which is an unfamiliar topic in the field of dependability. Last, an application to an industrial case illustrates the potential of our solution methodology.
APA, Harvard, Vancouver, ISO, and other styles
30

Weulersse, Cécile. "Développement et validation d’outils Monte-Carlo pour la prédiction des basculements logiques induits par les radiations dans les mémoires Sram très largement submicroniques." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20221.

Full text
Abstract:
Les particules de l'environnement radiatif naturel sont responsables de dysfonctionnements dans les systèmes électroniques. Dans le cas d'applications critiques nécessitant une très haute fiabilité, il est primordial de répondre aux impératifs de sûreté de fonctionnement. Pour s'en assurer et, le cas échéant, dimensionner les protections de manière adéquate, il est nécessaire de disposer d'outils permettant d'évaluer la sensibilité de l'électronique vis-à-vis de ces perturbations.L'objectif de ce travail est le développement d'outils à destination des ingénieurs pour la prédiction des aléas logiques induits par les radiations dans les mémoires SRAM. Dans un premier temps, des bases de données de réactions nucléaires sont construites à l'aide du code de simulation Geant4. Ces bases de données sont ensuite utilisées par un outil Monte-Carlo dont les prédictions sont comparées avec des résultats d'irradiations que nous avons effectuées sur des mémoires SRAM en technologie 90 et 65 nm. Enfin, des critères simplifiés reposant sur une amélioration de la méthode SIMPA nous permettent de proposer un outil d'ingénieur pour la prédiction de la sensibilité aux protons ou aux neutrons à partir des données expérimentales ions lourds. Cette méthode est validée sur des technologies de SRAM très largement submicroniques et permet l'estimation des évènements multiples, une problématique croissante pour les applications spatiales, avioniques et terrestres
Particles from the natural radiation environment can cause malfunctions in electronic systems. In the case of critical applications requiring very high reliability, it is crucial to fulfill the requirements of dependability. To ensure this and, if necessary, to design mitigations adequately, it is important to have tools for assessing the sensitivity of electronics to radiation. The purpose of this work is the development of prediction tools for radiation-induced soft errors, primarily intended for end users. In a first step, nuclear reaction databases were built using the Geant4 toolkit. These databases were then used by a pre-existing Monte-Carlo tool whose predictions were compared with experimental results obtained on 90 and 65 nm SRAM devices. Finally, simplified criteria enabled us to propose an engineering tool for the prediction of the proton or neutron sensitivity from heavy-ion data. This method was validated on deep submicron devices and allows the user to estimate multiple events, which are a crucial issue in space, avionic and ground applications.
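The abstract only sketches the method, so the following Python fragment is purely illustrative of the folding idea: a heavy-ion cross-section curve (here a Weibull fit with placeholder parameters) is averaged by Monte Carlo over the LET of the secondaries produced in proton-induced nuclear reactions, with a dummy sampler standing in for a Geant4-derived reaction database. None of the numbers correspond to the devices studied in the thesis.

```python
import math
import random

def weibull_xs(let, let_th=1.0, w=30.0, s=1.5, xs_sat=1e-8):
    """Heavy-ion SEU cross-section (cm^2/bit) versus LET (MeV.cm^2/mg), Weibull fit.
    All parameter values here are placeholders, not measured data."""
    if let <= let_th:
        return 0.0
    return xs_sat * (1.0 - math.exp(-(((let - let_th) / w) ** s)))

def proton_seu_xs(secondary_let_sampler, reactions_per_proton=1e-5, n_mc=100_000):
    """Crude Monte Carlo folding: average the heavy-ion response over the LET of the
    secondaries produced by proton-induced nuclear reactions."""
    acc = 0.0
    for _ in range(n_mc):
        acc += weibull_xs(secondary_let_sampler())       # one sampled recoil/fragment
    return reactions_per_proton * acc / n_mc

def sample_secondary_let():
    """Toy exponential LET spectrum standing in for a real reaction database."""
    return random.expovariate(1.0 / 5.0)                  # mean LET ~ 5 MeV.cm^2/mg

print(f"estimated proton SEU cross-section: {proton_seu_xs(sample_secondary_let):.3e} cm^2/bit")
```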
APA, Harvard, Vancouver, ISO, and other styles
31

Walter, Clément. "Using Poisson processes for rare event simulation." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC304/document.

Full text
Abstract:
Cette thèse est une contribution à la problématique de la simulation d'événements rares. A partir de l'étude des méthodes de Splitting, un nouveau cadre théorique est développé, indépendant de tout algorithme. Ce cadre, basé sur la définition d'un processus ponctuel associé à toute variable aléatoire réelle, permet de définir des estimateurs de probabilités, quantiles et moments sans aucune hypothèse sur la variable aléatoire. Le caractère artificiel du Splitting (sélection de seuils) disparaît et l'estimateur de la probabilité de dépasser un seuil est en fait un estimateur de la fonction de répartition jusqu'au seuil considéré. De plus, les estimateurs sont basés sur des processus ponctuels indépendants et identiquement distribués et permettent donc l'utilisation de machine de calcul massivement parallèle. Des algorithmes pratiques sont ainsi également proposés.Enfin l'utilisation de métamodèles est parfois nécessaire à cause d'un temps de calcul toujours trop important. Le cas de la modélisation par processus aléatoire est abordé. L'approche par processus ponctuel permet une estimation simplifiée de l'espérance et de la variance conditionnelles de la variable aléaoire résultante et définit un nouveau critère d'enrichissement SUR adapté aux événements rares
This thesis addresses the issue of extreme event simulation. Starting from an original understanding of Splitting methods, a new theoretical framework is proposed, independent of any particular algorithm. This framework is based on a point process associated with any real-valued random variable and makes it possible to define probability, quantile and moment estimators without any hypothesis on this random variable. The artificial selection of thresholds in Splitting vanishes, and the estimator of the probability of exceeding a threshold is in fact an estimator of the whole cumulative distribution function up to the given threshold. These estimators are based on the simulation of independent and identically distributed replicas of the point process, so they allow for the use of massively parallel computer clusters. Suitable practical algorithms are thus proposed. Finally, it can happen that these advanced statistics still require too many samples. In this context, the computer code is considered as a random process with known distribution. The point process framework makes it possible to handle this additional source of uncertainty and to estimate easily the conditional expectation and variance of the resulting random variable. It also defines new SUR enrichment criteria designed for extreme event probability estimation.
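As an aside, the point-process view described above underlies estimators of the "last particle" type, in which the smallest of n replicas is repeatedly regenerated above the current minimum until the target threshold is reached. The sketch below is a minimal Python illustration on a toy problem (a standard normal target and a hand-written Metropolis kernel); it is not the thesis code, and the kernel, sample sizes and burn-in length are all illustrative choices.

```python
import math
import random

def last_particle_estimate(sample, mcmc_step, q, n=100, burn=20):
    """Estimate p = P(X > q) by repeatedly regenerating the lowest of n 'particles'
    above the running minimum; after M regenerations, p_hat = (1 - 1/n)^M."""
    particles = [sample() for _ in range(n)]
    m = 0
    while min(particles) < q:
        i = min(range(n), key=lambda k: particles[k])        # index of the minimum
        threshold = particles[i]
        j = random.choice([k for k in range(n) if k != i])   # restart from another particle
        x = particles[j]
        for _ in range(burn):                                # move it, conditioned on X > threshold
            x = mcmc_step(x, threshold)
        particles[i] = x
        m += 1
    return (1.0 - 1.0 / n) ** m

def normal_step(x, threshold, scale=0.5):
    """Metropolis move leaving the standard normal restricted to (threshold, inf) invariant."""
    y = x + random.gauss(0.0, scale)
    if y > threshold and random.random() < min(1.0, math.exp((x * x - y * y) / 2.0)):
        return y
    return x

p_hat = last_particle_estimate(lambda: random.gauss(0.0, 1.0), normal_step, q=4.0)
print(f"P(X > 4) ~ {p_hat:.2e}   (exact value is about 3.2e-05)")
```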
APA, Harvard, Vancouver, ISO, and other styles
32

Petersson, Hannah. "TWIST : Twelve Bar Intelligently Simulated Tensegrity." Thesis, Luleå tekniska universitet, Rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-72455.

Full text
Abstract:
One of the biggest challenges of putting a robot on an extraterrestrial surface is the entry, descent and landing. By making the robot impact resistant, the need for landing thrusters and parachutes is reduced, lowering the weight and cost of an interplanetary robotic mission. The Dynamic Tensegrity Research Lab at the NASA Ames Research Center in Mountain View, California, is currently doing research on tensegrity robots, which consist of bars and cables forming a complex dynamic system. With motors on the cables, the system can shift its center of mass to create a "rolling" locomotion and explore remote and dangerous areas. For octahedron tensegrities with 12 bars, intuitive locomotion patterns have been explored previously. The findings included difficulties in keeping the momentum, resulting in the robot getting stuck. In this thesis, over 7,000 configurations of a central pattern generator were tested. The parameters were generated with the Monte Carlo method, with the aim of allowing the robot to keep its momentum in the motion. The resulting locomotion behavior was simulated in the NASA Tensegrity Robotic Toolkit. With the method described above, a central pattern generator was found for a 12-bar octahedron tensegrity 0.5 m in diameter, capable of moving 10 m during the course of a 60 s simulation. This is about four times faster than traditional rovers such as NASA's Curiosity, indicating the need for smaller, faster robots in addition to traditional types. This, together with the impact resistance resulting in a capability of moving in difficult terrain, makes this type of robot an integral part of any future space exploration mission.
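The search procedure described above amounts to sampling candidate controller parameters at random and keeping the best-scoring one. The snippet below sketches that loop in Python; the parameter ranges, the number of actuated cables and, above all, the evaluation function are placeholders, since the real fitness evaluation requires running the NASA Tensegrity Robotics Toolkit simulation.

```python
import random

N_ACTUATORS = 24            # assumed number of actuated cables on a 12-bar robot

def sample_cpg_params():
    """Draw one candidate central-pattern-generator configuration at random."""
    return {
        "amplitude": [random.uniform(0.0, 0.1) for _ in range(N_ACTUATORS)],   # m
        "frequency": random.uniform(0.2, 2.0),                                 # Hz
        "phase":     [random.uniform(0.0, 6.283) for _ in range(N_ACTUATORS)], # rad
    }

def evaluate(params):
    """Placeholder objective.  In the thesis this would be a full NTRT simulation
    returning the distance travelled in 60 s; here it is only a dummy score."""
    return sum(params["amplitude"]) * params["frequency"] * random.random()

def monte_carlo_search(n_trials=7000):
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = sample_cpg_params()
        score = evaluate(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

params, score = monte_carlo_search()
print(f"best (dummy) score: {score:.3f}")
```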
APA, Harvard, Vancouver, ISO, and other styles
33

Martinez, Homero. "Measurement of the Z boson differential cross-section in transverse momentum in the electron-positron channel with the ATLAS detector at LHC." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00952940.

Full text
Abstract:
This work presents the measurement of the differential cross-section of the Z boson in transverse momentum (ptZ), in the electron-positron decay channel, with the ATLAS detector at the LHC. The measurement uses 4.64 fb⁻¹ of proton-proton collision data recorded in 2011 at a centre-of-mass energy of 7 TeV. The result is combined with an independent measurement performed in the muon-antimuon channel. The measurement extends up to ptZ = 800 GeV and has a typical uncertainty of 0.5 % for ptZ < 60 GeV, rising to 5 % towards the end of the spectrum. The measurement is compared with theoretical models and with predictions from Monte Carlo generators.
APA, Harvard, Vancouver, ISO, and other styles
34

Camacho, Valle Alfredo. "Credit risk modeling in a semi-Markov process environment." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/credit-risk-modeling-in-a-semimarkov-process-environment(ad56ed0b-047f-44df-be68-6accef7544ff).html.

Full text
Abstract:
In recent times, credit risk analysis has grown to become one of the most important problems dealt with in the mathematical finance literature. Fundamentally, the problem deals with estimating the probability that an obligor defaults on their debt in a certain time. To obtain such a probability, several methods have been developed which are regulated by the Basel Accord. This establishes a legal framework for dealing with credit and market risks, and empowers banks to perform their own methodologies according to their interests under certain criteria. Credit risk analysis is founded on the rating system, which is an assessment of the capability of an obligor to make its payments in full and on time, in order to estimate risks and make investor decisions easier. Credit risk models can be classified into several different categories. In structural form models (SFM), which are founded on the Black & Scholes theory for option pricing and the Merton model, it is assumed that default occurs if a firm's market value is lower than a threshold, most often its liabilities; this is clearly an unrealistic assumption. Factor models (FM) attempt to predict the random default time by assuming a hazard rate based on latent exogenous and endogenous variables. Reduced form models (RFM) mainly focus on the accuracy of the probability of default (PD), to such an extent that it is given more importance than an intuitive economic interpretation. Portfolio reduced form models (PRFM) belong to the RFM family, and were developed to overcome the SFM's difficulties. Most of these models are based on the assumption of an underlying Markovian process, either in discrete or continuous time. For a discrete process, the main information is contained in a transition matrix, from which we obtain migration probabilities. However, previous analysis has found that this approach suffers from embedding problems. The continuous time Markov process (CTMP) has its main information contained in a matrix Q of constant instantaneous transition rates between states. Both approaches assume that the future depends only on the present, though previous empirical analysis has shown that the probability of changing rating depends on the time a firm has held the same rating. To address this difficulty, we model the PD with the continuous time semi-Markov process (CTSMP), which relaxes the exponential waiting time distribution assumption of the Markovian analogue. In this work we have relaxed the constant transition rate assumption and assumed that the rate depends on the residence time. We have derived the CTSMP forward integral and differential equations and the corresponding equations for the particular cases of exponential, gamma and power law waiting time distributions. We have also obtained a numerical solution of the migration probability by the Monte Carlo method and compared the results with the Markovian models in discrete and continuous time, and with the discrete time semi-Markov process. We have focused on firms from the U.S.A. and Canada classified in the financial sector according to the Global Industry Classification Standard, and we have concluded that the gamma and Weibull distributions provide the best fit.
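To make the simulation idea concrete, the following toy Python sketch estimates a default probability by sampling continuous-time semi-Markov rating paths with gamma-distributed sojourn times. The three-state rating system, the embedded transition matrix and the gamma parameters are invented for illustration and are unrelated to the data studied in the thesis.

```python
import random

STATES = ["A", "B", "D"]            # toy ratings; "D" = default (absorbing)
# Embedded jump-chain transition probabilities (rows sum to 1), made up.
P = {"A": {"B": 0.9, "D": 0.1},
     "B": {"A": 0.6, "D": 0.4}}
# Gamma sojourn-time parameters (shape, scale) per current state, made up.
SOJOURN = {"A": (2.0, 1.5), "B": (1.5, 1.0)}

def simulate_rating_path(start, horizon):
    """One continuous-time semi-Markov path; returns the rating held at 'horizon'."""
    state, t = start, 0.0
    while state != "D":
        shape, scale = SOJOURN[state]
        t += random.gammavariate(shape, scale)      # residence time in 'state'
        if t > horizon:
            return state
        r, cum = random.random(), 0.0
        for nxt, p in P[state].items():             # draw the next rating
            cum += p
            if r < cum:
                state = nxt
                break
    return "D"

def default_probability(start="A", horizon=5.0, n_paths=100_000):
    hits = sum(simulate_rating_path(start, horizon) == "D" for _ in range(n_paths))
    return hits / n_paths

print(f"P(default within 5y | A) ~ {default_probability():.4f}")
```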
APA, Harvard, Vancouver, ISO, and other styles
35

Dewji, Shaheen Azim. "Assessing internal contamination after a radiological dispersion device event using a 2x2-inch sodium-iodide detector." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Reinhardt, Aleks. "Computer simulation of the homogeneous nucleation of ice." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:9ec0828b-df99-42e1-8694-14786d7578b9.

Full text
Abstract:
In this work, we wish to determine the free energy landscape and the nucleation rate associated with the process of homogeneous ice nucleation. To do this, we simulate the homogeneous nucleation of ice with the mW monatomic model of water and with all-atom models of water using primarily the umbrella sampling rare event method. We find that the use of the mW model of water, which has simpler dynamics compared to all-atom models of water, but is nevertheless surprisingly good at reproducing experimental data, results in very reasonable agreement with classical nucleation theory, in contrast to some previous simulations of homogeneous ice nucleation. We suggest that previous simulations did not observe the lowest free energy pathway in order parameter space because of their use of global order parameters, leading to a deviation from classical nucleation theory predictions. Whilst monatomic water can nucleate reasonably quickly, all-atom models of water are considerably more difficult to simulate, primarily because of their slow dynamics of ice growth and the fact that standard order parameters do not work well in driving nucleation when such models are being used. In this thesis, we describe a local, rotationally invariant order parameter that is capable of growing ice homogeneously in a biassed simulation without the unnatural effects introduced by global order parameters, and without leading to non-physical chain-like growth of 'ice' clusters that results from a naïve implementation of the standard Steinhardt-Ten Wolde order parameter. We have successfully used this order parameter to force the growth of ice clusters in simulations of all-atom models of water. However, although ice growth can be achieved, equilibrating simulations with all-atom models of water is extremely difficult. We describe several approaches to speeding up the equilibration in all-atom models of water to enable the computation of free energy profiles for homogeneous ice nucleation.
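For readers unfamiliar with the umbrella sampling technique mentioned above, the toy Python sketch below biases a Metropolis walk along a one-dimensional order parameter with a harmonic restraint and then removes the bias from the sampled histogram. The "free-energy" profile, the force constant and the window centre are invented; a real ice-nucleation study uses many windows, a molecular model and a proper reweighting scheme such as WHAM.

```python
import math
import random

def free_energy(q):
    """Toy double-well free-energy profile along a nucleation order parameter Q (kT units)."""
    return 0.002 * (q - 10.0) ** 2 * (q - 60.0) ** 2 / 100.0

def umbrella_window(q_center, k=0.05, n_steps=200_000, step=2.0):
    """Metropolis sampling of Q under the harmonic bias w(Q) = k/2 (Q - q_center)^2."""
    def biased(q):
        return free_energy(q) + 0.5 * k * (q - q_center) ** 2
    q, hist = q_center, {}
    for _ in range(n_steps):
        q_new = q + random.uniform(-step, step)
        if random.random() < math.exp(min(0.0, biased(q) - biased(q_new))):
            q = q_new
        b = round(q)                                  # crude integer binning
        hist[b] = hist.get(b, 0) + 1
    return hist

# Unbias a single window: F(Q) ~ -ln P_biased(Q) - w(Q) + constant.
hist = umbrella_window(q_center=35.0)
for q_bin in sorted(hist):
    if hist[q_bin] > 100:                             # skip poorly sampled bins
        f = -math.log(hist[q_bin]) - 0.5 * 0.05 * (q_bin - 35.0) ** 2
        print(q_bin, round(f, 2))
```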
APA, Harvard, Vancouver, ISO, and other styles
37

Bignami, Luca. "Le tecniche di Risk Management per i progetti di costruzione: il caso studio della riqualificazione e recupero funzionale dell'ex-Manifattura Tabacchi per la realizzazione del Tecnopolo di Bologna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Find full text
Abstract:
The discipline of Risk Management has recently been gaining significance and weight among public and private organizations. In the field of public construction in particular, the implementation of structured Risk Management processes could lead to significant efficiency gains in the construction and management process. The objective of this thesis is to verify how the results of a structured application of a Risk Management process can be used by the management team to pursue more informed, precise and substantiated choices than traditional methods of process management allow. The analysis starts from a comparative study of the Risk Management methods and technical standards proposed at the international level. The results obtained are then applied to the case study of the project to establish the Tecnopolo di Bologna on the area known as the Ex-Manifattura Tabacchi. The application of the techniques to the case study is structured as a complete execution of the Risk Assessment process. The Identification phase is carried out through a literature review and expert judgement, and concludes with a categorization of the risks by means of a Risk Breakdown Structure. The Risk Quantification phase is carried out through a first qualitative analysis, with an on-line questionnaire administered to a panel of competent subjects, followed by a quantitative analysis performed with the "RiskyProject®" software to run a Monte Carlo analysis and a sensitivity analysis. Finally, some possible treatment measures specific to a risk identified as a priority are examined. The results show that it is possible to obtain, at the preliminary stage, an informed description of the project uncertainties, and that this awareness can be used to improve the quality and effectiveness of the whole process.
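As a hedged illustration of the quantitative step described above (not of the RiskyProject® software itself), the following Python sketch runs a Monte Carlo cost-risk analysis with triangular activity costs and discrete risk events, and reports the mean, P50 and P80 of the total cost. All figures are invented.

```python
import random
import statistics

ACTIVITIES = [  # (min, most likely, max) cost in k-euro; illustrative figures only
    (300, 400, 600), (150, 200, 320), (500, 650, 900),
]
RISKS = [  # (probability of occurrence, min impact, max impact) in k-euro
    (0.30, 50, 200), (0.10, 100, 500),
]

def one_run():
    """One Monte Carlo realisation of the total project cost."""
    cost = sum(random.triangular(lo, hi, mode) for lo, mode, hi in ACTIVITIES)
    for p, lo, hi in RISKS:
        if random.random() < p:                  # the risk event occurs in this run
            cost += random.uniform(lo, hi)
    return cost

runs = sorted(one_run() for _ in range(20_000))
p50, p80 = runs[len(runs) // 2], runs[int(0.8 * len(runs))]
print(f"mean={statistics.mean(runs):.0f}  P50={p50:.0f}  P80={p80:.0f} (k-euro)")
```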
APA, Harvard, Vancouver, ISO, and other styles
38

Malherbe, Victor. "Multi-scale modeling of radiation effects for emerging space electronics : from transistors to chips in orbit." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0753/document.

Full text
Abstract:
En raison de leur impact sur la fiabilité des systèmes, les effets du rayonnement cosmique sur l’électronique ont été étudiés dès le début de l’exploration spatiale. Néanmoins, de récentes évolutions industrielles bouleversent les pratiques dans le domaine, les technologies standard devenant de plus en plus attrayantes pour réaliser des circuits durcis aux radiations. Du fait de leurs fréquences élevées, des nouvelles architectures de transistor et des temps de durcissement réduits, les puces fabriquées suivant les derniers procédés CMOS posent de nombreux défis. Ce travail s’attelle donc à la simulation des aléas logiques permanents (SEU) et transitoires (SET), en technologies FD-SOI et bulk Si avancées. La réponse radiative des transistors FD-SOI 28 nm est tout d’abord étudiée par le biais de simulations TCAD, amenant au développement de deux modèles innovants pour décrire les courants induits par particules ionisantes en FD-SOI. Le premier est principalement comportemental, tandis que le second capture des phénomènes complexes tels que l’amplification bipolaire parasite et la rétroaction du circuit, à partir des premiers principes de semi-conducteurs et en accord avec les simulations TCAD poussées.Ces modèles compacts sont alors couplés à une plateforme de simulation Monte Carlo du taux d’erreurs radiatives (SER) conduisant à une large validation sur des données expérimentales recueillies sous faisceau de particules. Enfin, des études par simulation prédictive sont présentées sur des cellules mémoire et portes logiques en FD-SOI 28 nm et bulk Si 65 nm, permettant d’approfondir la compréhension des mécanismes contribuant au SER en orbite des circuits intégrés modernes
The effects of cosmic radiation on electronics have been studied since the early days of space exploration, given the severe reliability constraints arising from harsh space environments. However, recent evolutions in the space industry landscape are changing radiation effects practices and methodologies, with mainstream technologies becoming increasingly attractive for radiation-hardened integrated circuits. Due to their high operating frequencies, new transistor architectures, and short rad-hard development times, chips manufactured in latest CMOS processes pose a variety of challenges, both from an experimental standpoint and for modeling perspectives. This work thus focuses on simulating single-event upsets and transients in advanced FD-SOI and bulk silicon processes.The soft-error response of 28 nm FD-SOI transistors is first investigated through TCAD simulations, allowing to develop two innovative models for radiation-induced currents in FD-SOI. One of them is mainly behavioral, while the other captures complex phenomena, such as parasitic bipolar amplification and circuit feedback effects, from first semiconductor principles and in agreement with detailed TCAD simulations.These compact models are then interfaced to a complete Monte Carlo Soft-Error Rate (SER) simulation platform, leading to extensive validation against experimental data collected on several test vehicles under accelerated particle beams. Finally, predictive simulation studies are presented on bit-cells, sequential and combinational logic gates in 28 nm FD-SOI and 65 nm bulk Si, providing insights into the mechanisms that contribute to the SER of modern integrated circuits in orbit
APA, Harvard, Vancouver, ISO, and other styles
39

Weydert, Carole. "Recherche d'un boson de Higgs chargé avec le détecteur ATLAS : de la théorie à l'expérience." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00629349.

Full text
Abstract:
This thesis sits halfway between phenomenology and experimental particle physics. In the first part, we describe a higher-order cross-section calculation in perturbation theory, as well as its implementation in a Monte Carlo event generator. We present the first-order corrections in quantum chromodynamics for the production of a charged Higgs boson in association with a top quark at the LHC, using the Catani-Seymour subtraction formalism. Our independent code allowed us to validate the results given by MC@NLO, and we carried out studies of various contributions to the systematic errors arising from event simulation. The process was also implemented for the POWHEG generator. Because of the insufficient amount of data available at the end of 2010 (the ATLAS detector had accumulated 35 pb⁻¹ of proton-proton collision data), the charged Higgs production process could not be studied and we turned to the characterization of backgrounds. In this context, the production of a W boson in association with a top quark turns out to be important to know. In the second part of this thesis, we set up an analysis dedicated to the semileptonic Wt channel, including statistical and systematic effects, for which we focus in particular on the effect of the different parameterizations of the proton content. Since the Wt process is unobservable at the Tevatron, we are able to set a limit on its production cross-section for the first time.
APA, Harvard, Vancouver, ISO, and other styles
40

Mathonat, Romain. "Rule discovery in labeled sequential data : Application to game analytics." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI080.

Full text
Abstract:
Exploiter des jeux de données labelisés est très utile, non seulement pour entrainer des modèles et mettre en place des procédures d'analyses prédictives, mais aussi pour améliorer la compréhension d'un domaine. La découverte de sous-groupes a été l'objet de recherches depuis deux décennies. Elle consiste en la découverte de règles couvrants des ensembles d'objets ayant des propriétés intéressantes, qui caractérisent une classe cible donnée. Bien que de nombreux algorithmes de découverte de sous-groupes aient été proposés à la fois dans le cas des données transactionnelles et numériques, la découverte de règles dans des données séquentielles labelisées a été bien moins étudiée. Dans ce contexte, les stratégies d'exploration exhaustives ne sont pas applicables à des cas d'application rééls, nous devons donc nous concentrer sur des approches heuristiques. Dans cette thèse, nous proposons d'appliquer des modèles de bandit manchot ainsi que la recherche arborescente de Monte Carlo à l'exploration de l'espace de recherche des règles possibles, en utilisant un compromis exploration-exploitation, sur différents types de données tels que les sequences d'ensembles d'éléments, ou les séries temporelles. Pour un budget temps donné, ces approches trouvent un ensemble des top-k règles decouvertes, vis-à-vis de la mesure de qualité choisie. De plus, elles ne nécessitent qu'une configuration légère, et sont indépendantes de la mesure de qualité utilisée. A notre connaissance, il s'agit de la première application de la recherche arborescente de Monte Carlo au cas de la fouille de données séquentielles labelisées. Nous avons conduit des études appronfondies sur différents jeux de données pour illustrer leurs plus-values, et discuté leur résultats quantitatifs et qualitatifs. Afin de valider le bon fonctionnement d'un de nos algorithmes, nous proposons un cas d'utilisation d'analyse de jeux vidéos, plus précisémment de matchs de Rocket League. La decouverte de règles intéressantes dans les séquences d'actions effectuées par les joueurs et leur exploitation dans un modèle de classification supervisée montre l'efficacité et la pertinence de notre approche dans le contexte difficile et réaliste des données séquentielles de hautes dimensions. Elle permet la découverte automatique de techniques de jeu, et peut être utilisée afin de créer de nouveaux modes de jeu, d'améliorer le système de classement, d'assister les commentateurs de "e-sport", ou de mieux analyser l'équipe adverse en amont, par exemple
It is extremely useful to exploit labeled datasets not only to learn models and perform predictive analytics but also to improve our understanding of a domain and its available target classes. The subgroup discovery task has been considered for more than two decades. It concerns the discovery of rules covering sets of objects having interesting properties, e.g., characterizing a given target class. Though many subgroup discovery algorithms have been proposed for both transactional and numerical data, discovering rules within labeled sequential data has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications and we have to look for heuristic approaches. In this thesis, we propose to apply bandit models and Monte Carlo Tree Search to explore the search space of possible rules using an exploration-exploitation trade-off, on different data types such as sequences of itemsets or time series. For a given budget, they find a collection of top-k best rules in the search space w.r.t. the chosen quality measure. They require a light configuration and are independent of the quality measure used for pattern scoring. To the best of our knowledge, this is the first time that the Monte Carlo Tree Search framework has been exploited in a sequential data mining setting. We have conducted thorough and comprehensive evaluations of our algorithms on several datasets to illustrate their added value, and we discuss their qualitative and quantitative results. To assess the added value of one of our algorithms, we propose a use case of game analytics, more precisely Rocket League match analysis. Discovering interesting rules in sequences of actions performed by players and using them in a supervised classification model shows the efficiency and the relevance of our approach in the difficult and realistic context of high-dimensional data. It supports the automatic discovery of skills and it can be used to create new game modes, to improve the ranking system, to help e-sport commentators, or to better analyse opponent teams, for example.
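The bandit ingredient mentioned above can be illustrated in a few lines. The Python sketch below runs UCB1 over a handful of candidate rules whose (hidden, made-up) qualities are observed through noisy evaluations; the actual systems in the thesis explore a structured rule space with Monte Carlo Tree Search rather than a fixed set of arms.

```python
import math
import random

# Toy setting: each "arm" is a candidate rule; pulling an arm evaluates the rule
# on a random data sample and returns a noisy quality score in [0, 1].
TRUE_QUALITY = [0.30, 0.55, 0.40, 0.70, 0.25]   # hidden, made-up rule qualities

def pull(arm):
    return max(0.0, min(1.0, random.gauss(TRUE_QUALITY[arm], 0.15)))

def ucb1(n_rounds=2000):
    n_arms = len(TRUE_QUALITY)
    counts, sums = [0] * n_arms, [0.0] * n_arms
    for t in range(1, n_rounds + 1):
        if t <= n_arms:                      # play each arm once first
            arm = t - 1
        else:                                # exploration-exploitation trade-off
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return max(range(n_arms), key=lambda a: sums[a] / counts[a])

print("best candidate rule:", ucb1())
```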
APA, Harvard, Vancouver, ISO, and other styles
41

Doan, Thi Kieu Oanh. "Mesure de la section efficace différentielle de production du boson Z se désintégrant en paires électron-position, dans l'expérience ATLAS." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00846877.

Full text
Abstract:
The first measurement of the phi*_eta spectrum of the Z boson at 7 TeV was performed in this thesis. This variable probes the production dynamics of the Z in a fine-grained way. The full data sample recorded by ATLAS in 2011 was used, corresponding to 4.7 fb⁻¹ of integrated luminosity. The results of this measurement are published in Ref. [18], based on the internal note Ref. [69]. The differential cross-section of Z->ee as a function of phi*_eta was measured and compared to fixed-order perturbative calculations, with and without resummation in the region of small phi*_eta. The RESBOS code provides the best description of the data, but it is unable to reproduce the detailed shape of the measured cross-section to better than 4 %. The differential cross-section was also compared to the predictions of various Monte Carlo generators interfaced with a parton shower algorithm. The best descriptions of the measured phi*_eta spectrum are given by the SHERPA and POWHEG+PYTHIA8 generators. The precise measurement of the differential cross-section in phi*_eta provides valuable information for the tuning of Monte Carlo codes. The typical experimental precision of this measurement (~0.5 %) is ten times better than the precision of the theoretical calculations, so it is also valuable for constraining the theory. The ptZ spectrum was measured as well, in order to quantify the systematic uncertainty of that measurement using the large statistics of the data sample. This makes it possible to compare two measurements that both address the transverse momentum of the Z boson. Over most of the phi*_eta range, the systematic uncertainty of the ptZ measurement is twice as large as that of the phi*_eta measurement. This comparison confirms the interest of the phi*_eta variable. The results presented in this thesis have many implications for future studies. Tuning Monte Carlo generators using the results of the precise measurement of the phi*_eta spectrum will minimize the uncertainty on their parameters. A measurement of the cross-section doubly differential in ptZ and phi*_eta would be interesting in order to better understand the correlation between these two variables. The precise measurement of the ptZ spectrum using the phi*_eta variable can be applied to the ptW spectrum, and finer measurements of ptW are known to be important for a precise determination of the W boson mass. Moreover, a precise understanding of the ptZ spectrum is important for understanding the kinematic properties of Higgs boson production.
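For reference, the phi*_eta observable measured here is commonly defined from the lepton directions alone (the definition usually attributed to Banfi et al.), which is what makes it experimentally so precise. A small Python sketch of that definition, assuming the standard convention, is given below.

```python
import math

def phi_star_eta(eta_minus, phi_minus, eta_plus, phi_plus):
    """phi*_eta of a dilepton pair, computed from the lepton directions only
    (standard definition; all angles in radians)."""
    dphi = abs(phi_minus - phi_plus)
    if dphi > math.pi:                       # wrap the azimuthal difference into [0, pi]
        dphi = 2.0 * math.pi - dphi
    phi_acop = math.pi - dphi                # acoplanarity angle
    cos_theta_star = math.tanh((eta_minus - eta_plus) / 2.0)
    sin_theta_star = math.sqrt(1.0 - cos_theta_star ** 2)
    return math.tan(phi_acop / 2.0) * sin_theta_star

# Nearly back-to-back leptons give a small phi*_eta, correlated with a small pT(Z).
print(phi_star_eta(eta_minus=0.5, phi_minus=0.1, eta_plus=-0.3, phi_plus=0.1 + 3.05))
```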
APA, Harvard, Vancouver, ISO, and other styles
42

Lagnoux, Agnès. "Analyse des modeles de branchement avec duplication des trajectoires pour l'étude des événements rares." Toulouse 3, 2006. http://www.theses.fr/2006TOU30231.

Full text
Abstract:
Nous étudions, dans cette thèse, le modèle de branchement avec duplication des trajectoires d'abord introduit pour l'étude des événements rares destiné à accélérer la simulation. Dans cette technique, les échantillons sont dupliqués en R copies à différents niveaux pendant la simulation. L'optimisation de l'algorithme à coût fixé suggère de prendre les probabilités de transition entre les niveaux égales à une constante et de dupliquer un nombre égal à l'inverse de cette constante, nombre qui peut être non entier. Nous étudions d'abord la sensibilité de l'erreur relative entre la probabilité d'intérêt P(A) et son estimateur en fonction de la stratégie adoptée pour avoir des nombres de retirage entiers. Ensuite, puisqu'en pratique les probabilités de transition sont généralement inconnues (et de même pour les nombres de retirages), nous proposons un algorithme en deux étapes pour contourner ce problème. Des applications numériques et comparaisons avec d'autres modèles sont proposés
This thesis deals with the splitting method, first introduced in rare event analysis in order to speed up simulation. In this technique, the sample paths are split into R multiple copies at various stages during the simulation. For a fixed cost, the optimization of the algorithm suggests taking the transition probabilities between stages equal to some constant and resampling a number of subtrials equal to the inverse of that constant, a number which may be non-integer and, in practice, unknown and estimated. First, we study the sensitivity of the relative error between the probability of interest P(A) and its estimator depending on the strategy used to make the resampling numbers integers. Then, since in practice the transition probabilities are generally unknown (and so are the optimal resampling numbers), we propose a two-step algorithm to address that problem. Several numerical applications and comparisons with other models are proposed.
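A minimal fixed-splitting sketch in Python is given below for a toy problem (a driftless Gaussian random walk reaching a level within a bounded number of steps). The levels, the splitting factor R and the sample sizes are arbitrary illustrative choices; in the optimized scheme discussed above, R would be matched to the inverse of the per-level crossing probability.

```python
import random

LEVELS = [2.0, 4.0, 6.0, 8.0]      # intermediate thresholds, chosen by hand here
R = 3                               # copies made of every path that reaches a level
N_MAX = 30                          # maximum number of steps per path
N_START = 500

def advance(state, level):
    """Continue one path until it crosses 'level' or runs out of steps."""
    step, x = state
    while step < N_MAX:
        x += random.gauss(0.0, 1.0)
        step += 1
        if x >= level:
            return (step, x)        # crossing state, reused by the next stage
    return None                     # the path died before reaching the level

particles = [(0, 0.0)] * N_START
p_hat = 1.0
for level in LEVELS:
    survivors = [s for s in (advance(p, level) for p in particles) if s is not None]
    if not survivors:
        p_hat = 0.0
        break
    p_hat *= len(survivors) / len(particles)   # conditional crossing probability
    particles = survivors * R                  # fixed splitting: R copies per survivor

print(f"estimated P(max exceeds {LEVELS[-1]} within {N_MAX} steps): {p_hat:.2e}")
```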
APA, Harvard, Vancouver, ISO, and other styles
43

Zu, Xiaomin. "Monte-Carlo event generators at next-to-leading order." 2003. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-343/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kolář, Karel. "Zkoumání závislosti výpočtů v konečném řádu poruchové QCD na faktorizačním schématu." Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-312193.

Full text
Abstract:
Title: Investigation of the factorization scheme dependence of finite order perturbative QCD calculations Author: Karel Kolář Institute: Institute of Particle and Nuclear Physics Supervisor of the doctoral thesis: prof. Jiří Chýla, CSc., Institute of Physics of the Academy of Sciences of the Czech Republic Abstract: The main aim of this thesis is the investigation of phenomenological implications of the freedom in the choice of the factorization scheme for the description of hard collisions with the potential application for an improvement of current NLO Monte Carlo event generators. We analyze the freedom associated with the definition of parton distribution functions and we derive general formulae governing the dependence of parton distribution functions and hard scattering cross-sections on unphysical quantities specifying the renormalization and factorization procedure. The issue of the specification of factorization schemes via the corresponding higher order splitting functions is discussed in detail. The main attention is paid to the so called ZERO factorization scheme, which allows the construction of consistent NLO Monte Carlo event generators in which initial state parton showers can be taken formally at the LO. Unfortunately, it has turned out that the practical applicability of the ZERO...
APA, Harvard, Vancouver, ISO, and other styles
45

Parker, Jason Mascagni Michael. "Extensions and optimizations to the scalable, parallel random number generators library." 2003. http://etd.lib.fsu.edu/theses/available/etd-11182003-021957.

Full text
Abstract:
Thesis (M.S.)--Florida State University, 2003.
Advisor: Michael Mascagni, Florida State University, College of Arts and Sciences, Dept. of Computer Science. Title and description from dissertation home page (viewed Mar. 2, 2004). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
46

"Development of an event oriented Monte Carlo simulation and applications to problems of diffusion and reaction." Tulane University, 1988.

Find full text
Abstract:
This work presents the development of an event-oriented simulation method. The thesis describes a new procedure to manage the set of pending events, which is an integral part of an event-oriented discrete simulation. As an application of the method, the problem of reaction and diffusion in a specific zeolite structure is treated in detail. An event-oriented model was developed, as this allows for greater flexibility and increases accuracy as compared to a fixed-interval simulation method. As most future event set algorithms are not adequate to handle the large numbers of events generated by this simulation, a new event list algorithm, based on a B-tree, has been devised that offers superior insertion and retrieval capabilities. The simulation method was applied to the case of a reversible first-order reaction occurring within a particle with a pore geometry conforming to a three-dimensional rectangular grid. It was found that the product formation rate passes through a maximum as the pressure of the surroundings is increased. This finding was substantiated by analytical solution of differential equations describing the reaction-diffusion model. Two different modes of catalyst deactivation, site suppression and site blockage, were examined. Site suppression deactivates sites without affecting diffusional access to deactivated sites, while site blockage prevents further access to deactivated sites. For both modes, series and parallel deactivation mechanisms were considered. In the parallel site blockage mode, the edges of the particle are blocked first, thus deactivating the particle faster than the parallel site suppression mode. However, the series site blockage mode blocks the center of the particle, increasing the production as compared to the series site suppression mode. A more complex system, the reaction of toluene and methanol to form xylene isomers, was modeled. A plug flow heterogeneous reactor was simulated by using a series of single-particle simulations. In these simulations, the interior diffusion was decoupled from the transport of molecules at the surface of the particle into the bulk phase. This model has produced results consistent with various reported effects. The simulation correctly predicts increased para-xylene selectivity with unmodified particles. Also, the approach to thermodynamic equilibrium follows the correct reaction path.
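As an illustration of the event-oriented idea (not of the original code, and with a binary heap standing in for the B-tree developed in the thesis), the following Python sketch maintains a future event set and runs a toy hop/react loop driven by it.

```python
import heapq
import random

class EventList:
    """Future event set.  The thesis uses a B-tree for fast insertion and retrieval;
    a binary heap is used here only to keep the sketch short."""
    def __init__(self):
        self._heap = []
        self._seq = 0                               # tie-breaker for equal timestamps

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, self._seq, action))
        self._seq += 1

    def pop_next(self):
        time, _, action = heapq.heappop(self._heap)
        return time, action

    def __bool__(self):
        return bool(self._heap)

# Minimal event-oriented loop: molecules hop from site A to site B, where they may react.
events = EventList()
state = {"A": 100, "B": 0, "reacted": 0}
T_END = 10.0

def hop(t):
    if state["A"] > 0:
        state["A"] -= 1
        state["B"] += 1
        events.schedule(t + random.expovariate(5.0), hop)    # schedule the next hop

def react(t):
    if state["B"] > 0:
        state["B"] -= 1
        state["reacted"] += 1
    events.schedule(t + random.expovariate(1.0), react)      # reaction events keep firing

events.schedule(random.expovariate(5.0), hop)
events.schedule(random.expovariate(1.0), react)
while events:
    t, action = events.pop_next()
    if t > T_END:
        break
    action(t)
print(state)
```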
APA, Harvard, Vancouver, ISO, and other styles
47

Nahrvar, Shayan. "Discrete Event Simulation in the Preliminary Estimation Phase of Mega Projects: A Case Study of the Central Waterfront Revitalization Project." Thesis, 2010. http://hdl.handle.net/1807/24612.

Full text
Abstract:
The methodology of discrete-event simulation provides a promising alternative for analyzing complicated construction systems. Given the level of uncertainty that exists in the early estimation phase of mega-projects regarding cost and risk, project simulations have become a central part of decision-making and planning. In this thesis, an attempt is made to compare the output generated by a model constructed under the Monte Carlo framework with that of Discrete-Event Simulation, to determine the similarities and differences between the two methods. To achieve this, the Simphony modeling (DES) environment is used. The result is then compared to a Monte Carlo simulation conducted by Golder Associates.
APA, Harvard, Vancouver, ISO, and other styles
48

Gleisberg, Tanju [Verfasser]. "Automating methods to improve precision in Monte-Carlo event generation for particle colliders / vorgelegt von Tanju Gleisberg." 2008. http://d-nb.info/989878171/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sadeghi, Naimeh. "Combined Fuzzy and Probabilistic Simulation for Construction Management." Master's thesis, 2009. http://hdl.handle.net/10048/731.

Full text
Abstract:
Simulation has been used extensively for addressing probabilistic uncertainty in range estimating for construction projects. However, subjective and linguistically expressed information results in added non-probabilistic uncertainty in construction management. Fuzzy logic has been used successfully for representing such uncertainties in construction projects. In practice, an approach that can handle both random and fuzzy uncertainties in a risk assessment model is necessary. In this thesis, first, a Fuzzy Monte Carlo Simulation (FMCS) framework is proposed for risk analysis of construction projects. To verify the feasibility of the FMCS framework and demonstrate its main features, a cost range estimating template is developed and employed to estimate the cost of a highway overpass project. Second, a hybrid framework that considers both fuzzy and probabilistic uncertainty for discrete event simulation of construction projects is suggested. The application of the proposed framework is discussed using a real case study of a pipe spool fabrication shop.
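A minimal sketch of the combined idea, assuming one probabilistic input and one fuzzy input with a triangular membership function, is given below: the alpha-cuts of the fuzzy quantity are propagated by interval arithmetic while the probabilistic quantity is sampled by Monte Carlo, yielding an interval-valued mean cost at each membership level. The numbers are invented, and the real FMCS template is considerably richer.

```python
import random

def alpha_cut_triangular(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def fuzzy_monte_carlo_cost(n_samples=50_000, alphas=(0.0, 0.5, 1.0)):
    """Mean total cost as a fuzzy number: activity A is probabilistic (normal),
    activity B is fuzzy (triangular, expert-judged).  All values are made up."""
    results = {}
    for alpha in alphas:
        b_low, b_high = alpha_cut_triangular(80.0, 100.0, 130.0, alpha)
        lo_sum = hi_sum = 0.0
        for _ in range(n_samples):
            a = random.gauss(200.0, 20.0)      # probabilistic activity cost
            lo_sum += a + b_low                # optimistic end of the alpha-cut
            hi_sum += a + b_high               # pessimistic end of the alpha-cut
        results[alpha] = (lo_sum / n_samples, hi_sum / n_samples)
    return results

for alpha, (lo, hi) in fuzzy_monte_carlo_cost().items():
    print(f"alpha={alpha}: mean cost in [{lo:.1f}, {hi:.1f}]")
```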
Construction Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
50

Μαρμίδης, Γρηγόριος. "Πολυκριτηριακή βελτιστοποίηση της εκμετάλλευσης του αιολικού δυναμικού για τη σύνδεση αιολικών συστημάτων/πάρκων στα δίκτυα υψηλών τάσεων." Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/4785.

Full text
Abstract:
Η παρούσα διδακτορική διατριβή αναλύει και προτείνει κάποιες νέες λύσεις και μεθοδολογίες όσον αφορά στο πρόβλημα της βέλτιστης απομάστευσης ενέργειας από ένα αιολικό σύστημα. Στο πρώτο μέρος, προτείνεται μια νέα μεθοδολογία για τη βελτιστοποίηση της εκμετάλλευσης του ανέμου σε μια συγκεκριμένη περιοχή, καταλήγοντας στη βέλτιστη χωροταξική τοποθέτηση των ανεμογεννητριών σε ένα αιολικό πάρκο. Στο δεύτερο μέρος, ο σκοπός της βελτιστοποίησης είναι η μεγιστοποίηση της παραγόμενης ισχύος για κάθε τυχαία ταχύτητα ανέμου, μέσω της ανάπτυξης βέλτιστων τεχνικών ελέγχου. Και στις δύο περιπτώσεις, η ανάλυση γίνεται με τη χρήση μοντέλων τα οποία αναπτύχθηκαν στο περιβάλλον του προγράμματος Matlab και τον υπολογιστικών εργαλείων που παρέχει αυτό. Τα μοντέλα που δημιουργήθηκαν, μας οδήγησαν σε μια σειρά προσομοιώσεων οι οποίες επαλήθευσαν τη θεωρητική ανάλυση και επιβεβαίωσαν ότι οι πρωτότυπες λύσεις που προτείνονται δίνουν βελτιστοποιημένα αποτελέσματα σε σχέση με αντίστοιχες γνωστές αναλύσεις. Πιο συγκεκριμένα, το πρώτο μέρος αυτής της διατριβής επικεντρώνεται στη βέλτιστη επιλογή τόσο του αριθμού, όσο και της θέσης των ανεμογεννητριών σε μια συγκεκριμένη γεωγραφική περιοχή. Το κριτήριο που χρησιμοποιείται για την βελτιστοποίηση είναι η μέγιστη παραγωγή ενέργειας με το χαμηλότερο κόστος. Στην ανάλυση αυτή, εισάγεται μια νέα διαδικασία προσέγγισης αυτού του προβλήματος με τη χρήση της Μεθόδου Προσομοίωσης Μόντε Κάρλο. Στην παρούσα ανάλυση, όπως και σε παλιότερες αντίστοιχες, χρησιμοποιείται το ίδιο κριτήριο βελτιστοποίησης και γίνονται οι ίδιες παραδοχές, προκειμένου τα αποτελέσματα να είναι συγκρίσιμα. Τα αποτελέσματα που παρουσιάζονται είναι πολύ καλύτερα και όσον αφορά στην παραγόμενη ισχύ αλλά και το κριτήριο βελτιστοποίησης. Επιπρόσθετα, εξάγονται σημαντικά συμπεράσματα σχετικά με το μέγιστο αριθμό των ανεμογεννητριών και την χωροταξική τους τοποθέτηση, τα οποία επαληθεύουν ότι η προτεινόμενη μέθοδος είναι ένα λειτουργικό εργαλείο για την απόφαση τοποθέτησης ανεμογεννητριών σε ένα αιολικό πάρκο. Στο δεύτερο μέρος, που είναι και πιο εκτεταμένο, σκοπός είναι η σχεδίαση νέων ελεγκτών υψηλής απόδοσης, οι οποίοι επιτυγχάνουν τη μέγιστη παραγωγή ισχύος για συστήματα ανεμογεννητριών μεταβλητών στροφών. Πιο συγκεκριμένα, αναλύεται ένα σύστημα ανεμογεννήτριας το οποίο αποτελείται από επαγωγική γεννήτρια βραχυκυκλωμένου κλωβού που συνδέεται στο δίκτυο με πλήρως ελεγχόμενο ac/dc/ac μετατροπέα με IGBT στοιχεία. Αρχικά παρουσιάζεται μια ανάλυση των μεθόδων που χρησιμοποιούνται σήμερα για τη μεγιστοποίηση και τον έλεγχο της ισχύος. Με βάση αυτή την ανάλυση, προτείνοντες εναλλακτικές λύσεις που είναι λιγότερο πολύπλοκες, πιο ακριβείς και πιο εύκολα εφαρμόσιμες, ένα γεγονός που αποδεικνύεται μέσω της θεωρητικής ανάλυσης. Για το σκοπό αυτό αναπτύχθηκαν και προτάθηκαν κατάλληλα μη γραμμικά μοντέλα για το σύστημα της ανεμογεννήτριας, για την περίπτωση της λειτουργίας υπό μεταβλητή, προσαρμοζόμενη ταχύτητα. Η εκτενής μαθηματική ανάλυση απέδειξε ότι το αρχικό μη γραμμικό σύστημα τηρεί τη θεμελιώδη ιδιότητα των Euler-Lagrange συστημάτων, την παθητικότητα. Αυτό σημαίνει πως το σύστημα έχει απόσβεση τόσο στο ηλεκτρικό όσο και στο μηχανικό μέρος. Επίσης, γίνεται ανάλυση του εφαρμοζόμενου σχήματος του διανυσματικού ελέγχου. 
Η ανάλυση αυτή αποδεικνύει ότι με κατάλληλη μετατροπή και προσαρμογή των PI ελεγκτών που χρησιμοποιούνται στην πλευρά της γεννήτριας και σημαντικές απλοποιήσεις του σχήματος του παραδοσιακού διανυσματικού ελέγχου, προκύπτουν δύο διαφορετικά σχήματα ελέγχου τα οποία επιτυγχάνουν αυτόματα τον προσανατολισμό στο πεδίο που απαιτείται από το διανυσματικό έλεγχο, ενώ διατηρούν και αυξάνουν την απόσβεση του αρχικού συστήματος. Οι έλεγχοι αυτοί επιτυγχάνουν εξαιρετική ρύθμιση των στροφών λειτουργίας της γεννήτριας σύμφωνα με την ταχύτητα του ανέμου, έτσι ώστε να οδηγούν στη βέλτιστη λειτουργία, δηλαδή αυτή της μέγιστης δυνατής εξαγόμενης ισχύος. Το πρώτο σχήμα ελέγχου είναι απευθείας συγκρίσιμο με τις υπάρχουσες εφαρμογές του διανυσματικού ελέγχου, ενώ το δεύτερο εισάγει μια πρωτοποριακή τεχνική, η οποία βελτιώνει σημαντικά την απόκριση του συστήματος. Ανάλογες βελτιώσεις προτείνονται και για την πλευρά του μετατροπέα προς την πλευρά της σύνδεσης με το δίκτυο της ηλεκτρικής ενέργειας. Και για αυτή την περίπτωση αποδεικνύεται ότι ο προτεινόμενος σχεδιασμός μπορεί να δώσει απλοποιημένα σχήματα ελέγχου πολύ αποδοτικά όμως για τη διαμόρφωση της ποιότητας της παραγόμενης ισχύος Έτσι, επιτυγχάνεται πλήρης αντιστάθμιση της αέργου ισχύος με λειτουργία μοναδιαίου συντελεστή ισχύος της ανεμογεννήτριας. Επιπρόσθετα, επιτυγχάνεται η σαφής σταθεροποίηση της λειτουργίας της διασύνδεσης συνεχούς ρεύματος σε λειτουργία σταθερής dc τάσης. Οι αναλυτικές προσομοιώσεις οι οποίες διεξήχθησαν, βασίζονται σε ρεαλιστικά σενάρια λειτουργίας. Σύμφωνα με αυτά η απαίτηση για ρύθμιση της ταχύτητας στην βέλτιστη τιμή της συμβαίνει την ίδια χρονική στιγμή που εμφανίζεται αλλαγή της παρεχόμενης ισχύος και ροπής από τον άνεμο. Τα αποτελέσματα της προσομοίωσης επαληθεύουν τη θεωρητική ανάλυση και δείχνουν ότι και τα δύο προτεινόμενα σχήματα ελέγχου επιτυγχάνουν μεγιστοποίηση στην παραγωγή ισχύος με τρόπο αποδοτικό και με τους αναμενόμενους χρόνους απόκρισης. Συγκρίσεις με υπάρχουσες αντίστοιχες τεχνικές ελέγχου δείχνουν την υπεροχή των προτεινόμενων σχημάτων ελέγχου και την ικανοποιητική λειτουργία τους.
This dissertation analyses the problem of optimizing the power that can be extracted from a wind system and proposes some efficient solutions. The thesis is divided into two parts. In the first part, a new methodology is proposed for optimizing the exploitation of the wind in a given area, based on the optimal placement of wind turbines in wind parks. In the second part, the purpose of the optimization is to maximize the power production from the wind through the development of new control techniques. In both cases, the analysis uses detailed nonlinear models developed in the Matlab computational environment with the tools it provides. These models lead to a series of simulations that confirm the theoretical analysis and show that the proposed original solutions give optimized results in comparison with known ones. In particular, the first part of this thesis concentrates on the optimal selection of both the number and the positioning of the wind turbines in a given geographic region. The criterion used for the optimization is maximum power production at minimum cost. In this analysis a novel procedure is introduced, based on the Monte Carlo simulation method. As in previous studies, the same optimization criterion and the same conditions are used so that the results are comparable. The results obtained are markedly better with respect to both the maximum power production and the optimization criterion. Furthermore, important conclusions are drawn about the maximum number of wind turbines and their locations, confirming the proposed method as an efficient tool for deciding the geographic distribution of wind turbines in a wind park. In the second part of the dissertation, the goal is to design highly efficient and simple controllers that achieve maximum power production for wind turbine systems. In particular, a variable-speed wind generation system is considered, consisting of a squirrel cage induction generator connected to the grid through a fully controlled ac/dc/ac IGBT converter. To this end, an analysis is first presented of the control method used today for the maximization and control of the active power, namely the vector control method. Based on this analysis, alternative solutions are proposed that are less complicated, more accurate and easier to implement, a fact that is proved through theoretical analysis. Suitable nonlinear models have been developed and proposed for a wind turbine system operating under variable, adjustable speed. A comprehensive mathematical analysis proves that the original nonlinear system has the fundamental property of Euler-Lagrange systems, namely passivity. This means that the system is damped in both its electrical and its mechanical part. An analysis of the existing, vector-control-based schemes is also conducted. Then, by suitably modifying the PI controllers used on the generator side and decisively simplifying the vector control scheme, we propose two different control schemes that are free from the field orientation requirement of vector control while maintaining and increasing the damping of the initial system. The first control scheme is directly comparable with existing vector control applications, while the second introduces an innovative technique that substantially improves the system response.
The detailed simulations that were carried out are based on a realistic operation scenario, in which the speed must be adjusted to its optimal value at the same time as the power and torque supplied by the wind change. The simulation results confirm the theoretical analysis and show that both proposed control schemes achieve optimal power production in an efficient manner and within the expected response time.
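A rough Python sketch of the placement search described in the first part is given below: random layouts on a square grid are scored with a simple Jensen-type wake model and a Mosetti-style cost function, and the best power-per-cost layout is kept. Every numerical value (rotor radius, wake decay constant, grid spacing, power curve) is a placeholder rather than a value from the dissertation.

```python
import math
import random

GRID, CELL = 10, 200.0           # 10x10 grid of candidate positions, 200 m spacing
R0, CT, K = 20.0, 0.88, 0.075    # rotor radius (m), thrust coefficient, wake decay constant
U0 = 12.0                        # free-stream wind speed (m/s), single direction (+y)

def layout_power(cells):
    """Total power of a layout under a simplified Jensen wake model, wind along +y."""
    pos = [((c % GRID) * CELL, (c // GRID) * CELL) for c in cells]
    total = 0.0
    for x, y in pos:
        deficit_sq = 0.0
        for xu, yu in pos:
            d = y - yu                                   # distance downstream of (xu, yu)
            if d > 0 and abs(x - xu) < R0 + K * d:       # inside the expanding wake cone
                dv = (1.0 - math.sqrt(1.0 - CT)) / (1.0 + K * d / R0) ** 2
                deficit_sq += dv * dv                    # sum-of-squares wake combination
        u = U0 * max(0.0, 1.0 - math.sqrt(deficit_sq))
        total += 0.3 * u ** 3                            # toy power curve, P ~ u^3 (kW)
    return total

def objective(cells):
    n = len(cells)
    cost = n * (2.0 / 3.0 + 1.0 / 3.0 * math.exp(-0.00174 * n * n))   # Mosetti-type cost
    return layout_power(cells) / cost

best_cells, best_obj = None, 0.0
for _ in range(2000):                                    # Monte Carlo search over layouts
    n_turbines = random.randint(10, 30)
    cells = random.sample(range(GRID * GRID), n_turbines)
    obj = objective(cells)
    if obj > best_obj:
        best_cells, best_obj = cells, obj
print(f"best layout: {len(best_cells)} turbines, power/cost = {best_obj:.1f}")
```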
APA, Harvard, Vancouver, ISO, and other styles
