Dissertations / Theses on the topic 'Markov chain simulation'

To see the other types of publications on this topic, follow the link: Markov chain simulation.

Consult the top 50 dissertations / theses for your research on the topic 'Markov chain simulation.'

1

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum we mean a sum of random variables in which the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method for calculating quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic tail behaviour of the sum and of the maximum of a heavy-tailed random sum is identical; we therefore compare the random sum with the maximum for various distributions, to investigate from which point onwards the asymptotic approximation can be used. Furthermore, we propose a new method for estimating quantiles, and the estimator is shown to be efficient.
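The comparison the abstract describes can be illustrated with a crude Monte Carlo sketch (not the thesis's MCMC method): for Pareto increments and a Poisson number of summands, the tail probabilities of the random sum and of its maximum are close at high thresholds. All parameter values here are illustrative assumptions.

```python
import math
import random

def pareto(alpha, rng):
    # Inverse-CDF draw from a Pareto(alpha) law on (1, inf): P(Y > y) = y**(-alpha)
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def poisson(lam, rng):
    # Knuth's multiplication method, adequate for small rates
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def tail_probs(alpha, lam, t, n_sim, seed=1):
    """Crude Monte Carlo estimates of P(S > t) and P(M > t) for a random
    sum S = Y1 + ... + YN and its maximum M, with N ~ Poisson(lam)."""
    rng = random.Random(seed)
    hit_sum = hit_max = 0
    for _ in range(n_sim):
        ys = [pareto(alpha, rng) for _ in range(poisson(lam, rng))]
        if sum(ys) > t:
            hit_sum += 1
        if ys and max(ys) > t:
            hit_max += 1
    return hit_sum / n_sim, hit_max / n_sim

p_sum, p_max = tail_probs(alpha=1.5, lam=3.0, t=30.0, n_sim=200000)
```

For subexponential increments the two estimates agree to leading order, which is the "single big jump" heuristic the thesis starts from.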
2

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to compute the probability of a rare event effectively. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution given the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem extends the first to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments.
In the last two papers of the thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution given the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of a general expectation of a sample path related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first-passage probabilities. For each problem, an unbiased estimator of the reciprocal probability is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
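A minimal sketch of the core idea, under simplifying assumptions of my own (two i.i.d. Pareto increments and the event {max > t}, where the answer is known in closed form): a chain with the conditional law given the event as its invariant distribution is simulated, and an unbiased estimate of the reciprocal probability 1/p is read off its trajectory. The choice of auxiliary density u below is mine, not the thesis's.

```python
import random

def mcmc_recip_prob(alpha, t, n_iter, seed=7):
    """Estimate 1/p for p = P(max(Y1, Y2) > t) with Y1, Y2 i.i.d.
    Pareto(alpha): simulate a chain whose invariant law is the
    conditional law given the event, and average u(Y)/f(Y) for an
    auxiliary density u supported inside the event (here: Y1 > t)."""
    rng = random.Random(seed)
    draw = lambda: (1.0 - rng.random()) ** (-1.0 / alpha)
    y = [2.0 * t, draw()]                 # start inside {max > t}
    acc = 0.0
    for _ in range(n_iter):
        for i in (0, 1):                  # coordinate-wise updates
            prop, old = draw(), y[i]
            y[i] = prop                   # independence proposal from f:
            if max(y) <= t:               # the acceptance probability is
                y[i] = old                # just the event indicator
        # u(y1, y2) = f(y1 | Y1 > t) f(y2)  =>  u/f = t**alpha * 1{y1 > t}
        if y[0] > t:
            acc += t ** alpha
    return acc / n_iter

inv_p = mcmc_recip_prob(alpha=2.0, t=10.0, n_iter=200000)
```

Since E[u(Y)/f(Y)] under the conditional law equals 1/p for any density u supported inside the event, the trajectory average estimates the reciprocal probability, exactly as the abstract describes.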


3

Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

4

Cheal, Ryan. "Markov Chain Monte Carlo methods for simulation in pedigrees." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362254.

5

BALDIOTI, HUGO RIBEIRO. "MARKOV CHAIN MONTE CARLO FOR NATURAL INFLOW ENERGY SCENARIOS SIMULATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36058@1.

Abstract:
Consisting of an electro-energetic matrix with hydro predominance and a territory of continental proportions, Brazil presents unique characteristics, being able to make use of the abundant water resources in the national territory. Approximately 65 percent of the electricity generation capacity comes from hydropower, while 28 percent comes from thermoelectric plants. It is known that hydrological regimes have a stochastic nature, and it is necessary to treat them so that the energy system can be planned; thus the hydrothermal dispatch is extremely important and characterized by its stochastic dependence. From the natural streamflows it is possible to calculate the Natural Inflow Energy (NIE), which is used directly in the synthetic series simulation process; the simulated series are in turn used in the optimization process, responsible for the optimal policy calculation that minimizes the system's operational costs. Studies concerning the simulation of synthetic NIE scenarios have been developing new methodological proposals over the years. Such developments often presuppose Gaussianity of the data, so that a parametric distribution can be fitted to them. It was noticed that in the majority of real cases, in the context of the Brazilian Electrical Sector, the data cannot be treated in this way, since their densities present relevant tail behavior and marked skewness. It is necessary for the operational planning of the National Interconnected System (SIN) that this intrinsic skewness be amenable to reproduction. Thus, this work proposes two non-parametric approaches to scenario simulation. The first refers to sampling the residues of the NIE series, using a Markov chain Monte Carlo (MCMC) technique and kernel density estimation. The second proposed methodology applies the MCMC periodically and directly to the NIE series to simulate synthetic scenarios, using an innovative approach for the transitions between matrices. 
The results of implementing the methodologies, observed graphically and via statistical tests of adherence to the historical data, indicate that the proposals reproduce the asymmetric characteristics with greater accuracy without losing the ability to reproduce basic statistics. Thus, one can conclude that the proposed models are good alternatives to the current model used by the Brazilian Electric Sector.
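The first ingredient described above, sampling residues via a kernel density estimate, can be sketched as a smoothed bootstrap (the toy residues and the Silverman bandwidth are my assumptions; the thesis's full MCMC scheme is not reproduced):

```python
import random
import statistics

def kde_sample(residues, n, seed=37):
    """Smoothed bootstrap: draw from a Gaussian kernel density estimate
    of the observed residues, using Silverman's rule-of-thumb bandwidth.
    This preserves skewness and tail behaviour that a fitted Gaussian
    would lose."""
    rng = random.Random(seed)
    h = 1.06 * statistics.pstdev(residues) * len(residues) ** -0.2
    return [rng.choice(residues) + rng.gauss(0.0, h) for _ in range(n)]

# Skewed toy "residues" (centred exponential) standing in for NIE residues
gen = random.Random(0)
residues = [gen.expovariate(1.0) - 1.0 for _ in range(2000)]
scenarios = kde_sample(residues, 20000)
```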
6

Mehl, Christopher. "Bayesian Hierarchical Modeling and Markov Chain Simulation for Chronic Wasting Disease." Diss., University of Colorado at Denver, 2004. http://hdl.handle.net/10919/71563.

Abstract:
In this thesis, a dynamic spatial model for the spread of Chronic Wasting Disease in Colorado mule deer is derived from a system of differential equations that captures the qualitative spatial and temporal behaviour of the disease. These differential equations are incorporated into an empirical Bayesian hierarchical model through the unusual step of deterministic autoregressive updates. Spatial effects in the model are described directly in the differential equations rather than through the use of correlations in the data. The use of deterministic updates is a simplification that reduces the number of parameters that must be estimated, yet still provides a flexible model that gives reasonable predictions for the disease. The posterior distribution generated by the data model hierarchy possesses characteristics that are atypical for many Markov chain Monte Carlo simulation techniques. To address these difficulties, a new MCMC technique is developed that has qualities similar to recently introduced tempered Langevin type algorithms. The methodology is used to fit the CWD model, and posterior parameter estimates are then used to obtain predictions about Chronic Wasting Disease.
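The abstract mentions tempered Langevin-type algorithms; a minimal, untempered Metropolis-adjusted Langevin (MALA) step on a toy one-dimensional target gives the flavour (the target and step size are illustrative assumptions, not the thesis's model):

```python
import math
import random

def mala(grad_logp, logp, x0, step, n_iter, seed=3):
    """Metropolis-adjusted Langevin algorithm: propose a gradient-drift
    step plus Gaussian noise, then accept/reject so the chain targets
    exp(logp) exactly."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_iter):
        mu = x + 0.5 * step * grad_logp(x)
        prop = mu + math.sqrt(step) * rng.gauss(0.0, 1.0)
        mu_back = prop + 0.5 * step * grad_logp(prop)
        # log q(x | prop) - log q(prop | x) for the Gaussian proposals
        log_q = ((prop - mu) ** 2 - (x - mu_back) ** 2) / (2.0 * step)
        if math.log(1.0 - rng.random()) < logp(prop) - logp(x) + log_q:
            x = prop
        samples.append(x)
    return samples

# Toy target: N(3, 1), so logp is known up to an additive constant
logp = lambda x: -0.5 * (x - 3.0) ** 2
grad = lambda x: -(x - 3.0)
draws = mala(grad, logp, x0=0.0, step=0.5, n_iter=50000)
```

Tempering would run several such chains at flattened versions of the target and swap states between them; only the single-chain kernel is shown here.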
7

Zhou, Yi. "Simulation and Performance Analysis of Strategic Air Traffic Management under Weather Uncertainty." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc68071/.

Abstract:
In this thesis, I introduce a promising framework for representing an air traffic flow (stream) and flow-management action operating under weather uncertainty. I propose to use a meshed queuing and Markov-chain model---specifically, a queuing model whose service-rates are modulated by an underlying Markov chain describing weather-impact evolution---to capture traffic management in an uncertain environment. Two techniques for characterizing flow-management performance using the model are developed, namely 1) a master-Markov-chain representation technique that yields accurate results but at relatively high computational cost, and 2) a jump-linear system-based approximation that has promising scalability. The model formulation and two analysis techniques are illustrated with numerous examples. Based on this initial study, I believe that the interfaced weather-impact and traffic-flow model analyzed here holds promise to inform strategic flow contingency management in NextGen.
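A toy discrete-time version of such a Markov-modulated queue can be sketched as follows (the two-state weather chain and all rates are invented for illustration; the thesis's model and its two analysis techniques are richer):

```python
import random

def simulate_queue(P, service_p, arrival_p, n_steps, seed=11):
    """Discrete-time single-server queue whose per-step service
    probability depends on an underlying weather Markov chain.
    P[w] is the transition row for weather state w."""
    rng = random.Random(seed)
    w, q, trace = 0, 0, []
    for _ in range(n_steps):
        u, cum = rng.random(), 0.0
        for nxt, pr in enumerate(P[w]):     # weather transition
            cum += pr
            if u < cum:
                w = nxt
                break
        if rng.random() < arrival_p:        # at most one arrival per step
            q += 1
        if q > 0 and rng.random() < service_p[w]:
            q -= 1                          # weather-dependent departure
        trace.append(q)
    return trace

# Weather: 0 = clear (fast service), 1 = storm (slow service)
P = [[0.95, 0.05], [0.80, 0.20]]
trace = simulate_queue(P, service_p=[0.6, 0.1], arrival_p=0.3, n_steps=100000)
```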
8

Gudmundsson, Thorbjörn. "Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings." Licentiate thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134624.

9

Ren, Ruichao. "Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1428844711&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

10

Pitt, Michael K. "Bayesian inference for non-Gaussian state space model using simulation." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389211.

11

Harkness, Miles Adam. "Parallel simulation, delayed rejection and reversible jump MCMC for object recognition." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324266.

12

Fu, Jianlin. "A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1969.

Abstract:
Unlike traditional two-stage methods, a conditional and inverse-conditional simulation approach may directly generate independent, identically distributed realizations that honor both static data and state data in one step. The Markov chain Monte Carlo (McMC) method has proved to be a powerful tool for performing this type of stochastic simulation. One of the main advantages of McMC over traditional sensitivity-based optimization methods for inverse problems is its power, flexibility and well-posedness in incorporating observation data from different sources. In this work, an improved version of the McMC method is presented to perform the stochastic simulation of reservoirs and aquifers in the framework of multi-Gaussian geostatistics. First, a blocking scheme is proposed to overcome the limitations of the classic single-component Metropolis-Hastings-type McMC. One of the main characteristics of the blocking McMC (BMcMC) scheme is that, depending on the inconsistency between the prior model and the reality, it can preserve the prior spatial structure and statistics as specified by the user. At the same time, it improves the mixing of the Markov chain and hence enhances the computational efficiency of the McMC. Furthermore, the exploration ability and the mixing speed of the McMC are efficiently improved by coupling multiscale proposals, i.e., the coupled multiscale McMC method. In order to make the BMcMC method capable of dealing with high-dimensional cases, a multiscale scheme is introduced to accelerate the computation of the likelihood, which greatly improves the computational efficiency of the McMC, since most of the computational effort is spent on the forward simulations. To this end, a flexible-grid full-tensor finite-difference simulator, which is widely compatible with the outputs from various upscaling subroutines, is developed to solve the flow equations and a constant-displacement random-walk particle-tracking method, which enhances the com
Fu, J. (2008). A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1969
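The blocking idea can be caricatured in two dimensions: proposing the whole correlated "field" from the prior in one block makes the acceptance ratio a pure likelihood ratio, so the prior spatial structure is preserved by construction. The toy prior, observation model and numbers below are my assumptions, not the thesis's reservoir setting.

```python
import math
import random

def prior_draw(rng):
    # Correlated bivariate Gaussian "field" prior (correlation 0.9),
    # drawn via its Cholesky factor
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return (z1, 0.9 * z1 + math.sqrt(1.0 - 0.81) * z2)

def block_mh(d, sigma, n_iter, seed=5):
    """Block Metropolis-Hastings with the prior as proposal: the whole
    field is updated in one block and the acceptance ratio reduces to a
    likelihood ratio, so the posterior is the invariant law."""
    rng = random.Random(seed)
    loglik = lambda x: -0.5 * ((d - x[0]) / sigma) ** 2
    x, out = prior_draw(rng), []
    for _ in range(n_iter):
        prop = prior_draw(rng)
        if math.log(1.0 - rng.random()) < loglik(prop) - loglik(x):
            x = prop
        out.append(x)
    return out

# One noisy observation of the first component: d = x1 + N(0, 0.25)
chain = block_mh(d=1.0, sigma=0.5, n_iter=200000)
```

For this conjugate toy case the exact posterior mean is (0.8, 0.72), which the chain should recover; a single-component sampler on the same strongly correlated target would mix more slowly, which is the motivation for blocking.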
13

Zhang, Xiaojing. "A simulation study of confidence intervals for the transition matrix of a reversible Markov chain." Kansas State University, 2016. http://hdl.handle.net/2097/32737.

14

Karawatzki, Roman, and Josef Leydold. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/294/1/document.pdf.

Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited as an algorithm for generating points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract)
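A sketch of the proposed combination for the standard normal density, whose ratio-of-uniforms region is known and convex (the bracketing width and the chord-sampling-by-shrinkage details are my implementation choices; the paper's automatic algorithm is more general):

```python
import math
import random

def in_region(u, v):
    # Ratio-of-uniforms region for the unnormalised N(0,1) density:
    # A = {(u, v) : 0 < u <= sqrt(exp(-(v/u)**2 / 2))}
    if u <= 0.0:
        return False
    return u <= math.exp(-0.25 * (v / u) ** 2)

def hit_and_run_rou(n_iter, seed=2):
    """Hit-and-Run over the RoU region: choose a random direction, then
    sample a point on the chord through the current point by bracketing
    and shrinkage; only a membership oracle for the region is needed.
    If (U, V) is uniform on the region, X = V/U is standard normal."""
    rng = random.Random(seed)
    u, v, xs = 0.5, 0.0, []
    for _ in range(n_iter):
        theta = 2.0 * math.pi * rng.random()
        du, dv = math.cos(theta), math.sin(theta)
        lo, hi = -2.0, 2.0                 # longer than any chord of A
        while True:
            lam = lo + rng.random() * (hi - lo)
            if in_region(u + lam * du, v + lam * dv):
                u, v = u + lam * du, v + lam * dv
                break
            if lam < 0.0:                  # shrink the bracket toward the
                lo = lam                   # current point, which is in A
            else:
                hi = lam
        xs.append(v / u)
    return xs

draws = hit_and_run_rou(60000)
```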
Series: Preprint Series / Department of Applied Statistics and Data Processing
15

Scheibenpflug, Sara, and Jessica Schering. "The Price is Right : Project valuation for Project Portfolio Management using Markov Chain Monte Carlo Simulation." Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189026.

Abstract:
A common managerial problem in the project-based organization is the problem of resource allocation. In practice this problem is addressed by applying project portfolio management. In this study we examine project portfolio management in a consultancy firm by applying a mathematical model. The information produced by this model could enable rational decision making and thus improve the economic resilience and reduce internal uncertainty in the firm so that it may live long and prosper. The proposed model is based on a Markov process that represents the projects in the firm. The parameters are estimated by maximum likelihood and the results are estimated through Monte Carlo simulation. This study initially shows that it is possible to model the project portfolio as a Markov process. This was supported by the conducted literature review and illustrated by the presented model. Furthermore, we conclude that the value of accepting a project depends on the current state of the firm in terms of available capacity and on firm characteristics regarding the processes of project arrival and completion. Lastly, we show how the acceptance of a new project leads to a decrease in the cost-based price. However, large uncertainties stemming from the lack of relevant data and from model simplifications limit the use of this model to that of an indicative aid in decision processes.
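The modelling-and-estimation loop described above can be sketched with a toy discrete-time portfolio chain; maximum likelihood for the Bernoulli events reduces to empirical frequencies (all parameters below are invented, not the firm's data):

```python
import random

def simulate_portfolio(arr_p, fin_p, capacity, n_steps, seed=13):
    """Toy project-portfolio chain (illustrative parameters, not the
    thesis's calibrated model): per step, at most one project arrives
    (prob arr_p, if capacity allows) and each active project finishes
    independently (prob fin_p). Events are logged for estimation."""
    rng = random.Random(seed)
    active, log = 0, []
    for _ in range(n_steps):
        free = active < capacity
        arrived = free and rng.random() < arr_p
        if arrived:
            active += 1
        finished = sum(1 for _ in range(active) if rng.random() < fin_p)
        log.append((free, arrived, active, finished))
        active -= finished
    return log

def mle_rates(log):
    # Bernoulli MLEs reduce to empirical frequencies over exposed trials
    opportunities = sum(1 for free, _, _, _ in log if free)
    arrivals = sum(1 for _, arr, _, _ in log if arr)
    exposure = sum(n for _, _, n, _ in log)
    completions = sum(d for _, _, _, d in log)
    return arrivals / opportunities, completions / exposure

log = simulate_portfolio(arr_p=0.4, fin_p=0.1, capacity=10, n_steps=50000)
arr_hat, fin_hat = mle_rates(log)
```

Monte Carlo valuation of accepting a project would then re-run `simulate_portfolio` from a given state with the estimated rates and average the resulting revenues.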
16

Zhu, Qingyun. "Product Deletion and Supply Chain Management." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/527.

Abstract:
One of the most significant changes in the evolution of modern business management is that organizations no longer compete as individual entities in the market, but as interlocking supply chains. Markets are no longer simply trading desks but dynamic ecosystems where people, organizations and the environment interact. Products and associated materials and resources are links that bridge supply chains from upstream (sourcing and manufacturing) to downstream (delivering and consuming). The lifecycle of a product plays a critical role in supply chains. Supply chains may be composed of, designed around, and modified for products. Product-related issues greatly impact supply chains. Existing studies have advanced product management and product lifecycle management literature through dimensions of product innovation, product growth, product line extensions, product efficiencies, and product acquisition. Product deletion, rationalization, or reduction research is limited but is a critical issue for many reasons. Sustainability is an important reason for this managerial decision. This study, grounded in multiple literature streams in both the marketing and supply chain fields, identified relations and propositions to form a firm-level analysis of the role of supply chains in organizational product deletion decisions. Interviews, observational and archival data from international companies (in Australia, China, India, and Iran) contributed the empirical support as case studies through a grounded theory approach. Bayesian analysis, an underused empirical analysis tool, was utilized to provide insights into this underdeveloped research stream; and its relationship to qualitative research enhances broader methodological understanding. A Gibbs sampler and reversible jump Markov chain Monte Carlo (MCMC) simulation were used for Bayesian analysis based on the collected data. The integrative findings are exploratory but provide insights for a number of research propositions.
17

Servitja, Robert Maria. "A First Study on Hidden Markov Models and one Application in Speech Recognition." Thesis, Linköpings universitet, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-123912.

Abstract:
Speech is intuitive, fast and easy to generate, but it is hard to index and easy to forget. What is more, listening to speech is slow. Text is easier to store, process and consume, both for computers and for humans, but writing text is slow and requires some intention. In this thesis, we study speech recognition which allows converting speech into text, making it easier both to create and to use information. Our tool of study is Hidden Markov Models which is one of the most important machine learning models in speech and language processing. The aim of this thesis is to do a first study in Hidden Markov Models and understand their importance, particularly in speech recognition. We will go through three fundamental problems that come up naturally with Hidden Markov Models: to compute a likelihood of an observation sequence, to find an optimal state sequence given an observation sequence and the model, and to adjust the model parameters. A solution to each problem will be given together with an example and the corresponding simulations using MATLAB. The main importance lies in the last example, in which a first approach to speech recognition will be done.
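The first of the three fundamental problems, computing the likelihood of an observation sequence, is solved by the forward algorithm; a minimal sketch with an invented two-state model, verifiable against brute-force enumeration over all state paths:

```python
def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: the likelihood P(obs) of an observation
    sequence under an HMM with initial law pi, transition matrix A and
    emission matrix B, in O(N^2 T) time instead of O(N^T)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two hidden states, two observable symbols (toy model)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
lik = forward_likelihood(pi, A, B, [0, 1, 1, 0])
```

The other two problems (best state path, parameter fitting) are handled by the closely related Viterbi and Baum-Welch recursions, which reuse the same trellis structure.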
18

Castoe, Minna, and Teo Raspudic. "Option Pricing Under the Markov-switching Framework Defined by Three States." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48808.

Abstract:
An exact solution for the valuation of European-style options can be obtained using the Black-Scholes model. However, some of the assumptions of the Black-Scholes model are inconsistent with reality, such as the constant volatility of the stock price, which is not observed in real life. In this thesis, the Black-Scholes model is extended to a model where the volatility is fully stochastic and changing over time, modelled by a Markov chain with three states: high, medium and low. Under this model, we price options of both types, European and American, using Monte Carlo simulation.
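A sketch of the pricing approach for the European case (the transition matrix and volatility levels below are invented, and American-exercise features are omitted): the volatility follows a three-state Markov chain, and the discounted payoff is averaged over simulated risk-neutral paths.

```python
import math
import random

def mc_call_price(S0, K, r, T, P, vols, state0, n_steps, n_paths, seed=17):
    """Monte Carlo price of a European call when the volatility is driven
    by a three-state Markov chain (high / medium / low); the risk-free
    drift is kept and time is discretised into n_steps."""
    rng = random.Random(seed)
    dt = T / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, state = S0, state0
        for _ in range(n_steps):
            u, cum = rng.random(), 0.0
            for nxt, pr in enumerate(P[state]):    # regime transition
                cum += pr
                if u < cum:
                    state = nxt
                    break
            vol = vols[state]
            s *= math.exp((r - 0.5 * vol * vol) * dt
                          + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        payoff_sum += max(s - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

P = [[0.90, 0.08, 0.02],       # illustrative transition matrix
     [0.05, 0.90, 0.05],
     [0.02, 0.08, 0.90]]
price = mc_call_price(100.0, 100.0, 0.05, 1.0, P,
                      vols=[0.4, 0.2, 0.1], state0=1,
                      n_steps=50, n_paths=10000)
```

With a degenerate chain (identical volatilities in all states) the price should collapse to the Black-Scholes value, which gives a convenient sanity check.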
19

Liu, Gang. "Rare events simulation by shaking transformations : Non-intrusive resampler for dynamic programming." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX043/document.

Abstract:
This thesis contains two parts: rare-event simulation and a non-intrusive stratified resampler for dynamic programming. The first part consists of quantifying statistics related to events which are unlikely to happen but which have serious consequences. We propose Markovian transformations on path spaces and combine them with the theory of interacting particle systems and the ergodicity of Markov chains to propose methods which apply very generally and have good performance. The second part consists of solving the dynamic programming problem numerically in a context where we only have historical observations of small size and we do not know the values of the model parameters. We propose and analyse a new scheme with stratification and resampling techniques.
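The shaking idea can be sketched on the textbook example P(X > 3) for X ~ N(0,1), using fixed-level splitting with the Gaussian autoregressive shaking transformation, which leaves the standard normal law invariant (the levels and particle counts are my choices; the thesis treats path-space transformations in much greater generality):

```python
import math
import random

def splitting_estimate(levels, n_particles, n_shake, rho=0.8, seed=19):
    """Fixed-level splitting estimate of P(X > levels[-1]) for X ~ N(0,1).
    Survivors of each level are resampled, then 'shaken' with the
    autoregressive transformation X' = rho*X + sqrt(1-rho^2)*Z, which
    leaves N(0,1) invariant; shakes leaving the level set are rejected."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    est, sig = 1.0, math.sqrt(1.0 - rho * rho)
    for lvl in levels:
        survivors = [x for x in xs if x > lvl]
        if not survivors:
            return 0.0
        est *= len(survivors) / len(xs)
        xs = [rng.choice(survivors) for _ in range(n_particles)]
        for _ in range(n_shake):
            for i, x in enumerate(xs):
                prop = rho * x + sig * rng.gauss(0.0, 1.0)
                if prop > lvl:             # Metropolis step: accept iff
                    xs[i] = prop           # still above the current level
    return est

p_hat = splitting_estimate(levels=[1.0, 2.0, 3.0],
                           n_particles=5000, n_shake=5)
```

The product of the conditional survival fractions estimates the rare-event probability; the true value here is about 1.35e-3.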
20

Trabelsi, Brahim. "Simulation numérique de l’écoulement et mélange granulaires par des éléments discrets ellipsoïdaux." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/9300/1/trabelsi.pdf.

Abstract:
Granular materials are ubiquitous; they are found in nature as well as in several industrial applications. Among the industrial applications using granular materials is the mixing of powders in the food, chemical, metallurgical and pharmaceutical industries. Characterising and studying the behaviour of these materials is necessary for understanding several natural phenomena, such as dune movement and snow avalanches, and industrial processes such as the flow and mixing of grains in a mixer. The varied behaviour of granular materials makes them unclassifiable among the three states of matter, solid, liquid and gaseous, which has led to them being described as a "fourth state" of matter, situated between solid and liquid. The objective of this work is to design and implement efficient discrete element methods (DEM) for the simulation and analysis of the mixing and segregation processes of ellipsoidal particles in industrial tumbling mixers such as the hoop mixer. In the DEM, the most critical step in terms of CPU time is contact detection and resolution, so an efficient DEM requires optimising this step. We propose to combine the geometric potential model with the algebraic contact condition between two ellipsoids proposed by Wang et al. in order to develop an efficient algorithm for detecting external contact between ellipsoidal particles, and then to prove a theoretical result and develop an algorithm for internal contact. Furthermore, coupling the DEM with a Markov chain model reduces the simulation time very significantly: the transition matrix is determined from a short simulation, and the state of the system is then computed using the Markov chain model. 
Indeed, using the theory of strictly positive matrices and the Perron-Frobenius theorem, one can approximate the number of transitions needed for convergence to a given state.
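The DEM-Markov chain coupling described at the end can be sketched in miniature: estimate the transition matrix from a short reference trajectory, then propagate the state distribution cheaply; for a positive transition matrix the Perron-Frobenius theorem guarantees convergence to the stationary distribution (the two-state chain below stands in for the expensive DEM simulation):

```python
import random

def estimate_transition_matrix(path, n_states):
    """Empirical maximum-likelihood transition matrix from an observed
    state sequence (row-normalised transition counts)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def propagate(dist, P, n_steps):
    # Advance a distribution: dist <- dist * P, n_steps times
    for _ in range(n_steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Short "reference simulation" of a two-state chain standing in for
# the expensive DEM run
rng = random.Random(23)
P_true = [[0.9, 0.1], [0.3, 0.7]]
s, path = 0, [0]
for _ in range(50000):
    s = s if rng.random() < P_true[s][s] else 1 - s
    path.append(s)
P_hat = estimate_transition_matrix(path, 2)
pi_hat = propagate([1.0, 0.0], P_hat, 200)   # converges to stationarity
```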
21

Minvielle-Larrousse, Pierre. "Méthodes de simulation stochastique pour le traitement de l’information." Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3005.

Abstract:
Lorsqu’une grandeur d’intérêt ne peut être directement mesurée, il est fréquent de procéder à l’observation d’autres quantités qui lui sont liées par des lois physiques. Ces quantités peuvent contenir de l’information sur la grandeur d’intérêt si l’on sait résoudre le problème inverse, souvent mal posé, et inférer la valeur. L’inférence bayésienne constitue un outil statistique puissant pour l’inversion, qui requiert le calcul d’intégrales en grande dimension. Les méthodes Monte Carlo séquentielles (SMC), aussi dénommées méthodes particulaires, sont une classe de méthodes Monte Carlo permettant d’échantillonner selon une séquence de densités de probabilité de dimension croissante. Il existe de nombreuses applications, que ce soit en filtrage, en optimisation globale ou en simulation d’évènement rare. Les travaux ont porté notamment sur l’extension des méthodes SMC dans un contexte dynamique où le système, régi par un processus de Markov caché, est aussi déterminé par des paramètres statiques que l’on cherche à estimer. En estimation bayésienne séquentielle, la détermination de paramètres fixes provoque des difficultés particulières : un tel processus est non-ergodique, le système n’oubliant pas ses conditions initiales. Il est montré comment il est possible de surmonter ces difficultés dans une application de poursuite et identification de formes géométriques par caméra numérique CCD. Des étapes d’échantillonnage MCMC (Chaîne de Markov Monte Carlo) sont introduites pour diversifier les échantillons sans altérer la distribution a posteriori. Pour une autre application de contrôle de matériau, qui cette fois « hors ligne » mêle paramètres statiques et dynamiques, on a proposé une approche originale. 
Elle consiste en un algorithme PMMH (Particle Marginal Metropolis-Hastings) intégrant des traitements SMC Rao-Blackwellisés, basés sur des filtres de Kalman d'ensemble en interaction. D'autres travaux en traitement de l'information ont été menés, que ce soit en filtrage particulaire pour la poursuite d'un véhicule en phase de rentrée atmosphérique, en imagerie radar 3D par régularisation parcimonieuse ou en recalage d'image par information mutuelle.
When a quantity of interest is not directly observed, it is usual to observe other quantities linked to it by physical laws. They can provide information about the quantity of interest if one is able to solve the inverse problem, often ill-posed, and infer its value. Bayesian inference is a powerful tool for inversion that requires the computation of high-dimensional integrals. Sequential Monte Carlo (SMC) methods, a.k.a. interacting particle methods, are a class of Monte Carlo methods able to sample from a sequence of probability densities of growing dimension. They have many applications, for instance in filtering, global optimization or rare-event simulation. The work has focused in particular on the extension of SMC methods in a dynamic context where the system, governed by a hidden Markov process, is also determined by static parameters that we seek to estimate. In sequential Bayesian estimation, the determination of fixed parameters causes particular difficulties: such a process is non-ergodic, the system not forgetting its initial conditions. It is shown how these difficulties can be overcome in an application of tracking and identification of geometric shapes by a CCD digital camera. Markov chain Monte Carlo (MCMC) sampling steps are introduced to diversify the samples without altering the posterior distribution. For another, offline, material control application, which mixes static and dynamic parameters, we proposed an original approach. It consists of a Particle Marginal Metropolis-Hastings (PMMH) algorithm that integrates Rao-Blackwellized SMC, based on a bank of interacting ensemble Kalman filters. Other information processing work has been conducted: particle filtering for atmospheric re-entry vehicle tracking, 3D radar imaging by sparse regularization, and image registration by mutual information.
22

Madurapperuma, Buddhika D. "From Bray-Curtis Ordination to Markov Chain Monte Carlo Simulation: Assessing Anthropogenically-Induced and/or Climatically-Induced Changes in Arboreal Ecosystems." Diss., North Dakota State University, 2013. https://hdl.handle.net/10365/27057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mapping forest resources is useful for identifying threat patterns and monitoring changes associated with landscapes. Remote Sensing and Geographic Information Science techniques are effective tools used to identify and forecast forest resource threats such as exotic plant invasion, vulnerability to climate change, and land-use/cover change. This research focused on mapping abundance and distribution of Russian-olive using soil and land-use/cover data, evaluating historic land-use/cover change using mappable water-related indices addressing the primary loss of riparian arboreal ecosystems, and detecting year-to-year land-cover changes on forest conversion processes. Digital image processing techniques were used to detect the changes of arboreal ecosystems using ArcGIS ArcInfo® 9.3, ENVI®, and ENVI® EX platforms. Research results showed that Russian-olive at the inundated habitats of the Missouri River is abundant compared to terrestrial habitats in the Bismarck-Mandan Wildland Urban Interface. This could be a consequence of habitat quality of the floodplain, such as its silt loam and silty clay soil type, which favors Russian-olive regeneration. Russian-olive has close assemblage with cottonwood (Populus deltoides) and buffaloberry (Shepherdia argentea) trees at the lower elevations. In addition, the Russian-olive-cottonwood association correlated with low nitrogen, low pH, and high Fe, while the Russian-olive-buffaloberry association occurred in highly eroded areas. The Devils Lake sub-watershed was selected to demonstrate how both land-use/cover modification and climatic variability have caused the vulnerability of arboreal ecosystems on the fringe to such changes. Land-cover change showed that the forest acreage declined from 9% to 1%, water extent increased from 13% to 25%, and cropland extent increased from 34% to 39% between 1992 and 2006.
In addition, stochastic modeling was adapted to simulate how land-use/cover change influenced forest conversion to non-forested lands at the urban-wildland fringes in Cass County. The analysis yielded two distinct statistical groups of transition probabilities for forest to non-forest, with high transition probability of unchanged forest (0.54 ≤ Pff ≤ 0.68) from 2006 to 2011. Generally, the land-uses, such as row crops, showed an increasing trend, while grains, hay, seeds, and other crops showed a declining trend. This information is vital to forest managers for implementing restoration and conservation practices in arboreal ecosystems.
United States Department of Agriculture Forest Service award #10-DG-11010000-011
Catalog of Federal Domestic Assistance (CFDA) Cooperative Forestry #10.664
NDSU Department of Geosciences
Research, Creative Activities & Tech Transfer (RCATT)
NDSU Department of Geosciences
Environmental and Conservation Sciences Program
NDSU College of Science & Mathematics
North Dakota View Scholarship
Alexander Goetz Instrument Support Program
23

Madurapperuma, Buddhika Dilhan. "From Bray-Curtis ordination to Markov Chain Monte Carlo simulation| assessing anthropogenically-induced and/or climatically-induced changes in arboreal ecosystems." Thesis, North Dakota State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3589285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Mapping forest resources is useful for identifying threat patterns and monitoring changes associated with landscapes. Remote Sensing and Geographic Information Science techniques are effective tools used to identify and forecast forest resource threats such as exotic plant invasion, vulnerability to climate change, and land-use/cover change. This research focused on mapping abundance and distribution of Russian-olive using soil and land-use/cover data, evaluating historic land-use/cover change using mappable water-related indices addressing the primary loss of riparian arboreal ecosystems, and detecting year-to-year land-cover changes on forest conversion processes. Digital image processing techniques were used to detect the changes of arboreal ecosystems using ArcGIS ArcInfo® 9.3, ENVI®, and ENVI® EX platforms.

Research results showed that Russian-olive at the inundated habitats of the Missouri River is abundant compared to terrestrial habitats in the Bismarck-Mandan Wildland Urban Interface. This could be a consequence of habitat quality of the floodplain, such as its silt loam and silty clay soil type, which favors Russian-olive regeneration. Russian-olive has close assemblage with cottonwood (Populus deltoides) and buffaloberry (Shepherdia argentea) trees at the lower elevations. In addition, the Russian-olive-cottonwood association correlated with low nitrogen, low pH, and high Fe, while the Russian-olive-buffaloberry association occurred in highly eroded areas.

The Devils Lake sub-watershed was selected to demonstrate how both land-use/cover modification and climatic variability have caused the vulnerability of arboreal ecosystems on the fringe to such changes. Land-cover change showed that the forest acreage declined from 9% to 1%, water extent increased from 13% to 25%, and cropland extent increased from 34% to 39% between 1992 and 2006. In addition, stochastic modeling was adapted to simulate how land-use/cover change influenced forest conversion to non-forested lands at the urban-wildland fringes in Cass County. The analysis yielded two distinct statistical groups of transition probabilities for forest to non-forest, with high transition probability of unchanged forest (0.54 ≤ Pff ≤ 0.68) from 2006 to 2011. Generally, the land-uses, such as row crops, showed an increasing trend, while grains, hay, seeds, and other crops showed a declining trend. This information is vital to forest managers for implementing restoration and conservation practices in arboreal ecosystems.
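The transition-probability analysis above can be illustrated with a toy two-state Markov chain. The numbers below are assumptions for illustration only; Pff = 0.6 is merely chosen to fall inside the study's reported 0.54-0.68 range, and the remaining probabilities and shares are hypothetical, not the study's fitted model.

```python
import numpy as np

# Two-state land-cover chain: forest (F) and non-forest (N).
# Rows are the current state, columns the next state; rows sum to 1.
P = np.array([[0.60, 0.40],    # F -> F (Pff), F -> N
              [0.05, 0.95]])   # N -> F,       N -> N

# Hypothetical initial land-cover shares: 9% forest, 91% non-forest.
state = np.array([0.09, 0.91])

# Propagate five annual steps: row vector of shares times transition matrix.
for _ in range(5):
    state = state @ P

print(state)  # shares converge toward the chain's stationary distribution
```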

24

Haque, Shovanur S. "Assessing the accuracy of record matching algorithms in data linkage." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/123042/1/Shovanur_Haque_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis develops a Markov chain based Monte Carlo simulation approach (MaCSim), implemented in R, for assessing the accuracy of a linked file, and illustrates the utility of the approach using Australian Bureau of Statistics (ABS) synthetic data in realistic data settings. MaCSim can be used either to assess a single linking method or to compare multiple linking methods. The accuracy results from MaCSim can inform decisions on a preferred linking method, or on whether records are linkable at all. This will prove extremely important in applying analysis techniques which can adequately account for the errors associated with linkage.
25

Larson, Kajsa. "On perfect simulation and EM estimation." Doctoral thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Ken. "Reseau local d'entreprise : contribution a la modelisation et a l'evaluation des protocoles d'acces." Paris 11, 1988. http://www.theses.fr/1988PA112272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work comprises three main parts: the analysis and simulation of access protocols; the development of a simulation-support software tool; and the design of a test bench intended for hardware simulation. The protocols studied are of the token-passing type, for ring and bus topologies. The simulation software, named "atom", is an extension of the Pascal language.
27

Ittiwattana, Waraporn. "A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yalcinoz, Zerrin. "A Simulation Study On Marginalized Transition Random Effects Models For Multivariate Longitudinal Binary Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609568/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, a simulation study is conducted and a statistical model is fitted to the simulated data. The data are taken to represent the satisfaction of customers who withdraw their salary from a particular bank: longitudinal data with a bivariate, binary response, assumed to be collected from 200 individuals at four different time points. In such data sets, two types of dependence are important and are considered in the model: the dependence within subject measurements and the dependence between responses. The model is the Marginalized Transition Random Effects Model, which has three levels. The first level measures the effect of covariates on the responses, the second level accounts for temporal changes, and the third level measures the differences between individuals. Markov chain Monte Carlo methods are used to fit the model. In the simulation study, the deviations of the estimated values from the true parameters are examined under two conditions: when the model is correctly specified and when it is not. Results suggest that better convergence is obtained with the full model. The third level, which captures individual changes, is more sensitive to model misspecification than the other levels.
29

Brosse, Nicolas. "Around the Langevin Monte Carlo algorithm : extensions and applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX014/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse porte sur le problème de l'échantillonnage en grande dimension et est basée sur l'algorithme de Langevin non ajusté (ULA).Dans une première partie, nous proposons deux extensions d'ULA et fournissons des garanties de convergence précises pour ces algorithmes. ULA n'est pas applicable lorsque la distribution cible est à support compact; grâce à une régularisation de Moreau Yosida, il est néanmoins possible d'échantillonner à partir d'une distribution suffisamment proche de la distribution cible. ULA diverge lorsque les queues de la distribution cible sont trop fines; en renormalisant correctement le gradient, cette difficulté peut être surmontée.Dans une deuxième partie, nous donnons deux applications d'ULA. Nous fournissons un algorithme pour estimer les constantes de normalisation de densités log concaves à partir d'une suite de distributions dont la variance augmente graduellement. En comparant ULA avec la diffusion de Langevin, nous développons une nouvelle méthode de variables de contrôle basée sur la variance asymptotique de la diffusion de Langevin.Dans une troisième partie, nous analysons Stochastic Gradient Langevin Dynamics (SGLD), qui diffère de ULA seulement dans l'estimation stochastique du gradient. Nous montrons que SGLD, appliqué avec des paramètres habituels, peut être très éloigné de la distribution cible. Cependant, avec une technique appropriée de réduction de variance, son coût calcul peut être bien inférieur à celui d'ULA pour une précision similaire
This thesis focuses on the problem of sampling in high dimension and is based on the unadjusted Langevin algorithm (ULA). In a first part, we suggest two extensions of ULA and provide precise convergence guarantees for these algorithms. ULA is not feasible when the target distribution is compactly supported; thanks to a Moreau-Yosida regularization, it is nevertheless possible to sample from a probability distribution close enough to the distribution of interest. ULA diverges when the tails of the target distribution are too thin; by taming the gradient appropriately, this difficulty can be overcome. In a second part, we give two applications of ULA. We provide an algorithm to estimate normalizing constants of log-concave densities based on a sequence of distributions with increasing variance. By comparing ULA with the Langevin diffusion, we develop a new control variates methodology based on the asymptotic variance of the Langevin diffusion. In a third part, we analyze Stochastic Gradient Langevin Dynamics (SGLD), which differs from ULA only in the stochastic estimation of the gradient. We show that SGLD, applied with usual parameters, may end up very far from the target distribution. However, with an appropriate variance reduction technique, its computational cost can be much lower than that of ULA for the same accuracy.
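The unadjusted Langevin algorithm at the heart of this work admits a very short sketch: an Euler discretization of the Langevin diffusion with no Metropolis correction. The following is a generic one-dimensional illustration for a standard Gaussian target, not code from the thesis; the step size and iteration counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula(grad_log_pi, x0, step, n_iter):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of the
    Langevin diffusion dX_t = grad log pi(X_t) dt + sqrt(2) dB_t,
    with no accept/reject correction."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter,) + x.shape)
    for i in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        samples[i] = x
    return samples

# Standard Gaussian target: log pi(x) = -x^2/2, so grad log pi(x) = -x.
samples = ula(lambda x: -x, x0=np.array([3.0]), step=0.1, n_iter=20000)
print(samples[5000:].mean(), samples[5000:].std())
```

Because there is no Metropolis step, the invariant distribution is only approximately the target; the bias grows with the step size, which is exactly the trade-off the convergence guarantees in this thesis quantify.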
30

Lee, Xing Ju. "Statistical and simulation modelling for enhanced understanding of hospital pathogen and related health issues." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103762/1/Xing%20Ju_Lee_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigated the temporal occurrence and transmission of within-hospital pathogens using appropriate statistical and simulation models applied to imperfect hospital data. The research provides new insights into the transmission dynamics of methicillin-resistant Staphylococcus aureus within a hospital ward to assist infection control and prevention efforts. Additionally, appropriate statistical methods are identified to analyse hospital infection data which take into account the intricacies and potential limitations of such data.
31

Trávníček, Jan. "Tvorba spolehlivostních modelů pro pokročilé číslicové systémy." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with system reliability. First, the concept of reliability itself is discussed, along with the indicators that express it concretely. The second chapter describes the different kinds of reliability models for simple and complex systems, and further describes the basic methods for the construction of reliability models. The fourth chapter is devoted to the very important Markov models, which are powerful and general models for calculating the reliability of advanced systems; their suitability for recoverable systems, which may contain absorbing states, is explained. The next chapter describes standby redundancy, discussing the advantages and disadvantages of static, dynamic and hybrid standby, and the influence of different load levels on service life. The sixth chapter is devoted to the implementation, a description of the application, and a description of the input file in XML format; the results obtained in experimental calculations are also discussed.
32

Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods to perform efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity conditions that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues as they explain the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well-defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of intractable integrals. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods.
The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
33

Harlow, Jennifer. "Data-Adaptive Multivariate Density Estimation Using Regular Pavings, With Applications to Simulation-Intensive Inference." Thesis, University of Canterbury. School of Mathematics and Statistics, 2013. http://hdl.handle.net/10092/9160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A regular paving (RP) is a finite succession of bisections that partitions a multidimensional box into sub-boxes using a binary tree-based data structure, with the restriction that an existing sub-box in the partition may only be bisected on its first widest side. Mapping a real value to each element of the partition gives a real-mapped regular paving (RMRP) that can be used to represent a piecewise-constant function density estimate on a multidimensional domain. The RP structure allows real arithmetic to be extended to density estimates represented as RMRPs. Other operations such as computing marginal and conditional functions can also be carried out very efficiently by exploiting these arithmetical properties and the binary tree structure. The purpose of this thesis is to explore the potential for density estimation using RPs. The thesis is structured in three parts. The first part formalises the operational properties of RP-structured density estimates. The next part considers methods for creating a suitable RP partition for an RMRP-structured density estimate. The advantages and disadvantages of a Markov chain Monte Carlo algorithm, already developed, are investigated and this is extended to include a semi-automatic method for heuristic diagnosis of convergence of the chain. An alternative method is also proposed that uses an RMRP to approximate a kernel density estimate. RMRP density estimates are not differentiable and have slower convergence rates than good multivariate kernel density estimators. The advantages of an RMRP density estimate relate to its operational properties. The final part of this thesis describes a new approach to Bayesian inference for complex models with intractable likelihood functions that exploits these operational properties.
34

M'Saad, Soumaya. "Détection de changement de comportement de vie chez la personne âgée par images de profondeur." Thesis, Rennes 1, 2022. http://www.theses.fr/2022REN1S039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le nombre des personnes âgées ne cesse d’augmenter dans le monde d’où l’enjeu de les aider à continuer de vivre chez eux et de vieillir en bonne santé. Cette thèse s’inscrit dans cette problématique de santé publique et propose la détection du changement de comportement de la personne en se basant sur l’enregistrement des activités au domicile par des capteurs de profondeur à bas coût qui garantissent l’anonymat et qui fonctionnent de façon autonome de jour comme de nuit. Après une étude initiale associant la classification des images par des approches de machine learning, une méthode basée sur les réseaux de neurones profonds ResNet-18 a été proposée pour la détection de la chute et la détection des postures. Cette approche a donné des résultats satisfaisants avec une précision globale de 93,44% et une sensibilité globale de 93,24%. La détection des postures permet de suivre les changements de comportement qui sont synonymes de la perte de la routine. Deux stratégies ont été déployées pour le suivi du maintien de la routine. La première examine la succession des activités dans la journée en établissant une distance d’édition ou une déformation dynamique de la journée, l’autre consiste à classer la journée en routine et non-routine en associant des approches non supervisées (k-moyennes et k-modes), supervisées (Random Forest) ou les connaissances a priori sur la journée routine de la personne. Ces stratégies ont été évaluées à la fois sur des données réelles enregistrées en EHPAD chez deux personnes fragiles et sur des données simulées créées pour combler le manque de données réelles. Elles ont montré la possibilité de détecter différents scénarios de changement de comportement (brusque, progressif, récurrent) et prouvent que les capteurs de profondeur peuvent être utilisés en EHPAD ou dans l’habitat d’une personne âgée
The number of elderly people in the world is constantly increasing, hence the challenge of helping them continue to live at home and age in good health. This PhD addresses this public health issue and proposes the detection of changes in a person's behaviour based on the recording of activities in the home by low-cost depth sensors that guarantee anonymity and operate autonomously day and night. After an initial study combining image classification with machine learning approaches, a method based on ResNet-18 deep neural networks was proposed for fall and posture detection. This approach gave good results, with a global accuracy of 93.44% and a global sensitivity of 93.24%. The detection of postures makes it possible to follow the state of the person, and in particular behaviour changes, which signal a loss of routine. Two strategies were deployed to monitor the routine. The first examines the succession of activities in the day by computing an edit distance or a dynamic time warping over the day; the other classifies days into routine and non-routine by combining unsupervised (k-means and k-modes) and supervised (Random Forest) approaches, or a priori knowledge about the person's routine. These strategies were evaluated both on real data recorded in a nursing home (EHPAD) from two frail people and on simulated data created to compensate for the lack of real data. They showed the possibility of detecting different behavioural change scenarios (abrupt, progressive, recurrent) and prove that depth sensors can be used in an EHPAD or in the home of an elderly person
35

Behlouli, Abdeslam. "Simulation du canal optique sans fil. Application aux télécommunications optique sans fil." Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2308/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le contexte de cette thèse est celui des communications optiques sans fil pour des applications en environnements indoor. Pour discuter des performances d'une liaison optique sans fil, il est nécessaire d'établir une étude caractéristique du comportement du canal de propagation. Cette étude passe par l'étape de la mesure ou de l'estimation par la simulation de la réponse impulsionnelle. Après avoir décrit la composition d'une liaison et passé en revue les méthodes de simulation existantes, nous présentons nos algorithmes de simulation dans des environnements réalistes, en nous intéressant à leurs performances en termes de précision et de temps de calcul. Ces méthodes sont basées sur la résolution des équations de transport de la lumière par du lancer de rayons associées aux méthodes d'intégration stochastique de Monte Carlo. La version classique de ces méthodes est à la base de trois algorithmes de simulations proposés. En utilisant une optimisation par des chaînes de Markov, nous présentons ensuite deux autres algorithmes. Un bilan des performances de ces algorithmes est établi dans des scénarios mono et multi-antennes. Finalement, nous appliquons nos algorithmes pour caractériser l'impact de l'environnement de simulation sur les performances d'une liaison de communication par lumière visible, à savoir les modèles d'émetteurs, les matériaux des surfaces, l'obstruction du corps de l'utilisateur et sa mobilité, et la géométrie de la scène de simulation
The context of this PhD thesis falls within the scope of optical wireless communications for applications in indoor environments. To discuss the performance of an optical wireless link, it is necessary to establish a characteristic study of the behavior of the optical wave propagation channel. This study can be realized by measurement or by simulation of the channel impulse response. After describing the composition of an optical wireless link and reviewing existing simulation methods, we present our new channel simulation algorithms for realistic environments, focusing on their accuracy and their complexity in terms of computation time. These methods are based on solving the light transport equations by ray-tracing techniques associated with stochastic Monte Carlo integration methods. The classical version of these methods is the basis of three proposed simulation algorithms. By applying a Markov chain optimization, we present two new algorithms. A performance assessment of these algorithms is established in mono- and multi-antenna scenarios. Finally, we present the application of these algorithms to characterizing the impact of the simulation environment on the performance of a visible light communication link. We particularly focus on the transmitter models, surface coating materials, obstruction of the user's body and its mobility, and the geometry of the simulation scene.
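The stochastic Monte Carlo integration underlying such ray-tracing methods reduces, in its simplest form, to averaging function evaluations at random points, with a standard error that shrinks like 1/sqrt(n). A toy illustration (not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo estimate of the integral of f(x) = x^2 over [0, 1]
# (exact value 1/3): draw uniform samples and average f over them.
n = 100_000
x = rng.random(n)
estimate = (x ** 2).mean()

# Standard error of the estimator, from the sample standard deviation.
std_error = (x ** 2).std() / np.sqrt(n)
print(estimate, std_error)
```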
36

Wang, Guojun. "Some Bayesian Methods in the Estimation of Parameters in the Measurement Error Models and Crossover Trial." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1076852153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Girbino, Michael James. "Detecting Distribution-Level Voltage Anomalies by Monitoring State Transitions in Voltage Regulation Control Systems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1550483383962611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Senegas, Julien. "Méthodes de Monte Carlo en Vision Stéréoscopique." Phd thesis, École Nationale Supérieure des Mines de Paris, 2002. http://tel.archives-ouvertes.fr/tel-00005637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis studies the uncertainty attached to the estimation of the geometry of a scene from a stereoscopic pair of images. Matching the homologous points of a pair assumes local similarity between the two images and requires discriminative radiometric information. In many situations, however (geometric deformations, acquisition noise, lack of contrast, ...), these assumptions fail, and the resulting matching errors depend strongly on the information contained in the pair rather than on the stereoscopic system itself.
To address this problem, we propose a Bayesian framework and the application of Markov chain Monte Carlo methods. These consist in simulating the conditional distribution of the disparity field given the stereoscopic pair, and make it possible to determine the areas where large errors may appear, possibly with low probability. Different stochastic models are compared and tested on SPOT stereoscopic scenes, and we give some directions for extending these models to other types of images. We also consider the problem of estimating the parameters of these models and propose a number of algorithms for automatic estimation. Finally, an important part of the work is devoted to the study of simulation algorithms based on Markov chain theory. The essential contribution lies in the extension of the Metropolis-Hastings algorithm to a multi-dimensional setting. An efficient application based on the Gaussian distribution is given. Moreover, we show how the use of importance sampling techniques can effectively reduce the computation time.
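The Metropolis-Hastings algorithm extended in this work can be sketched, in its simplest random-walk form, as follows. This is a generic one-dimensional illustration with a standard Gaussian target, not the multi-dimensional variant developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_target, x0, scale, n_iter):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    x = float(x0)
    lp = log_target(x)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + scale * rng.standard_normal()
        lp_prop = log_target(prop)
        # Symmetric proposal: accept with probability min(1, pi(prop)/pi(x)).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Standard Gaussian target: log pi(x) = -x^2/2 (up to a constant).
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, scale=1.0,
                            n_iter=30000)
print(chain[5000:].mean(), chain[5000:].std())
```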
39

Ljunggren, Tim. "Probabilistic Life Cycle Costing : A Monte Carlo Approach for Distribution System Operators in Sweden." Thesis, KTH, Elektroteknisk teori och konstruktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-218293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Investments in power systems are characterized by large investment costs and by uncertainties due to the extended time frames involved. New consumption patterns in the electricity grid, as well as an aging grid, call for modernization, new solutions and new investments. Components in the electrical system are characterized by the fact that most of their costs arise after acquisition. One state-of-the-art method for analyzing investments over long time frames and providing long-term cost estimates is life cycle costing (LCC). LCC takes a "cradle to grave" approach, which enables comparative cost assessments. This thesis reviews the existing literature on probabilistic life cycle costing and gives a step-by-step methodology for DSOs to systematically address uncertainty in cost and technical parameters. The thesis proposes a Monte Carlo sampling method combined with a Markov chain failure model, providing a comprehensive way to assess financial benefit when comparing different investment decisions. The model evaluates financial implications and technical properties to demonstrate the total cost of components, and it is applied to a case for Swedish distribution system operators and their investment in transformers; the proposed model includes an all-covering model of costs and incentives. The main conclusion is that probabilistic life cycle costing benefits investment decisions, and the applied method shows promising results in addressing uncertainty and investment risks. The developed PLCC model is used on an investment decision where two transformers are compared. The results show that PLCC is a powerful tool and could be used in power system applications.
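The combination this abstract describes, Monte Carlo sampling over a Markov chain failure model with discounted costs, can be sketched minimally as follows. The two-state failure model, cost figures, failure probability and discount rate below are invented placeholders, not values from the thesis:

```python
import random

def simulate_lcc(years, p_fail, c_invest, c_repair, c_oper, rate, rng):
    """One Monte Carlo trajectory of a two-state (working/failed) Markov
    failure model; a failed component is repaired within the same year.
    Returns the total discounted life cycle cost."""
    cost = c_invest
    for t in range(1, years + 1):
        discount = 1.0 / (1.0 + rate) ** t
        cost += c_oper * discount            # yearly operating cost
        if rng.random() < p_fail:            # Markov transition: working -> failed
            cost += c_repair * discount      # repair returns the chain to 'working'
    return cost

rng = random.Random(1)
costs = [simulate_lcc(years=40, p_fail=0.02, c_invest=1000.0,
                      c_repair=200.0, c_oper=10.0, rate=0.04, rng=rng)
         for _ in range(5000)]
expected_lcc = sum(costs) / len(costs)  # Monte Carlo estimate of the mean LCC
```

Comparing two candidate components then amounts to comparing their `expected_lcc` estimates, with the spread of `costs` serving as a simple proxy for investment risk.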
40

Krull, Claudia. "Discrete time Markov chains advanced applications in simulation." Erlangen San Diego, Calif. SCS, 2008. http://d-nb.info/992577586/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Viho, Agbélénko Goudjo. "Etude de modèles markoviens en génétique et calculs des temps d'absorption." Université Joseph Fourier (Grenoble), 1996. http://www.theses.fr/1996GRE10121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis studies the modelling, by Markov processes, of the evolution across generations of the count of a given gene in a population of limited size. In a first part, we survey the different types of homogeneous Markov chains used in the literature for such modelling. When the population size is large, these Markov chains can be shown to converge to homogeneous diffusion processes, using the convergence theorems that are the subject of the second part. The third part of the thesis is devoted to the various methods for computing mean absorption times, which here correspond to the mean times before disappearance or complete fixation of the given gene, starting from different initial conditions. The fourth part proposes a simple model of a genetic disease, sickle-cell anaemia, and of the influence of malaria as a selection factor on the evolution of this disease in Africa.
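For a finite chain, the mean absorption times discussed in the third part reduce to a linear system involving the fundamental matrix (I - Q)^-1, where Q is the transition matrix restricted to the transient states. A minimal sketch with a toy Wright-Fisher chain (the population size and transition law are chosen purely for illustration, not taken from the thesis):

```python
import math
import numpy as np

# Wright-Fisher toy chain on allele counts {0, ..., N}; counts 0 and N
# (loss and fixation of the gene) are the absorbing states.
N = 3
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    p = i / N  # current allele frequency
    for j in range(N + 1):
        # binomial resampling of the next generation
        P[i, j] = math.comb(N, j) * p**j * (1 - p) ** (N - j)

transient = list(range(1, N))              # not-yet-absorbed allele counts
Q = P[np.ix_(transient, transient)]        # transitions among transient states
fundamental = np.linalg.inv(np.eye(len(transient)) - Q)
mean_absorption = fundamental @ np.ones(len(transient))
# mean_absorption[k] = expected number of generations until loss or
# fixation, starting from allele count transient[k]
```

For this symmetric N = 3 chain the expected time to absorption is the same from either transient state, which gives a quick sanity check on the linear algebra.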
42

Nilsson, Albert. "Exploring strategies in Monopoly using Markov chains and simulation." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

GRANDIS, HENDRA. "Imagerie electromagnetique bayesienne par la simulation d'une chaine de markov." Paris 7, 1994. http://www.theses.fr/1994PA077036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work presented in this thesis concerns the development of a method for solving the inverse problem in electromagnetics within a Bayesian framework. In this perspective, the solution of the inverse problem is obtained by integrating all available information on the data and on the model parameters. We use a stochastic algorithm to compute the posterior probability of the model parameters given the data and the prior information on the parameters. A series of models is constructed using a probability law that can be computed directly for the different possible values of the model parameters. These models form a Markov chain whose invariant probability is the posterior probability of the sought model parameters; the models thus obtained converge to an invariant model, or inverse model. Two imaging algorithms were developed to obtain the electrical image of the subsurface from electromagnetic data: one for one-dimensional structures, and one for three-dimensional structures in the thin-plate approximation. We studied the influence of a few dominant parameters on the resolution of the inverse problem. Tests on synthetic data showed that the basic algorithm must be adapted in order to ensure the stability and convergence of the solution. Inversions of real data of different natures (impedance tensor, induction vector and apparent resistivity) confirm the validity of the proposed method.
44

Coste, Nicolas. "Vers la prédiction de performance de modèles compositionnels dans les architectures GALS." Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00538425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Validation, including functional verification and performance evaluation, is a critical process in the design of complex hardware: a functionally correct design may turn out to be unable to reach the targeted performance. The later a problem in a design is identified, the higher the cost of fixing it. Design validation should therefore be undertaken as early as possible in the design flow. This thesis presents a compositional modelling formalism covering both the functional and the timed aspects of hardware systems, and defines a performance-evaluation approach for analysing the resulting models. The modelling formalism, called Interactive Probabilistic Chain (IPC), is a discrete-time process algebra. We defined a branching bisimulation and proved its congruence with respect to the parallel composition operator, enabling a compositional approach. IPCs can be seen as a transposition of Interactive Markov Chains to a discrete-time setting. For performance evaluation, a fully specified IPC is transformed into a discrete-time Markov chain, which can then be analysed. In addition, we defined a performance measure, called latency, and an algorithm to compute its long-run average distribution. Using tools for handling IPCs, developed on top of the CADP toolbox, we studied the communication aspects of an industrial design, the xSTream architecture, developed at STMicroelectronics.
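The final analysis step this abstract describes, turning a fully specified model into a discrete-time Markov chain and computing a long-run average measure, can be sketched generically. The three-state transition matrix and the per-state latency values below are invented for the example; this is not the IPC tool chain itself:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])   # toy DTMC transition matrix

# Stationary distribution: solve pi (P - I) = 0 together with sum(pi) = 1,
# stacked as one overdetermined least-squares system.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

latency_per_state = np.array([1.0, 2.0, 4.0])   # hypothetical latencies
long_run_latency = float(pi @ latency_per_state)  # long-run average measure
```

The long-run average of any per-state quantity is then just its expectation under the stationary distribution `pi`.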
45

Gosselin, Frédéric. "Modèles stochastiques d'extinction de population : propriétés mathématiques et leurs applications." Paris 6, 1997. http://www.theses.fr/1997PA066358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Population extinction models are frequently used as decision-support tools for the management of threatened animal or plant populations. The aim of this thesis is to prove mathematical results describing how extinction occurs in most of these models, and to apply them to some particular cases. After a literature review on population extinction and the models used to study it, I present a series of mathematical results, mostly asymptotic, that generalise results known for finite Markov chains or branching processes to the broader setting of Markov chains with a countable state space. I prove the almost-sure extinction of these models and two types of stochastic and asymptotic stabilisation conditional on non-extinction; it is the use of results from linear analysis that makes such an extension possible. I then present two ways of simulating these models numerically, one of which is partly new, and study, in the light of the results of this thesis, an extinction model corresponding to a real population. I close this work with a discussion of the applicability, use and usefulness of my results for extinction modelling, and also in other biological fields. The main conclusions are that my results apply to a large number of existing extinction models and have interesting conceptual applications in theoretical ecology, concerning the notions of equilibrium and stability, but that they are not systematically useful in interpreting simulations and must be applied with a certain mathematical rigour.
46

Fischer, Alexander. "An Uncoupling Coupling method for Markov chain Monte Carlo simulations with an application to biomolecules." [S.l. : s.n.], 2003. http://www.diss.fu-berlin.de/2003/234/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Grill, Tomas, and Håkan Östberg. "A Financial Optimization Approach to Quantitative Analysis of Long Term Government Debt Management in Sweden." Thesis, Linköping University, Department of Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

The Swedish National Debt Office (SNDO) is the Swedish Government’s financial administration. It has several tasks and the main one is to manage the central government’s debt in a way that minimizes the cost with due regard to risk. The debt management problem is to choose currency composition and maturity profile - a problem made difficult because of the many stochastic factors involved.

The SNDO has created a simulation model to quantitatively analyze different aspects of this problem by evaluating a set of static strategies in a great number of simulated futures. This approach has a number of drawbacks, which might be handled by using a financial optimization approach based on Stochastic Programming.

The objective of this master’s thesis is thus to apply financial optimization on the Swedish government’s strategic debt management problem, using the SNDO’s simulation model to generate scenarios, and to evaluate this approach against a set of static strategies in fictitious future macroeconomic developments.

In this report we describe how the SNDO’s simulation model is used along with a clustering algorithm to form future scenarios, which are then used by an optimization model to find an optimal decision regarding the debt management problem.

Results of the evaluations show that our optimization approach is expected to have a lower average annual real cost, but with somewhat higher risk, than a set of static comparison strategies in a simulated future. These evaluation results are based on a risk preference set by ourselves, since the government has not expressed its risk preference quantitatively. We also conclude that financial optimization is applicable on the government debt management problem, although some work remains before the method can be incorporated into the strategic work of the SNDO.

48

Milios, Dimitrios. "On approximating the stochastic behaviour of Markovian process algebra models." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Markov chains offer a rigorous mathematical framework to describe systems that exhibit stochastic behaviour, as they are supported by a plethora of methodologies to analyse their properties. Stochastic process algebras are high-level formalisms, where systems are represented as collections of interacting components. This compositional approach to modelling allows us to describe complex Markov chains using a compact high-level specification. There is an increasing need to investigate the properties of complex systems, not only in the field of computer science, but also in computational biology. To explore the stochastic properties of large Markov chains is a demanding task in terms of computational resources. Approximating the stochastic properties can be an effective way to deal with the complexity of large models. In this thesis, we investigate methodologies to approximate the stochastic behaviour of Markovian process algebra models. The discussion revolves around two main topics: approximate state-space aggregation and stochastic simulation. Although these topics are different in nature, they are both motivated by the need to efficiently handle complex systems. Approximate Markov chain aggregation constitutes the formulation of a smaller Markov chain that approximates the behaviour of the original model. The principal hypothesis is that states that can be characterised as equivalent can be adequately represented as a single state. We discuss different notions of approximate state equivalence, and how each of these can be used as a criterion to partition the state-space accordingly. Nevertheless, approximate aggregation methods typically require an explicit representation of the transition matrix, a fact that renders them impractical for large models. We propose a compositional approach to aggregation, as a means to efficiently approximate complex Markov models that are defined in a process algebra specification, PEPA in particular. 
Regarding our contributions to Markov chain simulation, we propose an accelerated method that can be characterised as almost exact, in the sense that it can be arbitrarily precise. We discuss how it is possible to sample from the trajectory space rather than the transition space. This approach requires fewer random samples than a typical simulation algorithm. Most importantly, our approach does not rely on particular assumptions with respect to the model properties, in contrast to otherwise more efficient approaches.
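For contrast with the trajectory-space sampling proposed in this thesis, the standard transition-space simulation of a continuous-time Markov chain (Gillespie's direct method) draws one exponential holding time and one jump per transition. A generic sketch, with a toy birth-death chain whose rates are invented for the example:

```python
import random

def gillespie(rates, x0, t_end, rng):
    """Simulate a CTMC path until t_end. `rates(x)` returns a mapping
    {next_state: rate}; one random holding time and one random jump are
    sampled per transition, the cost the thesis's approach reduces."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        out = rates(x)
        total = sum(out.values())
        if total == 0.0:                 # absorbing state: stop
            break
        t += rng.expovariate(total)      # exponential holding time
        if t >= t_end:
            break
        r, acc = rng.random() * total, 0.0
        for nxt, rate in out.items():    # pick the jump proportionally to rate
            acc += rate
            if r < acc:
                x = nxt
                break
        path.append((t, x))
    return path

# Birth-death chain: births at rate 1, deaths at rate 0.5 per individual.
birth_death = lambda n: {n + 1: 1.0, n - 1: 0.5 * n} if n > 0 else {1: 1.0}
path = gillespie(birth_death, 0, 100.0, random.Random(7))
```

Each step consumes two random samples; sampling trajectories rather than individual transitions, as the thesis proposes, is aimed precisely at reducing this per-step cost.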
49

Virotta, Francesco. "Critical slowing down and error analysis of lattice QCD simulations." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://dx.doi.org/10.18452/16502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation, where we find that our estimate of the exponential autocorrelation time scales as tau_exp(a) ~ a^-5, where a is the lattice spacing. In unquenched simulations with O(a)-improved Wilson fermions we do not obtain a scaling law, but we find results compatible with the behaviour observed in the pure gauge theory. The discussion is supported by a large set of ensembles, both in pure gauge theory and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes on the error analysis of the expectation values of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes, we propose and test a method to obtain reliable estimates of statistical errors. The method is intended to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of order 10 tau_exp. This is the typical case when simulating close to the continuum limit, where the computational cost of producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
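The error analysis underlying this abstract rests on the integrated autocorrelation time tau_int. A naive fixed-window estimator (a simplification of Wolff's Gamma method, which chooses the window automatically) can be sketched and checked on an AR(1) chain whose tau_int is known in closed form; the chain and its parameters are invented for the test, not data from the thesis:

```python
import numpy as np

def tau_int(series, window):
    """Naive integrated autocorrelation time with a fixed summation window:
    0.5 + sum of normalized autocorrelations rho(1..window)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = float(np.dot(x, x)) / n  # lag-0 autocovariance
    rho = [float(np.dot(x[:-t], x[t:])) / (n - t) / c0
           for t in range(1, window + 1)]
    return 0.5 + sum(rho)

# AR(1) chain with coefficient a has tau_int = 0.5 * (1 + a) / (1 - a),
# i.e. 4.5 for a = 0.8.
rng = np.random.default_rng(0)
a, n = 0.8, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + eps[i]

estimate = tau_int(x, window=100)  # should land near the theoretical 4.5
```

The thesis's point about statistics of order 10 tau_exp is visible here: with a short series, the window sum itself becomes too noisy to trust, which is what motivates more careful error estimators.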
50

Rönnby, Karl. "Monte Carlo Simulations for Chemical Systems." Thesis, Linköpings universitet, Matematiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates different types of Monte Carlo estimators for use in computations on chemical systems, mainly to be used in calculating the surface growth and evolution of SiC. Monte Carlo methods are a class of algorithms that use random sampling to solve problems numerically and are applied in many settings. Three different types of Monte Carlo methods are studied: a simple Monte Carlo estimator and two types of Markov chain Monte Carlo, namely Metropolis-algorithm Monte Carlo and kinetic Monte Carlo. The mathematical background is given for all methods, and they are tested both on smaller systems with known results, to check their mathematical and chemical soundness, and on a larger surface system as an example of how they could be used.
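The Metropolis update at the core of the second estimator type can be written in a few lines. The quadratic "energy" landscape, the inverse temperature and the chain length below are invented for illustration; the thesis applies the method to SiC surface systems, not to this toy:

```python
import math
import random

def metropolis_step(state, energy, propose, beta, rng):
    """One Metropolis update: accept the proposed move with
    probability min(1, exp(-beta * dE))."""
    candidate = propose(state, rng)
    dE = energy(candidate) - energy(state)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return candidate
    return state

# Toy system: a particle on integer sites in a quadratic potential,
# whose Boltzmann distribution at beta = 1 has mean 0 and variance ~1.
energy = lambda x: 0.5 * x * x
propose = lambda x, rng: x + rng.choice((-1, 1))

rng = random.Random(0)
x, samples = 0, []
for _ in range(20_000):
    x = metropolis_step(x, energy, propose, beta=1.0, rng=rng)
    samples.append(x)

mean_x = sum(samples) / len(samples)
var_x = sum((s - mean_x) ** 2 for s in samples) / len(samples)
```

Checking the sample mean and variance against the known Boltzmann values is the same "small system with known results" soundness test the abstract describes.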

To the bibliography