Dissertations / Theses on the topic 'Sequential Monte Carlo methods'

Consult the top 50 dissertations / theses for your research on the topic 'Sequential Monte Carlo methods.'

1

Fearnhead, Paul. "Sequential Monte Carlo methods in filter theory." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299043.

Full text
2

Punskaya, Elena. "Sequential Monte Carlo methods for digital communications." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620013.

Full text
3

Henderson, Donna. "Sequential Monte Carlo methods for demographic inference." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:a3516e76-ac95-4efc-9d57-53092ca4c8f3.

Full text
Abstract:
Patterns of mutations in the DNA of modern-day individuals have been shaped by the demographic history of our ancestors. Inferring the demographic history from these patterns is a challenging problem due to complex dependencies along the genome. Several recent methods have adopted McVean's sequentially Markovian coalescent (SMC') to model these dependencies. However, these methods involve simplifying assumptions that preclude the inference of rates of migration between populations. We have developed the first method to infer directional migration rates as a function of time. To do this, we employ sequential Monte Carlo (SMC) methods, also known as particle filters, to infer parameters in the SMC' model. To improve the sampling from the state space of SMC' we have developed a sophisticated sampling technique that shows better performance than the standard bootstrap filter. We apply our algorithm, SMC2, to Neanderthal data and are able to infer the time and extent of migration from the Vindija Neanderthal population into Europeans. With the large volume of sequencing data being produced from diverse populations, both modern and ancient, there is high demand for methods to interrogate this data. SMC2 provides a flexible algorithm, which can be modified to suit many data applications. For instance, we show that our method performs well when the phasing of the samples is unknown, which is often the case in practice. The long runtime of SMC2 is the main limiting factor in the adoption of the method. We have started to explore ways to improve the runtime, by developing an adaptive online expectation maximisation (EM) procedure.
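As background for the particle-filter machinery this abstract relies on, here is a minimal, generic bootstrap (SIR) particle filter sketch in Python. It is not the SMC2 algorithm of the thesis; the toy linear-Gaussian model, function names, and parameters are illustrative assumptions only.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles, transition, likelihood, init, seed=0):
    """Generic bootstrap (SIR) particle filter.

    transition(particles) -> particles propagated through the state dynamics
    likelihood(y, particles) -> observation density evaluated at each particle
    init(n) -> n particles drawn from the prior on the initial state
    """
    rng = np.random.default_rng(seed)
    particles = init(n_particles)
    log_evidence = 0.0
    filtered_means = []
    for y in observations:
        particles = transition(particles)          # propagate through the prior dynamics
        weights = likelihood(y, particles)         # weight by the observation density
        log_evidence += np.log(weights.mean())     # running marginal-likelihood estimate
        weights = weights / weights.sum()
        filtered_means.append(np.sum(weights * particles))
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                 # multinomial resampling
    return np.array(filtered_means), log_evidence

# Toy usage on a 1-D random-walk state observed in Gaussian noise (illustrative only).
rng = np.random.default_rng(1)
states = np.cumsum(rng.normal(size=50))
obs = states + rng.normal(scale=0.5, size=50)
means, logZ = bootstrap_particle_filter(
    obs, 1000,
    transition=lambda p: p + rng.normal(size=p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi)),
    init=lambda n: rng.normal(size=n),
)
```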
4

Li, Jun Feng. "Sequential Monte Carlo methods for multiple target tracking." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612269.

Full text
5

Dias, Stiven Schwanz. "Collaborative emitter tracking using distributed sequential Monte Carlo methods." Instituto Tecnológico de Aeronáutica, 2014. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=3137.

Full text
Abstract:
We introduce in this Thesis several particle filter (PF) solutions to the problem of collaborative emitter tracking. In the studied scenario, multiple agents with sensing, processing and communication capabilities passively collect received-signal-strength (RSS) measurements of the same signal originating from a non-cooperative emitter and collaborate to estimate its hidden state. Assuming unknown sensor noise variances, we derive an exact decentralized implementation of the optimal centralized PF solution for this problem in a fully connected network. Next, assuming local internode communication only, we derive two fully distributed consensus-based solutions to the problem using respectively average consensus iterations and a novel ordered minimum consensus approach which allow us to reproduce the exact centralized solution in a finite number of consensus iterations. In the sequel, to reduce the communication cost, we derive a suboptimal tracker which employs suitable parametric approximations to summarize messages that are broadcast over the network. Moreover, to further reduce communication and processing requirements, we introduce a non-iterative tracker based on random information dissemination which is suited for online applications. We derive the proposed random exchange diffusion PF (ReDif-PF) assuming both that observation model parameters are perfectly known and that the emitter is always present. We extend then the ReDif-PF tracker to operate in scenarios with unknown sensor noise variances and propose the Rao-Blackwellized (RB) ReDif-PF. Finally, we introduce the random exchange diffusion Bernoulli filter (RndEx-BF) which enables the network of collaborative RSS sensors to jointly detect and track the emitter within the surveillance space.
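The average-consensus building block mentioned in this abstract has a compact generic form: each node repeatedly nudges its local statistic towards its neighbours' values, and all nodes converge to the network-wide average. A minimal sketch under assumed inputs (the ring topology, step size, and values below are illustrative, not the setup used in the thesis):

```python
import numpy as np

def average_consensus(values, adjacency, epsilon, n_iter):
    """Run synchronous average-consensus iterations over a connected network.

    values    : initial local statistics, one per node
    adjacency : symmetric 0/1 adjacency matrix of the network
    epsilon   : step size; choosing it below 1 / (max node degree) ensures convergence
    """
    x = np.asarray(values, dtype=float).copy()
    L = np.diag(adjacency.sum(axis=1)) - adjacency   # graph Laplacian
    for _ in range(n_iter):
        x = x - epsilon * (L @ x)                    # each node moves toward its neighbours' average
    return x

# Illustrative 4-node ring: every node converges to the mean of the initial values (2.5).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(average_consensus([1.0, 2.0, 3.0, 4.0], A, epsilon=0.25, n_iter=100))
```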
6

Creal, Drew D. "Essays in sequential Monte Carlo methods for economics and finance /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/7444.

Full text
7

Petrov, Nikolay. "Sequential Monte Carlo methods for extended and group object tracking." Thesis, Lancaster University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658087.

Full text
Abstract:
This dissertation deals with the challenging tasks of real-time extended and group object tracking. The problems are formulated as joint parameter and state estimation of dynamic systems. The solutions proposed are formulated within a general nonlinear framework and are based on the Sequential Monte Carlo (SMC) method, also known as the Particle Filtering (PF) method. Four different solutions are proposed for the extended object tracking problem. The first two are based on border parametrisation of the visible surface of the extended object. The likelihood functions are derived for two different scenarios - one without clutter in the measurements and another one in the presence of clutter. In the third approach the kernel density estimation technique is utilised to approximate the joint posterior density of the target dynamic state and static size parameters. The fourth proposed approach solves the extended object tracking problem based on the recently emerged SMC method combined with interval analysis, called the Box Particle Filter (Box PF). Simulation results for all of the developed algorithms show accurate online tracking, with very good estimates both for the target kinematic states and for the parameters of the target extent. In addition, the performance of the Box PF and the border parametrised PF is validated utilising real measurements from laser range scanners obtained within a prototype security system replicating an airport corridor.
8

Johansen, Adam Michael. "Some non-standard sequential Monte Carlo methods and their applications." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612877.

Full text
9

Arnold, Andrea. "Sequential Monte Carlo Parameter Estimation for Differential Equations." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396617699.

Full text
10

Brasnett, Paul. "Sequential Monte-Carlo methods for object tracking and replacement in video." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442196.

Full text
11

Kuhlenschmidt, Bernd. "On the stability of sequential Monte Carlo methods for parameter estimation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709098.

Full text
12

Spengler, Martin. "On the applicability of sequential Monte Carlo methods to multiple target tracking /." [S.l.] : [s.n.], 2005. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16112.

Full text
13

Ozgur, Soner. "Reduced Complexity Sequential Monte Carlo Algorithms for Blind Receivers." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10518.

Full text
Abstract:
Monte Carlo algorithms can be used to estimate the state of a system given relative observations. In this dissertation, these algorithms are applied to physical layer communications system models to estimate channel state information, to obtain soft information about transmitted symbols or multiple access interference, or to obtain estimates of all of these by joint estimation. Initially, we develop and analyze a multiple access technique utilizing mutually orthogonal complementary sets (MOCS) of sequences. These codes deliberately introduce inter-chip interference, which is naturally eliminated during processing at the receiver. However, channel impairments can destroy their orthogonality properties and additional processing becomes necessary. We utilize Monte Carlo algorithms to perform joint channel and symbol estimation for systems utilizing MOCS sequences as spreading codes. We apply Rao-Blackwellization to reduce the required number of particles. However, dense signaling constellations, multiuser environments, and the interchannel interference introduced by the spreading codes all increase the dimensionality of the symbol state space significantly. A full maximum likelihood solution is computationally expensive and generally not practical. However, obtaining the optimum solution is critical, and looking at only a part of the symbol space is generally not a good solution. We have sought algorithms that would guarantee that the correct transmitted symbol is considered, while only sampling a portion of the full symbol space. The performance of the proposed method is comparable to the Maximum Likelihood (ML) algorithm. While the computational complexity of ML increases exponentially with the dimensionality of the problem, the complexity of our approach increases only quadratically. Markovian structures such as the one imposed by MOCS spreading sequences can be seen in other physical layer structures as well. We have applied this partitioning approach with some modification to blind equalization of frequency selective fading channel and to multiple-input multiple output receivers that track channel changes. Additionally, we develop a method that obtains a metric for quantifying the convergence rate of Monte Carlo algorithms. Our approach yields an eigenvalue based method that is useful in identifying sources of slow convergence and estimation inaccuracy.
14

Noh, Seong Jin. "Sequential Monte Carlo methods for probabilistic forecasts and uncertainty assessment in hydrologic modeling." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/170084.

Full text
15

Schäfer, Christian. "Monte Carlo methods for sampling high-dimensional binary vectors." Phd thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00767163.

Full text
Abstract:
This thesis is concerned with Monte Carlo methods for sampling high-dimensional binary vectors from complex distributions of interest. If the state space is too large for exhaustive enumeration, these methods provide a means of estimating the expected value with respect to some function of interest. Standard approaches are mostly based on random walk type Markov chain Monte Carlo, where the equilibrium distribution of the chain is the distribution of interest and its ergodic mean converges to the expected value. We propose a novel sampling algorithm based on sequential Monte Carlo methodology which copes well with multi-modal problems by virtue of an annealing schedule. The performance of the proposed sequential Monte Carlo sampler depends on the ability to sample proposals from auxiliary distributions which are, in a certain sense, close to the current distribution of interest. The core work of this thesis discusses strategies to construct parametric families for sampling binary vectors with dependencies. The usefulness of this approach is demonstrated in the context of Bayesian variable selection and combinatorial optimization of pseudo-Boolean objective functions.
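To make the annealing idea concrete, here is a minimal tempering SMC sampler for binary vectors sketched in Python. It uses simple single-flip Metropolis moves rather than the parametric proposal families developed in the thesis; the objective function, schedule, and particle numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tempered_smc_binary(f, d, n_particles=500, n_steps=20, n_mcmc=5):
    """Tempering SMC sampler targeting pi_t(x) proportional to exp(beta_t * f(x)) on {0,1}^d,
    with beta_t increasing from 0 (uniform initial distribution) to 1 (target)."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    X = rng.integers(0, 2, size=(n_particles, d))           # uniform initial particles
    fx = np.array([f(x) for x in X], dtype=float)
    for beta_prev, beta in zip(betas[:-1], betas[1:]):
        logw = (beta - beta_prev) * fx                       # incremental importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        X, fx = X[idx].copy(), fx[idx].copy()                # multinomial resampling
        for _ in range(n_mcmc):                              # single-flip Metropolis moves at beta
            j = rng.integers(0, d, size=n_particles)
            Xprop = X.copy()
            Xprop[np.arange(n_particles), j] ^= 1
            fprop = np.array([f(x) for x in Xprop], dtype=float)
            accept = np.log(rng.random(n_particles)) < beta * (fprop - fx)
            X[accept], fx[accept] = Xprop[accept], fprop[accept]
    return X

# Toy pseudo-Boolean objective: reward agreement with a hidden pattern (illustrative only).
pattern = rng.integers(0, 2, size=15)
samples = tempered_smc_binary(lambda x: -float(np.sum(x != pattern)), d=15)
```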
16

Evers, Christine. "Blind dereverberation of speech from moving and stationary speakers using sequential Monte Carlo methods." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4761.

Full text
Abstract:
Speech signals radiated in confined spaces are subject to reverberation due to reflections of surrounding walls and obstacles. Reverberation leads to severe degradation of speech intelligibility and can be prohibitive for applications where speech is digitally recorded, such as audio conferencing or hearing aids. Dereverberation of speech is therefore an important field in speech enhancement. Driven by consumer demand, blind speech dereverberation has become a popular field in the research community and has led to many interesting approaches in the literature. However, most existing methods are dictated by their underlying models and hence suffer from assumptions that constrain the approaches to specific subproblems of blind speech dereverberation. For example, many approaches limit the dereverberation to voiced speech sounds, leading to poor results for unvoiced speech. Few approaches tackle single-sensor blind speech dereverberation, and only a very limited subset allows for dereverberation of speech from moving speakers. Therefore, the aim of this dissertation is the development of a flexible and extendible framework for blind speech dereverberation accommodating different speech sound types, single- or multiple sensor as well as stationary and moving speakers. Bayesian methods benefit from – rather than being dictated by – appropriate model choices. Therefore, the problem of blind speech dereverberation is considered from a Bayesian perspective in this thesis. A generic sequential Monte Carlo approach accommodating a multitude of models for the speech production mechanism and room transfer function is consequently derived. In this approach both the anechoic source signal and reverberant channel are estimated using their optimal estimators by means of Rao-Blackwellisation of the state-space of unknown variables. The remaining model parameters are estimated using sequential importance resampling. The proposed approach is implemented for two different speech production models for stationary speakers, demonstrating substantial reduction in reverberation for both unvoiced and voiced speech sounds. Furthermore, the channel model is extended to facilitate blind dereverberation of speech from moving speakers. Due to the structure of measurement model, single- as well as multi-microphone processing is facilitated, accommodating physically constrained scenarios where only a single sensor can be used as well as allowing for the exploitation of spatial diversity in scenarios where the physical size of microphone arrays is of no concern. This dissertation is concluded with a survey of possible directions for future research, including the use of switching Markov source models, joint target tracking and enhancement, as well as an extension to subband processing for improved computational efficiency.
17

Miryusupov, Shohruh. "Particle methods in finance." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E069.

Full text
Abstract:
The thesis introduces simulation techniques that are based on particle methods, and it consists of two parts, namely rare event simulation and a homotopy transport for stochastic volatility model estimation. Particle methods, which generalize hidden Markov models, are widely used in different fields such as signal processing, biology, rare event estimation, finance, etc. There are a number of approaches based on Monte Carlo methods that allow one to approximate a target density, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). We apply SMC algorithms to estimate default probabilities in an intensity process based on a stable process, in order to compute a credit value adjustment (CVA) with wrong way risk (WWR). We propose a novel approach to estimate rare events, which is based on the generation of Markov chains by simulating the Hamiltonian system. We demonstrate the properties that allow us to obtain an ergodic Markov chain and show the performance of our approach on an example that we encounter in option pricing. In the second part, we aim at numerically estimating a stochastic volatility model, and consider it in the context of a transportation problem, when we would like to find "an optimal transport map" that pushes forward the measure. In a filtering context, we understand it as the transportation of particles from a prior to a posterior distribution in pseudo-time. We also propose to reweight the transported particles, so that we can move towards the area where particles with high weights are concentrated. We show the application of our method on the example of option pricing with the Stein-Stein stochastic volatility model and illustrate the bias and variance.
18

De, Freitas Allan. "Sequential Monte Carlo methods for crowd and extended object tracking and dealing with tall data." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/16743/.

Full text
Abstract:
The Bayesian methodology is able to deal with a number of challenges in object tracking, especially with uncertainties in the system dynamics and sensor characteristics. However, model complexities can result in non-analytical expressions which require computationally cumbersome approximate solutions. In this thesis computationally efficient approximate methods for object tracking with complex models are developed. One such complexity is when a large group of objects, referred to as a crowd, is required to be tracked. A crowd generates multiple measurements with uncertain origin. Two solutions are proposed, based on a box particle filtering approach and a convolution particle filtering approach. Contributions include a theoretical derivation for the generalised likelihood function for the box particle filter, and an adaptive convolution particle filter able to resolve the data association problem without the measurement rates. The performance of the two filters is compared over a realistic scenario for a large crowd of pedestrians. Extended objects also generate a variable number of multiple measurements. In contrast with point objects, extended objects are characterised with their size or volume. Multiple object tracking is a notoriously challenging problem due to complexities caused by data association. An efficient box particle filter method for multiple extended object tracking is proposed, and for the first time it is shown how interval based approaches can deal efficiently with data association problems and reduce the computational complexity of the data association. The performance of the method is evaluated on real laser rangefinder data. Advances in digital sensors have resulted in systems being capable of accumulating excessively large volumes of data. Three efficient Bayesian inference methods are developed for object tracking when excessively large numbers of measurements may otherwise cause standard algorithms to be inoperable. The underlying mechanics of these methods are adaptive subsampling and the expectation propagation algorithm.
19

Chen, Wen-shiang. "Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1086146309.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 117 p. : ill. (some col.). Advisors: Bhavik R. Bakshi and Prem K. Goel, Department of Chemical Engineering. Includes bibliographical references (p. 114-117).
20

Skrivanek, Zachary. "Sequential Imputation and Linkage Analysis." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1039121487.

Full text
21

Valdes, LeRoy I. "Analysis Of Sequential Barycenter Random Probability Measures via Discrete Constructions." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3304/.

Full text
Abstract:
Hill and Monticino (1998) introduced a constructive method for generating random probability measures with a prescribed mean or distribution on the mean. The method involves sequentially generating an array of barycenters that uniquely defines a probability measure. This work analyzes statistical properties of the measures generated by sequential barycenter array constructions. Specifically, this work addresses how changing the base measures of the construction affects the statistics of measures generated by the SBA construction. A relationship between statistics associated with a finite level version of the SBA construction and the full construction is developed. Monte Carlo statistical experiments are used to simulate the effect changing base measures has on the statistics associated with the finite level construction.
22

Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.

Full text
Abstract:
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods to perform efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues as this explains the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well-defined, we give a constructive solution. However, this result is only formal as the representation is given in terms of integrals which are intractable. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods. The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
23

Chen, Yuting. "Inférence bayésienne dans les modèles de croissance de plantes pour la prévision et la caractérisation des incertitudes." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2014. http://www.theses.fr/2014ECAP0040/document.

Full text
Abstract:
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on the solutions to enhance plant growth model predictive capacity with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. Firstly, from a model design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the essential ecophysiological processes for biomass budget in a probabilistic framework, so as to avoid identification problems and to accentuate uncertainty assessment in model prediction. Secondly, thorough research is conducted regarding model parameterization. In a Bayesian framework, both Sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) based methods are investigated to address the parameterization issues in the context of plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced, with the objective of putting more emphasis on the observation data while preserving the robustness of Bayesian methods. It can be regarded as a stochastic variant of an EM type algorithm. Thirdly, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. Subsequently, the model calibration is performed with special attention paid to the uncertainty assessment. The posterior distribution obtained from this estimation step is consequently considered as prior information for the prediction step, in which a SMC-based on-line estimation method such as Convolution Particle Filtering (CPF) is employed to perform data assimilation. Both state and parameter estimates are updated with the purpose of improving the prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat. Some indications are also given on the experimental design to optimize the quality of predictions. The applications to real case scenarios show encouraging predictive performances and open the way to potential tools for yield prediction in agriculture.
24

Yildirim, Berkin. "A Comparative Evaluation Of Conventional And Particle Filter Based Radar Target Tracking." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609043/index.pdf.

Full text
Abstract:
In this thesis the radar target tracking problem in Bayesian estimation framework is studied. Traditionally, linear or linearized models, where the uncertainty in the system and measurement models is typically represented by Gaussian densities, are used in this area. Therefore, classical sub-optimal Bayesian methods based on linearized Kalman filters can be used. The sequential Monte Carlo methods, i.e. particle filters, make it possible to utilize the inherent non-linear state relations and non-Gaussian noise models. Given the sufficient computational power, the particle filter can provide better results than Kalman filter based methods in many cases. A survey over relevant radar tracking literature is presented including aspects as estimation and target modeling. In various target tracking related estimation applications, particle filtering algorithms are presented.
25

Gu, Feng. "Dynamic Data Driven Application System for Wildfire Spread Simulation." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/cs_diss/57.

Full text
Abstract:
Wildfires have significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all these data with no error. Therefore, predictions from the simulation model will be different from what it is in a real wildfire. Without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the simulation and the real wildfire is very likely to continuously grow. With the development of sensor technologies and the advance of computer infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors is fed into the simulation model to make predictions of the real system. This dynamic input is treated as the measurement to evaluate the output and adjust the states of the model, thus to improve simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model and the observation data of the true state, and uses analysis approaches to obtain state estimations. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results in wildfire spread simulations. We carry out sensitivity analysis for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
26

Al-Saadony, Muhannad. "Bayesian stochastic differential equation modelling with application to finance." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1530.

Full text
Abstract:
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek Interest Rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek Interest Rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model, by replacing the Brownian motions that drive the underlying stochastic differential equations by fractional Brownian motions, so allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology to simulated and real financial data with success. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
27

Hol, Jeroen D. "Resampling in particle filters." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2366.

Full text
Abstract:

In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and computational complexity.
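For reference, the systematic scheme that the report finds favourable can be written in a few lines. This is a generic sketch, not code from the report itself; it draws a single uniform offset and then selects particles along a regular grid of the cumulative weights.

```python
import numpy as np

def systematic_resampling(weights, rng=None):
    """Systematic resampling: one uniform offset, then a regular grid of positions.

    Returns the indices of the selected particles; weights must be normalised.
    """
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n      # u/n, (u+1)/n, ..., (u+n-1)/n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                               # guard against rounding error
    return np.searchsorted(cumulative, positions)

# Example: four particles with unequal weights.
w = np.array([0.1, 0.2, 0.3, 0.4])
idx = systematic_resampling(w, np.random.default_rng(0))
```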

28

Johansson, Anders. "Acoustic Sound Source Localisation and Tracking : in Indoor Environments." Doctoral thesis, Blekinge Tekniska Högskola [bth.se], School of Engineering - Dept. of Signal Processing, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00401.

Full text
Abstract:
With advances in micro-electronic complexity and fabrication, sophisticated algorithms for source localisation and tracking can now be deployed in cost sensitive appliances for both consumer and commercial markets. As a result, such algorithms are becoming ubiquitous elements of contemporary communication, robotics and surveillance systems. Two of the main requirements of acoustic localisation and tracking algorithms are robustness to acoustic disturbances (to maximise localisation accuracy), and low computational complexity (to minimise power-dissipation and cost of hardware components). The research presented in this thesis covers both advances in robustness and in computational complexity for acoustic source localisation and tracking algorithms. This thesis also presents advances in modelling of sound propagation in indoor environments; a key to the development and evaluation of acoustic localisation and tracking algorithms. As an advance in the field of tracking, this thesis also presents a new method for tracking human speakers in which the problem of the discontinuous nature of human speech is addressed using a new state-space filter based algorithm which incorporates a voice activity detector. The algorithm is shown to achieve superior tracking performance compared to traditional approaches. Furthermore, the algorithm is implemented in a real-time system using a method which yields a low computational complexity. Additionally, a new method is presented for optimising the parameters for the dynamics model used in a state-space filter. The method features an evolution strategy optimisation algorithm to identify the optimum dynamics’ model parameters. Results show that the algorithm is capable of real-time online identification of optimum parameters for different types of dynamics models without access to ground-truth data. Finally, two new localisation algorithms are developed and compared to older well established methods. In this context an analytic analysis of noise and room reverberation is conducted, considering its influence on the performance of localisation algorithms. The algorithms are implemented in a real-time system and are evaluated with respect to robustness and computational complexity. Results show that the new algorithms outperform their older counterparts, both with regards to computational complexity, and robustness to reverberation and background noise. The field of acoustic modelling is advanced in a new method for predicting the energy decay in impulse responses simulated using the image source method. The new method is applied to the problem of designing synthetic rooms with a defined reverberation time, and is compared to several well established methods for reverberation time prediction. This comparison reveals that the new method is the most accurate.
29

Montazeri, Shahtori Narges. "Quantifying the impact of contact tracing on ebola spreading." Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/34540.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Faryad Darabi Sahneh
Recent experience of the Ebola outbreak of 2014 highlighted the importance of immediate response to impede Ebola transmission at its very early stage. To this aim, efficient and effective allocation of limited resources is crucial. Among standard interventions is the practice of following up with physical contacts of individuals diagnosed with Ebola virus disease -- known as contact tracing. In an effort to objectively understand the effect of possible contact tracing protocols, we explicitly develop a model of Ebola transmission incorporating contact tracing. Our modeling framework has several features to suit early-stage Ebola transmission: 1) the network model is patient-centric, because when the number of infected cases is small only the myopic networks of infected individuals matter and the rest of the possible social contacts are irrelevant, 2) the Ebola disease model is individual-based and stochastic, because at the early stages of spread, random fluctuations are significant and must be captured appropriately, 3) the contact tracing model is parameterizable to analyze the impact of critical aspects of contact tracing protocols. Notably, we propose an activity driven network approach to contact tracing, and develop a Monte-Carlo method to compute the basic reproductive number of the disease spread in different scenarios. Exhaustive simulation experiments suggest that while contact tracing is important in stopping the Ebola spread, it does not need to be done too urgently. This result is due to the rather long incubation period of Ebola disease infection. However, immediate hospitalization of infected cases is crucial and requires the most attention and resource allocation. Moreover, to investigate the impact of mitigation strategies in the 2014 Ebola outbreak, we consider reported data in Guinea, one of the three West African countries that experienced the Ebola virus disease outbreak. We formulate a multivariate sequential Monte Carlo filter that utilizes mechanistic models for Ebola virus propagation to simultaneously estimate the disease progression states and the model parameters according to reported incidence data streams. This method has the advantage of performing the inference online as new data become available and estimating the evolution of the basic reproductive ratio R₀(t) throughout the Ebola outbreak. Our analysis identifies a peak in the basic reproductive ratio close to the time of Ebola case reports in Europe and the USA.
30

Allaya, Mouhamad M. "Méthodes de Monte-Carlo EM et approximations particulaires : application à la calibration d'un modèle de volatilité stochastique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010072/document.

Full text
Abstract:
This thesis pursues a double perspective in the joint use of sequential Monte Carlo methods (SMC) and the Expectation-Maximization algorithm (EM) under hidden Markov models having a Markov dependence structure of order greater than one in the unobserved component signal. Firstly, we begin with a brief description of the theoretical basis of both statistical concepts through Chapters 1 and 2, which are devoted to them. Secondly, we focus on the simultaneous implementation of both concepts in Chapter 3, in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to effectively approximate any bounded conditional functional, in particular filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and particularly the stochastic volatility models under study. Having presented the EM algorithm as well as the SMC methods and some of their properties in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is done for exchange rates and for some stock indexes in Chapter 3. We conclude this chapter with a slight departure from the canonical stochastic volatility model as well as Monte Carlo simulations on the resulting model. Finally, we strive in Chapters 4 and 5 to provide the theoretical and practical foundations for the extension of sequential Monte Carlo methods, including particle filtering and smoothing, when the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model whose approximation has such a dependence property.
31

Nguyen, Thi Ngoc Minh. "Lissage de modèles linéaires et gaussiens à régimes markoviens. : Applications à la modélisation de marchés de matières premières." Electronic Thesis or Diss., Paris, ENST, 2016. https://pastel.hal.science/tel-03689917.

Full text
Abstract:
The work presented in this thesis focuses on Sequential Monte Carlo methods for general state space models. These procedures are used to approximate any sequence of conditional distributions of some hidden state variables given a set of observations. We are particularly interested in two-filter based methods to estimate the marginal smoothing distribution of a state variable given past and future observations. We first prove convergence results for the estimators produced by all two-filter based Sequential Monte Carlo methods under weak assumptions on the hidden Markov model. Under additional strong mixing assumptions which are more restrictive but still standard in this context, we show that the constants of the deviation inequalities and the asymptotic variances are uniformly bounded in time. Then, a Conditionally Linear and Gaussian hidden Markov model is introduced to explain commodity market regime shifts. The markets are modeled by extending the Gibson-Schwartz model on the spot price and the convenience yield. It is assumed that the dynamics of these variables is controlled by a discrete hidden Markov chain identifying the regimes. Each regime corresponds to a set of parameters driving the state space model dynamics. We propose a Monte Carlo Expectation Maximization algorithm to estimate the parameters of the model based on a two-filter method to approximate the intermediate quantity. This algorithm uses explicit marginalization (Rao-Blackwellisation) of the linear states to reduce Monte Carlo variance. The algorithm performance is illustrated using Chicago Mercantile Exchange (CME) crude oil data.
32

Chen, Xi. "Sequential Monte Carlo radio-frequency tomographic tracking." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104844.

Full text
Abstract:
Target tracking over a small-scale area using wireless sensor networks (WSNs) is a technique that can be used in applications ranging from emergency rescue after an earthquake to security protection in a building. Many target tracking systems rely on the presence of an electric device which must be carried by the target in order to report back its location and status. This makes these systems unsuitable for many emergency applications; in such applications, device-free tracking systems, where no devices are attached to the targets, are needed. Radio-Frequency (RF) tomographic tracking is one such device-free tracking technique. This system tracks moving targets by analyzing changes in attenuation in wireless transmissions. The target can be tracked within the sensor network area without being required to carry an electric device. Some previously proposed device-free tracking approaches require a time-consuming training phase before tracking can be carried out. Others perform tracking by sacrificing part of the estimation accuracy. In this thesis, we propose a novel sequential Monte Carlo (SMC) algorithm for RF tomographic tracking. It can track a single target moving in a wireless sensor network without the system needing to be trained. The algorithm adopts a particle filtering method to estimate the target position and incorporates on-line Expectation Maximization (EM) to estimate model parameters. Based on experimental measurements, the work also introduces a novel measurement model for the attenuation caused by a target with the goal of improving estimation accuracy. The performance of the algorithm is assessed through numerical simulations and field experiments carried out with a wireless sensor network testbed. Both simulated and experimental results demonstrate that our work outperforms previous RF tomographic tracking approaches for single target tracking.
APA, Harvard, Vancouver, ISO, and other styles
33

Fallon, M. F. "Acoustic source tracking using sequential Monte Carlo." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598928.

Full text
Abstract:
Particle-filter-based acoustic source localisation algorithms track, online and in real time, the position of a sound source (a person speaking in a room) based on the current data from a microphone array as well as all previous data up to that point. The first section of this thesis reviews previous research in this field and discusses the suitability of using particle filters to solve this problem. Experiments are then detailed which examine the typical performance and behaviour of various instantaneous localisation functions. In subsequent sections, algorithms are detailed which advance the state of the art. First, an orientation estimation algorithm is introduced which uses speaker directivity to infer head pose. Second, an algorithm is introduced for multi-target acoustic source tracking, based upon the Track Before Detect (TBD) methodology. Using this methodology avoids the need to identify a set of source measurements and allows for a large saving in computational power. Finally, this algorithm is extended to allow for an unknown and time-varying number of speakers. By leveraging the frequency content of speech it is shown that regions of the surveillance space can be monitored for activity while requiring only a minor increase in overall computation. A variable-dimension particle filter is then outlined which proposes newly active targets, maintains target tracks and removes targets when they become inactive.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhou, Yan. "Bayesian model comparison via sequential Monte Carlo." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/62064/.

Full text
Abstract:
Sequential Monte Carlo (SMC) methods have been widely used in modern scientific computation, and Bayesian model comparison has been successfully applied in many fields, yet there has been little research on the use of SMC for the purpose of Bayesian model comparison. This thesis studies different SMC strategies for Bayesian model computation. In addition, various extensions and refinements of existing SMC practice are proposed. Through empirical examples, it is shown that the SMC strategies can be applied to many realistic applications which might be difficult for Markov chain Monte Carlo (MCMC) algorithms. The extensions and refinements lead to an automatic and adaptive strategy that is able to produce accurate estimates of the Bayes factor with minimal manual tuning of algorithms. Another advantage of SMC algorithms over MCMC algorithms is that they can be parallelized in a straightforward way, which allows them to make better use of modern computing resources. This thesis presents work on the parallel implementation of generic SMC algorithms. A C++ framework within which generic SMC algorithms can be implemented easily on parallel computers is introduced. We show that, with little additional effort, implementations using this framework can provide a significant performance speedup.
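As a concrete illustration of the kind of computation referred to above, the sketch below uses a tempered SMC sampler to estimate the log marginal likelihood of two competing Gaussian models for the same data and forms a log Bayes factor from the two estimates. The models, priors, temperature ladder and random-walk move are assumptions chosen to keep the example self-contained; the adaptive strategy and the parallel C++ framework developed in the thesis are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_lik(theta, y, sigma=1.0):
        # Independent Gaussian observations y_i ~ N(theta, sigma^2), vectorised over theta.
        return (-0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1) / sigma ** 2
                - 0.5 * len(y) * np.log(2 * np.pi * sigma ** 2))

    def log_marginal_smc(y, prior_sd, n=2000, n_temps=30, step=0.3):
        # Tempered SMC: move particles from the prior to the posterior through
        # pi_b(theta) proportional to prior(theta) * likelihood(theta)^b, b: 0 -> 1,
        # accumulating the log normalizing-constant ratios along the way.
        betas = np.linspace(0.0, 1.0, n_temps + 1)
        theta = rng.normal(0.0, prior_sd, size=n)          # draws from the prior
        log_ml = 0.0
        for b_prev, b in zip(betas[:-1], betas[1:]):
            inc = (b - b_prev) * log_lik(theta, y)         # incremental log-weights
            log_ml += np.log(np.mean(np.exp(inc - inc.max()))) + inc.max()
            w = np.exp(inc - inc.max())
            w /= w.sum()
            theta = theta[rng.choice(n, size=n, p=w)]      # multinomial resampling
            # One random-walk Metropolis move targeting pi_b keeps particles diverse.
            prop = theta + rng.normal(0.0, step, size=n)
            log_pi = lambda t: -0.5 * t ** 2 / prior_sd ** 2 + b * log_lik(t, y)
            accept = np.log(rng.uniform(size=n)) < log_pi(prop) - log_pi(theta)
            theta = np.where(accept, prop, theta)
        return log_ml

    y = rng.normal(1.5, 1.0, size=20)                      # synthetic data
    log_ml_tight = log_marginal_smc(y, prior_sd=1.0)       # model 1: tight prior on the mean
    log_ml_diffuse = log_marginal_smc(y, prior_sd=10.0)    # model 2: diffuse prior
    print("log Bayes factor (tight vs diffuse prior):", log_ml_tight - log_ml_diffuse)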
APA, Harvard, Vancouver, ISO, and other styles
35

Kostov, Svetoslav. "Hamiltonian sequential Monte Carlo and normalizing constants." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.702941.

Full text
Abstract:
The present thesis deals with the problems of simulation from a given target distribution and the estimation of ratios of normalizing constants, i.e. marginal likelihoods (ML). Both problems can be considerably difficult even for the simplest possible real-world statistical setups. We investigate how the combination of Hamiltonian Monte Carlo (HMC) and Sequential Monte Carlo (SMC) can be used to sample effectively from a multi-modal target distribution and to estimate ratios of normalizing constants at the same time. We call this novel combination the Hamiltonian SMC (HSMC) algorithm and we show that it achieves some improvements over existing Monte Carlo sampling algorithms, especially when the target distribution is multi-modal and/or has a complicated covariance structure. An important convergence result is proved for the HSMC, as well as an upper bound on the bias of the estimate of the ratio of MLs. Our investigation of the continuous-time limit of the HSMC process reveals an interesting connection between Monte Carlo simulation and physics. We also concern ourselves with the problem of estimating the uncertainty of the estimate of the ML of an HMM. We propose a new algorithm (the Pairs algorithm) to estimate the non-asymptotic second moment of the estimate of the ML for a general HMM. Later we show that there exists a linear-in-time bound on the relative variance of the estimate of the second moment of the ML obtained using the Pairs algorithm. This theoretical property of the relative variance translates in practice into more reliable estimates of the second moment of the estimate of the ML compared to the standard approach of running independent copies of the particle filter. We support our investigations with various numerical examples, such as Bayesian inference for a heteroscedastic regression and inference for a Lotka-Volterra-based HMM.
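The building block that the thesis combines with SMC is the Hamiltonian Monte Carlo transition. A minimal, generic leapfrog-based HMC step is sketched below; in an HSMC-style sampler such a move would be applied to each particle as the mutation kernel. The bimodal target, step size and trajectory length are assumptions made for illustration, and neither the full HSMC algorithm nor the Pairs algorithm is reproduced.

    import numpy as np

    rng = np.random.default_rng(2)

    def log_target(x):
        # Assumed bimodal target: equal-weight mixture of N(-2, 1) and N(2, 1).
        return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

    def grad_log_target(x, eps=1e-5):
        # Central-difference gradient keeps the sketch generic.
        return (log_target(x + eps) - log_target(x - eps)) / (2.0 * eps)

    def hmc_step(x, step=0.2, n_leapfrog=25):
        # One Metropolis-corrected Hamiltonian move: resample the momentum,
        # run a leapfrog trajectory, then accept or reject on the energy change.
        p0 = rng.normal()
        x_new, p = x, p0 + 0.5 * step * grad_log_target(x)
        for i in range(n_leapfrog):
            x_new = x_new + step * p
            if i < n_leapfrog - 1:
                p = p + step * grad_log_target(x_new)
        p = p + 0.5 * step * grad_log_target(x_new)
        h_old = -log_target(x) + 0.5 * p0 ** 2
        h_new = -log_target(x_new) + 0.5 * p ** 2
        return x_new if np.log(rng.uniform()) < h_old - h_new else x

    # Quick demonstration: the chain should visit both modes of the mixture.
    chain = [0.0]
    for _ in range(5000):
        chain.append(hmc_step(chain[-1]))
    print("sample mean (close to 0 if both modes are visited):", np.mean(chain))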
APA, Harvard, Vancouver, ISO, and other styles
36

Martin, James Stewart. "Some new results in sequential Monte Carlo." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/11655.

Full text
Abstract:
Sequential Monte Carlo (SMC) methods have been well studied within the context of performing inference with respect to partially observed Markov processes, and their use in this context relies upon the ability to evaluate or estimate the likelihood of a set of observed data, given the state of the latent process. In many real-world applications such as the study of population genetics and econometrics, however, this likelihood can neither be analytically evaluated nor replaced by an unbiased estimator, and so the application of exact SMC methods to these problems may be infeasible, or even impossible. The models in many of these applications are complex, yet realistic, and so development of techniques that can deal with problems of likelihood intractability can help us to perform inference for many important yet otherwise inaccessible problems; this motivates the research presented within this thesis. The main focus of this work is the application of approximate Bayesian computation (ABC) methodology to state-space models (SSMs) and the development of SMC methods in the context of these ABC SSMs for filtering and smoothing of the latent process. The introduction of ABC here avoids the need to evaluate the likelihood, at the cost of introducing a bias into the resulting filtering and smoothing estimators; this bias is explored theoretically and through simulation studies. An alternative SMC procedure, incorporating an additional rejection step, is also considered and the novel application of this rejection-based SMC procedure to the ABC approximation of the SSM is considered. This thesis will also consider the application of MCMC and SMC methods to a class of partially observed point process (PP) models. We investigate the problem of performing sequential inference for these models and note that current methods often fail. We present a new approach to smoothing in this context, using SMC samplers (Del Moral et al., 2006). This approach is illustrated, with some theoretical discussion, on a doubly stochastic PP applied in the context of finance.
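A minimal ABC-style particle filter for a toy linear-Gaussian state-space model is sketched below: the observation density is never evaluated; instead each particle simulates a pseudo-observation and is weighted by a Gaussian ABC kernel of its distance to the actual observation, whose bandwidth plays the role of the tolerance that induces the bias discussed above. The model, the kernel and the parameter values are assumptions invented for the example and merely stand in for the intractable-likelihood settings addressed in the thesis.

    import numpy as np

    rng = np.random.default_rng(3)

    def abc_particle_filter(y_obs, n=1000, eps=0.3, phi=0.9, sx=0.5, sy=0.2):
        # State:       x_t = phi * x_{t-1} + N(0, sx^2)
        # Observation: y_t = x_t + N(0, sy^2), treated as simulable but not evaluable.
        x = rng.normal(0.0, 1.0, size=n)
        means = []
        for y in y_obs:
            x = phi * x + rng.normal(0.0, sx, size=n)        # propagate particles
            y_sim = x + rng.normal(0.0, sy, size=n)          # simulate pseudo-observations
            # ABC weight: Gaussian kernel of |y_sim - y| with bandwidth eps, replacing
            # the (assumed unavailable) observation density p(y | x).
            log_w = -0.5 * (y_sim - y) ** 2 / eps ** 2
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            means.append(w @ x)
            x = x[rng.choice(n, size=n, p=w)]                # resample
        return np.array(means)

    # Synthetic data from the same model, then ABC filtering of the latent state.
    T, phi, sx, sy = 50, 0.9, 0.5, 0.2
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = phi * x_true[t - 1] + rng.normal(0.0, sx)
    y_obs = x_true + rng.normal(0.0, sy, size=T)
    est = abc_particle_filter(y_obs)
    print("RMSE of the ABC filtering means:", np.sqrt(np.mean((est - x_true) ** 2)))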
APA, Harvard, Vancouver, ISO, and other styles
37

Pace, Michele. "Stochastic models and methods for multi-object tracking." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00651396.

Full text
Abstract:
Multi-object tracking aims at following a set of moving targets from data obtained sequentially. The problem is particularly difficult because of the unknown and time-varying number of targets, measurement noise, false alarms, detection uncertainty, and data-association uncertainty. Probability Hypothesis Density (PHD) filters are a new family of filters suited to this problem. These techniques differ from classical methods (MHT, JPDAF, particle filtering) by modelling the set of targets as a random finite set and by working with the moments of its probability density. The first part of the thesis mainly addresses the application of PHD filters to maritime and airborne multi-target filtering in realistic scenarios, together with a study of the numerical properties of these algorithms. The second part is devoted to the theoretical study of the branching processes underlying the multi-target filtering equations, with an analysis of the stability properties and long-time behaviour of the semigroups of spatial branching intensities. We then analyse the exponential stability properties of a class of measure-valued equations arising in nonlinear multi-target filtering. This analysis applies in particular to sequential Monte Carlo methods and particle algorithms in the context of Bernoulli and PHD filters.
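For readers unfamiliar with particle implementations of the PHD recursion mentioned above, the sketch below shows the standard weight-update step of a particle PHD filter in its simplest one-dimensional form, with a detection probability, a constant clutter intensity and a Gaussian single-target likelihood. The scenario and all parameter values are assumptions made for illustration; this is only the generic update step, not the algorithms studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(4)

    def gauss(z, x, s=0.5):
        # Single-target measurement likelihood g(z | x) for z = x + N(0, s^2).
        return np.exp(-0.5 * (z - x) ** 2 / s ** 2) / np.sqrt(2.0 * np.pi * s ** 2)

    def phd_update(x, w, Z, p_d=0.9, clutter=0.1):
        # Standard particle-PHD weight update: a missed-detection term plus one term
        # per measurement z, normalised by clutter intensity + predicted detection mass.
        new_w = (1.0 - p_d) * w
        for z in Z:
            g = gauss(z, x)
            new_w = new_w + p_d * g * w / (clutter + np.sum(p_d * g * w))
        return new_w

    # Toy scenario: two targets near 1.0 and 4.0 observed in clutter on a line segment.
    x = rng.uniform(0.0, 6.0, size=4000)               # particles covering the region
    w = np.full(x.shape, 2.5 / x.size)                 # prior PHD mass (expected count 2.5)
    Z = np.array([1.05, 3.92, 5.60])                   # two detections plus one clutter point
    w = phd_update(x, w, Z)
    print("estimated expected number of targets:", w.sum())   # mass of the updated PHD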
APA, Harvard, Vancouver, ISO, and other styles
38

Jewell, Sean William. "Divide and conquer sequential Monte Carlo for phylogenetics." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54514.

Full text
Abstract:
Reconstructing evolutionary histories has recently become a computational challenge due to the increased availability of genetic sequencing data and relaxations of classical modelling assumptions. This thesis specializes a divide-and-conquer sequential Monte Carlo (DCSMC) inference algorithm to phylogenetics to address these challenges. In phylogenetics, the tree structure used to represent evolutionary histories provides a model decomposition used for DCSMC. In particular, speciation events are used to recursively decompose the model into subproblems. Each subproblem is approximated by an independent population of weighted particles, which are merged and propagated to create an ancestral population. This approach provides the flexibility to relax classical assumptions on large trees by parallelizing these recursions.
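A toy illustration of the decompose-and-merge idea is sketched below for a two-leaf Gaussian tree: each leaf subproblem is approximated by its own weighted particle population, and the two populations are then merged and extended with the root variable through importance weighting. The model, the pseudo-priors at the leaves and the proposal for the root variable are assumptions invented for this example; it mimics the structure of a DCSMC recursion rather than reproducing the thesis algorithm.

    import numpy as np

    rng = np.random.default_rng(8)

    # Toy tree: root mean mu ~ N(0, 1); leaf latents x_l ~ N(mu, 1); data y_l ~ N(x_l, 1).
    # Each leaf sub-target uses an assumed N(0, 4) pseudo-prior on its latent x_l.
    y = np.array([1.0, 2.0])
    n = 5000

    def log_norm(x, m, v):
        return -0.5 * (x - m) ** 2 / v - 0.5 * np.log(2.0 * np.pi * v)

    # Step 1: an independent weighted particle population for each leaf subproblem.
    leaf_particles, leaf_weights = [], []
    for l in range(2):
        x = rng.normal(0.0, 2.0, size=n)                   # draws from the pseudo-prior
        log_w = log_norm(y[l], x, 1.0)                     # leaf likelihood
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        leaf_particles.append(x)
        leaf_weights.append(w)

    # Step 2: merge at the root. Resample one particle per child, propose the new root
    # variable mu, and weight by root target / (child targets * proposal).
    i = rng.choice(n, size=n, p=leaf_weights[0])
    j = rng.choice(n, size=n, p=leaf_weights[1])
    x1, x2 = leaf_particles[0][i], leaf_particles[1][j]
    mu = rng.normal((x1 + x2) / 2.0, 1.0)                  # proposal for the root variable
    log_target = (log_norm(mu, 0.0, 1.0) + log_norm(x1, mu, 1.0) + log_norm(x2, mu, 1.0)
                  + log_norm(y[0], x1, 1.0) + log_norm(y[1], x2, 1.0))
    log_children = (log_norm(x1, 0.0, 4.0) + log_norm(y[0], x1, 1.0)
                    + log_norm(x2, 0.0, 4.0) + log_norm(y[1], x2, 1.0))
    log_proposal = log_norm(mu, (x1 + x2) / 2.0, 1.0)
    log_w = log_target - log_children - log_proposal
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    print("merged-population estimate of E[mu | y]:", w @ mu)
    print("exact conjugate answer                 :", y.sum() / 4.0)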
APA, Harvard, Vancouver, ISO, and other styles
39

Dickinson, Andrew Samuel. "On the analysis of Monte Carlo and quasi-Monte Carlo methods." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Göncü, Ahmet. "Monte Carlo and quasi-Monte Carlo methods in pricing financial derivatives." Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-06232009-140439/.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: Giray Ökten, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed on Oct. 5, 2009). Document formatted into pages; contains x, 105 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
41

Taft, Keith. "Monte Carlo methods for radiosity." Thesis, University of Liverpool, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Strathmann, Heiko. "Kernel methods for Monte Carlo." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10040707/.

Full text
Abstract:
This thesis investigates the use of reproducing kernel Hilbert spaces (RKHS) in the context of Monte Carlo algorithms. The work proceeds in three main themes. Adaptive Monte Carlo proposals: We introduce and study two adaptive Markov chain Monte Carlo (MCMC) algorithms to sample from target distributions with non-linear support and intractable gradients. Our algorithms, generalisations of random walk Metropolis and Hamiltonian Monte Carlo, adaptively learn local covariance and gradient structure respectively, by modelling past samples in an RKHS. We further show how to embed these methods into the sequential Monte Carlo framework. Efficient and principled score estimation: We propose methods for fitting an RKHS exponential family model that work by fitting the gradient of the log density, the score, thus avoiding the need to compute a normalization constant. While the problem is of general interest, here we focus on its embedding into the adaptive MCMC context from above. We improve the computational efficiency of an earlier solution with two novel fast approximation schemes without guarantees, and a low-rank, Nyström-like solution. The latter retains the consistency and convergence rates of the exact solution, at lower computational cost. Goodness-of-fit testing: We propose a non-parametric statistical test for goodness-of-fit. The measure is a divergence constructed via Stein's method using functions from an RKHS. We derive a statistical test, both for i.i.d. and non-i.i.d. samples, and apply the test to quantifying convergence of approximate MCMC methods, statistical model criticism, and evaluating accuracy in non-parametric score estimation.
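As one concrete instance of the goodness-of-fit idea in the last theme, the snippet below computes a V-statistic estimate of a kernel Stein discrepancy between a sample and a target known only through its score function, using a Gaussian RKHS kernel. The bandwidth, the standard-normal target and the two test samples are assumptions made for illustration; the calibration of the actual test, and its non-i.i.d. extension, are not reproduced.

    import numpy as np

    def ksd_vstat(X, score, h=1.0):
        # V-statistic estimate of the squared kernel Stein discrepancy between the sample
        # X (n x d) and the target whose score (gradient of the log density) is `score`,
        # with the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 h^2)).
        n, d = X.shape
        S = score(X)                                   # n x d matrix of target scores
        diff = X[:, None, :] - X[None, :, :]           # pairwise differences x_i - x_j
        sq = np.sum(diff ** 2, axis=-1)
        K = np.exp(-0.5 * sq / h ** 2)
        grad_x = -diff / h ** 2 * K[..., None]         # gradient of k in its first argument
        grad_y = -grad_x                               # gradient of k in its second argument
        trace = (d / h ** 2 - sq / h ** 4) * K         # trace of the mixed second derivative
        kp = ((S @ S.T) * K
              + np.einsum('id,ijd->ij', S, grad_y)
              + np.einsum('jd,ijd->ij', S, grad_x)
              + trace)
        return kp.mean()

    score_std_normal = lambda x: -x                    # score of the N(0, I) target

    rng = np.random.default_rng(5)
    good = rng.normal(size=(500, 2))                   # sample drawn from the target
    bad = rng.normal(loc=0.7, size=(500, 2))           # shifted sample
    print("KSD^2, correct sample:", ksd_vstat(good, score_std_normal))
    print("KSD^2, shifted sample:", ksd_vstat(bad, score_std_normal))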
APA, Harvard, Vancouver, ISO, and other styles
43

Maire, Florian. "Détection et classification de cibles multispectrales dans l'infrarouge." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0007/document.

Full text
Abstract:
Surveillance systems should be able to detect potential threats far ahead in order to put forward a defence strategy. In this context, detection and recognition methods making use of multispectral infrared images should cope with low-resolution signals and handle both the spectral and spatial variability of the targets. We introduce in this PhD thesis a novel statistical methodology to perform aircraft detection and classification which takes these constraints into account. We first propose an anomaly detection method designed for multispectral images, which combines a spectral likelihood measure and a level-set study of the image Mahalanobis transform. This technique makes it possible to identify images which feature an anomaly without any prior knowledge of the target. In a second step, these images are used as realizations of a statistical model in which the observations are described as random spectral and spatial deformations of prototype shapes. The model inference, and in particular the prototype shape estimation, is achieved through a novel unsupervised sequential learning algorithm designed for missing-data models. This model allows us to propose a classification algorithm based on the maximum a posteriori probability. Promising results, in detection as well as in classification, justify the growing interest surrounding the development of multispectral imaging devices. These methods have also allowed us to identify the optimal groupings of infrared spectral bands for the detection and classification of low-resolution aircraft.
APA, Harvard, Vancouver, ISO, and other styles
44

Jonnavithula, Annapoorani. "Composite system reliability evaluation using sequential Monte Carlo simulation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/nq23941.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Di, Caro Gianni. "Ant colony optimization and its application to adaptive routing in telecommunication networks." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211149.

Full text
Abstract:
In ant societies, and more generally in insect societies, the activities of the individuals, as well as of the society as a whole, are not regulated by any explicit form of centralized control. On the other hand, adaptive and robust behaviors transcending the behavioral repertoire of the single individual can easily be observed at the society level. These complex global behaviors are the result of self-organizing dynamics driven by local interactions and communications among a number of relatively simple individuals.

The simultaneous presence of these and other fascinating and unique characteristics has made ant societies an attractive and inspiring model for building new algorithms and new multi-agent systems. In the last decade, ant societies have been taken as a reference for an ever-growing body of scientific work, mostly in the fields of robotics, operations research, and telecommunications.

Among the different works inspired by ant colonies, the Ant Colony Optimization metaheuristic (ACO) is probably the most successful and popular one. The ACO metaheuristic is a multi-agent framework for combinatorial optimization whose main components are: a set of ant-like agents, the use of memory and of stochastic decisions, and strategies of collective and distributed learning.

It finds its roots in the experimental observation of a specific foraging behavior of some ant colonies that, under appropriate conditions, are able to select the shortest among a few possible paths connecting their nest to a food site. The mediator of this behavior is pheromone, a volatile chemical substance laid on the ground by the ants while walking which, in turn, affects their moving decisions according to its local intensity.

All the elements playing an essential role in the ant colony foraging behavior were understood, thoroughly reverse-engineered and put to work to solve problems of combinatorial optimization by Marco Dorigo and his co-workers at the beginning of the 1990's.

From that moment on, there has been a flourishing of new combinatorial optimization algorithms designed after the first algorithms of Dorigo et al., as well as of related scientific events.

In 1999 the ACO metaheuristic was defined by Dorigo, Di Caro and Gambardella with the purpose of providing a common framework for describing and analyzing all these algorithms inspired by the same ant colony behavior and by the same common process of reverse-engineering of this behavior. Therefore, the ACO metaheuristic was defined a posteriori, as the result of a synthesis effort based on the study of the characteristics of all these ant-inspired algorithms and on the abstraction of their common traits.

The ACO's synthesis was also motivated by the usually good performance shown by the algorithms (e.g. for several important combinatorial problems like the quadratic assignment, vehicle routing and job shop scheduling, ACO implementations have outperformed state-of-the-art algorithms).

The definition and study of the ACO metaheuristic is one of the two fundamental goals of the thesis. The other, closely related to the first, consists of the design, implementation, and testing of ACO instances for problems of adaptive routing in telecommunication networks.

This thesis is an in-depth journey through the ACO metaheuristic, during which we have (re)defined ACO and tried to get a clear understanding of its potentialities, limits, and relationships with other frameworks and with its biological background. The thesis takes into account all the developments that have followed the original 1999 definition, and provides a formal and comprehensive systematization of the subject, as well as an up-to-date and quite comprehensive review of current applications. We have also identified dynamic problems in telecommunication networks as the most appropriate domain of application for the ACO ideas. According to this understanding, in the most applied part of the thesis we have focused on problems of adaptive routing in networks and we have developed and tested four new algorithms.

Adopting an original point of view with respect to the way ACO was first defined (but maintaining full conceptual and terminological consistency), ACO is here defined and mainly discussed in terms of sequential decision processes and Monte Carlo sampling and learning.

More precisely, ACO is characterized as a policy search strategy aimed at learning the distributed parameters (called pheromone variables in accordance with the biological metaphor) of the stochastic decision policy which is used by so-called ant agents to generate solutions. Each ant represents in practice an independent sequential decision process aimed at constructing a possibly feasible solution for the optimization problem at hand by using only information local to the decision step.

Ants are repeatedly and concurrently generated in order to sample the solution set according to the current policy. The outcomes of the generated solutions are used to partially evaluate the current policy, spot the most promising search areas, and update the policy parameters in order to possibly focus the search in those promising areas while keeping a satisfactory level of overall exploration.

This way of looking at ACO has made it easier to bring out the close relationships between ACO and other well-known frameworks, like dynamic programming, Markov and non-Markov decision processes, and reinforcement learning. In turn, this has helped in reasoning about the general properties of ACO in terms of the amount of complete state information which is used by the ACO's ants to take optimized decisions and to encode in pheromone variables a memory of both the decisions that belonged to the sampled solutions and their quality.
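As a concrete illustration of the sampling-and-update scheme described in the preceding paragraphs, here is a very small Ant System-style sketch for a random travelling salesman instance: each ant builds a tour by sampling successive cities with probability proportional to pheromone^alpha times heuristic^beta, and the pheromone variables are then evaporated and reinforced in proportion to tour quality. The instance, parameter values and update rule are illustrative assumptions following the generic scheme, not an algorithm taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(6)

    n_cities = 12
    coords = rng.uniform(size=(n_cities, 2))
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                     # forbid self-loops
    eta = 1.0 / dist                                   # heuristic desirability of each edge
    tau = np.ones((n_cities, n_cities))                # pheromone variables (policy parameters)

    def build_tour(alpha=1.0, beta=2.0):
        # One ant = one sequential decision process: choose the next city with
        # probability proportional to tau^alpha * eta^beta among unvisited cities.
        tour = [rng.integers(n_cities)]
        while len(tour) < n_cities:
            cur = tour[-1]
            scores = (tau[cur] ** alpha) * (eta[cur] ** beta)
            scores[tour] = 0.0
            tour.append(rng.choice(n_cities, p=scores / scores.sum()))
        return tour

    def tour_length(tour):
        return sum(dist[tour[i], tour[(i + 1) % n_cities]] for i in range(n_cities))

    best = None
    for _ in range(200):
        tours = [build_tour() for _ in range(20)]      # a colony of 20 ants per iteration
        lengths = [tour_length(t) for t in tours]
        best = min(lengths) if best is None else min(best, min(lengths))
        tau *= 0.9                                     # pheromone evaporation
        for t, length in zip(tours, lengths):          # reinforcement proportional to quality
            for i in range(n_cities):
                a, b = t[i], t[(i + 1) % n_cities]
                tau[a, b] += 1.0 / length
                tau[b, a] += 1.0 / length
    print("best tour length found:", best)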

The ACO's biological context of inspiration is fully acknowledged in the thesis. We report, with extensive discussion, on the shortest-path behaviors of ant colonies and on the identification and analysis of the few nonlinear dynamics that are at the very core of self-organized behaviors in both ants and other societal organizations. We discuss these dynamics in the general framework of stigmergic modeling, based on asynchronous environment-mediated communication protocols, and on (pheromone) variables priming coordinated responses of a number of "cheap" and concurrent agents.

The second half of the thesis is devoted to the study of the application of ACO to problems of online routing in telecommunication networks. This class of problems has been identified in the thesis as the most appropriate for the application of the multi-agent, distributed, and adaptive nature of the ACO architecture.

Four novel ACO algorithms for problems of adaptive routing in telecommunication networks are thoroughly described. The four algorithms cover a wide spectrum of possible types of network: two of them deliver best-effort traffic in wired IP networks, one is intended for quality-of-service (QoS) traffic in ATM networks, and the fourth is for best-effort traffic in mobile ad hoc networks.

The two algorithms for wired IP networks have been extensively tested by simulation studies and compared to state-of-the-art algorithms for a wide set of reference scenarios. The algorithm for mobile ad hoc networks is still under development, but quite extensive results and comparisons with a popular state-of-the-art algorithm are reported. No results are reported for the QoS algorithm, which has not been fully tested. The observed experimental performance is excellent, especially in the case of wired IP networks: our algorithms always perform comparably to, or much better than, the state-of-the-art competitors.

In the thesis we try to understand the rationale behind the brilliant performance obtained and the good level of popularity reached by our algorithms. More generally, we discuss the reasons for the general efficacy of the ACO approach to network routing problems compared to the characteristics of more classical approaches. Moving further, we also informally define Ant Colony Routing (ACR), a multi-agent framework explicitly integrating learning components into the ACO design in order to define a general and, in a sense, futuristic architecture for autonomic network control.

Most of the material of the thesis comes from a re-elaboration of material co-authored and published in a number of books, journal papers, conference proceedings, and technical reports. The detailed list of references is provided in the Introduction.



APA, Harvard, Vancouver, ISO, and other styles
46

Mandreoli, Lorenzo. "Density based Kinetic Monte Carlo Methods." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975329111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Kai. "Monte Carlo methods in derivative modelling." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/35689/.

Full text
Abstract:
This thesis addresses issues in discretization and variance reduction methods for Monte Carlo simulation. For the discretization methods, we investigate the convergence properties of various Itô-Taylor schemes and the strong Taylor expansion (Siopacha and Teichmann [77]) for the LIBOR market model. We also provide an improvement on the strong Taylor expansion method which produces lower pricing bias. For the variance reduction methods, we have four contributions. Firstly, we formulate a general stochastic volatility model nesting many existing models in the literature. Secondly, we construct a correlation control variate for this model. Thirdly, we apply the model as well as the new control variate to pricing average rate and barrier options. Numerical results demonstrate the improvement over using old control variates alone. Last but not least, with the help of our model and control variate, we explore the variations in barrier option pricing consistent with the implied volatility surface.
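To illustrate the control variate idea in this pricing context, the sketch below prices a discretely monitored arithmetic-average Asian call under geometric Brownian motion by plain Monte Carlo, then reuses the same paths with the terminal stock price, whose risk-neutral mean is known in closed form, as a control variate. The market parameters are assumptions for the example; the correlation control variate and the stochastic volatility model constructed in the thesis are more elaborate than this.

    import numpy as np

    rng = np.random.default_rng(7)

    # Assumed market and contract parameters for the example.
    S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 50_000
    dt = T / n_steps

    # Simulate GBM paths: S_{t+dt} = S_t * exp((r - sigma^2/2) dt + sigma sqrt(dt) Z).
    Z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z
    S = S0 * np.exp(np.cumsum(increments, axis=1))

    payoff = np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)   # discounted Asian call payoff
    plain = payoff.mean()

    # Control variate: the terminal price S_T, whose mean S0 * exp(rT) is known exactly.
    control = S[:, -1]
    c = np.cov(payoff, control)
    beta = c[0, 1] / c[1, 1]
    adjusted = payoff - beta * (control - S0 * np.exp(r * T))

    half_width = lambda v: 1.96 * v.std() / np.sqrt(n_paths)
    print("plain MC estimate       :", plain, "+/-", half_width(payoff))
    print("control variate estimate:", adjusted.mean(), "+/-", half_width(adjusted))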
APA, Harvard, Vancouver, ISO, and other styles
48

Maggio, Emilio. "Monte Carlo methods for visual tracking." Thesis, Queen Mary, University of London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Crosby, Richard S. "Monte Carlo methods for lattice fields." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/77699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Junxiong. "Option Pricing Using Monte Carlo Methods." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/331.

Full text
Abstract:
This project is devoted primarily to the use of Monte Carlo methods to simulate stock prices in order to price European call options using control variates, and to the use of the binomial model to price American put options. At the end, we use this information to form a portfolio position in an Interactive Brokers paper trading account. This project was done as a part of the masters capstone course Math 573: Computational Methods of Financial Mathematics.
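For the American put part of the project, a compact Cox-Ross-Rubinstein binomial sketch is shown below; the parameter values are assumptions for illustration, and the control-variate pricing of the European call is not repeated here.

    import numpy as np

    def american_put_binomial(S0, K, r, sigma, T, n_steps=500):
        # Cox-Ross-Rubinstein tree: up/down factors and the risk-neutral probability.
        dt = T / n_steps
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)
        disc = np.exp(-r * dt)
        # Option payoffs at maturity over all terminal nodes.
        j = np.arange(n_steps + 1)
        V = np.maximum(K - S0 * u ** j * d ** (n_steps - j), 0.0)
        # Backward induction with an early-exercise check at every node.
        for step in range(n_steps - 1, -1, -1):
            j = np.arange(step + 1)
            S = S0 * u ** j * d ** (step - j)
            V = np.maximum(K - S, disc * (p * V[1:] + (1 - p) * V[:-1]))
        return V[0]

    # Assumed example parameters.
    print("American put value:", american_put_binomial(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))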
APA, Harvard, Vancouver, ISO, and other styles