
Dissertations / Theses on the topic 'Algorithme Metropolis'

Consult the top 50 dissertations / theses for your research on the topic 'Algorithme Metropolis.'


1

Merhi, Bleik Josephine. "Modeling, estimation and simulation into two statistical models : quantile regression and blind deconvolution." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2506.

Full text
Abstract:
This thesis is dedicated to the estimation of two statistical models: the simultaneous regression quantiles model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term follows an asymmetric Laplace distribution, and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the non-crossing property of quantiles. For the implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional posterior distributions. The performance and competitiveness of the method relative to alternatives are shown in simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterizing the true noise level and inverse filter, we provide a new estimation procedure that is simpler to implement than existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimators. Including a comparison with another method, we perform a simulation study that empirically demonstrates the computational performance of our estimation procedures.
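The abstract above samples unknown parameters with a Metropolis-Hastings step inside a Gibbs sweep. As a minimal sketch of that pattern (on a made-up bivariate Gaussian target, not the thesis's quantile-regression posterior), one coordinate is drawn exactly from its full conditional while the other gets a random-walk Metropolis-Hastings update:

```python
import math
import random

def metropolis_within_gibbs(n_iter=20000, step=1.0, seed=0):
    """Toy Metropolis-Hastings-within-Gibbs sampler for the target
    p(x, y) proportional to exp(-x^2/2) * exp(-(y - x)^2/2).
    y has a tractable Gaussian full conditional (exact Gibbs step);
    x is updated with a random-walk Metropolis-Hastings step."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    xs = []
    for _ in range(n_iter):
        # Gibbs step: y | x ~ N(x, 1), sampled exactly.
        y = rng.gauss(x, 1.0)

        # MH step for x | y: log density up to an additive constant.
        def log_cond(x_):
            return -0.5 * x_ ** 2 - 0.5 * (y - x_) ** 2

        prop = x + rng.gauss(0.0, step)
        if rng.random() < math.exp(min(0.0, log_cond(prop) - log_cond(x))):
            x = prop
        xs.append(x)
    return xs

samples = metropolis_within_gibbs()
mean_x = sum(samples) / len(samples)  # marginally x ~ N(0, 1), so near 0
```

The same loop structure carries over when the conditional of `x` is a genuinely intractable posterior; only `log_cond` changes.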
APA, Harvard, Vancouver, ISO, and other styles
2

Nguyen, Huu Du. "System Reliability : Inference for Common Cause Failure Model in Contexts of Missing Information." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS530.

Full text
Abstract:
The effective operation of an entire industrial system sometimes depends strongly on the reliability of its components. A failure of one of these components can lead to the failure of the whole system, with consequences that can be catastrophic, especially in the nuclear or aeronautics industries. To reduce this risk of catastrophic failure, a redundancy policy, consisting in duplicating the sensitive components in the system, is often applied. When one of these components fails, another takes over and the normal operation of the system is maintained. However, situations that lead to simultaneous failures of several components can still be observed; these are called common cause failures (CCF). Analyzing, modeling, and predicting this type of failure event is therefore an important issue and is the subject of the work presented in this thesis. We investigate several methods for the statistical analysis of CCF events. Different algorithms are proposed to estimate the parameters of the models and to make predictive inference based on various types of missing data. We treat confounded data using a Binomial Failure Rate (BFR) model. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the model parameters. We introduce the modified-Beta distribution to develop a Bayesian approach. The alpha-factor model is considered to analyze uncertainties in CCF; we suggest a new formalism to describe uncertainty and consider nested and grouped Dirichlet distributions for the Bayesian analysis. Recording of CCF cause data leads to incomplete contingency tables. For a Bayesian analysis of such tables, we propose an algorithm relying on the inverse Bayes formula (IBF) and the Metropolis-Hastings algorithm. We compare our results with those obtained with the alpha-decomposition method, recently proposed in the literature. Prediction of catastrophic events is also addressed, and mapping strategies are described to suggest upper bounds of prediction intervals using pivotal and Bayesian techniques. Recent events have highlighted the importance of the reliability of redundant systems, and we hope that our work will contribute to a better understanding and prediction of the risks of major CCF events.
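The alpha-factor analysis mentioned above places a Dirichlet prior on the vector of CCF multiplicity probabilities. With complete multinomial counts the update is conjugate, a standard fact worth fixing in code (the thesis handles the much harder missing-data case; the counts below are hypothetical):

```python
def alpha_factor_posterior(counts, prior=None):
    """Conjugate Bayesian update for the alpha-factor model: with n_k
    events involving exactly k components and a Dirichlet(a_1..a_m)
    prior on the alpha vector, the posterior is Dirichlet(a_k + n_k).
    Returns the posterior mean of each alpha_k."""
    prior = prior or [1.0] * len(counts)  # flat Dirichlet(1, ..., 1) prior
    post = [a + n for a, n in zip(prior, counts)]
    total = sum(post)
    return [p / total for p in post]

# Hypothetical counts: 30 single, 4 double, 1 triple failure events.
alphas = alpha_factor_posterior([30, 4, 1])  # posterior means: [31/38, 5/38, 2/38]
```

When the multiplicity of some events is unobserved, this closed form no longer applies, which is where the IBF and Metropolis-Hastings machinery of the thesis comes in.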
3

Martinez, Marie-José. "Modèles linéaires généralisés à effets aléatoires : contributions au choix de modèle et au modèle de mélange." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00388820.

Full text
Abstract:
This work is devoted to the study of generalized linear mixed models (GL2M). In these models, under an assumption of normally distributed random effects, the likelihood based on the marginal distribution of the response vector is, in general, not analytically tractable. In the first part of our work, we revisit various approximate estimation methods, based on approximations carried out at different levels depending on the reasoning. The second part is devoted to the construction of model selection criteria within GL2Ms. We return to two estimation methods requiring the construction of linearized models, and we propose criteria based on the marginal likelihood computed in the linearized model obtained at convergence of the estimation procedure. The third and last part deals with mixtures of GL2Ms. The mixture components are defined by GL2Ms and represent different possible states of the individuals. In the case of the exponential distribution, we propose a method for estimating the mixture parameters based on a linearization specific to this distribution. We then propose a more general method, applicable to a mixture of arbitrary GL2Ms, which relies on a Metropolis-Hastings step to build an MCEM-type algorithm. The different methods developed are tested by simulations.
4

Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.

Full text
Abstract:
The thesis contains six chapters. The first chapter is an introduction to linear regression and to the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator; the convergence statistics of this algorithm are also given in this chapter, using entropy and the Pitman-Yor estimator. Chapter 3 is devoted to the comparison of quasi-Monte Carlo and Monte Carlo methods in numerical computations of the Bayesian Lasso. This comparison shows that Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso and expresses it in terms of the incomplete Gamma function. This allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limiting law of a multivariate stochastic differential equation. This allowed us to compute the Bayesian Lasso using the semi-implicit and explicit Euler schemes together with Monte Carlo, multilevel Monte Carlo (MLMC), and the Metropolis-Hastings algorithm. The comparison of computational costs shows that the pair (semi-implicit Euler scheme, MLMC) outperforms the other (scheme, method) pairs. Finally, in Chapter 6, we establish the rate of convergence of the Bayesian Lasso towards the Lasso when the signal-to-noise ratio is constant and the noise tends to 0. This allowed us to give new criteria for the convergence of the Metropolis-Hastings algorithm.
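FISTA, named in the abstract as the workhorse for the Lasso estimator, is short enough to sketch in full. The problem instance below (a 3x2 design matrix) is made up for illustration; the step size uses the conservative bound 1/trace(AᵀA) ≤ 1/λ_max(AᵀA):

```python
def soft_threshold(v, tau):
    """Proximal operator of tau * l1-norm (component-wise soft-thresholding)."""
    return [max(abs(vi) - tau, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 (dense lists,
    no external dependencies; a sketch, not the thesis's implementation)."""
    m, n = len(A), len(A[0])
    L = sum(A[i][j] ** 2 for i in range(m) for j in range(n))  # >= lambda_max(A^T A)
    t = 1.0 / L
    x = [0.0] * n
    y = x[:]
    s = 1.0
    for _ in range(n_iter):
        # Gradient of the smooth part at the momentum point y: A^T (Ay - b)
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = soft_threshold([y[j] - t * g[j] for j in range(n)], t * lam)
        # Nesterov momentum update
        s_new = (1.0 + (1.0 + 4.0 * s * s) ** 0.5) / 2.0
        y = [x_new[j] + (s - 1.0) / s_new * (x_new[j] - x[j]) for j in range(n)]
        x, s = x_new, s_new
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = fista_lasso(A, b, lam=0.1)  # KKT solution here is (0.95, 0): x2 is shrunk to zero
```

The soft-thresholding step is exactly what produces the sparse solutions that distinguish the Lasso from ridge regression.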
5

Mainguy, Thomas. "Processus de substitution markoviens : un modèle statistique pour la linguistique." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066354/document.

Full text
Abstract:
This thesis proposes a new approach to natural language processing. Rather than trying to estimate directly the probability distribution of a random sentence, we detect syntactic structures in the language that can be used to modify and create new sentences from an initial sample. The study of syntactic structures is done using Markov substitute sets: sets of strings that can be freely substituted in any sentence without affecting the overall distribution. These sets define the notion of Markov substitute processes, modeling the conditional independence of certain substrings (given by the sets) with respect to their context. This point of view splits language analysis into two parts: a model selection stage, where Markov substitute sets are selected, and a parameter estimation stage, where the actual frequencies for each set are estimated. We show that these substitute processes form exponential families of distributions when the language structure (the Markov substitute sets) is fixed. When the language structure is unknown, we propose methods to identify Markov substitute sets from a statistical sample and to estimate the parameters of the distribution. Markov substitute sets show some connections with context-free grammars, which can be used to aid the analysis. We then build invariant dynamics for Markov substitute processes; among other things, they can be used to compute the maximum likelihood estimate effectively. Indeed, Markov substitute models can be seen as the thermodynamic limit of the invariant measure of crossing-over dynamics.
6

Nguyen, Quoc Thong. "Modélisation probabiliste d’impression à l’échelle micrométrique." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10039/document.

Full text
Abstract:
We develop probabilistic models of printing at the microscopic scale. We study the randomness of the shape of the dots that compose the prints; the proposed models could later be exploited in various applications, including the authentication of printed documents. An analysis of printing on various papers and with various printers was conducted. The study shows a large variety of dot shapes that depends on the printing technology and on the paper. The proposed model accounts both for the gray-level distribution and for the spatial distribution of ink on the paper. For the gray levels, the models of the inked and blank areas are obtained by selecting distributions from a set of laws with shapes close to the histograms, using the Kolmogorov-Smirnov criterion. The model of the spatial distribution of ink is binary. The first model is a field of independent, non-stationary Bernoulli variables whose parameters form a generalized Gaussian kernel. A second spatial model of the ink particles additionally accounts for pixel dependence through a non-stationary Markov model. Two iterative estimation methods are developed: a quasi-Newton algorithm that approaches the maximum likelihood estimator, and a Metropolis-Hastings-within-Gibbs algorithm that approximates the minimum mean square error estimator. The performance of the estimators is evaluated and compared on simulated images. The accuracy of the models is analyzed on sets of microscopic-scale printings obtained from various printers. Results show the good behavior of the estimators and the consistency of the models.
7

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality.

We first consider the pseudo-marginal framework. This extends the Metropolis-Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show 'sticking' behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches.

As a second contribution we consider inference in probabilistic models defined via a generative process, with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are 'close' rather than exactly equal to the observed data. Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations, often requiring further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in the Markov chain state allows the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values when standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to the observed data while maintaining computational tractability.

As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and to allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
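The pseudo-marginal idea discussed in this abstract can be demonstrated on a toy problem: replace the exact target density with a noisy but unbiased estimate, recycle the current estimate between iterations, and the chain still targets the exact distribution. The target and noise model below are assumptions chosen so the exactness is checkable:

```python
import math
import random

def pseudo_marginal_mh(n_iter=30000, noise_sd=0.3, step=1.0, seed=1):
    """Pseudo-marginal Metropolis-Hastings sketch. The N(0, 1) target
    density is only accessed through an unbiased estimator: the exact
    density times an independent log-normal weight with mean one.
    Despite the noise, the chain targets N(0, 1) exactly."""
    rng = random.Random(seed)

    def density_estimate(theta):
        # E[exp(z * sd - sd^2 / 2)] = 1 for z ~ N(0, 1), so this is unbiased.
        w = math.exp(rng.gauss(0.0, noise_sd) - 0.5 * noise_sd ** 2)
        return math.exp(-0.5 * theta * theta) * w

    theta = 0.0
    est = density_estimate(theta)  # recycling this estimate is key to exactness
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        est_prop = density_estimate(prop)
        if rng.random() * est < est_prop:   # accept with prob min(1, est_prop/est)
            theta, est = prop, est_prop
        samples.append(theta)
    return samples

samples = pseudo_marginal_mh()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Increasing `noise_sd` makes the 'sticking' behaviour described in the abstract visible: a lucky overestimate of the density at the current point causes long runs of rejections.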
8

Guidi, Henrique Santos. "Estudo do resfriamento em um sistema com múltiplos estados fundamentais." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-23042008-215856/.

Full text
Abstract:
We study a system of coupled two-level systems as a model that imitates the behavior of supercooled liquids which become structural glasses under cooling. In equilibrium the model shows a liquid phase and a crystalline phase with many ground states. The model is defined on a square lattice, and a stochastic Ising variable is associated with each site. The feature that makes this model particularly interesting is that it displays durable metastable states which can vanish within the time available for numerical simulations. In order to imitate the glass-forming process, we perform Monte Carlo simulations at constant cooling rates. We also present simulations for quenches to temperatures below the melting temperature.
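The "Monte Carlo simulation at constant cooling rate" protocol described above can be sketched on a plain 2-D nearest-neighbour Ising model (a stand-in for the coupled two-level model of the thesis, which has a richer ground-state structure; the loop has the same shape):

```python
import math
import random

def cool_ising(L=16, t_start=5.0, t_end=0.5, sweeps=400, seed=2):
    """Metropolis simulation of a 2-D nearest-neighbour Ising model on an
    L x L periodic lattice, cooled linearly (constant cooling rate) from
    t_start to t_end. Returns initial and final energies."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def neighbour_sum(i, j):
        return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

    def energy():
        # Each bond counted once (right and down neighbours only).
        return -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                    for i in range(L) for j in range(L))

    e_initial = energy()
    for sweep in range(sweeps):
        t = t_start + (t_end - t_start) * sweep / (sweeps - 1)  # linear schedule
        for _ in range(L * L):  # one Monte Carlo sweep
            i, j = rng.randrange(L), rng.randrange(L)
            d_e = 2 * spins[i][j] * neighbour_sum(i, j)
            if d_e <= 0 or rng.random() < math.exp(-d_e / t):
                spins[i][j] = -spins[i][j]
    return e_initial, energy()

e_hot, e_cold = cool_ising()  # cooling should lower the energy markedly
```

Slower cooling rates (more sweeps per unit of temperature) give the system more time to escape metastable configurations, which is exactly the effect the thesis probes in its glass-forming model.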
9

Taylor, Katarzyna B. "Exact algorithms for simulation of diffusions with discontinuous drift and robust curvature metropolis-adjusted Langevin algorithms." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/72262/.

Full text
Abstract:
In our work we propose new exact algorithms for the simulation of diffusions with discontinuous drift, together with new methodology for simulating Brownian motion jointly with its local time. In the second part of the thesis we introduce a Metropolis-adjusted Langevin algorithm which uses local geometry, and we prove geometric ergodicity in the case of benchmark distributions with light tails.
10

Tremblay, Marie. "Estimation des paramètres des modèles de culture : application au modèle STICS Tournesol." Toulouse 3, 2004. http://www.theses.fr/2004TOU30020.

Full text
11

Kosowski, Adrian. "Time and Space-Efficient Algorithms for Mobile Agents in an Anonymous Network." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00867765.

Full text
Abstract:
Computing with mobile agents is rapidly becoming a topic of mainstream research in the theory of distributed computing. The main research questions undertaken in this study concern the feasibility of solving fundamental tasks in an anonymous network, subject to limitations on the resources available to the agent. The considered challenges include: exploring a graph by means of an agent with limited memory, discovery of the network topology, and attempting to meet with another agent in another network (rendezvous). The constraints imposed on the agent include the number of moves which the agent is allowed to perform in the network, the amount of state memory available to the agent, the ability of the agent to communicate with other agents, as well as its a priori knowledge of the network topology or of global parameters.
12

Þorgeirsson, Sverrir. "Bayesian parameter estimation in Ecolego using an adaptive Metropolis-Hastings-within-Gibbs algorithm." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-304259.

Full text
Abstract:
Ecolego is scientific software that can be used to model diverse systems within fields such as radioecology and pharmacokinetics. The purpose of this research is to develop an algorithm for estimating the probability density functions of unknown parameters of Ecolego models. In order to do so, a general-purpose adaptive Metropolis-Hastings-within-Gibbs algorithm is developed and tested on some examples of Ecolego models. The algorithm works adequately on those models, which indicates that the algorithm could be integrated successfully into future versions of Ecolego.
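The adaptive Metropolis-Hastings-within-Gibbs algorithm of this abstract can be illustrated with a minimal sketch in the spirit of Roberts and Rosenthal's adaptive scaling; this is an assumed illustration, not Ecolego's actual implementation. Each coordinate keeps its own random-walk proposal scale, nudged with a diminishing step so its acceptance rate approaches roughly 0.44:

```python
import math
import random

def adaptive_mwg(n_iter=20000, target_acc=0.44, seed=3):
    """Adaptive Metropolis-within-Gibbs on a toy target of independent
    N(0, 1) and N(0, 10^2) coordinates. The per-coordinate proposal
    scales adapt with a diminishing step, so the second coordinate's
    scale grows to match its much wider marginal."""
    rng = random.Random(seed)
    target_sd = [1.0, 10.0]
    state = [0.0, 0.0]
    prop_sd = [1.0, 1.0]

    def log_p(x):
        return sum(-0.5 * (xi / si) ** 2 for xi, si in zip(x, target_sd))

    samples = []
    for it in range(1, n_iter + 1):
        for k in range(len(state)):
            prop = state[:]
            prop[k] += rng.gauss(0.0, prop_sd[k])
            accepted = rng.random() < math.exp(min(0.0, log_p(prop) - log_p(state)))
            if accepted:
                state = prop
            # Diminishing adaptation: scale up on accept, down on reject.
            delta = min(0.05, it ** -0.5)
            prop_sd[k] *= math.exp(delta * ((1.0 if accepted else 0.0) - target_acc))
        samples.append(state[:])
    return samples, prop_sd

samples, prop_sd = adaptive_mwg()
```

Because the adaptation step `delta` shrinks over time, the algorithm satisfies the diminishing-adaptation condition that keeps adaptive chains ergodic.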
13

Bachouch, Achref. "Numerical Computations for Backward Doubly Stochastic Differential Equations and Nonlinear Stochastic PDEs." Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1034/document.

Full text
Abstract:
The purpose of this thesis is to study a numerical scheme for approximating the solutions of backward doubly stochastic differential equations (BDSDEs). In the last two decades, several methods have been proposed for the numerical solution of standard backward stochastic differential equations. In this thesis, we propose an extension of one of these methods to the doubly stochastic framework. Our numerical method allows us to tackle a large class of nonlinear stochastic partial differential equations (SPDEs), thanks to their probabilistic representation in terms of BDSDEs. In the last part, we study a new particle method in the context of neutron shielding studies.
14

Rizzi, Leandro Gutierrez. "Simulações numéricas de Monte Carlo aplicadas no estudo das transições de fase do modelo de Ising dipolar bidimensional." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/59/59135/tde-23052009-134513/.

Full text
Abstract:
O modelo de Ising dipolar bidimensional inclui, além da interação ferromagnética entre os primeiros vizinhos, interações de longo alcance entre os momentos de dipolo magnético dos spins. A presença da interação dipolar muda completamente o sistema, apresentando um rico diagrama de fase, cujas características têm originado inúmeros estudos na literatura. Além disso, a possibilidade de explicar fenômenos observados em filmes magnéticos ultrafinos, os quais possuem diversas aplicações em áreas tecnológicas, também motiva o estudo deste modelo. O estado fundamental ferromagnético do modelo de Ising puro é alterado para uma série de fases do tipo faixas, as quais consistem em domínios ferromagnéticos de largura $h$ com magnetizações opostas. A largura das faixas depende da razão $\delta$ das intensidades dos acoplamentos ferromagnético e dipolar. Através de simulações de Monte Carlo e técnicas de repesagem em histogramas múltiplos identificamos as temperaturas críticas de tamanho finito para as transições de fase quando $\delta=2$, o que corresponde a $h=2$. Calculamos o calor específico e a susceptibilidade do parâmetro de ordem, no intervalo de temperaturas onde as transições são observadas, para diferentes tamanhos de rede. As técnicas de repesagem permitem-nos explorar e identificar máximos distintos nessas funções da temperatura e, desse modo, estimar as temperaturas críticas de tamanho finito com grande precisão. Apresentamos evidências numéricas da existência de uma fase nemática de Ising para tamanhos grandes de rede. Em nossas simulações, observamos esta fase para tamanhos de rede a partir de $L=48$. Para verificar o quanto a interação dipolar de longo alcance afeta as estimativas físicas, nós calculamos o tempo de autocorrelação integrado nas séries temporais da energia. Inferimos daí quão severo é o critical slowing down (decaimento lento crítico) para esse sistema próximo às transições de fase termodinâmicas. 
Os resultados obtidos utilizando um algoritmo de atualização local foram comparados com os resultados obtidos utilizando o algoritmo multicanônico.<br>The two-dimensional spin model with nearest-neighbor ferromagnetic interaction and long-range dipolar interactions exhibits a rich phase diagram, whose characteristics have been explored by several studies in the recent literature. Furthermore, the possibility of explaining observed phenomena in ultrathin magnetic films, which have many technological applications, also motivates the study of this model. The presence of the dipolar interaction term changes the ferromagnetic ground state expected for the pure Ising model to a series of striped phases, which consist of ferromagnetic domains of width $h$ with opposite magnetization. The width of the stripes depends on the ratio $\delta$ of the ferromagnetic and dipolar couplings. Monte Carlo simulations and multiple-histogram reweighting techniques allow us to identify the finite-size critical temperatures of the phase transitions when $\delta=2$, which corresponds to $h=2$. We calculate, for different lattice sizes, the specific heat and the susceptibility of the order parameter around the transition temperatures by means of reweighting techniques. This allows us to identify in these observables, as functions of temperature, the distinct maxima and thereby to estimate the finite-size critical temperatures with high precision. We present numerical evidence of the existence of an Ising nematic phase for large lattice sizes. Our results show that simulations need to be performed for lattice sizes at least as large as $L=48$ to clearly observe the Ising nematic phase. To assess how the long-range dipolar interaction may affect physical estimates we also evaluate the integrated autocorrelation time in energy time series. This allows us to infer how severe the critical slowing down is for this system with long-range interactions near the thermodynamic phase transitions. 
The results obtained using a local update algorithm are compared with results obtained using the multicanonical algorithm.
APA, Harvard, Vancouver, ISO, and other styles
15

Michel, Manon. "Irreversible Markov chains by the factorized Metropolis filter : algorithms and applications in particle systems and spin models." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE039/document.

Full text
Abstract:
Cette thèse porte sur le développement et l'application en physique statistique d'un nouveau paradigme pour les méthodes sans rejet de Monte-Carlo par chaînes de Markov irréversibles, grâce à la mise en œuvre du filtre factorisé de Metropolis et du concept de lifting. Les deux premiers chapitres présentent la méthode de Monte-Carlo et ses différentes applications à des problèmes de physique statistique. Une des principales limites de ces méthodes se rencontre dans le voisinage des transitions de phase, où des phénomènes de ralentissement dynamique entravent fortement la thermalisation des systèmes. Le troisième chapitre présente la nouvelle classe des algorithmes de Metropolis factorisés et irréversibles. Se fondant sur le concept de lifting des chaînes de Markov, le filtre factorisé de Metropolis permet de décomposer un potentiel multidimensionnel en plusieurs autres unidimensionnels. De là, il est possible de définir un algorithme sans rejet de Monte-Carlo par chaînes de Markov irréversibles. Le quatrième chapitre examine les performances de ce nouvel algorithme dans une grande variété de systèmes. Des accélérations du temps de thermalisation sont observées dans des systèmes bidimensionnels de particules molles, des systèmes bidimensionnels de spins XY ferromagnétiques et des systèmes tridimensionnels de verres de spins XY. Finalement, une réduction importante du ralentissement critique est exposée pour un système tridimensionnel de spins Heisenberg ferromagnétiques<br>This thesis deals with the development and application in statistical physics of a general framework for irreversible and rejection-free Markov-chain Monte Carlo methods, through the implementation of the factorized Metropolis filter and the lifting concept. The first two chapters present the Markov-chain Monte Carlo method and its different implementations in statistical physics. 
One of the main limitations of Markov-chain Monte Carlo methods arises around phase transitions, where phenomena of dynamical slowing down greatly impede the thermalization of the system. The third chapter introduces the new class of irreversible factorized Metropolis algorithms. Building on the concept of lifting of Markov chains, the factorized Metropolis filter makes it possible to decompose a multidimensional potential into several one-dimensional ones. From there, it is possible to define a rejection-free and completely irreversible Markov-chain Monte Carlo algorithm. The fourth chapter reviews the performance of the irreversible factorized algorithm in a wide variety of systems. Clear accelerations of thermalization are observed in two-dimensional soft-particle systems, two-dimensional ferromagnetic XY spin systems and three-dimensional XY spin glasses. Finally, an important reduction of the critical slowing down is exhibited in three-dimensional ferromagnetic Heisenberg spin systems.
APA, Harvard, Vancouver, ISO, and other styles
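The factorized Metropolis filter described in the thesis above replaces the single Metropolis test on the total energy change by one independent test per interaction factor: a move is accepted only if every one-dimensional factor accepts it, so the acceptance probability is the product of the per-factor Metropolis probabilities. A minimal illustrative sketch (function names are ours, not the thesis code; unit temperature by default):

```python
import math
import random

def metropolis_accept(delta_e, beta=1.0):
    """Standard Metropolis filter on the total energy change."""
    return random.random() < min(1.0, math.exp(-beta * delta_e))

def factorized_accept(delta_es, beta=1.0):
    """Factorized Metropolis filter: one independent test per factor.

    The move is accepted only if every one-dimensional factor accepts,
    i.e. with probability prod_f min(1, exp(-beta * dE_f)).
    """
    return all(random.random() < min(1.0, math.exp(-beta * de))
               for de in delta_es)
```

When every factor's energy change is non-positive the move is always accepted, as in the single-filter case; it is this product form that enables the rejection-free, lifted implementations the thesis builds on.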
16

VASCONCELOS, Josimar Mendes de. "Equações simultâneas no contexto clássico e bayesiano: uma abordagem à produção de soja." Universidade Federal Rural de Pernambuco, 2011. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5012.

Full text
Abstract:
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq<br>In recent years, the number of researchers and scientific studies on the planting, production and value of soybean in grain in Brazil has grown. In view of this, this dissertation seeks to analyze the data and fit models that satisfactorily explain the observed variability of the quantity produced and of the value of soybean-in-grain production in Brazil, within the scope of the study. These analyses use classical and Bayesian inference, in the context of simultaneous equations, through the tool of two-stage least squares. Classical inference uses the two-stage least squares estimator. Bayesian inference employs the Markov Chain Monte Carlo method with the Gibbs and Metropolis-Hastings algorithms by means of the simultaneous-equations technique. The study considers the variables harvested area, quantity produced, value of production and gross domestic product; the model was fitted with quantity produced as the response variable and then with value of production as the response variable, in order finally to make the corrections and obtain the final result, in both the classical and the Bayesian approach. 
Based on the standard deviations, t-test statistics, and the normalized Akaike and Schwarz information criteria, the good performance of the Markov Chain Monte Carlo method with the Gibbs algorithm stands out; it is also an efficient method for this modeling and easy to implement in the statistical software R and WinBUGS, since ready-made libraries are available to run the method. We therefore suggest using the Markov Chain Monte Carlo method via the Gibbs sampler to estimate soybean-in-grain production.<br>Nos últimos anos tem aumentado a quantidade de pesquisadores e pesquisas científicas na plantação, produção e valor de soja no Brasil, em grão. Diante disso, a presente dissertação busca analisar os dados e ajustar modelos que expliquem, de forma satisfatória, a variabilidade observada da quantidade produzida e valor da produção de soja em grão no Brasil, no campo do estudo. Para o desenvolvimento dessas análises é utilizada a inferência clássica e bayesiana, no contexto de equações simultâneas através da ferramenta de mínimos quadrados em dois estágios. Na inferência clássica utiliza-se o estimador de mínimos quadrados em dois estágios. Na inferência bayesiana trabalhou-se o método de Monte Carlo via Cadeia de Markov com os algoritmos de Gibbs e Metropolis-Hastings por meio da técnica de equações simultâneas. No estudo, consideram-se as variáveis área colhida, quantidade produzida, valor da produção e produto interno bruto, no qual ajustou-se o modelo com a variável resposta quantidade produzida e depois a variável resposta valor da produção para finalmente fazer as correções e obter o resultado final, no método clássico e bayesiano. 
Através dos desvios padrão, estatística do teste-t, critérios de informação Akaike e Schwarz normalizados destaca-se a boa aplicação do método de Monte Carlo via Cadeia de Markov pelo algoritmo de Gibbs; também é um método eficiente na modelagem e de fácil implementação nos softwares estatísticos R & WinBUGS, pois já existem bibliotecas prontas para compilar o método. Portanto, sugere-se trabalhar o método de Monte Carlo via cadeia de Markov através do método de Gibbs para estimar a produção de soja em grão, no Brasil.
APA, Harvard, Vancouver, ISO, and other styles
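The two-stage least squares estimator used in the classical part of the dissertation above can be sketched generically: stage one projects the endogenous regressor onto the instruments, stage two runs ordinary least squares on the fitted values. The data and variable names below are synthetic illustrations, not the soybean series:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z):
    """2SLS for a single endogenous regressor with instrument z.

    Stage 1: OLS of x on [1, z] gives fitted values x_hat.
    Stage 2: OLS of y on [1, x_hat] gives the structural coefficients.
    """
    Z = np.column_stack([np.ones(len(z)), z])
    X = np.column_stack([np.ones(len(x_endog)), x_endog])
    # Stage 1: projection of X onto the column space of the instruments
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the fitted regressors
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta  # [intercept, slope]
```

Because the instrument is uncorrelated with the structural error, the stage-two slope is consistent even when the regressor itself is endogenous, which is the situation OLS alone cannot handle.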
17

Mendis, Ruchini Dilinika. "Sensitivity Analyses for Tumor Growth Models." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3113.

Full text
Abstract:
This study consists of the sensitivity analysis for two previously developed tumor growth models: the Gompertz model and the quotient model. The two models are considered in both continuous and discrete time. In continuous time, model parameters are estimated using the least-squares method, while in discrete time, the partial-sum method is used. Moreover, frequentist and Bayesian methods are used to construct confidence intervals and credible intervals for the model parameters. We apply Markov Chain Monte Carlo (MCMC) techniques with the Random Walk Metropolis algorithm with a non-informative prior and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm to construct the parameters' posterior distributions and then obtain credible intervals.
APA, Harvard, Vancouver, ISO, and other styles
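A Random Walk Metropolis sampler of the kind applied above can be written in a few lines; the Gaussian target and step size below are illustrative (DRAM adds delayed rejection and covariance adaptation on top of this same core loop):

```python
import math
import random

def random_walk_metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Random Walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Symmetric proposal, so the Hastings correction cancels
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Example: sample a standard normal posterior (log-density up to a constant)
chain = random_walk_metropolis(lambda t: -0.5 * t * t, 0.0, 20000, step=2.4)
```

Discarding an initial burn-in segment of the chain and summarizing the remainder yields the posterior estimates and credible intervals.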
18

Cercone, Maria Grazia. "Origini e sviluppi del calcolo delle probabilità ed alcune applicazioni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13500/.

Full text
Abstract:
This thesis traces the evolution of probability theory through the centuries, from its origins up to the 20th century, which was marked by the introduction of the theory of Markov chains; by supplying important mathematical concepts, that theory later found application in quite different fields. The first chapter is entirely devoted to the historical development of probability: it starts with the ancient civilizations, where the idea of probability arose around questions of everyday life and gambling, and arrives at the 20th century, in which the three schools of thought took shape: frequentist, subjectivist and axiomatic. The second chapter examines the figure of A. A. Markov and his important contribution to probability theory through the introduction of the theory of Markov chains. Important applications of that theory are then discussed, namely the Metropolis-Hastings algorithm, Gibbs sampling and the simulated annealing procedure. The third chapter analyzes a further application of Markov theory: the link-analysis ranking algorithm known as PageRank, which underlies the Google search engine.
APA, Harvard, Vancouver, ISO, and other styles
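The PageRank algorithm discussed in the third chapter of the thesis above is itself a Markov-chain computation: the ranks are the stationary distribution of a random surfer who follows links with a damping probability and teleports otherwise. A toy power-iteration sketch (graph and damping value are illustrative):

```python
def pagerank(links, damping=0.85, tol=1e-10):
    """Power iteration for PageRank on a dict {node: [outgoing links]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        # Teleportation mass, shared uniformly
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: redistribute its mass uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        if max(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new
```

On a toy three-page graph such as `{'a': ['b'], 'b': ['a', 'c'], 'c': ['a']}` the ranks sum to one and page `a`, which receives the most link mass, ranks highest.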
19

Potter, Christopher C. J. "Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/249.

Full text
Abstract:
Markov Chain Monte Carlo (MCMC) is a technique for sampling from a target probability distribution, and has risen in importance as faster computing hardware has made possible the exploration of hitherto difficult distributions. Unfortunately, this powerful technique is often misapplied by poor selection of the transition kernel for the Markov chain that is generated by the simulation. Some kernels are used without being checked against the convergence requirements for MCMC (total balance and ergodicity), but in this work we prove the existence of a simple proxy for total balance that is not as demanding as detailed balance, the most widely used standard. We show, for discrete-state MCMC, that if a transition kernel is equivalent when it is "reversed" and applied to data which is also "reversed", then it satisfies total balance. We go on to prove that the sequential single-variable update Metropolis kernel, where variables are simply updated in order, does indeed satisfy total balance for many discrete target distributions, such as the Ising model with uniform exchange constant. Also, two well-known papers by Gelman, Roberts, and Gilks (GRG) [1, 2] have proposed the application of the results of an interesting mathematical proof to the realistic optimization of Markov Chain Monte Carlo computer simulations. In particular, they advocated tuning the simulation parameters to select an acceptance ratio of 0.234. In this work, we point out that although the proof is valid, the application of its result to practical computations is not advisable, as the simulation algorithm considered in the proof is so inefficient that it produces very poor results under all circumstances. The algorithm used by Gelman, Roberts, and Gilks is also shown to introduce subtle time-dependent correlations into the simulation of intrinsically independent variables. These correlations are of particular interest since they will be present in all simulations that use multi-dimensional MCMC moves.
APA, Harvard, Vancouver, ISO, and other styles
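The sequential single-variable update Metropolis kernel examined above visits the variables in a fixed order rather than choosing them at random. A minimal sketch for a 1D Ising chain with uniform coupling and periodic boundaries (parameters and function names are illustrative, not the dissertation's code):

```python
import math
import random

def sequential_sweep(spins, beta, J=1.0, rng=random):
    """One fixed-order sweep of single-spin Metropolis updates on a
    1D Ising chain with periodic boundaries and uniform coupling J."""
    n = len(spins)
    for i in range(n):  # deterministic scan order, not random selection
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        delta_e = 2.0 * J * spins[i] * (left + right)  # energy cost of flipping
        if delta_e <= 0 or rng.random() < math.exp(-beta * delta_e):
            spins[i] = -spins[i]
    return spins
```

Each single-spin test satisfies detailed balance, but the fixed scan order makes the composite sweep kernel non-reversible, which is exactly why total balance, rather than detailed balance, is the relevant criterion.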
20

Volfson, Alexander. "Exploring the optimal Transformation for Volatility." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/472.

Full text
Abstract:
This paper explores the fit of a stochastic volatility model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution, to the continuously compounded daily returns of the Australian stock index. Estimation was difficult, and over-fitting likely, because the model has more variables than data points. We developed a revised model that held a couple of these variables fixed and then, further, a model which reduced the number of variables significantly by grouping trading days. A Metropolis-Hastings algorithm was used to simulate the joint density and derive estimated volatilities. Though autocorrelations were higher with a smaller Box-Cox transformation parameter, the fit of the distribution was much better.
APA, Harvard, Vancouver, ISO, and other styles
21

Torrence, Robert Billington. "Bayesian Parameter Estimation on Three Models of Influenza." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/77611.

Full text
Abstract:
Mathematical models of viral infections have been informing virology research for years. Estimating parameter values for these models can lead to an understanding of biological quantities. This has been successful in HIV modeling for the estimation of values such as the lifetime of infected CD8 T-cells. However, estimating these values is notoriously difficult, especially for highly complex models. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the underlying densities of the parameters (assumed to be continuous random variables) for three models of influenza. We discuss the advantages and limitations of parameter estimation using these methods. The data and influenza models used for this project are from the lab of Dr. Amber Smith in Memphis, Tennessee.<br>Master of Science
APA, Harvard, Vancouver, ISO, and other styles
22

BARROS, Kleber Napoleão Nunes de Oliveira. "Abordagem clássica e Bayesiana em modelos simétricos transformados aplicados à estimativa de crescimento em altura de Eucalyptus urophylla no Polo Gesseiro do Araripe-PE." Universidade Federal Rural de Pernambuco, 2010. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5142.

Full text
Abstract:
This work presents the nonlinear Chapman-Richards growth model with errors following the new class of transformed symmetric models, together with Bayesian inference for the parameters. The objective was to apply this structure, via the Metropolis-Hastings algorithm, in order to select the equation that best estimated the heights of Eucalyptus urophylla clones from an experiment established at the Agronomic Institute of Pernambuco (IPA) in the city of Araripina. The Gypsum Pole of Araripe is an industrial zone, located in the high sertão of Pernambuco, which consumes a large amount of wood from the native vegetation (caatinga) for the calcination of gypsum. In this scenario, there is great need for an economically and environmentally feasible solution that makes it possible to minimize the pressure on the native flora. The genus Eucalyptus presents itself as an alternative, owing to its rapid development and versatility. Height has proven to be an important factor in the prognosis of productivity and the selection of the best-adapted clones. One of the main growth curves is the Chapman-Richards model with normally distributed errors. However, some alternatives have been proposed in order to reduce the influence of atypical observations under this model. The data were taken from a 72-month-old plantation. Inference and diagnostics were carried out for the transformed and untransformed models with several symmetric distributions. 
After selecting the best equation, convergence plots for the parameters were shown, along with others demonstrating the fit to the data of the transformed symmetric Student-t model with 5 degrees of freedom, using Bayesian inference for the parameters.<br>É abordado neste trabalho o modelo de crescimento não linear de Chapman-Richards com distribuição dos erros seguindo a nova classe de modelos simétricos transformados e inferência Bayesiana para os parâmetros. O objetivo foi aplicar essa estrutura, via algoritmo de Metropolis-Hastings, afim de selecionar a equação que melhor estimasse as alturas de clones de Eucalyptus urophilla provenientes de experimento implantado no Instituto Agronômico de Pernambuco (IPA), na cidade de Araripina. O Polo Gesseiro do Araripe é uma zona industrial, situada no alto sertão pernambucano, que consume grande quantidade de lenha proveniente da vegetação nativa (caatinga) para calcinação da gipsita. Nesse cenário, há grande necessidade de uma solução, econômica e ambientalmente, viável que possibilite uma minimização da pressão sobre a flora nativa. O gênero Eucalyptus se apresenta como alternativa, pelo seu rápido desenvolvimento e versatilidade. A altura tem se revelado fator importante na prognose de produtividade e seleção de clones melhores adaptados. Uma das principais curvas de crescimento, é o modelo de Chapman-Richards com distribuição normal para os erros. No entanto, algumas alternativas tem sido propostas afim de reduzir a influência de observações atípicas geradas por este modelo. Os dados foram retirados de uma plantação, com 72 meses. Foram realizadas as inferências e diagnósticos para modelo transformado e não transformado com diversas distribuições simétricas. Após a seleção da melhor equação, foram mostrados alguns gráficos da convergência dos parâmetros e outros que comprovam o ajuste aos dados do modelo simétrico transformado t de Student com 5 graus de liberdade utilizando inferência Bayesiana nos parâmetros.
APA, Harvard, Vancouver, ISO, and other styles
23

Gendre, Victor Hugues. "Predicting short term exchange rates with Bayesian autoregressive state space models: an investigation of the Metropolis Hastings algorithm forecasting efficiency." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437399395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Almeida, Alexandre Barbosa de. "Predição de estrutura terciária de proteínas com técnicas multiobjetivo no algoritmo de monte carlo." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/5872.

Full text
Abstract:
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq<br>Proteins are vital for the biological functions of all living beings on Earth. However, they only have an active biological function in their native structure, which is a state of minimum energy. Therefore, protein functionality depends almost exclusively on the size and shape of its native conformation. However, less than 1% of all known proteins in the world have their structure solved. Accordingly, various methods for determining protein structures have been proposed, based on either in vitro or in silico experiments. This work proposes a new in silico method called Monte Carlo with Dominance, which addresses the problem of protein structure prediction from an ab initio and multi-objective optimization point of view, considering both energetic and structural aspects of the protein. 
The software GROMACS was used for the ab initio treatment, performing Molecular Dynamics simulations, while the framework ProtPred-GROMACS (2PG) was used for the multi-objective optimization problem, employing genetic algorithm techniques as heuristic solutions. Monte Carlo with Dominance is, in this sense, a variant of the traditional Metropolis Monte Carlo method. The aim is to check whether protein tertiary structure prediction improves when structural aspects are taken into account. The energy criterion of Metropolis and the energy and structural criteria of Dominance were compared using RMSD calculations between the predicted and native structures. Monte Carlo with Dominance obtained better solutions for two of the three proteins analyzed, reaching a difference of about 53% relative to the Metropolis prediction.<br>As proteínas são vitais para as funções biológicas de todos os seres na Terra. Entretanto, somente apresentam função biológica ativa quando encontram-se em sua estrutura nativa, que é o seu estado de mínima energia. Portanto, a funcionalidade de uma proteína depende, quase que exclusivamente, do tamanho e da forma de sua conformação nativa. Porém, de todas as proteínas conhecidas no mundo, menos de 1% tem a sua estrutura resolvida. Deste modo, vários métodos de determinação de estruturas de proteínas têm sido propostos, tanto para experimentos in vitro quanto in silico. Este trabalho propõe um novo método in silico denominado Monte Carlo com Dominância, o qual aborda o problema da predição de estrutura de proteínas sob o ponto de vista ab initio e de otimização multiobjetivo, considerando, simultaneamente, os aspectos energéticos e estruturais da proteína. 
Para o tratamento ab initio utiliza-se o software GROMACS para executar as simulações de Dinâmica Molecular, enquanto que para o problema da otimização multiobjetivo emprega-se o framework ProtPred-GROMACS (2PG), o qual utiliza algoritmos genéticos como técnica de soluções heurísticas. O Monte Carlo com Dominância, nesse sentido, é como uma variante do tradicional método de Monte Carlo Metropolis. Assim, o objetivo é o de verificar se a predição da estrutura terciária de proteínas é aprimorada levando-se em conta também os aspectos estruturais. O critério energético de Metropolis e os critérios energéticos e estruturais da Dominância foram comparados empregando o cálculo de RMSD entre as estruturas preditas e as nativas. Foi verificado que o método de Monte Carlo com Dominância obteve melhores soluções para duas de três proteínas analisadas, chegando a cerca de 53% de diferença da predição por Metropolis.
APA, Harvard, Vancouver, ISO, and other styles
25

Zeppilli, Giulia. "Alcune applicazioni del Metodo Monte Carlo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3091/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Szymczak, Marcin. "Programming language semantics as a foundation for Bayesian inference." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28993.

Full text
Abstract:
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. 
We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
APA, Harvard, Vancouver, ISO, and other styles
27

Datta, Sagnik. "Fully bayesian structure learning of bayesian networks and their hypergraph extensions." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2283.

Full text
Abstract:
Dans cette thèse, j’aborde le problème important de l’estimation de la structure des réseaux complexes, à l’aide de la classe des modèles stochastiques dits réseaux Bayésiens. Les réseaux Bayésiens permettent de représenter l’ensemble des relations d’indépendance conditionnelle. L’apprentissage statistique de la structure de ces réseaux complexes par les réseaux Bayésiens peut révéler la structure causale sous-jacente. Il peut également servir pour la prédiction de quantités qui sont difficiles, coûteuses, ou non éthiques comme par exemple le calcul de la probabilité de survenance d’un cancer à partir de l’observation de quantités annexes, plus faciles à obtenir. Les contributions de ma thèse consistent en : (A) un logiciel développé en langage C pour l’apprentissage de la structure des réseaux bayésiens; (B) l’introduction d’un nouveau "jumping kernel" dans l’algorithme de "Metropolis-Hastings" pour un échantillonnage rapide de réseaux; (C) l’extension de la notion de réseaux Bayésiens aux structures incluant des boucles et (D) un logiciel spécifique pour l’apprentissage des structures cycliques. Notre principal objectif est l’apprentissage statistique de la structure de réseaux complexes représentée par un graphe et par conséquent notre objet d’intérêt est cette structure graphique. Un graphe est constitué de nœuds et d’arcs. Tous les paramètres apparaissant dans le modèle mathématique et différents de ceux qui caractérisent la structure graphique sont considérés comme des paramètres de nuisance<br>In this thesis, I address the important problem of the determination of the structure of complex networks, with the widely used class of Bayesian network models as a concrete vehicle of my ideas. The structure of a Bayesian network represents a set of conditional independence relations that hold in the domain. Learning the structure of the Bayesian network model that represents a domain can reveal insights into its underlying causal structure. 
Moreover, it can also be used for prediction of quantities that are difficult, expensive, or unethical to measure, such as the probability of cancer based on other quantities that are easier to obtain. The contributions of this thesis include (A) software developed in the C language for structure learning of Bayesian networks; (B) the introduction of a new jumping kernel in the Metropolis-Hastings algorithm for faster sampling of networks; (C) the extension of the notion of Bayesian networks to structures involving loops; and (D) software developed specifically to learn cyclic structures. Our primary objective is structure learning, and thus the graph structure is our parameter of interest. We do not intend to estimate the other parameters involved in the mathematical model; all parameters other than those characterizing the graph structure are treated as nuisance parameters
APA, Harvard, Vancouver, ISO, and other styles
28

Toyinbo, Peter Ayo. "Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3673.

Full text
Abstract:
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end a suitable statistical model will allow researchers to answer a major research question: who benefits or is harmed by this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and have to be estimated from observed indicators that are measured with errors; it may also have a nonlinear relationship with the outcome. Most of the existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even if the models include higher-order polynomials there may be problems when the focus of interest relates to the function over its whole domain. The goal is to develop a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and 1) baseline characteristics measured with errors, and 2) baseline-treatment interaction, such that the shapes of these relationships are data driven and need not be determined a priori. In the ALV model structure the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. 
Although the ALV approach is computationally intensive, it relieves its users from the need to decide functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
APA, Harvard, Vancouver, ISO, and other styles
29

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226748.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem illustrating, in particular, the Bayesian approach for uncertainty quantification in practice<br>Bayessche Inferenz besteht daraus, vorhandenes a-priori Wissen über unsichere Parameter in mathematischen Modellen mit neuen Beobachtungen messbarer Modellgrößen zusammenzuführen. In dieser Dissertation beschäftigen wir uns mit Modellen, die durch partielle Differentialgleichungen beschrieben sind. 
Die unbekannten Parameter sind dabei Koeffizientenfunktionen, die aus einem unendlich dimensionalen Funktionenraum kommen. Das Resultat der Bayesschen Inferenz ist dann eine wohldefinierte a-posteriori Wahrscheinlichkeitsverteilung auf diesem Funktionenraum, welche das aktualisierte Wissen über den unsicheren Koeffizienten beschreibt. Für Entscheidungsverfahren oder Postprocessing ist es oft notwendig die a-posteriori Verteilung zu simulieren oder bzgl. dieser zu integrieren. Dies verlangt nach numerischen Verfahren, welche sich zur Simulation in unendlich dimensionalen Räumen eignen. In dieser Arbeit betrachten wir Kalmanfiltertechniken, die auf Ensembles oder polynomiellen Chaosentwicklungen basieren, sowie Markowketten-Monte-Carlo-Methoden. Wir analysieren die erwähnte Kalmanfilter, indem wir deren Konvergenz zeigen und ihre Anwendbarkeit im Kontext Bayesscher Inferenz diskutieren. Weiterhin entwickeln und studieren wir einen verbesserten dimensionsunabhängigen Metropolis-Hastings-Algorithmus. Hierbei weisen wir geometrische Ergodizität mit Hilfe eines neuen Resultates zum Vergleich der Spektrallücken von Markowketten nach. Zusätzlich beobachten und analysieren wir die Robustheit der neuen Methode bzgl. eines fallenden Beobachtungsfehlers. Diese Robustheit ist eine weitere wünschenswerte Eigenschaft numerischer Methoden für Bayessche Inferenz. Den Abschluss der Arbeit bildet die Anwendung der diskutierten Methoden auf ein reales Grundwasserproblem, was insbesondere den Bayesschen Zugang zur Unsicherheitsquantifizierung in der Praxis illustriert
APA, Harvard, Vancouver, ISO, and other styles
30

Frühwirth-Schnatter, Sylvia, and Rudolf Frühwirth. "Bayesian Inference in the Multinomial Logit Model." Austrian Statistical Society, 2012. http://epub.wu.ac.at/5629/1/186%2D751%2D1%2DSM.pdf.

Full text
Abstract:
The multinomial logit model (MNL) possesses a latent variable representation in terms of random variables following a multivariate logistic distribution. Based on multivariate finite mixture approximations of the multivariate logistic distribution, various data-augmented Metropolis-Hastings algorithms are developed for a Bayesian inference of the MNL model.
APA, Harvard, Vancouver, ISO, and other styles
31

Rönnby, Karl. "Monte Carlo Simulations for Chemical Systems." Thesis, Linköpings universitet, Matematiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132811.

Full text
Abstract:
This thesis investigates different types of Monte Carlo estimators for use in computation of chemical systems, mainly to be used in calculating surface growth and evolution of SiC. Monte Carlo methods are a class of algorithms using random sampling to numerically solve problems and are used in many settings. Three different types of Monte Carlo methods are studied: a simple Monte Carlo estimator and two types of Markov chain Monte Carlo, Metropolis algorithm Monte Carlo and kinetic Monte Carlo. The mathematical background is given for all methods, and they are tested both on smaller systems with known results, to check their mathematical and chemical soundness, and on a larger surface system as an example of how they could be used
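The Metropolis step mentioned above can be made concrete with a minimal sketch. The following toy sampler targets a 1D Ising chain, an illustrative stand-in rather than the SiC surface systems studied in the thesis; all names and parameter values here are made up for this example:

```python
import math
import random

def metropolis_ising_1d(n_spins=50, beta=0.5, n_steps=20000, seed=1):
    """Minimal Metropolis sampler for a 1D Ising chain (J = 1, periodic
    boundaries). Returns the mean absolute magnetization per spin."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    mag_sum = 0.0
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        # Energy change of flipping spin i: dE = 2 J s_i (s_{i-1} + s_{i+1})
        de = 2.0 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
        # Metropolis acceptance rule: accept if dE <= 0, else with prob e^{-beta dE}
        if de <= 0 or rng.random() < math.exp(-beta * de):
            spins[i] = -spins[i]
        mag_sum += abs(sum(spins)) / n_spins
    return mag_sum / n_steps
```

Kinetic Monte Carlo differs from this scheme in that every step performs an event drawn from rate-weighted alternatives, so simulated time advances by a stochastic increment rather than one trial per step.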
APA, Harvard, Vancouver, ISO, and other styles
32

Liu, Yi. "Time-Varying Coefficient Models for Recurrent Events." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/97999.

Full text
Abstract:
I have developed time-varying coefficient models for recurrent event data to evaluate the temporal profiles for recurrence rate and covariate effects. There are three major parts in this dissertation. The first two parts propose a mixed Poisson process model with gamma frailties for single-type recurrent events. The third part proposes a Bayesian joint model based on multivariate log-normal frailties for multi-type recurrent events. In the first part, I propose an approach based on penalized B-splines to obtain smooth estimation for both time-varying coefficients and the log baseline intensity. An EM algorithm is developed for parameter estimation. One issue with this approach is that the estimating procedure is conditional on smoothing parameters, which have to be selected by cross-validation or by optimizing a certain performance criterion. The procedure can be computationally demanding with a large number of time-varying coefficients. To achieve objective estimation of smoothing parameters, I propose a mixed-model representation approach for penalized splines. Spline coefficients are treated as random effects and smoothing parameters are to be estimated as variance components. An EM algorithm embedded with penalized quasi-likelihood approximation is developed to estimate the model parameters. The third part proposes a Bayesian joint model with time-varying coefficients for multi-type recurrent events. Bayesian penalized splines are used to estimate time-varying coefficients and the log baseline intensity. One challenge in Bayesian penalized splines is that the smoothness of a spline fit is considerably sensitive to the subjective choice of hyperparameters. I establish a procedure to objectively determine the hyperparameters through a robust prior specification. A Markov chain Monte Carlo procedure based on Metropolis-adjusted Langevin algorithms is developed to sample from the high-dimensional distribution of spline coefficients. 
The procedure includes a joint sampling scheme to achieve better convergence and mixing properties. Simulation studies in the second and third parts have confirmed satisfactory model performance in estimating time-varying coefficients under different curvature and event rate conditions. The models in the second and third parts were applied to data from a commercial truck driver naturalistic driving study. The application results reveal that drivers with 7-hours-or-less sleep prior to a shift have a significantly higher intensity after 8 hours of on-duty driving and that their intensity remains higher after taking a break. In addition, the results also show drivers' self-selection on sleep time, total driving hours in a shift, and breaks. These applications provide crucial insight into the impact of sleep time on driving performance for commercial truck drivers and highlight the on-road safety implications of insufficient sleep and breaks while driving. This dissertation provides flexible and robust tools to evaluate the temporal profile of intensity for recurrent events.<br>PHD
APA, Harvard, Vancouver, ISO, and other styles
33

Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ozkan, Pelin. "Analysis Of Stochastic And Non-stochastic Volatility Models." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605421/index.pdf.

Full text
Abstract:
Change in variance, or volatility, over time can be modeled as deterministic by using autoregressive conditional heteroscedastic (ARCH) type models, or as stochastic by using stochastic volatility (SV) models. This study compares these two kinds of models, which are estimated on Turkish / USA exchange rate data. First, a GARCH(1,1) model is fitted to the data by using the package EViews, and then a Bayesian estimation procedure is used for estimating an appropriate SV model with the help of Ox code. In order to compare these models, the LR test statistic calculated for non-nested hypotheses is obtained.
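For readers unfamiliar with the GARCH(1,1) recursion mentioned above, a minimal simulation sketch may help; the parameter values are illustrative, not the estimates obtained from the exchange rate data:

```python
import math
import random

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=3):
    """Simulate n returns r_t = sqrt(h_t) * e_t with e_t ~ N(0, 1) and
    conditional variance h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
        h = omega + alpha * r * r + beta * h
    return returns
```

With these values the unconditional variance is omega / (1 - alpha - beta) = 1, so long sample paths have sample variance near one while still exhibiting volatility clustering.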
APA, Harvard, Vancouver, ISO, and other styles
35

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Technische Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A20754.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.<br>Bayessche Inferenz besteht daraus, vorhandenes a-priori Wissen über unsichere Parameter in mathematischen Modellen mit neuen Beobachtungen messbarer Modellgrößen zusammenzuführen. In dieser Dissertation beschäftigen wir uns mit Modellen, die durch partielle Differentialgleichungen beschrieben sind. 
Die unbekannten Parameter sind dabei Koeffizientenfunktionen, die aus einem unendlich dimensionalen Funktionenraum kommen. Das Resultat der Bayesschen Inferenz ist dann eine wohldefinierte a-posteriori Wahrscheinlichkeitsverteilung auf diesem Funktionenraum, welche das aktualisierte Wissen über den unsicheren Koeffizienten beschreibt. Für Entscheidungsverfahren oder Postprocessing ist es oft notwendig die a-posteriori Verteilung zu simulieren oder bzgl. dieser zu integrieren. Dies verlangt nach numerischen Verfahren, welche sich zur Simulation in unendlich dimensionalen Räumen eignen. In dieser Arbeit betrachten wir Kalmanfiltertechniken, die auf Ensembles oder polynomiellen Chaosentwicklungen basieren, sowie Markowketten-Monte-Carlo-Methoden. Wir analysieren die erwähnte Kalmanfilter, indem wir deren Konvergenz zeigen und ihre Anwendbarkeit im Kontext Bayesscher Inferenz diskutieren. Weiterhin entwickeln und studieren wir einen verbesserten dimensionsunabhängigen Metropolis-Hastings-Algorithmus. Hierbei weisen wir geometrische Ergodizität mit Hilfe eines neuen Resultates zum Vergleich der Spektrallücken von Markowketten nach. Zusätzlich beobachten und analysieren wir die Robustheit der neuen Methode bzgl. eines fallenden Beobachtungsfehlers. Diese Robustheit ist eine weitere wünschenswerte Eigenschaft numerischer Methoden für Bayessche Inferenz. Den Abschluss der Arbeit bildet die Anwendung der diskutierten Methoden auf ein reales Grundwasserproblem, was insbesondere den Bayesschen Zugang zur Unsicherheitsquantifizierung in der Praxis illustriert.
APA, Harvard, Vancouver, ISO, and other styles
36

Jin, Yan. "Bayesian Solution to the Analysis of Data with Values below the Limit of Detection (LOD)." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1227293204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Joly, Jean-Luc. "Contributions à la génération aléatoire pour des classes d'automates finis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2012/document.

Full text
Abstract:
Le concept d’automate, central en théorie des langages, est l’outil d’appréhension naturel et efficace de nombreux problèmes concrets. L’usage intensif des automates finis dans un cadre algorithmique s’illustre par de nombreux travaux de recherche. La correction et l’évaluation sont les deux questions fondamentales de l’algorithmique. Une méthode classique d’évaluation s’appuie sur la génération aléatoire contrôlée d’instances d’entrée. Les travaux décrits dans cette thèse s’inscrivent dans ce cadre et plus particulièrement dans le domaine de la génération aléatoire uniforme d’automates finis. L’exposé qui suit propose d’abord la construction d’un générateur aléatoire d’automates à pile déterministes, real time. Cette construction s’appuie sur la méthode symbolique. Des résultats théoriques et une étude expérimentale sont exposés. Un générateur aléatoire d’automates non-déterministes illustre ensuite la souplesse d’utilisation de la méthode de Monte-Carlo par Chaînes de Markov (MCMC) ainsi que la mise en œuvre de l’algorithme de Metropolis-Hastings pour l’échantillonnage à isomorphisme près. Un résultat sur le temps de mélange est donné dans le cadre général. L’échantillonnage par méthode MCMC pose le problème de l’évaluation du temps de mélange dans la chaîne. 
En s’inspirant de travaux antérieurs pour construire un générateur d’automates partiellement ordonnés, on montre comment différents outils statistiques permettent de s’attaquer à ce problème<br>The concept of automata, central to language theory, is the natural and efficient tool to apprehend various practical problems. The intensive use of finite automata in an algorithmic framework is illustrated by numerous research works. Correctness and the evaluation of performance are the two fundamental issues of algorithmics. A classic method to evaluate an algorithm is based on the controlled random generation of inputs. The work described in this thesis lies within this context and more specifically in the field of the uniform random generation of finite automata. The following presentation first proposes the design of a deterministic, real-time pushdown automata generator. This design builds on the symbolic method. Theoretical results and an experimental study are given. A random generator of non-deterministic automata then illustrates the flexibility of the Markov Chain Monte Carlo (MCMC) methods as well as the implementation of the Metropolis-Hastings algorithm to sample up to isomorphism. A result about the mixing time in the general framework is given. The MCMC sampling methods raise the problem of the mixing time in the chain. By drawing on work already completed to design a random generator of partially ordered automata, this work shows how various statistical tools can form a basis to address this issue
APA, Harvard, Vancouver, ISO, and other styles
38

Bylin, Johan. "Best practice of extracting magnetocaloric properties in magnetic simulations." Thesis, Uppsala universitet, Materialteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388356.

Full text
Abstract:
In this thesis, a numerical study of simulating and computing the magnetocaloric properties of magnetic materials is presented. The main objective was to deduce the optimal procedure to obtain the isothermal change in entropy of magnetic systems, by evaluating two different formulas of entropy extraction, one relying on the magnetization of the material and the other on the magnet's heat capacity. The magnetic systems were simulated using two different Monte Carlo algorithms, the Metropolis and Wang-Landau procedures. The two entropy methods proved to be comparable to one another. Both approaches produced reliable and consistent results, though finite-size effects could occur if the simulated system became too small. Erroneous fluctuations that invalidated the results did not seem to stem from discrepancies between the entropy methods but mainly from the computation of the heat capacity itself. Accurate determination of the heat capacity via an internal energy derivative generated excellent results, while a heat capacity obtained from a variance formula of the internal energy rendered the extracted entropy unusable. The results acquired from the Metropolis algorithm were consistent, accurate and dependable, while all of those produced via the Wang-Landau method exhibited intrinsic fluctuations of varying severity. The Wang-Landau method also proved to be computationally ineffective compared to the Metropolis algorithm, rendering the method unsuitable for magnetic simulations of this type.
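The magnetization-based entropy route mentioned above is, in the standard magnetocaloric literature, usually the Maxwell relation ΔS(T) = ∫₀^{H_max} (∂M/∂T)_H dH. A hedged numerical sketch on synthetic data (not the thesis's simulation output; the model magnetization below is invented purely for illustration) might look like:

```python
import numpy as np

def entropy_change(T, H, M):
    """Isothermal entropy change for a field sweep 0 -> H[-1], using the
    Maxwell relation Delta S(T) = integral over H of (dM/dT)_H.
    T: (nT,) temperatures; H: (nH,) fields; M: (nT, nH) magnetization grid."""
    dM_dT = np.gradient(M, T, axis=0)           # (dM/dT) at each fixed field
    mid = 0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1])  # trapezoid rule along H
    return np.sum(mid * np.diff(H), axis=1)

# Synthetic, purely illustrative M(T, H): a ferromagnet-like sigmoid in T
T = np.linspace(250.0, 350.0, 101)
H = np.linspace(0.0, 2.0, 51)
M = np.sqrt(H)[None, :] / (1.0 + np.exp((T[:, None] - 300.0) / 10.0))
dS = entropy_change(T, H, M)  # negative, with its peak magnitude near T = 300
```

The heat-capacity route instead integrates C(T, H)/T over temperature at fixed field and differences the two field values, which is why an inaccurate C estimate (e.g. from a noisy variance formula) propagates directly into the extracted entropy.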
APA, Harvard, Vancouver, ISO, and other styles
39

Eid, Abdelrahman. "Stochastic simulations for graphs and machine learning." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I018.

Full text
Abstract:
Bien qu’il ne soit pas pratique d’étudier la population dans de nombreux domaines et applications, l’échantillonnage est une méthode nécessaire permettant d’inférer l’information. Cette thèse est consacrée au développement des algorithmes d’échantillonnage probabiliste pour déduire l’ensemble de la population lorsqu’elle est trop grande ou impossible à obtenir. Les techniques de Monte Carlo par chaînes de Markov (MCMC) sont l’un des outils les plus importants pour l’échantillonnage à partir de distributions de probabilités, surtout lorsque ces distributions ont des constantes de normalisation difficiles à évaluer. Le travail de cette thèse s’intéresse principalement aux techniques d’échantillonnage pour les graphes. Deux méthodes pour échantillonner des sous-arbres uniformes à partir de graphes en utilisant les algorithmes de Metropolis-Hastings sont présentées dans le chapitre 2. Les méthodes proposées visent à échantillonner les arbres selon une distribution à partir d’un graphe où les sommets sont marqués. L’efficacité de ces méthodes est prouvée mathématiquement. De plus, des études de simulation ont été menées et ont confirmé les résultats théoriques de convergence vers la distribution d’équilibre. En continuant à travailler sur l’échantillonnage des graphes, une méthode est présentée au chapitre 3 pour échantillonner des ensembles de sommets similaires dans un graphe arbitraire non orienté en utilisant les propriétés des processus de points permanents (PPP). Notre algorithme d’échantillonnage des ensembles de k sommets est conçu pour surmonter le problème de la complexité de calcul lors du calcul du permanent, par échantillonnage d’une distribution conjointe dont la distribution marginale est un kPPP. Enfin, dans le chapitre 4, nous utilisons les définitions des méthodes MCMC et de la vitesse de convergence pour estimer la bande passante du noyau utilisée pour la classification dans l’apprentissage machine supervisé. 
Une méthode simple et rapide appelée KBER est présentée pour estimer la bande passante du noyau de la fonction de base radiale RBF en utilisant la courbure moyenne de Ricci de graphes<br>While it is impractical to study the population in many domains and applications, sampling is a necessary method that allows information to be inferred. This thesis is dedicated to developing probability sampling algorithms to infer the whole population when it is too large or impossible to obtain. Markov chain Monte Carlo (MCMC) techniques are among the most important tools for sampling from probability distributions, especially when these distributions have intractable normalization constants. The work of this thesis is mainly interested in graph sampling techniques. Two methods are presented in chapter 2 to sample uniform subtrees from graphs using Metropolis-Hastings algorithms. The proposed methods aim to sample trees according to a distribution from a graph where the vertices are labelled. The efficiency of these methods is proved mathematically. Additionally, simulation studies were conducted and confirmed the theoretical convergence results to the equilibrium distribution. Continuing the work on graph sampling, a method is presented in chapter 3 to sample sets of similar vertices in an arbitrary undirected graph using the properties of the Permanental Point Processes (PPP). Our algorithm to sample sets of k vertices is designed to overcome the problem of computational complexity when computing the permanent, by sampling a joint distribution whose marginal distribution is a kPPP. Finally, in chapter 4, we use the definitions of the MCMC methods and convergence speed to estimate the kernel bandwidth used for classification in supervised machine learning. A simple and fast method called KBER is presented to estimate the bandwidth of the Radial Basis Function (RBF) kernel using the average Ricci curvature of graphs
APA, Harvard, Vancouver, ISO, and other styles
40

Pérez, Forero Fernando José. "Essays in structural macroeconometrics." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/119323.

Full text
Abstract:
This thesis is concerned with the structural estimation of macroeconomic models via Bayesian methods and the economic implications derived from their empirical output. The first chapter provides a general method for estimating structural VAR models. The second chapter applies the method previously developed and provides a measure of the monetary stance of the Federal Reserve for the last forty years, using a pool of instruments and taking into account recent practices known as unconventional monetary policies. It then shows how the monetary transmission mechanism has changed over time, focusing on the period after the Great Recession. The third chapter develops a model of exchange rate determination with dispersed information and regime switches, with the purpose of fitting the observed disagreement in survey data from Japan. The model does a good job in terms of fitting the observed data.<br>Esta tesis trata sobre la estimación estructural de modelos macroeconómicos a través de métodos Bayesianos y las implicancias económicas derivadas de sus resultados. El primer capítulo proporciona un método general para la estimación de modelos VAR estructurales. El segundo capítulo aplica dicho método y proporciona una medida de la posición de política monetaria de la Reserva Federal para los últimos cuarenta años. Se utiliza una variedad de instrumentos y se tienen en cuenta las prácticas recientes denominadas políticas no convencionales. Se muestra cómo el mecanismo de transmisión de la política monetaria ha cambiado a través del tiempo, centrando la atención en el período posterior a la gran recesión. El tercer capítulo desarrolla un modelo de determinación del tipo de cambio con información dispersa y cambios de régimen, y tiene el propósito de capturar la dispersión observada en datos de encuestas de expectativas de Japón. El modelo realiza un buen trabajo en términos de ajuste de los datos.
APA, Harvard, Vancouver, ISO, and other styles
41

Mireuta, Matei. "Étude de la performance d’un algorithme Metropolis-Hastings avec ajustement directionnel." Thèse, 2011. http://hdl.handle.net/1866/6231.

Full text
Abstract:
Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont des outils très populaires pour l’échantillonnage de lois de probabilité complexes et/ou en grandes dimensions. Étant donné leur facilité d’application, ces méthodes sont largement répandues dans plusieurs communautés scientifiques et bien certainement en statistique, particulièrement en analyse bayésienne. Depuis l’apparition de la première méthode MCMC en 1953, le nombre de ces algorithmes a considérablement augmenté et ce sujet continue d’être une aire de recherche active. Un nouvel algorithme MCMC avec ajustement directionnel a été récemment développé par Bédard et al. (IJSS, 9 :2008) et certaines de ses propriétés restent partiellement méconnues. L’objectif de ce mémoire est de tenter d’établir l’impact d’un paramètre clé de cette méthode sur la performance globale de l’approche. Un second objectif est de comparer cet algorithme à d’autres méthodes MCMC plus versatiles afin de juger de sa performance de façon relative.<br>Markov Chain Monte Carlo algorithms (MCMC) have become popular tools for sampling from complex and/or high dimensional probability distributions. Given their relative ease of implementation, these methods are frequently used in various scientific areas, particularly in Statistics and Bayesian analysis. The volume of such methods has risen considerably since the first MCMC algorithm described in 1953 and this area of research remains extremely active. A new MCMC algorithm using a directional adjustment has recently been described by Bédard et al. (IJSS, 9:2008) and some of its properties remain unknown. The objective of this thesis is to attempt determining the impact of a key parameter on the global performance of the algorithm. Moreover, another aim is to compare this new method to existing MCMC algorithms in order to evaluate its performance in a relative fashion.
APA, Harvard, Vancouver, ISO, and other styles
42

Lalancette, Michaël. "Convergence d’un algorithme de type Metropolis pour une distribution cible bimodale." Thèse, 2017. http://hdl.handle.net/1866/19376.

Full text
Abstract:
Nous présentons dans ce mémoire un nouvel algorithme de type Metropolis-Hastings dans lequel la distribution instrumentale a été conçue pour l'estimation de distributions cibles bimodales. En fait, cet algorithme peut être vu comme une modification de l'algorithme Metropolis de type marche aléatoire habituel auquel on ajoute quelques incréments de grande envergure à des moments aléatoires à travers la simulation. Le but de ces grands incréments est de quitter le mode de la distribution cible où l'on se trouve et de trouver l'autre mode. Par la suite, nous présentons puis démontrons un résultat de convergence faible qui nous assure que, lorsque la dimension de la distribution cible croît vers l'infini, la chaîne de Markov engendrée par l'algorithme converge vers un certain processus stochastique qui est continu presque partout. L'idée est similaire à ce qui a été fait par Roberts et al. (1997), mais la technique utilisée pour la démonstration des résultats est basée sur ce qui a été fait par Bédard (2006). Nous proposons enfin une stratégie pour trouver la paramétrisation optimale de notre nouvel algorithme afin de maximiser la vitesse d'exploration locale des modes d'une distribution cible donnée tout en estimant bien la pondération relative de chaque mode. Tel que dans l'approche traditionnellement utilisée pour ce genre d'analyse, notre stratégie passe par l'optimisation de la vitesse d'exploration du processus limite. Finalement, nous présentons des exemples numériques d'implémentation de l'algorithme sur certaines distributions cibles, dont une ne respecte pas les conditions du résultat théorique présenté.<br>In this thesis, we present a new Metropolis-Hastings algorithm whose proposal distribution has been designed to successfully estimate bimodal target distributions. This sampler may be seen as a variant of the usual random walk Metropolis sampler in which we propose large candidate steps at random times. 
The goal of these large candidate steps is to leave the current mode of the target distribution in order to find the other one. We then state and prove a weak convergence result stipulating that if we let the dimension of the target distribution increase to infinity, the Markov chain yielded by the algorithm converges to a certain stochastic process that is almost everywhere continuous. The theoretical result is in the flavour of Roberts et al. (1997), while the method of proof is similar to that found in Bédard (2006). We propose a strategy for optimally parameterizing our new sampler. This strategy aims at optimizing local exploration of the target modes, while correctly estimating the relative weight of each mode. As is traditionally done in the statistical literature, our approach consists of optimizing the limiting process rather than the finite-dimensional Markov chain. Finally, we illustrate our method via numerical examples on some target distributions, one of which violates the regularity conditions of the theoretical result.
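The mechanism described in this abstract can be illustrated with a minimal sketch: a random walk Metropolis sampler that occasionally proposes a much larger increment in order to jump between the modes of a bimodal target. The target (an equal mixture of two normals), the step sizes, and the jump probability below are illustrative choices, not the parameterization studied in the thesis.

```python
import math
import random

def bimodal_logpdf(x):
    # Equal mixture of N(-5, 1) and N(5, 1), up to an additive constant,
    # evaluated with the log-sum-exp trick so it stays finite far from the modes
    a = -0.5 * (x + 5.0) ** 2
    b = -0.5 * (x - 5.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def mixed_rwm(logpdf, x0, n_iter, small_sd=1.0, large_sd=15.0, p_large=0.05, seed=1):
    # Random walk Metropolis: with probability p_large the proposal uses a
    # much larger standard deviation, aiming to reach the other mode
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        sd = large_sd if rng.random() < p_large else small_sd
        y = x + rng.gauss(0.0, sd)
        # The proposal is symmetric, so the acceptance ratio is pi(y) / pi(x)
        if math.log(rng.random()) < logpdf(y) - logpdf(x):
            x = y
        chain.append(x)
    return chain

chain = mixed_rwm(bimodal_logpdf, x0=-5.0, n_iter=20000)
```

Starting in the left mode, the chain should visit both modes and spend a comparable fraction of time in each; with small steps only, the chain would almost never cross the low-density region between them.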
APA, Harvard, Vancouver, ISO, and other styles
43

Groiez, Assia. "Recyclage des candidats dans l'algorithme Metropolis à essais multiples." Thèse, 2014. http://hdl.handle.net/1866/10853.

Full text
Abstract:
Les méthodes de Monte Carlo par chaînes de Markov (MCCM) sont des méthodes servant à échantillonner à partir de distributions de probabilité. Ces techniques se basent sur le parcours de chaînes de Markov ayant pour lois stationnaires les distributions à échantillonner. Étant donné leur facilité d’application, elles constituent une des approches les plus utilisées dans la communauté statistique, et tout particulièrement en analyse bayésienne. Ce sont des outils très populaires pour l’échantillonnage de lois de probabilité complexes et/ou en grandes dimensions. Depuis l’apparition de la première méthode MCCM en 1953 (la méthode de Metropolis, voir [10]), l’intérêt pour ces méthodes, ainsi que l’éventail d’algorithmes disponibles ne cessent de s’accroître d’une année à l’autre. Bien que l’algorithme Metropolis-Hastings (voir [8]) puisse être considéré comme l’un des algorithmes de Monte Carlo par chaînes de Markov les plus généraux, il est aussi l’un des plus simples à comprendre et à expliquer, ce qui en fait un algorithme idéal pour débuter. Il a été sujet de développement par plusieurs chercheurs. L’algorithme Metropolis à essais multiples (MTM), introduit dans la littérature statistique par [9], est considéré comme un développement intéressant dans ce domaine, mais malheureusement son implémentation est très coûteuse (en termes de temps). Récemment, un nouvel algorithme a été développé par [1]. Il s’agit de l’algorithme Metropolis à essais multiples revisité (MTM revisité), qui définit la méthode MTM standard mentionnée précédemment dans le cadre de l’algorithme Metropolis-Hastings sur un espace étendu. L’objectif de ce travail est, en premier lieu, de présenter les méthodes MCCM, et par la suite d’étudier et d’analyser les algorithmes Metropolis-Hastings ainsi que le MTM standard afin de permettre aux lecteurs une meilleure compréhension de l’implémentation de ces méthodes. 
Un deuxième objectif est d’étudier les perspectives ainsi que les inconvénients de l’algorithme MTM revisité afin de voir s’il répond aux attentes de la communauté statistique. Enfin, nous tentons de combattre le problème de sédentarité de l’algorithme MTM revisité, ce qui donne lieu à un tout nouvel algorithme. Ce nouvel algorithme performe bien lorsque le nombre de candidats générés à chaque itération est petit, mais sa performance se dégrade à mesure que ce nombre de candidats croît.<br>Markov Chain Monte Carlo (MCMC) algorithms are methods that are used for sampling from probability distributions. These tools are based on the path of a Markov chain whose stationary distribution is the distribution to be sampled. Given their relative ease of application, they are one of the most popular approaches in the statistical community, especially in Bayesian analysis. These methods are very popular for sampling from complex and/or high dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis algorithm, see [10]), the interest in these methods, as well as the range of algorithms available, continues to increase from one year to the next. Although the Metropolis-Hastings algorithm (see [8]) can be considered one of the most general Markov chain Monte Carlo algorithms, it is also one of the easiest to understand and explain, making it an ideal algorithm for beginners. As such, it has been studied by several researchers. The multiple-try Metropolis (MTM) algorithm, proposed by [9], is considered an interesting development in this field, but unfortunately its implementation is quite expensive (in terms of time). Recently, a new algorithm was developed by [1]. This method is named the revisited multiple-try Metropolis algorithm (MTM revisited), which is obtained by expressing the MTM method as a Metropolis-Hastings algorithm on an extended space. 
The objective of this work is to first present MCMC methods, and subsequently study and analyze the Metropolis-Hastings and standard MTM algorithms to allow readers a better perspective on the implementation of these methods. A second objective is to explore the opportunities and disadvantages of the revisited MTM algorithm to see if it meets the expectations of the statistical community. We finally attempt to counter the sedentary behaviour of the revisited MTM algorithm (its tendency to remain at its current state), which leads to a new algorithm. The latter performs efficiently when the number of candidates generated at each iteration is small, but its performance deteriorates as this number increases.
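For reference, the standard MTM method discussed in this abstract can be sketched in its common special case of a symmetric random walk proposal with weight function w(y, x) = π(y); the standard normal target and the tuning constants below are illustrative assumptions, not those of the thesis.

```python
import math
import random

def mtm_step(logpdf, x, rng, k=5, sd=1.0):
    # Multiple-try Metropolis with a symmetric random walk proposal,
    # using the weight function w(y, x) = pi(y) (a common special case)
    ys = [x + rng.gauss(0.0, sd) for _ in range(k)]
    wy = [math.exp(logpdf(y)) for y in ys]
    # Select one candidate with probability proportional to its weight
    y = rng.choices(ys, weights=wy, k=1)[0]
    # Reference set: k - 1 draws around the selected candidate, plus x itself
    xs = [y + rng.gauss(0.0, sd) for _ in range(k - 1)] + [x]
    wx = [math.exp(logpdf(z)) for z in xs]
    # Generalized acceptance ratio of the standard MTM sampler
    if rng.random() < min(1.0, sum(wy) / sum(wx)):
        return y
    return x

def run_mtm(logpdf, x0, n_iter, seed=2):
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        x = mtm_step(logpdf, x, rng)
        chain.append(x)
    return chain

# Standard normal target (log-density up to an additive constant)
chain = run_mtm(lambda x: -0.5 * x * x, x0=3.0, n_iter=5000)
```

The cost the abstract mentions is visible here: each iteration evaluates the target 2k - 1 times, which is exactly what motivates cheaper variants such as the revisited MTM.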
APA, Harvard, Vancouver, ISO, and other styles
44

Boisvert-Beaudry, Gabriel. "Efficacité des distributions instrumentales en équilibre dans un algorithme de type Metropolis-Hastings." Thèse, 2019. http://hdl.handle.net/1866/23794.

Full text
Abstract:
Dans ce mémoire, nous nous intéressons à une nouvelle classe de distributions instrumentales informatives dans le cadre de l'algorithme Metropolis-Hastings. Ces distributions instrumentales, dites en équilibre, sont obtenues en ajoutant de l'information à propos de la distribution cible à une distribution instrumentale non informative. Une chaîne de Markov générée par une distribution instrumentale en équilibre est réversible par rapport à la densité cible sans devoir utiliser une probabilité d'acceptation dans deux cas extrêmes: le cas local lorsque la variance instrumentale tend vers 0 et le cas global lorsqu'elle tend vers l'infini. Il est nécessaire d'approximer les distributions instrumentales en équilibre afin de pouvoir les utiliser en pratique. Nous montrons que le cas local mène au Metropolis-adjusted Langevin algorithm (MALA), tandis que le cas global mène à une légère modification du MALA. Ces résultats permettent de concevoir un nouvel algorithme généralisant le MALA grâce à l'ajout d'un nouveau paramètre. En fonction de celui-ci, l'algorithme peut utiliser l'équilibre local ou global ou encore une interpolation entre ces deux cas. Nous étudions ensuite la paramétrisation optimale de cet algorithme en fonction de la dimension de la distribution cible sous deux régimes: le régime asymptotique puis le régime en dimensions finies. Diverses simulations permettent d'illustrer les résultats théoriques obtenus. De plus, une application du nouvel algorithme à un problème de régression logistique bayésienne permet de comparer son efficacité à des algorithmes existants. Les résultats obtenus sont satisfaisants autant d'un point de vue théorique que computationnel.<br>In this master's thesis, we are interested in a new class of informed proposal distributions for Metropolis-Hastings algorithms. These new proposals, called balanced proposals, are obtained by adding information about the target density to an uninformed proposal distribution. 
A Markov chain generated by a balanced proposal is reversible with respect to the target density without the need for an acceptance probability in two extreme cases: the local case, where the proposal variance tends to zero, and the global case, where it tends to infinity. The balanced proposals need to be approximated to be used in practice. We show that the local case leads to the Metropolis-adjusted Langevin algorithm (MALA), while the global case leads to a small modification of the MALA. These results are used to create a new algorithm that generalizes the MALA by adding a new parameter. Depending on the value of this parameter, the new algorithm will use a locally balanced proposal, a globally balanced proposal, or an interpolation between these two cases. We then study the optimal choice for this parameter as a function of the dimension of the target distribution under two regimes: the asymptotic regime and a finite-dimensional regime. Simulations are presented to illustrate the theoretical results. Finally, we apply the new algorithm to a Bayesian logistic regression problem and compare its efficiency to existing algorithms. The results are satisfying from both a theoretical and a computational standpoint.
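A minimal sketch of the MALA, which the abstract identifies as the local limit of the balanced proposals, may help fix ideas: the proposal drifts along the gradient of the log target and is then corrected by a Metropolis-Hastings step with the asymmetric proposal density. The one-dimensional standard normal target and step size are illustrative assumptions.

```python
import math
import random

def mala(logpdf, grad_logpdf, x0, n_iter, sigma=0.9, seed=3):
    # Metropolis-adjusted Langevin algorithm: drift the proposal along the
    # gradient of the log target, then correct with a Metropolis-Hastings step
    rng = random.Random(seed)

    def log_q(to, frm):
        # Gaussian proposal density q(to | frm), up to an additive constant
        mu = frm + 0.5 * sigma ** 2 * grad_logpdf(frm)
        return -((to - mu) ** 2) / (2.0 * sigma ** 2)

    x, chain = x0, []
    for _ in range(n_iter):
        y = x + 0.5 * sigma ** 2 * grad_logpdf(x) + sigma * rng.gauss(0.0, 1.0)
        # Acceptance ratio accounts for the asymmetry of the drifted proposal
        log_alpha = logpdf(y) + log_q(x, y) - logpdf(x) - log_q(y, x)
        if math.log(rng.random()) < log_alpha:
            x = y
        chain.append(x)
    return chain

# Standard normal target: log pi(x) = -x^2 / 2, gradient -x
chain = mala(lambda x: -0.5 * x * x, lambda x: -x, x0=2.0, n_iter=5000)
```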
APA, Harvard, Vancouver, ISO, and other styles
45

Gagnon, Philippe. "Sélection de modèles robuste : régression linéaire et algorithme à sauts réversibles." Thèse, 2017. http://hdl.handle.net/1866/20583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bégin, Jean-François. "New simulation schemes for the Heston model." Thèse, 2012. http://hdl.handle.net/1866/8752.

Full text
Abstract:
Les titres financiers sont souvent modélisés par des équations différentielles stochastiques (ÉDS). Ces équations peuvent décrire le comportement de l'actif, et aussi parfois certains paramètres du modèle. Par exemple, le modèle de Heston (1993), qui s'inscrit dans la catégorie des modèles à volatilité stochastique, décrit le comportement de l'actif et de la variance de ce dernier. Le modèle de Heston est très intéressant puisqu'il admet des formules semi-analytiques pour certains produits dérivés, ainsi qu'un certain réalisme. Cependant, la plupart des algorithmes de simulation pour ce modèle font face à quelques problèmes lorsque la condition de Feller (1951) n'est pas respectée. Dans ce mémoire, nous introduisons trois nouveaux algorithmes de simulation pour le modèle de Heston. Ces nouveaux algorithmes visent à accélérer le célèbre algorithme de Broadie et Kaya (2006); pour ce faire, nous utiliserons, entre autres, des méthodes de Monte Carlo par chaînes de Markov (MCMC) et des approximations. Dans le premier algorithme, nous modifions la seconde étape de la méthode de Broadie et Kaya afin de l'accélérer. Alors, au lieu d'utiliser la méthode de Newton du second ordre et l'approche d'inversion, nous utilisons l'algorithme de Metropolis-Hastings (voir Hastings (1970)). Le second algorithme est une amélioration du premier. Au lieu d'utiliser la vraie densité de la variance intégrée, nous utilisons l'approximation de Smith (2007). Cette amélioration diminue la dimension de l'équation caractéristique et accélère l'algorithme. Notre dernier algorithme n'est pas basé sur une méthode MCMC. Cependant, nous essayons toujours d'accélérer la seconde étape de la méthode de Broadie et Kaya (2006). Afin de réussir ceci, nous utilisons une variable aléatoire gamma dont les moments sont appariés à la vraie variable aléatoire de la variance intégrée par rapport au temps. Selon Stewart et al. 
(2007), il est possible d'approximer une convolution de variables aléatoires gamma (qui ressemble beaucoup à la représentation donnée par Glasserman et Kim (2008) si le pas de temps est petit) par une simple variable aléatoire gamma.<br>Financial stocks are often modeled by stochastic differential equations (SDEs). These equations could describe the behavior of the underlying asset as well as some of the model's parameters. For example, the Heston (1993) model, which is a stochastic volatility model, describes the behavior of the stock and the variance of the latter. The Heston model is very interesting since it has semi-closed formulas for some derivatives, and it is quite realistic. However, many simulation schemes for this model have problems when the Feller (1951) condition is violated. In this thesis, we introduce new simulation schemes to simulate price paths using the Heston model. These new algorithms are based on Broadie and Kaya's (2006) method. In order to increase the speed of the exact scheme of Broadie and Kaya, we use, among other things, Markov chain Monte Carlo (MCMC) algorithms and some well-chosen approximations. In our first algorithm, we modify the second step of Broadie and Kaya's method in order to get faster schemes. Instead of using the second-order Newton method coupled with the inversion approach, we use a Metropolis-Hastings algorithm. The second algorithm is a small improvement of the first scheme. Instead of using the exact p.d.f. of the time-integrated variance, we use Smith's (2007) approximation. This helps us decrease the dimension of our problem (from three to two). Our last algorithm is not based on MCMC methods. However, we still try to speed up the second step of Broadie and Kaya. In order to achieve this, we use a moment-matched gamma random variable. According to Stewart et al. 
(2007), it is possible to approximate a complex gamma convolution (somewhat near the representation given by Glasserman and Kim (2008) when T-t is close to zero) by a gamma distribution.
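The moment-matching step underlying the third scheme can be illustrated in isolation: given the first two moments of a positive random variable (in the thesis, the time-integrated variance), one fits a gamma random variable with the same mean and variance. The numerical inputs below are arbitrary illustrative values, not quantities derived from the Heston model.

```python
import random

def moment_matched_gamma(mean, var, rng):
    # Gamma(shape, scale) with the same first two moments as the target:
    # shape * scale = mean and shape * scale^2 = var
    shape = mean ** 2 / var
    scale = var / mean
    # random.gammavariate takes (alpha, beta) with mean alpha * beta
    return rng.gammavariate(shape, scale)

rng = random.Random(4)
draws = [moment_matched_gamma(2.0, 0.5, rng) for _ in range(100000)]
```

The sample mean and variance of `draws` should be close to the prescribed 2.0 and 0.5, which is all the approximation guarantees; higher moments of the true integrated variance are not matched.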
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Hai-Chen, and 吳海陣. "A Study of Metropolis-Hasting Algorithm." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/91075426362923386620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Atchadé, Yves F. "Quelques contributions sur les méthodes de Monte Carlo." Thèse, 2003. http://hdl.handle.net/1866/14581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zheng, Zhi-Long, and 鄭智隆. "Rates of convergence of Metropolis-Hastings algorithm." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/86531956484261783613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Γιαννόπουλος, Νικόλαος. "Μελετώντας τον αλγόριθμο Metropolis-Hastings". Thesis, 2012. http://hdl.handle.net/10889/5920.

Full text
Abstract:
Η παρούσα διπλωματική διατριβή εντάσσεται ερευνητικά στην περιοχή της Υπολογιστικής Στατιστικής, καθώς ασχολούμαστε με τη μελέτη μεθόδων προσομοίωσης από κάποια κατανομή π (κατανομή στόχο) και τον υπολογισμό σύνθετων ολοκληρωμάτων. Σε πολλά πραγματικά προβλήματα, όπου η μορφή της π είναι ιδιαίτερα πολύπλοκή ή/και η διάσταση του χώρου καταστάσεων μεγάλη, η προσομοίωση από την π δεν μπορεί να γίνει με απλές τεχνικές καθώς επίσης και ο υπολογισμός των ολοκληρωμάτων είναι πάρα πολύ δύσκολο αν όχι αδύνατο να γίνει αναλυτικά. Γι’ αυτό, καταφεύγουμε σε τεχνικές Monte Carlo (MC) και Markov Chain Monte Carlo (MCMC), οι οποίες προσομοιώνουν τιμές τυχαίων μεταβλητών και εκτιμούν τα ολοκληρώματα μέσω κατάλληλων συναρτήσεων των προσομοιωμένων τιμών. Οι τεχνικές MC παράγουν ανεξάρτητες παρατηρήσεις είτε απ’ ευθείας από την κατανομή-στόχο π είτε από κάποια διαφορετική κατανομή-πρότασης g. Οι τεχνικές MCMC προσομοιώνουν αλυσίδες Markov με στάσιμη κατανομή την και επομένως οι παρατηρήσεις είναι εξαρτημένες. Στα πλαίσια αυτής της εργασίας θα ασχοληθούμε κυρίως με τον αλγόριθμο Metropolis-Hastings που είναι ένας από τους σημαντικότερους, αν όχι ο σημαντικότερος, MCMC αλγόριθμους. Πιο συγκεκριμένα, στο Κεφάλαιο 2 γίνεται μια σύντομη αναφορά σε γνωστές τεχνικές MC, όπως η μέθοδος Αποδοχής-Απόρριψης, η μέθοδος Αντιστροφής και η μέθοδος Δειγματοληψίας σπουδαιότητας καθώς επίσης και σε τεχνικές MCMC, όπως ο αλγόριθμός Metropolis-Hastings, o Δειγματολήπτης Gibbs και η μέθοδος Metropolis Within Gibbs. Στο Κεφάλαιο 3 γίνεται αναλυτική αναφορά στον αλγόριθμο Metropolis-Hastings. Αρχικά, παραθέτουμε μια σύντομη ιστορική αναδρομή και στη συνέχεια δίνουμε μια αναλυτική περιγραφή του. Παρουσιάζουμε κάποιες ειδικές μορφές τού καθώς και τις βασικές ιδιότητες που τον χαρακτηρίζουν. Το κεφάλαιο ολοκληρώνεται με την παρουσίαση κάποιων εφαρμογών σε προσομοιωμένα καθώς και σε πραγματικά δεδομένα. 
Το τέταρτο κεφάλαιο ασχολείται με μεθόδους εκτίμησης της διασποράς του εργοδικού μέσου ο οποίος προκύπτει από τις MCMC τεχνικές. Ιδιαίτερη αναφορά γίνεται στις μεθόδους Batch means και Spectral Variance Estimators. Τέλος, το Κεφάλαιο 5 ασχολείται με την εύρεση μιας κατάλληλης κατανομή πρότασης για τον αλγόριθμό Metropolis-Hastings. Παρόλο που ο αλγόριθμος Metropolis-Hastings μπορεί να συγκλίνει για οποιαδήποτε κατανομή πρότασης αρκεί να ικανοποιεί κάποιες βασικές υποθέσεις, είναι γνωστό ότι μία κατάλληλη επιλογή της κατανομής πρότασης βελτιώνει τη σύγκλιση του αλγόριθμου. Ο προσδιορισμός της βέλτιστής κατανομής πρότασης για μια συγκεκριμένη κατανομή στόχο είναι ένα πολύ σημαντικό αλλά εξίσου δύσκολο πρόβλημα. Το πρόβλημα αυτό έχει προσεγγιστεί με πολύ απλοϊκές τεχνικές (trial-and-error τεχνικές) αλλά και με adaptive αλγόριθμούς που βρίσκουν μια "καλή" κατανομή πρότασης αυτόματα.<br>This thesis falls within the research area of Computational Statistics, as we study methods for simulating from some distribution π (the target distribution) and for computing complex integrals. In many real problems, where the form of π is particularly complicated and/or the dimension of the state space is large, simulating from π cannot be done with simple techniques, and computing the integrals analytically is very difficult, if not impossible. We therefore resort to Monte Carlo (MC) and Markov chain Monte Carlo (MCMC) techniques, which simulate values of random variables and estimate the integrals through suitable functions of the simulated values. MC techniques produce independent observations, either directly from the target distribution π or from some different proposal distribution g. MCMC techniques simulate Markov chains whose stationary distribution is π, so the observations are dependent. In this work we deal mainly with the Metropolis-Hastings algorithm, which is one of the most important, if not the most important, MCMC algorithms. 
More specifically, Chapter 2 briefly reviews well-known MC techniques, such as the Acceptance-Rejection method, the Inversion method and Importance Sampling, as well as MCMC techniques such as the Metropolis-Hastings algorithm, the Gibbs sampler and the Metropolis-within-Gibbs method. Chapter 3 presents the Metropolis-Hastings algorithm in detail. We first give a short historical review and then a detailed description of the algorithm. We present some of its special forms as well as the basic properties that characterize it. The chapter concludes with some applications to simulated and real data. The fourth chapter deals with methods for estimating the variance of the ergodic mean obtained from MCMC techniques. Particular attention is given to the Batch Means and Spectral Variance Estimators methods. Finally, Chapter 5 deals with finding a suitable proposal distribution for the Metropolis-Hastings algorithm. Although the Metropolis-Hastings algorithm converges for any proposal distribution satisfying some basic assumptions, it is known that an appropriate choice of proposal distribution improves its convergence. Determining the optimal proposal distribution for a given target distribution is a very important but equally difficult problem. This problem has been approached both with very simple trial-and-error techniques and with adaptive algorithms that find a "good" proposal distribution automatically.
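Two ingredients this abstract covers (the Metropolis-Hastings sampler of Chapter 3 and the batch means variance estimator of Chapter 4) can be sketched together as follows; the standard normal target, proposal scale, and batch count are illustrative choices, not those of the thesis.

```python
import math
import random

def metropolis_hastings(logpdf, x0, n_iter, sd=2.4, seed=5):
    # Random walk Metropolis-Hastings with a symmetric Gaussian proposal
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, sd)
        if math.log(rng.random()) < logpdf(y) - logpdf(x):
            x = y
        chain.append(x)
    return chain

def batch_means_variance(chain, n_batches=30):
    # Batch means estimate of the asymptotic variance of the ergodic average:
    # split the chain into batches, then scale the variance of the batch means
    b = len(chain) // n_batches
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    overall = sum(means) / n_batches
    return b * sum((m - overall) ** 2 for m in means) / (n_batches - 1)

chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_iter=30000)
sigma2 = batch_means_variance(chain)
```

Because successive MCMC draws are dependent, `sigma2` exceeds the target's variance of 1; dividing it by the chain length gives an honest standard error for the ergodic average, which is exactly what the naive i.i.d. formula would understate.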
APA, Harvard, Vancouver, ISO, and other styles