Academic literature on the topic 'Algorithme Metropolis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithme Metropolis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Algorithme Metropolis"

1

Schnetzler, Bernard. "Un algorithme dérivé de l'algorithme de Metropolis." Comptes Rendus Mathematique 355, no. 10 (2017): 1104–10. http://dx.doi.org/10.1016/j.crma.2017.10.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chauveau, Didier, and Pierre Vandekerkhove. "Un algorithme de Hastings-Metropolis avec apprentissage séquentiel." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 329, no. 2 (1999): 173–76. http://dx.doi.org/10.1016/s0764-4442(99)80484-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamatani, K. "Ergodicity of Markov chain Monte Carlo with reversible proposal." Journal of Applied Probability 54, no. 2 (2017): 638–54. http://dx.doi.org/10.1017/jpr.2017.22.

Full text
Abstract:
We describe the ergodic properties of some Metropolis–Hastings algorithms for heavy-tailed target distributions. The results of these algorithms are usually analyzed under a subgeometric ergodic framework, but we prove that the mixed preconditioned Crank–Nicolson (MpCN) algorithm has geometric ergodicity even for heavy-tailed target distributions. This useful property comes from the fact that, under a suitable transformation, the MpCN algorithm becomes a random-walk Metropolis algorithm.
APA, Harvard, Vancouver, ISO, and other styles
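Kamatani's result reduces the MpCN algorithm, under a suitable transformation, to a random-walk Metropolis algorithm. As a point of reference, a minimal random-walk Metropolis sampler for a heavy-tailed (standard Cauchy) target can be sketched as follows; the target, step size, and chain length are illustrative choices, not taken from the paper:

```python
import math
import random

def log_target(x):
    # Log-density of a standard Cauchy target, up to an additive constant.
    return -math.log(1.0 + x * x)

def random_walk_metropolis(n_steps, step=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)          # symmetric random-walk proposal
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y                             # Metropolis accept/reject
        samples.append(x)
    return samples

samples = random_walk_metropolis(10000)
```

Because the proposal is symmetric, the Hastings correction cancels and only the ratio of target densities enters the acceptance probability.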
4

Hu, Yulin, and Yayong Tang. "Metropolis-Hastings Algorithm with Delayed Acceptance and Rejection." Review of Educational Theory 2, no. 2 (2019): 7. http://dx.doi.org/10.30564/ret.v2i2.682.

Full text
Abstract:
Metropolis-Hastings algorithms are slowed down by the computation of complex target distributions. To address this problem, one can use the delayed acceptance Metropolis-Hastings algorithm (MHDA) of Christen and Fox (2005). However, the acceptance rate of a proposed value will always be less than in the standard Metropolis-Hastings algorithm. This can be fixed by using the Metropolis-Hastings algorithm with delayed rejection (MHDR) proposed by Tierney and Mira (1999). In this paper, we combine the ideas of MHDA and MHDR to propose a new MH algorithm, named the Metropolis-Hastings algorithm with delayed acceptance and rejection (MHDAR). The new algorithm reduces the computational cost by splitting the prior or likelihood functions and increases the acceptance probability by delayed rejection at the second stage. We illustrate these accelerating features with a realistic example.
APA, Harvard, Vancouver, ISO, and other styles
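The delayed-acceptance idea of Christen and Fox that this paper builds on can be sketched in a few lines: a cheap surrogate screens the proposal before the expensive target is evaluated, and the second stage corrects the surrogate so that the exact target stays invariant. This is a minimal sketch under a symmetric random-walk proposal, not the authors' MHDAR implementation; the Gaussian surrogate and target are illustrative:

```python
import math
import random

def delayed_acceptance_step(x, log_cheap, log_expensive, rng, step=0.5):
    """One delayed-acceptance MH step with a symmetric random-walk proposal:
    the cheap surrogate screens the proposal, and the expensive target is
    only evaluated for proposals that survive the first stage."""
    y = x + rng.gauss(0.0, step)
    # Stage 1: accept/reject against the cheap surrogate.
    if math.log(rng.random()) >= log_cheap(y) - log_cheap(x):
        return x          # early rejection: expensive model never called
    # Stage 2: correct the surrogate so the exact target is preserved.
    log_a2 = (log_expensive(y) - log_expensive(x)) + (log_cheap(x) - log_cheap(y))
    return y if math.log(rng.random()) < log_a2 else x

# Illustrative densities: a wide Gaussian surrogate for a standard Gaussian target.
rng = random.Random(1)
log_cheap = lambda v: -v * v / 8.0    # cheap approximation (variance 4)
log_exact = lambda v: -v * v / 2.0    # "expensive" exact target (variance 1)
x, chain = 0.0, []
for _ in range(5000):
    x = delayed_acceptance_step(x, log_cheap, log_exact, rng)
    chain.append(x)
```

The stage-2 ratio multiplies the exact target ratio by the inverse surrogate ratio, which restores detailed balance with respect to the expensive target.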
5

Müller, Christian, Holger Diedam, Thomas Mrziglod, and Andreas Schuppert. "A neural network assisted Metropolis adjusted Langevin algorithm." Monte Carlo Methods and Applications 26, no. 2 (2020): 93–111. http://dx.doi.org/10.1515/mcma-2020-2060.

Full text
Abstract:
In this paper, we derive a Markov chain Monte Carlo (MCMC) algorithm supported by a neural network. In particular, we use the neural network to substitute derivative calculations made during a Metropolis adjusted Langevin algorithm (MALA) step with inexpensive neural network evaluations. Using a complex, high-dimensional blood coagulation model and a set of measurements, we define a likelihood function on which we evaluate the new MCMC algorithm. The blood coagulation model is a dynamic model, where derivative calculations are expensive and hence limit the efficiency of derivative-based MCMC algorithms. The MALA adaptation greatly reduces the time per iteration, while only slightly affecting the sample quality. We also test the new algorithm on a 2-dimensional example with a non-convex shape, a case where the MALA algorithm has a clear advantage over other state-of-the-art MCMC algorithms. To assess the impact of the new algorithm, we compare the results to previously generated results of the MALA and the random-walk Metropolis-Hastings (RWMH) algorithms.
APA, Harvard, Vancouver, ISO, and other styles
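A single MALA step, as used in this paper, drifts the proposal along the gradient of the log-target and corrects for the asymmetric proposal in the acceptance ratio. The sketch below uses an analytic gradient for a one-dimensional standard normal target; in the paper's setting, grad_log_pi is the hook where the neural-network surrogate would be substituted for an expensive gradient. All numerical choices here are illustrative:

```python
import math
import random

def mala_step(x, log_pi, grad_log_pi, eps, rng):
    """One Metropolis-adjusted Langevin step for a scalar state."""
    mean_fwd = x + 0.5 * eps * eps * grad_log_pi(x)
    y = mean_fwd + eps * rng.gauss(0.0, 1.0)          # Langevin proposal
    mean_bwd = y + 0.5 * eps * eps * grad_log_pi(y)
    # Gaussian proposal log-densities (up to a common constant).
    log_q_fwd = -((y - mean_fwd) ** 2) / (2.0 * eps * eps)
    log_q_bwd = -((x - mean_bwd) ** 2) / (2.0 * eps * eps)
    log_alpha = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
    return y if math.log(rng.random()) < log_alpha else x

# Illustrative run on a one-dimensional standard normal target.
rng = random.Random(2)
log_pi = lambda v: -0.5 * v * v
grad_log_pi = lambda v: -v
x, chain = 3.0, []
for _ in range(4000):
    x = mala_step(x, log_pi, grad_log_pi, 0.5, rng)
    chain.append(x)
```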
6

Roberts, Gareth O., and Jeffrey S. Rosenthal. "Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits." Journal of Applied Probability 53, no. 2 (2016): 410–20. http://dx.doi.org/10.1017/jpr.2016.9.

Full text
Abstract:
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously known MCMC diffusion limit results to prove that, under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^(1/3)) iterations to converge to stationarity.
APA, Harvard, Vancouver, ISO, and other styles
7

Lima, F. W. S. "Mixed algorithms in the Ising model on directed Barabási–Albert networks." International Journal of Modern Physics C 17, no. 6 (2006): 785–93. http://dx.doi.org/10.1142/s0129183106008753.

Full text
Abstract:
On directed Barabási–Albert networks with two and seven neighbours selected by each added site, the Ising model does not seem to show a spontaneous magnetisation. Instead, the decay time for flipping of the magnetisation follows an Arrhenius law for the Metropolis and Glauber algorithms, but for Wolff cluster flipping the magnetisation decays exponentially with time. On these networks, the magnetisation behaviour of the Ising model, with the Glauber, HeatBath, Metropolis, Wolff or Swendsen–Wang algorithm competing against Kawasaki dynamics, is studied by Monte Carlo simulations. We show that the model exhibits the phenomenon of self-organisation (= stationary equilibrium) defined in Ref. 8 when Kawasaki dynamics is not dominant in its competition with the Glauber, HeatBath and Swendsen–Wang algorithms. Only for Wolff cluster flipping does this phenomenon occur after an exponential decay of the magnetisation with time. The Metropolis results are independent of the competition. We also study the same competition process described above, but with Kawasaki dynamics at the same temperature as the other algorithms. The results obtained are similar for the Wolff cluster flipping, Metropolis and Swendsen–Wang algorithms, but different for HeatBath.
APA, Harvard, Vancouver, ISO, and other styles
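For reference, the Metropolis single-spin-flip rule discussed in this abstract looks as follows on a plain two-dimensional periodic lattice. The paper itself works on directed Barabási–Albert networks, so this is only a textbook illustration of the update rule, with lattice size, temperature, and sweep count chosen arbitrarily:

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of a 2D Ising model on an L x L periodic lattice."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb          # energy change if spin (i, j) flips
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1                # Metropolis acceptance rule
    return spins

# Illustrative run at high temperature (small beta), starting from all spins up.
rng = random.Random(3)
L = 16
spins = [[1] * L for _ in range(L)]
for _ in range(50):
    metropolis_sweep(spins, L, 0.2, rng)
magnetisation = sum(sum(row) for row in spins) / (L * L)
```

At this temperature the lattice disorders quickly, so the magnetisation per spin ends up near zero.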
8

Liang, Faming, and Ick-Hoon Jin. "A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants." Neural Computation 25, no. 8 (2013): 2199–234. http://dx.doi.org/10.1162/neco_a_00466.

Full text
Abstract:
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is unavailable or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals.
APA, Harvard, Vancouver, ISO, and other styles
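The core device of the MCMH algorithm, replacing the unknown normalizing-constant ratio in the acceptance probability with a Monte Carlo estimate, can be sketched as follows. The estimator below uses importance sampling from the current parameter value; the function names and the Gaussian test family are hypothetical, chosen only so that the true ratio is known in closed form:

```python
import math
import random

def mc_ratio_estimate(theta, theta_new, unnorm, sampler, m, rng):
    """Monte Carlo estimate of Z(theta_new) / Z(theta): average the importance
    weights f(x; theta_new) / f(x; theta) over draws x ~ pi(. | theta)."""
    total = 0.0
    for _ in range(m):
        x = sampler(theta, rng)
        total += unnorm(x, theta_new) / unnorm(x, theta)
    return total / m

# Illustrative check on a tractable family: f(x; t) = exp(-t x^2 / 2), for
# which Z(t) = sqrt(2 pi / t), so Z(2) / Z(1) = 1 / sqrt(2) ~ 0.707.
rng = random.Random(4)
unnorm = lambda x, t: math.exp(-t * x * x / 2.0)
sampler = lambda t, rng: rng.gauss(0.0, 1.0 / math.sqrt(t))
ratio = mc_ratio_estimate(1.0, 2.0, unnorm, sampler, 20000, rng)
```

Inside an MCMH-style sampler, this estimate would stand in for the intractable Z-ratio in the Metropolis-Hastings acceptance probability.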
9

Roberts, G. O. "A note on acceptance rate criteria for CLTs for Metropolis–Hastings algorithms." Journal of Applied Probability 36, no. 4 (1999): 1210–17. http://dx.doi.org/10.1017/s0021900200017976.

Full text
Abstract:
This paper considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis–Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis-adjusted Langevin algorithm. The examples are rather specialized, although, in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Algorithme Metropolis"

1

Merhi, Bleik Josephine. "Modeling, estimation and simulation into two statistical models : quantile regression and blind deconvolution." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2506.

Full text
Abstract:
This thesis is dedicated to the estimation of two statistical models: the simultaneous regression quantiles model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term follows an asymmetric Laplace distribution and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the non-crossing property of quantiles. For the implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method relative to other alternatives are shown in simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterizing the true noise level and inverse filter, we provide a new estimation procedure that is simpler to implement than other existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimates. Including a comparison with another method, we perform a consistent simulation study that empirically demonstrates the computational performance of our estimation procedures.
APA, Harvard, Vancouver, ISO, and other styles
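The Metropolis-Hastings-within-Gibbs scheme mentioned in this abstract cycles through the parameters, updating each one with a random-walk MH step on its full conditional. A generic sketch follows, with a bivariate normal used as an illustrative target; the quantile-regression conditionals of the thesis are not reproduced here, and all names and tuning values are hypothetical:

```python
import math
import random

def mh_within_gibbs(theta, log_cond, n_iter, step=0.5, seed=0):
    """Metropolis-Hastings within Gibbs: each coordinate is updated in turn by
    a random-walk MH step on its full conditional log-density.
    log_cond(k, value, theta) returns log p(theta_k = value | theta_-k)."""
    rng = random.Random(seed)
    chain = []
    for _ in range(n_iter):
        for k in range(len(theta)):
            prop = theta[k] + rng.gauss(0.0, step)
            if math.log(rng.random()) < log_cond(k, prop, theta) - log_cond(k, theta[k], theta):
                theta[k] = prop
        chain.append(list(theta))
    return chain

# Illustrative target: a standard bivariate normal with correlation 0.5,
# specified through its conditionals theta_k | theta_-k ~ N(rho * other, 1 - rho^2).
RHO = 0.5
def log_cond(k, value, theta):
    other = theta[1 - k]
    return -((value - RHO * other) ** 2) / (2.0 * (1.0 - RHO * RHO))

chain = mh_within_gibbs([2.0, -2.0], log_cond, 3000)
```

When a full conditional can be sampled exactly, a plain Gibbs draw replaces the MH step for that coordinate; the MH step is only needed where the conditional is known up to a constant.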
2

Nguyen, Huu Du. "System Reliability : Inference for Common Cause Failure Model in Contexts of Missing Information." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS530.

Full text
Abstract:
The effective operation of an entire industrial system sometimes depends strongly on the reliability of its components. A failure of one of these components can lead to the failure of the whole system, with consequences that can be catastrophic, especially in the nuclear or aeronautics industries. To reduce this risk of catastrophic failure, a redundancy policy, consisting of duplicating the sensitive components in the system, is often applied. When one of these components fails, another takes over and normal operation of the system can be maintained. However, situations that lead to simultaneous failures of components in the system are commonly observed; these are called common cause failures (CCF). Analyzing, modeling, and predicting this type of failure event is therefore an important issue and is the subject of the work presented in this thesis. We investigate several methods for the statistical analysis of CCF events. Different algorithms are proposed to estimate the parameters of the models and to make predictive inference based on various types of missing data. We treat confounded data using a BFR (Binomial Failure Rate) model. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the model parameters. We introduce the modified-Beta distribution to develop a Bayesian approach. The alpha-factors model is considered to analyze uncertainties in CCF. We suggest a new formalism to describe uncertainty and consider Dirichlet distributions (nested, grouped) for a Bayesian analysis. Recording of CCF cause data leads to incomplete contingency tables. For a Bayesian analysis of this type of table, we propose an algorithm relying on the inverse Bayes formula (IBF) and the Metropolis-Hastings algorithm. We compare our results with those obtained with the alpha-decomposition method, a recent method proposed in the literature. Prediction of catastrophic events is addressed, and mapping strategies are described to suggest upper bounds of prediction intervals with a pivotal method and Bayesian techniques. Recent events have highlighted the importance of the reliability of redundant systems, and we hope that our work will contribute to a better understanding and prediction of the risks of major CCF events.
APA, Harvard, Vancouver, ISO, and other styles
3

Martinez, Marie-José. "Modèles linéaires généralisés à effets aléatoires : contributions au choix de modèle et au modèle de mélange." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00388820.

Full text
Abstract:
This work is devoted to the study of generalized linear models with random effects (GL2M). In these models, under an assumption of normally distributed random effects, the likelihood based on the marginal distribution of the response vector is, in general, not analytically tractable. In the first part of our work, we revisit various non-exact estimation methods through approximations made at different levels depending on the reasoning. The second part is devoted to the construction of model selection criteria for GL2M. We return to two estimation methods requiring the construction of linearized models, and we propose criteria based on the marginal likelihood computed in the linearized model obtained at the convergence of the estimation procedure. The third and final part concerns mixtures of GL2M. The mixture components are defined by GL2M and reflect different possible states of the individuals. In the case of the exponential distribution, we propose a method for estimating the mixture parameters based on a linearization specific to this distribution. We then propose a more general method that applies to a mixture of arbitrary GL2M. This method relies on a Metropolis-Hastings step to build an MCEM-type algorithm. The different methods developed are tested by simulation.
APA, Harvard, Vancouver, ISO, and other styles
4

Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.

Full text
Abstract:
The thesis contains six chapters. The first chapter contains an introduction to linear regression and to the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator. The convergence properties of this algorithm are also given in this chapter, using the entropy estimator and the Pitman-Yor estimator. Chapter 3 is devoted to the comparison of Monte Carlo and quasi-Monte Carlo methods in numerical computations of the Bayesian Lasso. This comparison shows that Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso, expressed in terms of the incomplete Gamma function. This allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limit law of a multivariate stochastic differential equation. This allowed us to compute the Bayesian Lasso using semi-implicit and explicit Euler numerical schemes together with Monte Carlo, multilevel Monte Carlo (MLMC), and Metropolis-Hastings methods. A comparison of computational costs shows that the pair (semi-implicit Euler scheme, MLMC) wins against the other (scheme, method) pairs. Finally, in Chapter 6, we establish the rate of convergence of the Bayesian Lasso to the Lasso when the signal-to-noise ratio is constant and the noise tends to 0. This allowed us to give new criteria for the convergence of the Metropolis-Hastings algorithm.
APA, Harvard, Vancouver, ISO, and other styles
5

Mainguy, Thomas. "Processus de substitution markoviens : un modèle statistique pour la linguistique." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066354/document.

Full text
Abstract:
This thesis proposes a new approach to natural language processing. Rather than trying to estimate directly the probability distribution of a random sentence, we detect syntactic structures in the language, which can be used to modify and create new sentences from an initial sample. The study of syntactic structures is done using Markov substitute sets, sets of strings that can be freely substituted in any sentence without affecting the whole distribution. These sets define the notion of Markov substitute processes, modeling the conditional independence of certain substrings (given by the sets) with respect to their context. This point of view splits the issue of language analysis into two parts: a model selection stage, where Markov substitute sets are selected, and a parameter estimation stage, where the actual frequencies for each set are estimated. We show that these substitute processes form exponential families of distributions when the language structure (the Markov substitute sets) is fixed. On the other hand, when the language structure is unknown, we propose methods to identify Markov substitute sets from a statistical sample and to estimate the parameters of the distribution. Markov substitute sets show some connections with context-free grammars, which can be used to help the analysis. We then proceed to build invariant dynamics for Markov substitute processes. They can, among other things, be used to compute the maximum likelihood estimate effectively. Indeed, Markov substitute models can be seen as the thermodynamic limit of the invariant measure of crossover dynamics.
APA, Harvard, Vancouver, ISO, and other styles
6

Nguyen, Quoc Thong. "Modélisation probabiliste d’impression à l’échelle micrométrique." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10039/document.

Full text
Abstract:
We develop probabilistic models of prints at the microscopic scale. We study the shape randomness of the dots that make up the prints, and the new models could improve many applications, such as the authentication of printed documents. An analysis was conducted on various papers and printers. The study shows a large variety of shapes that depends on the printing technology and the paper. The digital scan of a microscopic print is modeled by a gray-scale distribution and a binary spatial process modeling the printed/blank spatial distribution. We seek the best parametric distributions accounting for the distributions of the blank and printed areas. Parametric distributions are selected from a set of distributions with shapes close to the histograms, using the Kolmogorov-Smirnov divergence. The binary spatial model handles the wide diversity of dot shapes and the range of variation of the spatial density of inked particles. First, we propose a field of independent, non-stationary Bernoulli variables whose parameters form a generalized Gaussian kernel. The second binary spatial model encompasses, in addition to the first model, the spatial dependence of the inked area through an inhomogeneous Markov model. Two iterative estimation methods are developed: a quasi-Newton algorithm that approaches the maximum likelihood estimator, and a Metropolis-within-Gibbs algorithm that approximates the minimum mean square error estimator. The performance of the algorithms is evaluated and compared on simulated images. The accuracy of the models is analyzed on microscopic-scale printings from various printers. The results show the good behavior of the estimators and the consistency of the models.
APA, Harvard, Vancouver, ISO, and other styles
7

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of target distribution. The resulting chains can sometimes show ‘sticking’ behaviour where long series of proposed updates are rejected. Further the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier to tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data. 
Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in the Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to observed data while maintaining computational tractability. As a final topic, we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
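The pseudo-marginal mechanism summarised in this abstract can be sketched in a few lines: a noisy but unbiased estimate of the target density is carried along in the chain state and reused in the acceptance ratio, never refreshed at the current point. The Gaussian target and lognormal-noise estimator below are illustrative toy choices, not from the thesis:

```python
import math
import random


def density_estimate(x, rng):
    # Toy unbiased estimator of an unnormalised Gaussian density:
    # multiply the exact density by positive noise with mean 1
    # (lognormal with mu = -sigma^2/2, so E[noise] = 1).
    noise = rng.lognormvariate(-0.125, 0.5)
    return math.exp(-0.5 * x * x) * noise


def pseudo_marginal_mh(n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    p_hat = density_estimate(x, rng)  # noisy density value kept in the state
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)        # random-walk proposal
        q_hat = density_estimate(y, rng)    # fresh estimate at the proposal
        if rng.random() < q_hat / p_hat:    # accept with the estimated ratio
            x, p_hat = y, q_hat             # crucially, p_hat is NOT re-estimated
        samples.append(x)
    return samples
```

An unluckily large `p_hat` makes subsequent proposals hard to accept, which is exactly the 'sticking' behaviour the abstract describes.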
APA, Harvard, Vancouver, ISO, and other styles
8

Guidi, Henrique Santos. "Estudo do resfriamento em um sistema com múltiplos estados fundamentais." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-23042008-215856/.

Full text
Abstract:
We study a coupled two-level system as a model that imitates the behaviour of supercooled liquids that become structural glasses under cooling. In equilibrium the model shows a liquid phase and a crystalline phase with many ground states. The model is defined on a square lattice, and a stochastic Ising variable is associated with each site. The feature that makes this model particularly interesting is that it displays durable metastable states which can vanish within the time available for numerical simulations. In order to imitate the glass-forming process, we perform Monte Carlo simulations at a constant cooling rate. We also present simulations of quenches to temperatures below the melting temperature.
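The constant-cooling-rate protocol mentioned in this abstract can be illustrated with a minimal Metropolis simulation; the 1D Ising chain below is a stand-in of my own choosing, not the thesis's two-level square-lattice model:

```python
import math
import random


def cool_ising_chain(n=64, t_start=3.0, rate=1e-3, seed=3):
    """Metropolis simulation of a 1D Ising chain (J = 1, periodic boundaries)
    cooled at a constant rate: T drops by `rate` after each lattice sweep."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    t = t_start
    while t > rate:
        for _ in range(n):  # one Metropolis sweep
            i = rng.randrange(n)
            # Energy change of flipping spin i against its two neighbours.
            de = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if de <= 0 or rng.random() < math.exp(-de / t):
                spins[i] = -spins[i]
        t -= rate  # constant cooling rate
    return spins
```

Slower cooling (smaller `rate`) leaves fewer frozen-in domain walls; in a glass-forming model, varying this rate is what distinguishes crystallisation from vitrification.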
APA, Harvard, Vancouver, ISO, and other styles
9

Taylor, Katarzyna B. "Exact algorithms for simulation of diffusions with discontinuous drift and robust curvature metropolis-adjusted Langevin algorithms." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/72262/.

Full text
Abstract:
In our work we propose new Exact Algorithms for the simulation of diffusions with discontinuous drift, together with new methodology for simulating Brownian motion jointly with its local time. In the second part of the thesis we introduce a Metropolis-adjusted Langevin algorithm which uses local geometry, and we prove geometric ergodicity in the case of benchmark distributions with light tails.
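For context on the Langevin algorithms this thesis builds on, a plain (non-geometric) Metropolis-adjusted Langevin step can be sketched as follows; the standard-normal target is an illustrative choice, not the thesis's benchmark:

```python
import math
import random


def mala_step(x, eps, rng):
    """One Metropolis-adjusted Langevin step for a standard-normal target:
    log pi(x) = -x^2/2, so grad log pi(x) = -x."""
    grad = lambda z: -z
    logpi = lambda z: -0.5 * z * z
    # Langevin proposal: drift along the gradient plus Gaussian noise.
    mean_fwd = x + 0.5 * eps * eps * grad(x)
    y = mean_fwd + eps * rng.gauss(0.0, 1.0)
    mean_bwd = y + 0.5 * eps * eps * grad(y)
    # Log Gaussian proposal densities q(y|x) and q(x|y), constants cancel.
    log_q_fwd = -((y - mean_fwd) ** 2) / (2 * eps * eps)
    log_q_bwd = -((x - mean_bwd) ** 2) / (2 * eps * eps)
    log_alpha = logpi(y) - logpi(x) + log_q_bwd - log_q_fwd
    return y if math.log(rng.random()) < log_alpha else x
```

A "curvature" MALA of the kind studied here would additionally rescale the drift and noise by local second-order information of log pi, which this sketch omits.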
APA, Harvard, Vancouver, ISO, and other styles
10

Tremblay, Marie. "Estimation des paramètres des modèles de culture : application au modèle STICS Tournesol." Toulouse 3, 2004. http://www.theses.fr/2004TOU30020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Algorithme Metropolis"

1

Allen, Michael P., and Dominic J. Tildesley. Monte Carlo methods. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0004.

Full text
Abstract:
The estimation of integrals by Monte Carlo sampling is introduced through a simple example. The chapter then explains importance sampling, and the use of the Metropolis and Barker forms of the transition matrix defined in terms of the underlying matrix of the Markov chain. The creation of an appropriately weighted set of states in the canonical ensemble is described in detail and the method is extended to the isothermal–isobaric, grand canonical and semi-grand ensembles. The Monte Carlo simulation of molecular fluids and fluids containing flexible molecules using a reptation algorithm is discussed. The parallel tempering or replica exchange method for more efficient exploration of the phase space is introduced, and recent advances including solute tempering and convective replica exchange algorithms are described.
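The canonical-ensemble Metropolis recipe described in this chapter amounts to accepting a symmetric trial move with probability min(1, exp(-beta * dE)). A minimal single-particle sketch with a harmonic toy potential of my own choosing (not the book's example):

```python
import math
import random


def metropolis_canonical(n_steps, beta=1.0, delta=1.0, seed=1):
    """Sample positions of one particle with energy E(x) = x^2 (toy potential)
    from the canonical distribution exp(-beta * E) via Metropolis moves."""
    rng = random.Random(seed)
    energy = lambda x: x * x
    x, e = 0.0, 0.0
    trace = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-delta, delta)  # symmetric trial displacement
        e_new = energy(x_new)
        # Metropolis criterion: always accept downhill moves,
        # accept uphill moves with probability exp(-beta * dE).
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        trace.append(x)
    return trace
```

For this target the stationary distribution is Gaussian with variance 1/(2*beta), which gives a quick sanity check on the sampler.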
APA, Harvard, Vancouver, ISO, and other styles
2

Bedard, Mylene. On the robustness of optimal scaling for random walk Metropolis algorithms. 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bi, Xiaojun, Brian Smith, Tom Ouyang, and Shumin Zhai. Soft Keyboard Performance Optimization. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198799603.003.0006.

Full text
Abstract:
Optimization techniques have played a vital role in improving the performance (i.e., input speed and accuracy) of soft keyboards. This chapter introduces the challenges, methodologies, and results of keyboard performance optimization. Leveraging the robust human motor control phenomena manifested in text entry, we used the Metropolis random-walk algorithm and the Pareto multi-objective optimization method to optimize the keyboard layout and a soft keyboard decoder. The optimization led to layouts that shorten finger travel distance and improve input speed as well as accuracy over the Qwerty layout, and to a soft keyboard decoder with improved correction and completion ability.
APA, Harvard, Vancouver, ISO, and other styles
4

The Monte Carlo Method in the Physical Sciences: Celebrating the 50th Anniversary of the Metropolis Algorithm (AIP Conference Proceedings). American Institute of Physics, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

The Monte Carlo method in the physical sciences: Celebrating the 50th anniversary of the Metropolis Algorithm : Los Alamos, New Mexico, 9-11 June 2003. American Institute of Physics, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Algorithme Metropolis"

1

Tanner, Martin A. "Metropolis Algorithms." In Encyclopedia of Applied and Computational Mathematics. Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-540-70529-1_328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Winkler, Gerhard. "Metropolis Algorithms." In Image Analysis, Random Fields and Markov Chain Monte Carlo Methods. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55760-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Winkler, Gerhard. "Metropolis Algorithms." In Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-97522-6_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Voss-Böhme, Anja. "Metropolis Algorithm." In Encyclopedia of Systems Biology. Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Robert, Christian P., and George Casella. "Metropolis–Hastings Algorithms." In Introducing Monte Carlo Methods with R. Springer New York, 2009. http://dx.doi.org/10.1007/978-1-4419-1576-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Robert, Christian P., and George Casella. "Algorithmes de Metropolis-Hastings." In Méthodes de Monte-Carlo avec R. Springer Paris, 2011. http://dx.doi.org/10.1007/978-2-8178-0181-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Jun S. "Metropolis Algorithm and Beyond." In Springer Series in Statistics. Springer New York, 2004. http://dx.doi.org/10.1007/978-0-387-76371-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Robert, Christian P., and George Casella. "The Metropolis—Hastings Algorithm." In Springer Texts in Statistics. Springer New York, 2004. http://dx.doi.org/10.1007/978-1-4757-4145-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Robert, Christian P., and George Casella. "The Metropolis—Hastings Algorithm." In Springer Texts in Statistics. Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4757-3071-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hoff, Peter D. "Nonconjugate priors and Metropolis-Hastings algorithms." In Springer Texts in Statistics. Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-92407-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Algorithme Metropolis"

1

Wang, Shouda, Weijie Zheng, and Benjamin Doerr. "Choosing the Right Algorithm With Hints From Complexity Theory." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/234.

Full text
Abstract:
Choosing a suitable algorithm from the myriads of different search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications in which direction to search for good established optimization heuristics. We demonstrate this approach on the recently proposed DLB benchmark, for which the only known results are O(n^3) runtimes for several classic evolutionary algorithms and an O(n^2 log n) runtime for an estimation-of-distribution algorithm. Our finding that the unary unbiased black-box complexity is only O(n^2) suggests the Metropolis algorithm as an interesting candidate and we prove that it solves the DLB problem in quadratic time. Since we also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms, we shift our attention to algorithms that use the information of more parents to generate new solutions. An artificial algorithm of this type having an O(n log n) runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem also in time O(n log n). Our experiments show a remarkably good performance of the Metropolis algorithm, clearly the best of all algorithms regarded for reasonable problem sizes.
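The Metropolis algorithm analysed in this paper is the classic single-bit-flip variant for pseudo-Boolean optimization: flip a uniformly random bit, keep any non-worsening flip, and revert a worsening one with probability depending exponentially on the fitness loss. A sketch on the simpler OneMax function (illustrative only, not the DLB benchmark itself):

```python
import math
import random


def metropolis_onemax(n, temperature=0.2, max_iters=100_000, seed=7):
    """Maximise OneMax (the number of one-bits) with the single-bit-flip
    Metropolis algorithm; worsening flips are kept with prob e^(delta/T)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for it in range(max_iters):
        if fitness == n:
            return it  # optimum reached
        i = rng.randrange(n)
        delta = 1 - 2 * x[i]  # fitness change from flipping bit i
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            x[i] ^= 1
            fitness += delta
    return max_iters
```

The temperature trades exploration against exploitation: at T = 0 this degenerates to randomised local search, while the occasional accepted worsening move is what lets Metropolis escape the deceptive local optima of benchmarks like DLB.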
APA, Harvard, Vancouver, ISO, and other styles
2

Okamoto, Yuko. "Metropolis Algorithms in Generalized Ensemble." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dewing, Mark. "Metropolis with noise: The penalty method." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hübler, Christian, Hans-Peter Kriegel, Karsten Borgwardt, and Zoubin Ghahramani. "Metropolis Algorithms for Representative Subgraph Sampling." In 2008 Eighth IEEE International Conference on Data Mining (ICDM). IEEE, 2008. http://dx.doi.org/10.1109/icdm.2008.124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ceperley, D. M. "Metropolis Methods for Quantum Monte Carlo Simulations." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Landau, David P. "The Metropolis Monte Carlo Method in Statistical Physics." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Novotny, M. A. "Algorithms for Faster and Larger Dynamic Metropolis Simulations." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Berg, Bernd A. "Biased Metropolis Sampling for Rugged Free Energy Landscapes." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Luengo, David, and Luca Martino. "Fully adaptive Gaussian mixture Metropolis-Hastings algorithm." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6638846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bazavov, Alexei, and Bernd A. Berg. "Biased Metropolis-Heatbath Algorithms for Lattice Gauge Theory." In XXIIIrd International Symposium on Lattice Field Theory. Sissa Medialab, 2005. http://dx.doi.org/10.22323/1.020.0110.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Algorithme Metropolis"

1

Graves, Todd L. Automatic step size selection in random walk Metropolis algorithms. Office of Scientific and Technical Information (OSTI), 2011. http://dx.doi.org/10.2172/1057119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gelfand, Saul B., and Sanjoy K. Mitter. Metropolis-type Annealing Algorithms for Global Optimization in IRd. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada459610.

Full text
APA, Harvard, Vancouver, ISO, and other styles