Dissertations / Theses on the topic 'Monte Carlo Metropolis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo Metropolis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Edström, Filip. "Parametrization of Reactive Force Field using Metropolis Monte Carlo." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-161972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zeppilli, Giulia. "Alcune applicazioni del Metodo Monte Carlo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3091/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rönnby, Karl. "Monte Carlo Simulations for Chemical Systems." Thesis, Linköpings universitet, Matematiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132811.

Full text
Abstract:
This thesis investigates different types of Monte Carlo estimators for use in computations of chemical systems, mainly to be used in calculating surface growth and evolution of SiC. Monte Carlo methods are a class of algorithms that use random sampling to solve problems numerically and are used in many settings. Three different types of Monte Carlo methods are studied: a simple Monte Carlo estimator and two types of Markov chain Monte Carlo, the Metropolis algorithm and kinetic Monte Carlo. The mathematical background is given for all methods, and they are tested both on smaller systems with known results, to check their mathematical and chemical soundness, and on a larger surface system as an example of how they could be used.
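As a concrete illustration of the first of these, a plain Monte Carlo estimator simply averages a function over independent random draws; a minimal sketch (the integrand and sample count are illustrative choices, not taken from the thesis):

```python
import math
import random

def monte_carlo_estimate(f, sampler, n=100_000):
    """Estimate E[f(X)] as the sample mean of f over n independent draws."""
    return sum(f(sampler()) for _ in range(n)) / n

# Toy example: integrate sin(x) over [0, pi] (exact value: 2).
# Draw X ~ Uniform(0, pi); the integral equals pi * E[sin(X)].
print(math.pi * monte_carlo_estimate(math.sin, lambda: random.uniform(0.0, math.pi)))
```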
APA, Harvard, Vancouver, ISO, and other styles
4

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process, with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are ‘close’ rather than exactly equal to the observed data. Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations; this often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in the Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to the observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and to allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable; this both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
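For readers unfamiliar with the pseudo-marginal framework discussed above, the key mechanics fit in a few lines: the exact target density in the Metropolis–Hastings ratio is replaced by an unbiased estimate, and carrying the current estimate along with the state keeps the exact target invariant marginally. A minimal sketch under illustrative assumptions (the random-walk proposal and the toy noisy density are ours, not the thesis'):

```python
import math
import random

def pseudo_marginal_mh(density_estimate, x0, n_iters=10_000, step=0.5):
    """Metropolis-Hastings with an unbiased but noisy estimate of the
    target density; the estimate is recycled along with the state, so
    the chain still targets the exact distribution marginally."""
    x, p_hat = x0, density_estimate(x0)
    samples = []
    for _ in range(n_iters):
        x_new = x + random.gauss(0.0, step)       # symmetric random-walk proposal
        p_new = density_estimate(x_new)           # fresh unbiased estimate at the proposal
        if random.random() < min(1.0, p_new / p_hat):
            x, p_hat = x_new, p_new               # accept: the estimate travels with the state
        samples.append(x)
    return samples

# Toy unbiased estimator of a standard normal density (multiplicative noise, mean 1).
noisy_gauss = lambda x: math.exp(-0.5 * x * x) * random.uniform(0.5, 1.5)
draws = pseudo_marginal_mh(noisy_gauss, x0=0.0)
```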
APA, Harvard, Vancouver, ISO, and other styles
5

Paixão, Everton Luiz Martins da. "Estudos de nanoestruturas magnéticas - nanodiscos com impurezas e nanofitas - Via Monte Carlo Metropolis." Universidade Federal de Juiz de Fora (UFJF), 2013. https://repositorio.ufjf.br/jspui/handle/ufjf/4894.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this work we used the Metropolis Monte Carlo method applied to magnetic systems to study nanostructures such as Permalloy nanodisks and nanowires. The work is divided into two parts. First, we studied the behaviour of the vortex core surrounded by rings of impurities in Permalloy nanodisks: we varied the radius and thickness of the rings and measured the threshold applied field at which the vortex core passes through these rings. Second, we studied the ground states (lowest-energy states) of nanowires in a small region of the phase space and identified the spin configurations associated with these states.
APA, Harvard, Vancouver, ISO, and other styles
6

Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.

Full text
Abstract:
The thesis contains six chapters. The first chapter introduces linear regression and the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator; the convergence statistics of this algorithm are also given in this chapter, using entropy and the Pitman-Yor estimator. Chapter 3 is devoted to comparing quasi-Monte Carlo and Monte Carlo methods in numerical calculations of the Bayesian Lasso; this comparison shows that Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso, expressed in terms of the incomplete Gamma function, which allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limiting law of a multivariate stochastic differential equation; this allowed us to compute the Bayesian Lasso using semi-implicit and explicit Euler schemes together with Monte Carlo, multilevel Monte Carlo (MLMC) and the Metropolis-Hastings algorithm. A comparison of computational costs shows that the pair (semi-implicit Euler scheme, MLMC) beats the other (scheme, method) pairs. Finally, in Chapter 6 we establish the rate of convergence of the Bayesian Lasso to the Lasso when the signal-to-noise ratio is constant and the noise tends to 0, which allowed us to give new criteria for the convergence of the Metropolis-Hastings algorithm.
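The FISTA algorithm recalled in Chapter 2 minimizes the Lasso objective $\frac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1$ by alternating a gradient step, soft-thresholding and a Nesterov extrapolation; a minimal dense-matrix sketch (the fixed step size $1/L$ is a standard choice, not a detail taken from the thesis):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iters=500):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ y - b)
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # Nesterov extrapolation
        x, t = x_next, t_next
    return x
```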
APA, Harvard, Vancouver, ISO, and other styles
7

Cunha, João Victor de Souza. "Aplicação de Monte Carlo para a geração de ensembles e análise termodinâmica da interação biomolecular." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-25112016-143220/.

Full text
Abstract:
Molecular interactions, especially non-covalent ones, are key processes in many aspects of cellular and molecular biology, from communication between cells to the speed and specificity of enzymatic reactions. There is therefore a need to study and develop predictive methods for calculating the affinity between molecules in interaction processes, which have a wide range of applications including the discovery of new drugs. Among these affinity measures, the most important is the binding free energy, which is usually determined either by computationally fast methods that lack a strong theoretical basis, or by very complex calculations using molecular dynamics, which determine the affinity accurately but at a high computational cost. The aim of this work is to evaluate a computationally cheaper model that also deepens the analysis of results obtained from molecular docking simulations. To this end, the Monte Carlo method is employed to sample orientations and conformations of the ligand in the macromolecular active site. The evaluation of this methodology showed that it is possible to compute entropic and enthalpic quantities and to analyse the binding capacity of protein-ligand complexes satisfactorily for the bacteriophage T4 lysozyme complex.
APA, Harvard, Vancouver, ISO, and other styles
8

Bäckström, Nils, Jonathan Löfgren, and Vilhelm Rydén. "Study of Magnetic Nanostructures using Micromagnetic Simulations and Monte Carlo Methods." Thesis, Uppsala universitet, Materialfysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-227804.

Full text
Abstract:
We perform micromagnetic simulations in MuMax3 on various magnetic nanostructures to study their magnetic state and response to external fields. The interaction and ordering of nanomagnetic arrays is investigated by calculating the magnetostatic energies for various configurations. These energies are then used in Monte Carlo simulations to study the thermal behaviour of systems of nanomagnetic arrays. We find that the magnetic states of the nanostructures are related to their shape and size and furthermore affect the emergent properties of the system, giving rise to temperature-dependent ordering among the individual structures. Results from both micromagnetic and statistical-mechanics simulations agree well with available experimental data, although the Monte Carlo algorithm encounters problems at low simulation temperatures.
APA, Harvard, Vancouver, ISO, and other styles
9

Lima Júnior, Francisco Biagione de. "Simulações de Monte Carlo para os modelos Ising e Blume-Capel em redes complexas." Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18606.

Full text
Abstract:
We studied the ferromagnetic spin-1/2 Ising model and the spin-1 Blume-Capel model with crystal-field parameter Δ > 0 on a small-world network, using computer simulation via the Metropolis algorithm. We calculated macroscopic quantities of the system, such as the internal energy, magnetization, specific heat, magnetic susceptibility and Binder cumulant. For the Ising model we recovered the result obtained by H. Hong, Beom Jun Kim and M. Y. Choi [6], and we found similar critical behaviour for the Blume-Capel model.
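A single-spin-flip Metropolis sweep of the kind used in such simulations needs only the neighbour lists, so the same code runs on a regular lattice or a small-world network; a minimal sketch with $k_B = 1$ (the graph construction is left abstract; this is not the dissertation's code):

```python
import math
import random

def metropolis_sweep(spins, neighbors, J=1.0, T=1.0):
    """One Metropolis sweep of a ferromagnetic Ising model (k_B = 1).
    spins: list of +1/-1 values; neighbors: adjacency lists, e.g. from a
    Watts-Strogatz small-world graph (an assumption for illustration)."""
    n = len(spins)
    for _ in range(n):
        i = random.randrange(n)
        # Flipping s_i costs dE = 2 * J * s_i * (sum of neighbouring spins)
        dE = 2.0 * J * spins[i] * sum(spins[j] for j in neighbors[i])
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return spins
```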
APA, Harvard, Vancouver, ISO, and other styles
10

Caballero, Nolte Rafael Eduardo. "Monte Carlo - Metropolis Investigations of Shape and Matrix Effects in 2D and 3D Spin-Crossover Nanoparticles." Master's thesis, Pontificia Universidad Católica del Perú, 2017. http://tesis.pucp.edu.pe/repositorio/handle/123456789/8646.

Full text
Abstract:
An Ising-like model is studied, taking into account short- and long-range interactions as well as the possible effects of the system's surface and shape on the magnetic properties of the material. This is done to investigate the behaviour of systems composed of nanoparticles ordered in a matrix. In addition, we analyse the role played by the ratio of the number of particles at the surface to those in the bulk of the matrix in the hysteresis behaviour of the system.
APA, Harvard, Vancouver, ISO, and other styles
11

Potter, Christopher C. J. "Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/249.

Full text
Abstract:
Markov Chain Monte Carlo (MCMC) is a technique for sampling from a target probability distribution, and has risen in importance as faster computing hardware has made possible the exploration of hitherto difficult distributions. Unfortunately, this powerful technique is often misapplied by poor selection of the transition kernel for the Markov chain that is generated by the simulation. Some kernels are used without being checked against the convergence requirements for MCMC (total balance and ergodicity), but in this work we prove the existence of a simple proxy for total balance that is not as demanding as detailed balance, the most widely used standard. We show, for discrete-state MCMC, that if a transition kernel is equivalent when it is “reversed” and applied to data which is also “reversed”, then it satisfies total balance. We go on to prove that the sequential single-variable update Metropolis kernel, where variables are simply updated in order, does indeed satisfy total balance for many discrete target distributions, such as the Ising model with uniform exchange constant. Also, two well-known papers by Gelman, Roberts, and Gilks (GRG) [1, 2] have proposed the application of the results of an interesting mathematical proof to the realistic optimization of Markov Chain Monte Carlo computer simulations; in particular, they advocated tuning the simulation parameters to select an acceptance ratio of 0.234. We point out that although the proof is valid, applying its result to practical computations is not advisable, as the simulation algorithm considered in the proof is so inefficient that it produces very poor results under all circumstances. The algorithm used by Gelman, Roberts, and Gilks is also shown to introduce subtle time-dependent correlations into the simulation of intrinsically independent variables. These correlations are of particular interest since they will be present in all simulations that use multi-dimensional MCMC moves.
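The sequential single-variable update Metropolis kernel discussed above visits the variables in a fixed order and applies an ordinary accept/reject step to each in turn; a minimal sketch for a generic target with symmetric per-variable proposals (illustrative only — the fixed scan is not reversible as a whole, which is why total balance rather than detailed balance is the relevant criterion):

```python
import math
import random

def sequential_scan_metropolis(state, log_target, propose, n_sweeps=1000):
    """Deterministic-scan Metropolis: each variable is updated in a fixed
    order with a symmetric proposal. Each per-variable step satisfies
    detailed balance, but their fixed-order composition is only totally
    balanced, not reversible."""
    lp = log_target(state)
    for _ in range(n_sweeps):
        for i in range(len(state)):        # fixed update order, never randomized
            old = state[i]
            state[i] = propose(old)        # symmetric single-variable proposal
            lp_new = log_target(state)
            if math.log(random.random()) < lp_new - lp:
                lp = lp_new                # accept
            else:
                state[i] = old             # reject: restore variable i
    return state
```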
APA, Harvard, Vancouver, ISO, and other styles
12

Waseda, Osamu. "Atomic scale investigation of ageing in metals." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI150/document.

Full text
Abstract:
The objective of the thesis was to understand the microscopic features at the origin of ageing in metals. According to the theory of Cottrell and Bilby, dislocations interact through their stress field with solute atoms, which aggregate at the core of and around the dislocations (Cottrell atmospheres); these atmospheres pin the dislocations and embrittle the material. The originality of this contribution was the combination of three complementary computational techniques: (1) Metropolis Monte Carlo (MMC), (2) Atomic Kinetic Monte Carlo (AKMC) and (3) Molecular Dynamics (MD). It consisted of four main sections. Firstly, the ordering of carbon occurring in bulk alpha-iron (Zener ordering) was studied via MMC and MD; various carbon contents and temperatures were investigated in order to obtain a “phase diagram”. Secondly, the generation via MMC of systems containing a dislocation interacting with several hundred carbon atoms, namely a Cottrell atmosphere, was described; the equilibrium structure of the atmosphere and the stress field around it show that the stress field around the dislocation is affected but not cancelled out by the atmosphere. Thirdly, the kinetics of carbon migration and Cottrell atmosphere evolution were investigated via AKMC; the activation energies for carbon-atom migration were calculated from the local stress field and the arrangement of the neighbouring carbon atoms. Lastly, an application of the combined use of MMC and MD to describe grain-boundary segregation of solute atoms in fcc nickel was presented; grain growth was inhibited by the solute atoms in the grain boundaries at high temperature.
APA, Harvard, Vancouver, ISO, and other styles
13

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Full text
Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing support for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to give forecasts of future events. These forecasts can then be used to evaluate, for instance, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these and many others make statistical models important for many parts of society. One way of building statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model or access to only a little historical data with which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand; one can then average over the variants that gave the best results to obtain a final model. It can therefore sometimes take days or weeks to build a model. The problem becomes particularly severe when one uses more advanced models that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to facilitate or improve these simulations. For example, we propose taking more insights about the system into account and thereby reducing the number of model variants that need to be examined: certain models can be ruled out from the start because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, exploring the space of all possible models more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and we show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully this will in the future make it possible to use more advanced models in practice, which in turn will result in better forecasts and decisions.
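The particle Metropolis-Hastings algorithm that much of the thesis accelerates wraps a particle filter's unbiased likelihood estimate inside a Metropolis-Hastings loop over the parameters; a compact sketch for a generic state-space model (the model callbacks, flat prior, scalar parameter and tuning constants are placeholder assumptions, not the thesis implementation):

```python
import numpy as np

def pf_loglik(theta, ys, init, transition, log_obs, n_particles=200):
    """Bootstrap particle filter estimate of log p(y_1:T | theta).
    init, transition and log_obs are assumed model callbacks that
    return NumPy arrays of particles / log observation densities."""
    x = init(theta, n_particles)
    ll = 0.0
    for y in ys:
        x = transition(theta, x)            # propagate particles through the dynamics
        logw = log_obs(theta, y, x)         # weight by the observation density
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())          # log of the per-step average weight
        x = x[np.random.choice(len(x), len(x), p=w / w.sum())]   # multinomial resampling
    return ll

def particle_mh(theta0, ys, model, n_iters=2000, step=0.1):
    """Particle Metropolis-Hastings: pseudo-marginal MH over the parameter
    (flat prior and Gaussian random-walk proposal assumed for brevity)."""
    theta, ll = theta0, pf_loglik(theta0, ys, *model)
    chain = []
    for _ in range(n_iters):
        prop = theta + step * np.random.randn()
        ll_prop = pf_loglik(prop, ys, *model)
        if np.log(np.random.rand()) < ll_prop - ll:
            theta, ll = prop, ll_prop       # accept: estimate travels with the parameter
        chain.append(theta)
    return chain
```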
APA, Harvard, Vancouver, ISO, and other styles
14

Almeida, Alexandre Barbosa de. "Predição de estrutura terciária de proteínas com técnicas multiobjetivo no algoritmo de monte carlo." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/5872.

Full text
Abstract:
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
Proteins are vital for the biological functions of all living beings on Earth. However, a protein has an active biological function only in its native structure, which is a state of minimum energy; protein functionality therefore depends almost exclusively on the size and shape of its native conformation. Yet less than 1% of all known proteins have their structure solved. Various methods for determining protein structures have thus been proposed, in both in vitro and in silico experiments. This work proposes a new in silico method, called Monte Carlo with Dominance, which addresses the protein structure prediction problem from an ab initio, multi-objective optimization point of view, considering both the energetic and the structural aspects of the protein. The GROMACS software is used for the ab initio treatment, performing Molecular Dynamics simulations, while the ProtPred-GROMACS (2PG) framework, which employs genetic algorithms as heuristics, is used for the multi-objective optimization problem. Monte Carlo with Dominance is, in this sense, a variant of the traditional Metropolis Monte Carlo method. The aim is to check whether protein tertiary structure prediction improves when structural aspects are also taken into account. The energy criterion of Metropolis and the energy and structural criteria of Dominance were compared using RMSD calculations between the predicted and native structures. Monte Carlo with Dominance obtained better solutions for two of the three proteins analysed, with a difference of about 53% relative to the prediction by Metropolis.
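The dominance criterion at the heart of such a method can be made precise with the standard Pareto test over minimized objectives; a minimal sketch (how the dissertation combines this test with the Metropolis acceptance step is specific to that work and is not reproduced here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives are
    minimized): a is nowhere worse than b and somewhere strictly better."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (energy, structural deviation) pairs; lower is better in both.
print(dominates((10.2, 1.5), (11.0, 1.5)))   # True
print(dominates((10.2, 2.5), (11.0, 1.5)))   # False: a trade-off, neither dominates
```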
APA, Harvard, Vancouver, ISO, and other styles
15

Rizzi, Leandro Gutierrez. "Simulações numéricas de Monte Carlo aplicadas no estudo das transições de fase do modelo de Ising dipolar bidimensional." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/59/59135/tde-23052009-134513/.

Full text
Abstract:
The two-dimensional dipolar Ising model includes, besides the nearest-neighbor ferromagnetic interaction, long-range interactions between the magnetic dipole moments of the spins, and it exhibits a rich phase diagram whose characteristics have motivated numerous studies in the recent literature. Furthermore, the possibility of explaining phenomena observed in ultrathin magnetic films, which have many technological applications, also motivates the study of this model. The dipolar interaction changes the ferromagnetic ground state expected for the pure Ising model into a series of striped phases, consisting of ferromagnetic domains of width $h$ with opposite magnetizations; the width of the stripes depends on the ratio $\delta$ of the ferromagnetic and dipolar couplings. Monte Carlo simulations and multiple-histogram reweighting techniques allow us to identify the finite-size critical temperatures of the phase transitions when $\delta = 2$, which corresponds to $h = 2$. We calculate, for different lattice sizes, the specific heat and the susceptibility of the order parameter around the transition temperatures by means of reweighting techniques; this allows us to identify distinct maxima in these observables as functions of temperature and thereby to estimate the finite-size critical temperatures with high precision. We present numerical evidence of the existence of an Ising nematic phase for large lattice sizes: our results show that simulations need to be performed for lattice sizes at least as large as $L = 48$ to clearly observe it. To assess how the long-range dipolar interaction may affect physical estimates we also evaluate the integrated autocorrelation time of the energy time series, which allows us to infer how severe the critical slowing down is for this system with long-range interactions near the thermodynamic phase transitions. The results obtained using a local update algorithm are compared with results obtained using the multicanonical algorithm.
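The reweighting techniques mentioned here extrapolate measurements taken at one inverse temperature $\beta_0$ to nearby $\beta$; in the standard single-histogram form (the textbook Ferrenberg-Swendsen identity, not a formula specific to the thesis),

$$\langle O \rangle_{\beta} \;=\; \frac{\sum_{i} O_i \, e^{-(\beta - \beta_0) E_i}}{\sum_{i} e^{-(\beta - \beta_0) E_i}},$$

where $(O_i, E_i)$ are the observable and energy measured at the $i$-th configuration sampled at $\beta_0$; the multiple-histogram variant combines several such runs, which is what makes the precise location of the distinct maxima of the specific heat and susceptibility possible.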
APA, Harvard, Vancouver, ISO, and other styles
16

Michel, Manon. "Irreversible Markov chains by the factorized Metropolis filter : algorithms and applications in particle systems and spin models." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE039/document.

Full text
Abstract:
This thesis deals with the development and application in statistical physics of a general framework for irreversible and rejection-free Markov-chain Monte Carlo methods, through the implementation of the factorized Metropolis filter and the lifting concept. The first two chapters present the Markov-chain Monte Carlo method and its different implementations in statistical physics. One of the main limitations of Markov-chain Monte Carlo methods arises around phase transitions, where phenomena of dynamical slowing down greatly impede the thermalization of the system. The third chapter introduces the new class of irreversible factorized Metropolis algorithms. Building on the concept of lifting of Markov chains, the factorized Metropolis filter makes it possible to decompose a multidimensional potential into several unidimensional ones; from there, a rejection-free and completely irreversible Markov-chain Monte Carlo algorithm can be defined. The fourth chapter reviews the performance of the irreversible factorized algorithm in a wide variety of systems: clear accelerations of the thermalization time are observed in bidimensional soft-particle systems, bidimensional ferromagnetic XY spin systems and three-dimensional XY spin glasses, and an important reduction of the critical slowing down is exhibited in three-dimensional ferromagnetic Heisenberg spin systems.
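The factorized Metropolis filter accepts a move only if every factor of the decomposed potential accepts it independently, i.e. with probability $P^{\text{fact}} = \prod_k \min(1, e^{-\beta \Delta E_k})$ instead of the usual Metropolis $\min(1, e^{-\beta \sum_k \Delta E_k})$; a minimal sketch of the acceptance step (the per-factor energy changes are placeholders):

```python
import math
import random

def factorized_metropolis_accept(delta_energies, beta):
    """Accept a move iff every factor accepts it independently.
    Overall acceptance probability: prod_k min(1, exp(-beta * dE_k))."""
    return all(dE <= 0.0 or random.random() < math.exp(-beta * dE)
               for dE in delta_energies)
```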
APA, Harvard, Vancouver, ISO, and other styles
17

Feldt, Jonas. "Hybrid Simulation Methods for Systems in Condensed Phase." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2018. http://hdl.handle.net/11858/00-1735-0000-002E-E3F2-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Full text
Abstract:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and, with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model towards test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind and further implementations can be made to increase the complexity and further specialize the model for individual needs.
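Generating a Markov chain model from user data amounts to counting observed transitions between events and normalizing each row of the count matrix; a minimal sketch (the event names are invented for illustration and are not Axis data):

```python
from collections import defaultdict

def estimate_transition_matrix(sessions):
    """First-order Markov chain estimated from sequences of user events:
    count observed transitions, then normalize each row."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical API-call sessions (invented event names).
sessions = [["login", "list_devices", "get_stream", "logout"],
            ["login", "get_stream", "get_stream", "logout"]]
print(estimate_transition_matrix(sessions)["login"])
# -> {'list_devices': 0.5, 'get_stream': 0.5}
```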
APA, Harvard, Vancouver, ISO, and other styles
19

Toyinbo, Peter Ayo. "Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3673.

Full text
Abstract:
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end, a suitable statistical model will allow researchers to answer a major research question: who benefits or is harmed by this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and may have to be estimated from observed indicators that are measured with errors; it may also have a nonlinear relationship with the outcome. Most of the existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even if the models include higher-order polynomials there may be problems when the focus of interest relates to the function over its whole domain. The goal of this work is to develop a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and 1) baseline characteristics measured with errors, and 2) baseline-treatment interaction, such that the shapes of these relationships are data-driven and there is no need for the shapes to be determined a priori. In the ALV model structure the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves its users of the need to decide functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
APA, Harvard, Vancouver, ISO, and other styles
20

Fu, Shuting. "Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/451.

Full text
Abstract:
The analysis of big data is of great interest today, and this comes with challenges of improving precision and efficiency in estimation and prediction. We study binary data with covariates from numerous small areas, where direct estimation is not reliable, and there is a need to borrow strength from the ensemble. This is generally done using Bayesian logistic regression, but because there are numerous small areas, the exact computation for the logistic regression model becomes challenging. Therefore, we develop an integrated multivariate normal approximation (IMNA) method for binary data with covariates within the Bayesian paradigm, and this procedure is assisted by the empirical logistic transform. Our main goal is to provide the theory of IMNA and to show that it is many times faster than the exact logistic regression method with almost the same accuracy. We apply the IMNA method to the health status binary data (excellent health or otherwise) from the Nepal Living Standards Survey with more than 60,000 households (small areas). We estimate the proportion of Nepalese in excellent health condition for each household. For these data IMNA gives estimates of the household proportions as precise as those from the logistic regression model and it is more than fifty times faster (20 seconds versus 1,066 seconds), and clearly this gain is transferable to bigger data problems.
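The empirical logistic transform that assists the IMNA procedure maps binomial counts onto an approximately Gaussian scale; in its standard form (the usual definition, not a detail peculiar to this thesis),

$$z_i = \log\frac{y_i + \tfrac{1}{2}}{n_i - y_i + \tfrac{1}{2}}, \qquad \operatorname{Var}(z_i) \approx \frac{1}{y_i + \tfrac{1}{2}} + \frac{1}{n_i - y_i + \tfrac{1}{2}},$$

for $y_i$ successes out of $n_i$ trials in area $i$, after which a multivariate normal approximation to the transformed data becomes natural.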
APA, Harvard, Vancouver, ISO, and other styles
21

Alves, Andressa Schneider. "Algoritmos para o encaixe de moldes com formato irregular em tecidos listrados." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142744.

Full text
Abstract:
Esta tese tem como objetivo principal a proposição de solução para o problema do encaixe de moldes em tecidos listrados da indústria do vestuário. Os moldes são peças com formato irregular que devem ser dispostos sobre a matéria-prima, neste caso o tecido, para a etapa posterior de corte. No problema específico do encaixe em tecidos listrados, o local em que os moldes são posicionados no tecido deve garantir que, após a confecção da peça, as listras apresentem continuidade. Assim, a fundamentação teórica do trabalho abrange temas relacionados à moda e ao design do vestuário, como os tipos e padronagens de tecidos listrados, e as possibilidades de rotação e colocação dos moldes sobre tecidos listrados. Na fundamentação teórica também são abordados temas da pesquisa em otimização combinatória como: características dos problemas bidimensionais de corte e encaixe e algoritmos utilizados por diversos autores para solucionar o problema. Ainda na parte final da fundamentação teórica são descritos o método Cadeia de Markov Monte Carlo e o algoritmo de Metropolis-Hastings. Com base na pesquisa bibliográfica, foram propostos dois algoritmos distintos para lidar com o problema de encaixe de moldes em tecidos listrados: algoritmo com pré-processamento e algoritmo de busca do melhor encaixe utilizando o algoritmo de Metropolis-Hastings. Ambos foram implementados no software Riscare Listrado, que é uma continuidade do software Riscare para tecidos lisos desenvolvido em Alves (2010). Para testar o desempenho dos dois algoritmos foram utilizados seis problemas benchmarks da literatura e proposto um novo problema denominado de camisa masculina. Os problemas benchmarks da literatura foram propostos para matéria-prima lisa e o problema camisa masculina especificamente para tecidos listrados. Entre os dois algoritmos desenvolvidos, o algoritmo de busca do melhor encaixe apresentou resultados com melhores eficiências de utilização do tecido para todos os problemas propostos. Quando comparado aos melhores resultados publicados na literatura para matéria-prima lisa, o algoritmo de busca do melhor encaixe apresentou encaixes com eficiências inferiores, porém com resultados superiores ao recomendado pela literatura específica da área de moda para tecidos estampados.
This thesis proposes the solution for the packing problem of patterns on striped fabric in clothing industry. The patterns are pieces with irregular form that should be placed on raw material which is, in this case, the fabric. This fabric is cut after packing. In the specific problem of packing on striped fabric, the position that patterns are put in the fabric should ensure that, after the clothing sewing, the stripes should present continuity. Thus, the theoretical foundation of this project includes subjects about fashion and clothing design, such as types and rapports of striped fabric, and the possibilities of rotation and the correct place to put the patterns on striped fabric. In the theoretical foundation, there are also subjects about research in combinatorial optimization as: characteristics about bi-dimensional packing and cutting problems and algorithms used for several authors to solve the problem. In addition, the Markov Chain Monte Carlo method and the Metropolis-Hastings algorithm are described at end of theoretical foundation. Based on the bibliographic research, two different algorithms for the packing problem with striped fabric are proposed: algorithm with pre-processing step and algorithm of searching the best packing using the Metropolis-Hastings algorithm. Both algorithms are implemented in the Striped Riscare software, which is a continuity of Riscare software for clear fabrics developed in the Masters degree of the author. Both algorithms performances are tested with six literature benchmark problems and a new problem called “male shirt” is proposed here. The benchmark problems of literature were iniatially proposed for clear raw material and the male shirt problem, specifically for striped fabrics. Between the two developed algorithms, the algorithm of searching the best packing has shown better results with better efficiencies of the fabric usage for all the problems tested. When compared to the best results published in the literature for clear raw material, the algorithm of searching the best packing has shown packings with lower efficiencies. However, it showed results higher than recommended for the specific literature of fashion design for patterned fabrics.
APA, Harvard, Vancouver, ISO, and other styles
22

Szymczak, Marcin. "Programming language semantics as a foundation for Bayesian inference." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28993.

Full text
Abstract:
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
APA, Harvard, Vancouver, ISO, and other styles
23

Jersild, Annika Lee. "Relative Role of Uncertainty for Predictions of Future Southeastern U.S. Pine Carbon Cycling." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71748.

Full text
Abstract:
Predictions of how forest productivity and carbon sequestration will respond to climate change are essential for making forest management decisions and adapting to future climate. However, current predictions can include considerable uncertainty that is not well quantified. To address the need for better quantification of uncertainty, we calculated and compared ecosystem model parameter, ecosystem model process, climate model, and climate scenario uncertainty for predictions of Southeastern U.S. pine forest productivity. We applied data assimilation using Metropolis-Hastings Markov chain Monte Carlo to fuse diverse datasets with the Physiological Principles Predicting Growth model. The spatially and temporally diverse data sets allowed for novel constraints on ecosystem model parameters and allowed for the quantification of uncertainty associated with parameterization and model structure (process). Overall, we found that the parameter and process model uncertainty is higher than the climate model uncertainty. We determined that climate change will likely result in an increase in terrestrial carbon storage and that higher emission scenarios increase the uncertainty in our predictions. In addition, we determined regional variations in biomass accumulation due to responses to changes in frost days, temperature, and vapor pressure deficit. Since the uncertainty associated with ecosystem model parameters and process was larger than the uncertainty associated with climate predictions, our results indicate that better constraining parameters in ecosystem models and improving the mathematical structure of ecosystem models can improve future predictions of forest productivity and carbon sequestration.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
24

Al-Motasem, Al-Asqalani Ahmed Tamer. "Nanoclusters in Diluted Fe-Based Alloys Containing Vacancies, Copper and Nickel: Structure, Energetics and Thermodynamics." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-89355.

Full text
Abstract:
The formation of nano-sized precipitates is considered to be the origin of hardening and embrittlement of the ferritic steel used as structural material for pressure vessels of nuclear reactors, since these nanoclusters hinder the motion of dislocations within the grains of the polycrystalline bcc-Fe matrix. Previous investigations showed that these small precipitates are coherent and may consist of Cu, Ni, other foreign atoms, and vacancies. In this work a combination of on-lattice simulated annealing based on Metropolis Monte Carlo simulations and off-lattice relaxation by Molecular Dynamics is applied in order to determine the structure, energetics and thermodynamics of coherent clusters in bcc-Fe. The most recent interatomic potentials for Fe-Cu-Ni alloys are used. The atomic structure and the formation energy of the most stable configurations, as well as their total and monomer binding energies, are calculated. Atomistic simulation results show that pure (vacancy and copper) as well as mixed (vacancy-copper, copper-nickel and vacancy-copper-nickel) clusters exhibit facets which correspond to the main crystallographic planes. Besides facets, mixed clusters exhibit a core-shell structure. In the case of v_lCu_m clusters, a vacancy-cluster core coated with copper atoms is found. In binary Cu_mNi_n clusters, Ni atoms cover the outer surface of the copper cluster. Ternary v_lCu_mNi_n clusters show a core-shell structure with vacancies in the core coated by a shell of Cu atoms, followed by a shell of Ni atoms. It is shown qualitatively that these core-shell structures form in order to minimize the interface energy between the cluster and the bcc-Fe matrix. Pure nickel clusters consist of an agglomeration of Ni atoms at second-nearest-neighbor distance, whereas vacancy-nickel clusters are formed by a vacancy cluster surrounded by a nickel agglomeration. Both types of clusters are called quasi-clusters because of their non-compact structure. The atomic configurations of quasi-clusters can be understood from the peculiarities of the binding between Ni atoms and vacancies: in all clusters investigated, Ni atoms may be nearest neighbors of Cu atoms but never nearest neighbors of vacancies or other Ni atoms. The structure of the clusters found in the present work is consistent with experimental observations and with results of pairwise calculations. In agreement with experimental observations and with recent results of atomistic kinetic Monte Carlo simulation, it is shown that the presence of Ni atoms promotes the nucleation of clusters containing vacancies and Cu. For pure vacancy and pure copper clusters an atomistic nucleation model is established, and for typical irradiation conditions the nucleation free energy and the critical size for cluster formation have been estimated. For further application in rate theory and object kinetic Monte Carlo simulations, compact and physically based fit formulae are derived from the atomistic data for the total and the monomer binding energies. The fit is based on the structure of the clusters (core-shell and quasi-cluster) and on the classical capillary model.
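The on-lattice simulated annealing step can be illustrated with a minimal sketch: species are swapped between random lattice sites and the swap is accepted by the Metropolis rule at a slowly decreasing temperature. The bond energies below are invented numbers that merely favour clustering; the actual work uses fitted Fe-Cu-Ni interatomic potentials followed by off-lattice Molecular Dynamics relaxation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy on-lattice annealing: species 0 = Fe matrix, 1 = Cu, 2 = vacancy, on a
# periodic 2D lattice. Bond energies are made-up values, not the thesis's
# Fe-Cu-Ni potentials; they simply favour Cu-Cu and vacancy-vacancy bonds.
E = {(0, 0): 0.0, (1, 1): -0.6, (2, 2): -0.4, (0, 1): 0.0, (0, 2): 0.0, (1, 2): -0.2}
bond = lambda a, b: E[tuple(sorted((a, b)))]

L = 20
lat = np.zeros((L, L), dtype=int)
idx = rng.choice(L * L, size=40, replace=False)
lat.flat[idx[:25]] = 1          # 25 Cu atoms
lat.flat[idx[25:]] = 2          # 15 vacancies

def site_energy(x, y):
    # Sum of nearest-neighbour bond energies with periodic boundaries.
    return sum(bond(lat[x, y], lat[(x + dx) % L, (y + dy) % L])
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for step in range(200000):
    T = max(0.05, 1.0 - step / 200000)          # linear cooling schedule
    (x1, y1), (x2, y2) = rng.integers(0, L, (2, 2))
    if lat[x1, y1] == lat[x2, y2]:
        continue
    e_old = site_energy(x1, y1) + site_energy(x2, y2)
    lat[x1, y1], lat[x2, y2] = lat[x2, y2], lat[x1, y1]
    dE = site_energy(x1, y1) + site_energy(x2, y2) - e_old
    if dE > 0 and rng.random() >= np.exp(-dE / T):
        lat[x1, y1], lat[x2, y2] = lat[x2, y2], lat[x1, y1]   # reject: undo swap
```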
APA, Harvard, Vancouver, ISO, and other styles
25

Jiang, Yu. "Inference and prediction in a multiple structural break model of economic time series." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/244.

Full text
Abstract:
This thesis develops a new Bayesian approach to structural break modeling. The focus of the approach is on modeling in-sample structural breaks and on forecasting time series while allowing for out-of-sample breaks. Our model has several desirable features. First, the number of regimes is not fixed but is treated as a random variable. Second, our model adopts a hierarchical prior for regime coefficients, which allows the coefficients of one regime to carry information about the coefficients of other regimes. The regime coefficients can nevertheless be analytically integrated out of the posterior distribution, so we only need to deal with one level of the hierarchy. Third, the implementation of our model is simple and the computational cost is low. Our model is applied to two different time series: S&P 500 monthly returns and U.S. real GDP quarterly growth rates. We link the breaks detected by our model to specific historical events.
APA, Harvard, Vancouver, ISO, and other styles
26

Frühwirth-Schnatter, Sylvia, and Rudolf Frühwirth. "Bayesian Inference in the Multinomial Logit Model." Austrian Statistical Society, 2012. http://epub.wu.ac.at/5629/1/186%2D751%2D1%2DSM.pdf.

Full text
Abstract:
The multinomial logit model (MNL) possesses a latent variable representation in terms of random variables following a multivariate logistic distribution. Based on multivariate finite mixture approximations of the multivariate logistic distribution, various data-augmented Metropolis-Hastings algorithms are developed for Bayesian inference in the MNL model.
APA, Harvard, Vancouver, ISO, and other styles
27

VASCONCELOS, Josimar Mendes de. "Equações simultâneas no contexto clássico e bayesiano: uma abordagem à produção de soja." Universidade Federal Rural de Pernambuco, 2011. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5012.

Full text
Abstract:
In recent years, the number of researchers and scientific studies on the planting, production, and production value of soybeans in Brazil has increased. In view of this, the present dissertation seeks to analyse the data and fit models that satisfactorily explain the observed variability of the quantity produced and of the production value of soybeans in grain in Brazil. These analyses are developed using classical and Bayesian inference, in the context of simultaneous equations estimated by two-stage least squares. Classical inference uses the two-stage least-squares estimator. Bayesian inference uses the Markov chain Monte Carlo method with the Gibbs and Metropolis-Hastings algorithms, again through the simultaneous-equations technique. The study considers the variables harvested area, quantity produced, production value, and gross domestic product; a model is fitted with quantity produced as the response variable and then with production value as the response, after which the corrections are made and the final result obtained, under both the classical and the Bayesian approach. Based on the standard deviations, t-test statistics, and normalised Akaike and Schwarz information criteria, the Markov chain Monte Carlo method with the Gibbs algorithm performs well; it is efficient for this modeling task and easy to implement in the statistical software R and WinBUGS, since ready-made libraries to run the method already exist. We therefore suggest the Markov chain Monte Carlo method with the Gibbs sampler for estimating soybean production in grain in Brazil.
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Yi. "Time-Varying Coefficient Models for Recurrent Events." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/97999.

Full text
Abstract:
I have developed time-varying coefficient models for recurrent event data to evaluate the temporal profiles of the recurrence rate and covariate effects. There are three major parts in this dissertation. The first two parts propose a mixed Poisson process model with gamma frailties for single-type recurrent events. The third part proposes a Bayesian joint model based on multivariate log-normal frailties for multi-type recurrent events. In the first part, I propose an approach based on penalized B-splines to obtain smooth estimates of both the time-varying coefficients and the log baseline intensity. An EM algorithm is developed for parameter estimation. One issue with this approach is that the estimation procedure is conditional on smoothing parameters, which have to be selected by cross-validation or by optimizing a performance criterion. The procedure can be computationally demanding when there are many time-varying coefficients. To achieve objective estimation of the smoothing parameters, I propose a mixed-model representation for the penalized splines: spline coefficients are treated as random effects and smoothing parameters are estimated as variance components. An EM algorithm embedded with a penalized quasi-likelihood approximation is developed to estimate the model parameters. The third part proposes a Bayesian joint model with time-varying coefficients for multi-type recurrent events. Bayesian penalized splines are used to estimate the time-varying coefficients and the log baseline intensity. One challenge with Bayesian penalized splines is that the smoothness of a spline fit is highly sensitive to the subjective choice of hyperparameters. I establish a procedure to determine the hyperparameters objectively through a robust prior specification. A Markov chain Monte Carlo procedure based on Metropolis-adjusted Langevin algorithms is developed to sample from the high-dimensional distribution of spline coefficients. The procedure includes a joint sampling scheme to achieve better convergence and mixing properties. Simulation studies in the second and third parts confirm satisfactory model performance in estimating time-varying coefficients under different curvature and event-rate conditions. The models in the second and third parts were applied to data from a commercial truck driver naturalistic driving study. The application results reveal that drivers with 7 hours or less of sleep prior to a shift have a significantly higher intensity after 8 hours of on-duty driving and that their intensity remains higher after taking a break. In addition, the results show drivers' self-selection on sleep time, total driving hours in a shift, and breaks. These applications provide crucial insight into the impact of sleep time on driving performance for commercial truck drivers and highlight the on-road safety implications of insufficient sleep and breaks while driving. This dissertation provides flexible and robust tools to evaluate the temporal profile of the intensity of recurrent events.
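As a concrete illustration of the Metropolis-adjusted Langevin algorithm (MALA) mentioned above, the sketch below takes a gradient-informed proposal step and corrects it with a Metropolis-Hastings acceptance; because the proposal is asymmetric, its density must appear in the acceptance ratio. The bivariate Gaussian target and the step size are illustrative stand-ins for the high-dimensional spline-coefficient posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: a standard bivariate Gaussian stands in for the posterior over
# spline coefficients; grad_log_pi would come from the actual model.
def log_pi(x):      return -0.5 * x @ x
def grad_log_pi(x): return -x

def log_q(x_to, x_from, eps):
    # Log-density (up to a constant) of the Langevin proposal x_to | x_from.
    mu = x_from + 0.5 * eps**2 * grad_log_pi(x_from)
    return -0.5 * np.sum((x_to - mu) ** 2) / eps**2

def mala(x0, n_steps, eps=0.9):
    x, out = np.asarray(x0, dtype=float), []
    for _ in range(n_steps):
        mu = x + 0.5 * eps**2 * grad_log_pi(x)
        prop = mu + eps * rng.normal(size=x.size)
        log_alpha = (log_pi(prop) - log_pi(x)
                     + log_q(x, prop, eps) - log_q(prop, x, eps))
        if np.log(rng.random()) < log_alpha:
            x = prop
        out.append(x.copy())
    return np.array(out)

chain = mala([3.0, -3.0], 10000)
print(chain[1000:].mean(axis=0))   # should be close to (0, 0)
```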
PHD
APA, Harvard, Vancouver, ISO, and other styles
29

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226748.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite-dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite-dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
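The best-known dimension-independent Metropolis-Hastings proposal is the preconditioned Crank-Nicolson (pCN) move, illustrated in the sketch below on a discretised Gaussian prior. Because the proposal is reversible with respect to the prior, the acceptance ratio involves only the (negative log-) likelihood, which is what keeps the method well defined in the infinite-dimensional limit. The misfit functional Phi and the identity prior covariance are invented for the example; the thesis's improved algorithm goes beyond this basic variant.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 200                       # discretisation level; pCN is insensitive to d
def Phi(u):                   # hypothetical misfit (negative log-likelihood)
    return 0.5 * (u[:5].sum() - 1.0) ** 2 / 0.01

beta = 0.2                    # pCN step-size parameter in (0, 1]
u = rng.normal(size=d)        # draw from the (identity-covariance) prior
acc = 0
for _ in range(20000):
    # Prior-reversible proposal: sqrt(1 - beta^2) * u + beta * (prior draw).
    prop = np.sqrt(1 - beta**2) * u + beta * rng.normal(size=d)
    # Acceptance depends only on the likelihood ratio, not on the prior.
    if np.log(rng.random()) < Phi(u) - Phi(prop):
        u, acc = prop, acc + 1
print("acceptance rate:", acc / 20000)
```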
APA, Harvard, Vancouver, ISO, and other styles
30

Tamatoro, Johng-Ay. "Approche stochastique de l'analyse du « residual moveout » pour la quantification de l'incertitude dans l'imagerie sismique." Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3044/document.

Full text
Abstract:
The main goal of seismic imaging for oil exploration and production, as it is practiced nowadays, is to provide an image of the first kilometers of the subsurface that allows the localization and accurate estimation of hydrocarbon resources. The reservoirs where these hydrocarbons are trapped are structures with a more or less complex geology. To characterize these reservoirs and allow the production of hydrocarbons, the geophysicist uses depth migration, a seismic imaging tool that converts time data recorded during seismic surveys into depth images, which are then exploited by the reservoir engineer with the help of the seismic interpreter and the geologist. During depth migration, seismic events (reflectors, diffractions, faults, ...) are moved to their correct locations in space. Relevant depth migration requires accurate knowledge of the vertical and horizontal seismic velocity variations (the velocity model). Usually the so-called Common Image Gathers (CIGs) serve as a tool to verify the correctness of the velocity model. Often the CIGs are computed in the surface-offset (distance between shot point and receiver) domain, and their flatness serves as a criterion of velocity model correctness. Residual moveout (RMO) of the events on CIGs, due to the ratio of the migration velocity model to the effective velocity model, indicates that the velocity model is incorrect and is used to update it. The post-stack images forming the CIGs, which are used as data for the RMO analysis, are the results of an inverse problem and are corrupted by noise. An uncertainty analysis is therefore necessary to improve the evaluation of the results. Dealing with this uncertainty is a major issue, since it supports decisions that have important social and commercial implications. The goal of this thesis is to contribute to the analysis and quantification of uncertainty in the various parameters computed during seismic processing, and particularly in RMO analysis. Several stages were necessary to reach these goals. We begin by introducing the geophysical concepts necessary for understanding the organization of the seismic data, the various processing steps, and the mathematical and methodological tools used (chapters 2 and 3). In chapter 4, we present the tools used for conventional RMO analysis. In chapter 5, we give a statistical interpretation of conventional RMO analysis and propose a stochastic approach to this analysis. This approach consists of a hierarchical statistical model whose parameters are: the variance, which expresses the noise level in the data; a functional parameter, which expresses the coherency of the amplitudes along events; and the ratio, which is assumed to be a random variable and not an unknown fixed parameter as in the conventional approach. Fitting the data to the model using smoothing methods, combined with the use of wavelets for the variance estimation, allows the posterior distribution of the ratio given the data to be computed by empirical Bayes methods. An estimate of this parameter is obtained by Markov chain Monte Carlo simulation of its posterior distribution, and the quantiles of these simulations provide alternative estimates. The proposed methodology is validated in chapter 6 by application to synthetic and real data, and a sensitivity analysis of the parameter estimation is carried out. The use of the uncertainty of this parameter to quantify the uncertainty of the spatial positions of reflectors is also presented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
31

Joly, Jean-Luc. "Contributions à la génération aléatoire pour des classes d'automates finis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2012/document.

Full text
Abstract:
The concept of automata, central to language theory, is a natural and efficient tool for apprehending various practical problems. The intensive use of finite automata in an algorithmic framework is illustrated by numerous research works. Correctness and performance evaluation are the two fundamental issues of algorithmics. A classic method to evaluate an algorithm is based on the controlled random generation of inputs. The work described in this thesis lies within this context, and more specifically in the field of the uniform random generation of finite automata. The following presentation first proposes the design of a generator of deterministic, real-time pushdown automata. This design builds on the symbolic method; theoretical results and an experimental study are given. A random generator of non-deterministic automata then illustrates the flexibility of Markov chain Monte Carlo (MCMC) methods, as well as an implementation of the Metropolis-Hastings algorithm to sample up to isomorphism. A result about the mixing time in the general framework is given. MCMC sampling methods raise the problem of the mixing time of the chain. Drawing on earlier work on the design of a random generator of partially ordered automata, this work shows how various statistical tools can form a basis to address this issue.
APA, Harvard, Vancouver, ISO, and other styles
32

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Technische Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A20754.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite-dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite-dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
APA, Harvard, Vancouver, ISO, and other styles
33

Fachat, André. "A Comparison of Random Walks with Different Types of Acceptance Probabilities." Doctoral thesis, Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100235.

Full text
Abstract:
In this thesis random walks similar to the Metropolis algorithm are investigated. Special emphasis is laid on different types of acceptance probabilities, namely Metropolis, Tsallis and Threshold Accepting. Equilibrium and relaxation properties as well as performance aspects in stochastic optimization are investigated. Analytical investigation of a simple system mimicking a harmonic oscillator shows that a variety of acceptance probabilities, including those mentioned above, result in an equilibrium distribution that is largely dominated by an exponential function. In the last chapter an optimal optimization schedule for the Tsallis acceptance probability for an idealized barrier is investigated.
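The three acceptance rules compared in the thesis can be stated in a few lines. Below is a minimal sketch: the Tsallis rule is written in one common generalised-statistics form (with parameter q) and reduces to Metropolis as q approaches 1, and the step function at the end shows how any of the rules plugs into a random walk. The exact functional forms used in the thesis may differ.

```python
import math, random

# Acceptance probability for a move with energy change dE at temperature T.
def metropolis(dE, T):
    return 1.0 if dE <= 0 else math.exp(-dE / T)

def tsallis(dE, T, q=1.5):
    # One common generalised (Tsallis) form; reduces to Metropolis as q -> 1.
    if dE <= 0:
        return 1.0
    base = 1.0 - (1.0 - q) * dE / T
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def threshold_accepting(dE, T):
    # Deterministic rule: accept any move that is not worse than threshold T.
    return 1.0 if dE < T else 0.0

def step(x, energy, accept, T, scale=0.1):
    # One random-walk step using whichever acceptance rule is passed in.
    y = x + random.uniform(-scale, scale)
    return y if random.random() < accept(energy(y) - energy(x), T) else x
```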
APA, Harvard, Vancouver, ISO, and other styles
34

Bachouch, Achref. "Numerical Computations for Backward Doubly Stochastic Differential Equations and Nonlinear Stochastic PDEs." Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1034/document.

Full text
Abstract:
The purpose of this thesis is to study a numerical method for backward doubly stochastic differential equations (BDSDEs for short). In the last two decades, several methods were proposed to approximate solutions of standard backward stochastic differential equations. In this thesis, we propose an extension of one of these methods to the doubly stochastic framework. Our numerical method allows us to tackle a large class of nonlinear stochastic partial differential equations (SPDEs for short), thanks to their probabilistic interpretation. In the last part, we study a new particle method in the context of shielding studies.
APA, Harvard, Vancouver, ISO, and other styles
35

Ozkan, Pelin. "Analysis Of Stochastic And Non-stochastic Volatility Models." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605421/index.pdf.

Full text
Abstract:
Changes in variance or volatility over time can be modeled as deterministic, using autoregressive conditional heteroscedastic (ARCH) type models, or as stochastic, using stochastic volatility (SV) models. This study compares these two kinds of models, estimated on Turkish/USA exchange rate data. First, a GARCH(1,1) model is fitted to the data using the EViews package, and then a Bayesian estimation procedure implemented in Ox code is used to estimate an appropriate SV model. In order to compare these models, the LR test statistic for non-nested hypotheses is calculated.
APA, Harvard, Vancouver, ISO, and other styles
36

Calmet, Claire. "Inférences sur l'histoire des populations à partir de leur diversité génétique : étude de séquences démographiques de type fondation-explosion." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2002. http://tel.archives-ouvertes.fr/tel-00288526.

Full text
Abstract:
Studying demography from a historical perspective contributes to our understanding of evolutionary processes. Genetic diversity data are potentially informative about the demographic past of populations: this past is recorded, with some loss of information, by molecular markers through their genealogical and mutational history. Acquiring genetic diversity data is becoming ever faster and easier, and can in principle be done for any organism of interest. Hence the effort over the last decade to develop statistical tools for extracting demographic information from genotyping data.
This thesis proposes an extension of the Bayesian inference method developed in 1999 by M. Beaumont. Like the original method, (i) it is based on the Kingman coalescent with varying population size, (ii) it uses the Metropolis-Hastings algorithm to sample from the posterior distribution of the parameters of interest, and (iii) it handles genotyping data from one or several independent microsatellites. The extended version generalises the demographic and mutational models assumed in the original method: it allows inference of the parameters of a foundation-flush model for the sampled population and of a two-phase mutation model for the typed microsatellite markers. This is the first time an exact probabilistic method has incorporated a mutation model allowing multi-step mutations for microsatellites.
The demographic and mutational model is explored. Analysing simulated data sets illustrates and compares the posterior distribution of the parameters under various historical scenarios: for example demographic stability, exponential growth, and a foundation-flush event. A typology of posterior distributions is proposed. Recommendations on the genotyping effort required in empirical studies are given: a single microsatellite marker can already lead to a highly structured posterior distribution, although the regions of high posterior density then correspond to scenarios of different types. By contrast, 50 haploid genomes typed at 5 microsatellite markers are sufficient to detect a marked foundation-flush history with near certainty (99% of the posterior probability). The consequences of violating the assumptions of the demographic model are discussed, as well as the interactions between the mutational process and the mutation model. In particular, it is shown that assuming a mutation process following the SMM model when the true process is of TPM type can generate a spurious signal of genetic disequilibrium; modelling mutational jumps removes this spurious signal.
The method is briefly applied to two foundation-flush histories: the introduction of the cat Felis catus on the Kerguelen Islands and that of the brown rat Rattus norvegicus on islands off the coast of Brittany. It is first shown that the frequentist method developed by Cornuet and Luikart (1996) fails to detect the recent and drastic foundations experienced by these populations, presumably because the foundation and the flush have opposite effects on the statistics used in that method.
For the same reason, the Bayesian method also fails to detect the foundation if a step-change demographic history is imposed. The foundation and the flush become detectable once the demographic model allows for them. However, dependencies between model parameters prevent precise marginal inference: any prior information on one parameter strongly constrains the values of the others. This finding confirms the potential of populations with a documented history for the indirect estimation of the parameters of a marker mutation model.
APA, Harvard, Vancouver, ISO, and other styles
37

Santos, Marconio Silva dos. "Modelagem estocástica da distribuição de probabilidade da precipitação pluvial via métodos computacionalmente intensivos." PROGRAMA DE PÓS-GRADUAÇÃO EM CIÊNCIAS CLIMÁTICAS, 2017. https://repositorio.ufrn.br/jspui/handle/123456789/24953.

Full text
Abstract:
In this work, a statistical modeling of precipitation is carried out. This is a methodological work that uses stochastic simulations to estimate the probability distributions related to this atmospheric variable. In order to estimate the parameters of these distributions, Markov chain Monte Carlo methods were used to generate large synthetic samples from observed data. The methods used were the Metropolis-Hastings algorithm and the Gibbs sampler. The simulations were performed under the hypothesis that the days of the same period of the year (month or rainy season) can be considered to be identically distributed concerning the probability of precipitation. This research led to four papers. The first paper used the Metropolis-Hastings algorithm to model the probability of occurrence of precipitation on any day of the month; its simulations were performed with observed data from several Brazilian cities. The other papers used the Gibbs sampler, and the proposed methods were applied to data from cities in Northeast Brazil. In the second paper, the Beta and Binomial distributions were used to model the number of days of the month with occurrence of precipitation. In the third paper, the Poisson distribution was used to model the number of days with extreme precipitation values in the rainy season. An alternative method for estimating these extreme values and their distribution is presented in the fourth paper, using the Gamma distribution. According to the results of this research, the Gibbs sampler is adequate for estimating the distributions involved in modeling precipitation in cities for which few historical data are available.
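The Beta-Binomial setting of the second paper lends itself to a compact two-block Gibbs sampler. In the sketch below, the rain probability p is drawn from its conjugate Beta full conditional and a missing monthly count is drawn from its Binomial full conditional. The counts, the Beta(1, 1) prior and the treatment of one month as missing are all illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical record: wet days in each observed January (31 days per month),
# with one missing year treated as a latent variable.
k_obs = np.array([12, 9, 15, 11, 8, 14, 10])
n = 31
a, b = 1.0, 1.0                  # flat Beta(1, 1) prior (an assumption)

k_miss = 10                      # initial guess for the missing count
p_draws, k_draws = [], []
for _ in range(10000):
    # Full conditional of p given all counts (conjugate Beta update).
    total_k = k_obs.sum() + k_miss
    total_n = n * (k_obs.size + 1)
    p = rng.beta(a + total_k, b + total_n - total_k)
    # Full conditional of the missing count given p.
    k_miss = rng.binomial(n, p)
    p_draws.append(p)
    k_draws.append(k_miss)

print(np.mean(p_draws), np.percentile(k_draws, [5, 95]))
```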
APA, Harvard, Vancouver, ISO, and other styles
38

Ureten, Suzan. "Single and Multiple Emitter Localization in Cognitive Radio Networks." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35692.

Full text
Abstract:
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters that operate simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first is based on estimating the locations from the generated interference map when no information about the propagation model or any of its parameters is available. The second is based on approximating the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third also requires knowledge of the model parameters but is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay-triangulation-based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than those of the ML estimator, the rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, in order to reduce its complexity. The complexity issues of ML estimators based on a full grid search are also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculations at each grid location. This motivates our investigation of the sum-of-log-normals approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation based on fitting a distribution to a set of simulated data and compare our approach with Fenton-Wilkinson's well-known approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than the one obtained with Fenton-Wilkinson's in many different scenarios.
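The Fenton-Wilkinson approximation mentioned above is easy to state in code: match the mean and variance of the sum of independent log-normals and read off the parameters of the fitted log-normal. The sketch below, with illustrative parameter values, checks the matched moments against simulation; it is not the thesis's own fitting-based approximation.

```python
import numpy as np

# Fenton-Wilkinson: fit a single log-normal to the sum of independent
# log-normals X_i = exp(Y_i), Y_i ~ N(mu_i, sigma_i^2), by matching the
# first two moments of the sum.
def fenton_wilkinson(mu, sigma):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.sum(np.exp(mu + sigma**2 / 2))                           # E[sum]
    v = np.sum((np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2))  # Var[sum]
    sigma_z2 = np.log(1 + v / m**2)
    return np.log(m) - sigma_z2 / 2, np.sqrt(sigma_z2)

# Sanity check against simulation for three illustrative terms.
rng = np.random.default_rng(5)
mu, sigma = [0.0, 0.5, 1.0], [1.0, 1.0, 1.0]
s = np.exp(rng.normal(mu, sigma, size=(200000, 3))).sum(axis=1)
mu_z, sigma_z = fenton_wilkinson(mu, sigma)
print(s.mean(), np.exp(mu_z + sigma_z**2 / 2))   # means agree by construction
print(s.var(), (np.exp(sigma_z**2) - 1) * np.exp(2 * mu_z + sigma_z**2))
```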
APA, Harvard, Vancouver, ISO, and other styles
39

Bylin, Johan. "Best practice of extracting magnetocaloric properties in magnetic simulations." Thesis, Uppsala universitet, Materialteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388356.

Full text
Abstract:
In this thesis, a numerical study of simulating and computing the magnetocaloric properties of magnetic materials is presented. The main objective was to deduce the optimal procedure for obtaining the isothermal change in entropy of magnetic systems, by evaluating two different formulas for entropy extraction, one relying on the magnetization of the material and the other on the magnet's heat capacity. The magnetic systems were simulated using two different Monte Carlo algorithms, the Metropolis and Wang-Landau procedures. The two entropy methods proved to be comparably similar to one another. Both approaches produced reliable and consistent results, though finite-size effects could occur if the simulated system became too small. Erroneous fluctuations that invalidated the results did not seem to stem from discrepancies between the entropy methods but mainly from the computation of the heat capacity itself. Accurate determination of the heat capacity via an internal energy derivative generated excellent results, while a heat capacity obtained from a variance formula of the internal energy rendered the extracted entropy unusable. The results acquired from the Metropolis algorithm were consistent, accurate and dependable, while all of those produced via the Wang-Landau method exhibited intrinsic fluctuations of varying severity. The Wang-Landau method also proved to be computationally ineffective compared to the Metropolis algorithm, rendering it unsuitable for magnetic simulations of this type.
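The two entropy-extraction routes compared in the thesis correspond to the Maxwell relation ΔS(T) = ∫₀ᴴ (∂M/∂T)_H dH and to the heat-capacity integral ΔS(T) = ∫ (C(T,H) − C(T,0))/T dT. A minimal numerical sketch is given below; the function and array names are assumptions, with M and C understood to come from Monte Carlo simulations on a (T, H) grid.

```python
import numpy as np

def entropy_from_magnetization(T, H, M):
    # Maxwell-relation route. M has shape (len(T), len(H)): take dM/dT at
    # fixed field, then integrate over the field sweep.
    dM_dT = np.gradient(M, T, axis=0)
    return np.trapz(dM_dT, H, axis=1)          # Delta S(T) for the full sweep

def entropy_from_heat_capacity(T, C0, CH):
    # Heat-capacity route: cumulative trapezoidal integral of
    # (C_H - C_0) / T starting from the lowest simulated temperature.
    integrand = (CH - C0) / T
    dT = np.diff(T)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * dT
    return np.concatenate(([0.0], np.cumsum(steps)))
```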
APA, Harvard, Vancouver, ISO, and other styles
40

Eid, Abdelrahman. "Stochastic simulations for graphs and machine learning." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I018.

Full text
Abstract:
While it is impractical to study the whole population in many domains and applications, sampling is a necessary method that allows information to be inferred. This thesis is dedicated to developing probability sampling algorithms to infer the whole population when it is too large or impossible to obtain. Markov chain Monte Carlo (MCMC) techniques are one of the most important tools for sampling from probability distributions, especially when these distributions have intractable normalization constants. The work of this thesis is mainly concerned with graph sampling techniques. Two methods are presented in chapter 2 to sample uniform subtrees from graphs using Metropolis-Hastings algorithms. The proposed methods aim to sample trees according to a distribution from a graph where the vertices are labelled. The efficiency of these methods is proved mathematically. Additionally, simulation studies were conducted and confirmed the theoretical convergence to the equilibrium distribution. Continuing the work on graph sampling, a method is presented in chapter 3 to sample sets of similar vertices in an arbitrary undirected graph using the properties of permanental point processes (PPP). Our algorithm for sampling sets of k vertices is designed to overcome the computational complexity of computing the permanent, by sampling a joint distribution whose marginal distribution is a kPPP. Finally, in chapter 4, we use the definitions of MCMC methods and convergence speed to estimate the kernel bandwidth used for classification in supervised machine learning. A simple and fast method called KBER is presented to estimate the bandwidth of the radial basis function (RBF) kernel using the average Ricci curvature of graphs.
APA, Harvard, Vancouver, ISO, and other styles
41

Γιαννόπουλος, Νικόλαος. "Μελετώντας τον αλγόριθμο Metropolis-Hastings." Thesis, 2012. http://hdl.handle.net/10889/5920.

Full text
Abstract:
This thesis belongs to the research area of computational statistics, as we deal with methods for simulating from a distribution π (the target distribution) and for computing complex integrals. In many real problems, where the form of π is very complicated and/or the dimension of the state space is large, simulation from π cannot be carried out with simple techniques, and the integrals are very difficult, if not impossible, to compute analytically. We therefore resort to Monte Carlo (MC) and Markov chain Monte Carlo (MCMC) techniques, which simulate values of random variables and estimate the integrals through appropriate functions of the simulated values. MC techniques produce independent observations, either directly from the target distribution π or from a different proposal distribution g. MCMC techniques simulate Markov chains with stationary distribution π, so the observations are dependent. In this work we deal mainly with the Metropolis-Hastings algorithm, one of the most important, if not the most important, of the MCMC algorithms. More specifically, chapter 2 briefly reviews well-known MC techniques, such as the acceptance-rejection method, the inversion method and importance sampling, as well as MCMC techniques such as the Metropolis-Hastings algorithm, the Gibbs sampler and the Metropolis-within-Gibbs method. Chapter 3 gives a detailed account of the Metropolis-Hastings algorithm: we first present a brief history, then give a detailed description of the algorithm, present some of its special forms, and discuss the basic properties that characterise it. The chapter concludes with applications to simulated and real data. The fourth chapter deals with methods for estimating the variance of the ergodic mean obtained from MCMC techniques, with particular attention to the batch means and spectral variance estimator methods. Finally, chapter 5 deals with finding a suitable proposal distribution for the Metropolis-Hastings algorithm. Although the algorithm converges for any proposal distribution satisfying some basic assumptions, it is known that an appropriate choice of proposal distribution improves the convergence of the algorithm. Determining the optimal proposal distribution for a given target distribution is a very important but equally difficult problem. It has been approached both with simple trial-and-error techniques and with adaptive algorithms that find a "good" proposal distribution automatically.
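A small experiment illustrates why the choice of proposal distribution discussed in chapter 5 matters. For a random-walk Metropolis chain targeting a standard normal, the sketch below reports the acceptance rate for three proposal scales (the values are arbitrary): too small a step accepts almost everything but explores slowly, too large a step is rarely accepted, and intermediate scales balance the two.

```python
import numpy as np

rng = np.random.default_rng(6)

def mh_acceptance(scale, n=50000):
    # Random-walk Metropolis on a standard normal target pi.
    x, acc = 0.0, 0
    for _ in range(n):
        y = x + rng.normal(0.0, scale)
        if np.log(rng.random()) < -0.5 * (y * y - x * x):  # log pi(y) - log pi(x)
            x, acc = y, acc + 1
    return acc / n

for scale in (0.1, 2.4, 20.0):
    print(scale, mh_acceptance(scale))
```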
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Yuxin. "Massively Parallel Dimension Independent Adaptive Metropolis." Thesis, 2015. http://hdl.handle.net/10754/552902.

Full text
Abstract:
This work considers black-box Bayesian inference over high-dimensional parameter spaces. The well-known and widely respected adaptive Metropolis (AM) algorithm is extended herein to asymptotically scale uniformly with respect to the underlying parameter dimension, by respecting the variance, for Gaussian targets. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this massively parallel dimension-independent adaptive Metropolis (MPDIAM) GPU implementation exhibits a factor of four improvement versus the CPU-based Intel MKL version alone, which is itself already a factor of three improvement versus the serial version. The scaling to multiple CPUs and GPUs exhibits a form of strong scaling in terms of the time necessary to reach a certain convergence criterion, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. This is illustrated by efficiently sampling from several Gaussian and non-Gaussian targets for dimension d ≥ 1000.
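For orientation, the classic adaptive Metropolis idea that the thesis builds on can be sketched as follows (a generic Haario-style AM in Python, our illustration rather than the DIAM/MPDIAM algorithms of the thesis; the toy Gaussian target is an assumption made for the example):

```python
import numpy as np

def adaptive_metropolis(log_target, d, n_iter, eps=1e-6, seed=1):
    rng = np.random.default_rng(seed)
    sd = 2.38**2 / d                    # standard AM proposal scaling
    x = np.zeros(d)
    lp_x = log_target(x)
    mean, cov = np.zeros(d), np.eye(d)  # running empirical moments
    chain = np.empty((n_iter, d))
    for t in range(1, n_iter + 1):
        y = rng.multivariate_normal(x, sd * (cov + eps * np.eye(d)))
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp_x:
            x, lp_x = y, lp_y
        chain[t - 1] = x
        delta = x - mean                # stochastic-approximation updates
        mean = mean + delta / t
        cov = cov + (np.outer(delta, x - mean) - cov) / t
    return chain

# toy use: a correlated Gaussian target in d = 10 dimensions
d = 10
P = np.eye(d) + 0.5 * np.ones((d, d))   # a positive-definite precision matrix
samples = adaptive_metropolis(lambda x: -0.5 * x @ P @ x, d, 20_000)
```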
APA, Harvard, Vancouver, ISO, and other styles
43

Atchadé, Yves F. "Quelques contributions sur les méthodes de Monte Carlo." Thèse, 2003. http://hdl.handle.net/1866/14581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Jung, Maarten Lars. "Reaction Time Modeling in Bayesian Cognitive Models of Sequential Decision-Making Using Markov Chain Monte Carlo Sampling." 2020. https://tud.qucosa.de/id/qucosa%3A74048.

Full text
Abstract:
In this thesis, a new approach for generating reaction time predictions for Bayesian cognitive models of sequential decision-making is proposed. The method is based on a Markov chain Monte Carlo algorithm that, by utilizing prior distributions and likelihood functions of possible action sequences, generates predictions about the time needed to choose one of these sequences. The plausibility of the reaction time predictions produced by this algorithm was investigated for simple exemplary distributions as well as for prior distributions and likelihood functions of a Bayesian model of habit learning. Simulations showed that the reaction time distributions generated by the Markov chain Monte Carlo sampler exhibit key characteristics of reaction time distributions typically observed in decision-making tasks. The introduced method can be easily applied to various Bayesian models for decision-making tasks with any number of choice alternatives. It thus provides the means to derive reaction time predictions for models where this has not been possible before.
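As a loose illustration of the general idea only — the thesis's sampler is more elaborate — one could treat the number of Metropolis iterations a discrete chain over action sequences needs before first reaching the eventually chosen sequence as a crude reaction-time proxy. The posterior values below are invented for the example:

```python
import numpy as np

def rt_proxy(log_post, n_sims=1000, seed=2):
    """Steps a Metropolis chain over actions needs to first hit the MAP action."""
    rng = np.random.default_rng(seed)
    n_actions = len(log_post)
    best = int(np.argmax(log_post))
    rts = np.empty(n_sims)
    for i in range(n_sims):
        a, steps = rng.integers(n_actions), 0
        while a != best:
            b = rng.integers(n_actions)            # uniform independence proposal
            if np.log(rng.uniform()) < log_post[b] - log_post[a]:
                a = b                              # Metropolis accept step
            steps += 1
        rts[i] = steps
    return rts

log_post = np.log([0.6, 0.3, 0.1])  # invented posterior over 3 action sequences
rts = rt_proxy(log_post)
print(rts.mean(), np.percentile(rts, [50, 90]))
```

As with empirical reaction times, the resulting distribution is right-skewed, and more peaked posteriors (easier decisions) yield shorter times.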
APA, Harvard, Vancouver, ISO, and other styles
45

Kyimba, Eloi-Alain Kyalondawa. "Comparison of Monte Carlo Metropolis, Swendsen-Wang, and Wolff algorithms in the critical region for the 2-dimensional Ising model." 2006. http://www.lib.ncsu.edu/theses/available/etd-03152007-235327/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fontaine, Simon. "MCMC adaptatifs à essais multiples." Thèse, 2019. http://hdl.handle.net/1866/22547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

De, Freitas Allan. "A Monte-Carlo approach to dominant scatterer tracking of a single extended target in high range-resolution radar." Diss., 2013. http://hdl.handle.net/2263/33372.

Full text
Abstract:
In high range-resolution (HRR) radar systems, the returns from a single target may fall in multiple adjacent range bins which individually vary in amplitude. A target following this representation is commonly referred to as an extended target and results in more information about the target. However, extracting this information from the radar returns is challenging due to several complexities. These complexities include the single dimensional nature of the radar measurements, complexities associated with the scattering of electromagnetic waves, and complex environments in which radar systems are required to operate. There are several applications of HRR radar systems which extract target information with varying levels of success. A commonly used application is that of imaging referred to as synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging. These techniques combine multiple single dimension measurements in order to obtain a single two dimensional image. These techniques rely on rotational motion between the target and the radar occurring during the collection of the single dimension measurements. In the case of ISAR, the radar is stationary while motion is induced by the target. There are several difficulties associated with the unknown motion of the target when standard Doppler processing techniques are used to synthesise ISAR images. In this dissertation, a non-standard Doppler approach, based on Bayesian inference techniques, was considered to address the difficulties. The target and observations were modelled with a non-linear state space model. Several different Bayesian techniques were implemented to infer the hidden states of the model, which coincide with the unknown characteristics of the target. A simulation platform was designed in order to analyse the performance of the implemented techniques. The implemented techniques were capable of successfully tracking a randomly generated target in a controlled environment. The influence of varying several parameters, related to the characteristics of the target and the implemented techniques, was explored. Finally, a comparison was made between standard Doppler processing and the Bayesian methods proposed.
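As a generic example of the Monte-Carlo tracking machinery such work builds on — not the dissertation's HRR radar model, which is far more involved — here is a minimal bootstrap particle filter for a toy one-dimensional nonlinear state-space model:

```python
import numpy as np

def bootstrap_pf(obs, n_part=500, sig_x=1.0, sig_y=0.5, seed=3):
    rng = np.random.default_rng(seed)
    parts = rng.standard_normal(n_part)                 # initial particle cloud
    means = np.empty(len(obs))
    for k, y in enumerate(obs):
        parts = parts + sig_x * rng.standard_normal(n_part)   # propagate prior
        logw = -0.5 * ((y - np.sin(parts)) / sig_y) ** 2      # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[k] = parts @ w                                  # filtered mean
        parts = parts[rng.choice(n_part, size=n_part, p=w)]   # multinomial resampling
    return means

# simulate a random-walk state, observe sin(state) in noise, then filter
rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(100))
obs = np.sin(x) + 0.5 * rng.standard_normal(100)
est = bootstrap_pf(obs)
```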
Dissertation (MEng)--University of Pretoria, 2013.
Electrical, Electronic and Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Boqian. "Efficient Path and Parameter Inference for Markov Jump Processes." Thesis, 2019.

Find full text
Abstract:
Markov jump processes are continuous-time stochastic processes widely used in a variety of applied disciplines. Inference typically proceeds via Markov chain Monte Carlo (MCMC), the state-of-the-art being a uniformization-based auxiliary variable Gibbs sampler. This was designed for situations where the process parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating it into a larger Gibbs sampler. This strategy of sampling parameters given path, and path given parameters can result in poor Markov chain mixing.

In this thesis, we focus on the problem of path and parameter inference for Markov jump processes.

In the first part of the thesis, a simple and efficient MCMC algorithm is proposed to address the problem of path and parameter inference for Markov jump processes. Our scheme brings Metropolis-Hastings approaches for discrete-time hidden Markov models to the continuous-time setting, resulting in a complete and clean recipe for parameter and path inference in Markov jump processes. In our experiments, we demonstrate superior performance over Gibbs sampling, a more naive Metropolis-Hastings algorithm we propose, as well as another popular approach, particle Markov chain Monte Carlo. We also show our sampler inherits geometric mixing from an ‘ideal’ sampler that is computationally much more expensive.

In the second part of the thesis, a novel collapsed variational inference algorithm is proposed. Our variational inference algorithm leverages ideas from discrete-time Markov chains, and exploits a connection between Markov jump processes and discrete-time Markov chains through uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.
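The uniformization construction that both parts of the thesis exploit can be sketched in a few lines (a toy two-state chain with arbitrary rates; this illustrates the construction itself, not the thesis's inference algorithms):

```python
import numpy as np

def uniformized_path(Q, T, x0=0, seed=5):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    omega = 1.5 * np.max(-np.diag(Q))    # any rate above the largest exit rate works
    B = np.eye(n) + Q / omega            # kernel of the subordinated discrete chain
    t, times, states = 0.0, [0.0], [x0]
    while True:
        t += rng.exponential(1.0 / omega)        # events of a Poisson(omega) process
        if t > T:
            return times, states
        x = rng.choice(n, p=B[states[-1]])
        if x != states[-1]:                      # drop virtual (self-)jumps
            times.append(t)
            states.append(int(x))

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])                      # toy generator, rates arbitrary
times, states = uniformized_path(Q, T=10.0)
```

Because the subordinated chain may make self-transitions ("virtual jumps"), thinning them out recovers a trajectory of the original jump process.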

APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Mingqi. "Population SAMC, ChIP-chip Data Analysis and Beyond." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8752.

Full text
Abstract:
This dissertation research consists of two topics, population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each of the two topics, respectively. Although the reversible jump MCMC (RJMCMC) has the ability to traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in a local mode when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes the difficulty in dimension-jumping moves by introducing a self-adjusting mechanism. However, this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm; it works on population chains of SAMC, which can provide a more efficient self-adjusting mechanism and make use of the crossover operator from genetic algorithms to further increase its efficiency. Under mild conditions, the convergence of this algorithm is proved. The effectiveness of Pop-SAMC in Bayesian model selection problems is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC outperforms both the single-chain SAMC and RJMCMC significantly. In the ChIP-chip data analysis study, we developed two methodologies to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for evaluating test scores in a multiple hypothesis test by making use of population information across samples. Both methods are applied to real and simulated datasets. The numerical results indicate that the Bayesian latent model can outperform existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
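For readers unfamiliar with SAMC, a bare-bones single-chain version on a bimodal one-dimensional target looks as follows (our toy sketch; Pop-SAMC runs several such chains with crossover moves, which this omits, and the target and energy partition are arbitrary choices):

```python
import numpy as np

def samc(log_target, edges, n_iter, step=1.0, t0=1000.0, seed=6):
    rng = np.random.default_rng(seed)
    m = len(edges) + 1                   # energy subregions of the partition
    theta = np.zeros(m)                  # self-adjusting log-weights
    pi = np.full(m, 1.0 / m)             # desired visiting frequencies
    region = lambda u: int(np.searchsorted(edges, -log_target(u)))
    x = 0.0
    for t in range(1, n_iter + 1):
        y = x + step * rng.standard_normal()
        log_r = (log_target(y) - theta[region(y)]) - (log_target(x) - theta[region(x)])
        if np.log(rng.uniform()) < log_r:
            x = y                        # move under the weighted density
        gamma = t0 / max(t0, t)          # decreasing gain sequence
        visit = np.zeros(m)
        visit[region(x)] = 1.0
        theta += gamma * (visit - pi)    # the self-adjusting update
    return theta

# bimodal target: mixture of N(-4, 1) and N(4, 1) kernels
log_target = lambda u: np.logaddexp(-0.5 * (u + 4) ** 2, -0.5 * (u - 4) ** 2)
theta = samc(log_target, edges=np.linspace(0.0, 9.0, 10), n_iter=50_000)
```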
APA, Harvard, Vancouver, ISO, and other styles
50

Mitsakakis, Nikolaos. "Bayesian Methods in Gaussian Graphical Models." Thesis, 2010. http://hdl.handle.net/1807/24831.

Full text
Abstract:
This thesis contributes to the field of Gaussian Graphical Models by exploring, numerically or theoretically, various topics in Bayesian methods for Gaussian Graphical Models, and by providing a number of interesting results whose further exploration would be promising, pointing to numerous future research directions. Gaussian Graphical Models are statistical methods for the investigation and representation of interdependencies between components of continuous random vectors. This thesis aims to investigate some issues related to the application of Bayesian methods for Gaussian Graphical Models. We adopt the popular $G$-Wishart conjugate prior $W_G(\delta,D)$ for the precision matrix. We propose an efficient sampling method for the $G$-Wishart distribution based on the Metropolis-Hastings algorithm and show its validity through a number of numerical experiments. We show that this method can be easily used to estimate the Deviance Information Criterion, providing a computationally inexpensive approach for model selection. In addition, we look at the marginal likelihood of a graphical model given a set of data. This is proportional to the ratio of the posterior over the prior normalizing constant. We explore methods for the estimation of this ratio, focusing primarily on applying the Monte Carlo simulation method of path sampling. We also explore numerically the effect of the completion of the incomplete matrix $D^{\mathcal{V}}$, a hyperparameter of the $G$-Wishart distribution, on the estimation of the normalizing constant. We also derive a series of exact and approximate expressions for the Bayes Factor between two graphs that differ by one edge. A new theoretical result regarding the limit of the normalizing constant multiplied by the hyperparameter $\delta$ is given, and its implications for the validity of an improper prior and of the subsequent Bayes Factor are discussed.
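As a cartoon of the path-sampling computation mentioned in this abstract — a one-dimensional Gaussian toy, not the much harder $G$-Wishart case — the log-ratio of two normalizing constants can be estimated by integrating an expectation along a geometric path between the two unnormalized densities:

```python
import numpy as np

rng = np.random.default_rng(7)

log_q0 = lambda x: -0.5 * x**2            # N(0, 1) kernel, unnormalized
log_q1 = lambda x: -0.5 * x**2 / 4.0      # N(0, 4) kernel, unnormalized

# geometric path q_t proportional to q0^(1-t) * q1^t; d/dt log q_t = log q1 - log q0
ts = np.linspace(0.0, 1.0, 21)
grads = np.empty(len(ts))
for i, t in enumerate(ts):
    var_t = 1.0 / ((1 - t) + t / 4.0)     # q_t is Gaussian, so sampled exactly here
    x = rng.normal(0.0, np.sqrt(var_t), size=5000)
    grads[i] = np.mean(log_q1(x) - log_q0(x))

# trapezoidal rule along the path estimates log(Z1 / Z0)
log_ratio = np.sum((grads[1:] + grads[:-1]) / 2.0 * np.diff(ts))
print(log_ratio, np.log(2.0))             # exact answer is log 2
```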
APA, Harvard, Vancouver, ISO, and other styles