To see the other types of publications on this topic, follow the link: Metropolis-Hastings algorithm.

Dissertations / Theses on the topic 'Metropolis-Hastings algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 36 dissertations / theses for your research on the topic 'Metropolis-Hastings algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

BARROS, Kleber Napoleão Nunes de Oliveira. "Abordagem clássica e Bayesiana em modelos simétricos transformados aplicados à estimativa de crescimento em altura de Eucalyptus urophylla no Polo Gesseiro do Araripe-PE." Universidade Federal Rural de Pernambuco, 2010. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5142.

Full text
Abstract:
This work presents the nonlinear Chapman-Richards growth model with errors distributed according to the new class of transformed symmetric models, with Bayesian inference for the parameters. The objective was to apply this framework, via the Metropolis-Hastings algorithm, to select the equation that best predicted the heights of Eucalyptus urophylla clones from an experiment established at the Agronomic Institute of Pernambuco (IPA) in the city of Araripina. The Gypsum Pole of Araripe is an industrial zone in the upper interior of Pernambuco that consumes a large amount of wood from the native vegetation (caatinga) for the calcination of gypsum. In this scenario, there is great need for an economically and environmentally feasible solution that minimises the pressure on the native vegetation. The genus Eucalyptus presents itself as an alternative because of its rapid growth and versatility, and height has proven to be an important factor in the prognosis of productivity and in the selection of the best-adapted clones. One of the main growth curves is the Chapman-Richards model with normally distributed errors; however, alternatives have been proposed to reduce the influence of atypical observations under this model. The data were taken from a 72-month-old plantation. Inference and diagnostics were performed for the transformed and untransformed models with several symmetric distributions. After selecting the best equation, convergence plots for the parameters were shown, along with others demonstrating the fit to the data of the transformed symmetric Student's t model with 5 degrees of freedom, using Bayesian inference on the parameters.
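As a rough illustration of the approach this abstract describes, the sketch below fits Chapman-Richards parameters with a random-walk Metropolis-Hastings sampler. The data, fixed noise level, flat prior, and proposal scales are all invented for the example and are not the thesis's own; the common parameterisation H(t) = a(1 − e^(−bt))^c is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def chapman_richards(t, a, b, c):
    # Chapman-Richards growth curve: H(t) = a * (1 - exp(-b * t))^c
    return a * (1.0 - np.exp(-b * t)) ** c

# Hypothetical height measurements (months, metres); not the thesis data
t_obs = np.array([12.0, 24.0, 36.0, 48.0, 60.0, 72.0])
h_obs = chapman_richards(t_obs, 20.0, 0.04, 1.5) + rng.normal(0.0, 0.3, t_obs.size)

def log_post(theta, sigma=0.3):
    a, b, c = theta
    if a <= 0 or b <= 0 or c <= 0:
        return -np.inf                        # flat prior on positive parameters
    resid = h_obs - chapman_richards(t_obs, a, b, c)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis-Hastings over (a, b, c)
theta = np.array([15.0, 0.05, 1.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.5, 0.005, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples)[2500:]               # discard burn-in
```

Convergence diagnostics like those the thesis reports would then be computed on `post` (trace plots, posterior summaries per parameter).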
APA, Harvard, Vancouver, ISO, and other styles
2

Þorgeirsson, Sverrir. "Bayesian parameter estimation in Ecolego using an adaptive Metropolis-Hastings-within-Gibbs algorithm." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-304259.

Full text
Abstract:
Ecolego is scientific software that can be used to model diverse systems within fields such as radioecology and pharmacokinetics. The purpose of this research is to develop an algorithm for estimating the probability density functions of unknown parameters of Ecolego models. In order to do so, a general-purpose adaptive Metropolis-Hastings-within-Gibbs algorithm is developed and tested on some examples of Ecolego models. The algorithm works adequately on those models, which indicates that the algorithm could be integrated successfully into future versions of Ecolego.
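A minimal sketch of the adaptive Metropolis-Hastings-within-Gibbs idea described here, with a correlated bivariate normal standing in for an Ecolego model posterior (the target, adaptation rule, and tuning constants are assumptions for illustration, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior: a correlated bivariate normal; an actual model's
# log-posterior would replace log_target below
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)

def log_target(x):
    return -0.5 * x @ prec @ x

x = np.zeros(2)
scales = np.ones(2)                 # per-coordinate proposal std, adapted online
accepts = np.zeros(2)
samples = []
n_iter = 20000
for i in range(1, n_iter + 1):
    for j in range(2):              # Gibbs-style sweep: one coordinate at a time
        prop = x.copy()
        prop[j] += rng.normal(0.0, scales[j])
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            accepts[j] += 1
        # Diminishing adaptation toward ~44% acceptance (the 1-D optimum)
        scales[j] *= np.exp((accepts[j] / i - 0.44) / np.sqrt(i))
    samples.append(x.copy())

post = np.array(samples)[n_iter // 2:]
```

The per-coordinate scales settle where roughly 44% of proposals are accepted, which is the classic tuning target for one-dimensional random-walk updates.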
APA, Harvard, Vancouver, ISO, and other styles
3

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis-Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process, with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data.
Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this can allow inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
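The pseudo-marginal construction that opens this abstract can be shown in miniature: the intractable likelihood of a toy latent-variable model is replaced by an unbiased Monte Carlo estimate, and the current estimate is carried in the chain state. Everything concrete below (the model, particle count, proposal scale) is an invented example, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from a toy latent-variable model: y = mu + z + eps with z, eps ~ N(0, 1),
# so marginally y ~ N(mu, 2). We pretend this marginal density is unavailable.
y = rng.normal(1.0, np.sqrt(2.0), size=50)

def lik_hat(mu, n_particles=32):
    # Unbiased Monte Carlo estimate of prod_i p(y_i | mu), with
    # p(y | mu) = E_z[N(y; mu + z, 1)] estimated by averaging over z draws
    z = rng.normal(0.0, 1.0, size=(n_particles, 1))
    dens = np.exp(-0.5 * (y - mu - z) ** 2) / np.sqrt(2.0 * np.pi)
    return np.prod(dens.mean(axis=0))

mu = y.mean()
lhat = lik_hat(mu)               # current likelihood *estimate*, kept in the state
draws = []
for _ in range(4000):
    prop = mu + rng.normal(0.0, 0.3)
    lhat_prop = lik_hat(prop)    # fresh estimate for the proposal only
    # The ratio of estimates replaces the true likelihood ratio; recycling
    # the old estimate is what keeps the marginal chain exact.
    if rng.uniform() < lhat_prop / lhat:
        mu, lhat = prop, lhat_prop
    draws.append(mu)
```

A lucky over-estimate of `lhat` makes subsequent proposals hard to accept, which is exactly the ‘sticking’ behaviour the abstract mentions; including the estimator's auxiliary variables in the state is the thesis's route around it.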
APA, Harvard, Vancouver, ISO, and other styles
4

Gendre, Victor Hugues. "Predicting short term exchange rates with Bayesian autoregressive state space models: an investigation of the Metropolis Hastings algorithm forecasting efficiency." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437399395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cercone, Maria Grazia. "Origini e sviluppi del calcolo delle probabilità ed alcune applicazioni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13500/.

Full text
Abstract:
This thesis retraces the evolution of probability theory through the centuries, starting from its origins and reaching the 20th century, which was marked by the introduction of the theory of Markov chains; by providing important mathematical concepts, this theory would later find application in widely different fields. The first chapter is entirely devoted to the historical development of probability theory: it starts from the ancient civilisations, where the idea of probability arose around questions of everyday life and gambling, and arrives at the 20th century, in which three schools of thought formed: frequentist, subjectivist, and axiomatic. The second chapter examines the figure of A. A. Markov and his important contribution to probability theory through the introduction of the theory of Markov chains. Important applications of this theory are then treated, such as the Metropolis-Hastings algorithm, Gibbs sampling, and the simulated annealing procedure. The third chapter analyses a further application of Markov theory: the link analysis ranking algorithm known as PageRank, which underlies the Google search engine.
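The PageRank application discussed in this thesis reduces to finding the stationary distribution of a Markov chain. A minimal sketch, on a hypothetical 4-page link graph with the usual damping factor:

```python
import numpy as np

# Hypothetical 4-page link graph: adj[i, j] = 1 if page i links to page j
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

n = adj.shape[0]
d = 0.85                                    # standard damping factor
P = adj / adj.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
G = d * P + (1 - d) / n                     # Markov chain with uniform teleporting

rank = np.full(n, 1.0 / n)
for _ in range(100):                        # power iteration on the chain
    rank = rank @ G
rank /= rank.sum()                          # guard against round-off drift
```

Page 2, which receives the most in-links in this toy graph, ends up with the highest rank; the teleporting term guarantees the chain is irreducible, so the power iteration converges to a unique stationary distribution.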
APA, Harvard, Vancouver, ISO, and other styles
6

Volfson, Alexander. "Exploring the optimal Transformation for Volatility." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/472.

Full text
Abstract:
This paper explores the fit of a stochastic volatility model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution, to the continuously compounded daily returns of the Australian stock index. Estimation was difficult, and over-fitting likely, because the model contains more variables than there are data points. We developed a revised model that held a couple of these variables fixed and then, further, a model that reduced the number of variables significantly by grouping trading days. A Metropolis-Hastings algorithm was used to simulate the joint density and derive estimated volatilities. Though autocorrelations were higher with a smaller Box-Cox transformation parameter, the fit of the distribution was much better.
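The model family in this abstract can be sketched generatively: the Box-Cox transform of the squared volatility follows a Gaussian AR(1), and returns are drawn with that volatility. The parameter values below are invented for illustration and are not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def box_cox(x, lam):
    # Box-Cox transform; lam -> 0 recovers log(x)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# State equation: box_cox(sigma_t^2, lam) = phi * box_cox(sigma_{t-1}^2, lam) + eta_t
lam, phi, tau, T = 0.5, 0.95, 0.1, 500     # illustrative values only
g = np.zeros(T)                            # transformed squared volatility
for t in range(1, T):
    g[t] = phi * g[t - 1] + rng.normal(0.0, tau)
sigma2 = (lam * g + 1.0) ** (1.0 / lam)    # invert the Box-Cox transform
returns = rng.normal(0.0, np.sqrt(sigma2)) # simulated daily returns
```

Setting `lam` near 0 recovers the familiar log-volatility specification, which is why the Box-Cox parameter indexes a whole family of stochastic volatility models.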
APA, Harvard, Vancouver, ISO, and other styles
7

VASCONCELOS, Josimar Mendes de. "Equações simultâneas no contexto clássico e bayesiano: uma abordagem à produção de soja." Universidade Federal Rural de Pernambuco, 2011. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5012.

Full text
Abstract:
In recent years, the number of researchers and scientific studies on the planting, production, and value of soybeans in grain in Brazil has increased. In view of this, this dissertation seeks to analyse the data and fit models that satisfactorily explain the observed variability of the quantity produced and the value of production of soybeans in grain in Brazil, within the scope of the study. These analyses use classical and Bayesian inference in the context of simultaneous equations, through two-stage least squares. Classical inference uses the two-stage least squares estimator. Bayesian inference employs the Markov Chain Monte Carlo method with the Gibbs and Metropolis-Hastings algorithms, by means of the simultaneous-equations technique. The study considers the variables harvested area, quantity produced, value of production, and gross domestic product; the model was fitted with quantity produced as the response variable and then with value of production as the response variable, before finally making the corrections and obtaining the final result, under both the classical and the Bayesian method. Based on the standard deviations, the t-test statistic, and the normalised Akaike and Schwarz information criteria, the good performance of the Markov Chain Monte Carlo method with the Gibbs algorithm stands out; it is also an efficient method for this modelling and easy to implement in the statistical software R and WinBUGS, since ready-made libraries already exist to run it. Therefore, the Markov Chain Monte Carlo method via the Gibbs algorithm is suggested for estimating the production of soybeans in grain in Brazil.
APA, Harvard, Vancouver, ISO, and other styles
8

Merhi, Bleik Josephine. "Modeling, estimation and simulation into two statistical models : quantile regression and blind deconvolution." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2506.

Full text
Abstract:
This thesis is dedicated to the estimation of two statistical models: the simultaneous regression quantiles model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term has an asymmetric Laplace distribution and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the noncrossing property of quantiles. For implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method relative to other alternatives are shown in simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterising both the true noise level and the inverse filter, we provide a new estimation procedure that is simpler to implement than other existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimates. Including a comparison with another method, we perform a simulation study that empirically demonstrates the computational performance of our estimation procedures.
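The asymmetric Laplace working likelihood used here has a well-known numerical counterpart: maximising it in the location parameter is the same as minimising the quantile "check" loss, whose minimiser is the p-quantile. A small sketch on invented data (the grid and sample are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

def check_loss(u, p):
    # Quantile ("check"/pinball) loss; minimising it over a constant yields
    # the p-quantile, equivalently the MLE of the location parameter of an
    # asymmetric Laplace distribution
    return np.sum(u * (p - (u < 0)))

y = rng.normal(size=2000)
p = 0.25
grid = np.linspace(-3.0, 3.0, 601)
best = grid[np.argmin([check_loss(y - c, p) for c in grid])]
# best lands next to the empirical 25% quantile of y
```

This equivalence is why placing an asymmetric Laplace likelihood on the errors turns Bayesian estimation into quantile regression.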
APA, Harvard, Vancouver, ISO, and other styles
9

Alves, Andressa Schneider. "Algoritmos para o encaixe de moldes com formato irregular em tecidos listrados." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142744.

Full text
Abstract:
This thesis proposes a solution to the problem of packing patterns on striped fabric in the clothing industry. The patterns are irregularly shaped pieces that must be placed on the raw material, in this case the fabric, which is then cut. In the specific problem of packing on striped fabric, the positions at which the patterns are placed must ensure that, after the garment is sewn, the stripes are continuous. Thus, the theoretical foundation of this work covers topics in fashion and clothing design, such as the types and rapports of striped fabric and the possibilities for rotating and placing patterns on striped fabric. It also covers topics in combinatorial optimisation research: the characteristics of two-dimensional packing and cutting problems and the algorithms used by several authors to solve them. The Markov Chain Monte Carlo method and the Metropolis-Hastings algorithm are described at the end of the theoretical foundation. Based on this research, two different algorithms for the packing problem on striped fabric are proposed: an algorithm with a pre-processing step and an algorithm that searches for the best packing using the Metropolis-Hastings algorithm. Both are implemented in the Striped Riscare software, a continuation of the Riscare software for plain fabrics developed in Alves (2010). The performance of both algorithms was tested on six benchmark problems from the literature, and a new problem called "male shirt" is proposed here. The benchmark problems were originally proposed for plain raw material, and the male shirt problem specifically for striped fabrics. Of the two algorithms, the best-packing search achieved better fabric-usage efficiency on all problems tested. Compared with the best results published in the literature for plain raw material, it produced packings with lower efficiencies, but still above the level recommended by the fashion-design literature for patterned fabrics.
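Using a Metropolis acceptance rule to search a combinatorial layout space, as the best-packing algorithm does, can be illustrated with a one-dimensional toy: order strips so that cut positions land near stripe boundaries. The widths, stripe repeat, cost function, and swap move below are all invented; the thesis's two-dimensional geometry is far richer.

```python
import numpy as np

rng = np.random.default_rng(6)

widths = rng.uniform(1.0, 3.0, size=8)        # hypothetical piece widths
stripe = 2.0                                  # stripe repeat of the fabric

def cost(order):
    pos = np.cumsum(widths[order])            # cut positions for this ordering
    # Penalise cuts that fall far from the nearest stripe boundary
    return np.sum(np.minimum(pos % stripe, stripe - pos % stripe))

order = np.arange(8)
best, best_cost = order.copy(), cost(order)
temp = 1.0
for _ in range(5000):
    i, j = rng.integers(8, size=2)
    prop = order.copy()
    prop[i], prop[j] = prop[j], prop[i]       # propose swapping two pieces
    delta = cost(prop) - cost(order)
    if delta < 0 or rng.uniform() < np.exp(-delta / temp):  # Metropolis rule
        order = prop
        if cost(order) < best_cost:
            best, best_cost = order.copy(), cost(order)
```

Accepting some uphill moves lets the search escape local minima that a greedy swap heuristic would get stuck in, which is the appeal of Metropolis-Hastings as a layout-search engine.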
APA, Harvard, Vancouver, ISO, and other styles
10

Zeppilli, Giulia. "Alcune applicazioni del Metodo Monte Carlo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3091/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Szymczak, Marcin. "Programming language semantics as a foundation for Bayesian inference." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28993.

Full text
Abstract:
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. 
We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
APA, Harvard, Vancouver, ISO, and other styles
12

Frühwirth-Schnatter, Sylvia, and Rudolf Frühwirth. "Bayesian Inference in the Multinomial Logit Model." Austrian Statistical Society, 2012. http://epub.wu.ac.at/5629/1/186%2D751%2D1%2DSM.pdf.

Full text
Abstract:
The multinomial logit model (MNL) possesses a latent variable representation in terms of random variables following a multivariate logistic distribution. Based on multivariate finite mixture approximations of the multivariate logistic distribution, various data-augmented Metropolis-Hastings algorithms are developed for Bayesian inference in the MNL model.
APA, Harvard, Vancouver, ISO, and other styles
13

Datta, Sagnik. "Fully bayesian structure learning of bayesian networks and their hypergraph extensions." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2283.

Full text
Abstract:
In this thesis, I address the important problem of determining the structure of complex networks, using the widely studied class of Bayesian network models as a concrete vehicle for my ideas. The structure of a Bayesian network represents a set of conditional independence relations that hold in the domain. Learning the structure of the Bayesian network that represents a domain can reveal insights into its underlying causal structure. Moreover, it can be used to predict quantities that are difficult, expensive, or unethical to measure, such as the probability of cancer, based on other quantities that are easier to obtain. The contributions of this thesis include: (A) software developed in the C language for structure learning of Bayesian networks; (B) the introduction of a new jumping kernel in the Metropolis-Hastings algorithm for faster sampling of networks; (C) the extension of the notion of Bayesian networks to structures involving loops; and (D) software developed specifically to learn cyclic structures. Our primary objective is structure learning, and thus the graph structure is our parameter of interest; all other parameters appearing in the mathematical models are treated as nuisance parameters.
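Sampling over network structures with a Metropolis-Hastings jumping kernel can be sketched in a few lines. The graphs below are undirected and the score is a made-up sparsity penalty standing in for a posterior over structures; a real structure sampler, like the one this thesis develops, would score (acyclic) graphs against data.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

n = 4
pairs = list(combinations(range(n), 2))      # candidate edges on 4 nodes

def log_score(edges):
    return -1.5 * len(edges)                 # toy score: each edge is penalised

edges = set()
trace = []
for _ in range(10000):
    pair = pairs[rng.integers(len(pairs))]   # jumping kernel: pick a pair...
    prop = set(edges)
    prop.symmetric_difference_update({pair}) # ...and toggle its edge
    if np.log(rng.uniform()) < log_score(prop) - log_score(edges):
        edges = prop
    trace.append(len(edges))
```

Because the toggle proposal is symmetric, the acceptance ratio reduces to the score ratio; a richer jumping kernel (edge addition, deletion, reversal) changes only the proposal step.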
APA, Harvard, Vancouver, ISO, and other styles
14

Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.

Full text
15

Toyinbo, Peter Ayo. "Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3673.

Full text
Abstract:
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end, a suitable statistical model will allow researchers to answer a major research question: who benefits from, or is harmed by, this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and must be estimated from observed indicators that are measured with error; it may also have a nonlinear relationship with the outcome. Most of the existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even if the models include higher-order polynomials, there may be problems when the focus of interest relates to the function over its whole domain. The aim here is to develop a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and (1) baseline characteristics measured with error, and (2) baseline-treatment interactions, such that the shapes of these relationships are data driven and need not be determined a priori. In the ALV model structure, the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves its users of the need to decide functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
16

Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.

Full text
Abstract:
The thesis contains six chapters. The first chapter is an introduction to linear regression and to the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator; the convergence statistics of this algorithm are also given in this chapter, using entropy and the Pitman-Yor estimator. Chapter 3 is devoted to the comparison of Monte Carlo and quasi-Monte Carlo methods in numerical computations of the Bayesian Lasso; this comparison shows that the Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso, expressed in terms of the incomplete Gamma function; this allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limiting law of a multivariate stochastic differential equation; this allowed us to compute the Bayesian Lasso using the semi-implicit and explicit Euler schemes together with Monte Carlo, multilevel Monte Carlo (MLMC), and the Metropolis-Hastings algorithm. A comparison of computational costs shows that the pair (semi-implicit Euler scheme, MLMC) wins against the other (scheme, method) pairs. Finally, in Chapter 6 we establish the rate of convergence of the Bayesian Lasso to the Lasso when the signal-to-noise ratio is constant and the noise tends to 0; this allowed us to give new criteria for the convergence of the Metropolis-Hastings algorithm.
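The explicit Euler scheme mentioned above can be sketched as an unadjusted Langevin iteration targeting a Bayesian Lasso posterior. This is only an illustrative stand-in under stated assumptions: the subgradient (`np.sign`) treatment of the l1 term, the step size, and the synthetic data are assumptions, and the thesis's semi-implicit scheme and MLMC estimators are not reproduced here.

```python
import numpy as np

def bayesian_lasso_ula(X, y, lam=1.0, sigma2=1.0, h=1e-3, n_iter=5000, seed=0):
    """Explicit Euler (unadjusted Langevin) scheme targeting the Bayesian
    Lasso posterior  pi(b) ~ exp(-||y - X b||^2 / (2 sigma2) - lam ||b||_1).

    The l1 term is non-smooth, so a subgradient (np.sign) is used; this
    is an illustrative simplification, not the thesis's scheme.
    """
    rng = np.random.default_rng(seed)
    b = np.zeros(X.shape[1])
    samples = np.empty((n_iter, b.size))
    for k in range(n_iter):
        grad = X.T @ (y - X @ b) / sigma2 - lam * np.sign(b)  # grad of log pi
        b = b + h * grad + np.sqrt(2.0 * h) * rng.standard_normal(b.size)
        samples[k] = b
    return samples

# Hypothetical data: 50 observations, true coefficients (2, 0, -1).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([2.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(50)
samples = bayesian_lasso_ula(X, y, lam=0.5)
```

The long-run average of `samples` approximates the Bayesian Lasso estimator; a Metropolis-Hastings correction of each step would remove the O(h) discretization bias of this plain scheme.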
17

Jin, Yan. "Bayesian Solution to the Analysis of Data with Values below the Limit of Detection (LOD)." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1227293204.

Full text
18

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226748.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite-dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite-dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach, using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem, illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
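For context, a classical baseline for dimension-independent Metropolis-Hastings on function spaces is the preconditioned Crank-Nicolson (pCN) proposal. The sketch below shows that baseline idea, not the improved algorithm developed in the thesis, and assumes a standard Gaussian prior as a finite-dimensional stand-in for a Gaussian measure on function space.

```python
import numpy as np

def pcn_mh(neg_log_likelihood, dim, beta=0.2, n_iter=2000, seed=0):
    """Baseline preconditioned Crank-Nicolson (pCN) Metropolis-Hastings.

    The proposal v = sqrt(1 - beta^2) u + beta xi preserves the Gaussian
    prior N(0, I), so the acceptance ratio involves only the negative
    log-likelihood Phi -- the key to dimension independence.
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(dim)
    phi_u = neg_log_likelihood(u)
    chain = np.empty((n_iter, dim))
    accepted = 0
    for k in range(n_iter):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        phi_v = neg_log_likelihood(v)
        # Accept with probability min(1, exp(Phi(u) - Phi(v))).
        if np.log(rng.uniform()) < phi_u - phi_v:
            u, phi_u = v, phi_v
            accepted += 1
        chain[k] = u
    return chain, accepted / n_iter

# With a flat likelihood (Phi = 0) every proposal is accepted and the
# chain simply explores the Gaussian prior.
chain, acc = pcn_mh(lambda u: 0.0, dim=5, n_iter=200)
```

Because the acceptance rule never references the dimension, the tuning parameter `beta` can be held fixed as the discretization of the function space is refined.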
19

Martinez, Marie-José. "Modèles linéaires généralisés à effets aléatoires : contributions au choix de modèle et au modèle de mélange." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00388820.

Full text
Abstract:
This work is devoted to the study of generalized linear mixed models (GL2M). In these models, under an assumption of normally distributed random effects, the likelihood based on the marginal distribution of the response vector is, in general, not computable in closed form. In the first part of this work, we revisit different non-exact estimation methods based on approximations carried out at different levels depending on the reasoning. The second part is devoted to the construction of model selection criteria within GL2Ms. We revisit two estimation methods requiring the construction of linearized models and propose criteria based on the marginal likelihood computed in the linearized model obtained at convergence of the estimation procedure. The third and final part concerns mixtures of GL2Ms. The mixture components are defined by GL2Ms and represent different possible states of the individuals. In the exponential-distribution setting, we propose a method for estimating the mixture parameters based on a linearization specific to this distribution. We then propose a more general method that applies to mixtures of arbitrary GL2Ms; it relies on a Metropolis-Hastings step to build an MCEM-type algorithm. The different methods developed are tested by simulation.
20

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Technische Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A20754.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite-dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite-dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach, using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem, illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
21

Bachouch, Achref. "Numerical Computations for Backward Doubly Stochastic Differential Equations and Nonlinear Stochastic PDEs." Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1034/document.

Full text
Abstract:
The purpose of this thesis is to study a numerical method for backward doubly stochastic differential equations (BDSDEs for short). In the last two decades, several methods have been proposed to approximate solutions of standard backward stochastic differential equations. In this thesis, we propose an extension of one of these methods to the doubly stochastic framework. Our numerical method allows us to tackle a large class of nonlinear stochastic partial differential equations (SPDEs for short), thanks to their probabilistic interpretation. In the last part, we study a new particle method in the context of neutron shielding studies.
22

Tremblay, Marie. "Estimation des paramètres des modèles de culture : application au modèle STICS Tournesol." Toulouse 3, 2004. http://www.theses.fr/2004TOU30020.

Full text
23

Joly, Jean-Luc. "Contributions à la génération aléatoire pour des classes d'automates finis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2012/document.

Full text
Abstract:
The concept of automata, central to language theory, is the natural and efficient tool for apprehending various practical problems. The intensive use of finite automata in an algorithmic framework is illustrated by numerous research works. Correctness and performance evaluation are the two fundamental issues of algorithmics. A classic method to evaluate an algorithm is based on the controlled random generation of inputs. The work described in this thesis lies within this context, and more specifically in the field of the uniform random generation of finite automata. The following presentation first proposes the design of a random generator of deterministic, real-time pushdown automata. This design builds on the symbolic method; theoretical results and an experimental study are given. A random generator of non-deterministic automata then illustrates the flexibility of Markov chain Monte Carlo (MCMC) methods, as well as the implementation of the Metropolis-Hastings algorithm to sample up to isomorphism. A result about the mixing time in the general framework is given. MCMC sampling methods raise the problem of evaluating the mixing time of the chain. Drawing on earlier work to build a random generator of partially ordered automata, this work shows how various statistical tools can form a basis to address this issue.
24

Zheng, Zhi-Long, and 鄭智隆. "Rates of convergence of Metropolis-Hastings algorithm." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/86531956484261783613.

Full text
25

Γιαννόπουλος, Νικόλαος. "Μελετώντας τον αλγόριθμο Metropolis-Hastings". Thesis, 2012. http://hdl.handle.net/10889/5920.

Full text
Abstract:
This thesis belongs to the research area of computational statistics, as we deal with the study of methods for simulating from some distribution π (the target distribution) and for computing complex integrals. In many real problems, where the form of π is particularly complicated and/or the dimension of the state space is large, simulation from π cannot be carried out with simple techniques, and likewise the computation of the integrals is very difficult, if not impossible, to do analytically. We therefore resort to Monte Carlo (MC) and Markov chain Monte Carlo (MCMC) techniques, which simulate values of random variables and estimate the integrals through suitable functions of the simulated values. MC techniques produce independent observations, either directly from the target distribution π or from some different proposal distribution g. MCMC techniques simulate Markov chains with stationary distribution π, so the observations are dependent. In this work we deal mainly with the Metropolis-Hastings algorithm, which is one of the most important, if not the most important, MCMC algorithms. More specifically, Chapter 2 gives a brief account of well-known MC techniques, such as the acceptance-rejection method, the inversion method, and importance sampling, as well as MCMC techniques such as the Metropolis-Hastings algorithm, the Gibbs sampler, and the Metropolis-within-Gibbs method. Chapter 3 is a detailed account of the Metropolis-Hastings algorithm. We first present a brief history, then give a detailed description of the algorithm, present some of its special forms, and discuss the basic properties that characterize it. The chapter concludes with a presentation of some applications to simulated as well as real data. The fourth chapter deals with methods for estimating the variance of the ergodic average produced by MCMC techniques; particular attention is paid to the batch means method and to spectral variance estimators. Finally, Chapter 5 deals with finding a suitable proposal distribution for the Metropolis-Hastings algorithm. Although the Metropolis-Hastings algorithm converges for any proposal distribution satisfying some basic assumptions, it is known that an appropriate choice of proposal distribution improves the convergence of the algorithm. Determining the optimal proposal distribution for a given target distribution is a very important but equally difficult problem. This problem has been approached both with very simple trial-and-error techniques and with adaptive algorithms that find a "good" proposal distribution automatically.
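The Metropolis-Hastings algorithm surveyed in this thesis can be stated in a few lines. The following minimal random-walk sketch illustrates the accept/reject mechanism; the Gaussian proposal, its scale, and the standard normal target are assumptions for illustration.

```python
import math
import random

def metropolis_hastings(log_target, x0=0.0, scale=1.0, n_iter=10000, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    Because the proposal is symmetric, the Hastings ratio reduces to
    pi(y) / pi(x), evaluated here in log space for numerical stability.
    """
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, scale)       # propose a local move
        lp_y = log_target(y)
        if math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y                 # accept
        chain.append(x)                     # on rejection, repeat x
    return chain

# Example: sample a standard normal, log-density known up to a constant.
chain = metropolis_hastings(lambda z: -0.5 * z * z, scale=2.0)
```

With a non-symmetric proposal q, the acceptance log-ratio would gain the Hastings correction log q(x|y) - log q(y|x).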
26

Mireuta, Matei. "Étude de la performance d’un algorithme Metropolis-Hastings avec ajustement directionnel." Thèse, 2011. http://hdl.handle.net/1866/6231.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) algorithms have become popular tools for sampling from complex and/or high-dimensional probability distributions. Given their relative ease of implementation, these methods are frequently used in various scientific areas, particularly in statistics and Bayesian analysis. The number of such methods has risen considerably since the first MCMC algorithm was described in 1953, and this area of research remains extremely active. A new MCMC algorithm using a directional adjustment has recently been described by Bédard et al. (IJSS, 9:2008), and some of its properties remain partially unknown. The objective of this thesis is to attempt to determine the impact of a key parameter of this method on its global performance. A second objective is to compare this algorithm to more versatile existing MCMC methods in order to evaluate its performance in relative terms.
27

Boisvert-Beaudry, Gabriel. "Efficacité des distributions instrumentales en équilibre dans un algorithme de type Metropolis-Hastings." Thèse, 2019. http://hdl.handle.net/1866/23794.

Full text
Abstract:
In this master's thesis, we are interested in a new class of informed proposal distributions for the Metropolis-Hastings algorithm. These proposal distributions, called balanced proposals, are obtained by adding information about the target density to an uninformed proposal distribution. A Markov chain generated by a balanced proposal is reversible with respect to the target density without the need for an acceptance probability in two extreme cases: the local case, where the proposal variance tends to zero, and the global case, where it tends to infinity. Balanced proposals need to be approximated in order to be used in practice. We show that the local case leads to the Metropolis-adjusted Langevin algorithm (MALA), while the global case leads to a slight modification of the MALA. These results are used to design a new algorithm that generalizes the MALA through the addition of a new parameter. Depending on its value, the algorithm uses a locally balanced proposal, a globally balanced proposal, or an interpolation between these two cases. We then study the optimal parameterization of this algorithm as a function of the dimension of the target distribution under two regimes: the asymptotic regime and a finite-dimensional regime. Simulations illustrate the theoretical results, and an application of the new algorithm to a Bayesian logistic regression problem allows its efficiency to be compared with that of existing algorithms. The results obtained are satisfying from both a theoretical and a computational standpoint.
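The MALA, which arises here as the locally balanced limit, can be sketched as follows: a Langevin drift proposal corrected by a Metropolis-Hastings accept/reject step. The target, step size, and test example below are assumptions for illustration, not the thesis's generalized algorithm.

```python
import numpy as np

def mala(log_target, grad_log_target, x0, step=0.5, n_iter=5000, seed=0):
    """Metropolis-adjusted Langevin algorithm: propose
    y ~ N(x + step * grad log pi(x), 2 * step * I),
    then accept or reject with the Metropolis-Hastings rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp, g = log_target(x), grad_log_target(x)
    chain = np.empty((n_iter, x.size))
    for k in range(n_iter):
        mean_fwd = x + step * g
        y = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        lp_y, g_y = log_target(y), grad_log_target(y)
        mean_bwd = y + step * g_y
        # Hastings correction: log q(x|y) - log q(y|x) for the Gaussians.
        log_q = (np.sum((y - mean_fwd) ** 2)
                 - np.sum((x - mean_bwd) ** 2)) / (4.0 * step)
        if np.log(rng.uniform()) < lp_y - lp + log_q:
            x, lp, g = y, lp_y, g_y
        chain[k] = x
    return chain

# Example: a standard normal target, log pi(x) = -||x||^2 / 2.
chain = mala(lambda x: -0.5 * float(x @ x), lambda x: -x, x0=[0.0])
```

Letting `step` tend to zero recovers the locally balanced regime discussed above; the acceptance probability then tends to one.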
APA, Harvard, Vancouver, ISO, and other styles
28

Reddy, Chandan Rama. "Capacity Proportional Unstructured Peer-to-Peer Networks." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-878.

Full text
Abstract:
Existing methods to utilize capacity-heterogeneity in a P2P system either rely on constructing special overlays with capacity-proportional node degree or use topology adaptation to match a node's capacity with that of its neighbors. In existing P2P networks, which are often characterized by diverse node capacities and high churn, these methods may require large node degree or continuous topology adaptation, potentially making them infeasible due to their high overhead. In this thesis, we propose an unstructured P2P system that attempts to address these issues. We first prove that the overall throughput of search queries in a heterogeneous network is maximized if and only if traffic load through each node is proportional to its capacity. Our proposed system achieves this traffic distribution by biasing search walks using the Metropolis-Hastings algorithm, without requiring any special underlying topology. We then define two saturation metrics for measuring the performance of overlay networks: one for quantifying their ability to support random walks and the second for measuring their potential to handle the overhead caused by churn. Using simulations, we finally compare our proposed method with Gia, an existing system which uses topology adaptation, and find that the former performs better under all studied conditions, both saturation metrics, and such end-to-end parameters as query success rate, latency, and query-hits for various file replication schemes.
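The core idea above, biasing a random walk so that each node's visit rate is proportional to its capacity, can be sketched as a plain Metropolis-Hastings walk over an adjacency list; the toy graph and capacity values below are illustrative, not taken from the thesis:

```python
import random
from collections import Counter

# toy overlay: adjacency list and per-node capacities (illustrative values)
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
capacity = {0: 1.0, 1: 4.0, 2: 2.0, 3: 1.0}

def mh_walk(start, n_steps, seed=0):
    """Random walk whose stationary visit frequency is proportional to capacity."""
    rng = random.Random(seed)
    node, visits = start, Counter()
    for _ in range(n_steps):
        cand = rng.choice(graph[node])  # uniform neighbor proposal q(i -> j) = 1/deg(i)
        # Hastings ratio: pi(j) q(j -> i) / (pi(i) q(i -> j)) with pi proportional to capacity
        alpha = (capacity[cand] * len(graph[node])) / (capacity[node] * len(graph[cand]))
        if rng.random() < alpha:
            node = cand
        visits[node] += 1
    return visits

visits = mh_walk(0, 200_000)
```

Because the acceptance test needs only the candidate's degree and capacity, no special underlying topology is required, which is the point made in the abstract.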
29

Groiez, Assia. "Recyclage des candidats dans l'algorithme Metropolis à essais multiples." Thèse, 2014. http://hdl.handle.net/1866/10853.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) algorithms are methods that are used for sampling from probability distributions. These tools are based on the path of a Markov chain whose stationary distribution is the distribution to be sampled. Given their relative ease of application, they are one of the most popular approaches in the statistical community, especially in Bayesian analysis. These methods are very popular for sampling from complex and/or high-dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis algorithm, see [10]), the interest in these methods, as well as the range of algorithms available, continues to increase from one year to the next. Although the Metropolis-Hastings algorithm (see [8]) can be considered one of the most general Markov chain Monte Carlo algorithms, it is also one of the easiest to understand and explain, making it an ideal algorithm for beginners. As such, it has been studied by several researchers. The multiple-try Metropolis (MTM) algorithm, proposed by [9], is considered an interesting development in this field, but unfortunately its implementation is quite expensive (in terms of time). Recently, a new algorithm was developed by [1]. This method is named the revisited multiple-try Metropolis algorithm (MTM revisited), which is obtained by expressing the MTM method as a Metropolis-Hastings algorithm on an extended space. 
The objective of this work is to first present MCMC methods, and subsequently to study and analyze the Metropolis-Hastings and standard MTM algorithms in order to give readers a better perspective on the implementation of these methods. A second objective is to explore the advantages and disadvantages of the revisited MTM algorithm to see whether it meets the expectations of the statistical community. We finally attempt to fight the sedentarity (the tendency to remain at the current state) of the revisited MTM algorithm, which leads to a new algorithm. The latter performs efficiently when the number of candidates generated in a given iteration is small, but its performance deteriorates as the number of candidates increases.
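The standard MTM step discussed above can be sketched for the simple case of symmetric Gaussian proposals with weights w(y) = π(y); the one-dimensional Gaussian target and the parameter values below are illustrative only:

```python
import numpy as np

def pi(x):
    # unnormalized density of a standard Gaussian target
    return np.exp(-0.5 * x**2)

def mtm(n_steps, k=5, sigma=2.0, x0=0.0, seed=0):
    """Standard multiple-try Metropolis with symmetric proposals and w(y) = pi(y)."""
    rng = np.random.default_rng(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        ys = x + sigma * rng.standard_normal(k)   # k candidates drawn from q(x, .)
        wy = pi(ys)
        y = rng.choice(ys, p=wy / wy.sum())       # select one proportionally to its weight
        # k - 1 reference points drawn from q(y, .), plus the current state
        xs = np.append(y + sigma * rng.standard_normal(k - 1), x)
        alpha = min(1.0, wy.sum() / pi(xs).sum()) # generalized acceptance ratio
        if rng.uniform() < alpha:
            x = y
        chain.append(x)
    return np.array(chain)

samples = mtm(20000)
```

The extra cost the abstract refers to is visible here: each iteration evaluates the target at 2k - 1 points instead of one.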
30

Maha, Petr. "Normální aproximace pro statistiku Gibbsových bodových procesů." Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-372941.

Full text
Abstract:
In this thesis, we deal with finite Gibbs point processes, especially processes with densities with respect to a Poisson point process. The main aim of this work is to investigate a four-parametric marked point process of circular discs in three dimensions with two- and three-way point interactions. In the second chapter, our goal is to simulate such a process. For that purpose, the birth-death Metropolis-Hastings algorithm is presented, including theoretical results. After that, the algorithm is applied to the disc process and numerical results for different choices of parameters are presented. The third chapter consists of two approaches to the estimation of parameters. The first is the Takacs-Fiksel estimation procedure with the weight functions chosen as the derivatives of the pseudolikelihood. The second aims at an optimal choice of weight functions in order to provide better-quality estimates. The theoretical background for both approaches is derived, as well as detailed calculations for the disc process. The numerical results for both methods are presented, together with their comparison.
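The birth-death Metropolis-Hastings algorithm mentioned above can be illustrated on a simpler model than the marked disc process: a planar Strauss process on the unit square, with density taken with respect to a unit-rate Poisson process. All parameter values below are hypothetical:

```python
import math
import random

# Strauss process on [0,1]^2: density f(x) proportional to beta^n(x) * gamma^s(x)
# w.r.t. a unit-rate Poisson process, where s(x) counts point pairs closer than r.
BETA, GAMMA, R = 100.0, 0.5, 0.05

def close_pairs_with(pts, p):
    # number of existing points within interaction distance R of p
    return sum(math.dist(p, q) < R for q in pts)

def birth_death_mh(n_steps, seed=0):
    rng = random.Random(seed)
    pts = []
    for _ in range(n_steps):
        if rng.random() < 0.5:                 # birth proposal: uniform new point
            u = (rng.random(), rng.random())
            # Hastings ratio for a birth on a window of area 1
            ratio = BETA * GAMMA ** close_pairs_with(pts, u) / (len(pts) + 1)
            if rng.random() < ratio:
                pts.append(u)
        elif pts:                              # death proposal: remove a uniform point
            i = rng.randrange(len(pts))
            u = pts[i]
            rest = pts[:i] + pts[i + 1:]
            ratio = len(pts) / (BETA * GAMMA ** close_pairs_with(rest, u))
            if rng.random() < ratio:
                pts = rest
    return pts

pattern = birth_death_mh(20000)
```

The thesis's disc process adds marks (radii) and a third dimension, but the birth and death acceptance ratios follow the same density-ratio structure.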
31

Jung, Maarten Lars. "Reaction Time Modeling in Bayesian Cognitive Models of Sequential Decision-Making Using Markov Chain Monte Carlo Sampling." 2020. https://tud.qucosa.de/id/qucosa%3A74048.

Full text
Abstract:
In this thesis, a new approach for generating reaction time predictions for Bayesian cognitive models of sequential decision-making is proposed. The method is based on a Markov chain Monte Carlo algorithm that, by utilizing prior distributions and likelihood functions of possible action sequences, generates predictions about the time needed to choose one of these sequences. The plausibility of the reaction time predictions produced by this algorithm was investigated for simple exemplary distributions as well as for prior distributions and likelihood functions of a Bayesian model of habit learning. Simulations showed that the reaction time distributions generated by the Markov chain Monte Carlo sampler exhibit key characteristics of reaction time distributions typically observed in decision-making tasks. The introduced method can be easily applied to various Bayesian models for decision-making tasks with any number of choice alternatives. It thus provides the means to derive reaction time predictions for models where this has not been possible before.
32

(6563222), Boqian Zhang. "Efficient Path and Parameter Inference for Markov Jump Processes." Thesis, 2019.

Find full text
Abstract:
Markov jump processes are continuous-time stochastic processes widely used in a variety of applied disciplines. Inference typically proceeds via Markov chain Monte Carlo (MCMC), the state-of-the-art being a uniformization-based auxiliary variable Gibbs sampler. This was designed for situations where the process parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating it into a larger Gibbs sampler. This strategy of sampling parameters given path, and path given parameters, can result in poor Markov chain mixing.

In this thesis, we focus on the problem of path and parameter inference for Markov jump processes.

In the first part of the thesis, a simple and efficient MCMC algorithm is proposed to address the problem of path and parameter inference for Markov jump processes. Our scheme brings Metropolis-Hastings approaches for discrete-time hidden Markov models to the continuous-time setting, resulting in a complete and clean recipe for parameter and path inference in Markov jump processes. In our experiments, we demonstrate superior performance over Gibbs sampling, a more naive Metropolis-Hastings algorithm we propose, as well as another popular approach, particle Markov chain Monte Carlo. We also show our sampler inherits geometric mixing from an 'ideal' sampler that is computationally much more expensive.

In the second part of the thesis, a novel collapsed variational inference algorithm is proposed. Our variational inference algorithm leverages ideas from discrete-time Markov chains, and exploits a connection between Markov jump processes and discrete-time Markov chains through uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.
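The uniformization connection exploited above can be sketched in a few lines: candidate jump times arrive as a Poisson process at a dominating rate Ω, and transitions are thinned through the discrete-time chain B = I + Q/Ω. The two-state rate matrix below is a made-up example, not from the thesis:

```python
import numpy as np

def sample_mjp_uniformization(Q, T, x0, omega=None, seed=0):
    """Sample an MJP path on [0, T] via uniformization: Poisson candidate
    times at rate omega, thinned through the DTMC B = I + Q/omega."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, dtype=float)
    if omega is None:
        omega = 2.0 * np.max(-np.diag(Q))   # any omega >= max_i |Q_ii| works
    B = np.eye(Q.shape[0]) + Q / omega      # transition matrix of the skeleton chain
    n = rng.poisson(omega * T)              # number of candidate jump times
    times = np.sort(rng.uniform(0.0, T, n))
    path, state = [(0.0, x0)], x0
    for t in times:
        nxt = rng.choice(Q.shape[0], p=B[state])
        if nxt != state:                    # self-transitions are "virtual" jumps
            path.append((t, nxt))
            state = nxt
    return path

# toy 2-state chain with rates 1.0 (state 0 -> 1) and 2.0 (state 1 -> 0)
Q = [[-1.0, 1.0], [2.0, -2.0]]
path = sample_mjp_uniformization(Q, T=10.0, x0=0)
```

The auxiliary variable Gibbs sampler mentioned in the abstract resamples such candidate-time grids and skeleton states alternately; this sketch only shows the forward simulation side of that construction.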
33

Bégin, Jean-François. "New simulation schemes for the Heston model." Thèse, 2012. http://hdl.handle.net/1866/8752.

Full text
Abstract:
Financial stocks are often modeled by stochastic differential equations (SDEs). These equations can describe the behavior of the underlying asset as well as some of the model's parameters. For example, the Heston (1993) model, which is a stochastic volatility model, describes the behavior of the stock and of its variance. The Heston model is very interesting since it has semi-closed formulas for some derivatives, and it is quite realistic. However, many simulation schemes for this model have problems when the Feller (1951) condition is violated. In this thesis, we introduce new simulation schemes to simulate price paths under the Heston model. These new algorithms are based on Broadie and Kaya's (2006) method. In order to increase the speed of the exact scheme of Broadie and Kaya, we use, among other things, Markov chain Monte Carlo (MCMC) algorithms and some well-chosen approximations. In our first algorithm, we modify the second step of Broadie and Kaya's method in order to obtain a faster scheme. Instead of using the second-order Newton method coupled with the inversion approach, we use a Metropolis-Hastings algorithm. The second algorithm is a small improvement of the first. Instead of using the exact p.d.f. of the integrated variance over time, we use Smith's (2007) approximation. This helps us decrease the dimension of the problem (from three to two). Our last algorithm is not based on MCMC methods. However, we still try to speed up the second step of Broadie and Kaya. In order to achieve this, we use a moment-matched gamma random variable. According to Stewart et al. (2007), it is possible to approximate a complex gamma convolution (somewhat near the representation given by Glasserman and Kim (2008) when the time step is small) by a gamma distribution.
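The moment-matching idea behind the third scheme can be sketched in a few lines. The mean and variance below are hypothetical placeholders: in the Heston setting, the actual conditional moments of the integrated variance would come from Broadie and Kaya's formulas, which are omitted here:

```python
import numpy as np

def moment_matched_gamma(mean, var, size, rng):
    """Gamma random variates matching the given mean and variance
    (shape k = mean^2 / var, scale theta = var / mean)."""
    shape = mean**2 / var
    scale = var / mean
    return rng.gamma(shape, scale, size)

rng = np.random.default_rng(0)
# hypothetical first two moments of the integrated variance over one step
m, v = 0.04, 0.0008
draws = moment_matched_gamma(m, v, 100_000, rng)
```

Matching the first two moments is what makes the gamma substitute cheap: no characteristic-function inversion is needed, only a shape and a scale parameter.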
34

Wu, Mingqi. "Population SAMC, ChIP-chip Data Analysis and Beyond." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8752.

Full text
Abstract:
This dissertation research consists of two topics: population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems, and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each of the two topics, respectively. Although the reversible jump MCMC (RJMCMC) has the ability to traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in local modes when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes the difficulty in dimension-jumping moves by introducing a self-adjusting mechanism. However, this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm; it works on population chains of SAMC, which can provide a more efficient self-adjusting mechanism and make use of the crossover operator from genetic algorithms to further increase its efficiency. Under mild conditions, the convergence of this algorithm is proved. The effectiveness of Pop-SAMC in Bayesian model selection problems is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC outperforms both the single-chain SAMC and RJMCMC significantly. In the ChIP-chip data analysis study, we developed two methodologies to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for the evaluation of test scores in a multiple hypothesis test by making use of population information across samples. Both methods are applied to real and simulated datasets. 
The numerical results indicate that the Bayesian latent model can outperform the existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
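The self-adjusting mechanism at the heart of SAMC can be sketched on a toy multimodal target. The partition (one subregion per state), the gain sequence, and the target below are illustrative choices for a single chain, not the dissertation's population setup:

```python
import numpy as np

# toy bimodal target on {0, ..., 19}: two modes (at 3 and 16) separated by a barrier
x_grid = np.arange(20)
log_pi = -0.5 * np.minimum((x_grid - 3) ** 2, (x_grid - 16) ** 2)

def samc(n_iter, seed=0):
    """Minimal single-chain SAMC: each state is its own subregion; the adaptive
    log-weights theta flatten the visit distribution, easing barrier crossing."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(20)                            # self-adjusting log-weights
    x, visits = 0, np.zeros(20)
    for t in range(1, n_iter + 1):
        y = (x + rng.choice([-1, 1])) % 20          # random-walk proposal on a ring
        # MH step targeting the weighted density pi(x) * exp(-theta[x])
        log_alpha = (log_pi[y] - theta[y]) - (log_pi[x] - theta[x])
        if np.log(rng.uniform()) < log_alpha:
            x = y
        gamma = 100.0 / max(100.0, t)               # decreasing gain sequence
        theta += gamma * ((x_grid == x) - 1.0 / 20) # self-adjusting update
        visits[x] += 1
    return theta, visits

theta, visits = samc(200_000)
```

As theta converges, each subregion is visited roughly equally, so the sampler keeps moving between modes instead of getting trapped; Pop-SAMC runs several such chains in parallel with a shared weight update and crossover moves.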
35

Atchadé, Yves F. "Quelques contributions sur les méthodes de Monte Carlo." Thèse, 2003. http://hdl.handle.net/1866/14581.

Full text
36

(8741097), Ritwik Bandyopadhyay. "ENSURING FATIGUE PERFORMANCE VIA LOCATION-SPECIFIC LIFING IN AEROSPACE COMPONENTS MADE OF TITANIUM ALLOYS AND NICKEL-BASE SUPERALLOYS." Thesis, 2020.

Find full text
Abstract:
In this thesis, the role of location-specific microstructural features in the fatigue performance of safety-critical aerospace components made of Nickel (Ni)-base superalloys and linear friction welded (LFW) Titanium (Ti) alloys has been studied using crystal plasticity finite element (CPFE) simulations, energy dispersive X-ray diffraction (EDD), backscatter electron (BSE) images and digital image correlation (DIC).

In order to develop a microstructure-sensitive fatigue life prediction framework, first, it is essential to build trust in the quantitative prediction from CPFE analysis by quantifying uncertainties in the mechanical response from CPFE simulations. Second, it is necessary to construct a unified fatigue life prediction metric, applicable to multiple material systems, and a calibration strategy for the unified fatigue life model parameter accounting for uncertainties originating from CPFE simulations and inherent in the experimental calibration dataset. To achieve the first task, a genetic algorithm framework is used to obtain the statistical distributions of the crystal plasticity (CP) parameters. Subsequently, these distributions are used in a first-order, second-moment method to compute the mean and the standard deviation for the stress along the loading direction (σ_load), plastic strain accumulation (PSA), and stored plastic strain energy density (SPSED). The results suggest that an ~10% variability in σ_load and a 20%-25% variability in the PSA and SPSED values may exist due to the uncertainty in the CP parameter estimation. Further, the contribution of a specific CP parameter to the overall uncertainty is path-dependent and varies based on the load step under consideration. To accomplish the second goal, in this thesis, it is postulated that a critical value of the SPSED is associated with fatigue failure in metals and independent of the applied load. Unlike the classical approach of estimating the (homogenized) SPSED as the cumulative area enclosed within the macroscopic stress-strain hysteresis loops, CPFE simulations are used to compute the (local) SPSED at each material point within polycrystalline aggregates of 718Plus, an additively manufactured Ni-base superalloy. A Bayesian inference method is utilized to calibrate the critical SPSED, which is subsequently used to predict fatigue lives at nine different strain ranges, including strain ratios of 0.05 and -1, using nine statistically equivalent microstructures. For each strain range, the predicted lives from all simulated microstructures follow a log-normal distribution; for a given strain ratio, the predicted scatter is seen to be increasing with decreasing strain amplitude and is indicative of the scatter observed in the fatigue experiments. Further, the log-normal mean lives at each strain range are in good agreement with the experimental evidence. Since the critical SPSED captures the experimental data with reasonable accuracy across various loading regimes, it is hypothesized to be a material property and sufficient to predict the fatigue life.

Inclusions are unavoidable in Ni-base superalloys and lead to two competing failure modes, namely inclusion- and matrix-driven failures. Each factor related to the inclusion that may contribute to crack initiation is isolated and systematically investigated within RR1000, a powder metallurgy produced Ni-base superalloy, using CPFE simulations. Specifically, the roles of the inclusion stiffness, loading regime, loading direction, a debonded region in the inclusion-matrix interface, microstructural variability around the inclusion, inclusion size, dissimilar coefficient of thermal expansion (CTE), temperature, residual stress, and distance of the inclusion from the free surface are studied in the emergence of the two failure modes. The CPFE analysis indicates that the emergence of a failure mode is an outcome of the complex interaction between the aforementioned factors. However, a higher probability of failure due to inclusions is observed with increasing temperature if the CTE of the inclusion is higher than that of the matrix, and vice versa. No overall correlation between the inclusion size and its propensity for damage is found for inclusions on the order of the mean grain size. Further, the CPFE simulations indicate that surface inclusions are more damaging than interior inclusions for similar surrounding microstructures. These observations are utilized to instantiate twenty realistic statistically equivalent microstructures of RR1000: ten containing inclusions and the remaining ten without inclusions. Using CPFE simulations with these microstructures at four different temperatures and three strain ranges for each temperature, the critical SPSED is calibrated as a function of temperature for RR1000. The results suggest that the critical SPSED decreases almost linearly with increasing temperature and is appropriate to predict the realistic emergence of the competing failure modes as a function of applied strain range and temperature.

The LFW process leads to the development of significant residual stress in the components, and the role of residual stress in the fatigue performance of materials cannot be overstated. Hence, to ensure the fatigue performance of LFW Ti alloys, residual strains in LFW of similar (Ti-6Al-4V welded to Ti-6Al-4V, or Ti64-Ti64) and dissimilar (Ti-6Al-4V welded to Ti-5Al-5V-5Mo-3Cr, or Ti64-Ti5553) Ti alloys have been characterized using EDD. For each type of LFW, one sample is chosen in the as-welded (AW) condition and another sample is selected after a post-weld heat treatment (HT). Residual strains have been separately studied in the alpha and beta phases of the material, and five components (three axial and two shear) have been reported in each case. In-plane axial components of the residual strains show a smooth and symmetric behavior about the weld center for the Ti64-Ti64 LFW samples in the AW condition, whereas these components in the Ti64-Ti5553 LFW sample show a symmetric trend with jump discontinuities. Such jump discontinuities, observed in both the AW and HT conditions of the Ti64-Ti5553 samples, suggest different strain-free lattice parameters in the weld region and the parent material. In contrast, the results from the Ti64-Ti64 LFW samples in both AW and HT conditions suggest nearly uniform strain-free lattice parameters throughout the weld region. The observed trends in the in-plane axial residual strain components have been rationalized by the corresponding microstructural changes and variations across the weld region via BSE images.

In the literature, fatigue crack initiation in LFW Ti-6Al-4V specimens does not usually take place in the seemingly weakest location, i.e., the weld region. From the BSE images, the Ti-6Al-4V microstructure at a distance from the weld center that is typically associated with crack initiation in the literature is identified in both AW and HT samples and found to be identical: specifically, equiaxed alpha grains with beta phases present at the alpha grain boundaries and triple points. Hence, subsequent fatigue performance in LFW Ti-6Al-4V is analyzed considering the equiaxed alpha microstructure.

The LFW components made of Ti-6Al-4V are often designed for high cycle fatigue performance under high mean stress or high R ratios. In engineering practice, mean stress corrections are employed to assess the fatigue performance of a material or structure, although this is problematic for Ti-6Al-4V, which experiences anomalous behavior at high R ratios. To address this problem, high cycle fatigue analyses are performed on two Ti-6Al-4V specimens with equiaxed alpha microstructures at a high R ratio. In one specimen, two micro-textured regions (MTRs) having their c-axes near-parallel and perpendicular to the loading direction are identified. High-resolution DIC is performed in the MTRs to study grain-level strain localization. In the other specimen, DIC is performed on a larger area, and crack initiation is observed in a random-textured region. To accompany the experiments, CPFE simulations are performed to investigate the mechanistic aspects of crack initiation and the relative activity of different families of slip systems as a function of R ratio. A critical soft-hard-soft grain combination is associated with crack initiation, indicating a possible dwell effect at high R ratios, which could be attributed to the high applied mean stress and the high creep sensitivity of Ti-6Al-4V at room temperature. Further, simulations indicated more heterogeneous deformation, specifically the activation of multiple families of slip systems with fewer grains being plasticized, at higher R ratios. Such behavior is exacerbated within MTRs, especially the MTR composed of grains with their c-axes near-parallel to the loading direction. These features of micro-plasticity make the high R ratio regime more vulnerable to fatigue damage accumulation and justify the anomalous mean stress behavior experienced by Ti-6Al-4V at high R ratios.
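The first-order, second-moment method used above for propagating CP parameter uncertainty can be sketched generically: the response mean is approximated by evaluating at the parameter means, and the variance by summing squared sensitivities times parameter variances. The response function below is a hypothetical stand-in for a CPFE output, not the thesis's model:

```python
import numpy as np

def fosm(f, p_mean, p_var, h=1e-6):
    """First-order, second-moment propagation: mean ~ f(p_mean) and
    variance ~ sum_i (df/dp_i)^2 * var(p_i), with central-difference gradients."""
    p_mean = np.asarray(p_mean, dtype=float)
    grad = np.empty_like(p_mean)
    for i in range(p_mean.size):
        dp = np.zeros_like(p_mean)
        dp[i] = h * max(1.0, abs(p_mean[i]))
        grad[i] = (f(p_mean + dp) - f(p_mean - dp)) / (2 * dp[i])
    return float(f(p_mean)), float(grad**2 @ np.asarray(p_var))

# hypothetical smooth response standing in for a CPFE output (e.g. a stress measure)
def response(p):
    return p[0] * np.exp(0.1 * p[1]) + p[2] ** 2

mean, var = fosm(response, p_mean=[2.0, 1.0, 3.0], p_var=[0.04, 0.09, 0.01])
```

In the thesis, the parameter variances would come from the genetic-algorithm-derived CP parameter distributions, and f would be a full CPFE evaluation rather than a closed-form function.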