Dissertations / Theses on the topic 'Bayesian Modeling'

Consult the top 50 dissertations / theses for your research on the topic 'Bayesian Modeling.'

1

Joseph, Joshua Mason. "Nonparametric Bayesian behavior modeling." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45263.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (p. 91-94).
As autonomous robots are increasingly used in complex, dynamic environments, it is crucial that the dynamic elements are modeled accurately. However, it is often difficult to generate good models due to either a lack of domain understanding or the domain being intractably large. In many domains, even defining the size of the model can be a challenge. While methods exist to cluster data of dynamic agents into common motion patterns, or "behaviors," assumptions of the number of expected behaviors must be made. This assumption can cause clustering processes to under-fit or over-fit the training data. In a poorly understood domain, knowing the number of expected behaviors a priori is unrealistic and in an extremely large domain, correctly fitting the training data is difficult. To overcome these obstacles, this thesis takes a Bayesian approach and applies a Dirichlet process (DP) prior over behaviors, which uses experience to reduce the likelihood of over-fitting or under-fitting the model complexity. Additionally, the DP maintains a probability mass associated with a novel behavior and can address countably infinite behaviors. This learning technique is applied to modeling agents driving in an urban setting. The learned DP-based driver behavior model is first demonstrated on a simulated city. Building on successful simulation results, the methodology is applied to GPS data of taxis driving around Boston. Accurate prediction of future vehicle behavior from the model is shown in both domains.
by Joshua Mason Joseph.
S.M.
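The Dirichlet process prior described in this abstract is often explained through its Chinese-restaurant-process form, sketched below. This is a generic illustration, not the thesis's model: the concentration parameter `alpha` and the trajectory counts are assumptions made for the example, and the observation model tying behaviors to vehicle tracks is omitted.

```python
import numpy as np

def crp_assignments(n_points, alpha, seed=0):
    """Sample cluster assignments from a Chinese restaurant process.

    Illustrates the property the thesis exploits: the Dirichlet
    process always keeps probability mass on a *new* behavior, so
    the number of clusters is inferred rather than fixed in advance.
    """
    rng = np.random.default_rng(seed)
    assignments = [0]          # first trajectory opens cluster 0
    counts = [1]               # trajectories per behavior cluster
    for i in range(1, n_points):
        # existing cluster k with prob counts[k] / (i + alpha),
        # a brand-new cluster with prob alpha / (i + alpha)
        probs = np.array(counts + [alpha]) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):   # a previously unseen behavior
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

_, counts = crp_assignments(200, alpha=1.5)
print(f"{len(counts)} behaviors inferred from 200 trajectories")
```

The key property on display is that the probability of opening a new cluster never vanishes, so model complexity grows with the data instead of being set a priori.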
2

Turner, Brandon Michael. "Likelihood-Free Bayesian Modeling." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316714657.

3

Li, Feng. "Bayesian Modeling of Conditional Densities." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-89426.

Abstract:
This thesis develops models and associated Bayesian inference methods for flexible univariate and multivariate conditional density estimation. The models are flexible in the sense that they can capture widely differing shapes of the data. The estimation methods are specifically designed to achieve flexibility while still avoiding overfitting. The models are flexible not only for a given covariate value, but also across the covariate space. A key contribution of this thesis is that it provides general approaches to density estimation with highly efficient Markov chain Monte Carlo methods. The methods are illustrated on several challenging non-linear and non-normal datasets. In the first paper, a general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric Student-t densities with covariate-dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modeled as functions of the covariates. The second paper explores how well a smooth mixture of symmetric components can capture skewed data. Simulations and applications on real data show that including covariate-dependent skewness in the components can lead to substantially improved performance on skewed data, often using a much smaller number of components. We also introduce smooth mixtures of gamma and log-normal components to model positively-valued response variables. In the third paper we propose a multivariate Gaussian surface regression model that combines both additive splines and interactive splines, and a highly efficient MCMC algorithm that updates all the multi-dimensional knot locations jointly. We use shrinkage priors to avoid overfitting with different estimated shrinkage factors for the additive and surface part of the model, and also different shrinkage parameters for the different response variables. In the last paper we present a general Bayesian approach for directly modeling dependencies between variables as functions of explanatory variables in a flexible copula context. In particular, the Joe-Clayton copula is extended to have covariate-dependent tail dependence and correlations. Posterior inference is carried out using a novel and efficient simulation method. The appendix of the thesis documents the computational implementation details.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: In press. Paper 4: Manuscript.

4

Rahlin, Alexandra Sasha. "Bayesian modeling of microwave foregrounds." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44735.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2008.
Includes bibliographical references (p. 93-94).
In the past decade, advances in precision cosmology have pushed our understanding of the evolving Universe to new limits. Since the discovery of the cosmic microwave background (CMB) radiation in 1965 by Penzias and Wilson, precise measurements of various cosmological parameters have provided a glimpse into the dynamics of the early Universe and the fate that awaits it in the very distant future. However, these measurements are hindered by the presence of strong foreground contamination (synchrotron, free-free, dust emission) from the interstellar medium in our own Galaxy and others that masks the CMB signal. Recent developments in modeling techniques may provide a better understanding of these foregrounds and allow improved constraints on current cosmological models. The method of nested sampling [16, 5], a Bayesian inference technique for calculating the evidence (the average of the likelihood over the prior mass), promises to be efficient and accurate for modeling the microwave foregrounds masking the CMB signal. An efficient and accurate algorithm would prove extremely useful for analyzing data obtained from current and future CMB experiments. This analysis aims to characterize the behavior of the nested sampling algorithm. We create a physically realistic data simulation, which we then use to reconstruct the CMB sky using both the Internal Linear Combination (ILC) method and nested sampling. The accuracy of the reconstruction is determined by figures of merit based on the RMS of the reconstruction, residuals and foregrounds. We find that modeling the foregrounds by nested sampling produces the most accurate results when the spectral index for the dust foreground component is fixed.
(cont.) Although the reconstructed foregrounds are qualitatively similar to what is expected, none of the non-linear models produce a CMB map as accurate as that produced by internal linear combination (ILC). Moreover, additional low-frequency components (synchrotron steepening, spinning dust) produce inconclusive results. Further study is needed to improve the efficiency and accuracy of the nested sampling algorithm.
by Alexandra Sasha Rahlin.
S.B.
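As a companion to the abstract above, here is a minimal sketch of the nested sampling recipe on a toy one-dimensional problem. The likelihood, the Uniform(0, 1) prior, and the rejection-sampling replacement step are all assumptions made for illustration; production implementations (and the foreground models of this thesis) use far more efficient constrained sampling moves.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(theta):
    # toy "signal" likelihood: a narrow Gaussian bump at 0.5
    return -0.5 * ((theta - 0.5) / 0.05) ** 2

N, n_iter = 100, 600            # live points, shrinkage iterations
live = rng.uniform(0, 1, N)     # draws from a Uniform(0, 1) prior
live_logL = loglike(live)

log_terms = []
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    # prior mass enclosed by the likelihood contour shrinks
    # geometrically: X_i ~ exp(-i / N), weight w_i = X_{i-1} - X_i
    log_w = -(i - 1) / N + np.log1p(-np.exp(-1.0 / N))
    log_terms.append(live_logL[worst] + log_w)
    # replace the worst point by a prior draw with higher likelihood
    # (plain rejection here; real samplers use smarter constrained moves)
    while True:
        cand = rng.uniform(0, 1)
        if loglike(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

logZ = np.log(np.sum(np.exp(log_terms)))   # evidence estimate
print(f"log-evidence ~ {logZ:.2f} (analytic: {np.log(0.05 * np.sqrt(2 * np.pi)):.2f})")
```

The running sum accumulates likelihood times shrinking prior mass, which is exactly the evidence integral the abstract refers to.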
5

Gao, Wenyu. "Advanced Nonparametric Bayesian Functional Modeling." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99913.

Abstract:
Functional analyses have gained more interest as we have easier access to massive data sets. However, such data sets often contain large heterogeneities, noise, and dimensionalities. When generalizing the analyses from vectors to functions, classical methods might not work directly. This dissertation considers noisy information reduction in functional analyses from two perspectives: functional variable selection to reduce the dimensionality and functional clustering to group similar observations and thus reduce the sample size. The complicated data structures and relations can be easily modeled by a Bayesian hierarchical model, or developed from a more generic one by changing the prior distributions. Hence, this dissertation focuses on the development of Bayesian approaches for functional analyses due to their flexibilities. A nonparametric Bayesian approach, such as the Dirichlet process mixture (DPM) model, has a nonparametric distribution as the prior. This approach provides flexibility and reduces assumptions, especially for functional clustering, because the DPM model has an automatic clustering property, so the number of clusters does not need to be specified in advance. Furthermore, a weighted Dirichlet process mixture (WDPM) model allows for more heterogeneities from the data by assuming more than one unknown prior distribution. It also gathers more information from the data by introducing a weight function that assigns different candidate priors, such that the less similar observations are more separated. Thus, the WDPM model will improve the clustering and model estimation results. In this dissertation, we used an advanced nonparametric Bayesian approach to study functional variable selection and functional clustering methods. We proposed 1) a stochastic search functional selection method with application to 1-M matched case-crossover studies for aseptic meningitis, to examine the time-varying unknown relationship and find out important covariates affecting disease contractions; 2) a functional clustering method via the WDPM model, with application to three pathways related to genetic diabetes data, to identify essential genes distinguishing between normal and disease groups; and 3) a combined functional clustering, with the WDPM model, and variable selection approach with application to high-frequency spectral data, to select wavelengths associated with breast cancer racial disparities.
Doctor of Philosophy
As we have easier access to massive data sets, functional analyses have gained more interest to analyze data providing information about curves, surfaces, or others varying over a continuum. However, such data sets often contain large heterogeneities and noise. When generalizing the analyses from vectors to functions, classical methods might not work directly. This dissertation considers noisy information reduction in functional analyses from two perspectives: functional variable selection to reduce the dimensionality and functional clustering to group similar observations and thus reduce the sample size. The complicated data structures and relations can be easily modeled by a Bayesian hierarchical model due to its flexibility. Hence, this dissertation focuses on the development of nonparametric Bayesian approaches for functional analyses. Our proposed methods can be applied in various applications: the epidemiological studies on aseptic meningitis with clustered binary data, the genetic diabetes data, and breast cancer racial disparities.
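The automatic-clustering property of the DPM prior described above can be demonstrated with scikit-learn's truncated Dirichlet-process mixture. This sketch is not the functional WDPM model of the dissertation; the simulated two-dimensional data and the truncation level are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# three well-separated groups; the model is *not* told there are three
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (-3, 0, 3)])

dpm = BayesianGaussianMixture(
    n_components=10,    # truncation level: an upper bound, not a choice of K
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# components with negligible posterior weight are effectively pruned,
# so the number of clusters is recovered from the data
print(f"{np.sum(dpm.weights_ > 0.01)} active clusters out of 10 allowed")
```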
6

Caballero, Jose Louis Galan. "Modeling qualitative judgements in Bayesian networks." Thesis, Queen Mary, University of London, 2008. http://qmro.qmul.ac.uk/xmlui/handle/123456789/28170.

Abstract:
Although Bayesian Networks (BNs) are increasingly being used to solve real world problems [47], their use is still constrained by the difficulty of constructing the node probability tables (NPTs). A key challenge is to construct relevant NPTs using the minimal amount of expert elicitation, recognising that it is rarely cost-effective to elicit complete sets of probability values. This thesis describes an approach to defining NPTs for a large class of commonly occurring nodes called ranked nodes. This approach is based on the doubly truncated Normal distribution with a central tendency that is invariably a type of weighted function of the parent nodes. We demonstrate through two examples how to build large probability tables using the ranked nodes approach. Using this approach we are able to build the large probability tables needed to capture the complex models arising from assessing firms' risks in the safety or finance sectors. The aim of the first example, with the National Air-Traffic Services (NATS), is to show that using this approach we can model the impact of organisational factors in avoiding mid-air aircraft collisions. The resulting model was validated by NATS and helped managers to assess the efficiency of the company in handling risks and thus control the likelihood of air-traffic incidents. In the second example, we use BN models to capture the operational risk (OpRisk) in financial institutions. The novelty of this approach is the use of causal reasoning as a means to reduce the uncertainty surrounding this type of risk. This model was validated against the Basel framework [160], which is the emerging international standard regulation governing how financial institutions assess OpRisks.
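A minimal sketch of the ranked-node construction the abstract describes, assuming parent states already mapped to the [0, 1] scale, a hand-picked variance, and a simple weighted mean; the thesis treats other weighted central-tendency functions and the elicitation of weights in detail.

```python
import numpy as np
from scipy.stats import truncnorm

def ranked_node_column(parent_values, weights, variance=0.05, n_states=5):
    """One column of a ranked node's NPT.

    parent_values: parent states mapped onto the [0, 1] scale
    weights:       relative influence of each parent
    Returns P(child state | parents) over n_states equal-width bins,
    from a doubly truncated Normal whose mean is the weighted mean
    of the parents.
    """
    mu = np.average(parent_values, weights=weights)
    sigma = np.sqrt(variance)
    a, b = (0 - mu) / sigma, (1 - mu) / sigma    # truncation to [0, 1]
    edges = np.linspace(0, 1, n_states + 1)
    cdf = truncnorm.cdf(edges, a, b, loc=mu, scale=sigma)
    return np.diff(cdf)

# two parents, one "low" (0.1), one "very high" (0.9), equally weighted
print(ranked_node_column([0.1, 0.9], weights=[1, 1]).round(3))
```

Repeating this for every combination of parent states fills the whole table from a handful of elicited weights, which is the elicitation saving the abstract emphasizes.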
7

Zhuang, Lili. "Bayesian Dynamical Modeling of Count Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315949027.

8

Nounou, Mohamed Numan. "Multiscale Bayesian linear modeling and applications." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488203552781115.

9

Harati Nejad Torbati, Amir Hossein. "Nonparametric Bayesian Approaches for Acoustic Modeling." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/338396.

Abstract:
Electrical Engineering
Ph.D.
The goal of Bayesian analysis is to reduce the uncertainty about unobserved variables by combining prior knowledge with observations. A fundamental limitation of a parametric statistical model, including a Bayesian approach, is the inability of the model to learn new structures. The goal of the learning process is to estimate the correct values for the parameters. The accuracy of these parameters improves with more data but the model's structure remains fixed. Therefore new observations will not affect the overall complexity (e.g., the number of parameters in the model). Recently, nonparametric Bayesian methods have become a popular alternative to parametric Bayesian approaches because the model structure is learned simultaneously with the parameter distributions in a data-driven manner. The goal of this dissertation is to apply nonparametric Bayesian approaches to the acoustic modeling problem in continuous speech recognition. Three important problems are addressed: (1) statistical modeling of sub-word acoustic units; (2) semi-supervised training algorithms for nonparametric acoustic models; and (3) automatic discovery of sub-word acoustic units. We have developed a Doubly Hierarchical Dirichlet Process Hidden Markov Model (DHDPHMM) with a non-ergodic structure that can be applied to problems involving sequential modeling. DHDPHMM shares mixture components between states using two Hierarchical Dirichlet Processes (HDP). An inference algorithm for this model has been developed that enables DHDPHMM to outperform both its hidden Markov model (HMM) and HDP HMM (HDPHMM) counterparts. This inference algorithm is shown to also be computationally less expensive than a comparable algorithm for HDPHMM. In addition to sharing data, the proposed model can learn non-ergodic structures and non-emitting states, something that HDPHMM does not support. This extension to the model is used to model finite length sequences. We have also developed a generative model for semi-supervised training of DHDPHMMs. Semi-supervised learning is an important practical requirement for many machine learning applications including acoustic modeling in speech recognition. The relative improvement in error rates on classification and recognition tasks is shown to be 22% and 7% respectively. Semi-supervised training results are slightly better than supervised training (29.02% vs. 29.71%). Context modeling was also investigated and results show a modest improvement of 1.5% relative over the baseline system. We also introduce a nonparametric Bayesian transducer based on an ergodic HDPHMM/DHDPHMM that automatically segments and clusters the speech signal using an unsupervised approach. This transducer was used in several applications including speech segmentation, acoustic unit discovery, spoken term detection and automatic generation of a pronunciation lexicon. For the segmentation problem, an F-score of 76.62% was achieved which represents a 9% relative improvement over the baseline system. On the spoken term detection tasks, an average precision of 64.91% was achieved, which represents a 20% improvement over the baseline system. Lexicon generation experiments also show automatically discovered units (ADU) generalize to new datasets. In this dissertation, we have established the foundation for applications of non-parametric Bayesian modeling to problems such as speech recognition that involve sequential modeling.
These models allow a new generation of machine learning systems that adapt their overall complexity in a data-driven manner and yet preserve meaningful modalities in the data. As a result, these models improve generalization and offer higher performance at lower complexity.
Temple University--Theses
10

Beierholm, Ulrik Ravnsborg. "Bayesian modeling of sensory cue combinations." Diss., Pasadena, Calif. : California Institute of Technology, 2007. http://resolver.caltech.edu/CaltechETD:etd-05212007-172639.

11

Durante, Daniele. "Bayesian Nonparametric Modeling of Network Data." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424390.

Abstract:
Network data representing relationship structures among a set of nodes are available in many fields of application covering social science, neuroscience, business intelligence and broader relational settings. Although early probability models for networks date back almost sixty years, this field of research is still an object of intense and dynamic interest. A primary reason for the recent growth of statistical methodologies in modeling of networks is that the routine collection of such data is a recent development. Online social networks, novel neuroimaging technologies, improved business intelligence analyses and sophisticated computer algorithms monitoring world news media currently provide increasingly complex network data sets along with novel motivating applications and new methodological questions. A challenging issue in such settings is that data are available via multiple network observations, and hence the rich literature in modeling of a single network falls far short of the goal of providing flexible inference in this scenario. Statistical modeling of replicated network data is still in its infancy, and several questions remain about coherence of inference, flexibility, computational tractability and other key issues. Motivated by complex applications from different domains, this thesis aims to take a sizable step towards addressing these issues via Bayesian nonparametric modeling. The thesis is organized in two main frameworks, further divided into different topics. The first thread develops flexible and computationally tractable stochastic processes for modeling dynamic networks, which incorporate temporal dependence and exploit latent network structures. The second focuses on defining a provably flexible representation for the probabilistic generative mechanism underlying a network-valued random variable, which is able to provide valuable insights both on shared and subject- or phenotype-specific sources of variability in the network structure.
12

D'Angelo, Laura. "Bayesian modeling of calcium imaging data." Doctoral thesis, Università degli Studi di Padova, 2022. https://hdl.handle.net/10281/399067.

Abstract:
Recent advancements in miniaturized fluorescence microscopy have made it possible to investigate neuronal responses to external stimuli in awake behaving animals through the analysis of intra-cellular calcium signals. An ongoing challenge is deconvolving the noisy calcium signals to extract the spike trains, and understanding how this activity is affected by external stimuli and conditions. In this thesis, we aim to provide novel approaches to tackle various aspects of the analysis of calcium imaging data within a Bayesian framework. Following the standard methodology to the analysis of calcium imaging data based on a two-stage approach, we investigate efficient computational methods to link the output of the deconvolved fluorescence traces with the experimental conditions. In particular, we focus on the use of Poisson regression models to relate the number of detected spikes with several covariates. Motivated by this framework, but with a general impact in terms of application to other fields, we develop an efficient Metropolis-Hastings and importance sampling algorithm to simulate from the posterior distribution of the parameters of Poisson log-linear models under conditional Gaussian priors, with superior performance with respect to the state-of-the-art alternatives. Motivated by the lack of clear uncertainty quantification resulting from the use of a two-stage approach, and the impossibility to borrow information between the two stages, we focus on the analysis of individual neurons, and develop a coherent mixture model that allows for estimation of spiking activity and, simultaneously, reconstructing the distributions of the calcium transient spikes' amplitudes under different experimental conditions. More specifically, our modeling framework leverages two nested layers of random discrete mixture priors to borrow information between experiments and discover similarities in the distributional patterns of the neuronal response to different stimuli. Finally, we move to the multivariate analysis of populations of neurons. Here the interest is not only to detect and analyze the spiking activity but also to investigate the existence of groups of co-activating neurons. Estimation of such groups is a challenging problem due to the need to deconvolve the calcium traces and then cluster the resulting latent binary time series of activity. We describe a nonparametric mixture model that allows for simultaneous deconvolution and clustering of time series based on common patterns of activity. The model makes use of a latent continuous process for the spike probabilities to identify groups of co-activating cells. Neurons' dependence is taken into account by informing the mixture weights with their spatial location, following the common neuroscience assumption that neighboring neurons often activate together.
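To make the Poisson log-linear target concrete, here is a bare-bones random-walk Metropolis sampler under a Gaussian prior, on simulated spike counts. The thesis develops substantially more efficient Metropolis-Hastings and importance sampling schemes; the data, prior variance, and step size below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated data: spike counts against one experimental covariate
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

def log_post(beta, tau2=10.0):
    # Poisson log-linear likelihood plus independent N(0, tau2) priors
    eta = X @ beta
    return np.sum(y * eta - np.exp(eta)) - beta @ beta / (2 * tau2)

beta, samples, step = np.zeros(2), [], 0.05
for _ in range(5000):
    prop = beta + step * rng.normal(size=2)       # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    samples.append(beta)

print("posterior mean:", np.mean(samples[2500:], axis=0).round(2))
```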
13

Cho, Hyun Cheol. "Dynamic Bayesian networks for online stochastic modeling." Ph.D. diss., University of Nevada, Reno, 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3221394.

14

Baker, Roderick James Samuel. "Bayesian opponent modeling in adversarial game environments." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5205.

Abstract:
This thesis investigates the use of Bayesian analysis upon an opponent's behaviour in order to determine the desired goals or strategy used by a given adversary. A terrain analysis approach utilising the A* algorithm is investigated, where a probability distribution between discrete behaviours of an opponent relative to a set of possible goals is generated. The Bayesian analysis of agent behaviour accurately determines the intended goal of an opponent agent, even when the opponent's actions are altered randomly. The environment of Poker is introduced and abstracted for ease of analysis. Bayes' theorem is used to generate an effective opponent model, categorizing behaviour according to its similarity with known styles of opponent. The accuracy of Bayes' rule yields a notable improvement in the performance of an agent once an opponent's style is understood. A hybrid of the Bayesian style predictor and a neuroevolutionary approach is shown to lead to effective dynamic play, in comparison to agents that do not use an opponent model. The use of recurrence in evolved networks is also shown to improve the performance and generalizability of an agent in a multiplayer environment. These strategies are then employed in the full-scale environment of Texas Hold'em, where a betting round-based approach proves useful in determining and counteracting an opponent's play. It is shown that the use of opponent models, with the adaptive benefits of neuroevolution, aids the performance of an agent, even when the behaviour of an opponent does not necessarily fit within the strict definitions of opponent 'style'.
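The Bayes'-rule style categorization described above reduces, in caricature, to updating a belief over opponent styles after each observed action. The styles and action likelihoods below are invented for the example; in the thesis they are derived from known styles of play.

```python
import numpy as np

# hypothetical styles and per-action probabilities (fold, call, raise);
# in practice these would be learned from observed play
styles = ["tight-passive", "loose-aggressive"]
likelihood = np.array([[0.5, 0.4, 0.1],
                       [0.1, 0.3, 0.6]])
belief = np.array([0.5, 0.5])          # uniform prior over styles

def bayes_update(belief, action):
    """One Bayes'-rule update of the style belief after an action."""
    post = belief * likelihood[:, action]
    return post / post.sum()

for action in [2, 2, 1, 2]:            # this opponent mostly raises
    belief = bayes_update(belief, action)

print(dict(zip(styles, belief.round(3))))
```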
15

Guan, Jinyan. "Bayesian generative modeling for complex dynamical systems." Thesis, The University of Arizona, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10109036.

Abstract:

This dissertation presents a Bayesian generative modeling approach for complex dynamical systems for emotion-interaction patterns within multivariate data collected in social psychology studies. While dynamical models have been used by social psychologists to study complex psychological and behavior patterns in recent years, most of these studies have been limited by using regression methods to fit the model parameters from noisy observations. These regression methods mostly rely on the estimates of the derivatives from the noisy observation, thus easily result in overfitting and fail to predict future outcomes. A Bayesian generative model solves the problem by integrating the prior knowledge of where the data comes from with the observed data through posterior distributions. It allows the development of theoretical ideas and mathematical models to be independent of the inference concerns. Besides, Bayesian generative statistical modeling allows evaluation of the model based on its predictive power instead of the model residual error reduction in regression methods to prevent overfitting in social psychology data analysis.

In the proposed Bayesian generative modeling approach, this dissertation uses the State Space Model (SSM) to model the dynamics of emotion interactions. Specifically, it tests the approach in a class of psychological models aimed at explaining the emotional dynamics of interacting couples in committed relationships. The latent states of the SSM are composed of continuous real numbers that represent the level of the true emotional states of both partners. One can obtain the latent states at all subsequent time points by evolving a differential equation (typically a coupled linear oscillator (CLO)) forward in time with some known initial state at the starting time. The multivariate observed states include self-reported emotional experiences and physiological measurements of both partners during the interactions. To test whether well-being factors, such as body weight, can help to predict emotion-interaction patterns, we construct functions that determine the prior distributions of the CLO parameters of individual couples based on existing emotion theories. Besides, we allow a single latent state to generate multivariate observations and learn the group-shared coefficients that specify the relationship between the latent states and the multivariate observations.

Furthermore, we model the nonlinearity of the emotional interaction by allowing smooth changes (drift) in the model parameters. By restricting the stochasticity to the parameter level, the proposed approach models the dynamics in longer periods of social interactions assuming that the interaction dynamics slowly and smoothly vary over time. The proposed approach achieves this by applying Gaussian Process (GP) priors with smooth covariance functions to the CLO parameters. Also, we propose to model the emotion regulation patterns as clusters of the dynamical parameters. To infer the parameters of the proposed Bayesian generative model from noisy experimental data, we develop a Gibbs sampler to learn the parameters of the patterns using a set of training couples.

To evaluate the fitted model, we develop a multi-level cross-validation procedure for learning the group-shared parameters and distributions from training data and testing the learned models on held-out testing data. During testing, we use the learned shared model parameters to fit the individual CLO parameters to the first 80% of the time points of the testing data by Monte Carlo sampling and then predict the states of the last 20% of the time points. By evaluating models with cross-validation, one can estimate whether complex models are overfitted to noisy observations and fail to generalize to unseen data. We test our approach on both synthetic data that was generated by the generative model and real data that was collected in multiple social psychology experiments. The proposed approach has the potential to model other complex behavior since the generative model is not restricted to the forms of the underlying dynamics.
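A minimal simulation of the coupled-linear-oscillator latent dynamics described above, with assumed parameter values; the dissertation places priors on these parameters, lets them drift under GP priors, and adds an observation model for the self-reports and physiological measurements.

```python
import numpy as np

def simulate_clo(freq, damping, coupling, x0, dt=0.1, steps=600):
    """Simulate coupled linear oscillator dynamics for two partners.

    x0 holds (position, velocity) per partner; `coupling` pulls each
    partner's emotional state toward the other's. Semi-implicit Euler
    integration of the latent dynamics only; no observation model.
    """
    x = np.array(x0, dtype=float)              # shape (2, 2)
    path = [x.copy()]
    for _ in range(steps):
        acc = (-freq * x[:, 0] + damping * x[:, 1]
               + coupling * (x[::-1, 0] - x[:, 0]))
        x[:, 1] += dt * acc                    # update velocities first
        x[:, 0] += dt * x[:, 1]                # then positions (stable)
        path.append(x.copy())
    return np.array(path)

traj = simulate_clo(freq=np.array([1.0, 1.2]),        # assumed values
                    damping=np.array([-0.05, -0.05]),
                    coupling=np.array([0.3, 0.1]),
                    x0=[[1.0, 0.0], [-0.5, 0.0]])
print(traj.shape)   # (601, 2, 2): both partners' latent states over time
```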

16

Guan, Jinyan. "Bayesian Generative Modeling of Complex Dynamical Systems." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612950.

Abstract:
This dissertation presents a Bayesian generative modeling approach for complex dynamical systems for emotion-interaction patterns within multivariate data collected in social psychology studies. While dynamical models have been used by social psychologists to study complex psychological and behavior patterns in recent years, most of these studies have been limited by using regression methods to fit the model parameters from noisy observations. These regression methods mostly rely on the estimates of the derivatives from the noisy observation, thus easily result in overfitting and fail to predict future outcomes. A Bayesian generative model solves the problem by integrating the prior knowledge of where the data comes from with the observed data through posterior distributions. It allows the development of theoretical ideas and mathematical models to be independent of the inference concerns. Besides, Bayesian generative statistical modeling allows evaluation of the model based on its predictive power instead of the model residual error reduction in regression methods to prevent overfitting in social psychology data analysis. In the proposed Bayesian generative modeling approach, this dissertation uses the State Space Model (SSM) to model the dynamics of emotion interactions. Specifically, it tests the approach in a class of psychological models aimed at explaining the emotional dynamics of interacting couples in committed relationships. The latent states of the SSM are composed of continuous real numbers that represent the level of the true emotional states of both partners. One can obtain the latent states at all subsequent time points by evolving a differential equation (typically a coupled linear oscillator (CLO)) forward in time with some known initial state at the starting time. The multivariate observed states include self-reported emotional experiences and physiological measurements of both partners during the interactions. To test whether well-being factors, such as body weight, can help to predict emotion-interaction patterns, we construct functions that determine the prior distributions of the CLO parameters of individual couples based on existing emotion theories. Besides, we allow a single latent state to generate multivariate observations and learn the group-shared coefficients that specify the relationship between the latent states and the multivariate observations. Furthermore, we model the nonlinearity of the emotional interaction by allowing smooth changes (drift) in the model parameters. By restricting the stochasticity to the parameter level, the proposed approach models the dynamics in longer periods of social interactions assuming that the interaction dynamics slowly and smoothly vary over time. The proposed approach achieves this by applying Gaussian Process (GP) priors with smooth covariance functions to the CLO parameters. Also, we propose to model the emotion regulation patterns as clusters of the dynamical parameters. To infer the parameters of the proposed Bayesian generative model from noisy experimental data, we develop a Gibbs sampler to learn the parameters of the patterns using a set of training couples. To evaluate the fitted model, we develop a multi-level cross-validation procedure for learning the group-shared parameters and distributions from training data and testing the learned models on held-out testing data. 
During testing, we use the learned shared model parameters to fit the individual CLO parameters to the first 80% of the time points of the testing data by Monte Carlo sampling and then predict the states of the last 20% of the time points. By evaluating models with cross-validation, one can estimate whether complex models are overfitted to noisy observations and fail to generalize to unseen data. We test our approach on both synthetic data that was generated by the generative model and real data that was collected in multiple social psychology experiments. The proposed approach has the potential to model other complex behavior since the generative model is not restricted to the forms of the underlying dynamics.
17

Lee, Ju Hee. "Robust Statistical Modeling through Nonparametric Bayesian Methods." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275399497.

18

Huo, Shuning. "Bayesian Modeling of Complex High-Dimensional Data." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/101037.

Abstract:
With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional complex data in different forms, such as medical images, genomics measurements. However, acquisition of more data does not automatically lead to better knowledge discovery. One needs efficient and reliable analytical tools to extract useful information from complex datasets. The main objective of this dissertation is to develop innovative Bayesian methodologies to enable effective and efficient knowledge discovery from complex high-dimensional data. It contains two parts—the development of computationally efficient functional mixed models and the modeling of data heterogeneity via Dirichlet Diffusion Tree. The first part focuses on tackling the computational bottleneck in Bayesian functional mixed models. We propose a computational framework called variational functional mixed model (VFMM). This new method facilitates efficient data compression and high-performance computing in basis space. We also propose a new multiple testing procedure in basis space, which can be used to detect significant local regions. The effectiveness of the proposed model is demonstrated through two datasets, a mass spectrometry dataset in a cancer study and a neuroimaging dataset in an Alzheimer's disease study. The second part is about modeling data heterogeneity by using Dirichlet Diffusion Trees. We propose a Bayesian latent tree model that incorporates covariates of subjects to characterize the heterogeneity and uncover the latent tree structure underlying data. This innovative model may reveal the hierarchical evolution process through branch structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the model through the simulation study and a brain tumor real data.
Doctor of Philosophy
With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional data in different forms, such as engineering signals, medical images, and genomics measurements. However, acquisition of such data does not automatically lead to efficient knowledge discovery. The main objective of this dissertation is to develop novel Bayesian methods to extract useful knowledge from complex high-dimensional data. It has two parts—the development of an ultra-fast functional mixed model and the modeling of data heterogeneity via Dirichlet Diffusion Trees. The first part focuses on developing approximate Bayesian methods in functional mixed models to estimate parameters and detect significant regions. Two datasets demonstrate the effectiveness of the proposed method—a mass spectrometry dataset in a cancer study and a neuroimaging dataset in an Alzheimer's disease study. The second part focuses on modeling data heterogeneity via Dirichlet Diffusion Trees. The method helps uncover the underlying hierarchical tree structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the method through brain tumor imaging data.
19

Silva, Rodrigo Bernardo da. "A Bayesian approach for modeling stochastic deterioration." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/5610.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Deterioration modeling has been at the forefront of Bayesian reliability analysis. The best-known approaches found in the literature for this purpose evaluate the behavior of the reliability measure over time in light of empirical data alone. In the context of reliability engineering, these approaches have limited applicability, since one frequently deals with situations characterized by a scarcity of empirical data. Inspired by Bayesian strategies that aggregate empirical data and expert opinion in the modeling of time-independent reliability measures, this work proposes a methodology for dealing with time-dependent reliability. The proposed methodology encapsulates well-known Bayesian approaches, such as Bayesian methods for combining empirical data and expert opinion and time-indexed Bayesian models, improving upon them in order to arrive at a more realistic model for describing the deterioration process of a given component or system. The cases discussed are those typically found in reliability practice (by means of simulation): evaluation of runtime data for failure rates and the amount of deterioration, demand-based data for failure probability, and expert opinion for the analysis of failure rate, amount of deterioration, and failure probability. These case studies show that the use of expert information can lead to a reduction of uncertainty in the distributions of reliability measures, especially in situations where few or no failures are observed.
20

Baker, Roderick J. S. "Bayesian opponent modeling in adversarial game environments." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5205.

Abstract:
This thesis investigates the use of Bayesian analysis upon an opponent's behaviour in order to determine the desired goals or strategy used by a given adversary. A terrain analysis approach utilising the A* algorithm is investigated, where a probability distribution between discrete behaviours of an opponent relative to a set of possible goals is generated. The Bayesian analysis of agent behaviour accurately determines the intended goal of an opponent agent, even when the opponent's actions are altered randomly. The environment of Poker is introduced and abstracted for ease of analysis. Bayes' theorem is used to generate an effective opponent model, categorizing behaviour according to its similarity with known styles of opponent. The accuracy of Bayes' rule yields a notable improvement in the performance of an agent once an opponent's style is understood. A hybrid of the Bayesian style predictor and a neuroevolutionary approach is shown to lead to effective dynamic play, in comparison to agents that do not use an opponent model. The use of recurrence in evolved networks is also shown to improve the performance and generalizability of an agent in a multiplayer environment. These strategies are then employed in the full-scale environment of Texas Hold'em, where a betting round-based approach proves useful in determining and counteracting an opponent's play. It is shown that the use of opponent models, with the adaptive benefits of neuroevolution, aids the performance of an agent, even when the behaviour of an opponent does not necessarily fit within the strict definitions of opponent 'style'.
Engineering and Physical Sciences Research Council (EPSRC)
21

Frermann, Lea. "Bayesian models of category acquisition and meaning development." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25379.

Abstract:
The ability to organize concepts (e.g., dog, chair) into efficient mental representations, i.e., categories (e.g., animal, furniture), is a fundamental mechanism which allows humans to perceive, organize, and adapt to their world. Much research has been dedicated to the questions of how categories emerge and how they are represented. Experimental evidence suggests that (i) concepts and categories are represented through sets of features (e.g., dogs bark, chairs are made of wood) which are structured into different types (e.g., behavior, material); (ii) categories and their featural representations are learnt jointly and incrementally; and (iii) categories are dynamic and their representations adapt to changing environments. This thesis investigates the mechanisms underlying the incremental and dynamic formation of categories and their featural representations through cognitively motivated Bayesian computational models. Models of category acquisition have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this thesis, we focus on categories acquired from natural language stimuli, using nouns as a stand-in for their reference concepts, and their linguistic contexts as a representation of the concepts' features. The use of text corpora allows us to (i) develop large-scale unsupervised models thus simulating human learning, and (ii) model child category acquisition, leveraging the linguistic input available to children in the form of transcribed child-directed language. In the first part of this thesis we investigate the incremental process of category acquisition. We present a Bayesian model and an incremental learning algorithm which sequentially integrates newly observed data. We evaluate our model output against gold standard categories (elicited experimentally from human participants), and show that high-quality categories are learnt both from child-directed data and from large, thematically unrestricted text corpora. We find that the model performs well even under constrained memory resources, resembling human cognitive limitations. While lists of representative features for categories emerge from this model, they are neither structured nor jointly optimized with the categories. We address these shortcomings in the second part of the thesis, and present a Bayesian model which jointly learns categories and structured featural representations. We present both batch and incremental learning algorithms, and demonstrate the model's effectiveness on both encyclopedic and child-directed data. We show that high-quality categories and features emerge in the joint learning process, and that the structured features are intuitively interpretable through human plausibility judgment evaluation. In the third part of the thesis we turn to the dynamic nature of meaning: categories and their featural representations change over time, e.g., children distinguish some types of features (such as size and shade) less clearly than adults, and word meanings adapt to our ever changing environment and its structure. We present a dynamic Bayesian model of meaning change, which infers time-specific concept representations as a set of feature types and their prevalence, and captures their development as a smooth process. We analyze the development of concept representations in their complexity over time from child-directed data, and show that our model captures established patterns of child concept learning.
We also apply our model to diachronic change of word meaning, modeling how word senses change internally and in prevalence over centuries. The contributions of this thesis are threefold. Firstly, we show that a variety of experimental results on the acquisition and representation of categories can be captured with computational models within the framework of Bayesian modeling. Secondly, we show that natural language text is an appropriate source of information for modeling categorization-related phenomena suggesting that the environmental structure that drives category formation is encoded in this data. Thirdly, we show that the experimental findings hold on a larger scale. Our models are trained and tested on a larger set of concepts and categories than is common in behavioral experiments and the categories and featural representations they can learn from linguistic text are in principle unrestricted.
22

White, Gentry. "Bayesian semiparametric spatial and joint spatio-temporal modeling." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4450.

Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed May 2, 2007). Vita. Includes bibliographical references.
23

Lin, Yi. "Bayesian spatial and ecological modeling of suicide rates." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/12568.

Abstract:
Suicide and suicide attempts constitute major public and mental health problems in many countries. The risk factors for suicide include not only psychological and other individual features but also the characteristics of the community in which people live. Therefore, in order to better understand the potential impacts of community characteristics on suicide, the regional-level effects on suicide need to be thoroughly examined. For this thesis, an ecological analysis was incorporated into a Bayesian disease mapping study in order to estimate suicide rates, explore regional risk factors, and discern spatial patterns in suicide risks. Fully Bayesian disease mapping and ecological regression methods were used to estimate area-specific suicide risks, investigate spatial variations, and explore and quantify the associations between regional characteristics and suicide occurrences. Spatially smoothed estimates of suicide rates highlight high-risk regions and can act as stable health outcome indicators at the regional level. Furthermore, regional characteristics explored as potential risk factors can provide a better understanding of regional variations in suicide rates. Both can help in planning future public health prevention programs. In order to avoid multicollinearity among risk factors and reduce the dimensionality of the risk indicators, Principal Component Analysis and an Empirical Bayes method (via Penalized Quasi-Likelihood) were applied in variable selection and in highlighting risk patterns. Using 10-year aggregated data for all age groups and both genders, this study conducted a comprehensive analysis of suicide hospitalization and mortality rates in eighty-four Local Health Areas in British Columbia (Canada). A broad range of regional characteristics was investigated, and different associations with suicide rates were observed in different demographic and gender groups. The major regional risk patterns related to suicide rates across age groups were social and economic characteristics, including unemployment rates, income, educational attainment, marital status, family structure, and dwellings. Some age groups also showed a relation to aboriginal population, immigrants, and language. The results of this study may inform policy initiatives and programs for suicide prevention.
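The stabilizing effect of smoothing that the abstract relies on can be seen in a much simpler conjugate sketch. This gamma-Poisson (non-spatial) shrinkage is not the fully Bayesian spatial model used in the thesis; the counts, expected counts, and prior below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# simulated area-level data: observed counts and expected counts
# (population times a reference rate) for 84 hypothetical areas
expected = rng.uniform(2, 40, size=84)
true_rr = rng.gamma(shape=8, scale=1 / 8, size=84)   # relative risks
observed = rng.poisson(true_rr * expected)

# with a Gamma(a, b) prior on each area's relative risk, the posterior
# mean (a + y) / (b + E) shrinks noisy raw rates toward the prior mean,
# stabilizing estimates for areas with small expected counts
a, b = 8.0, 8.0
raw = observed / expected
smoothed = (a + observed) / (b + expected)

print(f"raw relative risks:      {raw.min():.2f} to {raw.max():.2f}")
print(f"smoothed relative risks: {smoothed.min():.2f} to {smoothed.max():.2f}")
```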
24

Stein, Nathan Mathes. "Advances in Empirical Bayes Modeling and Bayesian Computation." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11051.

Abstract:
Chapter 1 of this thesis focuses on accelerating perfect sampling algorithms for a Bayesian hierarchical model. A discrete data augmentation scheme together with two different parameterizations yields two Gibbs samplers for sampling from the posterior distribution of the hyperparameters of the Dirichlet-multinomial hierarchical model under a default prior distribution. The finite-state space nature of this data augmentation permits us to construct two perfect samplers using bounding chains that take advantage of monotonicity and anti-monotonicity in the target posterior distribution, but both are impractically slow. We demonstrate however that a composite algorithm that strategically alternates between the two samplers' updates can be substantially faster than either individually. We theoretically bound the expected time until coalescence for the composite algorithm, and show via simulation that the theoretical bounds can be close to actual performance. Chapters 2 and 3 introduce a strategy for constructing scientifically sensible priors in complex models. We call these priors catalytic priors to suggest that adding such prior information catalyzes our ability to use richer, more realistic models. Because they depend on observed data, catalytic priors are a tool for empirical Bayes modeling. The overall perspective is data-driven: catalytic priors have a pseudo-data interpretation, and the building blocks are alternative plausible models for observations, yielding behavior similar to hierarchical models but with a conceptual shift away from distributional assumptions on parameters. The posterior under a catalytic prior can be viewed as an optimal approximation to a target measure, subject to a constraint on the posterior distribution's predictive implications. In Chapter 3, we apply catalytic priors to several familiar models and investigate the performance of the resulting posterior distributions. We also illustrate the application of catalytic priors in a preliminary analysis of the effectiveness of a job training program, which is complicated by the need to account for noncompliance, partially defined outcomes, and missing outcome data.
Statistics
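A schematic of the pseudo-data interpretation of catalytic priors described in the abstract: fit a simple generator model, draw pseudo-observations from it, and fit the richer model on the real data plus down-weighted pseudo-data. The intercept-only generator, the total pseudo-weight `tau`, and the pseudo-sample size `M` are illustrative assumptions, not the estimators studied in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# small real dataset: a rich 5-covariate model risks overfitting
n, p = 30, 5
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

# step 1: fit a simpler "generator" model (intercept-only here)
p_hat = y.mean()

# step 2: draw M pseudo-observations from the simple model
M = 100
X_syn = rng.normal(size=(M, p))
y_syn = rng.binomial(1, p_hat, size=M)

# step 3: fit the rich model on real data plus pseudo-data carrying
# total weight tau; the pseudo-data act as a data-driven prior that
# pulls the fit toward the simpler model's predictive behavior
tau = 5.0
X_all = np.vstack([X, X_syn])
y_all = np.concatenate([y, y_syn])
w = np.concatenate([np.ones(n), np.full(M, tau / M)])
fit = LogisticRegression(penalty=None)       # requires scikit-learn >= 1.2
fit.fit(X_all, y_all, sample_weight=w)
print(fit.coef_.round(2))
```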
25

Havasi, Catherine Andrea 1981. "Bayesian modeling of manner and path psychological data." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/16678.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 106-110).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
How people and computers can learn the meaning of words has long been a key question for both AI and cognitive science. It is hypothesized that a person acquires a bias to favor the characteristics of their native language, in order to aid word learning. Other hypothesized aids are syntactic bootstrapping, in which the learner assumes that the meaning of a novel word is similar to that of other words used in a similar syntax, and its complement, semantic bootstrapping, in which the learner assumes that the syntax of a novel word is similar to that of other words used in similar situations. How these components work together is key to understanding word learning. Using cognitive psychology and computer science as a platform, this thesis attempts to tackle these questions using the classic example of manner and path verb bias. A series of cognitive psychology experiments was designed to gather information on this bias. Considerable flexibility of the subject's bias was demonstrated during these experiments. Another separate series of experiments was conducted using different syntactic frames for the novel verbs to address the question of bootstrapping. The resulting information was used to design a Bayesian model which successfully predicts the human behavior in the psychological experiments that were conducted. Dynamic parameters were required to account for subjects revising their expected manner and path verb distributions during the course of an experiment. Bayesian model parameters that were optimized for rich syntactic frame data performed equally well in predicting poor syntactic frame data.
by Catherine Andrea Havasi.
M.Eng.
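As a rough illustration of the kind of dynamic Bayesian updating the abstract describes (not the thesis's actual model), a learner's manner-vs-path bias can be tracked as a Beta posterior whose pseudo-counts are discounted each trial, so that recent evidence can revise the expected verb distribution:

    import numpy as np

    # A learner's manner bias as a Beta posterior with pseudo-counts (alpha,
    # beta); a forgetting factor stands in for the dynamic parameters that let
    # expectations drift during an experiment. All values are made up.
    alpha, beta, forget = 3.0, 1.0, 0.9
    trials = [1, 0, 0, 0, 1, 0]          # 1 = manner interpretation, 0 = path
    for outcome in trials:
        alpha = forget * alpha + outcome
        beta = forget * beta + (1 - outcome)
        print(f"P(manner) = {alpha / (alpha + beta):.3f}")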
APA, Harvard, Vancouver, ISO, and other styles
26

Molari, Marco. "Modeling and Bayesian inference for antibody affinity maturation." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE017.

Full text
Abstract:
Affinity Maturation (AM) is the biological process through which our immune system generates potent antibodies (Abs) against newly encountered pathogens. This process is also at the base of vaccination, one of the most successful and cost-effective medical procedures ever developed, responsible for saving millions of lives every year. AM still presents many open questions, whose answers have the potential to improve the way we vaccinate. The mechanisms at the base of AM are extremely complex, involving non-linear interactions between many different cellular agents. In this context, theoretical models and Bayesian inference are invaluable tools, respectively, to link qualitative hypotheses to quantitative descriptions and to extract information from experimental data. In this manuscript we make use of these tools to tackle some of these open questions, such as the non-trivial effect of antigen (Ag) dosage on the outcome of vaccination.
APA, Harvard, Vancouver, ISO, and other styles
27

Jochmann, Markus. "Three Essays on Bayesian Nonparametric Modeling in Microeconometrics." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-57862.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Grundy, William Noble. "A bayesian approach to motif-based protein modeling /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9904723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Leininger, Thomas J. "An Adaptive Bayesian Approach to Dose-Response Modeling." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3325.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ferrari, Clarissa <1976>. "The wrapping approach for circular data Bayesian modeling." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1473/1/clarissa_ferrari_tesi.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ferrari, Clarissa <1976>. "The wrapping approach for circular data Bayesian modeling." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1473/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Manfredotti, Cristina Elena. "Modeling and inference with relational dynamic Bayesian networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/7829.

Full text
Abstract:
Many domains in the real world are richly structured, containing a diverse set of agents characterized by different sets of features and related to each other in a variety of ways. Moreover, uncertainty can be present both in the observations of the objects and in their relations. This is the case in many problems such as multi-target tracking, activity recognition, automatic surveillance and traffic monitoring. The common ground of these types of problems is the necessity of recognizing and understanding the scene, the activities that are going on, who the actors are and what their roles are, and estimating their positions. When the environment is particularly complex, including several distinct entities whose behaviors might be correlated, automated reasoning becomes particularly challenging. Even in cases where humans can easily recognize activities, current computer programs fail because they lack commonsense reasoning and because of the current limitations of automated reasoning systems. As a result, surveillance supervision is so far mostly delegated to humans. The explicit representation of the interconnected behaviors of agents can provide better models for capturing key elements of the activities in the scene. In this thesis we propose the use of relations to model particular correlations between agents' features, aimed at improving the inference task. We propose the use of relational Dynamic Bayesian Networks, an extension of Dynamic Bayesian Networks with First Order Logic, to represent the dependencies between an agent's attributes, the scene's elements and the evolution of state variables over time. In this way, we can combine the advantages of First Order Logic (which can compactly represent structured environments) with those of probabilistic models (which provide a mathematically sound framework for inference in the face of uncertainty). In particular, we investigate the use of Relational Dynamic Bayesian Networks to represent the dependencies between the agents' behaviors in the context of multi-agent tracking and activity recognition. We propose a new formulation of the transition model that accommodates relations, and present a filtering algorithm that extends the Particle Filter algorithm in order to track relations between the agents directly. The explicit recognition of the relationships between interacting objects can improve the understanding of their dynamic domain. The inference algorithm we develop in this thesis is able to take relations between interacting objects into account, and we demonstrate with experiments that our relational approach outperforms standard non-relational methods. While the goal of emulating human-level inference on scene understanding is out of reach for the current state of the art, we believe that this work represents an important step towards better algorithms and models for inference in complex multi-agent systems. Another advantage of our probabilistic model is its ability to make inference online, so that the appropriate course of action can be taken when necessary (e.g., raising an alarm). This is an important requirement for the adoption of automatic surveillance systems in the real world, and it avoids the common problems associated with human surveillance.
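For readers unfamiliar with the machinery being extended, the sketch below shows a plain bootstrap particle filter for a single agent in one dimension; in the relational version described above, each particle's state would additionally carry the relations among agents, resampled jointly with the positions. All numbers here are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    # Bootstrap particle filter: 1-D random-walk motion model with Gaussian
    # observation noise; parameters and observations are made up.
    n, q, r = 500, 0.5, 1.0
    particles = rng.normal(0.0, 1.0, n)                # initial cloud
    for z in [0.2, 0.9, 1.7, 2.1]:                     # observations
        particles = particles + rng.normal(0.0, q, n)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)  # likelihood weights
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]   # resample
        print(f"z={z:.1f}  posterior mean ~ {particles.mean():.2f}")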
APA, Harvard, Vancouver, ISO, and other styles
33

Petit, Sébastien. "Improved Gaussian process modeling : Application to Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG063.

Full text
Abstract:
This manuscript focuses on Bayesian modeling of unknown functions with Gaussian processes. This task arises notably in industrial design, with numerical simulators whose computation time can reach several hours. Our work focuses on the problem of model selection and validation and goes in two directions. The first part empirically studies the current practices for stationary Gaussian process modeling. Several issues in Gaussian process parameter selection are tackled. A study of parameter selection criteria is the core of this part. It concludes that the choice of a family of models is more important than that of the selection criterion. More specifically, the study shows that the regularity parameter of the Matérn covariance function is more important than the choice of a likelihood or cross-validation criterion. Moreover, the analysis of the numerical results shows that this parameter can be selected satisfactorily by the criteria, which leads to a practical recommendation. Then, particular attention is given to the numerical optimization of the likelihood criterion. Observing, like Erickson et al. (2018), important inconsistencies between the different libraries available for Gaussian process modeling, we propose elementary numerical recipes that yield significant gains both in terms of likelihood and model accuracy. Finally, the analytical formulas for computing cross-validation criteria are revisited from a new angle and enriched with analogous formulas for the gradients. This last contribution aligns the computational cost of a class of cross-validation criteria with that of the likelihood. The second part presents a goal-oriented methodology designed to improve the accuracy of the model in an (output) range of interest. This approach consists in relaxing the interpolation constraints on a relaxation range disjoint from the range of interest, while keeping a reasonable computational cost. We also propose an approach for automatically selecting the relaxation range given the range of interest. This new method can implicitly handle potentially complex regions of interest in the input space with few parameters and, outside them, learns non-parametrically a transformation improving the predictions on the range of interest. Numerical simulations show the benefits of the approach for Bayesian optimization, where one is interested in low values in the minimization framework. Moreover, the theoretical convergence of the method is established under some assumptions.
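The closed-form cross-validation identities alluded to above are of the following textbook flavor for a zero-mean Gaussian process (the thesis's contribution also covers the gradients, which are not sketched here):

    import numpy as np

    def gp_loo(K, y):
        # Leave-one-out predictive means and variances of a zero-mean GP via
        # the identities mu_i = y_i - [Kinv y]_i / Kinv_ii and s2_i = 1 / Kinv_ii;
        # all n folds cost one factorization, the same order as the likelihood.
        Kinv = np.linalg.inv(K)  # illustration only; use a Cholesky solve in practice
        d = np.diag(Kinv)
        return y - (Kinv @ y) / d, 1.0 / d

Here K is the covariance matrix of the training targets, including the noise term, and y is the vector of observations.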
APA, Harvard, Vancouver, ISO, and other styles
34

Ghotikar, Miheer S. "Aortic valve analysis and area prediction using bayesian modeling." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Junning. "Dynamic Bayesian networks : modeling and analysis of neural signals." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/12618.

Full text
Abstract:
Studying interactions between different brain regions or neural components is crucial in understanding neurological disorders. Dynamic Bayesian networks, a type of statistical graphical model, have been suggested as a promising tool to model neural communication systems. This thesis investigates the use of dynamic Bayesian networks for analyzing neural connectivity, with a focus on three topics: structural feature extraction, group analysis, and error control in learning network structures. Extracting interpretable features from experimental data is important for clinical diagnosis and for improving experiment design. A framework is designed for discovering structural differences, such as the pattern of sub-networks, between two groups of Bayesian networks. The framework consists of three components: Bayesian network modeling, statistical structure comparison, and structure-based classification. In a study on stroke using surface electromyography, this method detected several coordination patterns among muscles that could effectively differentiate patients from healthy people. Group analyses are widely conducted in neurological research. However, for dynamic Bayesian networks, the performance of different group-analysis methods had not been systematically investigated. To provide guidance on selecting group-analysis methods, three popular methods, i.e. the virtual-typical-subject, the common-structure and the individual-structure methods, were compared in a study on Parkinson's disease, in terms of their statistical goodness-of-fit to the data and, more importantly, their sensitivity in detecting the effect of medication. The three methods led to considerably different group-level results, and the individual-structure approach was more sensitive to the normalizing effect of medication. Controlling errors is a fundamental problem in applying dynamic Bayesian networks to discovering neural connectivity. An algorithm is developed for this purpose, particularly for controlling the false discovery rate (FDR). It is proved that the algorithm is able to keep the FDR under user-specified levels (for example, the conventional 5%) in the limit of large sample size, while recovering all the true connections with probability one. Several extensions are also developed, including a heuristic modification for moderate sample sizes, an adaptation to prior knowledge, and a combination with Bayesian inference.
APA, Harvard, Vancouver, ISO, and other styles
36

Du, Chao. "Stochastic Modeling and Bayesian Inference with Applications in Biophysics." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10366.

Full text
Abstract:
This thesis explores stochastic modeling and Bayesian inference strategies in the context of the following three problems: 1) modeling the complex interactions between and within molecules; 2) extracting information from the stepwise signals that are commonly found in biophysical experiments; 3) improving the computational efficiency of a non-parametric Bayesian inference algorithm. Chapter 1 studies the data from a recent single-molecule biophysical experiment on enzyme kinetics. Using a stochastic network model, we analyze the autocorrelation of experimental fluorescence intensity and the autocorrelation of enzymatic reaction times. This chapter shows that the stochastic network model is capable of explaining the experimental data in depth and further explains why the enzyme molecules behave fundamentally differently from what the classical model predicts. Modern knowledge of molecular kinetics is often obtained through information extracted from stepwise signals in experiments utilizing fluorescence spectroscopy. Chapter 2 proposes a new Bayesian method to estimate the change-points in stepwise signals. This approach utilizes the marginal likelihood as the tool of inference. This chapter illustrates the impact of the choice of prior on the estimator and provides guidelines for setting the prior. Based on the results of a simulation study, this method outperforms several existing change-point estimators under certain settings. Furthermore, DNA array CGH data and single-molecule data are analyzed with this approach. Chapter 3 focuses on the optional Polya tree, a newly established non-parametric Bayesian approach (Wong and Li 2010). While the existing study shows that the optional Polya tree is promising for analyzing high-dimensional data, its applications are hindered by high computational costs. A heuristic algorithm is proposed in this chapter in an attempt to speed up optional Polya tree inference. This study demonstrates that the new algorithm can reduce the running time significantly with a negligible loss of precision.
Statistics
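A minimal version of change-point estimation by marginal likelihood, assuming a single change point, a flat prior over its location, Gaussian noise with known sigma, and a conjugate normal prior on each segment's level (the thesis's method is more general):

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(3)
    sigma, tau = 1.0, 5.0                  # known noise scale, prior scale on levels
    y = np.concatenate([rng.normal(0, sigma, 40), rng.normal(3, sigma, 60)])

    def seg_logml(seg):
        # marginal likelihood of one segment: level ~ N(0, tau^2), noise known
        n = len(seg)
        cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
        return multivariate_normal.logpdf(seg, mean=np.zeros(n), cov=cov)

    candidates = range(5, len(y) - 5)
    logml = [seg_logml(y[:k]) + seg_logml(y[k:]) for k in candidates]
    k_hat = candidates[int(np.argmax(logml))]  # posterior mode under a flat prior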
APA, Harvard, Vancouver, ISO, and other styles
37

Molinares, Carlos A. "Parametric and Bayesian Modeling of Reliability and Survival Analysis." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3252.

Full text
Abstract:
The objective of this study is to compare Bayesian and parametric approaches to determine which is best for estimating reliability in complex systems. Determining reliability is particularly important in business and medical contexts. As expected, the Bayesian method showed the best results in assessing the reliability of systems. In the first study, the Bayesian reliability function under the Higgins-Tsokos loss function using Jeffreys' prior performs similarly to the Bayesian reliability function based on the squared-error loss. In addition, the Higgins-Tsokos loss function was found to be as robust as the squared-error loss function and slightly more efficient. In the second study, we illustrated that, through the power law intensity function, Bayesian analysis is applicable to the power law process. The power law intensity function is the key entity of the power law process (also called the Weibull process or the non-homogeneous Poisson process). It gives the rate of change of a system's reliability as a function of time. First, using real data, we demonstrated that one of our two parameters behaves as a random variable. With the generated estimates, we obtained a probability density function that characterizes the behavior of this random variable. Using this information, under the commonly used squared-error loss function and with a proposed adjusted estimate for the second parameter, we obtained a Bayesian reliability estimate of the failure probability distribution that is characterized by the power law process. Then, using a Monte Carlo simulation, we showed the superiority of the Bayesian estimate compared with the maximum likelihood estimate, and also the better performance of the proposed estimate compared with its maximum likelihood counterpart. In the next study, a Bayesian sensitivity analysis was performed via Monte Carlo simulation, using the same parameter as in the previous study, under the commonly used squared-error loss function, and using mean square error comparison. The analysis was extended to the second parameter as a function of the first, based on the relationship between their maximum likelihood estimates. The simulation procedure demonstrated that the Bayesian estimates are superior to the maximum likelihood estimates and that they were sensitive to the selection of the prior distribution. Secondly, we found that the proposed adjusted estimate for the second parameter performs better under a noninformative prior. In the fourth study, a Bayesian approach was applied to real data from breast cancer research. The purpose of the study was to investigate the applicability of a Bayesian analysis to the survival times of breast cancer data and to justify the applicability of the Bayesian approach to this domain. The estimation of one parameter, the survival function, and the hazard function were analyzed. The simulation analysis showed that the Bayesian estimate of the parameter performed better compared with the estimated value under the Wheeler procedure. The excellent performance of the Bayesian estimate is reflected even for small sample sizes. The Bayesian survival function was also found to be more efficient than its parametric counterpart. In the last study, a Bayesian analysis was carried out to investigate the sensitivity to the choice of the loss function. One of the parameters of the distribution that characterized the survival times for breast cancer data was estimated with a Bayesian approach under two different loss functions, and the estimates of the survival function were determined in the same setting. The simulation analysis showed that the choice of the squared-error loss function is robust in estimating the parameter and the survival function.
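For reference, the power law intensity function discussed above is lambda(t) = (beta/theta)(t/theta)^(beta-1), and the classical time-truncated maximum likelihood estimates it is compared against take a closed form. The sketch below shows the standard MLEs, not the thesis's adjusted Bayesian estimators; the failure times are made up.

    import numpy as np

    # lambda(t) = (beta / theta) * (t / theta) ** (beta - 1); beta < 1 means
    # reliability growth, beta > 1 deterioration. Classical MLEs for failure
    # times observed on the interval (0, T] (time-truncated case).
    def power_law_mle(times, T):
        times = np.asarray(times, dtype=float)
        n = len(times)
        beta_hat = n / np.log(T / times).sum()
        theta_hat = T / n ** (1.0 / beta_hat)
        return beta_hat, theta_hat

    beta_hat, theta_hat = power_law_mle([5.0, 40.0, 43.0, 175.0, 389.0], T=400.0)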
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Ming. "Hierarchical Bayesian topic modeling with sentiment and author extension." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20598.

Full text
Abstract:
Doctor of Philosophy
Computing and Information Sciences
William H. Hsu
While the Hierarchical Dirichlet Process (HDP) has recently been widely applied to topic modeling tasks, most current hybrid models for concurrent inference of topics and other factors are not based on HDP. In this dissertation, we present two new models that extend an HDP topic modeling framework to incorporate other learning factors. One model injects Latent Dirichlet Allocation (LDA) based sentiment learning into HDP. This model preserves the benefits of nonparametric Bayesian models for topic learning, while learning latent sentiment aspects simultaneously. It automatically learns different word distributions for each single sentiment polarity within each topic generated. The other model combines an existing HDP framework for learning topics from free text with latent authorship learning within a generative model using author list information. This model adds one more layer into the current hierarchy of HDPs to represent topic groups shared by authors, and the document topic distribution is represented as a mixture of topic distribution of its authors. This model automatically learns author contribution partitions for documents in addition to topics.
APA, Harvard, Vancouver, ISO, and other styles
39

Hagerty, Nicholas L. "Bayesian Network Modeling of Causal Relationships in Polymer Models." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1619009432971036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Andreev, Andriy. "Nonparametric statistical modeling of recurrent events : a Bayesian approach." Helsinki : University of Helsinki, 2000. http://ethesis.helsinki.fi/julkaisut/mat/rolfn/vk/andreev/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Toronto, Neil. "Super-resolution via image recapture and Bayesian effect modeling /." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2805.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Page, Garritt L. "Bayesian mixture modeling and outliers in inter-laboratory studies." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3389133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Del Pero, Luca. "Top-Down Bayesian Modeling and Inference for Indoor Scenes." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/297040.

Full text
Abstract:
People can understand the content of an image without effort. We can easily identify the objects in it, and figure out where they are in the 3D world. Automating these abilities is critical for many applications, like robotics, autonomous driving and surveillance. Unfortunately, despite recent advancements, fully automated vision systems for image understanding do not exist. In this work, we present progress restricted to the domain of images of indoor scenes, such as bedrooms and kitchens. These environments typically have the "Manhattan" property that most surfaces are parallel to three principal ones. Further, the 3D geometry of a room and the objects within it can be approximated with simple geometric primitives, such as 3D blocks. Our goal is to reconstruct the 3D geometry of an indoor environment while also understanding its semantic meaning, by identifying the objects in the scene, such as beds and couches. We separately model the 3D geometry, the camera, and an image likelihood, to provide a generative statistical model for image data. Our representation captures the rich structure of an indoor scene, by explicitly modeling the contextual relationships among its elements, such as the typical size of objects and their arrangement in the room, and simple physical constraints, such as 3D objects do not intersect. This ensures that the predicted image interpretation will be globally coherent geometrically and semantically, which allows tackling the ambiguities caused by projecting a 3D scene onto an image, such as occlusions and foreshortening. We fit this model to images using MCMC sampling. Our inference method combines bottom-up evidence from the data and top-down knowledge from the 3D world, in order to explore the vast output space efficiently. Comprehensive evaluation confirms our intuition that global inference of the entire scene is more effective than estimating its individual elements independently. Further, our experiments show that our approach is competitive and often exceeds the results of state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Toronto, Neil B. "Super-Resolution via Image Recapture and Bayesian Effect Modeling." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1839.

Full text
Abstract:
The goal of super-resolution is to increase not only the size of an image, but also its apparent resolution, making the result more plausible to human viewers. Many super-resolution methods do well at modest magnification factors, but even the best suffer from boundary and gradient artifacts at high magnification factors. This thesis presents Bayesian edge inference (BEI), a novel method grounded in Bayesian inference that does not suffer from these artifacts and remains competitive in published objective quality measures. BEI works by modeling the image capture process explicitly, including any downsampling, and modeling a fictional recapture process, which together allow principled control over blur. Scene modeling requires noncausal modeling within a causal framework, and an intuitive technique for that is given. Finally, BEI with trivial changes is shown to perform well on two tasks outside of its original domain—CCD demosaicing and inpainting—suggesting that the model generalizes well.
APA, Harvard, Vancouver, ISO, and other styles
45

Visalli, Antonino. "Bayesian modeling of temporal expectations in the human brain." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426247.

Full text
Abstract:
The ability to predict when a relevant event might occur is critical to survival in our dynamic and uncertain environment. This cognitive ability, usually referred to as temporal preparation, allows us to prepare temporally optimized responses to forthcoming stimuli by anticipating their timing: safely crossing a busy road during rush hour, timing turn-taking in a conversation, and catching something in mid-air are all examples of how important and ubiquitous temporal preparation is in our everyday life (e.g., Correa, 2010; Coull & Nobre, 2008; Nobre, Correa, & Coull, 2007). In laboratory settings, temporal preparation has traditionally been investigated, in its implicit form, through the "variable foreperiod paradigm" (see Coull, 2009; Niemi & Näätänen, 1981, for a review). In such a paradigm, the foreperiod is a time interval of variable duration that separates a warning stimulus and a target stimulus requiring a response. What is usually observed with this paradigm is that response times (RTs) reflect the temporal probability of stimulus onset: RTs decrease with increasing probability. This implies that participants learn to use the information implicitly afforded by the passage of time and that related to the temporal probability of the onset of the target stimulus (i.e., the hazard rate; Janssen & Shadlen, 2005). In other words, it seems that they are able to use predictive internal models of event timing in order to optimize behaviour. Although previous studies have started to investigate which brain areas encode temporal probabilities (i.e., predictive models) to anticipate event onset (e.g., Bueti, Bahrami, Walsh, & Rees, 2010; Cui, Stetson, Montague, & Eagleman, 2009; see also Vallesi et al., 2007), to our knowledge there is no evidence on how the brain forms and updates such predictive models. Based on these premises, the overarching goal of the present PhD project was to pinpoint the neural mechanisms by which predictive models of event timing are dynamically updated. Moreover, given that in real life updating usually occurs in the presence of surprising events (i.e., events that are improbable under a predictive model), it is challenging to disentangle updating from surprise (O'Reilly et al., 2013). Therefore, our second and interrelated research goal was to understand whether, and to what extent, it is possible to dissociate the neural mechanisms specifically involved in updating from those dealing with surprising events that do not require an update of internal models. To accomplish our research goals, we capitalized on state-of-the-art methodologies [i.e., functional magnetic resonance imaging (fMRI) and electroencephalography (EEG)] together with computational modelling. Specifically, we considered the brain as a Bayesian observer. Indeed, Bayesian frameworks are gaining increasing popularity in explaining cognitive brain functions (Friston, 2012). In a nutshell, the construction of computational Bayesian models allows us to quantitatively describe temporal expectations in terms of probability distributions and to capture updating using Bayes' rule. The present PhD project is composed of three studies. In the first two studies we implemented a version of the foreperiod paradigm in which participants could predict target onsets by estimating their underlying temporal probability distributions. During the task, these distributions changed, hence requiring participants to update their temporal expectations. Furthermore, a simple manipulation of the colors in which the targets were presented (cf. O'Reilly et al., 2013) allowed us to vary updating and surprise independently across trials. We then constructed a normative Bayesian learner (a computational model adapted from O'Reilly et al., 2013) in order to obtain an estimate of a participant's temporal expectations on a trial-by-trial basis. In Study 1, trial-by-trial fMRI data acquired during our foreperiod paradigm were correlated with two information-theoretic parameters calculated with reference to our Bayesian model: the Kullback-Leibler divergence (DKL) and Shannon's information (IS). These two measures have previously been used to formally describe belief updating and the surprise associated with events under a predictive model, respectively (e.g., Baldi & Itti, 2010; Kolossa, Kopp, & Fingscheidt, 2015; O'Reilly et al., 2013; Strange et al., 2005). Our results showed that the fronto-parietal network and the cingulo-opercular network were differentially involved in the updating of temporal expectations and in dealing with surprising events, respectively. Having successfully validated the use of Bayesian models in our first fMRI study and dissociated updating from surprise, the next step was to investigate the temporal dynamics of these two processes. Do updating and surprise act on similar or distinct processing stages? What is the time course associated with the two? To address these questions, in Study 2 participants performed our adapted foreperiod task (the same task as in Study 1) while their EEG activity was recorded. In this study, we relied on the literature on the P3 (a specific ERP component related to information processing) and the Bayesian brain (e.g., Kopp, 2008; Kopp et al., 2016; Mars et al., 2008; Seer, Lange, Boos, Dengler, & Kopp, 2016). Importantly, however, we also took advantage of the combination of a mass-univariate approach with novel deconvolution methods to explore the entire spatio-temporal pattern of the EEG data. This enabled us to extend our analyses beyond the P3 component. Results from Study 2 confirmed that surprise and updating can also be differentiated at the electrophysiological level, and that updating elicited a more complex pattern than surprise. As regards the P3 in relation to the literature on the Bayesian brain (Kolossa, Fingscheidt, Wessel, & Kopp, 2013; Kolossa et al., 2015; Mars et al., 2008), our findings corroborated the idea that this component is selectively modulated by surprise and updating. While in Studies 1 and 2 participants were explicitly encouraged to form and update temporal expectations using the target color, in Study 3 we went a step further by asking whether a more implicit task structure might influence the construction of the predictive internal model. To that aim, during the foreperiod task designed for the third study, participants were not explicitly informed about the presence of the underlying temporal probability distributions from which target onsets were drawn. In this way, we aimed to investigate behavioural and EEG differences in the way participants learned to form and update temporal expectations when changes in the underlying distributions were not explicitly signalled. Critically, we again found that surprise and updating could be differentiated. Moreover, combining these results with those of Study 2, we isolated two EEG signatures of the inferential process underlying the updating of prior temporal expectations, which responded to both explicit and implicit contextual changes. Overall, we believe that the results of the present PhD project will further our understanding of the cognitive processes and neural mechanisms that allow us to optimize our temporal preparation abilities.
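The two information-theoretic quantities can be illustrated with a toy Dirichlet-categorical learner over discretized onsets, where IS is the negative log predictive probability of the observed onset and DKL is computed between successive predictive distributions; this is a simplified stand-in for the normative Bayesian learner used in the studies, and the onset sequence is made up.

    import numpy as np

    alpha = np.ones(4)                      # flat Dirichlet prior over 4 onset bins
    for onset in [2, 2, 3, 2, 0]:           # sequence of observed onset bins
        p = alpha / alpha.sum()             # predictive distribution before the trial
        surprise = -np.log(p[onset])        # Shannon surprise I_S
        alpha[onset] += 1.0                 # Bayesian update of the pseudo-counts
        q = alpha / alpha.sum()             # predictive distribution after the trial
        dkl = np.sum(q * np.log(q / p))     # updating, as D_KL(new || old)
        print(f"onset={onset}  I_S={surprise:.3f}  D_KL={dkl:.4f}")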
APA, Harvard, Vancouver, ISO, and other styles
46

Bäcklund, Joakim, and Nils Johdet. "A Bayesian approach to predict the number of soccer goals : Modeling with Bayesian Negative Binomial regression." Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149028.

Full text
Abstract:
This thesis focuses on a well-known topic in sports betting: predicting the number of goals in soccer games. The data set used comes from the top English soccer league, the Premier League, and consists of games played in the seasons 2015/16 to 2017/18. The thesis approaches the prediction with the auxiliary support of the odds from the betting exchange Betfair. The purpose is to find a model that can create an accurate goal distribution. The methods used are Bayesian Negative Binomial regression and Bayesian Poisson regression. The results conclude that the Poisson regression is the better model because of the presence of underdispersion. We argue that the methods can be used to compare the accuracy of different sportsbooks and may help in creating better models.
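The role of underdispersion in that conclusion is easy to see: a negative binomial with mean mu and dispersion k has variance mu + mu^2/k, which can never fall below the mean, so when the variance/mean ratio of goal counts is below 1 the extra NB parameter cannot help. A quick check of this kind (with made-up counts) might look like:

    import numpy as np

    # Made-up per-match goal counts; variance/mean < 1 signals underdispersion,
    # which the negative binomial (variance = mu + mu**2/k >= mu) cannot model.
    goals = np.array([1, 2, 0, 3, 1, 1, 2, 0, 1, 2, 0, 1, 1, 2, 1])
    print(f"variance/mean = {goals.var(ddof=1) / goals.mean():.2f}")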
APA, Harvard, Vancouver, ISO, and other styles
47

Shahrabi Farahani, Hossein. "Computational Modeling of Cancer Progression." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121597.

Full text
Abstract:
Cancer is a multi-stage process resulting from the accumulation of genetic mutations. Data obtained from assaying a tumor only contain the set of mutations in the tumor and lack information about their temporal order. Learning the chronological order of the genetic mutations is an important step towards understanding the disease. The probability of a mutation being introduced to a tumor increases if certain mutations that promote it have already happened. Such dependencies induce what we call the monotonicity property in cancer progression. A realistic model of cancer progression should take this property into account. In this thesis, we present two models for cancer progression and algorithms for learning them. In the first model, we propose Progression Networks (PNs), which are a special class of Bayesian networks. In learning PNs, the issue of monotonicity is taken into consideration. The problem of learning PNs is reduced to Mixed Integer Linear Programming (MILP), an NP-hard problem for which very good heuristics exist. We also developed a program, DiProg, for learning PNs. In the second model, the problem of noise in biological experiments is addressed by introducing hidden variables. We call this model the Hidden variable Oncogenetic Network (HON). In a HON, two variables are assigned to each node: a hidden variable that represents the progression of cancer to the node, and an observable random variable that represents the observation of the mutation corresponding to the node. We devised a structural Expectation Maximization (EM) algorithm for learning HONs. In the M-step of the structural EM algorithm, we need to perform a considerable number of inference tasks. Because exact inference is tractable only on Bayesian networks with bounded treewidth, we also developed an algorithm for learning bounded-treewidth Bayesian networks by reducing the problem to a MILP. Our algorithms performed well on synthetic data. We also tested them on cytogenetic data from renal cell carcinoma. The learned progression networks from both algorithms are in agreement with previously published results. MicroRNAs are short non-coding RNAs that are involved in post-transcriptional regulation. A-to-I editing of microRNAs converts adenosine to inosine in the double-stranded RNA. We developed a method for determining editing levels in mature microRNAs from high-throughput RNA sequencing data from the mouse brain. Here, for the first time, we showed that the level of editing increases with development.


APA, Harvard, Vancouver, ISO, and other styles
48

McHugh, Sean W. "Phylogenetic Niche Modeling." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104893.

Full text
Abstract:
Projecting environmental niche models through time is a common goal when studying species' responses to climatic change. Species distribution models (SDMs) are commonly used to estimate a species' niche from observed patterns of occurrence and environmental predictors. However, a species' niche is also shaped by non-environmental factors, including biotic interactions and dispersal barriers, which truncate SDM estimates. Though truncated SDMs may accurately predict the present-day species niche, projections through time are often biased by changing environmental conditions. Modeling niche in a phylogenetic framework leverages a clade's shared evolutionary history to pull species estimates closer towards phylogenetically conserved values and farther away from species-specific biases. We propose a new Bayesian model of phylogenetic niche implemented in R. Under our model, species' SDM parameters are transformed into biologically interpretable continuous parameters of environmental niche optimum, breadth, and tolerance evolving under a multivariate Brownian motion random walk. Through simulation analyses, we demonstrated model accuracy and precision that improved as phylogeny size increased. We also demonstrated our model on a clade of eastern United States plethodontid salamanders, accurately estimating species niche even when no occurrence data are present. Our model demonstrates a novel framework in which niche changes can be studied forwards and backwards through time to understand ancestral ranges, patterns of environmental specialization, and niche in data-deficient species.
Master of Science
As many species face increasing pressure in a changing climate, it is crucial to understand the set of environmental conditions that shape species' ranges, known as the environmental niche, to guide conservation and land management practices. Species distribution models (SDMs) are common tools used to model species' environmental niches. These models treat a species' probability of occurrence as a function of environmental conditions. SDM niche estimates can predict a species' range given climate data, paleoclimate, or projections of future climate change, to estimate species range shifts from the past to the future. However, SDM estimates are often biased by non-environmental factors shaping a species' range, including competitive divergence or dispersal barriers. Biased SDM estimates can result in range predictions that get worse as we extrapolate beyond the observed climatic conditions. One way to overcome these biases is by leveraging the shared evolutionary history among related species to "fill in the gaps". Species that are more closely phylogenetically related often have more similar, or "conserved", environmental niches. By estimating the environmental niche over all species in a clade jointly, we can leverage niche conservatism to produce more biologically realistic estimates of niche. However, a methodological gap currently exists between SDM estimates and macroevolutionary models, prohibiting them from being estimated jointly. We propose a novel model of evolutionary niche called PhyNE (Phylogenetic Niche Evolution), where biologically realistic environmental niches are fit across a set of species with occurrence data, while simultaneously fitting and leveraging a model of evolution across a portion of the tree of life. We evaluated model accuracy, bias, and precision through simulation analyses. Accuracy and precision increased with larger phylogeny size, and the model effectively estimated its parameters. We then applied PhyNE to plethodontid salamanders from eastern North America. This ecologically important and diverse group of lungless salamanders requires cold and wet conditions, and their distributions are strongly affected by climatic conditions. Species within the family vary greatly in distribution, with some being wide-ranging generalists while others are hyper-endemics that inhabit specific mountains in the Southern Appalachians with restricted thermal and hydric conditions. We fit PhyNE to occurrence data for these species and their associated average annual precipitation and temperature data. We identified no correlations between species' environmental preference and specialization. Patterns of preference and specialization varied among plethodontid species groups, with more aquatic species possessing broader environmental niches, likely because the aquatic microclimate facilitates occurrence in a wider range of conditions. We demonstrated the effectiveness of PhyNE's evolutionarily informed estimates of environmental niche, even when species' occurrence data are limited or absent. PhyNE establishes a proof-of-concept framework for a new class of approaches for studying niche evolution, including improved methods for estimating niche for data-deficient species, historical reconstructions, future predictions under climate change, and evaluation of niche evolutionary processes across the tree of life. Our approach establishes a framework for leveraging the rapidly growing availability of biodiversity data and molecular phylogenies to make robust eco-evolutionary predictions and assessments of species' niche and distributions in a rapidly changing world.
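The evolutionary backbone assumed by such a model can be sketched in a few lines: a single (log-scale) niche parameter evolving by Brownian motion along a hypothetical three-taxon tree, so that closely related tips receive correlated values. The tree, rate, and root state below are placeholders, not estimates from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # (parent, child, branch length) edges; B and C share more history than
    # either does with A, so their simulated values correlate across replicates.
    edges = [("root", "A", 1.0), ("root", "n1", 0.5),
             ("n1", "B", 0.5), ("n1", "C", 0.5)]
    sigma2, state = 0.3, {"root": 0.0}   # BM rate and root value (placeholders)
    for parent, child, t in edges:
        state[child] = state[parent] + rng.normal(0.0, np.sqrt(sigma2 * t))
    print({tip: round(state[tip], 3) for tip in ("A", "B", "C")})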
APA, Harvard, Vancouver, ISO, and other styles
49

Guo, Xiao. "Bayesian surrogates for functional response modeling and metamaterial rapid design." HKBU Institutional Repository, 2017. http://repository.hkbu.edu.hk/etd_oa/418.

Full text
Abstract:
In many scientific and engineering research areas, Bayesian surrogate models are utilized to handle nonlinear data in regression and classification tasks. In this thesis, we consider a real-life problem, functional response modeling of metamaterials and their rapid design, for which we establish and test such models. To familiarize the reader with this subject, some fundamental electromagnetic physics is provided. Noticing that the dispersive data are usually in rational form, a two-stage modeling approach is proposed: in the first stage, a universal link function is formulated to rationally approximate the data with a few discrete parameters, namely poles and residues; these are then used to synthesize equivalent circuits, and surrogate models are applied to the circuit elements in the second stage. To start with a regression scheme, the classical Gaussian process (GP) is introduced, which proceeds by parameterizing a covariance function over continuous inputs and inferring hyperparameters from the training data. Two metamaterial prototypes illustrate the methodology of model building, and the results demonstrate the efficiency and precision of the probabilistic predictions. One well-known problem with metamaterial functionality is its great variability in resonance identities, which leads to discrepancies in the approximation orders required to fit the data with rational functions. In order to give accurate predictions, both the approximation order and the circuit elements present should be inferred, by classification and regression, respectively. An augmented Bayesian surrogate model, which integrates GP multiclass classification and Bayesian treed GP regression, is formulated to deal systematically with this unique physical phenomenon. Meanwhile, nonstationarity and computational complexity are handled well by such a model. Finally, as one of the most advantageous properties of the Bayesian perspective, probabilistic assessment of the underlying uncertainties is also discussed and demonstrated with detailed formulations and examples.
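A minimal sketch of the second-stage surrogate idea: zero-mean GP regression with a squared-exponential covariance on toy data. The kernel hyperparameters here are fixed placeholders rather than inferred from the training data as in the thesis.

    import numpy as np

    def sqexp(a, b, ls=0.5, var=1.0):
        # squared-exponential covariance between two 1-D input vectors
        return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

    x = np.linspace(0, 1, 8)
    y = np.sin(6 * x)                                    # toy training responses
    xs = np.linspace(0, 1, 50)                           # test inputs
    K = sqexp(x, x) + 1e-6 * np.eye(len(x))              # jitter for stability
    Ks = sqexp(xs, x)
    mean = Ks @ np.linalg.solve(K, y)                    # posterior predictive mean
    cov = sqexp(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))      # pointwise uncertainty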
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Qihua. "Bayesian hierarchial spatiotemporal modeling of functional magnetic resonance imaging data." Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3245023.

Full text
Abstract:
Thesis (Ph.D. in Statistical Science)--S.M.U., 2007.
Title from PDF title page (viewed Mar. 18, 2008). Source: Dissertation Abstracts International, Volume: 67-12, Section: B, page: 7154. Adviser: Richard F. Gunst. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles