
Dissertations / Theses on the topic 'Bayesian framework'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Bayesian framework.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tenenbaum, Joshua B. (Joshua Brett) 1972. "A Bayesian framework for concept learning." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/16714.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1999.
Includes bibliographical references (p. 297-314).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Human concept learning presents a version of the classic problem of induction, which is made particularly difficult by the combination of two requirements: the need to learn from a rich (i.e., nested and overlapping) vocabulary of possible concepts and the need to be able to generalize concepts reasonably from only a few positive examples. I begin this thesis by considering a simple number concept game as a concrete illustration of this ability. On this task, human learners can with reasonable confidence lock in on one out of a billion billion billion logically possible concepts, after seeing only four positive examples of the concept, and can generalize informatively after seeing just a single example. Neither of the two classic approaches to inductive inference (hypothesis testing in a constrained space of possible rules, and computing similarity to the observed examples) can provide a complete picture of how people generalize concepts in even this simple setting. This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By imposing the constraints of a probabilistic model of the learning situation, the Bayesian learner can draw out much more information about a concept's extension from a given set of observed examples than either rule-based or similarity-based approaches do, and can use this information in a rational way to infer the probability that any new object is also an instance of the concept. There are three components of the Bayesian framework: a prior probability distribution over a hypothesis space of possible concepts; a likelihood function, which scores each hypothesis according to its probability of generating the observed examples; and the principle of hypothesis averaging, under which the learner computes the probability of generalizing a concept to new objects by averaging the predictions of all hypotheses weighted by their posterior probability (proportional to the product of their priors and likelihoods). The likelihood, under the assumption of randomly sampled positive examples, embodies the size principle for scoring hypotheses: smaller consistent hypotheses are more likely than larger hypotheses, and they become exponentially more likely as the number of observed examples increases. The principle of hypothesis averaging allows the Bayesian framework to accommodate both rule-like and similarity-like generalization behavior, depending on how peaked the posterior probability is. Together, the size principle plus hypothesis averaging predict a convergence from similarity-like generalization (due to a broad posterior distribution) after very few examples are observed to rule-like generalization (due to a sharply peaked posterior distribution) after sufficiently many examples have been observed. The main contributions of this thesis are as follows. First and foremost, I show how it is possible for people to learn and generalize concepts from just one or a few positive examples (Chapter 2). Building on that understanding, I then present a series of case studies of simple concept learning situations where the Bayesian framework yields both qualitative and quantitative insights into the real behavior of human learners (Chapters 3-5). These cases each focus on a different learning domain.
Chapter 3 looks at generalization in continuous feature spaces, a typical representation of objects in psychology and machine learning with the virtues of being analytically tractable and empirically accessible, but the downside of being highly abstract and artificial. Chapter 4 moves to the more natural domain of learning words for categories of objects and shows the relevance of the same phenomena and explanatory principles introduced in the more abstract setting of Chapters 1-3 for real-world learning tasks like this one. In each of these domains, both similarity-like and rule-like generalization emerge as special cases of the Bayesian framework in the limits of very few or very many examples, respectively. However, the transition from similarity to rules occurs much faster in the word learning domain than in the continuous feature space domain. I propose a Bayesian explanation of this difference in learning curves that places crucial importance on the density or sparsity of overlapping hypotheses in the learner's hypothesis space. To test this proposal, a third case study (Chapter 5) returns to the domain of number concepts, in which human learners possess a more complex body of prior knowledge that leads to a hypothesis space with both sparse and densely overlapping components. Here, the Bayesian theory predicts, and human learners produce, either rule-based or similarity-based generalization from a few examples, depending on the precise examples observed. I also discuss how several classic reasoning heuristics may be used to approximate the much more elaborate computations of Bayesian inference that this domain requires. In each of these case studies, I confront some of the classic questions of concept learning and induction: Is the acquisition of concepts driven mainly by pre-existing knowledge or the statistical force of our observations? Is generalization based primarily on abstract rules or similarity to exemplars? I argue that in almost all instances, the only reasonable answer to such questions is: both. More importantly, I show how the Bayesian framework allows us to answer much more penetrating versions of these questions: How does prior knowledge interact with the observed examples to guide generalization? Why does generalization appear rule-based in some cases and similarity-based in others? Finally, Chapter 6 summarizes the major contributions in more detail and discusses how this work fits into the larger picture of contemporary research on human learning, thinking, and reasoning.
by Joshua B. Tenenbaum.
Ph.D.
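To make the three components of the framework concrete, here is a minimal sketch of the number-game setup described in the abstract: a toy hypothesis space, a uniform prior, a size-principle likelihood, and generalization by hypothesis averaging. The hypotheses, priors, and example numbers are invented for illustration and are not the spaces used in the thesis.

```python
# Toy Bayesian concept learning over numbers 1-100: prior, size-principle
# likelihood, and generalization by hypothesis averaging.
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "odd":             {n for n in range(1, 101) if n % 2 == 1},
    "powers of 2":     {2 ** k for k in range(1, 7)},
    "multiples of 10": {n for n in range(10, 101, 10)},
    "1 to 100":        set(range(1, 101)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior (illustrative)

def posterior(examples):
    """p(h | X) proportional to p(h) * (1/|h|)^n for hypotheses consistent with all examples."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):                      # consistency check
            scores[h] = prior[h] * (1.0 / len(extension)) ** len(examples)  # size principle
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def p_generalize(y, examples):
    """Probability a new number y belongs to the concept: hypothesis averaging."""
    post = posterior(examples)
    return sum(p for h, p in post.items() if y in hypotheses[h])

print(p_generalize(32, [16, 8, 2, 64]))   # ~1: rule-like ("powers of 2" dominates the posterior)
print(p_generalize(12, [16, 8, 2, 64]))   # ~0: excluded by the dominant rule
print(p_generalize(12, [16]))             # graded, similarity-like after a single example
```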
APA, Harvard, Vancouver, ISO, and other styles
2

Denton, Stephen E. "Exploring active learning in a Bayesian framework." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3380073.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences and the Dept. of Cognitive Science, 2009.
Title from PDF t.p. (viewed on Jul 19, 2010). Source: Dissertation Abstracts International, Volume: 70-12, Section: B, page: 7870. Advisers: John K. Kruschke; Jerome R. Busemeyer.
APA, Harvard, Vancouver, ISO, and other styles
3

Scotto Di Perrotolo, Alexandre. "A Theoretical Framework for Bayesian Optimization Convergence." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-225129.

Full text
Abstract:
Bayesian optimization is a well-known class of derivative-free optimization algorithms mainly used for expensive black-box objective functions. Despite their efficiency, they suffer from the lack of a rigorous convergence criterion, which makes them more likely to be used as modeling tools than as optimization tools. This master thesis proposes, analyzes, and tests a globally convergent framework (that is to say, convergence to a stationary point regardless of the initial sample) for Bayesian optimization algorithms. The framework is designed to preserve the global search characteristics for the minimum while being rigorously monitored to converge.
Bayesian optimization is a well-known class of derivative-free global optimization algorithms mainly used for optimizing expensive black-box functions. Despite their relative efficiency, they suffer from the lack of a rigorous convergence criterion, which makes them more likely to be used as modeling tools than as optimization tools. This report proposes, analyzes, and tests a globally convergent framework (in a sense described further in the thesis) for Bayesian optimization algorithms, one that inherits the global search characteristics for the minimum while being carefully monitored so that it converges.
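For readers unfamiliar with the class of algorithms the thesis analyzes, the following is a minimal Bayesian optimization loop with a Gaussian-process surrogate and expected improvement. It is a generic baseline written under assumed settings (toy objective, fixed kernel, grid acquisition), not the globally convergent framework the thesis proposes.

```python
# Minimal Bayesian optimization: GP surrogate + expected improvement (minimization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                          # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))        # initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 401).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    imp = best - mu                                   # improvement over the incumbent
    z = imp / np.maximum(sigma, 1e-12)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)      # expected improvement
    x_next = grid[np.argmax(ei)]                      # next expensive evaluation
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print("best point found:", X[np.argmin(y)].item(), "value:", y.min())
```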
APA, Harvard, Vancouver, ISO, and other styles
4

Zhong, Xionghu. "Bayesian framework for multiple acoustic source tracking." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4752.

Full text
Abstract:
Acoustic source (speaker) tracking in the room environment plays an important role in many speech and audio applications such as multimedia, hearing aids and hands-free speech communication and teleconferencing systems; the position information can be fed into a higher processing stage for high-quality speech acquisition, enhancement of a specific speech signal in the presence of other competing talkers, or keeping a camera focused on the speaker in a video-conferencing scenario. Most existing systems focus on the single source tracking problem, which assumes that one and only one source is active all the time and that the state to be estimated is simply the source position. However, in practical scenarios, multiple speakers may be simultaneously active, and the tracking algorithm should be able to localise each individual source and estimate the number of sources. This thesis contains three contributions towards solutions to multiple acoustic source tracking in a moderately noisy and reverberant environment. The first contribution of this thesis is a time-delay of arrival (TDOA) estimation approach for multiple sources. Although the phase transform (PHAT) weighted generalised cross-correlation (GCC) method has been employed to extract the TDOAs of multiple sources, it is primarily used for a single source scenario and its performance for multiple TDOA estimation has not been comprehensively studied. The proposed approach combines the degenerate unmixing estimation technique (DUET) and the GCC method. Since the speech mixtures are assumed window-disjoint orthogonal (WDO) in the time-frequency domain, the spectrograms can be separated by employing DUET, and the GCC method can then be applied to the spectrogram of each individual source. Probabilities of detection and false alarm are also proposed to evaluate the TDOA estimation performance under a series of experimental parameters. Next, considering that multiple acoustic sources may appear nonconcurrently, an extended Kalman particle filter (EKPF) is developed for a special multiple acoustic source tracking problem, namely “nonconcurrent multiple acoustic tracking (NMAT)”. The extended Kalman filter (EKF) is used to approximate the optimum weights, and the subsequent particle filtering (PF) naturally takes the previous position estimates as well as the current TDOA measurements into account. The proposed approach is thus able to lock onto sharp changes of the source position quickly and to avoid the tracking lag of the general sequential importance resampling (SIR) PF. Finally, these investigations are extended into an approach to track multiple acoustic sources whose number is unknown and time-varying. The DUET-GCC method is used to obtain the TDOA measurements for multiple sources, and a random finite set (RFS) based Rao-Blackwellised PF is employed and modified to track the sources. Each particle has an RFS form encapsulating the states of all sources and is capable of addressing source dynamics: source survival, new source appearance and source deactivation. A data association variable is defined to depict the source dynamics and their relation to the measurements. The Rao-Blackwellisation step is used to decompose the state: the source positions are marginalised by using an EKF, and only the data association variable needs to be handled by a PF. The performances of all the proposed approaches are extensively studied under different noisy and reverberant environments and compare favorably with existing tracking techniques.
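As background for the first contribution, here is a rough single-source sketch of PHAT-weighted GCC for TDOA estimation, the building block the thesis extends with DUET to handle multiple sources. The signal, sampling rate, and delay are invented toy values.

```python
# PHAT-weighted generalised cross-correlation (GCC-PHAT) between two microphone signals.
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = np.conj(X1) * X2
    cross /= np.abs(cross) + 1e-12              # PHAT weighting: keep the phase, drop the magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift    # positive shift: x2 lags x1
    return shift / fs                            # TDOA in seconds

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
s = np.random.default_rng(1).standard_normal(t.size)     # toy source signal
delay = 12                                                # true delay in samples
x1 = s
x2 = np.concatenate((np.zeros(delay), s[:-delay]))        # delayed copy at the second microphone
print("estimated TDOA (samples):", gcc_phat(x1, x2, fs) * fs)
```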
APA, Harvard, Vancouver, ISO, and other styles
5

Kwee, Ivo Widjaja. "Towards a Bayesian framework for optical tomography." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Anand, Farminder Singh. "Bayesian framework for improved R&D decisions." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/39530.

Full text
Abstract:
This thesis work describes the formulation of a Bayesian approach, along with new tools, to systematically reduce uncertainty in Research & Development (R&D) alternatives. During the initial stages of R&D many alternatives are considered and high uncertainty exists for all the alternatives. The ideal approach in addressing the many R&D alternatives is to find the one alternative which is stochastically dominant, i.e., the alternative which is better in all possible scenarios of uncertainty. Often a stochastically dominant alternative does not exist. This leaves the R&D manager with two alternatives: either to make a selection based on a user-defined utility function, or to gather more information in order to reduce uncertainty in the various alternatives. From the decision maker's perspective the second alternative has more intrinsic value, since reduction of uncertainty will improve the confidence in the selection and further reduce the high downside risk involved with decisions made under high uncertainty. The motivation for this work is derived from our preliminary work on the evaluation of biorefinery alternatives, which brought into the limelight the key challenges and opportunities in the evaluation of R&D alternatives. The primary challenge in the evaluation of many R&D alternatives was the presence of uncertainty in the many unit operations within each and every alternative. Additionally, limited or non-existent experimental data made it infeasible to quantify the uncertainty and led to an inability to develop even a simple systematic strategy to reduce it. Moreover, even if the uncertainty could be quantified, the traditional approaches (scenario analysis or stochastic analysis) lacked the ability to evaluate the key group of uncertainty contributors. Lastly, traditional design of experiment approaches focus on reducing uncertainty in the parameter estimates of the model, whereas what is required is a design of experiment approach that focuses on the decision (selection of the key alternative). In order to tackle all the above-mentioned challenges, a Bayesian framework along with two new tools is proposed. The Bayesian framework consists of three main steps: (a) quantification of uncertainty; (b) evaluation of key uncertainty contributors; and (c) design of experiment strategies focused on decision making rather than traditional parameter uncertainty reduction. To quantify technical uncertainty using expert knowledge, existing elicitation methods in the literature (outside the chemical engineering domain) are used. To illustrate the importance of quantifying technical uncertainty, a bio-refinery case study is considered. The case study is an alternative for producing ethanol as a value-added product in a Kraft mill producing pulp from softwood. To produce ethanol, a hot water pre-extraction of hemi-cellulose is considered prior to the pulping stage. Using this case study, the methodology to quantify technical uncertainty using experts' knowledge is demonstrated. To limit the cost of R&D investment for selection or rejection of an R&D alternative, it is essential to evaluate the key uncertainty contributors. Global sensitivity analysis (GSA) is a tool which can be used to evaluate the key uncertainties. But quite often global sensitivity analysis fails to differentiate between the uncertainties and assigns them equal global sensitivity indices.
To counter this failing of GSA, a new method, conditional global sensitivity (c-GSA), is presented, which is able to differentiate between the uncertainties even when GSA fails to do so. To demonstrate the value of c-GSA, many small examples are presented. The third and last key method in the Bayesian framework is the decision-oriented design of experiment. Traditional 'Design of Experiment' (DOE) approaches focus on minimization of parameter error variance. In this work, a new "decision-oriented" DOE approach is proposed that takes into account how the generated data, and subsequently the model developed based on them, will be used in decision making. By doing so, the parameter variances get distributed in a manner such that their adverse impact on the targeted decision making is minimal. Results show that the new decision-oriented DOE approach significantly outperforms the standard D-optimal design approach. The new design method should be a valuable tool when experiments are conducted for the purpose of making R&D decisions. Finally, to demonstrate the importance of the overall Bayesian framework, a bio-refinery case study is considered. The case study consists of the alternative to introduce a hemi-cellulose pre-extraction stage prior to pulping in a thermo-mechanical pulp mill. Application of the Bayesian framework to this alternative results in a significant improvement in the prediction of the true potential value of the alternative.
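A crude illustration of the kind of variance-based sensitivity ranking such a framework relies on (not the proposed c-GSA method): a toy net-value model with three uncertain inputs and a binned estimate of each input's first-order sensitivity index. All distributions and numbers are placeholders.

```python
# First-order sensitivity S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning Monte Carlo samples.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
yield_frac = rng.beta(8, 2, n)          # expert-elicited uncertainty on extraction yield
price = rng.normal(600, 80, n)          # product price, $/tonne
capex = rng.normal(50e6, 10e6, n)       # capital cost, $

value = yield_frac * price * 1e5 - capex    # toy net value of the R&D alternative

def first_order_index(x, y, bins=50):
    """Variance of the conditional mean of y given binned x, over the total variance."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

for name, x in [("yield", yield_frac), ("price", price), ("capex", capex)]:
    print(name, round(first_order_index(x, value), 3))   # larger index = bigger uncertainty contributor
```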
APA, Harvard, Vancouver, ISO, and other styles
7

Shao, Yuan. "A Bayesian reasoning framework for model-driven vision." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Brunton, Alan. "A Bayesian framework for panoramic imaging of complex scenes." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27336.

Full text
Abstract:
This thesis presents a Bayesian framework for generating a panoramic image of a scene from a set of images, where there is only a small amount of overlap between adjacent images. Dense correspondence is computed using loopy belief propagation on a pair-wise Markov random field, and used to resample and blend the input images to remove artifacts in overlapping regions and seams along the overlap boundaries. Bayesian approaches have been used extensively in vision and imaging, and involve computing an observational likelihood from the input images and imposing a priori constraints. Photoconsistency or matching cost computed from the images is used as the likelihood in this thesis. The primary contribution of this thesis is the use of an efficient belief propagation algorithm to yield the piecewise smooth resampling of the input images with the highest probability of not producing artifacts or seams.
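For orientation, here is a compact min-sum loopy belief propagation routine on a small pairwise Markov random field, the style of inference the abstract describes; the data and smoothness costs are random toy values, not the photoconsistency terms used in the thesis.

```python
# Synchronous min-sum loopy belief propagation on a toy grid MRF with discrete labels.
import numpy as np

H, W, L = 6, 8, 5                           # grid size and number of labels (e.g. candidate offsets)
rng = np.random.default_rng(0)
data_cost = rng.random((H, W, L))           # unary term: matching / photoconsistency cost (toy values)
labels = np.arange(L)
smooth = 0.5 * np.abs(labels[:, None] - labels[None, :])   # pairwise smoothness cost

DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # neighbour offsets: up, down, left, right
OPP = [1, 0, 3, 2]                          # opposite direction index
msgs = np.zeros((4, H, W, L))               # msgs[d, i, j] = message arriving at (i, j) from the neighbour at offset DIRS[d]

def shift(a, di, dj):
    """Move each pixel's value to its neighbour at offset (di, dj); zeros flow in at the border."""
    out = np.zeros_like(a)
    out[max(di, 0):H + min(di, 0), max(dj, 0):W + min(dj, 0)] = \
        a[max(-di, 0):H + min(-di, 0), max(-dj, 0):W + min(-dj, 0)]
    return out

for _ in range(30):
    belief = data_cost + msgs.sum(axis=0)
    new = np.zeros_like(msgs)
    for d, (di, dj) in enumerate(DIRS):
        h = belief - msgs[d]                          # exclude the message that came from the recipient
        m = (h[..., None, :] + smooth).min(axis=-1)   # minimise over the sender's label
        m -= m.min(axis=-1, keepdims=True)            # normalise for numerical stability
        new[OPP[d]] = shift(m, di, dj)                # deliver to the neighbour at offset (di, dj)
    msgs = new

labelling = (data_cost + msgs.sum(axis=0)).argmin(axis=-1)   # approximate MAP labelling
print(labelling)
```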
APA, Harvard, Vancouver, ISO, and other styles
9

Atrash, Amin. "A Bayesian Framework for Online Parameter Learning in POMDPs." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104587.

Full text
Abstract:
Decision-making under uncertainty has become critical as autonomous and semi-autonomous agents become more ubiquitous in our society. These agents must deal with uncertainty and ambiguity from the environment and still perform desired tasks robustly. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for modelling agents operating in such an environment. These models are able to capture the uncertainty from noisy sensors and inaccurate actuators, and perform decision-making in light of the agent's incomplete knowledge of the world. POMDPs have been applied successfully in domains ranging from robotics to dialogue management to medical systems. Extensive research has been conducted on methods for optimizing policies for POMDPs. However, these methods typically assume that a model of the environment is known. This thesis presents a Bayesian reinforcement learning framework for learning POMDP parameters during execution. This framework takes advantage of agents which work alongside an operator who can provide optimal policy information to help direct the learning. By using Bayesian reinforcement learning, the agent can perform learning concurrently with execution, incorporate incoming data immediately, and take advantage of prior knowledge of the world. By using such a framework, an agent is able to adapt its policy to that of the operator. This framework is validated on data collected from the interaction manager of an autonomous wheelchair. The interaction manager acts as an intelligent interface between the user and the robot, allowing the user to issue high-level commands through a natural interface such as speech. This interaction manager is controlled using a POMDP and acts as a rich scenario for learning in which the agent must adjust to the needs of the user over time.
As autonomous and semi-autonomous agents become ever more numerous in our society, decision-making under uncertainty has become a critical problem. Despite the uncertainty and ambiguity inherent in their environments, these agents must remain robust in carrying out their tasks. Partially observable Markov decision processes (POMDPs) offer a mathematical framework for modelling agents and their environments. These models can capture the uncertainty due to noisy sensors and imprecise actuators, and consequently allow decision-making that accounts for the agents' imperfect knowledge. To date, POMDPs have been used successfully in a range of domains, from robotics to dialogue management to medicine. Much research has focused on methods for optimizing POMDPs; however, these methods usually require a previously known environment model. In this thesis, a Bayesian reinforcement learning method is presented with which the POMDP model parameters can be learned during execution. This method takes advantage of cooperation with an operator who can guide the learning by providing certain optimal data. With the help of Bayesian reinforcement learning, the agent can learn during execution, immediately incorporate new data and benefit from prior knowledge, and ultimately adapt its decision policy to that of the operator. The methodology described is validated using data produced by the interaction manager of an autonomous wheelchair. This manager takes the form of an intelligent interface between the robot and the user, allowing the user to issue high-level commands naturally, for example by speaking aloud. The manager's functions are carried out with a POMDP and constitute an ideal learning scenario, in which the agent must progressively adjust to the user's needs.
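A small sketch of the core Bayesian reinforcement learning idea, assuming Dirichlet priors over the observation probabilities and operator-labelled experience; the states, observations and probabilities are invented for illustration and are not the wheelchair interaction manager's model.

```python
# Online conjugate (Dirichlet) updating of a POMDP observation model from labelled experience.
import numpy as np

states = ["wants_forward", "wants_stop"]
observations = ["speech_forward", "speech_stop", "noise"]

# Dirichlet pseudo-counts alpha[s, o] encode the prior over P(o | s).
alpha = np.ones((len(states), len(observations)))         # uninformative prior

def observation_model(alpha):
    """Posterior mean estimate of P(o | s) given the current pseudo-counts."""
    return alpha / alpha.sum(axis=1, keepdims=True)

def update(alpha, state_idx, obs_idx):
    """Conjugate update: one observed (state, observation) pair adds one count."""
    alpha = alpha.copy()
    alpha[state_idx, obs_idx] += 1
    return alpha

# Simulated interaction: the operator reveals the true intent, so each step
# provides a (state, observation) pair that refines the model during execution.
rng = np.random.default_rng(0)
true_p = np.array([[0.7, 0.1, 0.2], [0.1, 0.7, 0.2]])
for _ in range(200):
    s = rng.integers(len(states))
    o = rng.choice(len(observations), p=true_p[s])
    alpha = update(alpha, s, o)

print(np.round(observation_model(alpha), 2))   # drifts toward true_p as data accrue
```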
APA, Harvard, Vancouver, ISO, and other styles
10

Sullivan, Josephine Jean. "A Bayesian framework for object localisation in visual images." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Parno, Matthew David. "A multiscale framework for Bayesian inference in elliptic problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65322.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011.
Page 118 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 112-117).
The Bayesian approach to inference problems provides a systematic way of updating prior knowledge with data. A likelihood function involving a forward model of the problem is used to incorporate data into a posterior distribution. The standard method of sampling this distribution is Markov chain Monte Carlo, which can become inefficient in high dimensions, wasting many evaluations of the likelihood function. In many applications the likelihood function involves the solution of a partial differential equation, so the large number of evaluations required by Markov chain Monte Carlo can quickly become computationally intractable. This work aims to reduce the computational cost of sampling the posterior by introducing a multiscale framework for inference problems involving elliptic forward problems. Through the construction of a low-dimensional prior on a coarse scale and the use of an iterative conditioning technique, the scales are decoupled and efficient inference can proceed. This work considers nonlinear mappings from a fine scale to a coarse scale based on the Multiscale Finite Element Method. Permeability characterization is the primary focus, but a discussion of other applications is also provided. After some theoretical justification, several test problems are shown that demonstrate the efficiency of the multiscale framework.
by Matthew David Parno.
S.M.
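To make the "sample a posterior built on a forward model" setting concrete, here is a bare-bones random-walk Metropolis sampler for a toy scalar inverse problem; the forward model is a trivial stand-in rather than an elliptic PDE solve, and none of the multiscale machinery of the thesis appears here.

```python
# Random-walk Metropolis sampling of a posterior defined through a toy forward model.
import numpy as np

rng = np.random.default_rng(0)

def forward(k):                       # toy forward model: "pressure drop" for permeability k
    return 2.0 / k

k_true = 1.5
data = forward(k_true) + rng.normal(0, 0.05, size=10)     # noisy observations
sigma, prior_mu, prior_sd = 0.05, 1.0, 1.0

def log_posterior(log_k):
    k = np.exp(log_k)
    log_lik = -0.5 * np.sum((data - forward(k)) ** 2) / sigma ** 2
    log_prior = -0.5 * (log_k - np.log(prior_mu)) ** 2 / prior_sd ** 2    # lognormal prior on k
    return log_lik + log_prior

chain, current, lp = [], 0.0, log_posterior(0.0)
for _ in range(20000):
    proposal = current + rng.normal(0, 0.1)               # random-walk proposal
    lp_prop = log_posterior(proposal)
    if np.log(rng.random()) < lp_prop - lp:               # Metropolis accept/reject
        current, lp = proposal, lp_prop
    chain.append(current)

samples = np.exp(chain[5000:])                            # discard burn-in
print("posterior mean of k:", samples.mean(), "sd:", samples.std())
```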
APA, Harvard, Vancouver, ISO, and other styles
12

GUERRERO PEÑA, Fidel Alejandro. "A Bayesian framework for object recognition under severe occlusion." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/25221.

Full text
Abstract:
PEÑA, Fidel Alejandro Guerrero is also known in bibliographic citations as: GUERRERO-PEÑA, Fidel Alejandro
CNPq
Shape classification has multiple applications. In real scenes, shapes may contain severe occlusions, making the identification of objects harder. In this work, a Bayesian framework for object recognition under severe and varied conditions of occlusion is proposed. The proposed framework is capable of performing three main steps in object recognition: representation of parts, retrieval of the most probable objects and hypothesis validation for final object identification. Occlusion is dealt with by separating shapes into parts through high curvature points; the tangent angle signature is then found for each part, and the continuous wavelet transform is calculated for each signature in order to reduce noise. Next, the best matching object is retrieved for each part using Pearson's correlation coefficient as the query prior, indicating the similarity between the part representation and that of the most probable object in the database. For each probable class, an ensemble of Hidden Markov Models (HMMs) is created through training with the one-class approach. The retrieval search space is sorted using the class posterior probability given by the ensemble. For the occlusion likelihood, an area term that measures the visual consistency between the retrieved object and the occlusion is proposed. For hypothesis validation, an area constraint is set to enhance recognition performance by eliminating duplicated hypotheses. Experiments were carried out employing several real-world images and synthetically generated occluded-object datasets using shapes from the CMU_KO and MPEG-7 databases. The MPEG-7 dataset contains 1500 test shape instances with different scenarios of object occlusion: varied levels of object occlusion, different numbers of object classes in the problem, and different numbers of objects in the occlusion. For experimentation on real images, the CMU_KO challenge set contains 8 single-view object classes with 100 occluded objects per class for testing and 1 non-occluded object per class for training. Results showed that the method not only was capable of identifying highly occluded shapes (60%-80% overlapping) but also presented several advantages over previous methods. The minimum F-measures obtained in the MPEG-7 experiments were 0.67, 0.93 and 0.92, respectively, and the minimum AUROC was 0.87 for recognition in the CMU_KO dataset, a very promising result given the complexity of the problem. Different amounts of noise and varied amounts of the retrieval search space visited were also tested to measure the framework's robustness. The results provided insight into the capabilities and limitations of the method, demonstrating that the use of HMMs to sort the retrieval search space improved efficiency over the typical unsorted version. Also, wavelet filtering consistently outperformed the unfiltered and sampling noise-reduction versions under high amounts of noise.
Shape classification has multiple applications. In real scenes, shapes may contain severe occlusions, making the identification of objects difficult. In this work, a Bayesian approach to the recognition of objects under severe occlusion and in varied conditions is proposed. The proposed scheme is capable of performing three main steps in object recognition: representation of the parts, retrieval of the most probable objects and hypothesis validation for the final identification of the objects. Occlusion is handled by separating the shapes into parts through high-curvature points; the tangent angle signature is then found for each part, and the continuous wavelet transform is computed for each signature to reduce noise. Next, the most similar object is retrieved for each part using Pearson's correlation coefficient as the query prior, indicating the similarity between the representation of the part and the most probable object in the database. For each probable class, a multiple-classifier system with Hidden Markov Models (HMMs) is created through training with the one-class approach. An ordering of the search space is created using the class posterior probability given by the classifiers. As the occlusion likelihood, an area term measuring the visual consistency between the retrieved object and the occlusion is proposed. For hypothesis validation, an area constraint is defined to improve recognition performance by eliminating duplicated hypotheses. The experiments were carried out using several real-world images and synthetically generated occluded-object datasets using shapes from the CMU_KO and MPEG-7 databases. The MPEG-7 dataset contains 1500 test shape instances with different occlusion scenarios, for example with various levels of object occlusion, different numbers of object classes in the problem and different numbers of objects in the occlusion. For experimentation with real images, the challenging CMU_KO set contains 8 object classes in the same perspective, with 100 occluded objects per class for testing and 1 non-occluded object per class for training. The results showed that the method not only was able to identify highly occluded shapes (60%-80% overlap) but also presented several advantages over previous methods. The minimum F-measure obtained in the MPEG-7 experiments was 0.67, 0.93 and 0.92, respectively, and the minimum AUROC was 0.87 for recognition in the CMU_KO dataset, a very promising result given the complexity of the problem. Different amounts of noise and varied amounts of the visited search space were also tested to measure the robustness of the method. The results provided insight into the capabilities and limitations of the method, demonstrating that the use of HMMs to order the search space improved efficiency over the typical unordered version. In addition, wavelet filtering consistently outperformed the unfiltered and sampling-based noise reduction versions under large amounts of noise.
APA, Harvard, Vancouver, ISO, and other styles
13

Nightingale, Glenna Faith. "Bayesian point process modelling of ecological communities." Thesis, University of St Andrews, 2013. http://hdl.handle.net/10023/3710.

Full text
Abstract:
The modelling of biological communities is important to further the understanding of species coexistence and the mechanisms involved in maintaining biodiversity. This involves considering not only interactions between individual biological organisms, but also the incorporation of covariate information, if available, in the modelling process. This thesis explores the use of point processes to model interactions in bivariate point patterns within a Bayesian framework, and, where applicable, in conjunction with covariate data. Specifically, we distinguish between symmetric and asymmetric species interactions and model these using appropriate point processes. In this thesis we consider both pairwise and area interaction point processes to allow for inhibitory interactions and both inhibitory and attractive interactions. It is envisaged that the analyses and innovations presented in this thesis will contribute to the parsimonious modelling of biological communities.
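As a pointer to the model class mentioned above, this sketch evaluates the unnormalised log density of a Strauss pairwise-interaction process for a point pattern; the parameters and the pattern itself are arbitrary, and this is not one of the fitted models from the thesis.

```python
# Unnormalised Strauss pairwise-interaction density:
# f(x) proportional to beta^n(x) * gamma^s_R(x), where s_R counts pairs closer than radius R.
import numpy as np
from scipy.spatial.distance import pdist

def strauss_log_density(points, beta, gamma, R):
    n = len(points)
    close_pairs = np.sum(pdist(points) < R) if n > 1 else 0
    return n * np.log(beta) + close_pairs * np.log(gamma)

rng = np.random.default_rng(1)
pattern = rng.uniform(0, 1, size=(40, 2))                  # toy point pattern on the unit square
print(strauss_log_density(pattern, beta=50.0, gamma=0.3, R=0.05))
# gamma < 1 penalises close pairs (inhibition); gamma = 1 recovers a Poisson process.
```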
APA, Harvard, Vancouver, ISO, and other styles
14

Mohamed, Ibrahim Daoud Ahmed. "Automatic history matching in Bayesian framework for field-scale applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3170.

Full text
Abstract:
Conditioning geologic models to production data and assessment of uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. These are: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with increasing model size; second, the sensitivity calculation using finite difference as the forward model depends upon the number of model parameters or the number of data points; and third, the high CPU time and memory required for covariance matrix calculation. Different attempts have been made to alleviate the third limitation by using an analytically derived stencil, but these are limited to exponential models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as a forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, thus effectively reducing the number of data points to one per well and ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The adjoint method depends on the number of wells integrated, which is generally an order of magnitude less than the number of data points or model parameters. The streamline method is more efficient and faster as it requires only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from the Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
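The fourth element above, an LSQR solve of a regularised least-squares system, can be sketched as follows with a small random stand-in for the sensitivity matrix; the damping term loosely plays the role of the prior (covariance) regularisation and is not the numerically derived stencil of the thesis.

```python
# Damped least-squares solve with the iterative LSQR solver.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_data, n_model = 30, 200
G = rng.standard_normal((n_data, n_model))            # sensitivity matrix (toy stand-in)
m_true = rng.standard_normal(n_model)
d = G @ m_true + 0.01 * rng.standard_normal(n_data)   # observed misfit data

# 'damp' regularises the under-determined problem: it solves
# min ||G m - d||^2 + damp^2 ||m||^2 without ever forming G^T G explicitly.
m_est = lsqr(G, d, damp=0.5)[0]
print("data misfit:", np.linalg.norm(G @ m_est - d))
print("model norm:", np.linalg.norm(m_est))
```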
APA, Harvard, Vancouver, ISO, and other styles
15

Davradakis, Emmanuel. "Monetary policy analysis at a non-linear and Bayesian framework." Thesis, University of Warwick, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kumuthini, Judit. "Extraction of genetic network from microarray data using Bayesian framework." Thesis, Cranfield University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Korrapati, Raghu B. "A Bayesian Framework to Determine Patient Compliance in Glaucoma Cases." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/643.

Full text
Abstract:
This dissertation develops a Bayesian framework to assess medication compliance in glaucoma patients. Bayesian networks have increasingly become tools of choice in solving problems involving uncertainty in the medical domain. These models have been successfully applied to diagnosis applications. This research applied Bayesian modeling to medication noncompliance in glaucoma patients. Medication noncompliance is the failure to comply with a physician's instructions with regard to taking medications at specified times. If the patient is non-compliant, then irrespective of advances in the medical field, the person does not benefit from medical intervention. A model-based decision support system using a Bayesian network was developed to determine whether a patient was complying with the medications prescribed by the physician. The predictive ability of the model was investigated using existing patient data. To assess research validity, the results obtained through the model were compared against a domain expert's evaluation of the patient cases. The results provided by the Bayesian framework agree with the information provided by the domain expert. The Bayesian model can be used to confirm an ophthalmologist's clinical intuition or to formulate a prescription strategy for a glaucoma patient. The model can be further refined using larger patient data sets and additional variables. A clinical decision support system can be developed using the refined model to prevent medical errors in the glaucoma compliance process. Results from this study could potentially improve the decision-making process, given the uncertain and incomplete data available to a physician. The Bayesian approach may be generalized to other applications where a decision has to be made based on incomplete and uncertain data sets.
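The flavour of the underlying computation can be shown with a miniature two-indicator network; the variables and probabilities below are made up for illustration and are not the dissertation's model.

```python
# Tiny discrete Bayesian-network-style inference: P(compliant | two noisy indicators).
p_compliant = 0.7                                   # prior probability of compliance (invented)
# P(indicator is positive | compliant): pressure controlled, refills collected on time
p_iop = {True: 0.85, False: 0.35}
p_refill = {True: 0.90, False: 0.30}

def posterior_compliant(iop_controlled, refills_on_time):
    def lik(compliant):
        p1 = p_iop[compliant] if iop_controlled else 1 - p_iop[compliant]
        p2 = p_refill[compliant] if refills_on_time else 1 - p_refill[compliant]
        return p1 * p2                               # indicators assumed conditionally independent
    num = p_compliant * lik(True)
    den = num + (1 - p_compliant) * lik(False)
    return num / den                                 # Bayes' rule

print(posterior_compliant(iop_controlled=False, refills_on_time=False))  # low probability of compliance
print(posterior_compliant(iop_controlled=True, refills_on_time=True))    # high probability of compliance
```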
APA, Harvard, Vancouver, ISO, and other styles
18

Prezioso, Jamie. "An Inverse Problem of Cerebral Hemodynamics in the Bayesian Framework." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1492108014359289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Parpart, Paula. "Why less can be more : a Bayesian framework for heuristics." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/10024597/.

Full text
Abstract:
When making decisions under uncertainty, one common view is that people rely on simple heuristics that deliberately ignore information. One of the greatest puzzles in cognitive science concerns why heuristics can sometimes outperform full-information models, such as linear regression, which make full use of the available information. In this thesis, I will contribute the novel idea that heuristics can be thought of as embodying extreme Bayesian priors. Thereby, an explanation for less-is-more is that the heuristics' relative simplicity and inflexibility amounts to a strong inductive bias that is suitable for some learning and decision problems. I will formalize this idea by introducing Bayesian models within which heuristics are an extreme case along a continuum of model flexibility defined by the strength and nature of the prior. Crucially, the Bayesian models include heuristics at one end of the continuum of prior strength and classic full-information models at the other end. This allows for a comparative test between the intermediate models along the continuum and the extremes of the heuristics and the full regression model. Indeed, I will show that intermediate models perform best across simulations, suggesting that down-weighting information is preferable to entirely ignoring it. These results refute an absolute version of less-is-more, demonstrating that heuristics will usually be outperformed by a model that takes into account the full information but weighs it appropriately. Thereby, the thesis provides a novel explanation for less-is-more: heuristics work well because they embody a Bayesian prior that approximates the optimal prior. While the main contribution is formal, the final chapter explores whether less is more at the psychological level, and finds that people do not use heuristics, but rely on the full information instead. A consistent perspective emerges throughout the whole thesis, which is that less is not more.
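A toy version of the central idea, assuming a Gaussian prior centred on equal "tallying" weights: as the prior strength grows, the MAP weights move from ordinary least squares toward the heuristic, with intermediate strengths in between. The data and settings are invented.

```python
# Heuristics as extreme priors: a ridge-style continuum between OLS and a tallying heuristic.
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 5
X = rng.standard_normal((n, p))
w_true = np.array([0.9, 0.6, 0.4, 0.2, 0.1])
y = X @ w_true + rng.standard_normal(n) * 0.5

def posterior_mean_weights(X, y, prior_mean, prior_strength):
    """MAP weights under a Gaussian prior centred on prior_mean with precision prior_strength."""
    A = X.T @ X + prior_strength * np.eye(X.shape[1])
    b = X.T @ y + prior_strength * prior_mean
    return np.linalg.solve(A, b)

equal_weights = np.ones(p)                      # the "tallying" heuristic's weights
for lam in [0.0, 1.0, 10.0, 1e6]:
    w = posterior_mean_weights(X, y, equal_weights, lam)
    print(f"prior strength {lam:>9}: {np.round(w, 2)}")
# lam = 0 recovers ordinary least squares; lam -> infinity pins the weights to the heuristic;
# intermediate strengths shrink toward it, which is where performance peaks in the thesis simulations.
```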
APA, Harvard, Vancouver, ISO, and other styles
20

Ezeani, Callistus. "A Framework for MultiFactorAuthentication on Mobile Devices.- A Bayesian Approach." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85984.

Full text
Abstract:
Most authentication mechanisms used in certain domains, such as home banking, infrastructure surveillance and industrial control, are commercial off-the-shelf (COTS) solutions. These are packaged solutions which are adapted to satisfy the needs of the purchasing organization; Microsoft, for example, is a COTS software provider. Multifactor authentication (MFA) is COTS. MFA in the context of this research provides a framework to improve the available techniques. This framework is based on biometrics and as such presents an alternative to complement the traditional knowledge-based authentication techniques. With an overview based on the probability of failure to enroll (FTE), this research work evaluates available approaches and identifies promising avenues for utilizing MFA in modern mobile devices. Biometrics removes heuristic errors and probability adjustment errors, providing the full potential to increase MFA in mobile devices. The primary objective is to identify discrepancies and limitations commonly faced by mobile owners during authentication.
APA, Harvard, Vancouver, ISO, and other styles
21

Biresaw, Tewodros Atanaw. "Self-correcting Bayesian target tracking." Thesis, Queen Mary, University of London, 2015. http://qmro.qmul.ac.uk/xmlui/handle/123456789/7925.

Full text
Abstract:
Visual tracking, a building block for many applications, has challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade the tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the status of tracking quality and a correction block to avoid upcoming failures or to recover from failures. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). The self-correcting tracking is done similarly to a self-aware system where parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN model representation, the parameter tuning, fusion and model selection are done based on evaluation and correction variables that correspond to the evaluation and correction, respectively. The inferences of variables in the DBN model are used to explain the operation of self-correcting tracking. The specific contributions under the generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. For improving the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework in order to achieve self-correcting tracking. The framework combines the tracker with an on-line performance measure and a correction technique. We correlate model point trajectories to improve on-line the accuracy of a failed or an uncertain tracker. A model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected using the on-line performance measure. The correction of the model point state is based on the correlation information from the states of other trackers. Partial Least Squares regression is used to model the correlation of point tracker states from short windowed trajectories adaptively. Experimental results on data obtained from optical motion capture systems show the improvement in tracking performance of the proposed framework compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures that are caused by clutter and missed detections in the motion capture data. Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers addressing different tracking challenges to improve the overall performance. As a novelty of the proposed framework, we include an online performance measure to identify the track quality level of each tracker to guide the fusion. The trackers in the framework assist each other based on appropriate mixing of the prior states. Moreover, the track quality level is used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the trackers used in the framework, and also compared to other state-of-the-art trackers. The appearance model update based on the online performance measure, together with the prior mixing between trackers, allows the proposed framework to deal with tracking challenges.
APA, Harvard, Vancouver, ISO, and other styles
22

Cevher, Volkan. "A Bayesian Framework for Target Tracking using Acoustic and Image Measurements." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6824.

Full text
Abstract:
Target tracking is a broad subject area extensively studied in many engineering disciplines. In this thesis, target tracking implies the temporal estimation of target features such as the target's direction-of-arrival (DOA), the target's boundary pixels in a sequence of images, and/or the target's position in space. For multiple target tracking, we have introduced a new motion model that incorporates an acceleration component along the heading direction of the target. We have also shown that the target motion parameters can be considered part of a more general feature set for target tracking, e.g., target frequencies, which may be unrelated to the target motion, can be used to improve the tracking performance. We have introduced an acoustic multiple-target tracker using a flexible observation model based on an image tracking approach by assuming that the DOA observations might be spurious and that some of the DOAs might be missing in the observation set. We have also addressed the acoustic calibration problem from sources of opportunity such as beacons or a moving source. We have derived and compared several calibration methods for the case where the node can hear a moving source whose position can be reported back to the node. The particle filter, as a recursive algorithm, requires an initialization phase prior to tracking a state vector. The Metropolis-Hastings (MH) algorithm has been used for sampling from intractable multivariate target distributions and is well suited for the initialization problem. Since the particle filter only needs samples around the mode, we have modified the MH algorithm to generate samples distributed around the modes of the target posterior. By simulations, we show that this mode hungry algorithm converges an order of magnitude faster than the original MH scheme. Finally, we have developed a general framework for the joint state-space tracking problem. A proposal strategy for joint state-space tracking using the particle filters is defined by carefully placing the random support of the joint filter in the region where the final posterior is likely to lie. Computer simulations demonstrate improved performance and robustness of the joint state-space when using the new particle proposal strategy.
APA, Harvard, Vancouver, ISO, and other styles
23

Kabir, Golam. "Planning repair and replacement program for water mains : a Bayesian framework." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57568.

Full text
Abstract:
Aging water infrastructure is a major concern for water utilities throughout the world. It is challenging to develop an extensive water mains renewal program and to predict the performance of the water mains. Uncertainties become an integral part of the repair and replacement (R&R) action program due to incomplete and partial information, the integration of data and information from different sources, the involvement of expert judgment in data interpretation, and so on. Moreover, the uncertainties differ because the amount and quality of data available for developing or implementing an R&R action program varies among utilities. In this research, a Bayesian framework is developed for the R&R action program of water mains considering these uncertainties. At the beginning of the research, a state-of-the-art critical review of existing regression-based, survival-analysis and heuristic-based failure models and life cycle cost (LCC) studies in the field of water mains is performed. To identify the influential covariates and to predict the failure rates of water mains considering model uncertainties with limited failure information, Bayesian model averaging and Bayesian regression based models are developed. In these models, the decision maker's degree of optimism and credibility are integrated using an ordered weighted averaging operator. A robust Bayesian updating based framework is proposed to update the performance of the water main failure model for medium to large-sized utilities with adequate failure information. An LCC framework is prepared for water mains of small to medium-sized utilities. Finally, a Bayesian belief network (BBN) based water main failure risk framework is developed for small to medium-sized utilities with no or limited failure information. The integration of the proposed robust Bayesian models with the geographic information system (GIS) of the water utilities will provide information both at the operational level and the network level. The proposed tool will help utility engineers and managers to plan suitable new installation and rehabilitation programs, as well as their corresponding costs, for effective and proactive decision-making, thereby avoiding unexpected and unpleasant surprises.
Applied Science, Faculty of
Engineering, School of (Okanagan)
Graduate
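One simple instance of the kind of Bayesian updating such a framework performs is conjugate Poisson-gamma updating of a pipe group's failure rate as new break records arrive; the prior parameters and break counts below are illustrative only.

```python
# Conjugate Poisson-gamma updating of a water-main failure rate (breaks per km per year).
import numpy as np
from scipy import stats

# Prior on the failure rate, e.g. elicited from comparable utilities: Gamma(shape, rate).
a0, b0 = 2.0, 20.0            # prior mean = a0 / b0 = 0.1 breaks/km/yr

# New utility data: observed breaks and exposure (pipe-km x years of observation).
breaks, exposure = 7, 45.0

a_post, b_post = a0 + breaks, b0 + exposure          # conjugate posterior
print("posterior mean rate:", a_post / b_post)
print("95% credible interval:",
      np.round(stats.gamma.ppf([0.025, 0.975], a_post, scale=1 / b_post), 3))
```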
APA, Harvard, Vancouver, ISO, and other styles
24

Yu, Li Ph D. Massachusetts Institute of Technology. "Efficient IC statistical modeling and extraction using a Bayesian inference framework." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99786.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 147-156).
Variability modeling and extraction in advanced process technologies is a key challenge to ensure robust circuit performance as well as high manufacturing yield. In this thesis, we present an efficient framework for device and circuit variability modeling and extraction by combining an ultra-compact transistor model, called the MIT virtual source (MVS) model, and a Bayesian extraction method. Based on statistical formulations extended from the MVS model, we propose algorithms for three applications that greatly reduce time and cost required for measurement of on-chip test structures and characterization of library cells. We start with a novel DC and transient parameter extraction methodology for the MVS model and achieve a quantitative match with industry standard models for output characteristics of MOS transistor devices. We develop a physically based statistical MVS model extension and a corresponding statistical extraction technique based on the backward propagation of variance (BPV). The resulting statistical MVS model is validated using Monte Carlo simulations, and the statistical distributions of several figures of merit for logic and memory cells are compared with those of a 40-nm CMOS industrial design kit. A critical problem in design for manufacturability (DFM) is to build statistically valid prediction models of circuit performance based on a small number of measurements taken from a mixture of on-chip test structures. Towards this goal, we propose a technique named physical subspace projection to transfer a mixture of measurements into a unique probability space spanned by MVS parameters. We search over MVS parameter combinations to find those with the maximum probability by extending the expectation-maximization (EM) algorithm and iteratively solve the maximum a posteriori (MAP) estimation problem. Finally, we develop a process shift calibration technique to estimate circuit performance by combining SPICE simulation and very few new measurements. We further develop a parameter extraction algorithm to accurately extract all current-voltage (I-V) parameters given limited and incomplete I-V measurements, applicable to early technology evaluation and statistical parameter extraction. An important step in this method is the use of MAP estimation where past measurements of transistors from various technologies are used to learn a prior distribution and its uncertainty matrix for the parameters of the target technology. We then utilize Bayesian inference to facilitate extraction and posterior estimates for the target technologies using a very small set of additional measurements. Finally, we develop a novel flow to enable computationally efficient statistical characterization of delay and slew in standard cell libraries. We first propose a novel ultra-compact, analytical model for gate timing characterization. Next, instead of exploiting the sparsity of the regression coefficients of the process space with a reduced process sample size, we exploit correlations between different cell variables (design and input conditions) by a Bayesian learning algorithm to estimate the parameters of the aforementioned timing model using past library characterizations along with a very small set of additional simulations.
by Li Yu.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Casamitjana, Diaz Adria. "New insights on speech signal modeling in a Bayesian framework approach." Thesis, KTH, Kommunikationsteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166844.

Full text
Abstract:
Speech signal processing is an old research topic within the communication theory community. The continuously increasing telephony market brought special attention to the discipline during the 80s and 90s, especially in speech coding and speech enhancement, where the most significant contributions were made. More recently, due to the appearance of novel signal processing techniques, the standard methods are being questioned. Sparse representation of signals and compressed sensing made significant contributions to the discipline, through a better representation of signals and more efficient processing techniques. In this thesis, standard speech modeling techniques are revisited. Firstly, a representation of the speech signal through the line spectral frequencies (LSF) is presented, with an extended stability analysis. Moreover, a new Bayesian framework for time-varying linear prediction (TVLP) is presented, with an analysis of different methods. Finally, a theoretical basis for speech denoising is presented and analyzed. At the end of the thesis, the reader will have a broader view of the speech signal processing discipline, with new insights that can improve the standard methodology.
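As a rough illustration of the Bayesian view of time-varying linear prediction mentioned above, the sketch below tracks slowly drifting AR coefficients with a Kalman filter under a Gaussian random-walk prior. The signal, model order and noise levels are invented, and this is not one of the specific TVLP methods analysed in the thesis.

```python
# Minimal sketch of time-varying linear prediction (TVLP) in Bayesian
# state-space form: AR coefficients follow a Gaussian random walk and are
# tracked with a Kalman filter. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
T, p = 400, 2
# Simulate a signal whose AR(2) coefficients drift slowly over time
a_true = np.column_stack([1.5 + 0.2*np.sin(np.linspace(0, np.pi, T)),
                          -0.8*np.ones(T)])
y = np.zeros(T)
for t in range(p, T):
    y[t] = a_true[t] @ y[t-p:t][::-1] + rng.normal(0, 0.1)

# Kalman filter over the coefficient vector a_t
q, r = 1e-4, 0.1**2                # state (random-walk) and observation variances
a, P = np.zeros(p), np.eye(p)
est = np.zeros((T, p))
for t in range(p, T):
    x = y[t-p:t][::-1]             # regressor: previous p samples
    P = P + q*np.eye(p)            # predict step (random-walk prior)
    S = x @ P @ x + r
    K = P @ x / S                  # Kalman gain
    a = a + K*(y[t] - x @ a)       # update coefficients
    P = P - np.outer(K, x) @ P
    est[t] = a
print("final coefficient estimate:", est[-1], "true:", a_true[-1])
```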
APA, Harvard, Vancouver, ISO, and other styles
26

Papakis, Ioannis. "A Bayesian Framework for Multi-Stage Robot, Map and Target Localization." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93024.

Full text
Abstract:
This thesis presents a generalized Bayesian framework for a mobile robot to localize itself and a target while building a map of the environment. The proposed technique builds upon the Bayesian Simultaneous Robot Localization and Mapping (SLAM) method, allowing the robot to localize itself and the environment using map features or landmarks in close proximity. The target feature is distinguished from the rest of the features since the robot has to navigate to its location and thus needs to be observed from a long distance. The contribution of the proposed approach lies in enabling the robot to track a target object or region using a multi-stage technique. In the first stage, the target state is corrected sequentially, after the robot correction, within the Recursive Bayesian Estimation. In the second stage, with the target being closer, the target state is corrected simultaneously with the robot and the landmarks. The process allows the robot's state uncertainty to be propagated into the estimated target's state, bridging the gap between tracking-only methods, where the target is estimated assuming a known observer state, and SLAM methods, where only landmarks are considered. When the robot is located far away, the sequential stage is efficient in tracking the target position while maintaining an accurate robot state using only nearby features. Also, target belief is always maintained, in contrast to temporary tracking methods such as image tracking. When the robot is closer to the target and most of its field of view is covered by the target, it is shown that simultaneous correction needs to be used in order to minimize robot, target and map entropies in the absence of other landmarks.
M.S.
This thesis presents a generalized framework with the goal of allowing a robot to localize itself and a static target while building a map of the environment. This map is used, as in the Simultaneous Localization and Mapping (SLAM) framework, to enhance robot accuracy using nearby features. The target, here, is distinguished from the rest of the features since the robot has to navigate to its location and thus needs to be continuously observed from a long distance. The contribution of the proposed approach lies in enabling the robot to track a target object or region using a multi-stage technique. In the first stage, the robot and close landmarks are estimated simultaneously and they are both corrected. Using the robot's uncertainty in its estimate, the target state is then estimated sequentially, considering a known robot state. That decouples the target estimation from the rest of the process. In the second stage, with the target being closer, target, robot and landmarks are estimated simultaneously. When the robot is located far away, the sequential stage is efficient in tracking the target position while maintaining an accurate robot state using only nearby features. When the robot is closer to the target and most of its field of view is covered by the target, it is shown that simultaneous correction needs to be used in order to minimize robot, target and map uncertainties in the absence of other landmarks.
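The toy 1-D sketch below illustrates the two-stage idea in the abstracts above: far from the target, the target estimate is corrected sequentially with the robot's uncertainty folded into the measurement noise; close to the target, robot and target are corrected jointly so cross-correlations are kept. The world, noise values and measurement models are assumptions, not the thesis's implementation.

```python
# Toy 1-D illustration of sequential vs. simultaneous (joint) correction.
import numpy as np

def kalman_update(mean, cov, H, z, R):
    """Standard linear Kalman/Bayesian measurement update."""
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    return mean + K @ (z - H @ mean), (np.eye(len(mean)) - K @ H) @ cov

# Joint state [robot, target] with prior uncertainty (target poorly known)
mean = np.array([0.0, 10.0])
cov = np.diag([1.0, 25.0])

# Robot position observation (e.g. derived from nearby landmarks)
mean, cov = kalman_update(mean, cov, np.array([[1.0, 0.0]]),
                          np.array([0.2]), np.array([[0.1]]))

# Long-range observation of the target relative to the robot: z = target - robot
z_rel, R_rel = np.array([9.5]), np.array([[4.0]])

# Stage 1 (sequential): correct the target only, inflating the measurement
# noise with the robot's current variance instead of keeping cross-terms.
t_mean, t_var = mean[1], cov[1, 1]
S = t_var + (R_rel[0, 0] + cov[0, 0])
K = t_var / S
t_mean_seq = t_mean + K * (z_rel[0] - (t_mean - mean[0]))
t_var_seq = (1 - K) * t_var
print("sequential target estimate:", t_mean_seq, "variance:", t_var_seq)

# Stage 2 (simultaneous): the same observation applied to the joint state,
# so robot-target correlations are created and maintained.
mean_j, cov_j = kalman_update(mean, cov, np.array([[-1.0, 1.0]]), z_rel, R_rel)
print("joint estimate [robot, target]:", mean_j)
print("joint covariance:\n", cov_j)
```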
APA, Harvard, Vancouver, ISO, and other styles
27

Fu, Guiyu. "Relational framework of distributed Bayesian networks using an extended relational data model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ35835.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Jakimovska, Ana. "Empirical framework for building and evaluating Bayesian network models for defect prediction." Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Fenwick, Elisabeth. "An iterative framework for health technology assessment employing Bayesian statistical decision theory." Thesis, University of York, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lawson, Antony Steven. "Bayesian framework for multi-stage transmission expansion planning under uncertainty via emulation." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12587/.

Full text
Abstract:
Effective transmission expansion planning is necessary to ensure a power system can satisfy all demand both reliably and economically. However, at the time reinforcement decisions are made many elements of the future power system background are uncertain, such as demand level, type and location of installed generators, and plant availability statistics. In the current power system planning literature, making decisions which account for such uncertainties is usually done by considering a small set of plausible scenarios, and the resulting limited coverage of parameter space limits confidence that the resulting decision will be a good one with respect to the real world. This thesis will consider a Bayesian approach to transmission expansion planning under uncertainty, which uses statistical emulators to approximate how input affects output of expensive simulators using a small number of training runs (evaluations from the simulator), as well as quantifying uncertainty in the simulator output for all points at which it has not been evaluated. In addition, expert judgement is used to formulate probability density functions to describe the uncertainties which exist in the power system, which can then be used alongside the emulator to estimate expected costs under uncertainty whilst also giving credible intervals for the resulting estimate. Further, the methodology will be expanded to consider multi-stage transmission expansion problems under uncertainty, where uncertainty can be reduced in various aspects of the power system between decisions. In the existing power system planning literature, multi-stage decisions under uncertainty are handled by considering a small number of possible projections of the future power system, which gives a very limited coverage of the space of all possible projections of the future power system. This thesis will consider how emulation can be used alongside backwards induction to calculate costs across all stages as a function of the first stage decision only, whilst also accounting for the uncertainties which exists in the future power system. As part of this, the future state of the power system is modelled using continuous variables which effectively allows for an infinite number of possible projections to be considered. Throughout this thesis, the methodology used is detailed in quite general terms, which should allow for the methodology to be applied to problems of interest other than the transmission expansion planning problem considered in this thesis with relative ease.
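The following minimal sketch illustrates the emulation idea in the abstract, under invented assumptions: a toy cost function stands in for the expensive power-system simulator, a small Gaussian-process emulator is fitted to a few training runs, and expected cost under demand uncertainty is estimated by Monte Carlo with both input and emulator uncertainty propagated.

```python
# Gaussian-process emulation sketch: fit an emulator to a few "simulator"
# runs, then estimate expected cost under demand uncertainty by Monte Carlo.
# The cost function, kernel settings and demand prior are illustrative.
import numpy as np

def simulator(demand):                      # stand-in for an expensive run
    return 50 + 0.8*(demand - 60)**2 / 40 + 5*np.sin(demand / 7.0)

LS, KVAR = 15.0, 400.0                      # kernel length-scale and variance

def rbf(a, b):
    return KVAR * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / LS**2)

# A small number of training runs of the simulator
X = np.array([30., 45., 60., 75., 90.])
y = simulator(X)
ym = y.mean()

def emulate(Xs, nugget=1e-6):
    """GP posterior mean/variance at new points (constant prior mean)."""
    K = rbf(X, X) + nugget*np.eye(len(X))
    Ks = rbf(Xs, X)
    mean = ym + Ks @ np.linalg.solve(K, y - ym)
    var = KVAR - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

# Expert judgement on future demand, expressed as a probability distribution
rng = np.random.default_rng(2)
demand_samples = rng.normal(65, 10, 5000)
mu, var = emulate(demand_samples)
# Propagate both demand uncertainty and emulator (code) uncertainty
cost_draws = mu + rng.normal(0, np.sqrt(var))
print("expected cost:", cost_draws.mean())
print("95% credible interval:", np.percentile(cost_draws, [2.5, 97.5]))
```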
APA, Harvard, Vancouver, ISO, and other styles
31

Tohme, Tony. "The Bayesian validation metric : a framework for probabilistic model calibration and validation." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126919.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 109-114).
In model development, model calibration and validation play complementary roles toward learning reliable models. In this thesis, we propose and develop the "Bayesian Validation Metric" (BVM) as a general model validation and testing tool. We show that the BVM can represent all the standard validation metrics - square error, reliability, probability of agreement, frequentist, area, probability density comparison, statistical hypothesis testing, and Bayesian model testing - as special cases while improving, generalizing and further quantifying their uncertainties. In addition, the BVM assists users and analysts in designing and selecting their models by allowing them to specify their own validation conditions and requirements. Further, we expand the BVM framework to a general calibration and validation framework by inverting the validation mathematics into a method for generalized Bayesian regression and model learning. We perform Bayesian regression based on a user's definition of model-data agreement. This allows for model selection on any type of data distribution, unlike Bayesian and standard regression techniques, that "fail" in some cases. We show that our tool is capable of representing and combining Bayesian regression, standard regression, and likelihood-based calibration techniques in a single framework while being able to generalize aspects of these methods. This tool also offers new insights into the interpretation of the predictive envelopes in Bayesian regression, standard regression, and likelihood-based methods while giving the analyst more control over these envelopes.
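A minimal Monte Carlo sketch of the agreement-probability idea is given below: the probability that model and data agree under a user-specified tolerance is estimated by sampling both the uncertain model output and the uncertain observation. The distributions and tolerance are invented, and the sketch does not reproduce the BVM's full formalism.

```python
# Probability of model-data agreement under a user-defined condition
# |model - data| < eps, integrating over model and measurement uncertainty.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Uncertain model prediction (e.g. posterior predictive of a calibrated model)
model_draws = rng.normal(loc=10.0, scale=0.8, size=n)
# Uncertain observation (measurement error around the recorded data value)
data_draws = rng.normal(loc=10.5, scale=0.5, size=n)

eps = 1.0                                   # user-defined agreement tolerance
p_agreement = np.mean(np.abs(model_draws - data_draws) < eps)
print(f"Probability of model-data agreement: {p_agreement:.3f}")

# Tightening the requirement lowers the validation probability, which is how
# the analyst's own validation condition enters the metric.
for tol in (2.0, 1.0, 0.5, 0.25):
    print(tol, np.mean(np.abs(model_draws - data_draws) < tol))
```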
by Tony Tohme.
S.M.
S.M. Massachusetts Institute of Technology, Computation for Design and Optimization Program
APA, Harvard, Vancouver, ISO, and other styles
32

Ricciardi, Denielle E. "Uncertainty Quantification and Propagation in Materials Modeling Using a Bayesian Inferential Framework." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587473424147276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Zheng, Xin. "Stock Market, Investment and Sentiment in the Framework of Bayesian DSGE Models." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20348.

Full text
Abstract:
We investigate the interactions among consumer preference, firm investment, stock market activity, investor sentiment and monetary policy in Bayesian Dynamic Stochastic General Equilibrium (DSGE) models for the U.S. economy. We design a framework in which household turnovers, firm turnovers, equity risk premiums, investment, preference and sentiment jointly influence stock price misalignments and macroeconomic fluctuations. These are not only due to households' interactions with the stock market through financial wealth, consumer preference and investor sentiment, but also induced by firms' interactions with the stock market through financial resources, firm investment and equity risk premiums. Our objectives are fivefold. We disentangle between stock price fluctuations induced by risk premiums and by animal spirits. We model risk premiums using the financial shock and animal spirits using the sentiment shock. We identify influence channels of the real economy on stock price fluctuations. We investigate propagation mechanisms of stock price fluctuations to the real economy and evaluate monetary policy responses to stock price misalignments. Our methodologies include Bayesian estimation, historical shock decomposition, impulse response analysis, forecast error variance decomposition and Bayesian model comparison. Our main findings are fourfold. Equity risk premiums, sentiment, investment and preference make substantial contributions to explaining stock price fluctuations, consumer sentiment variations, investment fluctuations, output fluctuations and inflation variations. Financial, sentiment, investment and preference shocks propagate through the stock market index pricing rule, stock market bubble evolution, intertemporal substitution of investment and intertemporal substitution of consumption, respectively. A higher household turnover rate increases the stock market wealth effect and aggregate demand, whereas a higher firm turnover rate contaminates the stock market wealth effect and the impacts of financial shocks. Monetary policy responds counteractively and significantly to financial slack at business cycle frequency. We then combine the artificial data generated by the DSGE model and the actual data originating from the unrestricted VAR model to formulate a DSGE prior for the Bayesian VAR (BVAR) model. Furthermore, we apply the DSGE-model-implied cross-equation restrictions to the BVAR model and generate the DSGE-VAR model in two forms. One form combines the DSGE-model-implied prior mean with a Normal-Inverse Wishart prior, and we define it as the DSGE-VAR model with N-IW prior. The other form combines the DSGE-model-implied prior mean with Stochastic Search Variable Selection (SSVS) in a Mean-Inverse Wishart prior, and we define it as the DSGE-VAR model with SSVS in Mean-IW prior. Finally, we estimate and assess the relative forecasting performance of BVAR models with three types of priors: the DSGE-N-IW prior, the DSGE-SSVS in Mean-IW prior and the Minnesota prior. We find that the DSGE-VAR model with SSVS in Mean-IW prior has forecasting performance identical to that of the BVAR model with the Minnesota prior. Therefore, we have demonstrated that the DSGE model does not incur a serious model misspecification problem.
APA, Harvard, Vancouver, ISO, and other styles
34

Adrakey, Hola Kwame. "Control and surveillance of partially observed stochastic epidemics in a Bayesian framework." Thesis, Heriot-Watt University, 2016. http://hdl.handle.net/10399/3290.

Full text
Abstract:
This thesis comprises a number of inter-related parts. For most of the thesis we are concerned with developing a new statistical technique that can enable the identification of the optimal control by comparing competing control strategies for stochastic epidemic models in real time. In the second part, we develop a novel approach for modelling the spread of Peste des Petits Ruminants (PPR) virus within a given country and the risk of introduction to other countries. The control of highly infectious diseases of agricultural crops, animals and humans is considered one of the key challenges in epidemiological and ecological modelling. Previous methods for analysis of epidemics, in which different controls are compared, do not make full use of the trajectory of the epidemic. Most methods rely on the information provided by the model parameters, which capture only partial information on the epidemic trajectory, so for example the same control strategy may lead to different outcomes when the experiment is repeated. Also, using only partial information typically requires more simulated realisations when comparing two different controls. We introduce a statistical technique that makes full use of the available information in estimating the effect of competing control strategies on real-time epidemic outbreaks. The key to this approach lies in identifying a suitable mechanism to couple epidemics, which could be unaffected by controls. To that end, we use the Sellke construction as a latent process to link epidemics with different control strategies. The method is initially applied to non-spatial processes including SIR and SIS models, assuming that there are no observation data available, before moving on to more complex models that explicitly represent the spatial nature of the epidemic spread. In the latter case, the analysis is conditioned on some observed data and inference on the model parameters is performed in a Bayesian framework using Markov chain Monte Carlo (MCMC) techniques coupled with data augmentation methods. The methodology is applied to various simulated data sets and to citrus canker data from Florida. Results suggest that the approach leads to highly positively correlated outcomes of different controls, thus reducing the variability between the effects of different control strategies and hence providing a more efficient estimator of their expected differences. Therefore, a reduction of the number of realisations required to compare competing strategies in terms of their expected outcomes is obtained. The main purpose of the final part of this thesis is to develop a novel approach to modelling the spread of Peste des Petits Ruminants (PPR) within a given country and to understand the risk of subsequent spread to other countries. We are interested in constructing models that can be fitted using information on the occurrence of outbreaks, since information on the susceptible population is not available, and use these models to estimate the speed of spatial spread of the virus. However, there was little prior modelling on which the models developed here could be built. We start by first establishing a spatio-temporal stochastic formulation for the spread of PPR. This modelling is then used to estimate spatial transmission and speed of spread. To account for uncertainty due to the lack of information on the susceptible population, we apply ideas from Bayesian modelling and data augmentation by treating the transmission network as a missing quantity.
Lastly, we establish a network model to address questions regarding the risk of spread in the large-scale network of countries and introduce the notion of 'first-passage time' using techniques from graph theory and operational research such as the Bellman-Ford algorithm. The methodology is first applied to PPR data from Tunisia and to simulated data. We also use simulated models to investigate the dynamics of spread through a network of countries.
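The sketch below illustrates the coupling idea under simplifying assumptions: two transmission rates (with and without control) are compared on SIR final sizes computed via the Sellke construction, reusing the same individual thresholds and infectious periods for both strategies, so the coupled comparison has much lower variance than an independent one. Parameters are illustrative, and the spatial, partially observed models of the thesis are not reproduced.

```python
# Coupling two control strategies through shared Sellke thresholds.
import numpy as np

def final_size_sellke(beta, N, m, thresholds, inf_periods):
    """Final size of an SIR epidemic via the Sellke construction."""
    q = np.sort(thresholds)                         # ordered resistance thresholds
    pressure = (beta / N) * np.cumsum(inf_periods)  # cumulative infection pressure
    k = 0
    # the (k+1)-th most susceptible individual is infected while its threshold
    # is exceeded by the pressure exerted by the m initial + k new infectives
    while k < N - m and q[k] <= pressure[m + k - 1]:
        k += 1
    return k                                        # susceptibles ultimately infected

rng = np.random.default_rng(4)
N, m, gamma = 500, 5, 1.0
beta_no_control, beta_control = 2.0, 1.2            # control reduces transmission

diffs_coupled, diffs_indep = [], []
for _ in range(200):
    thresholds = rng.exponential(1.0, N - m)
    periods = rng.exponential(1.0 / gamma, N)
    z0 = final_size_sellke(beta_no_control, N, m, thresholds, periods)
    z1 = final_size_sellke(beta_control, N, m, thresholds, periods)   # coupled
    diffs_coupled.append(z0 - z1)
    # independent comparison: fresh randomness for the controlled epidemic
    z1b = final_size_sellke(beta_control, N, m,
                            rng.exponential(1.0, N - m),
                            rng.exponential(1.0 / gamma, N))
    diffs_indep.append(z0 - z1b)

print("coupled:     mean diff %.1f, sd %.1f" % (np.mean(diffs_coupled), np.std(diffs_coupled)))
print("independent: mean diff %.1f, sd %.1f" % (np.mean(diffs_indep), np.std(diffs_indep)))
```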
APA, Harvard, Vancouver, ISO, and other styles
35

McCall, Joel Curtis. "Human attention and intent analysis using robust visual cues in a Bayesian framework." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3215458.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed July 24, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 159-168).
APA, Harvard, Vancouver, ISO, and other styles
36

Schaberreiter, T. (Thomas). "A Bayesian network based on-line risk prediction framework for interdependent critical infrastructures." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526202129.

Full text
Abstract:
Abstract Critical Infrastructures (CIs) are an integral part of our society and economy. Services like electricity supply or telecommunication services are expected to be available at all times, and a service failure may have catastrophic consequences for society or economy. Current CI protection strategies are from a time when CIs or CI sectors could be operated more or less self-sufficiently and interconnections among CIs or CI sectors, which may lead to cascading service failures to other CIs or CI sectors, were not as omnipresent as today. In this PhD thesis, a cross-sector CI model for on-line risk monitoring of CI services, called CI security model, is presented. The model makes it possible to monitor a CI service risk and to notify services that depend on it of possible risks, in order to reduce and mitigate possible cascading failures. The model estimates CI service risk by observing the CI service state as measured by base measurements (e.g. sensor or software states) within the CI service components and by observing the experienced service risk of CI services it depends on (CI service dependencies). CI service risk is estimated in a probabilistic way using a Bayesian network based approach. Furthermore, the model allows CI service risk prediction in the short-term, mid-term and long-term future, given the current CI service risk, and it allows modelling of interdependencies (a CI service risk that loops back to the originating service via dependencies), a special case that is difficult to model using Bayesian networks. The representation of a CI as a CI security model requires analysis. In this PhD thesis, a CI analysis method based on the PROTOS-MATINE dependency analysis methodology is presented in order to analyse CIs and represent them as CI services, CI service dependencies and base measurements. Additional research presented in this PhD thesis is related to a study of assurance indicators able to perform an on-line evaluation of the correctness of risk estimates within a CI service, as well as for risk estimates received from dependencies. A tool that supports all steps of establishing a CI security model was implemented during this PhD research. The research on the CI security model and the assurance indicators was validated based on a case study, and the initial results suggest its applicability to CI environments.
Tiivistelmä (Finnish abstract): This doctoral thesis presents a cross-sector model for continuous operational risk modelling of critical infrastructures. With this model, interdependent services can be informed of possible dangers, and mutually propagating, cumulative failures can thereby be stopped or slowed. The model analyses critical infrastructure service risk by examining the state of the critical infrastructure service, measured by base measurements (for example sensor or software states) within the critical infrastructure service components, and by observing the experienced service risk of the critical infrastructure services on which the service depends (critical infrastructure service dependencies). The critical infrastructure service risk is estimated probabilistically using Bayesian networks. In addition, the model enables the prediction of future risks in the short, medium and long term, and it enables the modelling of their mutual interdependencies, which are generally difficult to represent in Bayesian networks. Representing a critical infrastructure as a critical infrastructure security model requires analysis. This thesis presents a critical infrastructure analysis method based on the PROTOS-MATINE dependency analysis methodology. Critical infrastructures are represented as critical infrastructure services, mutual dependencies between services, and base measurements. In addition, assurance indicators are studied with which the correctness of the risk analysis of a critical infrastructure service in live operation, as well as risk estimates from dependencies, can be examined. A tool was developed in this research that supports all phases of implementing the critical infrastructure security model. The critical infrastructure security model and the correctness of the assurance indicators were validated with a proof-of-concept study, and the initial results demonstrate that the method works.
Kurzfassung (German abstract): This doctoral thesis presents a cross-sector model for continuous risk estimation of critical infrastructures during operation. The model makes it possible to inform services that depend on another service about possible threats and thus to stop or minimise the danger of risks spreading into other parts. With the model, threats to a service can be analysed by monitoring continuous measurements (for example sensors or software status) as well as by monitoring threats in services that constitute dependencies. The estimation of threats is performed probabilistically by means of a Bayesian network. In addition, the model enables the prediction of future risks in the short-term, mid-term and long-term future, and it allows the modelling of mutual dependencies, which are generally difficult to represent with Bayesian networks. To represent a critical infrastructure as such a model, an analysis of the critical infrastructure must be carried out. In this doctoral thesis, this analysis is supported by the PROTOS-MATINE method for dependency analysis. In addition to the presented model, this thesis presents a study of indicators that can evaluate the confidence in the accuracy of a risk estimate. The study addresses both the evaluation of risk estimates within services and the evaluation of risk estimates received from services that constitute dependencies. A software tool supporting all aspects of building the presented model was developed. Both the presented model for estimating risks in critical infrastructures and the indicators for checking the risk estimates were validated in a feasibility study. Initial results suggest the applicability of these concepts to critical infrastructures.
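As a hedged, hand-rolled illustration of the kind of Bayesian-network reasoning described above, the sketch below computes the risk of a CI service from one base measurement and one upstream dependency by direct enumeration; all structure and probabilities are invented for illustration.

```python
# Toy Bayesian-network calculation: CI service risk depends on a sensor-based
# base measurement and on the risk of one upstream service it depends on.
import itertools

# Priors for the parents
p_sensor_alarm = 0.10                 # P(base measurement = alarm)
p_upstream_high = 0.20                # P(upstream service risk = high)

# CPT: P(this service risk = high | sensor, upstream)
p_service_high = {
    (False, False): 0.02,
    (True,  False): 0.40,
    (False, True):  0.30,
    (True,  True):  0.85,
}

def joint(sensor, upstream):
    """Joint probability of a parent configuration times P(service high | parents)."""
    return ((p_sensor_alarm if sensor else 1 - p_sensor_alarm)
            * (p_upstream_high if upstream else 1 - p_upstream_high)
            * p_service_high[(sensor, upstream)])

prior_service_high = sum(joint(s, u) for s, u
                         in itertools.product([False, True], repeat=2))
# P(upstream risk high | this service risk observed high), by enumeration
post_upstream = sum(joint(s, True) for s in [False, True]) / prior_service_high

print("P(service risk high):", round(prior_service_high, 3))
print("P(upstream high | service high):", round(post_upstream, 3))
```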
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Boye, and 扬博野. "Online auction price prediction: a Bayesian updating framework based on the feedback history." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43085830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cook, Alex. "Inference and prediction in plant populations using data augmentation within a Bayesian framework." Thesis, Heriot-Watt University, 2006. http://hdl.handle.net/10399/178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Alterovitz, Gil 1975. "A Bayesian framework for statistical signal processing and knowledge discovery in proteomic engineering." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34479.

Full text
Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, February 2006.
Includes bibliographical references (leaves 73-85).
Proteomics has been revolutionized in the last couple of years through integration of new mass spectrometry technologies such as Surface-Enhanced Laser Desorption/Ionization (SELDI) mass spectrometry. As data is generated in an increasingly rapid and automated manner, novel and application-specific computational methods will be needed to deal with all of this information. This work seeks to develop a Bayesian framework in mass-based proteomics for protein identification. Using the Bayesian framework in a statistical signal processing manner, mass spectrometry data is filtered and analyzed in order to estimate protein identity. This is done by a multi-stage process which compares probabilistic networks generated from mass spectrometry-based data with a mass-based network of protein interactions. In addition, such models can provide insight on features of existing models by identifying relevant proteins. This work finds that the search space of potential proteins can be reduced such that simple antibody-based tests can be used to validate protein identity. This is done with real proteins as a proof of concept. Regarding protein interaction networks, the largest human protein interaction meta-database was created as part of this project, containing over 162,000 interactions. A further contribution is the implementation of the massome network database of mass-based interactions, which is used in the protein identification process.
This network is explored in terms of its potential usefulness for protein identification. The framework provides an approach to a number of core issues in proteomics. Besides providing these tools, it yields a novel way to approach statistical signal processing problems in this domain that can be adapted as proteomics-based technologies mature.
by Gil Alterovitz.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
40

Enderwick, Tracey Claire. "Reasoning with incomplete information : within the framework of Bayesian networks and influence diagrams." Thesis, Cranfield University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501518.

Full text
Abstract:
Human cognitive limitations make it very difficult to effectively process and rationalise information in complex situations. To overcome this limitation many analytical methods have been designed and applied to aid decision-makers in complex situations. In some cases, the information gained is comprehensive and complete. However, very often it is the case that information regarding the situation is incomplete and uncertain. In these cases it is necessary to reason with incomplete and uncertain information. The probabilistic graphical models known as Bayesian Networks and Influence Diagrams provide a powerful and increasingly popular framework to represent such situations. The research described here makes use of this framework to address a number of aspects relating to incomplete information. The methods presented are intended to provide support in areas of measuring the completeness of information, assessing the trade-off of speed versus quality of decision-making and incorporating the impact of unrevealed information as time progresses. Two measures are investigated to determine the completeness levels of influential observable information. One measure is based on mutual information. This measure is ultimately shown to fail, however, since it can result in a negative completeness value. The other measure focuses on the range reductions of either the probabilities (for the Bayesian Networks) or the utilities (for the Influence Diagrams) when observations are made. Analytical models were developed to determine the trade-off between waiting for more information and making an immediate decision. A number of experiments involving participants in imaginary decision-making scenarios were also conducted to gain an understanding of how people intuitively weigh such choices. The value of unrevealed information was utilised by applying likelihood evidence. Unrevealed information relates to something we are looking for but have not yet found. The longer time passes without it being found, the more confident we can become that it is not actually there.
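A small sketch of the "unrevealed information" idea discussed above: each period in which the sought item is not found is treated as likelihood evidence, so belief that it is present declines over time. The prior and per-period detection probability are assumed values.

```python
# Bayesian updating with likelihood evidence from unsuccessful search.
prior_present = 0.5        # prior belief the item (e.g. a threat) is present
p_detect = 0.3             # per-period probability of finding it if present

belief = prior_present
for period in range(1, 9):
    like_present = 1 - p_detect   # "still not found" is possible, but less likely, if present
    like_absent = 1.0             # trivially not found if absent
    belief = (belief * like_present) / (belief * like_present + (1 - belief) * like_absent)
    print(f"after {period} unsuccessful periods: P(present) = {belief:.3f}")
```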
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Boye. "Online auction price prediction a Bayesian updating framework based on the feedback history /." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43085830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

PARADISO, SIMONE. "CMB LIKELIHOOD AND COSMOLOGICAL PARAMETERS ESTIMATION IN A BAYESIAN END-TO-END FRAMEWORK." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/875458.

Full text
Abstract:
The cosmic microwave background (CMB) constitutes one of the most powerful probes of cosmology available today, as the statistical properties of the pattern of small variations in the intensity and polarisation of this radiation impose strong constraints on cosmological structure formation processes in the early universe. The first discovery of these fluctuations was made by Smoot et al. (1992), and during the last three decades massive efforts have been spent on producing detailed maps with steadily increasing sensitivity and precision (e.g., Bennett et al. 2013; de Bernardis et al. 2000; Louis et al. 2017; Sievers et al. 2013; Ogburn et al. 2010; Planck Collaboration I 2020, and references therein). State-of-the-art full-sky CMB measurements from the Planck satellite, complemented by ground and balloon observations and data from non-CMB cosmological probes, have led to a spectacularly successful cosmological concordance model called ΛCDM that posits that the Universe was created during a hot Big Bang about 13.8 billion years ago; that it was seeded by Gaussian random density fluctuations during a brief period of exponential expansion called inflation; and that it consists of about 5 % baryonic matter, 25 % dark matter, and 70 % dark energy. This model is able to describe a host of cosmological observables with exquisite precision (see e.g. Planck Collaboration VI 2020), although it leaves much to be desired in terms of theoretical understanding. Indeed, some of the biggest questions in modern cosmology revolve around understanding the physical nature of inflation, dark matter and dark energy, and billions of dollars and euros are spent on these questions. CMB observations play a key role in all these studies. The next major scientific endeavour for the CMB community is the search for primordial gravitational waves created during the inflationary epoch (e.g., Kamionkowski & Kovetz 2016). Current theories predict that such gravitational waves should imprint large-scale B-mode polarisation in the CMB anisotropies, with a map-domain amplitude no larger than a few tens of nK on degree angular scales. Detecting such a faint signal requires at least one or two orders of magnitude higher sensitivity than Planck, and correspondingly more stringent systematics suppression and uncertainty assessment. Indeed, Planck marked the transition from noise-dominated CMB measurements, at least for temperature anisotropies, to instrumental and foreground systematics dominated measurements, necessitating a change of approach in CMB data analysis. Perhaps the single most important lesson learned in this respect is an understanding of the tight relationship between instrument characterisation and astrophysical component separation. Because any current and planned CMB experiment in practice must be calibrated with in-flight observations of astrophysical sources, the calibration is in practice limited by our knowledge of the astrophysical sources in question, which also typically must be derived from the same data set. Instrument calibration and component separation must therefore be performed jointly, and a significant fraction of the full uncertainty budget arises from degeneracies between the two. This project addresses this challenge by integrating a complete end-to-end analysis pipeline for CMB observations into one framework that does not require intermediate human intervention.
This is the first complete approach to support seamless end-to-end error propagation for CMB applications, including full marginalisation over both instrumental and astrophysical uncertainties and their internal degeneracies; see BeyondPlanck Collaboration (2021); Colombo et al. (2021) for further discussion. For pragmatic reasons, the current pipeline has so far only been applied to the Planck LFI observations, which have significantly lower computational requirements and signal-to-noise ratio than the Planck HFI observations. The cosmological parameter constraints derived in the following are therefore not competitive in terms of absolute uncertainties as compared with already published Planck constraints. Rather, the present analysis focuses primarily on general algorithmic aspects, and serves as a first real-world demonstration of the end-to-end Bayesian framework, serving as a platform for further development and data integration (Gerakakis et al. 2021). Within BP, my activity focused on the scientific analysis of CMB products, at the map, power spectrum and cosmological parameter levels. More specifically, noting the sensitivity of large-scale polarisation reconstruction to systematics, I used the reionisation optical depth τ to assess the stability and performance of the BEYONDPLANCK framework, estimating P(τ | d) from Planck LFI and WMAP observations. I also constrained a basic 6-parameter ΛCDM model, combining the BEYONDPLANCK low-l likelihood with a high-l Blackwell-Rao CMB temperature likelihood that for the first time covers the two first acoustic peaks, or l ≤ 600. Due to LFI angular resolution and sensitivity, I complemented this with the Planck high-l likelihood to extend the multipole range to the full Planck resolution, as well as selected external non-CMB data sets. I also studied different model-independent parameterisations to constrain the reionisation history of the Universe.
APA, Harvard, Vancouver, ISO, and other styles
43

Bitto, Angela, and Sylvia Frühwirth-Schnatter. "Achieving shrinkage in a time-varying parameter model framework." Elsevier, 2019. http://dx.doi.org/10.1016/j.jeconom.2018.11.006.

Full text
Abstract:
Shrinkage for time-varying parameter (TVP) models is investigated within a Bayesian framework, with the aim to automatically reduce time-varying parameters to static ones if the model is overfitting. This is achieved through placing the double gamma shrinkage prior on the process variances. An efficient Markov chain Monte Carlo scheme is developed, exploiting boosting based on the ancillarity-sufficiency interweaving strategy. The method is applicable to TVP models for both univariate and multivariate time series. Applications include a TVP generalized Phillips curve for EU area inflation modeling and a multivariate TVP Cholesky stochastic volatility model for joint modeling of the returns from the DAX-30 index.
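The sketch below illustrates the underlying shrinkage question, under assumptions, by profiling the Kalman-filter marginal likelihood of a random-walk coefficient over its process variance for one truly static and one truly time-varying coefficient; it is a didactic stand-in, not the double gamma prior or the interweaving MCMC scheme of the paper.

```python
# Is a TVP coefficient's process variance essentially zero (static) or not?
import numpy as np

rng = np.random.default_rng(5)
T = 300
x = rng.normal(1.0, 1.0, T)
sigma2 = 0.5**2                                  # observation-noise variance

beta_static = np.full(T, 1.0)
beta_tv = 1.0 + np.cumsum(rng.normal(0, 0.05, T))
ys = {"static beta": x*beta_static + rng.normal(0, 0.5, T),
      "time-varying beta": x*beta_tv + rng.normal(0, 0.5, T)}

def log_marginal(y, q, b0=0.0, P0=10.0):
    """Kalman-filter log likelihood under beta_t random walk with variance q."""
    b, P, ll = b0, P0, 0.0
    for t in range(T):
        P = P + q                                # random-walk prediction
        f = x[t]*b                               # one-step-ahead prediction
        S = x[t]**2 * P + sigma2
        ll += -0.5*(np.log(2*np.pi*S) + (y[t]-f)**2/S)
        K = P*x[t]/S
        b = b + K*(y[t]-f)
        P = (1 - K*x[t])*P
    return ll

grid = [0.0, 1e-5, 1e-4, 1e-3, 1e-2]
for name, y in ys.items():
    best_q = max(grid, key=lambda q: log_marginal(y, q))
    print(f"{name}: process variance preferred by the data ~ {best_q}")
```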
APA, Harvard, Vancouver, ISO, and other styles
44

Karaaslan, Hatice. "A Study Of Argumentation In Turkish Within A Bayesian Reasoning Framework: Arguments From Ignorance." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614858/index.pdf.

Full text
Abstract:
In this dissertation, a normative prescriptive paradigm, namely a Bayesian theory of content-dependent argument strength, was employed in order to investigate argumentation, specifically the classic fallacy of the "argument from ignorance" or "argumentum ad ignorantiam". The study was carried out in Turkish with Turkish participants. In the Bayesian framework, argument strength is determined by the interactions between three major factors: prior belief, polarity, and evidence reliability. In addition, topic effects are considered. Three experiments were conducted. The first experiment replicated Hahn et al.'s (2005) study in Turkish to investigate whether similar results would be obtained in a different linguistic and cultural community. We found significant main effects of three of the manipulated factors in Oaksford and Hahn (2004) and Hahn et al. (2005): prior belief, reliability and topic. With respect to the Bayesian analysis, the overall fit between the data and the model was very good. The second experiment tested the hypothesis that argument acceptance would not vary across different intelligence levels. There was no significant main effect of prior belief, polarity, topic, and intelligence. We found a main effect of reliability only. However, further analyses on significant interactions showed that more intelligent subjects were less inclined to accept negative polarity items. Finally, the third experiment investigated the hypothesis that argument acceptance would vary depending on the presence of and the kind of evidentiality markers prevalent in Turkish, indicating the certainty with which events in the past have happened, marked with overt morpho-syntactic markers (-DI or -mIs). The experiment found a significant main effect of evidentiality as well as replicating the significant main effects of two of the manipulated factors (prior belief and reliability) in Oaksford and Hahn (2004), Hahn et al. (2005) and in our first experiment. Furthermore, reliability and evidentiality interacted, indicating separate as well as combined effects of the two. With respect to the Bayesian analysis, the overall fit between the data and the model was lower than the one in the first experiment, but still acceptable. Overall, this study supported the normative Bayesian approach to studying argumentation in an interdisciplinary perspective, combining computation, psychology, linguistics, and philosophy.
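A hedged numerical sketch of the Bayesian reading of the argument from ignorance follows: repeated negative findings are weak but genuine evidence, and their strength grows with the reliability of the evidence source. The sensitivity/specificity parameterisation and all numbers are illustrative, not taken from the dissertation.

```python
# "No evidence that the drug is harmful, therefore it is not harmful":
# posterior belief after n negative findings, as a function of reliability.
def posterior_not_harmful(prior_harmful, sensitivity, specificity, n_negative_tests):
    """P(not harmful | n negative test results), by Bayes' rule."""
    like_harmful = (1 - sensitivity) ** n_negative_tests      # misses every time
    like_not_harmful = specificity ** n_negative_tests        # correctly negative
    num = (1 - prior_harmful) * like_not_harmful
    den = num + prior_harmful * like_harmful
    return num / den

for reliability in (0.6, 0.8, 0.95):       # higher = more reliable evidence source
    for n in (1, 3, 5):
        p = posterior_not_harmful(prior_harmful=0.5,
                                  sensitivity=reliability,
                                  specificity=reliability,
                                  n_negative_tests=n)
        print(f"reliability={reliability}, negatives={n}: P(not harmful) = {p:.3f}")
```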
APA, Harvard, Vancouver, ISO, and other styles
45

McClellan, Michael James. "Estimating regional nitrous oxide emissions using isotopic ratio observations and a Bayesian inverse framework." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119986.

Full text
Abstract:
Thesis: Ph. D. in Atmospheric Science, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 141-148).
Atmospheric nitrous oxide (N₂O) significantly impacts Earth's climate due to its dual role as an inert potent greenhouse gas in the troposphere and as a reactive source of ozone-destroying nitrogen oxides in the stratosphere. Global atmospheric concentrations of N₂O, produced by natural and anthropogenic processes, continue to rise due to increases in emissions linked to human activity. The understanding of the impact of this gas is incomplete as there remain significant uncertainties in its global budget. The experiment described in this thesis, in which a global chemical transport model (MOZART-4), a fine-scale regional Lagrangian model (NAME), and new high-frequency atmospheric observations are combined, shows that uncertainty in N₂O emissions estimates can be reduced in areas with continuous monitoring of N₂O mole fraction and site-specific isotopic ratios. Due to unique heavy-atom (15N and 18O) isotopic substitutions made by different N₂O sources, the measurement of N₂O isotopic ratios in ambient air can help identify the distribution and magnitude of distinct sources. The new Stheno-TILDAS continuous wave laser spectroscopy instrument developed at MIT, recently installed at the Mace Head Atmospheric Research Station in western Ireland, can produce high-frequency timelines of atmospheric N₂O isotopic ratios that can be compared to contemporaneous trends in correlative trace gas mole fractions and NAME-based statistical distributions of the origin of air sampled at the station. This combination leads to apportionment of the relative contribution from five major N₂O sectors in the European region (agriculture, oceans, natural soils, industry, and biomass burning) plus well-mixed air transported from long distances to the atmospheric N₂O measured at Mace Head. Bayesian inverse modeling methods that compare N₂O mole fraction and isotopic ratio observations at Mace Head and at Dübendorf, Switzerland to simulated conditions produced using NAME and MOZART-4 lead to an optimized set of source-specific N₂O emissions estimates in the NAME Europe domain. Notably, this inverse modeling experiment leads to a significant decrease in uncertainty in summertime emissions for the four largest sectors in Europe, and shows that industrial and agricultural N₂O emissions in Europe are underestimated in inventories such as EDGAR v4.3.2. This experiment sets up future work that will be able to help constrain global estimates of N₂O emissions once additional isotopic observations are made in other global locations and integrated into the NAME-MOZART inverse modeling framework described in this thesis.
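The sketch below shows, under invented numbers, the linear-Gaussian Bayesian inversion that underlies this kind of flux estimation: sector emissions with an inventory prior are updated using synthetic observations and a random sensitivity matrix, reducing the posterior uncertainty. It is not the NAME-MOZART system itself.

```python
# Linear-Gaussian Bayesian inversion of sector emissions: y = H x + noise.
import numpy as np

rng = np.random.default_rng(6)
n_sectors, n_obs = 3, 40

# Sensitivity (footprint) matrix: contribution of each sector to each observation
H = np.abs(rng.normal(1.0, 0.5, (n_obs, n_sectors)))

x_true = np.array([1.4, 0.7, 1.0])           # "true" scale factors per sector
x_prior = np.ones(n_sectors)                 # inventory prior: scale factor 1
B = np.diag([0.5**2] * n_sectors)            # prior covariance
R = np.diag([0.3**2] * n_obs)                # observation-error covariance

y = H @ x_true + rng.normal(0, 0.3, n_obs)   # synthetic observations

S = H @ B @ H.T + R
G = B @ H.T @ np.linalg.inv(S)               # Kalman-style gain
x_post = x_prior + G @ (y - H @ x_prior)     # posterior (MAP) estimate
P_post = B - G @ H @ B                       # posterior covariance

print("posterior sector scale factors:", np.round(x_post, 2))
print("prior sd:", np.sqrt(np.diag(B)).round(2),
      "-> posterior sd:", np.sqrt(np.diag(P_post)).round(2))
```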
by Michael James McClellan.
Ph. D. in Atmospheric Science
APA, Harvard, Vancouver, ISO, and other styles
46

Tsiftsi, Thomai. "Statistical shape analysis in a Bayesian framework : the geometric classification of fluvial sand bodies." Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11368/.

Full text
Abstract:
We present a novel shape classification method which is embedded in the Bayesian paradigm. We focus on the statistical classification of planar shapes by using methods which replace some previous approximate results by analytic calculations in a closed form. This gives rise to a new Bayesian shape classification algorithm and we evaluate its efficiency and efficacy on available shape databases. In addition we apply our results to the statistical classification of geological sand bodies. We suggest that our proposed classification method, that utilises the unique geometrical information of the sand bodies, is more substantial and can replace ad-hoc and simplistic methods that have been used in the past. Finally, we conclude this work by extending the proposed classification algorithm for shapes in three-dimensions.
APA, Harvard, Vancouver, ISO, and other styles
47

Mukhtar, Abdulaziz Yagoub Abdelrahman. "Mathematical modeling of the transmission dynamics of malaria in South Sudan." University of the Western Cape, 2019. http://hdl.handle.net/11394/7037.

Full text
Abstract:
Philosophiae Doctor - PhD
Malaria is a common infection in tropical areas, transmitted between humans through female anopheles mosquito bites as it seeks blood meal to carry out egg production. The infection forms a direct threat to the lives of many people in South Sudan. Reports show that malaria caused a large proportion of morbidity and mortality in the fledgling nation, accounting for 20% to 40% morbidity and 20% to 25% mortality, with the majority of the affected people being children and pregnant mothers. In this thesis, we construct and analyze mathematical models for malaria transmission in South Sudan context incorporating national malaria control strategic plan. In addition, we investigate important factors such as climatic conditions and population mobility that may drive malaria in South Sudan. Furthermore, we study a stochastic version of the deterministic model by introducing a white noise.
APA, Harvard, Vancouver, ISO, and other styles
48

Kenja, Krishna. "Bayesian Parameter Estimation for Hyperelastic Constitutive Models of Soft Tissue under Non-homogeneous Deformation." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1515505801223584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kroon, Rodney Stephen. "A framework for estimating risk." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019.1/1104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Aprilia, Asti Wulandari. "Uncertainty quantification of volumetric and material balance analysis of gas reservoirs with water influx using a Bayesian framework." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4998.

Full text
Abstract:
Accurately estimating hydrocarbon reserves is important, because it affects every phase of the oil and gas business. Unfortunately, reserves estimation is always uncertain, since perfect information is seldom available from the reservoir, and uncertainty can complicate the decision-making process. Many important decisions have to be made without knowing exactly what the ultimate outcome will be from a decision made today. Thus, quantifying the uncertainty is extremely important. Two methods for estimating original hydrocarbons in place (OHIP) are volumetric and material balance methods. The volumetric method is convenient to calculate OHIP during the early development period, while the material balance method can be used later, after performance data, such as pressure and production data, are available. In this work, I propose a methodology for using a Bayesian approach to quantify the uncertainty of original gas in place (G), aquifer productivity index (J), and the volume of the aquifer (Wi) as a result of combining volumetric and material balance analysis in a water-driven gas reservoir. The results show that we potentially have significant non-uniqueness (i.e., large uncertainty) when we consider only volumetric analyses or material balance analyses. By combining the results from both analyses, the non-uniqueness can be reduced, resulting in OGIP and aquifer parameter estimates with lower uncertainty. By understanding the uncertainty, we can expect better management decision making.
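The following sketch illustrates, with invented numbers and a simple depletion-drive p/z line instead of the water-influx model of the thesis, how a volumetric Monte Carlo ensemble can serve as the prior for original gas in place G and material-balance data as the likelihood, yielding a posterior that is narrower than either analysis alone.

```python
# Combining volumetric (prior) and material-balance (likelihood) analyses of G.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Volumetric prior: G = 43560*A*h*phi*(1-Sw)/Bg (field units), uncertain inputs
area = rng.normal(1700, 200, n)          # acres
h = rng.normal(30, 5, n)                 # ft
phi = rng.normal(0.22, 0.03, n)
sw = rng.normal(0.30, 0.05, n)
bg = rng.normal(0.005, 0.0005, n)        # rcf/scf
G_prior = 43560 * area * h * phi * (1 - sw) / bg / 1e9   # Bscf

# Material-balance data: p/z observations at cumulative production Gp
pz_i, G_true = 5000.0, 70.0              # initial p/z (psia), "true" G in Bscf
Gp = np.array([2., 5., 8., 12.])
pz_obs = pz_i * (1 - Gp / G_true) + rng.normal(0, 40, Gp.size)

def loglike(G):                          # straight-line depletion model
    resid = pz_obs - pz_i * (1 - Gp[None, :] / G[:, None])
    return -0.5 * np.sum((resid / 40.0) ** 2, axis=1)

# Posterior by importance-weighting the volumetric ensemble with the likelihood
ll = loglike(G_prior)
w = np.exp(ll - ll.max())
w /= w.sum()
post_mean = np.sum(w * G_prior)
post_sd = np.sqrt(np.sum(w * (G_prior - post_mean) ** 2))
print(f"prior     G: {G_prior.mean():.1f} +/- {G_prior.std():.1f} Bscf")
print(f"posterior G: {post_mean:.1f} +/- {post_sd:.1f} Bscf")
```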
APA, Harvard, Vancouver, ISO, and other styles
