
Theses on the topic "Probabilistic Graphical Model"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.


Consult the top 50 theses for your research on the topic "Probabilistic Graphical Model."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1. Srinivasan, Vivekanandan. "Real delay graphical probabilistic switching model for VLSI circuits." [Tampa, Fla.]: University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000538.

2. Gyftodimos, Elias. "A probabilistic graphical model framework for higher-order term-based representations." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425088.

3. Lai, Wai Lok M. Eng Massachusetts Institute of Technology. "A probabilistic graphical model based data compression architecture for Gaussian sources." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/117322.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
Data is compressible because of inherent redundancies in the data, mathematically expressed as correlation structures. A data compression algorithm uses the knowledge of these structures to map the original data to a different encoding. The two aspects of data compression, source modeling, i.e., using knowledge about the source, and coding, i.e., assigning an output sequence of symbols to each output, are not inherently related, but most existing algorithms mix the two and treat them as one. This work builds on recent research on model-code separation compression architectures to extend this concept into the domain of lossy compression of continuous sources, in particular, Gaussian sources. To our knowledge, this is the first attempt at using sparse linear coding and discrete-continuous hybrid graphical model decoding for compressing continuous sources. With the flexibility afforded by the modularity of the architecture, we show that the proposed system is free from many inadequacies of existing algorithms, at the same time achieving competitive compression rates. Moreover, the modularity allows for many architectural extensions, with capabilities unimaginable for existing algorithms, including refining of the source model after compression, robustness to data corruption, seamless interface with source model parameter learning, and joint homomorphic encryption-compression. This work, meant to be an exploration in a new direction in data compression, is at the intersection of Electrical Engineering and Computer Science, tying together the disciplines of information theory, digital communication, data compression, machine learning, and cryptography.
by Wai Lok Lai.
M. Eng.
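
As a loose illustration of the model-code separation idea described in this abstract (a fixed linear "code" paired with a decoder that performs inference under a separate source model), here is a minimal sketch for a correlated Gaussian source. The dimensions, sparsity, and covariance are hypothetical choices for illustration, not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8  # source dimension and number of measurements (hypothetical)

# Source model: a correlated Gaussian with an AR(1)-style covariance.
Sigma = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (n, n))
x = rng.multivariate_normal(np.zeros(n), Sigma)

# "Code": a fixed random sparse linear projection, chosen independently of the model.
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < 0.3)
y = A @ x  # the compressed representation

# Decoder: MAP estimate of x given y under the Gaussian source model.
x_hat = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T + 1e-9 * np.eye(m), y)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

Because model and code are separate here, the same projection A could be decoded under a refined covariance later, which is the kind of flexibility the abstract emphasises.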

4. Ramani, Shiva Shankar. "Graphical Probabilistic Switching Model: Inference and Characterization for Power Dissipation in VLSI Circuits." [Tampa, Fla.]: University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000497.

5. Obembe, Olufunmilayo. "Development of a probabilistic graphical structure from a model of mental health clinical expertise." Thesis, Aston University, 2013. http://publications.aston.ac.uk/19432/.

Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph and probability theory and provide numerous advantages for representing domains involving uncertainty, such as the mental health domain. This thesis builds on the advantages that probabilistic graphical structures offer in representing such domains. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental health risk assessment based on fuzzy sets. The knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. This thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST's clinical expertise. The GRiST knowledge structure was decomposed into component parts, which were in turn mapped into equivalent probabilistic graphical structures such as Bayesian Belief Nets and Markov Random Fields, to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements.

6. Yoo, Keunyoung. "Probabilistic SEM: an augmentation to classical Structural equation modelling." Diss., University of Pretoria, 2018. http://hdl.handle.net/2263/66521.

Abstract:
Structural equation modelling (SEM) is carried out with the aim of testing hypotheses on the model of the researcher in a quantitative way, using the sampled data. Although SEM has developed in many aspects over the past few decades, there are still numerous advances which can make SEM an even more powerful technique. We propose representing the final theoretical SEM by a Bayesian Network (BN), which we would like to call a Probabilistic Structural Equation Model (PSEM). With the PSEM, we can take things a step further and conduct inference by explicitly entering evidence into the network and performing different types of inferences. Because the direction of the inference is not an issue, various scenarios can be simulated using the BN. The augmentation of SEM with BN provides significant contributions to the field. Firstly, structural learning can mine data for additional causal information which is not necessarily clear when hypothesising causality from theory. Secondly, the inference ability of the BN provides not only insight as mentioned before, but acts as an interactive tool as the 'what-if' analysis is dynamic.
Mini Dissertation (MCom)--University of Pretoria, 2018.
Statistics
MCom
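
The "what-if" inference this abstract describes, entering evidence into a Bayesian network and reading off a posterior, can be sketched by brute-force enumeration on a toy network. The structure and CPT values below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Toy 3-node network: Skill -> Outcome <- Effort (all binary).
p_skill = np.array([0.7, 0.3])                      # P(Skill)
p_effort = np.array([0.4, 0.6])                     # P(Effort)
p_outcome = np.array([[[0.9, 0.1],                  # P(Outcome | Skill, Effort),
                       [0.6, 0.4]],                 # indexed [skill, effort, outcome]
                      [[0.5, 0.5],
                       [0.1, 0.9]]])

# Joint distribution by enumeration: P(s, e, o) = P(s) P(e) P(o | s, e).
joint = p_skill[:, None, None] * p_effort[None, :, None] * p_outcome

# "What-if" query: enter the evidence Outcome = 1 and renormalize.
evidence_slice = joint[:, :, 1]          # fix Outcome = 1
posterior = evidence_slice.sum(axis=1)   # marginalize Effort
posterior /= posterior.sum()
print("P(Skill | Outcome=1) =", posterior)
```

Because a BN encodes the full joint, the same table supports queries in any direction, which is the point the abstract makes about the direction of inference not being an issue.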

7. Malings, Carl Albert. "Optimal Sensor Placement for Infrastructure System Monitoring using Probabilistic Graphical Models and Value of Information." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/869.

Abstract:
Civil infrastructure systems form the backbone of modern civilization, providing the basic services that allow society to function. Effective management of these systems requires decision-making about the allocation of limited resources to maintain and repair infrastructure components and to replace failed or obsolete components. Making informed decisions requires an understanding of the state of the system; such an understanding can be achieved through a computational or conceptual system model combined with information gathered on the system via inspections or sensors. Gathering of this information, referred to generally as sensing, should be optimized to best support the decision-making and system management processes, in order to reduce long-term operational costs and improve infrastructure performance. In this work, an approach to optimal sensing in infrastructure systems is developed by combining probabilistic graphical models of infrastructure system behavior with the value of information (VoI) metric, which quantifies the utility of information gathering efforts (referred to generally as sensor placements) in supporting decision-making in uncertain systems. Computational methods are presented for the efficient evaluation and optimization of the VoI metric based on the probabilistic model structure. Various case studies on the application of this approach to managing infrastructure systems are presented, illustrating the flexibility of the basic method as well as various special cases for its practical implementation. Three main contributions are presented in this work. First, while the computational complexity of the VoI metric generally grows exponentially with the number of components, growth can be greatly reduced in systems with certain topologies (designated as cumulative topologies). Following from this, an efficient approach to VoI computation based on a cumulative topology and Gaussian random field model is developed and presented. Second, in systems with non-cumulative topologies, approximate techniques may be used to evaluate the VoI metric. This work presents extensive investigations of such systems and draws some general conclusions about the behavior of this metric. Third, this work presents several complete application cases for probabilistic modeling techniques and the VoI metric in supporting infrastructure system management. Case studies are presented in structural health monitoring, seismic risk mitigation, and extreme temperature response in urban areas. Other minor contributions included in this work are theoretical and empirical comparisons of the VoI with other sensor placement metrics and an extension of the developed sensor placement method to systems that evolve in time. Overall, this work illustrates how probabilistic graphical models and the VoI metric can allow for efficient sensor placement optimization to support infrastructure system management. Areas of future work to expand on the results presented here include the development of approximate, heuristic methods to support efficient sensor placement in non-cumulative system topologies, as well as further validation of the efficient sensing optimization approaches used in this work.
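
The value-of-information metric at the heart of this work has a compact definition: the expected gain in optimal decision utility from observing a sensor before acting. A minimal sketch with one binary component state, one noisy sensor, and invented utilities:

```python
import numpy as np

# Two-state component (0 = intact, 1 = damaged), prior belief:
prior = np.array([0.8, 0.2])

# Utility of each action given the true state: rows = (do nothing, repair).
U = np.array([[0.0, -100.0],    # doing nothing is costly if damaged
              [-20.0, -25.0]])  # repair has a roughly fixed cost

# Noisy sensor: P(reading | state), rows = state, columns = reading.
L = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Expected utility of the best action without sensing.
eu_prior = max(U @ prior)

# Expected utility of the best action after each possible reading.
p_reading = prior @ L                               # marginal over readings
eu_post = 0.0
for y in range(2):
    posterior = prior * L[:, y] / p_reading[y]      # Bayes update
    eu_post += p_reading[y] * max(U @ posterior)

print("VoI =", eu_post - eu_prior)
```

With these invented numbers the sensor is worth about 10 utility units; a VoI of zero would mean no possible reading could change the chosen action.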

8. Piao, Dongzhen. "Speeding Up Gibbs Sampling in Probabilistic Optical Flow." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/481.

Abstract:
In today’s machine learning research, probabilistic graphical models are used extensively to model complicated systems with uncertainty, to help understanding of the problems, and to help inference and prediction of unknown events. For inference tasks, exact inference methods such as junction tree algorithms exist, but they suffer from exponential growth of cluster size and thus are not able to handle large and highly connected graphs. Approximate inference methods do not try to find exact probabilities, but rather give results that improve as the algorithm runs. Gibbs sampling, as one of the approximate inference methods, has gained a lot of traction and is used extensively in inference tasks, due to its ease of understanding and implementation. However, as problem size grows, even the faster algorithm needs a speed boost to meet application requirements. The number of variables in an application graphical model can range from tens of thousands to billions, depending on the problem domain. The original sequential Gibbs sampling may not return satisfactory results in limited time. Thus, in this thesis, we investigate ways to speed up Gibbs sampling. We study ways to do better initialization, to block variables to be sampled together, and to use simulated annealing; these methods modify the algorithm itself. We also investigate ways to parallelize the algorithm. An algorithm is parallelizable if some steps do not depend on other steps, and we identify such dependencies in Gibbs sampling. We discuss how the choice of different hardware and software architectures affects the parallelization result. We use the optical flow problem as an example to demonstrate the various speed-up methods we investigated. An optical flow method tries to find the movements of small image patches between two images in a temporal sequence. We demonstrate how we can model it using a probabilistic graphical model and solve it using Gibbs sampling. The result of using sequential Gibbs sampling is demonstrated, with comparisons from using various speed-up methods and other optical flow methods.
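
The baseline the thesis starts from, sequential Gibbs sampling that resamples each variable from its full conditional, looks roughly like this on a small Ising-style grid MRF. Grid size, coupling strength, and sweep count are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, beta = 32, 32, 0.7              # grid size and coupling strength
x = rng.choice([-1, 1], size=(H, W))  # random initial spin configuration

def neighbor_sum(x, i, j):
    # Sum of the 4-connected neighbors, with free boundaries.
    s = 0
    if i > 0:     s += x[i - 1, j]
    if i < H - 1: s += x[i + 1, j]
    if j > 0:     s += x[i, j - 1]
    if j < W - 1: s += x[i, j + 1]
    return s

# Sequential Gibbs sweeps: resample every site from its full conditional.
for sweep in range(50):
    for i in range(H):
        for j in range(W):
            s = neighbor_sum(x, i, j)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))  # P(x_ij = +1 | rest)
            x[i, j] = 1 if rng.random() < p_plus else -1

print("mean magnetization:", x.mean())
```

The strictly sequential inner loop is exactly what the blocking and parallelization strategies discussed in the abstract aim to break up.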

9. Kausler, Bernhard [Verfasser], and Fred A. [Akademischer Betreuer] Hamprecht. "Tracking-by-Assignment as a Probabilistic Graphical Model with Applications in Developmental Biology / Bernhard Kausler ; Betreuer : Fred A. Hamprecht." Heidelberg: Universitätsbibliothek Heidelberg, 2013. http://d-nb.info/1177381079/34.

10. Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio: Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.

11. Hakimov, Sherzod [Verfasser]. "Learning Multilingual Semantic Parsers for Question Answering over Linked Data. A comparison of neural and probabilistic graphical model architectures / Sherzod Hakimov." Bielefeld: Universitätsbibliothek Bielefeld, 2019. http://d-nb.info/1186887850/34.

12. GARBARINO, DAVIDE. "Acknowledging the structured nature of real-world data with graphs embeddings and probabilistic inference methods." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092453.

Abstract:
In the artificial intelligence community there is a growing consensus that real world data is naturally represented as graphs because they can easily incorporate complexity at several levels, e.g. hierarchies or time dependencies. In this context, this thesis studies two main branches for structured data. In the first part we explore how state-of-the-art machine learning methods can be extended to graph modeled data provided that one is able to represent graphs in vector spaces. Such extensions can be applied to analyze several kinds of real-world data and tackle different problems. Here we study the following problems: a) understand the relational nature and evolution of websites which belong to different categories (e-commerce, academic (p.a.) and encyclopedic (forum)); b) model tennis players' scores based on different game surfaces and tournaments in order to predict match results; c) analyze preterm infants' motion patterns able to characterize possible neurodegenerative disorders and d) build an academic collaboration recommender system able to model academic groups and individual research interests while suggesting possible researchers to connect with, topics of interest and representative publications to external users. In the second part we focus on graph inference methods from data which present two main challenges: missing data and non-stationary time dependency. In particular, we study the problem of inferring Gaussian Graphical Models in the following settings: a) inference of Gaussian Graphical Models when data are missing or latent in the context of multiclass or temporal network inference and b) inference of time-varying Gaussian Graphical Models when data is multivariate and non-stationary. Such methods have a natural application in the composition of an optimized stock market portfolio. Overall this work sheds light on how to acknowledge the intrinsic structure of data with the aim of building statistical models that are able to capture the actual complexity of the real world.
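
The Gaussian Graphical Model inference discussed in the second part is commonly attacked with l1-penalised maximum likelihood, the graphical lasso, which recovers the sparsity pattern of the precision matrix (the graph). A minimal sketch with scikit-learn's cross-validated estimator on a made-up sparse precision matrix; this shows the standard stationary, fully observed estimator, not the thesis's missing-data or time-varying extensions.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
# A sparse 4-variable precision matrix: variables 0-1 and 2-3 are coupled.
prec = np.array([[2.0, 0.8, 0.0, 0.0],
                 [0.8, 2.0, 0.0, 0.0],
                 [0.0, 0.0, 2.0, 0.8],
                 [0.0, 0.0, 0.8, 2.0]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=1000)

# l1-penalised maximum likelihood recovers the zero pattern of prec.
model = GraphicalLassoCV().fit(X)
print(np.round(model.precision_, 2))
```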

13. Demma, James Daniel. "A Hardware Generator for Factor Graph Applications." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48599.

Abstract:
A Factor Graph (FG -- http://en.wikipedia.org/wiki/Factor_graph) is a structure used to find solutions to problems that can be represented as a Probabilistic Graphical Model (PGM). They consist of interconnected variable nodes and factor nodes, which iteratively compute and pass messages to each other. FGs can be applied to solve decoding of forward error correcting codes, Markov chains and Markov Random Fields, Kalman Filtering, Fourier Transforms, and even some games such as Sudoku. In this paper, a framework is presented for rapid prototyping of hardware implementations of FG-based applications. The FG developer specifies aspects of the application, such as graphical structure, factor computation, and message passing algorithm, and the framework returns a design. A system of Python scripts and Verilog Hardware Description Language templates is used together to generate the HDL source code for the application. The generated designs are vendor/platform agnostic, but currently target the Xilinx Virtex-6-based ML605. The framework has so far been primarily applied to construct Low Density Parity Check (LDPC) decoders. The characteristics of a large basket of generated LDPC decoders, including contemporary 802.11n decoders, have been examined as a verification of the system and as a demonstration of its capabilities. As a further demonstration, the framework has been applied to construct a Sudoku solver.
Master of Science
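
The message passing that such factor-graph hardware implements can be sketched in software. Below, sum-product messages compute a marginal on a tiny chain of binary variables and are checked against brute-force enumeration; the factor tables are invented for illustration.

```python
import numpy as np

# Chain factor graph  x1 --f12-- x2 --f23-- x3  over binary variables.
f12 = np.array([[0.9, 0.1],
                [0.2, 0.8]])      # f12[x1, x2]
f23 = np.array([[0.7, 0.3],
                [0.3, 0.7]])      # f23[x2, x3]
prior1 = np.array([0.6, 0.4])     # unary factor on x1

# Sum-product messages flowing towards x3 (a chain needs no iteration):
m_f12_to_x2 = prior1 @ f12        # sum over x1
m_f23_to_x3 = m_f12_to_x2 @ f23   # sum over x2
marginal_x3 = m_f23_to_x3 / m_f23_to_x3.sum()
print("p(x3) =", marginal_x3)

# Brute-force check over all 8 joint configurations:
joint = prior1[:, None, None] * f12[:, :, None] * f23[None, :, :]
print("check  =", joint.sum(axis=(0, 1)) / joint.sum())
```

In an LDPC decoder the same scheme runs in parallel over parity-check factors, which is why the variable/factor node structure maps so directly onto hardware.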

14. Gasse, Maxime. "Apprentissage de Structure de Modèles Graphiques Probabilistes : application à la Classification Multi-Label." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1003/document.

Abstract:
In this thesis, we address the specific problem of probabilistic graphical model structure learning, that is, finding the most efficient structure to represent a probability distribution, given only a sample set D ∼ p(v). In the first part, we review the main families of probabilistic graphical models from the literature, from the most common (directed, undirected) to the most advanced ones (chained, mixed etc.). Then we study particularly the problem of learning the structure of directed graphs (Bayesian networks), and we propose a new hybrid structure learning method, H2PC (Hybrid Hybrid Parents and Children), which combines a constraint-based approach (statistical independence tests) with a score-based approach (posterior probability of the structure). In the second part, we address the multi-label classification problem, which aims at assigning a set of categories (binary vector y ∈ {0, 1}^m) to a given object (vector x ∈ R^d). In this context, probabilistic graphical models provide convenient means of encoding p(y|x), particularly for the purpose of minimizing general loss functions. We review the main approaches based on PGMs for multi-label classification (Probabilistic Classifier Chain, Conditional Dependency Network, Bayesian Network Classifier, Conditional Random Field, Sum-Product Network), and propose a generic approach inspired from constraint-based structure learning methods to identify the unique partition of the label set into irreducible label factors (ILFs), that is, the irreducible factorization of p(y|x) into disjoint marginal distributions. We establish several theoretical results to characterize the ILFs based on the compositional graphoid axioms, and obtain three generic procedures under various assumptions about the conditional independence properties of the joint distribution p(x, y). Our conclusions are supported by carefully designed multi-label classification experiments, under the F-loss and the zero-one loss functions.

15. Petiet, Florence. "Réseau bayésien dynamique hybride : application à la modélisation de la fiabilité de systèmes à espaces d'états discrets." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2014/document.

Abstract:
Reliability analysis is an integral part of system design and operation, especially for systems running critical applications. Recent works have shown the interest of using Bayesian Networks in the field of reliability for modeling the degradation of a system. Graphical Duration Models are a specific case of Bayesian Networks which make it possible to overcome the Markovian property of dynamic Bayesian Networks. They adapt to systems whose sojourn time in each state is not necessarily exponentially distributed, which is the case for most industrial applications. Previous works, however, have shown limitations of these models in terms of storage capacity and computing time, due to the discrete nature of the sojourn time variable. A solution might be to allow the sojourn time variable to be continuous. According to expert opinion, sojourn time variables follow a Weibull distribution in many systems. The goal of this thesis is to integrate sojourn time variables following a Weibull distribution into a Graphical Duration Model by proposing a new approach. After a presentation of Bayesian networks, and more particularly graphical duration models and their limitations, this report focuses on presenting the new model allowing the modeling of the degradation process. This new model is called the Weibull Hybrid Graphical Duration Model. An original algorithm allowing inference in such a network has been deployed. The next step was to validate the approach; as no data were available, state sequences of the system had to be simulated. The various databases thus built allowed learning, on the one hand, a Graphical Duration Model and, on the other hand, a Weibull Hybrid Graphical Duration Model, in order to compare them in terms of learning quality, inference quality, computation time, and storage space.
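
The modelling assumption at the core of this thesis, state sojourn times drawn from Weibull distributions rather than the exponential sojourn implied by a Markov chain, is easy to simulate. A sketch of a simple monotone degradation path; the shape and scale parameters are hypothetical, not estimated values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three degradation states with state-specific Weibull sojourn times.
shapes = {0: 1.5, 1: 1.2, 2: 2.0}    # Weibull shape k per state
scales = {0: 100.0, 1: 50.0, 2: 20.0}  # Weibull scale lambda per state

t, state, history = 0.0, 0, []
while state < 2:                       # state 2 = failed (absorbing)
    # rng.weibull draws a standard Weibull(k); multiply by the scale.
    sojourn = scales[state] * rng.weibull(shapes[state])
    history.append((state, round(t, 1), round(t + sojourn, 1)))
    t += sojourn
    state += 1                         # simple monotone degradation path

print("failure time:", round(t, 1))
print(history)
```

A shape parameter above 1 gives an increasing hazard rate (wear-out), which is exactly the behaviour an exponential sojourn time cannot express.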

16. Cruz Fernández, Francisco. "Probabilistic graphical models for document analysis." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/399520.

Abstract:
Currently, more than 80% of the documents stored on paper belong to the business field. Advances in digitization techniques have fostered the interest in creating digital copies in order to solve maintenance and storage problems, as well as to have efficient ways for transmission and automatic extraction of the information contained therein. This situation has led to the need to create systems that can automatically extract and analyze this kind of information. The great variety of types of documents makes this not a trivial task. The extraction process of numerical data from tables or invoices differs substantially from a task of handwriting recognition in a document with annotations. However, there is a common link in the two tasks: given a document, we need to identify the region where the information of interest is located. In the area of Document Analysis this process is called Layout Analysis, and aims at identifying and categorizing the different entities that compose the document. These entities can be text regions, pictures, text lines or tables, among others. This process can be done from two different approaches: physical or logical analysis. Physical analysis focuses on identifying the physical boundaries that define the area of interest, whereas logical analysis also models information about the role and semantics of the entities within the scope of the document. To encode this information it is necessary to incorporate prior knowledge about the task into the analysis process, which can be introduced in terms of contextual relations between entities. The use of context has proven to be useful to reinforce the recognition process and improve the results on many computer vision tasks. It presents two fundamental questions: what kind of contextual information is appropriate, and how to incorporate this information into the model. In this thesis we study several ways to incorporate contextual information on the task of document layout analysis. We focus on the study of Probabilistic Graphical Models and other mechanisms for the inclusion of contextual relations applied to the specific tasks of region identification and handwritten text line segmentation. On the one hand, we present several methods for region identification. First, we present a method for layout analysis based on Conditional Random Fields for maximum a posteriori estimation. We encode a set of structural relations between different classes of regions on a set of features. Second, we present a method based on 2D-Probabilistic Context-free Grammars and perform a comparative study between probabilistic graphical models and this syntactic approach. Third, we propose a statistical approach based on the Expectation-Maximization algorithm devised for structured documents. We perform a thorough evaluation of the proposed methods on two particular collections of documents: a historical dataset composed of ancient structured documents, and a collection of contemporary documents. On the other hand, we present a probabilistic framework applied to the task of handwritten text line segmentation. We successfully combine the EM algorithm and variational approaches for this purpose. We demonstrate that the use of contextual information using probabilistic graphical models is of great utility for these tasks.

17. Rios, Felix Leopoldo. "Bayesian inference in probabilistic graphical models." Doctoral thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214542.

Abstract:
This thesis consists of four papers studying structure learning and Bayesian inference in probabilistic graphical models for both undirected and directed acyclic graphs (DAGs). Paper A presents a novel algorithm, called the Christmas tree algorithm (CTA), that incrementally constructs junction trees for decomposable graphs by adding one node at a time to the underlying graph. We prove that the CTA is able, with positive probability, to generate all junction trees over any given number of underlying nodes. Importantly for practical applications, we show that the transition probability of the CTA kernel has a computationally tractable expression. Applications of the CTA transition kernel are demonstrated in a sequential Monte Carlo (SMC) setting for counting the number of decomposable graphs. Paper B presents the SMC scheme in a more general setting specifically designed for approximating distributions over decomposable graphs. The transition kernel from the CTA of Paper A is incorporated as the proposal kernel. To improve the traditional SMC algorithm, a particle Gibbs sampler with a systematic refreshment step is further proposed. A simulation study is performed for approximate graph posterior inference within both log-linear and decomposable Gaussian graphical models, showing the efficiency of the suggested methodology in both cases. Paper C explores the particle Gibbs sampling scheme of Paper B for approximate posterior computations in the Bayesian predictive classification framework. Specifically, Bayesian model averaging (BMA) based on the posterior exploration of the class-specific model is incorporated into the predictive classifier to take full account of the model uncertainty. For each class, the dependence structure underlying the observed features is represented by a distribution over the space of decomposable graphs. Due to the intractability of an explicit expression, averaging over the approximated graph posterior is performed. The proposed BMA classifier reveals superior performance compared to the ordinary Bayesian predictive classifier that does not account for the model uncertainty, as well as to a number of out-of-the-box classifiers. Paper D develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a stochastic optimization algorithm based on the layering property of DAGs is developed for performing structure learning in Bayesian networks. A simulation study shows that the algorithm, along with the prior, has superior performance compared with existing priors when used for learning graphs with a clearly layered structure.

18. Koseler, Kaan Tamer. "Realization of Model-Driven Engineering for Big Data: A Baseball Analytics Use Case." Miami University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=miami1524832924255132.

19. Emerson, Guy Edward Toh. "Functional distributional semantics: learning linguistically informed representations from a precisely annotated corpus." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/284882.

Abstract:
The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors - but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about 'truth', a crucial concept in logic and formal semantics. In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a 'function', mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations. I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role. I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models.
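
The central representational move in this abstract, a word meaning as a classifier mapping entities to probabilities of truth, can be sketched in a few lines. The features, weights, and entities below are invented for illustration; the thesis itself learns such functions from a parsed corpus within a graphical model.

```python
import numpy as np

def semantic_function(weights, bias):
    # A "meaning": a map from entity features to P(word is true of the entity).
    return lambda entity: 1.0 / (1.0 + np.exp(-(weights @ entity + bias)))

# Hypothetical 3-dimensional entity features: [size, animacy, roundness]
entities = {
    "ball":   np.array([0.2, 0.0, 0.9]),
    "planet": np.array([1.0, 0.0, 1.0]),
    "cat":    np.array([0.3, 1.0, 0.4]),
}

# The meaning of "round" as a truth-conditional classifier (weights invented).
is_round = semantic_function(np.array([0.0, -1.0, 4.0]), bias=-1.5)
for name, e in entities.items():
    print(name, round(float(is_round(e)), 2))
```

Unlike a word vector, such a function can be composed logically (conjoined, negated) while still being trainable as an ordinary classifier.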

20. Badrinarayanan, Vijay. "Probabilistic graphical models for visual tracking of objects." Rennes 1, 2009. http://www.theses.fr/2009REN1S014.

Abstract:
This thesis puts forth graphical models for visual tracking in low and higher dimensional state spaces. For low dimensional tracking problems, such as object position tracking, a novel message switching/combination idea is introduced. Based on this concept and a new pseudo simulation viewpoint of point trackers a novel randomized feature point filter is developed. The message switching/combination ideas are then extended to construct multi-cue fusion based tracking with a set of simulation based filters. Employing pseudo simulation based point trackers and color based particle filters as their elementary filters these multi-cue fusion schemes track general objects in complex scenarios. Moving to higher dimensions, general multi-part tracking schemes are introduced. A network of local patch trackers are put to play in a stochastic simulation framework to track the position and other attributes of arbitrary objects. The difficult task of updating the process prior online is also performed under this simulation framework. Dealing with online update of the process prior enlarges the scope of application of the multi-part tracking model. A simple interactive multi-part tracking scheme is also discussed in this context. To the extent permitted by practicality, the contributions are evaluated quantitatively and/or qualitatively to convince the reader of their novelty, improvements and/or robustness. Detailed discussions of the highlights and drawbacks of the models are presented. Prospective extensions of the models based on empirical arguments and reflections on design are included

21. Wu, Di. "Human action recognition using deep probabilistic graphical models." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/6603/.

Abstract:
Building intelligent systems that are capable of representing or extracting high-level representations from high-dimensional sensory data lies at the core of solving many A.I. related tasks. Human action recognition is an important topic in computer vision that lies in high-dimensional space. Its applications include robotics, video surveillance, human-computer interaction, user interface design, and multi-media video retrieval amongst others. A number of approaches have been proposed to extract representative features from high-dimensional temporal data, most commonly hard wired geometric or bio-inspired shape context features. This thesis first demonstrates some ad-hoc hand-crafted rules for effectively encoding motion features, and later elicits a more generic approach for incorporating structured feature learning and reasoning, i.e. deep probabilistic graphical models. The hierarchical dynamic framework first extracts high level features and then uses the learned representation for estimating emission probability to infer action sequences. We show that better action recognition can be achieved by replacing Gaussian mixture models by Deep Neural Networks that contain many layers of features to predict probability distributions over states of Markov Models. The framework can be easily extended to include an ergodic state to segment and recognise actions simultaneously. The first part of the thesis focuses on analysis and applications of hand-crafted features for human action representation and classification. We show that the "hard coded" concept of a correlogram can incorporate correlations between time domain sequences and we further investigate multi-modal inputs, e.g. depth sensor input and its unique traits for action recognition. The second part of this thesis focuses on marrying probabilistic graphical models with Deep Neural Networks (both Deep Belief Networks and Deep 3D Convolutional Neural Networks) for structured sequence prediction. The proposed Deep Dynamic Neural Network exhibits its general framework for structured 2D data representation and classification. This inspires us to further investigate applying various graphical models to time-variant video sequences.
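
The hybrid architecture this abstract describes, a neural network predicting distributions over Markov-model states, typically decodes an action sequence with the Viterbi algorithm over the network's per-frame scores. A generic sketch with hypothetical per-frame posteriors; this illustrates the standard decoding step, not the thesis's Deep Dynamic Neural Network itself.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """log_emit[t, s]: per-frame state scores (e.g. from a neural network);
    log_trans[s, s']: transition log-probabilities; log_init[s]: initial scores."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrace the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical network posteriors over 2 action states for 5 frames:
post = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.2, 0.8], [0.1, 0.9]])
trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
print(viterbi(np.log(post), trans, np.log(np.array([0.5, 0.5]))))
```

The sticky transition matrix smooths the frame-wise predictions, which is the practical benefit of keeping the Markov model on top of the network.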

22. Oberhoff, Daniel [Verfasser]. "Hierarchical probabilistic graphical models for image recognition / Daniel Oberhoff." Ulm: Universität Ulm. Fakultät für Ingenieurwissenschaften und Informatik, 2013. http://d-nb.info/1033109088/34.

23. Chechetka, Anton. "Query-Specific Learning and Inference for Probabilistic Graphical Models." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/171.

Abstract:
In numerous real world applications, from sensor networks to computer vision to natural text processing, one needs to reason about the system in question in the face of uncertainty. A key problem in all those settings is to compute the probability distribution over the variables of interest (the query) given the observed values of other random variables (the evidence). Probabilistic graphical models (PGMs) have become the approach of choice for representing and reasoning with high-dimensional probability distributions. However, for most models capable of accurately representing real-life distributions, inference is fundamentally intractable. As a result, optimally balancing the expressive power and inference complexity of the models, as well as designing better approximate inference algorithms, remain important open problems with potential to significantly improve the quality of answers to probabilistic queries. This thesis contributes algorithms for learning and approximate inference in probabilistic graphical models that improve on the state of the art by emphasizing the computational aspects of inference over the representational properties of the models. Our contributions fall into two categories: learning accurate models where exact inference is tractable and speeding up approximate inference by focusing computation on the query variables and only spending as much effort on the remaining parts of the model as needed to answer the query accurately. First, for a case when the set of evidence variables is not known in advance and a single model is needed that can be used to answer any query well, we propose a polynomial time algorithm for learning the structure of tractable graphical models with quality guarantees, including PAC learnability and graceful degradation guarantees. Ours is the first efficient algorithm to provide this type of guarantee. A key theoretical insight of our approach is a tractable upper bound on the mutual information of arbitrarily large sets of random variables that yields exponential speedups over the exact computation. Second, for a setting where the set of evidence variables is known in advance, we propose an approach for learning tractable models that tailors the structure of the model for the particular value of evidence that becomes known at test time. By avoiding a commitment to a single tractable structure during learning, we are able to expand the representation power of the model without sacrificing efficient exact inference and parameter learning. We provide a general framework that allows one to leverage existing structure learning algorithms for discovering high-quality evidence-specific structures. Empirically, we demonstrate state of the art accuracy on real-life datasets and an order of magnitude speedup. Finally, for applications where the intractable model structure is a given and approximate inference is needed, we propose a principled way to speed up convergence of belief propagation by focusing the computation on the query variables and away from the variables that are of no direct interest to the user. We demonstrate significant speedups over the state of the art on large-scale relational models. Unlike existing approaches, ours does not involve model simplification, and thus has the advantage of converging to the fixed point of the full model.
More generally, we argue that the common approach of concentrating on the structure of representation provided by PGMs, and only structuring the computation as representation allows, is suboptimal because of the fundamental computational problems. It is the computation that eventually yields answers to the queries, so directly focusing on structure of computation is a natural direction for improving the quality of the answers. The results of this thesis are a step towards adapting the structure of computation as a foundation of graphical models.
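
For context, the classic tractable case of the structure-learning problem this thesis generalises is Chow-Liu tree learning: estimate pairwise mutual information and keep a maximum-weight spanning tree. A sketch on synthetic binary data; this is the textbook baseline, not the thesis's algorithm.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
# Synthetic binary data with a chain dependency 0 -> 1 -> 2, plus noise.
x0 = rng.integers(0, 2, 2000)
x1 = x0 ^ (rng.random(2000) < 0.2).astype(int)
x2 = x1 ^ (rng.random(2000) < 0.2).astype(int)
X = np.stack([x0, x1, x2], axis=1)

# Chow-Liu: maximum spanning tree over pairwise mutual information.
G = nx.Graph()
d = X.shape[1]
for i in range(d):
    for j in range(i + 1, d):
        G.add_edge(i, j, weight=mutual_info_score(X[:, i], X[:, j]))
tree = nx.maximum_spanning_tree(G)
print(sorted(tree.edges()))   # expected: [(0, 1), (1, 2)]
```

Trees keep exact inference linear in the number of variables, which is why bounding mutual information over larger variable sets, as the abstract describes, is the natural route to richer yet still tractable structures.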

24. Himes, Blanca Elena. "Predictive genomics in asthma management using probabilistic graphical models." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40353.

Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2007.
Includes bibliographical references (leaves 126-142).
Complex traits are conditions that, as a result of the complex interplay among genetic and environmental factors, have wide variability in progression and manifestation. Because most common diseases with high morbidity and mortality are complex traits, uncovering the genetic architecture of these traits is an important health problem. Asthma, a chronic inflammatory airway disease, is one such trait that affects over 300 million people around the world. Although there is a large amount of human genetic information currently available and expanding at a rapid pace, traditional genetic studies have not provided a concomitant understanding of complex traits, including asthma and its related phenotypes. Despite the intricate genetic background underlying complex traits, most traditional genetic studies focus on individual genetic variants. New methods that consider multiple genetic variants are needed in order to accelerate the understanding of complex traits. In this thesis, the need for better analytic approaches for the study of complex traits is addressed with the creation of a novel method. Probabilistic graphical models (PGMs) are a powerful technique that can overcome limitations of conventional association study approaches. Going beyond single or pairwise gene interactions with a phenotype, PGMs are able to account for complex gene interactions and make predictions of a phenotype. Most PGMs have limited scalability with large genetic datasets. Here, a procedure called phenocentric Bayesian networks that is tailored for the discovery of complex multivariate models for a trait using large genomic datasets is presented. Resulting models can be used to predict outcomes of a phenotype, which allows for meaningful validation and potential applicability in a clinical setting. The utility of phenocentric Bayesian networks is demonstrated with the creation of predictive models for two complex traits related to asthma management: exacerbation and bronchodilator response. The good predictive accuracy of each model is established and shown to be superior to single gene analysis. The results of this work demonstrate the promise of using phenocentric Bayesian networks to study the genetic architecture of complex traits, and the utility of multigenic predictive methods compared to traditional single-gene approaches.
by Blanca Elena Himes.
Ph.D.

25. Ji, Xiaofei. "View-invariant Human Action Recognition via Probabilistic Graphical Models." Thesis, University of Portsmouth, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523620.

26. Budvytis, Ignas. "Novel probabilistic graphical models for semi-supervised video segmentation." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648293.

27. Georgatzis, Konstantinos. "Dynamical probabilistic graphical models applied to physiological condition monitoring." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28838.

Abstract:
Intensive Care Units (ICUs) host patients in critical condition who are being monitored by sensors which measure their vital signs. These vital signs carry information about a patient’s physiology and can have a very rich structure at fine resolution levels. The task of analysing these biosignals for the purposes of monitoring a patient’s physiology is referred to as physiological condition monitoring. Physiological condition monitoring of patients in ICUs is of critical importance as their health is subject to a number of events of interest. For the purposes of this thesis, the overall task of physiological condition monitoring is decomposed into the sub-tasks of modelling a patient’s physiology a) under the effect of physiological or artifactual events and b) under the effect of drug administration. The first sub-task is concerned with modelling artifact (such as the taking of blood samples, suction events etc.), and physiological episodes (such as bradycardia), while the second sub-task is focussed on modelling the effect of drug administration on a patient’s physiology. The first contribution of this thesis is the formulation, development and validation of the Discriminative Switching Linear Dynamical System (DSLDS) for the first sub-task. The DSLDS is a discriminative model which identifies the state-of-health of a patient given their observed vital signs using a discriminative probabilistic classifier, and then infers their underlying physiological values conditioned on this status. It is demonstrated on two real-world datasets that the DSLDS is able to outperform an alternative, generative approach in most cases of interest, and that an α-mixture of the two models achieves higher performance than either of the two models separately. The second contribution of this thesis is the formulation, development and validation of the Input-Output Non-Linear Dynamical System (IO-NLDS) for the second sub-task. The IO-NLDS is a non-linear dynamical system for modelling the effect of drug infusions on the vital signs of patients. More specifically, in this thesis the focus is on modelling the effect of the widely used anaesthetic drug Propofol on a patient’s monitored depth of anaesthesia and haemodynamics. A comparison of the IO-NLDS with a model derived from the Pharmacokinetics/Pharmacodynamics (PK/PD) literature on a real-world dataset shows that significant improvements in predictive performance can be provided without requiring the incorporation of expert physiological knowledge.

28. Jackson, Zara. "Basal Metabolic Rate (BMR) estimation using Probabilistic Graphical Models." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384629.

Abstract:
Obesity is a growing problem globally. Currently 2.3 billion adults are overweight, and this number is rising. The most common method for weight loss is calorie counting, in which to lose weight a person should be in a calorie deficit. Basal Metabolic Rate accounts for the majority of calories a person burns in a day and it is therefore a major contributor to accurate calorie counting. This paper uses a Dynamic Bayesian Network to estimate Basal Metabolic Rate (BMR) for a sample of 219 individuals from all Body Mass Index (BMI) categories. The data was collected through the Lifesum app. A comparison of the estimated BMR values was made with the commonly used Harris-Benedict equation, finding that food journaling is a sufficient method to estimate BMR. Next-day weight prediction was also computed based on the estimated BMR. The results showed that the Harris-Benedict equation produced more accurate predictions than the proposed metabolic model; therefore, more work is necessary to find a model that accurately estimates BMR.
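
The Harris-Benedict equation used as the comparison baseline is a fixed linear formula in weight, height, and age. A sketch using the original 1919 coefficients; the thesis does not state which revision of the equation it used, so the exact coefficients here are an assumption.

```python
def harris_benedict_bmr(sex, weight_kg, height_cm, age_years):
    """Original 1919 Harris-Benedict coefficients, in kcal/day."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

print(harris_benedict_bmr("female", 65, 170, 30))  # about 1451 kcal/day
```

Because the formula ignores individual metabolic variation, a model estimated from food-journal data, as in the thesis, could in principle personalise the estimate.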

29. Hennig, Philipp. "Approximate inference in graphical models." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/237251.

Texte intégral
Résumé :
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: They first provide important structural insight into the problem; and then suggest practical approximations to the exact probabilistic solution.
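As a toy illustration of "inference in terms of local computations", a sum-product pass on a three-node chain of binary variables, checked against brute-force enumeration; the potentials are arbitrary numbers:

```python
import numpy as np

psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # pairwise potential on (x1, x2)
psi23 = np.array([[2.0, 1.0], [1.0, 0.5]])   # pairwise potential on (x2, x3)

m1to2 = psi12.sum(axis=0)                    # message x1 -> x2
m3to2 = psi23.sum(axis=1)                    # message x3 -> x2
belief2 = m1to2 * m3to2
belief2 /= belief2.sum()                     # marginal p(x2) from local messages

# Brute-force check over all 8 joint configurations.
joint = psi12[:, :, None] * psi23[None, :, :]
p2 = joint.sum(axis=(0, 2)); p2 /= p2.sum()
print(belief2, p2)                           # identical marginals
```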
Styles APA, Harvard, Vancouver, ISO, etc.
30

Hager, Paul Andrew. « Investigation of connection between deep learning and probabilistic graphical models ». Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119552.

Texte intégral
Résumé :
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 21).
The field of machine learning (ML) has benefitted greatly from its relationship with the field of classical statistics. In support of that continued expansion, the following proposes an alternative perspective on the link between these fields. The link focuses on probabilistic graphical models in the context of reinforcement learning. Viewing certain algorithms as reinforcement learning gives one the ability to map ML concepts to statistics problems. Training a multi-layer nonlinear perceptron algorithm is equivalent to structure learning problems in probabilistic graphical models (PGMs). The technique of boosting weak rules into an ensemble is weighted sampling. Finally, regularizing neural networks using the dropout technique is conditioning on certain observations in PGMs.
by Paul Andrew Hager.
M. Eng.
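A hedged sketch of the dropout-as-conditioning view stated in the abstract: each hidden unit is gated by a Bernoulli variable, and a dropout forward pass is a pass conditioned on an observed assignment of those gates. Shapes and the 0.5 rate are arbitrary choices, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))          # one hidden layer's weights
x = rng.normal(size=3)               # an input

def forward(x, W, gates):
    h = np.maximum(W @ x, 0.0)       # ReLU hidden activations
    return h * gates                 # condition on the given gate values

train_pass = forward(x, W, rng.binomial(1, 0.5, size=4))  # sampled gate "observations"
test_pass = forward(x, W, np.full(4, 0.5))                # expected-gate test pass
print(train_pass, test_pass)
```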
Styles APA, Harvard, Vancouver, ISO, etc.
31

Raman, Natraj. « Action recognition in depth videos using nonparametric probabilistic graphical models ». Thesis, Birkbeck (University of London), 2016. http://bbktheses.da.ulcc.ac.uk/220/.

Texte intégral
Résumé :
Action recognition involves automatically labelling videos that contain human motion with action classes. It has applications in diverse areas such as smart surveillance, human-computer interaction and content retrieval. The recent advent of depth sensing technology that produces depth image sequences has offered opportunities to solve the challenging action recognition problem. The depth images facilitate robust estimation of a human skeleton’s 3D joint positions, and a high-level action can be inferred from a sequence of these joint positions. A natural way to model a sequence of joint positions is to use a graphical model that describes probabilistic dependencies between the observed joint positions and some hidden state variables. A problem with these models is that the number of hidden states must be fixed a priori even though for many applications this number is not known in advance. This thesis proposes nonparametric variants of graphical models with the number of hidden states automatically inferred from data. The inference is performed in a full Bayesian setting by using the Dirichlet Process as a prior over the model’s infinite dimensional parameter space. This thesis describes three original constructions of nonparametric graphical models that are applied in the classification of actions in depth videos. Firstly, the action classes are represented by a Hidden Markov Model (HMM) with an unbounded number of hidden states. The formulation enables information sharing and discriminative learning of parameters. Secondly, a hierarchical HMM with an unbounded number of actions and poses is used to represent activities. The construction produces a simplified model for activity classification by using logistic regression to capture the relationship between action states and activity labels. Finally, the action classes are modelled by a Hidden Conditional Random Field (HCRF) with the number of intermediate hidden states learned from data. Tractable inference procedures based on Markov Chain Monte Carlo (MCMC) techniques are derived for all these constructions. Experiments with multiple benchmark datasets confirm the efficacy of the proposed approaches for action recognition.
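A small sketch of the Dirichlet Process machinery that allows the number of hidden states to grow with the data: a truncated stick-breaking draw of mixing weights over (potentially unbounded) states. The concentration and truncation level are arbitrary; the thesis's MCMC samplers are far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha=2.0, truncation=20):
    """Truncated stick-breaking weights pi_k = beta_k * prod_{j<k}(1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

pi = stick_breaking()
print(pi.round(3), (pi > 0.01).sum(), "states carry visible mass")
```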
Styles APA, Harvard, Vancouver, ISO, etc.
32

Anantharam, Pramod. « Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social Systems ». Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1464417646.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Mirsad, Ćosović. « Distributed State Estimation in Power Systems using Probabilistic Graphical Models ». Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108459&source=NDLTD&language=en.

Texte intégral
Résumé :
We present a detailed study on the application of factor graphs and the belief propagation (BP) algorithm to the power system state estimation (SE) problem. We start from the BP solution for the linear DC model, for which we provide a detailed convergence analysis. Using the BP-based DC model we propose a fast real-time state estimator for the power system SE. The proposed estimator is easy to distribute and parallelize, thus alleviating computational limitations and allowing for processing measurements in real time. The presented algorithm may run as a continuous process, with each new measurement being seamlessly processed by the distributed state estimator. In contrast to the matrix-based SE methods, the BP approach is robust to ill-conditioned scenarios caused by significant differences between measurement variances, thus resulting in a solution that eliminates observability analysis. Using the DC model, we numerically demonstrate the performance of the state estimator in a realistic real-time system model with asynchronous measurements. We note that the extension to the non-linear SE is possible within the same framework. Using insights from the DC model, we use two different approaches to derive the BP algorithm for the non-linear model. The first method directly applies BP methodology, however, providing only an approximate BP solution for the non-linear model. In the second approach, we make a key further step by providing the solution in which the BP is applied sequentially over the non-linear model, akin to what is done by the Gauss-Newton method. The resulting iterative Gauss-Newton belief propagation (GN-BP) algorithm can be interpreted as a distributed Gauss-Newton method with the same accuracy as the centralized SE, however, introducing a number of advantages of the BP framework. The thesis provides an extensive numerical study of the GN-BP algorithm, provides details on its convergence behavior, and gives a number of useful insights for its implementation. Finally, we define the bad data test based on the BP algorithm for the non-linear model. The presented model establishes local criteria to detect and identify bad data measurements. We numerically demonstrate that the BP-based bad data test significantly improves bad data detection over the largest normalized residual test.
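As a toy illustration of the linear DC model above, the weighted-least-squares estimate that a converged BP-based DC estimator also computes, here as a centralized solve on invented numbers; the thesis runs this distributedly on realistic test systems:

```python
import numpy as np

# States are bus voltage angles; measurements are linear in the angles.
H = np.array([[1.0, -1.0, 0.0],    # flow measurement on line 1-2
              [0.0, 1.0, -1.0],    # flow measurement on line 2-3
              [0.0, 0.0, 1.0]])    # slack-referenced angle measurement
z = np.array([0.12, -0.05, 0.02])  # measured values (invented)
W = np.diag(1.0 / np.array([1e-4, 1e-4, 1e-6]))  # inverse variances

# WLS estimate: the fixed point of the BP iterations for this linear model.
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(x_hat)
```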
Styles APA, Harvard, Vancouver, ISO, etc.
34

Karri, Senanayak Sesh Kumar. « On the Links between Probabilistic Graphical Models and Submodular Optimisation ». Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE047/document.

Texte intégral
Résumé :
The entropy of a probability distribution on a set of discrete random variables is always bounded by the entropy of its factorisable counterpart. This is due to the submodularity of entropy on the set of discrete random variables. Submodular functions are also a generalisation of matroid rank functions; therefore, linear functions may be optimised exactly on the associated polytopes using a greedy algorithm. In this manuscript, we exploit these links between the structures of graphical models and submodular functions: we use greedy algorithms to optimise linear functions on the polytopes related to graphic and hypergraphic matroids for learning the structures of graphical models, while we use inference algorithms on graphs to optimise submodular functions. The first main contribution of the thesis aims at approximating a probabilistic distribution with a factorisable tractable distribution under the maximum likelihood framework. Since the tractability of exact inference is exponential in the treewidth of the decomposable graph, our goal is to learn bounded treewidth decomposable graphs, which is known to be NP-hard. We pose this as a combinatorial optimisation problem and provide convex relaxations based on graphic and hypergraphic matroids. This leads to an approximate solution with good empirical performance. In the second main contribution, we use the fact that the entropy of a probability distribution is always bounded by the entropy of its factorisable counterpart, mainly as a consequence of submodularity. This property of entropy is generalised to all submodular functions, and bounds based on graphical models are proposed; we refer to them as graph-based bounds. An algorithm is developed to maximise submodular functions, which is NP-hard, by maximising the graph-based bound using variational inference algorithms on graphs. As a third contribution, we propose and analyse algorithms aiming at minimising submodular functions that can be written as sums of simpler functions. Our algorithms only make use of submodular function minimisation and total variation oracles of the simple functions.
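As an illustration of the greedy-algorithm-on-graphic-matroid theme, a Chow-Liu-style tree learner: Kruskal's greedy step on mutual-information edge weights returns the best spanning tree, the tractable special case of the bounded-treewidth problem the thesis relaxes. Data and variables are synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))
X[:, 1] = X[:, 0] ^ (rng.random(500) < 0.1)   # x1 strongly depends on x0
X[:, 3] = X[:, 2] ^ (rng.random(500) < 0.1)   # x3 strongly depends on x2

def mutual_info(a, b):
    joint = np.histogram2d(a, b, bins=2)[0] / len(a)
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

edges = sorted(((mutual_info(X[:, i], X[:, j]), i, j)
                for i, j in combinations(range(4), 2)), reverse=True)
parent = list(range(4))
def find(i):  # union-find root, for Kruskal's cycle check
    while parent[i] != i: i = parent[i]
    return i
tree = []
for w, i, j in edges:                # greedy: take heaviest non-cycle edge
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j, round(w, 3)))
print(tree)  # expected to contain (0, 1) and (2, 3) among its three edges
```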
Styles APA, Harvard, Vancouver, ISO, etc.
35

CARLI, FEDERICO. « Stratified Staged Trees : Modelling, Software and Applications ». Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1057653.

Texte intégral
Résumé :
The thesis is focused on Probabilistic Graphical Models (PGMs), which are a rich framework for encoding probability distributions over complex domains. In particular, joint multivariate distributions over large numbers of random variables that interact with each other can be investigated through PGMs, and conditional independence statements can be succinctly represented graphically. These representations sit at the intersection of statistics and computer science, relying on concepts mainly from probability theory, graph algorithms and machine learning. They are applied in a wide variety of fields, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many more. Over the years, theory and methodology have developed and been extended in a multitude of directions. In particular, this thesis studies different aspects of new classes of PGMs called Staged Trees and Chain Event Graphs (CEGs). In some sense, Staged Trees are a generalization of Bayesian Networks (BNs). Indeed, BNs provide a transparent graphical tool to define a complex process in terms of conditional independence structures. Despite their strengths in reducing the dimensionality of the joint probability distributions of the statistical model and in providing a transparent framework for causal inference, BNs are not optimal graphical models in all situations. The biggest problems with their usage mainly occur when the event space is not a simple product of the sample spaces of the random variables of interest, and when conditional independence statements hold only under certain values of the variables, that is, when there are context-specific conditional independence structures. Some extensions of the BN framework have been proposed to handle these issues: context-specific BNs, Bayesian Multinets, and Similarity Networks (Geiger and Heckerman, 1996). These adopt a hypothesis variable to encode the context-specific statements over a particular set of random variables; for each value taken by the hypothesis variable, the graphical modeller constructs a particular BN model called a local network, and the collection of these local networks constitutes a Bayesian Multinet. It has been shown that Chain Event Graph (CEG) models encompass all discrete BN models and the discrete variants described above as special subclasses, and that they are also richer than Probabilistic Decision Graphs, whose semantics is somewhat distinct. Unlike most of its competitors, a CEG can capture all (also context-specific) conditional independences in a unique graph, obtained by a coalescence over the vertices of an appropriately constructed probability tree, called a Staged Tree. CEGs have been developed for categorical variables and have been used for cohort studies, causal analysis and case-control studies. The user’s toolbox to efficiently and effectively perform uncertainty reasoning with CEGs further includes methods for inference and probability propagation, the exploration of equivalence classes and robustness studies. The main contributions of this thesis to the literature on Staged Trees concern Stratified Staged Trees, with a keen eye on applications; a few observations on non-Stratified Staged Trees are made in the last part of the thesis. A core output of the thesis is an R software package which efficiently implements a host of functions for learning and estimating Staged Trees from data, relying on likelihood principles.
Structural learning algorithms are also developed, based either on distances or divergences between pairs of categorical probability distributions or on the clustering of probability distributions into a fixed number of stages for each stratum of the tree. In addition, a new class of Directed Acyclic Graphs is introduced, named Asymmetric-labeled DAGs (ALDAGs), which give a BN representation of a given Staged Tree. The ALDAG is a minimal DAG such that the statistical model embedded in the Staged Tree is contained in the one associated with the ALDAG. This is possible thanks to the use of colored edges, where each color indicates a different type of conditional dependence: total, context-specific, partial or local. Staged Trees are also adopted in this thesis as a statistical tool for classification. Staged Tree Classifiers are introduced, which exhibit predictive accuracy comparable to state-of-the-art machine learning algorithms such as neural networks and random forests. Finally, algorithms to obtain an ordering of variables for the construction of the Staged Tree are designed.
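A toy, invented example of the staging idea: two situations in a probability tree over (x1, x2, y) share a stage, encoding a context-specific independence (y independent of x2 given x1 = 0) that a plain BN cannot display graphically:

```python
# Situation (x1, x2) -> stage label; sharing a stage means sharing the
# conditional distribution of the next variable.
stage_of = {
    (0, 0): "s1", (0, 1): "s1",   # same stage: y is independent of x2 when x1 = 0
    (1, 0): "s2", (1, 1): "s3",   # distinct stages when x1 = 1
}
p_y_given_stage = {"s1": 0.10, "s2": 0.40, "s3": 0.85}  # P(y = 1 | stage)

def p_y(x1, x2):
    return p_y_given_stage[stage_of[(x1, x2)]]

print([p_y(a, b) for a in (0, 1) for b in (0, 1)])
```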
Styles APA, Harvard, Vancouver, ISO, etc.
36

Ayadi, Inès. « Optimisation des politiques de maintenance préventive dans un cadre de modélisation par modèles graphiques probabilistes ». Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1072/document.

Texte intégral
Résumé :
Nowadays, the equipment used in industrial environments is increasingly complex and requires increased maintenance to guarantee an optimal level of service in terms of reliability and availability. Moreover, this guarantee of optimality often comes at a very high cost, which is constraining. Faced with these requirements, managing equipment maintenance has become a major challenge: finding a maintenance policy that achieves an acceptable compromise between availability and the costs associated with maintaining the system. This thesis also starts from the observation that, in several industrial applications, the need for maintenance strategies ensuring both optimal safety and maximum profitability keeps growing, leading practitioners to rely not only on expert experience but also on numerical results obtained by solving optimization problems. Solving this problem first requires modelling the evolution of the states of the components that make up the system, i.e., knowing the degradation mechanisms of the components. Given such a model, a maintenance strategy is applied to the system. Nevertheless, devising a strategy that achieves a compromise between all these requirements remains a major scientific and technical challenge. In this context, maintenance optimization is needed to reach the prescribed objectives at optimal cost. In real industrial applications, optimization problems are often high-dimensional and involve many parameters. Metaheuristics are therefore an attractive approach insofar as, on the one hand, they sacrifice completeness of the search in favour of efficiency and computation time and, on the other hand, they apply to a very wide range of problems. With the objective of proposing an approach for solving a preventive maintenance optimization problem, this thesis provides a methodology for optimizing systematic preventive maintenance policies, applied in the railway domain to the prevention of rail breaks. The methodology is organized around three main steps: 1. modelling the evolution of the states of the components that make up the system, i.e., knowing the degradation mechanisms of the components, and formalizing the maintenance operations; 2. formalizing a maintenance-policy evaluation model that takes into account both the dependability of the system and the economic factor resulting from maintenance management procedures (repair, diagnosis and unavailability costs); 3. optimizing the configuration parameters of systematic preventive maintenance policies in order to optimize one or more criteria, defined on the basis of the maintenance-policy evaluation model proposed in the previous step.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Liu, Ying Ph D. Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science. « Probabilistic graphical models : distributed inference and learning models with small feedback vertex sets ». Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89994.

Texte intégral
Résumé :
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 167-173).
In undirected graphical models, each node represents a random variable while the set of edges specifies the conditional independencies of the underlying distribution. When the random variables are jointly Gaussian, the models are called Gaussian graphical models (GGMs) or Gauss-Markov random fields. In this thesis, we address several important problems in the study of GGMs. The first problem is to perform inference or sampling when the graph structure and model parameters are given. For inference in graphs with cycles, loopy belief propagation (LBP) is a purely distributed algorithm, but it gives inaccurate variance estimates in general and often diverges or has slow convergence. Previously, the hybrid feedback message passing (FMP) algorithm was developed to enhance convergence and accuracy: a special protocol is used among the nodes in a pseudo-FVS (an FVS, or feedback vertex set, is a set of nodes whose removal breaks all cycles) while standard LBP is run on the subgraph excluding the pseudo-FVS. In this thesis, we develop recursive FMP, a purely distributed extension of FMP where all nodes use the same integrated message-passing protocol. In addition, we introduce the subgraph perturbation sampling algorithm, which makes use of any pre-existing tractable inference algorithm for a subgraph by perturbing this algorithm so as to yield asymptotically exact samples for the intended distribution. We study the stationary version where a single fixed subgraph is used in all iterations, as well as the non-stationary version where tractable subgraphs are adaptively selected. The second problem is to perform model learning, i.e. to recover the underlying structure and model parameters from observations when the model is unknown. Families of graphical models that have both large modeling capacity and efficient inference algorithms are extremely useful. With the development of new inference algorithms for many new applications, it is important to study the families of models that are most suitable for these inference algorithms while having strong expressive power in the new applications. In particular, we study the family of GGMs with small FVSs and propose structure learning algorithms for two cases: 1) all nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree; 2) the FVS nodes are latent variables, where structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. We perform experiments using synthetic data as well as real data of flight delays to demonstrate the modeling capacity with FVSs of various sizes.
by Ying Liu.
Ph. D.
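A small numerical illustration of why an FVS helps in the Gaussian case: eliminating the single cycle-breaking node of a 4-cycle reduces exact mean computation to solves against a tree-structured precision block (which the thesis's algorithms perform by exact BP). Numbers are arbitrary but make the precision matrix positive definite:

```python
import numpy as np

J = np.array([[2.0, 0.4, 0.0, 0.4],
              [0.4, 2.0, 0.4, 0.0],
              [0.0, 0.4, 2.0, 0.4],
              [0.4, 0.0, 0.4, 2.0]])   # precision of the cycle 0-1-2-3-0
h = np.array([1.0, 0.0, 0.0, 0.0])     # potential vector

F, T = [0], [1, 2, 3]                  # FVS node and the remaining chain
Jff, Jft, Jtt = J[np.ix_(F, F)], J[np.ix_(F, T)], J[np.ix_(T, T)]
hf, ht = h[F], h[T]

# Schur complement over the FVS; each solve against Jtt is tree-structured.
S = Jff - Jft @ np.linalg.solve(Jtt, Jft.T)
mu_f = np.linalg.solve(S, hf - Jft @ np.linalg.solve(Jtt, ht))
mu_t = np.linalg.solve(Jtt, ht - Jft.T @ mu_f)
print(mu_f, mu_t, np.linalg.solve(J, h))  # matches the direct dense solve
```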
Styles APA, Harvard, Vancouver, ISO, etc.
38

Hamze, Firas. « Monte Carlo integration in discrete undirected probabilistic models ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/744.

Texte intégral
Résumé :
This thesis contains the author’s work in and contributions to the field of Monte Carlo sampling for undirected graphical models, a class of statistical models commonly used in machine learning, computer vision, and spatial statistics; the aim is to be able to use the methodology and resultant samples to estimate integrals of functions of the variables in the model. Over the course of the study, three different but related methods were proposed and have appeared as research papers. The thesis consists of an introductory chapter discussing the models considered, the problems involved, and a general outline of Monte Carlo methods. The three subsequent chapters contain versions of the published work. The second chapter, which has appeared in (Hamze and de Freitas 2004), is a presentation of new MCMC algorithms for computing the posterior distributions and expectations of the unknown variables in undirected graphical models with regular structure. For demonstration purposes, we focus on Markov Random Fields (MRFs). By partitioning the MRFs into non-overlapping trees, it is possible to compute the posterior distribution of a particular tree exactly by conditioning on the remaining tree. These exact solutions allow us to construct efficient blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree sampling is considerably more efficient than other partitioned sampling schemes and the naive Gibbs sampler, even in cases where loopy belief propagation fails to converge. We prove that tree sampling exhibits lower variance than the naive Gibbs sampler and other naive partitioning schemes using the theoretical measure of maximal correlation. We also construct new information-theoretic tools for comparing different MCMC schemes and show that, under these, tree sampling is more efficient. Although the work discussed in Chapter 2 exhibited promise on the class of graphs to which it was suited, there are many cases where limiting the topology is quite a handicap. The work in Chapter 3 was an exploration of an alternative methodology for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm, published in (Hamze and de Freitas 2005), fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions which get closer to the desired one. While the idea of using “tempered” proposals is known, we construct a novel sequence of target distributions where, rather than varying a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs. The final contribution of this thesis, presented in Chapter 4 and also in (Hamze and de Freitas 2007), emerged from some empirical observations that were made while trying to optimize the sequence of edges to add to a graph so as to guide the population of samples to the high-probability regions of the model.
Most important among these observations was that while several heuristic approaches, discussed in Chapter 1, certainly yielded improvements over edge sequences consisting of random choices, strategies based on forcing the particles to take large, biased random walks in the state-space resulted in a more efficient exploration, particularly at low temperatures. This motivated a new Monte Carlo approach to treating complex discrete distributions. The algorithm is motivated by the N-Fold Way, which is an ingenious event-driven MCMC sampler that avoids rejection moves at any specific state. The N-Fold Way can however get “trapped” in cycles. We surmount this problem by modifying the sampling process to result in biased state-space paths of randomly chosen length. This alteration does introduce bias, but the bias is subsequently corrected with a carefully engineered importance sampler.
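A sketch of the exact block move underlying tree sampling, reduced to the simplest tree, a chain: forward filtering / backward sampling draws a joint sample of all chain variables at once. Couplings and fields are arbitrary; the thesis applies such exact steps to whole trees carved out of grid MRFs, conditioned on the rest of the field:

```python
import numpy as np

rng = np.random.default_rng(0)
n, coupling = 6, 0.8
unary = rng.normal(0, 0.5, size=(n, 2))          # local log-potentials
pair = np.array([[coupling, -coupling],
                 [-coupling, coupling]])          # log psi(x_i, x_{i+1})

# Forward pass: alpha[i, s] proportional to p(x_i = s, chain up to i).
alpha = np.zeros((n, 2))
alpha[0] = np.exp(unary[0])
for i in range(1, n):
    alpha[i] = np.exp(unary[i]) * (alpha[i - 1] @ np.exp(pair))
    alpha[i] /= alpha[i].sum()                    # normalise for stability

# Backward pass: sample x_{n-1}, then each x_i given x_{i+1}.
x = np.zeros(n, dtype=int)
x[-1] = rng.choice(2, p=alpha[-1] / alpha[-1].sum())
for i in range(n - 2, -1, -1):
    w = alpha[i] * np.exp(pair[:, x[i + 1]])
    x[i] = rng.choice(2, p=w / w.sum())
print(x)                                          # one exact joint sample
```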
Styles APA, Harvard, Vancouver, ISO, etc.
39

Paiva, mendes Ellon. « Study on the Use of Vision and Laser Range Sensors with Graphical Models for the SLAM Problem ». Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0016/document.

Texte intégral
Résumé :
A strong requirement to deploy autonomous mobile robots is their capacity to localize themselves with a certain precision in relation to their environment. Localization exploits data gathered by sensors that either observe the inner states of the robot, like acceleration and speed, or the environment, like cameras and Light Detection And Ranging (LIDAR) sensors. The use of environment sensors has triggered the development of localization solutions that jointly estimate the robot position and the position of elements in the environment, referred to as Simultaneous Localization and Mapping (SLAM) approaches. To handle the noise inherent in the data coming from the sensors, SLAM solutions are implemented in a probabilistic framework. First developments were based on Extended Kalman Filters, while more recent developments use probabilistic graphical models to model the estimation problem and solve it through optimization. This thesis exploits the latter approach to develop two distinct techniques for autonomous ground vehicles: one using monocular vision, the other using LIDAR. The lack of depth information in camera images has fostered the use of specific landmark parametrizations that isolate the unknown depth in one variable, concentrating its large uncertainty into a single parameter. One of these parametrizations, named Parallax Angle Parametrization, was originally introduced in the context of the Bundle Adjustment problem, which processes all the gathered data in a single global optimization step. We present how to exploit this parametrization in an incremental graph-based SLAM approach in which robot motion measures are also incorporated. LIDAR sensors can be used to build odometry-like solutions for localization by sequentially registering the point clouds acquired along a robot trajectory. We define a graphical model layer on top of a LIDAR odometry layer, which uses the Iterative Closest Points (ICP) algorithm as its registration technique. Reference frames are defined along the robot trajectory, and ICP results are used to build a pose graph, used to solve an optimization problem that enables the correction of the robot trajectory and the environment map upon loop closures. After an introduction to the theory of graphical models applied to the SLAM problem, the manuscript presents these two approaches. Simulated and experimental results illustrate the developments throughout the manuscript, using classic and in-house datasets.
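A deliberately minimal pose-graph example in the spirit of the LIDAR layer described above: 1-D poses, sequential relative-pose constraints standing in for ICP results, and one loop closure, solved as linear least squares. Real pose graphs live in SE(2)/SE(3) and require nonlinear optimisation; all measurements here are invented:

```python
import numpy as np

# Constraints: (i, j, measured x_j - x_i)
constraints = [(0, 1, 1.05), (1, 2, 0.98), (2, 3, 1.02),
               (3, 0, -2.90)]                # loop closure back to the start
n = 4
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for k, (i, j, d) in enumerate(constraints):
    A[k, j], A[k, i], b[k] = 1.0, -1.0, d
A[-1, 0], b[-1] = 1.0, 0.0                   # gauge: anchor the first pose

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # odometry drift is spread along the loop by the closure
```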
Styles APA, Harvard, Vancouver, ISO, etc.
40

Caetano, Tiberio Silva. « Graphical models and point set matching ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/4041.

Texte intégral
Résumé :
Point pattern matching in Euclidean Spaces is one of the fundamental problems in Pattern Recognition, having applications ranging from Computer Vision to Computational Chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean Spaces of arbitrary dimension. In the case of exact matching, the method is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean Spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean Space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust as the sizes of the involved patterns grow. In the absence of noise, the results are always perfect.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Tanaka, Yusuke. « Probabilistic Models for Spatially Aggregated Data ». Kyoto University, 2020. http://hdl.handle.net/2433/253422.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Trajkovska, Vera [Verfasser], et Christoph [Akademischer Betreuer] Schnörr. « Learning Probabilistic Graphical Models for Image Segmentation / Vera Trajkovska ; Betreuer : Christoph Schnörr ». Heidelberg : Universitätsbibliothek Heidelberg, 2017. http://d-nb.info/1177689987/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Rathke, Fabian [Verfasser], et Christoph [Akademischer Betreuer] Schnörr. « Probabilistic Graphical Models for Medical Image Segmentation / Fabian Rathke ; Betreuer : Christoph Schnörr ». Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180395042/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Quer, Giorgio. « Optimization of Cognitive Wireless Networks using Compressive Sensing and Probabilistic Graphical Models ». Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421992.

Texte intégral
Résumé :
In-network data aggregation to increase the efficiency of data-gathering solutions for Wireless Sensor Networks (WSNs) is a challenging task. In the first part of this thesis, we address the problem of accurately reconstructing distributed signals through the collection of a small number of samples at a Data Collection Point (DCP). We exploit Principal Component Analysis (PCA) to learn the relevant statistical characteristics of the signals of interest at the DCP. At the DCP we then use this knowledge to design the matrix required by the recovery techniques, which exploit convex optimization (Compressive Sensing, CS) to recover the whole signal sensed by the WSN from the small number of samples gathered. In order to integrate this monitoring model into a compression/recovery framework, we apply the logic of the cognition paradigm: we first observe the network, then learn the relevant statistics of the signals, apply this knowledge to recover the signal and to make decisions, and finally enact these decisions through the control loop. This compression/recovery framework with a feedback control loop is named "Sensing, Compression and Recovery through ONline Estimation" (SCoRe1). The whole framework is designed for a WSN architecture, called WSN-control, that is accessible from the Internet. We also analyze the whole framework with a Bayesian approach to theoretically justify the choices made in our protocol design. The second part of the thesis deals with the application of the cognition paradigm to the optimization of a Wireless Local Area Network (WLAN). In this work, we propose an architecture for cognitive networking that can be integrated with the existing layered protocol stack. Specifically, we suggest the use of a probabilistic graphical model for modeling the layered protocol stack. In particular, we use a Bayesian Network (BN), a graphical representation of statistical relationships between random variables, to describe the relationships among a set of stack-wide protocol parameters and to exploit this cross-layer approach to optimize the network. In doing so, we use the knowledge learned from the observation of the data to predict the TCP throughput in a single-hop wireless network and to infer the future occurrence of congestion at the TCP layer in a multi-hop wireless network. The approach followed in the two main topics of this thesis consists of the following phases: (i) we apply the cognition paradigm to learn the specific probabilistic characteristics of the network, (ii) we exploit the knowledge acquired in the first phase to design novel protocol techniques, (iii) we analyze these techniques theoretically and through extensive simulation, comparing them with other state-of-the-art techniques, and (iv) we evaluate their performance in real networking scenarios.
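A sketch of the compression/recovery step under stated assumptions (a PCA-like orthonormal basis known at the DCP, random node subsampling); the ISTA solver below is a stand-in for the convex CS routine, not the SCoRe1 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 30
U = np.linalg.qr(rng.normal(size=(n, n)))[0]   # stand-in PCA basis
s_true = np.zeros(n); s_true[:k] = rng.normal(3, 1, k)  # few active components
f = U @ s_true                                  # field sensed by the WSN
picked = rng.choice(n, m, replace=False)        # the m nodes that report
A, y = U[picked], f[picked]                     # y = A s

def ista(A, y, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding for min ||As - y||^2/2 + lam*||s||_1."""
    s = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    for _ in range(iters):
        g = s - step * A.T @ (A @ s - y)        # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return s

f_hat = U @ ista(A, y)
print(np.linalg.norm(f - f_hat) / np.linalg.norm(f))  # small relative error
```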
Styles APA, Harvard, Vancouver, ISO, etc.
45

Wei, Wei. « Probabilistic Models of Topics and Social Events ». Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/941.

Texte intégral
Résumé :
Structured probabilistic inference has been shown to be useful in modeling complex latent structures of data. One successful way in which this technique has been applied is in the discovery of latent topical structures of text data, which is usually referred to as topic modeling. With the recent popularity of mobile devices and social networking, we can now easily acquire text data attached to meta information, such as geo-spatial coordinates and time stamps. This metadata can provide rich and accurate information that is helpful in answering many research questions related to spatial and temporal reasoning. However, such data must be treated differently from text data. For example, spatial data is usually organized in terms of a two-dimensional region while temporal information can exhibit periodicities. While some work exists in the topic modeling community that utilizes some of this meta information, those models largely focus on incorporating metadata into text analysis, rather than providing models that make full use of the joint distribution of meta information and text. In this thesis, I propose the event detection problem, which is a multidimensional latent clustering problem on spatial, temporal and topical data. I start with a simple parametric model to discover independent events using geo-tagged Twitter data. The model is then improved in two directions. First, I augment the model using the Recurrent Chinese Restaurant Process (RCRP) to discover events that are dynamic in nature. Second, I study a model that can detect events using data from multiple media sources, examining the characteristics of different media in terms of reported event times and linguistic patterns. The approaches studied in this thesis are largely based on Bayesian nonparametric methods to deal with streaming data and an unpredictable number of clusters. The research will not only serve the event detection problem itself but also shed light on a more general structured clustering problem in spatial, temporal and textual data.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Ekdahl, Magnus. « On approximations and computations in probabilistic classification and in learning of graphical models / ». Linköping : Department of Mathematics, Linköpings universitet, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1141s.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Hu, Xu. « Towards efficient learning of graphical models and neural networks with variational techniques ». Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1037.

Texte intégral
Résumé :
In this thesis, I will mainly focus on variational inference and probabilistic models. In particular, I will cover several projects I worked on during my PhD aimed at improving the efficiency of AI/ML systems with variational techniques. The thesis consists of two parts. In the first part, the computational efficiency of probabilistic graphical models is studied. In the second part, several problems of learning deep neural networks are investigated, related to either energy efficiency or sample efficiency.
Styles APA, Harvard, Vancouver, ISO, etc.
48

GATTI, ELENA. « Graphical models for continuous time inference and decision making ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19575.

Texte intégral
Résumé :
Reasoning about the evolution of systems in time is both an important and challenging task. We are interested in probability distributions over the times of events, where observations are often irregularly spaced in time. Probabilistic models have been widely used to accomplish this task, but they have some limits. Indeed, Hidden Markov Models and Dynamic Bayesian Networks in general require the specification of a time granularity between consecutive observations. This requirement leads to computationally inefficient learning and inference procedures when the adopted time granularity is finer than the time spent between consecutive observations, and to possible losses of information in the opposite case. The framework of Continuous Time Bayesian Networks (CTBNs) overcomes this limit, allowing the representation of temporal dynamics over a structured state space. In this dissertation, an overview of the semantic and inference aspects of the CTBN framework is proposed. The limits of exact inference are overcome using approximate inference; in particular, the cluster-graph message-passing algorithm and Gibbs sampling have been investigated. The CTBN has been applied to a real case study of diagnosis of cardiogenic heart failure, developed in collaboration with domain experts. Moving from the task of simply reasoning under uncertainty to the task of deciding how to act in the world, a part of the dissertation is devoted to graphical models that allow the inclusion of decisions. We describe Influence Diagrams, which extend Bayesian Networks by introducing decisions and utilities. We then discuss an approach for approximate representation of optimal strategies in influence diagrams. The contributions of the dissertation are the following: the design and development of a CTBN software package implementing two of the most important inference algorithms (Expectation Propagation and Gibbs Sampling); the development of a realistic diagnosis scenario for cardiogenic heart failure (to the best of our knowledge, the first clinical application of this type); and the approach of information enhancement to reduce the domain of the policy in large influence diagrams, together with an important contribution concerning the identification of informational links to add to the graph.
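A minimal illustration of the continuous-time semantics CTBNs build on: a single variable with an intensity matrix dwells in a state for an exponential time, then jumps. A full CTBN factorises such intensities over a network of variables; the rates here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[-0.5, 0.3, 0.2],
              [0.1, -0.4, 0.3],
              [0.2, 0.2, -0.4]])   # intensity matrix: rows sum to zero

def sample_trajectory(Q, x0=0, horizon=10.0):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = -Q[x, x]
        t += rng.exponential(1.0 / rate)       # exponential dwell time
        if t >= horizon:
            return path
        probs = Q[x].clip(min=0.0) / rate       # jump distribution
        x = rng.choice(len(Q), p=probs)
        path.append((round(t, 3), x))

print(sample_trajectory(Q))
```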
Styles APA, Harvard, Vancouver, ISO, etc.
49

Schiegg, Martin [Verfasser], et Fred A. [Akademischer Betreuer] Hamprecht. « Multi-Target Tracking with Probabilistic Graphical Models / Martin Josef Schiegg ; Betreuer : Fred A. Hamprecht ». Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180396758/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Klukowski, Piotr. « Nuclear magnetic resonance spectroscopy interpretation for protein modeling using computer vision and probabilistic graphical models ». Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4720.

Texte intégral
Résumé :
The dynamic development of nuclear magnetic resonance (NMR) spectroscopy has allowed fast acquisition of experimental data that determine the structure and dynamics of macromolecules. Nevertheless, due to a lack of appropriate computational methods, NMR spectra are still analyzed manually by researchers, which takes weeks or years depending on protein complexity. Automation of this process is therefore highly desirable and can significantly reduce the time needed to solve a protein structure. In the presented work, a new approach to automated three-dimensional protein NMR spectra analysis is presented. It is based on the Histogram of Oriented Gradients and a Bayesian Network, a combination that had not previously been applied in this context. The proposed method was evaluated using benchmark data established by manual labeling of 99 spectroscopic images taken from 6 different NMR experiments. Further validation was then performed using spectra of the upstream-of-N-ras protein, whose three-dimensional structure was calculated with the proposed method. Comparison with the reference structure from the Protein Data Bank reveals no significant differences, demonstrating that the proposed method can be used in practice in NMR laboratories.
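A numpy-only sketch of the HOG core used above, reduced to one cell: a magnitude-weighted histogram of gradient orientations over a synthetic 2-D peak. The thesis's detector adds cells, blocks and normalisation, and its parameters are not reproduced here:

```python
import numpy as np

def orientation_histogram(cell, n_bins=9):
    gy, gx = np.gradient(cell.astype(float))       # image gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)

xx, yy = np.meshgrid(np.arange(16), np.arange(16))
peak = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 8.0)  # synthetic 2-D peak
print(orientation_histogram(peak).round(3))
```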
Styles APA, Harvard, Vancouver, ISO, etc.
