To see the other types of publications on this topic, follow the link: Probabilistik.

Dissertations / Theses on the topic 'Probabilistik'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Probabilistik.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Proske, Dirk. "2. Dresdner Probabilistik-Symposium – Sicherheit und Risiko im Bauwesen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1218786674781-31766.

Full text
Abstract:
Pleasingly, the Dresdner Probabilistik-Symposium is taking place for the second time. Perhaps the visitors, speakers and organisers will succeed in building a tradition with it. The topic of the conference is undoubtedly a very specialised one. Which civil engineer has time in their everyday work to think about the safety of their structures? As a rule, the civil engineer follows the accepted rules of engineering practice and can assume that the structure will then provide the required safety. But precisely in demanding tasks, in cases where the civil engineer is forced to develop models of their own, the safety must be verified. Increasingly, civil engineers try to set themselves apart in the market by mastering such tasks. As organisers we therefore also see, in the long term, a growing need for teaching the fundamentals of safety theories in civil engineering.... (from the preface)
2

Proske, Dirk. "1. Dresdner Probabilistik-Symposium – Sicherheit und Risiko im Bauwesen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1218813448200-90769.

Full text
Abstract:
.... Knowledge of the building materials alone is not sufficient to erect safe structures. Knowledge of the behaviour of building structures under actions is also necessary. The Chair of Structural Analysis (Lehrstuhl für Statik), which is mainly devoted to this topic, may rightly be regarded as a pioneer within the Faculty of Civil Engineering in the investigation of safety questions for structures.... (from the preface)
3

Proske, Dirk, Milad Mehdianpour, and Lucjan Gucma. "4th International Probabilistic Workshop: 12th-13th October 2006, Berlin, BAM (Federal Institute for Materials Research and Testing)." Universität für Bodenkultur Wien, 2009. https://slub.qucosa.de/id/qucosa%3A284.

Full text
Abstract:
Die heutige Welt der Menschen wird durch große Dynamik geprägt. Eine Vielzahl verschiedener Prozesse entfaltet sich parallel und teilweise auf unsichtbare Weise miteinander verbunden. Nimmt man z.B. den Prozess der Globalisierung: Hier erleben wir ein exponentielles Wachstum der internationalen Verknüpfungen von der Ebene einzelner Menschen und bis zur Ebene der Kulturen. Solche Verknüpfungen führen uns zum Begriff der Komplexität. Diese wird oft als Produkt der Anzahl der Elemente eines Systems mal Umfang der Verknüpfungen im System verstanden. In anderen Worten, die Welt wird zunehmend komplexer, denn die Verknüpfungen nehmen zu. Komplexität wiederum ist ein Begriff für etwas unverstandenes, unkontrollierbares, etwas unbestimmtes. Genau wie bei einem Menschen: Aus einer Zelle wächst ein Mensch, dessen Verhalten wir im Detail nur schwer vorhersagen können. Immerhin besitzt sein Gehirn 1011 Elemente (Zellen). Wenn also diese dynamischen sozialen Prozesse zu höherer Komplexität führen, müssen wir auch mehr Unbestimmtheit erwarten. Es bleibt zu Hoffen, dass die Unbestimmtheit nicht existenzielle Grundlagen betrifft. Was die Komplexität der Technik angeht, so versucht man hier im Gegensatz zu den gesellschaftlichen Unsicherheiten die Unsicherheiten zu erfassen und gezielt mit ihnen umzugehen. Das gilt für alle Bereiche, ob nun Naturgefahrenmanagement, beim Bau und Betrieb von Kernkraftwerken, im Bauwesen oder in der Schifffahrt. Und so verschieden diese Fachgebiete auch scheinen mögen, die an diesem Symposium teilnehmen: Sie haben erkannt, das verantwortungsvoller Umgang mit Technik einer Berücksichtigung der Unbestimmtheit bedarf. Soweit sind wir in gesellschaftlichen Prozessen noch nicht. Wünschenswert wäre, dass in einigen Jahren nicht nur Bauingenieure, Maschinenbauer, Mathematiker oder Schiffsbauer an einem solchen Probabilistik- Symposium teilnehmen, sondern auch Soziologen, Politiker oder Manager... (aus dem Vorwort) --- HINWEIS: Das Volltextdokument besteht aus einzelnen Beiträgen mit separater Seitenzählung.
PREFACE: The world today is shaped by great dynamics. A multitude of processes evolves in parallel, partly connected in invisible ways. Globalisation is one such process: here one can observe the exponential growth of connections, from the level of single humans up to the level of cultures. Such connections lead us to the term complexity. Complexity is often understood as the product of the number of elements of a system and the amount of connections in the system. In other words, the world becomes more complex as the connections increase. Complexity itself is a term for something not fully understood, partly uncontrollable and indeterminate: exactly as with humans. Growing from a single cell, a human will later show behaviour that we cannot predict in detail; after all, the human brain consists of 10^11 elements (cells). If dynamic social processes yield more complexity, we have to expect more indetermination. One can only hope that such indetermination does not affect the basis of human existence. If we look at the field of technology, we see that here, in contrast to societal uncertainties, indetermination and uncertainty are dealt with explicitly. This holds for natural risk management, for nuclear engineering, for civil engineering and for the design of ships. And however different the fields contributing to this symposium may seem, one thing is true for all of them: people working in these fields have realised that a responsible use of technology requires the consideration of indetermination and uncertainty. This level has not yet been reached in the social sciences. It is the wish of the organisers of this symposium that not only civil engineers, mechanical engineers, mathematicians and ship builders take part, but also sociologists, managers and even politicians. There is therefore still great room for this symposium to grow. Indetermination does not have to be negative: it can also be seen as a chance.
4

Steinert, Rebecca. "Probabilistic Fault Management in Networked Systems." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144608.

Full text
Abstract:
Technical advances in network communication systems (e.g. radio access networks) combined with evolving concepts based on virtualization (e.g. clouds), require new management algorithms in order to handle the increasing complexity in the network behavior and variability in the network environment. Current network management operations are primarily centralized and deterministic, and are carried out via automated scripts and manual interventions, which work for mid-sized and fairly static networks. The next generation of communication networks and systems will be of significantly larger size and complexity, and will require scalable and autonomous management algorithms in order to meet operational requirements on reliability, failure resilience, and resource-efficiency. A promising approach to address these challenges includes the development of probabilistic management algorithms, following three main design goals. The first goal relates to all aspects of scalability, ranging from efficient usage of network resources to computational efficiency. The second goal relates to adaptability in maintaining the models up-to-date for the purpose of accurately reflecting the network state. The third goal relates to reliability in the algorithm performance in the sense of improved performance predictability and simplified algorithm control. This thesis is about probabilistic approaches to fault management that follow the concepts of probabilistic network management (PNM). An overview of existing network management algorithms and methods in relation to PNM is provided. The concepts of PNM and the implications of employing PNM-algorithms are presented and discussed. Moreover, some of the practical differences of using a probabilistic fault detection algorithm compared to a deterministic method are investigated. Further, six probabilistic fault management algorithms that implement different aspects of PNM are presented. The algorithms are highly decentralized, adaptive and autonomous, and cover several problem areas, such as probabilistic fault detection and controllable detection performance; distributed and decentralized change detection in modeled link metrics; root-cause analysis in virtual overlays; event-correlation and pattern mining in data logs; and, probabilistic failure diagnosis. The probabilistic models (for a large part based on Bayesian parameter estimation) are memory-efficient and can be used and re-used for multiple purposes, such as performance monitoring, detection, and self-adjustment of the algorithm behavior.
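To make the notion of probabilistic, adaptive fault detection more concrete, here is a minimal Python sketch (an illustration only, not the algorithms developed in the thesis): a Beta posterior over a link's loss rate is updated after each probe, and an alarm is raised only once the posterior probability that the loss rate exceeds a tolerance becomes high. All rates and thresholds below are assumed values.

```python
"""Sketch of Bayesian fault detection on probe losses (illustrative assumptions only)."""
import random
from scipy.stats import beta

TOLERANCE = 0.05         # assumed acceptable loss rate
ALARM_CONFIDENCE = 0.95  # raise an alarm when P(loss rate > TOLERANCE) exceeds this

def monitor(probe_outcomes, prior_a=1.0, prior_b=1.0):
    """Update a Beta(a, b) posterior over the loss rate after each probe and yield alarms."""
    a, b = prior_a, prior_b
    for t, lost in enumerate(probe_outcomes, start=1):
        a += lost        # lost probe
        b += 1 - lost    # delivered probe
        p_degraded = beta.sf(TOLERANCE, a, b)   # posterior P(loss rate > TOLERANCE)
        if p_degraded > ALARM_CONFIDENCE:
            yield t, p_degraded

# toy usage: a link whose true loss rate jumps from 1% to 20% halfway through
random.seed(0)
trace = [int(random.random() < 0.01) for _ in range(500)] + \
        [int(random.random() < 0.20) for _ in range(500)]
print("first alarm:", next(monitor(trace), None))
```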


5

Cutajar, Kurt. "Broadening the scope of gaussian processes for large-scale learning." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS063.

Full text
Abstract:
L'importance renouvelée de la prise de décisions dans un contexte d'incertitude exige une réévaluation de techniques d'inférence bayésiennes appliquées aux grands jeux de données. Les processus gaussiens (GPs) sont une composante fondamentale de nombreux algorithmes probabilistes ; cependant, l'application des GPs est entravée par leur complexité de calcul cubique due aux opérations d'algèbre linéaire impliquées. Nous étudions d'abord l'efficacité de l'inférence exacte des GPs à budget de calcul donné en proposant un nouveau procédé qui applique le préconditionnement aux matrices noyaux. En prenant en considération le domaine du calcul numérique probabiliste, nous montrons également comment l'incertitude numérique introduite par ces techniques d'approximation doit être identifiée et évaluée de manière raisonnable. La deuxième grande contribution de cette thèse est d'établir et de renforcer le rôle des GPs, et leurs extension profondes (DGPs), en vu des exigences et contraintes posées par les grands jeux de données. Alors que les GPs et DGPs étaient autrefois jugés inaptes à rivaliser avec les techniques d'apprentissage profond les plus modernes, les modèles présentés dans cette thèse ont contribué à un changement de perspective sur leur capacités et leur limites
The renewed importance of decision making under uncertainty calls for a re-evaluation of Bayesian inference techniques targeting this goal in the big data regime. Gaussian processes (GPs) are a fundamental building block of many probabilistic kernel machines; however, the computational and storage complexity of GPs hinders their scaling to large modern datasets. The contributions presented in this thesis are two-fold. We first investigate the effectiveness of exact GP inference on a computational budget by proposing a novel scheme for accelerating regression and classification by way of preconditioning. In the spirit of probabilistic numerics, we also show how the numerical uncertainty introduced by approximate linear algebra should be adequately evaluated and incorporated. Bridging the gap between GPs and deep learning techniques remains a pertinent research goal, and the second broad contribution of this thesis is to establish and reinforce the role of GPs, and their deep counterparts (DGPs), in this setting. Whereas GPs and DGPs were once deemed unfit to compete with alternative state-of-the-art methods, we demonstrate how such models can also be adapted to the large-scale and complex tasks to which machine learning is now being applied
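As a hedged illustration of the general idea of preconditioning kernel systems (not the specific scheme proposed in the thesis), the sketch below solves (K + sigma^2 I) alpha = y with conjugate gradients, once plainly and once with an assumed Nyström-style low-rank preconditioner applied through the Woodbury identity; the kernel, rank and noise level are made-up values.

```python
"""Sketch: plain vs. preconditioned conjugate gradients on a GP kernel system (assumptions only)."""
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, m, lam = 1500, 50, 1e-2                  # data size, preconditioner rank, noise (assumed)
X = np.sort(rng.uniform(0, 10, n))
y = np.sin(X) + 0.1 * rng.standard_normal(n)

k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)   # RBF kernel
A = k(X, X) + lam * np.eye(n)               # (K + sigma^2 I), the system to solve

# Nystrom-style low-rank preconditioner built from m "inducing" points
Z = X[rng.choice(n, size=m, replace=False)]
Kmm = k(Z, Z) + 1e-8 * np.eye(m)
U = k(X, Z) @ np.linalg.inv(np.linalg.cholesky(Kmm)).T    # A is approximately U U^T + lam I
S = np.linalg.inv(lam * np.eye(m) + U.T @ U)              # Woodbury core

def apply_prec(v):                          # (U U^T + lam I)^{-1} v via Woodbury
    return (v - U @ (S @ (U.T @ v))) / lam

counts = {}
def run(label, M=None):
    it = [0]
    cg(A, y, M=M, maxiter=2000, callback=lambda xk: it.__setitem__(0, it[0] + 1))
    counts[label] = it[0]

run("plain CG")
run("preconditioned CG", LinearOperator((n, n), matvec=apply_prec))
print(counts)   # the preconditioned solve typically needs far fewer iterations
```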
6

Andriushchenko, Roman. "Computer-Aided Synthesis of Probabilistic Models." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417269.

Full text
Abstract:
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the member that satisfies a given specification? Such families often arise in various fields of engineering when modelling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. In this work we examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and on counterexample-guided abstraction refinement (CEGAR), and we propose a new integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only comparable to state-of-the-art methods, but in most cases significantly outperforms existing approaches, sometimes by several orders of magnitude.
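To make the synthesis problem concrete, here is a hedged sketch that simply enumerates a toy family of Markov chains sharing one undetermined parameter and keeps the members whose reachability probability meets a threshold; the chain, the candidate values and the specification are invented for illustration, and this brute-force enumeration is not the CEGIS/CEGAR-based method of the thesis.

```python
"""Sketch: brute-force exploration of a tiny family of Markov chains (illustrative assumptions)."""
import numpy as np

def reach_probability(p):
    """P(eventually reach the goal from 'init') in a 4-state chain with one hole p.
    From 'init': goal with prob p, otherwise 'retry'; from 'retry': back to 'init'
    with prob 0.5, otherwise an absorbing trap. Solved via the linear equations
        x_init  = p + (1 - p) * x_retry
        x_retry = 0.5 * x_init
    """
    A = np.array([[1.0, -(1.0 - p)],
                  [-0.5, 1.0]])
    b = np.array([p, 0.0])
    x_init, _ = np.linalg.solve(A, b)
    return x_init

family = [k / 10 for k in range(1, 10)]     # candidate values for the hole
spec = 0.8                                  # specification: P(reach goal) >= 0.8
print({p: round(reach_probability(p), 3) for p in family})
print("satisfying members:", [p for p in family if reach_probability(p) >= spec])
```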
7

Cruz, de Echeverria Loebell Nicole. "Sur le rôle de la déduction dans le raisonnement à partir de prémisses incertaines." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEP023/document.

Full text
Abstract:
L’approche probabiliste du raisonnement émet l’hypothèse que la plupart des raisonnements, aussi bien dans la vie quotidienne qu’en science, se réalisent dans des contextes d’incertitude. Les concepts déductifs centraux de la logique classique, consistance et validité, peuvent être généralisés afin d’englober des degrés de croyance incertains. La consistance binaire peut être généralisée à travers la dénomination de cohérence, lorsque les jugements de probabilité à deux affirmations sont cohérents seulement s’ils respectent les axiomes de la théorie de la probabilité. La validité binaire peut se généraliser comme validité probabiliste (validité-p), lorsqu’une interférence est valide-p seulement si l’incertitude de sa conclusion ne peut être de façon cohérente plus grande que la somme des incertitudes de ses prémisses. Cependant le fait que cette généralisation soit possible dans une logique formelle n’implique pas le fait que les gens utilisent la déduction de manière probabiliste. Le rôle de la déduction dans le raisonnement à partir de prémisses incertaines a été étudié à travers dix expériences et 23 inférences de complexités différentes. Les résultats mettent en évidence le fait que la cohérence et la validité-p ne sont pas juste des formalismes abstraits, mais que les gens vont suivre les contraintes normatives établies par eux dans leur raisonnement. Que les prémisses soient certaines ou incertaines n’a pas créé de différence qualitative, mais la certitude pourrait être interprétée comme l’aboutissement d’une échelle commune de degrés de croyance. Les observations sont la preuve de la pertinence descriptive de la cohérence et de la validité-p comme principes de niveau de calcul pour le raisonnement. Ils ont des implications pour l’interprétation d’observations antérieures sur les rôles de la déduction et des degrés de croyance. Enfin, ils offrent une perspective pour générer de nouvelles hypothèses de recherche quant à l’interface entre raisonnement déductif et inductif
The probabilistic approach to reasoning hypothesizes that most reasoning, both in everyday life and in science, takes place in contexts of uncertainty. The central deductive concepts of classical logic, consistency and validity, can be generalised to cover uncertain degrees of belief. Binary consistency can be generalised to coherence, where the probability judgments for two statements are coherent if and only if they respect the axioms of probability theory. Binary validity can be generalised to probabilistic validity (p-validity), where an inference is p-valid if and only if the uncertainty of its conclusion cannot be coherently greater than the sum of the uncertainties of its premises. But the fact that this generalisation is possible in formal logic does not imply that people will use deduction in a probabilistic way. The role of deduction in reasoning from uncertain premises was investigated across ten experiments and 23 inferences of differing complexity. The results provide evidence that coherence and p-validity are not just abstract formalisms, but that people follow the normative constraints set by them in their reasoning. It made no qualitative difference whether the premises were certain or uncertain, but certainty could be interpreted as the endpoint of a common scale for degrees of belief. The findings are evidence for the descriptive adequacy of coherence and p-validity as computational level principles for reasoning. They have implications for the interpretation of past findings on the roles of deduction and degrees of belief. And they offer a perspective for generating new research hypotheses in the interface between deductive and inductive reasoning
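A small worked instance of the p-validity constraint described above, with illustrative numbers (reading the conditional as a conditional probability):

```latex
% Writing u(A) = 1 - P(A) for uncertainty and reading "if p then q" as P(q | p),
% modus ponens (from p and "if p then q", infer q) is p-valid: every coherent
% assignment must satisfy u(q) <= u(p) + u(q | p).
\[
  P(q) \;\ge\; P(p \wedge q) \;=\; P(p)\,P(q \mid p)
\]
\[
  \text{e.g. } P(p) = 0.9,\; P(q \mid p) = 0.95
  \;\Rightarrow\; P(q) \ge 0.855
  \;\Rightarrow\; u(q) \le 0.145 \;\le\; 0.15 = u(p) + u(q \mid p).
\]
```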
8

Borges, Luís António Costa. "Probabilistic evaluation of the rotation capacity of steel joints = Avaliação probabilistica da capacidade de rotação de ligações metálicas." Master's thesis, Departamento de Engenharia Civil, 2003. http://hdl.handle.net/10316/15652.

Full text
9

Vu, Ngoc tru. "Contribution à l'étude de la corrosion par carbonatation du béton armé : approche expérimentale et probabiliste." Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0008/document.

Full text
Abstract:
La corrosion de l’acier par carbonatation du béton est un phénomène de dégradation majeur des structures en béton armé, qui débute par la dépassivation de l'acier due à l'abaissement du pH de la solution interstitielle, se concrétise par une initiation effective avant de se propager. Nous nous sommes focalisé sur la dépassivation et l'initiation effective. Une large campagne expérimentale a permis de comprendre l'incidence des conditions d'exposition, de la nature des ciments utilisés dans les bétons et des conditions de carbonatation de l'enrobage, sur la dépassivation des armatures et le démarrage effectif de la corrosion. Au total 27 configurations ont été étudiées. Le potentiel libre de corrosion et la résistance de polarisation ont été mesurés au cours de l'expérimentation sur une durée voisine d'une année. Parallèlement, à échéances régulières, les coefficients de Tafel et la masse de produits de corrosion ont été également mesurés. L'ensemble des données a été analysé pour conduire, à partir du calcul des probabilités de bonne ou de mauvaise alarme, aux seuils de détection du démarrage effectif de la corrosion associés aux paramètres électrochimiques ainsi que la masse seuil de produits de corrosion correspondant à cette détection. Alimentée par les résultats des essais de caractérisation des bétons, une simulation numérique par éléments finis du démarrage de la corrosion a été développée permettant de corroborer de façon satisfaisant les résultats expérimentaux
The steel corrosion induced by carbonation is a major cause of degradation of the reinforced concrete structures. Two stages arise: the steel depassivation due to the decrease of pH of the pore solution and the effective initiation, and then the propagation. A wide experimental study was carried out focusing on the first stage, in order to emphasize the effect of the exposure conditions, the type of cement and the concrete mixes, and the carbonation conditions of the concrete cover. In all a set of 27 configurations was investigated. The free potential of corrosion and the resistance of polarization were measured in the course of the experiment during one year. Regularly the Tafel coefficients along with the mass of corrosion products were also measured. The set of data was analyzed in order to derive the detection thresholds of the effective onset of corrosion associated with the electrochemical parameters, from the calculation of the probabilities of good or bad alarm. The threshold of the mass of corrosion products corresponding to this detection was also derived. The tests on concrete probes (porosity, permeability, etc.) supplied data that were used to calibrate a finite element model of the onset of corrosion: this model was found in fairly good agreement with the experimental results
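As a rough numerical illustration of how detection thresholds can be derived from probabilities of good and bad alarms (not the thesis's data or procedure), the sketch below assumes two Gaussian distributions of the free corrosion potential for passive and actively corroding rebar and picks the threshold minimising the total error.

```python
"""Sketch: choosing a corrosion-onset detection threshold (assumed distributions, not measured data)."""
import numpy as np
from scipy.stats import norm

passive = norm(loc=-150, scale=60)     # mV vs. reference electrode (assumed)
corroding = norm(loc=-400, scale=80)   # mV (assumed)

thresholds = np.linspace(-600, 0, 601)
p_false_alarm = passive.cdf(thresholds)   # passive bar wrongly flagged as corroding
p_missed = corroding.sf(thresholds)       # corroding bar not flagged
best = thresholds[np.argmin(p_false_alarm + p_missed)]

print(f"threshold: {best:.0f} mV, "
      f"false alarm {passive.cdf(best):.2%}, missed detection {corroding.sf(best):.2%}")
```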
10

Ayadi, Inès. "Optimisation des politiques de maintenance préventive dans un cadre de modélisation par modèles graphiques probabilistes." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1072/document.

Full text
Abstract:
Actuellement, les équipements employés dans les milieux industriels sont de plus en plus complexes. Ils exigent une maintenance accrue afin de garantir un niveau de service optimal en termes de fiabilité et de disponibilité. Par ailleurs, souvent cette garantie d'optimalité a un coût très élevé, ce qui est contraignant. Face à ces exigences la gestion de la maintenance des équipements est désormais un enjeu de taille : rechercher une politique de maintenance réalisant un compromis acceptable entre la disponibilité et les coûts associés à l'entretien du système. Les travaux de cette thèse partent par ailleurs du constat que dans plusieurs applications de l'industrie, le besoin de stratégies de maintenance assurant à la fois une sécurité optimale et une rentabilité maximale demeure de plus en plus croissant conduisant à se référer non seulement à l'expérience des experts, mais aussi aux résultats numériques obtenus via la résolution des problèmes d'optimisation. La résolution de cette problématique nécessite au préalable la modélisation de l'évolution des comportements des états des composants constituant le système, i.e, connaître les mécanismes de dégradation des composants. Disposant d'un tel modèle, une stratégie de maintenance est appliquée au système. Néanmoins, l'élaboration d'une telle stratégie réalisant un compromis entre toutes ces exigences représente un verrou scientifique et technique majeur. Dans ce contexte, l'optimisation de la maintenance s'impose pour atteindre les objectifs prescrits avec des coûts optimaux. Dans les applications industrielles réelles, les problèmes d'optimisation sont souvent de grande dimension faisant intervenir plusieurs paramètres. Par conséquent, les métaheuristiques s’avèrent une approche intéressante dans la mesure où d'une part, elles sacrifient la complétude de la résolution au profit de l'efficacité et du temps de calcul et d'autre part elles s'appliquent à un très large panel de problèmes.Dans son objectif de proposer une démarche de résolution d'un problème d'optimisation de la maintenance préventive, cette thèse fournit une méthodologie de résolution du problème d'optimisation des politiques de maintenance préventive systématique appliquée dans le domaine ferroviaire à la prévention des ruptures de rails. Le raisonnement de cette méthodologie s'organise autour de trois étapes principales : 1. Modélisation de l'évolution des comportements des états des composants constituant le système, i.e, connaître les mécanismes de dégradation des composants et formalisation des opérations de maintenance. 2. Formalisation d'un modèle d'évaluation de politiques de maintenance tenant compte aussi bien du facteur sûreté de fonctionnement du système que du facteur économique conséquent aux procédures de gestion de la maintenance (coûts de réparation, de diagnostic, d'indisponibilité). 3. Optimisation des paramètres de configuration des politiques de maintenance préventive systématique afin d'optimiser un ou plusieurs critères. Ces critères sont définis sur la base du modèle d'évaluation des politiques de maintenance proposé dans l'étape précédente
At present, the equipment used in industrial environments is increasingly complex. It requires increased maintenance in order to guarantee an optimal level of service in terms of reliability and availability. Moreover, this guarantee of optimality often comes at a very high cost, which is constraining. Faced with these requirements, the management of equipment maintenance has become a major challenge: finding a maintenance policy that achieves an acceptable compromise between the availability and the costs associated with maintaining the system. This thesis also starts from the observation that, in several industrial applications, the need for maintenance strategies ensuring both optimal safety and maximal profitability keeps growing, leading practitioners to rely not only on expert experience but also on numerical results obtained by solving optimization problems.
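As a hedged illustration of what optimising a systematic preventive maintenance policy can look like (a textbook age-replacement model, not the thesis's rail-specific model), the sketch below tunes a maintenance interval by minimising an expected cost rate; the Weibull degradation parameters and the cost figures are assumptions.

```python
"""Sketch: tuning a periodic preventive maintenance interval (age-replacement model, assumed values)."""
import numpy as np

beta_shape, eta = 2.5, 1000.0          # Weibull wear-out behaviour (assumed), hours
c_preventive, c_failure = 1.0, 12.0    # relative costs (assumed)

def cost_rate(T, n=2000):
    t = np.linspace(0.0, T, n)
    survival = np.exp(-(t / eta) ** beta_shape)
    expected_cycle_length = float(np.sum(survival) * (T / n))   # rectangle-rule integral of R(t)
    p_fail_before_T = 1.0 - survival[-1]
    expected_cycle_cost = c_preventive * survival[-1] + c_failure * p_fail_before_T
    return expected_cycle_cost / expected_cycle_length

candidates = np.arange(100.0, 2000.0, 10.0)
best_T = min(candidates, key=cost_rate)
print(f"best preventive interval ~ {best_T:.0f} h, cost rate {cost_rate(best_T):.4f}")
```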
11

Shirmohammadi, Mahsa. "Qualitative analysis of synchronizing probabilistic systems." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0054/document.

Full text
Abstract:
Les Markov Decision Process (MDP) sont des systèmes finis probabilistes avec à la fois des choix aléatoires et des stratégies, et sont ainsi reconnus comme de puissants outils pour modéliser les interactions entre un contrôleur et les réponses aléatoires de l'environment. Mathématiquement, un MDP peut être vu comme un jeu stochastique à un joueur et demi où le contrôleur choisit à chaque tour une action et l'environment répond en choisissant un successeur selon une distribution de probabilités fixée.Il existe deux incomparables représentations du comportement d'un MDP une fois les choix de la stratégie fixés.Dans la représentation classique, un MDP est un générateur de séquences d'états, appelées state-outcome; les conditions gagnantes du joueur sont ainsi exprimées comme des ensembles de séquences désirables d'états qui sont visités pendant le jeu, e.g. les conditions de Borel telles que l'accessibilité. La complexité des problèmes de décision ainsi que la capacité mémoire requise des stratégies gagnantes pour les conditions dites state-outcome ont été déjà fortement étudiées.Depuis peu, les MDPs sont également considérés comme des générateurs de séquences de distributions de probabilités sur les états, appelées distribution-outcome. Nous introduisons des conditions de synchronisation sur les distributions-outcome, qui intuitivement demandent à ce que la masse de probabilité s'accumule dans un (ensemble d') état, potentiellement de façon asymptotique.Une distribution de probabilités est p-synchrone si la masse de probabilité est d'au moins p dans un état; et la séquence de distributions de probabilités est toujours, éventuellement, faiblement, ou fortement p-synchrone si, respectivement toutes, certaines, infiniment plusieurs ou toutes sauf un nombre fini de distributions dans la séquence sont p-synchrones.Pour chaque type de synchronisation, un MDP peut être(i) assurément gagnant si il existe une stratégie qui génère une séquence 1-synchrone;(ii) presque-assurément gagnant si il existe une stratégie qui génère une séquence (1-epsilon)-synchrone et cela pour tout epsilon strictement positif;(iii) asymptotiquement gagnant si pour tout epsilon strictement positif, il existe une stratégie produisant une séquence (1-epsilon)-synchrone.Nous considérons le problème consistant à décider si un MDP est gagnant, pour chaque type de synchronisation et chaque mode gagnant: nous établissons les limites supérieures et inférieures de la complexité de ces problèmes ainsi que la capacité mémoire requise pour une stratégie gagnante optimale.En outre, nous étudions les problèmes de synchronisation pour les automates probabilistes (PAs) qui sont en fait des instances de MDP où les contrôleurs sont restreint à utiliser uniquement des stratégies-mots; c'est à dire qu'ils n'ont pas la possibilité d'observer l'historique de l'exécution du système et ne peuvent connaitre que le nombre de choix effectués jusque là. Les langages synchrones d'un PA sont donc l'ensemble des stratégies-mots synchrones: nous établissons la complexité des problèmes des langages synchrones vides et universels pour chaque mode gagnant.Nous répercutons nos résultats obtenus pour les problèmes de synchronisation sur les MDPs et PAs aux jeux tour à tour à deux joueurs ainsi qu'aux automates finis non-déterministes. En plus de nos résultats principaux, nous établissons de nouveaux résultats de complexité sur les automates finis alternants avec des alphabets à une lettre. 
Enfin, nous étudions plusieurs variations de synchronisation sur deux instances de systèmes infinis que sont les automates temporisés et pondérés
Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and are hence well established for modelling the interactions between a controller and its randomly responding environment. An MDP can be mathematically viewed as a one-and-a-half-player stochastic game played in rounds, where the controller chooses an action and the environment chooses a successor according to a fixed probability distribution. There are two incomparable views on the behavior of an MDP once the strategic choices are fixed. In the traditional view, an MDP is a generator of sequences of states, called the state-outcome; the winning condition of the player is then expressed as a set of desired sequences of states that are visited during the game, e.g. a Borel condition such as reachability. The computational complexity of the related decision problems and the memory requirements of winning strategies for state-outcome conditions are well studied. Recently, MDPs have also been viewed as generators of sequences of probability distributions over states, called the distribution-outcome. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that the probability mass accumulates in some (group of) state(s), possibly in the limit. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is always, eventually, weakly, or strongly p-synchronizing if, respectively, all, some, infinitely many, or all but finitely many distributions in the sequence are p-synchronizing. For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all epsilon > 0, a (1-epsilon)-synchronizing sequence; (iii) limit-sure winning if for all epsilon > 0 there is a strategy that produces a (1-epsilon)-synchronizing sequence. We consider the problem of deciding whether an MDP is winning, for each synchronizing and winning mode: we establish matching upper and lower complexity bounds for the problems, as well as the memory requirements for optimal winning strategies. As a further contribution, we study synchronization in probabilistic automata (PAs), which are a kind of MDP where controllers are restricted to use only word-strategies; i.e. they have no ability to observe the history of the system execution, only the number of choices made so far. The synchronizing language of a PA is then the set of all synchronizing word-strategies: we establish the computational complexity of the emptiness and universality problems for all synchronizing languages in all winning modes. We carry over the results for synchronizing problems from MDPs and PAs to two-player turn-based games and non-deterministic finite automata. Along with the main results, we establish new complexity results for alternating finite automata over a one-letter alphabet. In addition, we study different variants of synchronization for timed and weighted automata, as two instances of infinite-state systems.
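The distribution-outcome view can be illustrated in a few lines of Python: fixing a memoryless strategy turns the MDP into a Markov chain, the distribution over states is iterated, and p-synchronization is checked at each step; the transition matrix and the bound p below are illustrative assumptions, not examples from the thesis.

```python
"""Sketch: checking when a distribution-outcome becomes p-synchronizing (assumed toy chain)."""
import numpy as np

P = np.array([[0.5, 0.5, 0.0],     # transition matrix of the chain induced by a fixed strategy
              [0.0, 0.3, 0.7],
              [0.0, 0.0, 1.0]])
dist = np.array([0.5, 0.5, 0.0])   # initial distribution over the three states

p = 0.9
for step in range(1, 30):
    dist = dist @ P
    if dist.max() >= p:            # p-synchronizing: mass at least p in a single state
        print(f"{p}-synchronizing at step {step}: {np.round(dist, 3)}")
        break
```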
12

Saad, Feras Ahmad Khaled. "Probabilistic data analysis with probabilistic programming." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113164.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 48-50).
Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This thesis introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
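The Kepler's-Third-Law check mentioned above can be illustrated, stripped of all the CGPM/BayesDB machinery, by flagging records whose observed period deviates strongly from the period predicted from the semi-major axis; the three satellite records below are invented for the example.

```python
"""Sketch: flagging orbital records inconsistent with Kepler's Third Law (invented records)."""
import math

GM_EARTH = 3.986004418e14          # m^3 / s^2

records = {                        # name: (orbital period [s], semi-major axis [m]) — made up
    "sat-A": (5_553.0, 6_778e3),   # roughly a low-Earth orbit
    "sat-B": (86_164.0, 42_164e3), # roughly geostationary
    "sat-C": (40_000.0, 7_000e3),  # deliberately inconsistent record
}

for name, (period, axis) in records.items():
    predicted_period = 2 * math.pi * math.sqrt(axis ** 3 / GM_EARTH)
    ratio = period / predicted_period
    flag = "suspect" if abs(math.log(ratio)) > 0.2 else "ok"
    print(f"{name}: observed/predicted period = {ratio:.2f} -> {flag}")
```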
by Feras Ahmad Khaled Saad.
M. Eng.
13

Gendra, Casalí Bernat. "Probabilistic quantum metrology." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/371132.

Full text
Abstract:
La Metrologia és el camp d’investigació sobre les eines estadístiques i el disseny tecnològic dels aparells de mesura necessaris per inferir informació precisa sobre paràmetres físics. En un sistema físic el soroll és inherent en última instància amb el de les seves parts, i per tant en un nivell microscòpic està governat per les lleis de la física quàntica. Les mesures quàntiques són intrínsecament sorolloses i per tant limiten la precisió amb la qual es pot obtenir en qualsevol escenari de metrologia. El camp de la metrologia quàntica està dedicat a l’estudi d’aquests límits i al desenvolupament de noves eines per ajudar a superar-los, sovint fent ús de les característiques exclusivament quàntiques com la superposició o l'entrellaçament.En el procés de disseny d’un protocol d’estimació és necessari utilitzar una figura de mèrit per optimitzar el rendiment d’aquests protocols. Fins ara la majoria de plantejaments de metrologia quàntica i els límits que en deriven han estat deterministes, és a dir, que estan optimitzats per tal de proporcionar una estimació vàlida per a cadascun dels possibles resultats de la mesura i minimitzar-ne l’error promig entre el valor estimat i el real del paràmetre. Aquesta avaluació dels protocols mitjançant el seu error promig és molt natural i convenient, però pot haver-hi algunes situacions en què això no sigui suficient per a expressar l’ús concret que se li donarà al valor obtingut.Un punt central d’aquesta tesi és observar que resultats concrets d’una mesura poden proporcionar una estimació amb una millor precisió que la mitjana. Perquè això succeeixi hi ha d’haver altres resultats imprecisos que compensin la mitjana perquè aquesta no violi els límits deterministes. En aquesta tesi hem escollit una figura de merit que reflecteix la màxima precisió que es pot obtenir. Optimitzem la precisió d’un subconjunt de resultats senyalats, i quantifiquem la probabilitat d’obtenir-ne algun d’ells, o en altres paraules, la probabilitat que el protocol proporcioni una estimació. Això pot ser entès com proposar una opció addicional que està sempre disponible per la mesura, a saber, la possibilitat de post-seleccionar els resultats i donar amb només certa probabilitat una resposta concloent. Aquests protocols probabilístics garanteixen una precisió mínima pels resultats senyalats. En la mecànica quàntica hi ha moltes maneres de poder llegir les dades d’un sistema quàntic. Per tant, l’optimització dels esquemes probabilístics no es pot reduir a la reinterpretació de resultats a partir dels esquemes (determinsitic) de metrologia quàntica canòniques, sinó que implica la recerca de mesures quàntiques completament diferents. Concretament, hem dissenyat protocols probabilístics per a l’estimació de fases, direccions i de sistemes de referència. Hem vist que la post-selecció té dos efectes possibles: compensar una mala elecció de l’estat inicial o contrarestar els efectes negatius del soroll presents en l’estat del sistema o en el procés de mesurament. En particular, trobem que afegir la possibilitat d’abstenció en l’estimació de fases en presència de soroll pot produir una millora en la precisió que supera la cota trobada per protocols deterministes. Trobem una cota que correspon a la millor precisió que es pot obtenir.
Metrology is the field of research on statistical tools and the technological design of measurement devices used to infer accurate information about physical parameters. The noise in a physical setup is ultimately related to that of its constituents, and at a microscopic level this is in turn dictated by the rules of quantum physics. Quantum measurements are inherently noisy and hence limit the precision that can be reached by any metrology scheme. The field of quantum metrology is devoted to the study of such limits and to the development of new tools that help to surmount them, often making use of uniquely quantum features such as superposition or entanglement. In the process of designing an estimation protocol, the experimentalist uses a figure of merit to optimise the performance of such protocols. Up until now most quantum metrology schemes and known bounds have been deterministic, that is, they are optimized in order to provide a valid estimate for each possible measurement outcome and minimize the average error between the estimated and true value of the parameter. This benchmarking of a protocol by its average performance is very natural and convenient, but there can be scenarios in which it is not enough to express the concrete use that will be given to the obtained value. A central point in this thesis is that particular measurement outcomes can provide an estimate with a better precision than the average one. Notice that for this to happen there must be other, imprecise outcomes so that the average does not violate the deterministic bounds. In this thesis we choose a figure of merit that reflects the maximum precision one can obtain. We optimise the precision of a set of heralded outcomes, and quantify the chance of such outcomes to occur, or in other words the probability that the protocol fails to provide an estimate. This can be understood as putting forward an extra feature that is always available to the experimentalist, namely the possibility of post-selecting the outcomes of their measurements and giving with some probability an inconclusive answer. These probabilistic protocols guarantee a minimal precision upon a heralded outcome. In quantum mechanics there are many ways in which data can be read off from a quantum system. Hence, the optimization of probabilistic schemes cannot be reduced to reinterpreting results from the canonical (deterministic) quantum metrology schemes; rather, it entails the search for completely different generalized quantum measurements. Specifically, we design probabilistic protocols for phase, direction and reference frame estimation. We show that post-selection has two possible effects: to compensate for a bad choice of probe state, or to counterbalance the negative effects of noise present in the system state or in the measurement process. In particular, we show that adding the possibility of abstaining in phase estimation in the presence of noise can produce an enhancement in precision that overtakes the ultimate bound of deterministic protocols. The bound we derive is the best precision that can be obtained, and in this sense one can speak of an ultimate bound on precision.
14

Munch, Mélanie. "Améliorer le raisonnement dans l'incertain en combinant les modèles relationnels probabilistes et la connaissance experte." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASB011.

Full text
Abstract:
Cette thèse se concentre sur l'intégration des connaissances d'experts pour améliorer le raisonnement dans l'incertitude. Notre objectif est de guider l'apprentissage des relations probabilistes avec les connaissances d'experts pour des domaines décrits par les ontologies.Pour ce faire, nous proposons de coupler des bases de connaissances (BC) et une extension orientée objet des réseaux bayésiens, les modèles relationnels probabilistes (PRM). Notre objectif est de compléter l'apprentissage statistique par des connaissances expertes afin d'apprendre un modèle aussi proche que possible de la réalité et de l'analyser quantitativement (avec des relations probabilistes) et qualitativement (avec la découverte causale). Nous avons développé trois algorithmes à travers trois approches distinctes, dont les principales différences résident dans leur automatisation et l'intégration (ou non) de la supervision d'experts humains.L'originalité de notre travail est la combinaison de deux philosophies opposées : alors que l'approche bayésienne privilégie l'analyse statistique des données fournies pour raisonner avec, l'approche ontologique est basée sur la modélisation de la connaissance experte pour représenter un domaine. La combinaison de la force des deux permet d'améliorer à la fois le raisonnement dans l'incertitude et la connaissance experte
This thesis focuses on integrating expert knowledge to enhance reasoning under uncertainty. Our goal is to guide the learning of probabilistic relations with expert knowledge for domains described by ontologies. To do so we propose to couple knowledge bases (KBs) and an object-oriented extension of Bayesian networks, the probabilistic relational models (PRMs). Our aim is to complement the statistical learning with expert knowledge in order to learn a model as close as possible to reality and to analyze it quantitatively (with probabilistic relations) and qualitatively (with causal discovery). We developed three algorithms through three distinct approaches, whose main differences lie in their degree of automation and in the integration (or not) of human expert supervision. The originality of our work is the combination of two broadly opposed philosophies: while the Bayesian approach favors the statistical analysis of the given data in order to reason with it, the ontological approach is based on the modelling of expert knowledge to represent a domain. Combining the strengths of the two improves both reasoning under uncertainty and the expert knowledge.
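A minimal sketch of the general idea of combining an expert-fixed structure with Bayesian parameter estimation (not the thesis's PRM algorithms): the edge Treatment -> Recovery is assumed to be given by the expert, and the conditional probabilities are then estimated from synthetic data under a Laplace-style prior.

```python
"""Sketch: expert-fixed edge + Bayesian estimation of its conditional probability table."""
import random
from collections import Counter

random.seed(1)
# synthetic observations (treatment, recovery), for illustration only
data = [(t, int(random.random() < (0.8 if t else 0.4)))
        for t in (random.randint(0, 1) for _ in range(500))]
counts = Counter(data)

def p_recovery_given(t, prior=1.0):
    """Posterior mean of P(recovery=1 | treatment=t) with a Beta(prior, prior) prior."""
    n1, n0 = counts[(t, 1)], counts[(t, 0)]
    return (n1 + prior) / (n1 + n0 + 2 * prior)

for t in (0, 1):
    print(f"P(recovery | treatment={t}) ≈ {p_recovery_given(t):.2f}")
```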
15

Asafu-Adjei, Joseph Kwaku. "Probabilistic Methods." VCU Scholars Compass, 2007. http://hdl.handle.net/10156/1420.

Full text
16

Baier, Christel, Benjamin Engel, Sascha Klüppelholz, Steffen Märcker, Hendrik Tews, and Marcus Völp. "A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-129917.

Full text
Abstract:
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
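As a back-of-the-envelope companion to the analysis described above (the paper itself uses a continuous-time Markov chain model analysed with PRISM), the sketch below estimates by Monte Carlo how often a reader would observe a conflicting write when writes are rare; the rates and durations are assumed values.

```python
"""Sketch: Monte Carlo estimate of the per-read race probability under rare writes (assumed rates)."""
import math
import random

random.seed(0)
READ_DURATION = 1e-4     # seconds a read of the shared object takes (assumed)
WRITE_RATE = 2.0         # writes per second, assumed Poisson process
TRIALS = 200_000

races = 0
for _ in range(TRIALS):
    # time until the next write is exponential; a race occurs if it starts before the read ends
    races += random.expovariate(WRITE_RATE) < READ_DURATION

analytic = 1 - math.exp(-WRITE_RATE * READ_DURATION)
print(f"estimated race probability per read: {races / TRIALS:.5f} (analytically {analytic:.5f})")
```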
17

Weidner, Thomas. "Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208732.

Full text
Abstract:
The classic theorems of Büchi and Kleene state the expressive equivalence of finite automata to monadic second-order logic and regular expressions, respectively. These fundamental results enjoy applications in nearly every field of theoretical computer science. Around the same time as Büchi and Kleene, Rabin investigated probabilistic finite automata. This equally well-established model has applications ranging from natural language processing to probabilistic model checking. Here, we give probabilistic extensions of Büchi's theorem and Kleene's theorem. We obtain a probabilistic MSO logic by adding an expected second-order quantifier. In the scope of this quantifier, membership is determined by a Bernoulli process. This approach turns out to be universal and is applicable to finite and infinite words as well as to finite trees. In order to prove the expressive equivalence of this probabilistic MSO logic to probabilistic automata, we show a Nivat theorem, which decomposes a recognisable function into a regular language, homomorphisms, and a probability measure. For regular expressions, we build upon existing work to obtain probabilistic regular expressions on finite and infinite words. We show the expressive equivalence between these expressions and probabilistic Muller automata. To handle Muller acceptance conditions, we give a new construction from probabilistic regular expressions to Muller automata. Concerning finite trees, we define probabilistic regular tree expressions using a new iteration operator, called infinity-iteration. Again, we show that these expressions are expressively equivalent to probabilistic tree automata. On a second track of our research we investigate Constraint LTL over multidimensional data words with data values from the infinite tree. Such LTL formulas are evaluated over infinite words, where every position carries several data values from the infinite tree. Within Constraint LTL one can compare these values at different positions. We show that the model checking problem for this logic is PSPACE-complete, via an investigation of the emptiness problem for Constraint Büchi automata.
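For readers unfamiliar with Rabin's model, the following hedged sketch shows how the acceptance probability of a finite word in a probabilistic automaton is computed as a product of stochastic matrices; the two-letter automaton is an invented example, not one from the dissertation.

```python
"""Sketch: acceptance probability of a word in a probabilistic finite automaton (invented example)."""
import numpy as np

M = {  # one row-stochastic matrix per input letter
    "a": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0],
                   [0.3, 0.7]]),
}
initial = np.array([1.0, 0.0])
accepting = np.array([0.0, 1.0])     # state 1 is accepting

def acceptance_probability(word):
    dist = initial
    for letter in word:
        dist = dist @ M[letter]
    return float(dist @ accepting)

print(acceptance_probability("aab"))   # probability that the run ends in an accepting state
```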
18

Schmitt, Lucie. "Durabilité des ouvrages en béton soumis à la corrosion : optimisation par une approche probabiliste." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0009/document.

Full text
Abstract:
La maîtrise de la durabilité des ouvrages neufs et la nécessité de prolonger la durée de vie des structures existantes correspondent à des enjeux sociétaux de tout premier ordre et s’inscrivent dans les principes d’une économie circulaire. La durabilité des ouvrages en béton occupe ainsi une position centrale dans le contexte normatif. Ces travaux de thèse font suite à ceux de J. Mai-Nhu* et ont pour objectif d’étendre le domaine d’application du modèle SDReaM-crete en intégrant les bétons à base d’additions et en définissant un critère d’état limite basé sur une quantité de produits corrodés. Une approche basée sur une optimisation numérique des calculs prédictifs est mise en place pour réaliser des calculs d’indices de fiabilité en considérant les principaux mécanismes liés à la corrosion des armatures, carbonatation et chlorures. Ce modèle permet d’optimiser le dimensionnement des enrobages et les performances du béton en intégrant davantage les conditions environnementales telles qu’elles sont définies dans les normes
Mastering the durability of new structures and the need to extend the lifespan of existing constructions are societal issues of the highest order and are part of the principles of a circular economy. The durability of concrete structures thus occupies a central position in the normative context. This thesis follows on from the work of J. Mai-Nhu* and aims at extending the field of application of the SDReaM-crete model by integrating concretes based on mineral additions and by defining a limit state criterion based on a quantity of corroded products. An approach based on a numerical optimization of predictive computations is set up to perform reliability analyses by considering the main mechanisms related to the corrosion of reinforcement: carbonation and chlorides. This model enables the optimization of concrete cover sizing and of concrete performance by better integrating the environmental conditions as defined by the standards.
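A minimal sketch of the kind of reliability computation such an optimisation builds on (illustrative assumptions throughout, not the SDReaM-crete model): the probability that carbonation reaches the concrete cover within the service life is estimated by Monte Carlo and converted into a reliability index.

```python
"""Sketch: Monte Carlo failure probability and reliability index for a carbonation limit state."""
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000
service_life = 50.0                                              # years (assumed)

cover = rng.lognormal(mean=np.log(30), sigma=0.15, size=n)       # mm, concrete cover (assumed)
k_carb = rng.lognormal(mean=np.log(3.0), sigma=0.25, size=n)     # mm / sqrt(year) (assumed)

carbonation_depth = k_carb * np.sqrt(service_life)
p_failure = np.mean(carbonation_depth >= cover)                  # depassivation before end of life
beta = -norm.ppf(p_failure)                                      # corresponding reliability index

print(f"P_f ≈ {p_failure:.4f}, beta ≈ {beta:.2f}")
```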
19

Larsson, Emelie. "Utvärdering av osäkerhet och variabilitet vid beräkning av riktvärden för förorenad mark." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-218289.

Full text
Abstract:
I Sverige finns cirka 80 000 identifierade förorenade områden som i vissa fall behöver efterbehandling för att hantera föroreningssituationen. Naturvårdsverket publicerade 2009 ett reviderat vägledningsmaterial för riskbedömningar av förorenade områden tillsammans med en beräkningsmodell för att ta fram riktvärden. Riktvärdesmodellen är deterministisk och genererar enskilda riktvärden för ämnen under givna förutsättningar. Modellen tar inte explicit hänsyn till osäkerhet och variabilitet utan hanterar istället det implicit med säkerhets­faktorer och genom att användaren alltid utgår från ett rimligt värsta scenario vid val av parametervärden. En metod för att hantera osäkerhet och variabilitet i riskbedömningar är att göra en så kallad probabilistisk riskbedömning med Monte Carlo-simuleringar. Fördelen med detta är att ingångsparametrar kan definieras med sannolikhetsfördelningar och på så vis hantera inverkan av osäkerhet och variabilitet. I examensarbetet genomfördes en probabilistisk riskbedömning genom en vidare egen implementering av Naturvårdsverkets metodik varefter probabilistiska riktvärden beräknades för ett antal ämnen. Modellen tillämpades med två parameter­uppsättningar vars värden hade förankrats i litteraturen respektive Naturvårdsverkets metodik. Uppsättningarna genererade kumulativa fördelningsfunktioner av riktvärden som överensstämde olika mycket med de deterministiska riktvärden som Naturvårdsverket definierat. Generellt överensstämde deterministiska riktvärden för markanvändningsscenariot känslig mark­användning (KM) mer med den probabilistiska riskbedömningen än för scenariot mindre känslig markanvändning (MKM). Enligt resultatet i examensarbetet skulle dioxin och PCB-7 behöva en sänkning av riktvärden för att fullständigt skydda människor och miljö för MKM. En fallstudie över ett uppdrag som Geosigma AB utfört under hösten 2013 genomfördes också. Det var generellt en överensstämmelse mellan de platsspecifika riktvärden (PRV) som beräknats i undersökningsrapporten och den probabilistiska risk­bedömningen. Undantaget var ämnet koppar som enligt studien skulle behöva halverade riktvärden för att skydda människor och miljö. I den probabilistiska riskbedömningen kvantifierades hur olika skyddsobjekt respektive exponeringsvägar blev styrande för olika ämnens riktvärden mellan simuleringar. För några ämnen skedde avvikelser jämfört med de deterministiska motsvarigheterna i mellan 70-90 % av fallen. Exponeringsvägarnas bidrag till det ojusterade hälsoriskbaserade riktvärdet kvantifierades också i en probabilistisk hälsoriskbaserad riskbedömning. Riktvärden med likvärdiga numeriska värden erhölls för riktvärden med skild sammansättning. Detta motiverade att riktvärdenas sammansättning och styrande exponeringsvägar alltid bör kvantifieras vid en probabilistisk riskbedömning.
In Sweden, approximately 80,000 contaminated areas have been identified. Some of these areas are in need of remediation to cope with the effects that the contaminants have on both humans and the environment. The Swedish Environmental Protection Agency (EPA) has published a methodology on how to perform risk assessments for contaminated soils together with a complex model for calculating soil guideline values. The guideline value model is deterministic and calculates single guideline values for contaminants. The model does not account explicitly for uncertainty and variability in parameters but rather handles it implicitly by using safety-factors and reasonable worst-case assumptions for different parameters. One method to account explicitly for uncertainty and variability in a risk assessment is to perform a probabilistic risk assessment (PRA) through Monte Carlo-simulations. A benefit with this is that the parameters can be defined with probability density functions (PDFs) that account for the uncertainty and variability of the parameters. In this Master's Thesis a PRA was conducted and followed by calculations of probabilistic guideline values for selected contaminants. The model was run for two sets of PDFs for the parameters: one was collected from extensive research in published articles and another one included the deterministic values set by the Swedish EPA for all parameters. The sets generated cumulative probability distributions (CPDs) of guideline values that, depending on the contaminant, corresponded in different levels to the deterministic guideline values that the Swedish EPA had calculated. In general, there was a stronger correlation between the deterministic guideline values and the CPDs for the sensitive land-use scenario compared to the less sensitive one. For contaminants, such as dioxin and PCB-7, a lowering of the guideline values would be required to fully protect humans and the environment based on the results in this thesis. Based on a recent soil investigation that Geosigma AB has performed, a case study was also conducted. In general there was a correlation between the deterministic site specific guideline values and the CPDs in the case study. In addition to this, a health oriented risk assessment was performed in the thesis where unexpected exposure pathways were found to be governing for the guideline values. For some contaminants the exposure pathway governing the guideline values in the PRA differed from the deterministic ones in 70-90 % of the simulations. Also, the contributing part of the exposure pathways to the unadjusted health guideline values differed from the deterministic ones. This indicated the need of always quantifying the composition of guideline values in probabilistic risk assessments.
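The Monte Carlo idea behind a probabilistic guideline value can be sketched with a deliberately simplified exposure model (soil ingestion only); all parameter distributions, the toxicological reference value and the body-weight figures below are assumptions for illustration, not the Swedish EPA model.

```python
"""Sketch: a distribution of guideline values from sampled exposure parameters (assumed model)."""
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

TRV = 0.5e-3                                                    # tolerable daily intake, mg/(kg*day), assumed
body_weight = rng.normal(70, 12, n).clip(40, 120)               # kg
soil_intake = rng.lognormal(np.log(50e-6), 0.5, n)              # kg soil per day
exposure_fraction = rng.uniform(200, 365, n) / 365              # fraction of the year on site

# guideline value = highest soil concentration (mg/kg) keeping intake below the TRV
guideline = TRV * body_weight / (soil_intake * exposure_fraction)

for q in (5, 50, 95):
    print(f"{q}th percentile of the guideline value: {np.percentile(guideline, q):,.0f} mg/kg")
```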
20

Faix, Marvin. "Conception de machines probabilistes dédiées aux inférences bayésiennes." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM079/document.

Full text
Abstract:
Ces travaux de recherche ont pour but de concevoir des ordinateurs basés sur une organisation du calcul mieux adaptée au raisonnement probabiliste. Notre intérêt s'est porté sur le traitement des données incertaines et les calculs à réaliser sur celles-ci. Pour cela, nous proposons des architectures de machines se soustrayant au modèle Von Neumann, supprimant notamment l'utilisation de l'arithmétique en virgule fixe ou flottante. Les applications comme le traitement capteurs ou la robotique en générale sont des exemples d'utilisation des architectures proposées. Plus spécifiquement, ces travaux décrivent deux types de machines probabilistes, radicalement différentes dans leur conception, dédiées aux problèmes d'inférences bayésiennes et utilisant le calcul stochastique. La première traite les problèmes d'inférence de faibles dimensions et utilise le calcul stochastique pour réaliser les opérations nécessaires au calcul de l'inférence. Cette machine est basée sur le concept de bus probabiliste et possède un très fort parallélisme. La deuxième machine permet de traiter les problèmes d'inférence en grandes dimensions. Elle implémente une méthode MCMC sous la forme d'un algorithme de Gibbs au niveau binaire. Dans ce cas, le calcul stochastique est utilisé pour réaliser l'échantillonnage, bit à bit, du modèle. Une importante caractéristique de cette machine est de contourner les problèmes de convergence généralement attribués au calcul stochastique. Nous présentons en fin de manuscrit une extension de ce second type de machine : une machine générique et programmable permettant de trouver une solution approchée à n'importe quel problème d'inférence
The aim of this research is to design computers best suited to probabilistic reasoning. The focus of the research is on the processing of uncertain data and on the computation of probability distributions. For this, new machine architectures are presented. The concept they are designed on differs from the Von Neumann model, without any fixed- or floating-point arithmetic. These architectures could replace the current processors in sensor processing and robotics. In this thesis, two types of probabilistic machines are presented. Their designs are radically different, but both are dedicated to Bayesian inference and use stochastic computing. The first deals with small-dimension inference problems and uses stochastic computing to perform the operations necessary to calculate the inference. This machine is based on the concept of a probabilistic bus and has a strong parallelism. The second machine can deal with intractable inference problems. It implements a particular MCMC method: the Gibbs algorithm at the binary level. In this case, stochastic computing is used for sampling the distribution of interest. An important feature of this machine is the ability to circumvent the convergence problems generally attributed to stochastic computing. Finally, an extension of this second type of machine is presented. It consists of a generic and programmable machine designed to approximate the solution to any inference problem
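A minimal sketch of the stochastic-computing idea underlying such machines, assuming the standard encoding of a probability as a random bit stream, so that a logical AND of two independent streams estimates a product of probabilities (the actual machine architectures are not reproduced here):

    import random

    def bitstream(p, n):
        """Encode probability p as a stream of n random bits (stochastic computing)."""
        return [1 if random.random() < p else 0 for _ in range(n)]

    def stochastic_and(a, b):
        """Bitwise AND of two independent streams estimates the product of their probabilities."""
        return [x & y for x, y in zip(a, b)]

    n = 100_000
    pa, pb = 0.6, 0.3
    prod = stochastic_and(bitstream(pa, n), bitstream(pb, n))
    print(sum(prod) / n)  # close to 0.18 = pa * pb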
APA, Harvard, Vancouver, ISO, and other styles
21

Cheng, Chi Wa. "Probabilistic topic modeling and classification probabilistic PCA for text corpora." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Crubillé, Raphaëlle. "Behavioural distances for probabilistic higher-order programs." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC084.

Full text
Abstract:
Cette thèse est consacrée à l'étude d'équivalences et de distances comportementales destinées à comparer des programmes probabilistes d'ordre supérieur. Le manuscrit est divisé en trois parties. La première partie consiste en une présentation des langages probabilistes d'ordre supérieur, et des notions d'équivalence et de distance contextuelles pour de tels langages. Dans une deuxième partie, on suit une approche opérationnelle pour construire des notions d'équivalences et de métriques plus simples à manipuler que les notions contextuelles : on prend comme point de départ les deux équivalences comportementales pour le lambda-calcul probabiliste équipé d'une stratégie d'évaluation basée sur l'appel par nom introduites par Dal Lago, Sangiorgi et Alberti : ces derniers définissent deux équivalences, la trace équivalence et la bisimulation probabiliste, et montrent que pour ce langage, la trace équivalence permet de complètement caractériser (i.e. est pleinement abstraite) l'équivalence contextuelle, tandis que la bisimulation probabiliste est une approximation correcte de l'équivalence contextuelle, mais n'est pas pleinement abstraite. Dans la partie opérationnelle de cette thèse, on montre que la bisimulation probabiliste redevient pleinement abstraite quand on remplace la stratégie d'évaluation par nom par une stratégie d'évaluation par valeur. Le reste de cette partie est consacrée à une généralisation quantitative de la trace équivalence, i.e. une trace distance sur les programmes. On introduit d'abord une trace distance pour un λ-calcul probabiliste affine, i.e. où le contexte peut utiliser son argument au plus une fois, et ensuite pour un λ-calcul probabiliste où les contextes ont la capacité de copier leur argument ; dans ces deux cas, on montre que les distances traces obtenues sont pleinement abstraites. La troisième partie considère deux modèles dénotationnels de langages probabilistes d'ordre supérieur : le modèle des espaces cohérents probabilistes, dû à Danos et Ehrhard, qui interprète le langage obtenu en équipant PCF avec des probabilités discrètes, et le modèle des cônes mesurables et des fonctions stables et mesurables, développé plus récemment par Ehrhard, Pagani et Tasson pour le langage obtenu en enrichissant PCF avec des probabilités continues. Cette thèse établit deux résultats sur la structure de ces modèles. On montre d'abord que l'exponentielle de la catégorie des espaces cohérents peut être exprimée en utilisant le comonoide commutatif libre : il s'agit d'un résultat de généricité de cette catégorie vue comme un modèle de la logique linéaire. Le deuxième résultat éclaire les liens entre ces deux modèles : on montre que la catégorie des cônes mesurables et des fonctions stables et mesurables est une extension conservatrice de la catégorie de co-Kleisli des espaces cohérents probabilistes. Cela signifie que le modèle récemment introduit par Ehrhard, Pagani et Tasson peut être vu comme la généralisation au cas continu du modèle de PCF équipé avec des probabilités discrètes dans les espaces cohérents probabilistes
The present thesis is devoted to the study of behavioral equivalences and distances for higher-order probabilistic programs. The manuscript is divided into three parts. In the first one, higher-order probabilistic languages are presented, as well as how to compare such programs with context equivalence and context distance. The second part follows an operational approach with the aim of building equivalences and metrics easier to handle than their contextual counterparts. We take as starting point the two behavioral equivalences introduced by Dal Lago, Sangiorgi and Alberti for the probabilistic lambda-calculus equipped with a call-by-name evaluation strategy: the trace equivalence and the bisimulation equivalence. These authors showed that for their language, trace equivalence completely characterizes context equivalence, i.e. is fully abstract, while probabilistic bisimulation is a sound approximation of context equivalence, but is not fully abstract. In the operational part of the present thesis, we show that probabilistic bisimulation becomes fully abstract when we replace the call-by-name paradigm by the call-by-value one. The remainder of this part is devoted to a quantitative generalization of trace equivalence, i.e. a trace distance on programs. We first introduce a trace distance for an affine probabilistic lambda-calculus, i.e. one where a function can use its argument at most once, and then for a more general probabilistic lambda-calculus where functions have the ability to duplicate their arguments. In these two cases, we show that these trace distances are fully abstract. In the third part, two denotational models of higher-order probabilistic languages are considered: Danos and Ehrhard's model based on probabilistic coherence spaces, which interprets the language PCF enriched with discrete probabilities, and Ehrhard, Pagani and Tasson's model based on measurable cones and measurable stable functions, which interprets PCF equipped with continuous probabilities. The present thesis establishes two results on the structure of these models. We first show that the exponential comonad of the category of probabilistic coherence spaces can be expressed using the free commutative comonoid: it is a genericity result for this category seen as a model of Linear Logic. The second result clarifies the connection between these two models: we show that the category of measurable cones and measurable stable functions is a conservative extension of the co-Kleisli category of probabilistic coherence spaces. It means that the recently introduced model of Ehrhard, Pagani and Tasson can be seen as the generalization to the continuous case of the model of PCF with discrete probabilities in probabilistic coherence spaces
APA, Harvard, Vancouver, ISO, and other styles
23

Ben, Mrad Ali. "Observations probabilistes dans les réseaux bayésiens." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0018/document.

Full text
Abstract:
Dans un réseau bayésien, une observation sur une variable signifie en général que cette variable est instanciée. Ceci signifie que l'observateur peut affirmer avec certitude que la variable est dans l'état signalé. Cette thèse porte sur d'autres types d'observations, souvent appelées observations incertaines, qui ne peuvent pas être représentées par la simple affectation de la variable. Cette thèse clarifie et étudie les différents concepts d'observations incertaines et propose différentes applications des observations incertaines dans les réseaux bayésiens. Nous commençons par dresser un état des lieux sur les observations incertaines dans les réseaux bayésiens dans la littérature et dans les logiciels, en termes de terminologie, de définition, de spécification et de propagation. Il en ressort que le vocabulaire n'est pas clairement établi et que les définitions proposées couvrent parfois des notions différentes. Nous identifions trois types d'observations incertaines dans les réseaux bayésiens et nous proposons la terminologie suivante : observation de vraisemblance, observation probabiliste fixe et observation probabiliste non-fixe. Nous exposons ensuite la façon dont ces observations peuvent être traitées et propagées. Enfin, nous donnons plusieurs exemples d'utilisation des observations probabilistes fixes dans les réseaux bayésiens. Le premier exemple concerne la propagation d'observations sur une sous-population, appliquée aux systèmes d'information géographique. Le second exemple concerne une organisation de plusieurs agents équipés d'un réseau bayésien local et qui doivent collaborer pour résoudre un problème. Le troisième exemple concerne la prise en compte d'observations sur des variables continues dans un RB discret. Pour cela, l'algorithme BN-IPFP-1 a été implémenté et utilisé sur des données médicales de l'hôpital Bourguiba de Sfax
In a Bayesian network, evidence on a variable usually signifies that this variable is instantiated, meaning that the observer can affirm with certainty that the variable is in the signaled state. This thesis focuses on other types of evidence, often called uncertain evidence, which cannot be represented by the simple assignment of the variables. This thesis clarifies and studies different concepts of uncertain evidence in a Bayesian network and offers various applications of uncertain evidence in Bayesian networks. Firstly, we present a review of uncertain evidence in Bayesian networks in terms of terminology, definition, specification and propagation. It shows that the vocabulary is not clear and that some terms are used to represent different concepts. We identify three types of uncertain evidence in Bayesian networks and we propose the following terminology: likelihood evidence, fixed probabilistic evidence and not-fixed probabilistic evidence. We define them and describe updating algorithms for the propagation of uncertain evidence. Finally, we propose several examples of the use of fixed probabilistic evidence in Bayesian networks. The first example concerns evidence on a subpopulation applied in the context of a geographical information system. The second example is an organization of agent-encapsulated Bayesian networks that have to collaborate to solve a problem. The third example concerns the transformation of evidence on continuous variables into fixed probabilistic evidence. The algorithm BN-IPFP-1 has been implemented and used on medical data from CHU Habib Bourguiba in Sfax
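A minimal sketch of the distinction between likelihood evidence and fixed probabilistic evidence on a single binary variable, with an invented prior and invented evidence values (not the updating algorithms of the thesis):

    # Prior belief on a binary variable X
    prior = {"x1": 0.7, "x0": 0.3}

    # Likelihood evidence (virtual evidence): relative likelihoods L(x) are combined
    # with the prior by Bayes' rule, so the final belief still depends on the prior.
    likelihood = {"x1": 0.8, "x0": 0.4}
    unnorm = {x: prior[x] * likelihood[x] for x in prior}
    z = sum(unnorm.values())
    posterior_likelihood = {x: v / z for x, v in unnorm.items()}

    # Fixed probabilistic evidence (Jeffrey's rule): the stated distribution is
    # imposed as the final marginal of X, regardless of the prior.
    posterior_fixed = {"x1": 0.8, "x0": 0.2}

    print(posterior_likelihood)  # {'x1': ~0.824, 'x0': ~0.176}
    print(posterior_fixed)       # {'x1': 0.8, 'x0': 0.2}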
APA, Harvard, Vancouver, ISO, and other styles
24

Qiu, Feng. "Probabilistic covering problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47567.

Full text
Abstract:
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that the probability that a set of constraints involving random coefficients with known distributions holds satisfies a minimum requirement. A covering constraint involves a linear inequality on non-negative variables with a greater-than-or-equal-to sign and non-negative coefficients. A variety of applications, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, in uncertain settings involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sampling average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques are significantly effective in improving solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut down solution times from as much as six days to under four hours in some instances. We also develop valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010, which cannot be solved by existing approaches. In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of a set k-cover problem, which seeks a collection of subsets with minimal cost whose union covers each element in the set at least k times. In a stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (called the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model consists of an exponential number of auxiliary variables, and hence is not useful in practice, we use two linear-program-based bounds on the probability that at least k events occur, which can be obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations to the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear-programming-based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multi-linear terms in the approximations and develop mixed-integer linear programming formulations.
We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness. To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now only been studied under the data-independence assumption, and we also introduce new applications that can be modeled using the probabilistic k-cover model.
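A minimal sketch of the sample average approximation idea behind a probabilistic covering requirement, on an invented toy set-cover instance solved by brute-force enumeration rather than the mixed-integer programming techniques developed in the dissertation:

    import random
    from itertools import combinations

    random.seed(1)
    n_scen = 1000          # sampled scenarios (sample average approximation)
    target = 0.9           # required probability of full coverage
    cost = [2, 2, 1, 2, 6]
    # p_cover[j][i] = probability that set j covers element i (Bernoulli coefficients)
    p_cover = [
        [0.9, 0.0, 0.6, 0.0],
        [0.0, 0.9, 0.0, 0.6],
        [0.6, 0.6, 0.0, 0.0],
        [0.0, 0.0, 0.9, 0.9],
        [0.9, 0.9, 0.9, 0.9],
    ]
    n_sets, n_elems = len(cost), 4

    # Draw scenarios of the random coverage matrix
    scenarios = [[[random.random() < p_cover[j][i] for i in range(n_elems)]
                  for j in range(n_sets)] for _ in range(n_scen)]

    def fully_covered(selection, scen):
        return all(any(scen[j][i] for j in selection) for i in range(n_elems))

    best = None
    for r in range(1, n_sets + 1):
        for sel in combinations(range(n_sets), r):
            reliability = sum(fully_covered(sel, s) for s in scenarios) / n_scen
            if reliability >= target:
                c = sum(cost[j] for j in sel)
                if best is None or c < best[0]:
                    best = (c, sel, reliability)
    print(best)  # cheapest selection whose empirical coverage probability meets the target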
APA, Harvard, Vancouver, ISO, and other styles
25

Taylor, Jonathan 1981. "Lax probabilistic bisimulation." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111546.

Full text
Abstract:
Probabilistic bisimulation is a widely studied equivalence relation for stochastic systems. However, it requires the behavior of the states to match on actions with matching labels. This does not allow bisimulation to capture symmetries in the system. In this thesis we define lax probabilistic bisimulation, in which actions are only required to match within given action equivalence classes. We provide a logical characterization and an algorithm for computing this equivalence relation for finite systems. We also specify a metric on states which assigns distance 0 to lax-bisimilar states. We end by examining the use of lax bisimulation for analyzing Markov Decision Processes (MDPs) and show that it corresponds to the notion of an MDP homomorphism, introduced by Ravindran & Barto. Our metric provides an algorithm for generating an approximate MDP homomorphism and provides bounds on the quality of the best control policy that can be computed using this approximation.
APA, Harvard, Vancouver, ISO, and other styles
26

Seidel, Karen. "Probabilistic communicating processes." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Jeong-Gyoo. "Probabilistic shape models :." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Power, Christopher. "Probabilistic symmetry reduction." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3493/.

Full text
Abstract:
Model checking is a technique used for the formal verification of concurrent systems. A major hindrance to model checking is the so-called state space explosion problem, where the number of states in a model grows exponentially as variables are added. This means even trivial systems can require millions of states to define and are often too large to feasibly verify. Fortunately, models often exhibit underlying replication which can be exploited to aid in verification. Exploiting this replication is known as symmetry reduction and has yielded considerable success in non-probabilistic verification. The main contribution of this thesis is to show how symmetry reduction techniques can be applied to explicit state probabilistic model checking. In probabilistic model checking the need for such techniques is particularly acute since it requires not only an exhaustive state-space exploration, but also a numerical solution phase to compute probabilities or other quantitative values. The approach we take enables the automated detection of arbitrary data and component symmetries from a probabilistic specification. We define new techniques to exploit the identified symmetry and provide efficient generation of the quotient model. We prove the correctness of our approach, and demonstrate its viability by implementing a tool to apply symmetry reduction to an explicit state model checker.
APA, Harvard, Vancouver, ISO, and other styles
29

Binter, Roman. "Applied probabilistic forecasting." Thesis, London School of Economics and Political Science (University of London), 2012. http://etheses.lse.ac.uk/559/.

Full text
Abstract:
In any actual forecast, the future evolution of the system is uncertain and the forecasting model is mathematically imperfect. Both ontic uncertainties in the future (due to true stochasticity) and epistemic uncertainty of the model (reflecting structural imperfections) complicate the construction and evaluation of probabilistic forecasts. In almost all nonlinear forecast models, the evolution of uncertainty in time is not tractable analytically and Monte Carlo approaches ("ensemble forecasting") are widely used. This thesis advances our understanding of the construction of forecast densities from ensembles, the evolution of the resulting probability forecasts and methods of establishing skill (benchmarks). A novel method of partially correcting the model error is introduced and shown to outperform a competitive approach. The properties of kernel dressing, a method of transforming ensembles into probability density functions, are investigated and the convergence of the approach is illustrated. A connection between forecasting and information theory is examined by demonstrating that kernel dressing via minimization of Ignorance implicitly leads to minimization of the Kullback-Leibler divergence. The Ignorance score is critically examined in the context of other information-theoretic measures. The method of Dynamic Climatology is introduced as a new approach to establishing skill (benchmarking). Dynamic Climatology is a new, relatively simple, nearest-neighbor-based model shown to be of value in benchmarking of global circulation models of the ENSEMBLES project. ENSEMBLES is a project funded by the European Union bringing together all major European weather forecasting institutions in order to develop and test state-of-the-art seasonal weather forecasting models. Via benchmarking the seasonal forecasts of the ENSEMBLES models we demonstrate that Dynamic Climatology can help us better understand the value and forecasting performance of large-scale circulation models. Lastly, a new approach to correcting (improving) an imperfect model is presented, an idea inspired by [63]. The main idea is based on a two-stage procedure where a second-stage 'corrective' model iteratively corrects systematic parts of forecasting errors produced by a first-stage 'core' model. The corrector is of an iterative nature, so that at a given time t the core model forecast is corrected and then used as an input into the next iteration of the core model to generate a time t + 1 forecast. Using two nonlinear systems we demonstrate that the iterative corrector is superior to alternative approaches based on direct (non-iterative) forecasts. While the choice of the corrector model class is flexible, we use radial basis functions. Radial basis functions are frequently used in statistical learning and/or surface approximations and involve a number of computational aspects which we discuss in some detail.
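A minimal sketch of kernel dressing and the Ignorance score with a fixed, hand-picked kernel width (the thesis fits the dressing parameters and benchmarks forecasts against Dynamic Climatology; the ensemble values below are invented):

    import numpy as np

    def kernel_dressing_pdf(ensemble, sigma):
        """Turn an ensemble of point forecasts into a predictive density
        by placing a Gaussian kernel of width sigma on each member."""
        ensemble = np.asarray(ensemble, dtype=float)
        def pdf(y):
            z = (y - ensemble[:, None]) / sigma
            return np.mean(np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi)), axis=0)
        return pdf

    def ignorance(pdf, outcomes):
        """Ignorance score: mean negative log2 density assigned to the verifications."""
        return float(np.mean(-np.log2(pdf(np.asarray(outcomes)))))

    ensemble = [19.8, 20.4, 21.1, 20.0, 22.3]   # hypothetical ensemble forecast
    print(ignorance(kernel_dressing_pdf(ensemble, sigma=0.8), [20.7]))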
APA, Harvard, Vancouver, ISO, and other styles
30

Jones, Claire. "Probabilistic non-determinism." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/413.

Full text
Abstract:
Much of theoretical computer science is based on the use of inductive complete partially ordered sets (or ipos). The aim of this thesis is to extend this successful theory to make it applicable to probabilistic computations. The method is to construct a "probabilistic powerdomain" on any ipo to represent the outcome of a probabilistic program which has outputs in the original ipo. In this thesis it is shown that evaluations (functions which assign a probability to open sets, subject to various conditions) form such a powerdomain. Further, the powerdomain is a monadic functor on the category Ipo. For restricted classes of ipos a powerdomain of probability distributions, or measures which only take values less than one, has been constructed (by Saheb-Djahromi). In the thesis we show that this powerdomain may be constructed for continuous ipos, where it is isomorphic to that of evaluations. The powerdomain of evaluations is shown to have a simple Stone-type duality between it and sets of upper continuous functions. This is then used to give a Hoare-style logic for an imperative probabilistic language, which is the dual of the probabilistic semantics. Finally the powerdomain is used to give a denotational semantics of a probabilistic metalanguage which is an extension of Moggi's lambda-c-calculus for the powerdomain monad. This semantics is then shown to be equivalent to an operational semantics.
APA, Harvard, Vancouver, ISO, and other styles
31

Angelopoulos, Nicos. "Probabilistic finite domains." Thesis, City University London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ranganathan, Ananth. "Probabilistic topological maps." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22643.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Dellaert, Frank; Committee Member: Balch, Tucker; Committee Member: Christensen, Henrik; Committee Member: Kuipers, Benjamin; Committee Member: Rehg, Jim.
APA, Harvard, Vancouver, ISO, and other styles
33

Iyer, Ranjit. "Probabilistic distributed control." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1568128211&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chien, Yung-hsin. "Probabilistic preference modeling /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Morris, Quaid Donald Jozef 1972. "Practical probabilistic inference." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29989.

Full text
Abstract:
Thesis (Ph. D. in Computational Neuroscience)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2003.
Includes bibliographical references (leaves 157-163).
The design and use of expert systems for medical diagnosis remains an attractive goal. One such system, the Quick Medical Reference, Decision Theoretic (QMR-DT), is based on a Bayesian network. This very large-scale network models the appearance and manifestation of disease and has approximately 600 unobservable nodes and 4000 observable nodes that represent, respectively, the presence and measurable manifestation of disease in a patient. Exact inference of posterior distributions over the disease nodes is extremely intractable using generic algorithms. Inference can be made much more efficient by exploiting the QMR-DT's unique structure. Indeed, tailor-made inference algorithms for the QMR-DT efficiently generate exact disease posterior marginals for some diagnostic problems and accurate approximate posteriors for others. In this thesis, I identify a risk with using the QMR-DT disease posteriors for medical diagnosis. Specifically, I show that patients and physicians conspire to preferentially report findings that suggest the presence of disease. Because the QMR-DT does not contain an explicit model of this reporting bias, its disease posteriors may not be useful for diagnosis. Correcting these posteriors requires augmenting the QMR-DT with additional variables and dependencies that model the diagnostic procedure. I introduce the diagnostic QMR-DT (dQMR-DT), a Bayesian network containing both the QMR-DT and a simple model of the diagnostic procedure. Using diagnostic problems sampled from the dQMR-DT, I show the danger of doing diagnosis using disease posteriors from the unaugmented QMR-DT.
(cont.) I introduce a new class of approximate inference methods, based on feed-forward neural networks, for both the QMR-DT and the dQMR-DT. I show that these methods, recognition models, generate accurate approximate posteriors on the QMR-DT, on the dQMR-DT, and on a version of the dQMR-DT specified only indirectly through a set of presolved diagnostic problems.
by Quaid Donald Jozef Morris.
Ph. D. in Computational Neuroscience
APA, Harvard, Vancouver, ISO, and other styles
36

Mansinghka, Vikash Kumar. "Natively probabilistic computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47892.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
Includes bibliographical references (leaves 129-135).
I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra.
(cont.) I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence.
by Vikash Kumar Mansinghka.
Ph.D.
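A minimal sketch of the generative-model-plus-conditioning style of computation described above, written as ordinary Python with rejection sampling rather than in Church; the model and the query are invented for illustration:

    import random

    def flip(p=0.5):
        return random.random() < p

    def generative_model():
        """A tiny probabilistic 'program': two biased coins and their conjunction."""
        a, b = flip(0.3), flip(0.6)
        return {"a": a, "b": b, "both": a and b}

    def rejection_query(model, condition, n=200_000):
        """Estimate P(a | condition) by rejection sampling: simulate the model,
        keep only the runs satisfying the condition, and average."""
        accepted = [s for s in (model() for _ in range(n)) if condition(s)]
        return sum(s["a"] for s in accepted) / len(accepted)

    # P(a | a or b) under the toy model; the exact value is 0.3 / 0.72 = 0.4167
    print(rejection_query(generative_model, lambda s: s["a"] or s["b"]))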
APA, Harvard, Vancouver, ISO, and other styles
37

Conduit, Bryce David. "Probabilistic alloy design." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cowans, Philip John. "Probabilistic document modelling." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Barbosa, Fábio Daniel Moreira. "Probabilistic propositional logic." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22198.

Full text
Abstract:
Mestrado em Matemática e Aplicações
O termo Lógica Probabilística, em geral, designa qualquer lógica que incorpore conceitos probabilísticos num sistema lógico formal. Nesta dissertacção o principal foco de estudo e uma lógica probabilística (designada por Lógica Proposicional Probabilística Exógena), que tem por base a Lógica Proposicional Clássica. São trabalhados sobre essa lógica probabilística a síntaxe, a semântica e um cálculo de Hilbert, provando-se diversos resultados clássicos de Teoria de Probabilidade no contexto da EPPL. São também estudadas duas propriedades muito importantes de um sistema lógico - correcção e completude. Prova-se a correcção da EPPL da forma usual, e a completude fraca recorrendo a um algoritmo de satisfazibilidade de uma fórmula da EPPL. Serão também considerados na EPPL conceitos de outras lógicas probabilísticas (incerteza e probabilidades intervalares) e Teoria de Probabilidades (condicionais e independência).
The term Probabilistic Logic generally refers to any logic that incorporates probabilistic concepts in a formal logic system. In this dissertation, the main focus of study is a probabilistic logic (called Exogenous Probabilistic Propositional Logic, EPPL), which is based on Classical Propositional Logic. We introduce, for this probabilistic logic, its syntax, semantics and a Hilbert calculus, proving some classical results of Probability Theory in the context of EPPL. Moreover, two important properties of a logic system are also studied - soundness and completeness. We prove the soundness of EPPL in a standard way, and weak completeness using a satisfiability algorithm for a formula of EPPL. Concepts from other probabilistic logics (uncertainty and interval probability) and from Probability Theory (independence and conditional probability) will also be considered in EPPL
APA, Harvard, Vancouver, ISO, and other styles
40

Carvalho, Elsa Cristina Batista Bento. "Probabilistic constraint reasoning." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8603.

Full text
Abstract:
Dissertação apresentada para obtenção do Grau de Doutor em Engenharia Informática, pela Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
The continuous constraint paradigm has been often used to model safe reasoning in applications where uncertainty arises. Constraint propagation propagates intervals of uncertainty among the variables of the problem, eliminating values that do not belong to any solution. However, constraint programming is very conservative: if initial intervals are wide (reflecting large uncertainty), the obtained safe enclosure of all consistent scenarios may be inadequately wide for decision support. Since all scenarios are considered equally likely, insufficient pruning leads to great inefficiency if some costly decisions may be justified by very unlikely scenarios. Even when probabilistic information is available for the variables of the problem, the continuous constraint paradigm is unable to incorporate and reason with such information. Therefore, it is incapable of distinguishing between different scenarios, based on their likelihoods. This thesis presents a probabilistic continuous constraint paradigm that associates a probabilistic space to the variables of the problem, enabling probabilistic reasoning to complement the underlying constraint reasoning. Such reasoning is used to address probabilistic queries and requires the computation of multi-dimensional integrals on possibly non linear integration regions. Suitable algorithms for such queries are developed, using safe or approximate integration techniques and relying on methods from continuous constraint programming in order to compute safe covers of the integration region. The thesis illustrates the adequacy of the probabilistic continuous constraint framework for decision support in nonlinear continuous problems with uncertain information, namely on inverse and reliability problems, two different types of engineering problems where the developed framework is particularly adequate to support decision makers.
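A minimal sketch of the kind of probabilistic query addressed by the framework, namely the probability that a set of nonlinear continuous constraints holds under a known distribution, estimated here by plain Monte Carlo sampling rather than the safe constraint-based integration developed in the thesis; the constraints and the distribution are invented:

    import random

    def satisfies_constraints(x, y):
        """Nonlinear continuous constraints defining the feasible region (illustrative)."""
        return x * x + y * y <= 1.0 and y >= x * x - 0.5

    def probabilistic_query(n=200_000):
        """P(constraints hold) when (x, y) has a known joint density,
        here uniform on [-1, 1]^2 for simplicity: a 2-D integral over the region."""
        hits = sum(satisfies_constraints(random.uniform(-1, 1), random.uniform(-1, 1))
                   for _ in range(n))
        return hits / n

    print(probabilistic_query())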
APA, Harvard, Vancouver, ISO, and other styles
41

Tran, Vinh Phuc. "Modélisation à plusieurs échelles d'un milieu continu hétérogène aléatoire." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1159/document.

Full text
Abstract:
Lorsque les longueurs caractéristiques sont bien séparées, la théorie de l'homogénéisation propose un cadre théorique rigoureux pour les matériaux hétérogènes. Dans ce contexte, les propriétés macroscopiques peuvent être calculées à partir de la résolution d'un problème auxiliaire formulé sur un volume élémentaire représentatif (avec des conditions limites adéquates). Dans le présent travail, nous nous intéressons à l'homogénéisation de matériaux hétérogènes décrits à l'échelle la plus fine par deux modèles différents (tous deux dépendant d'une longueur caractéristique spécifique) alors que le milieu homogène équivalent se comporte, dans les deux cas, comme un milieu de Cauchy classique. Dans la première partie, une microstructure aléatoire de type Cauchy est considérée. La résolution numérique du problème auxiliaire, réalisée sur plusieurs réalisations, implique un coût de calcul important lorsque les longueurs caractéristiques des constituants ne sont pas bien séparées et/ou lorsque le contraste mécanique est élevé. Pour surmonter ces limitations, nous basons notre étude sur une description mésoscopique du matériau combinée à la théorie de l'information. Dans cette mésostructure, obtenue par filtrage, les détails les plus fins sont lissés. Dans la seconde partie, nous nous intéressons aux matériaux à gradient dans lesquels il existe au moins une longueur interne, qui induit des effets de taille à l'échelle macroscopique. La microstructure aléatoire est décrite par un modèle à gradient de contrainte récemment proposé. Malgré leur similarité conceptuelle, nous montrerons que le modèle de stress-gradient et strain-gradient définissent deux classes de matériaux distinctes. Nous proposons ensuite des approches simples (méthodes de champs moyens) pour mieux comprendre les hypothèses de modélisation. Les résultats semi-analytiques obtenus nous permettent d'explorer l'influence des paramètres du modèle sur les propriétés macroscopiques et constituent la première étape vers la simulation en champs complets
If the length-scales are well separated, homogenization theory can provide a robust theoretical framework for heterogeneous materials. In this context, the macroscopic properties can be retrieved from the solution to an auxiliary problem, formulated over the representative volume element (with appropriate boundary conditions). In the present work, we focus on the homogenization of heterogeneous materials which are described at the finest scale by two different material models (both depending on a specific characteristic length) while the homogeneous medium behaves as a classical Cauchy medium in both cases. In the first part, the random microstructure of a Cauchy medium is considered. Solving the auxiliary problem on multiple realizations can be very costly due to constitutive phases exhibiting not well-separated characteristic length scales and/or high mechanical contrasts. In order to circumvent these limitations, our study is based on a mesoscopic description of the material, combined with information theory. In the mesostructure, defined by a filtering framework, the fine-scale features are smoothed out. The second part is dedicated to gradient materials, which induce microscopic size effects due to the existence of microscopic material internal length(s). The random microstructure is described by a newly introduced stress-gradient model. Despite their conceptual similarity, we show that the stress-gradient and strain-gradient models define two different classes of materials. Next, simple approaches such as mean-field homogenization techniques are proposed to better understand the assumptions underlying the stress-gradient model. The obtained semi-analytical results allow us to explore the influence of the model parameters on the homogenized properties and constitute a first step toward full-field simulations
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Xiujuan. "A Probabilistic Model of Flower Fertility and Factors Influencing Seed Production in Winter Oilseed rape (Brassica napus L.)." Phd thesis, Châtenay-Malabry, Ecole centrale de Paris, 2011. http://tel.archives-ouvertes.fr/tel-00635536.

Full text
Abstract:
The number of pods per plant and the number of seeds per pod are the most variable yield components in winter oilseed rape (WOSR). The production of a seed is the combination of several physiological processes, namely the formation of ovules and pollen grains, the fertilization of the ovules and the development of young embryos; any problem in these processes may result in seed abortion or pod abortion. Both the number of ovules per pod and the potential for an ovule to develop into a mature seed may depend on pod position in the plant architecture and time of appearance. The complex developmental pattern of WOSR makes it difficult to analyse. In this study, we first investigate the variability of the following yield components: (a) ovules/pod, (b) seeds/pod, and (c) pods/axis in relation to two explanatory variables. These two variables are (1) flower and inflorescence position and (2) time of pod appearance, linked to the effect of assimilate availability. Based on the biological phenomena of flower fertility, we developed a probabilistic model to simulate the number of ovules per ovary and seeds per pod. The model can predict the number of pollen grains per flower and distinguish the factors that influence the yield. Field experiments were conducted in 2008 and 2009. The number and position of flowers that bloomed within the inflorescence were recorded based on observations every two to three days throughout the flowering season. Different trophic states were created by clipping the main stem or ramifications to investigate the effect of assimilate competition. The results indicate that the amount of available assimilates was the primary determinant of pod and seed production. The distribution of resources was significantly affected by both the position of pods within an inflorescence and the position of inflorescences within a plant in WOSR. In addition, the model estimate of the distribution parameter for pollen grain number indicated that pollination limitation could influence seed production. Furthermore, ovule viability could result in a decrease in the number of pods and the number of seeds per pod at the distal positions of the inflorescence. The model of flower fertility could be a tool to study strategies for improving seed yield in flowering plants
APA, Harvard, Vancouver, ISO, and other styles
43

Ammar, Moez. "Estimation du contexte par vision embarquée et schémas de commande pour l’automobile." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112425/document.

Full text
Abstract:
Les systèmes dotés d'autonomie doivent continument évaluer leur environnement, via des capteurs embarqués, afin de prendre des décisions pertinentes au regard de leur mission, mais aussi de l'endosystème et de l'exosystème. Dans le cas de véhicules dits 'intelligents', l'attention quant au contexte environnant se porte principalement d'une part sur des objets parfaitement normalisés, comme la signalisation routière verticale ou horizontale, et d'autre part sur des objets difficilement modélisables de par leur nombre et leur variété (piétons, cyclistes, autres véhicules, animaux, ballons, obstacles quelconques sur la chaussée, etc…). La décision a contrario offre un cadre formel, adapté à ce problème de détection d'objets variables, car modélisant le bruit plutôt qu'énumérant les objets à détecter. La contribution principale de cette thèse est d'adapter des mesures probabilistes de type NFA (Nombre de Fausses Alarmes) au problème de la détection d'objets soit ayant un mouvement propre, soit saillants par rapport au plan de la route. Un point fort des algorithmes développés est qu'ils s'affranchissent de tout seuil de détection. Une première mesure NFA permet d'identifier le sous-domaine de l'image (pixels non nécessairement connexes) dont les valeurs de niveau de gris sont les plus étonnantes, sous hypothèse de bruit gaussien (modèle naïf). Une seconde mesure NFA permet ensuite d'identifier le sous-ensemble des fenêtres de significativité maximale, sous hypothèse de loi binômiale (modèle naïf). Nous montrons que ces mesures NFA peuvent également servir de critères d'optimisation de paramètres, qu'il s'agisse du mouvement 6D de la caméra embarquée, ou d'un seuil de binarisation sur les niveaux de gris. Enfin, nous montrons que les algorithmes proposés sont génériques au sens où ils s'appliquent à différents types d'images en entrée, radiométriques ou de disparité. A l'opposé de l'approche a contrario, les modèles markoviens permettent d'injecter des connaissances a priori sur les objets recherchés. Nous les exploitons dans le cas de la classification de marquages routiers. A partir de l'estimation du contexte (signalisation, détection d'objets 'inconnus'), la partie commande comporte premièrement une spécification des trajectoires possibles et deuxièmement des lois en boucle fermée assurant le suivi de la trajectoire sélectionnée. Les diverses trajectoires possibles sont regroupées en un faisceau, soit un ensemble de fonctions du temps où divers paramètres permettent de régler les invariants géométriques locaux (pente, courbure). Ces paramètres seront globalement fonction du contexte extérieur au véhicule (présence de vulnérables, d'obstacles fixes, de limitations de vitesse, etc.) et permettent de déterminer l'élément du faisceau choisi. Le suivi de la trajectoire choisie s'effectue alors en utilisant des techniques de type platitude différentielle, qui s'avèrent particulièrement bien adaptées aux problèmes de suivi de trajectoire. Un système différentiellement plat est en effet entièrement paramétré par ses sorties plates et leurs dérivées. Une autre propriété caractéristique de ce type de systèmes est d'être linéarisable de manière exacte (et donc globale) par bouclage dynamique endogène et transformation de coordonnées. Le suivi stabilisant est alors trivialement obtenu sur le système linéarisé
To take relevant decisions, autonomous systems have to continuously estimate their environment via embedded sensors. In the case of 'intelligent' vehicles, the estimation of the context focuses both on perfectly standardized objects such as road signs (vertical or horizontal), and on objects that are unknown or difficult to describe due to their number and variety (pedestrians, cyclists, other vehicles, animals, any obstacles on the road, etc.). The a contrario modelling provides a formal framework adapted to this problem of detection of variable objects, by modeling the noise rather than the objects to detect. Our main contribution in this PhD work was to adapt probabilistic NFA (Number of False Alarms) measurements to the problem of detecting objects simply defined either as having their own motion, or as salient with respect to the road plane. A highlight of the proposed algorithms is that they are free from any detection parameter, in particular thresholds. A first NFA criterion allows the identification of the sub-domain of the image (not necessarily connected pixels) whose gray-level values are the most surprising under a Gaussian noise assumption (naive model). A second NFA criterion then allows identifying the subset of maximally significant windows under a binomial hypothesis (naive model). We prove that these NFA measurements can also be used for the estimation of intrinsic parameters, for instance either the 6D movement of the onboard camera, or a binarisation threshold. Finally, we prove that the proposed algorithms are generic and can be applied to different kinds of input images, for instance either radiometric images or disparity maps. In contrast to the a contrario approach, Markov models allow injecting a priori knowledge about the objects sought. We exploit them for road marking classification. From the context estimation (road signs, detected objects), the control part includes firstly a specification of the possible trajectories and secondly closed-loop laws ensuring the tracking of the selected path. The possible trajectories are grouped into a bundle, i.e. a set of functions of time in which various parameters set the local geometric invariants (slope, curvature). These parameters depend on the vehicle context (presence of vulnerable road users, fixed obstacles, speed limits, etc.) and allow determining the selected trajectory from the bundle. The selected trajectory is then tracked using differential flatness techniques, which prove particularly well suited to trajectory tracking problems. A differentially flat system is indeed fully parameterized by its flat outputs and their derivatives. Another characteristic property of this kind of system is that it can be exactly (and thus globally) linearized by endogenous dynamic feedback and coordinate transformation. A stabilizing tracking law is then trivially obtained from the linearized system
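A minimal sketch of a Number of False Alarms computation under the naive binomial model, with invented numbers of tests, window size and pixel-level probability (the actual criteria and image features of the thesis are not reproduced):

    from math import comb

    def binomial_tail(n, k, p):
        """P(B(n, p) >= k), the probability of at least k successes out of n."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def nfa(n_tests, n, k, p):
        """Number of False Alarms of a window with k 'surprising' pixels out of n,
        under a naive binomial background model; the window is significant if NFA < 1."""
        return n_tests * binomial_tail(n, k, p)

    # Hypothetical example: 10^6 candidate windows, 50 pixels per window, 30 of them
    # surprising, each pixel surprising with probability 0.2 under the noise model.
    print(nfa(1_000_000, 50, 30, 0.2))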
APA, Harvard, Vancouver, ISO, and other styles
44

Ugarte, Ari. "Combining machine learning and evolution for the annotation of metagenomics data." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066732/document.

Full text
Abstract:
La métagénomique sert à étudier les communautés microbiennes en analysant de l'ADN extrait directement d'échantillons pris dans la nature ; elle permet également d'établir un catalogue très étendu des gènes présents dans les communautés microbiennes. Ce catalogue doit être comparé contre les gènes déjà référencés dans les bases de données afin de retrouver des séquences similaires et ainsi déterminer la fonction des séquences qui le composent. Au cours de cette thèse, nous avons développé MetaCLADE, une nouvelle méthodologie qui améliore la détection des domaines protéiques déjà référencés pour des séquences issues des données métagénomiques et métatranscriptomiques. Pour le développement de MetaCLADE, nous avons modifié un système d'annotations de domaines protéiques qui a été développé au sein du Laboratoire de Biologie Computationnelle et Quantitative, appelé CLADE (CLoser sequences for Annotations Directed by Evolution) [17]. En général les méthodes pour l'annotation de domaines protéiques caractérisent les domaines connus avec des modèles probabilistes. Ces modèles probabilistes, appelés Sequence Consensus Models (SCMs), sont construits à partir d'un alignement des séquences homologues appartenant à différents clades phylogénétiques et ils représentent le consensus à chaque position de l'alignement. Cependant, quand les séquences qui forment l'ensemble des homologues sont très divergentes, les signaux des SCMs deviennent trop faibles pour être identifiés et donc l'annotation échoue. Afin de résoudre ce problème d'annotation de domaines très divergents, nous avons utilisé une approche fondée sur l'observation que beaucoup de contraintes fonctionnelles et structurelles d'une protéine ne sont pas globalement conservées parmi toutes les espèces, mais elles peuvent être conservées localement dans des clades. L'approche consiste donc à élargir le catalogue de modèles probabilistes en créant de nouveaux modèles qui mettent l'accent sur les caractéristiques propres à chaque clade. MetaCLADE, un outil conçu dans l'objectif d'annoter avec précision des séquences issues des expériences métagénomiques et métatranscriptomiques, utilise cette librairie afin de trouver des correspondances entre les modèles et une base de données de séquences métagénomiques ou métatranscriptomiques. Ensuite, il se sert d'une étape pré-calculée pour le filtrage des séquences qui permet de déterminer la probabilité qu'une prédiction soit considérée vraie. Cette étape pré-calculée est un processus d'apprentissage qui prend en compte la fragmentation de séquences métagénomiques pour les classer. Nous avons montré que l'approche multi-source en combinaison avec une stratégie de méta-apprentissage prenant en compte la fragmentation atteint une très haute performance
Metagenomics is used to study microbial communities through the analysis of DNA extracted directly from environmental samples. It allows establishing a very extensive catalog of the genes present in microbial communities. This catalog must be compared against the genes already referenced in the databases in order to find similar sequences and thus determine their function. In the course of this thesis, we have developed MetaCLADE, a new methodology that improves the detection of already referenced protein domains in metagenomic and metatranscriptomic sequences. For the development of MetaCLADE, we modified a protein domain annotation system that has been developed within the Laboratory of Computational and Quantitative Biology, called CLADE (CLoser sequences for Annotations Directed by Evolution) [17]. In general, methods for the annotation of protein domains characterize known domains with probabilistic models. These probabilistic models, called sequence consensus models (SCMs), are built from the alignment of homolog sequences belonging to different phylogenetic clades and represent the consensus at each position of the alignment. However, when the sequences that form the homolog set are very divergent, the signals of the SCMs become too weak to be identified and therefore the annotation fails. In order to solve this problem of annotation of very divergent domains, we used an approach based on the observation that many of the functional and structural constraints in a protein are not broadly conserved among all species, but can be found locally in clades. The approach is therefore to expand the catalog of probabilistic models by creating new models that focus on the specific characteristics of each clade. MetaCLADE, a tool designed with the objective of accurately annotating sequences coming from metagenomic and metatranscriptomic studies, uses this library in order to find matches between the models and a database of metagenomic or metatranscriptomic sequences. Then, it uses a pre-computed step for the filtering of the sequences which determines the probability that a prediction is a true hit. This pre-computed step is a learning process that takes into account the fragmentation of metagenomic sequences in order to classify them. We have shown that the multi-source approach in combination with a meta-learning strategy that takes fragmentation into account outperforms current methods
APA, Harvard, Vancouver, ISO, and other styles
45

Bertolini, André Carlos 1980. "Probabilistic history matching methodology for real-time reservoir surveillance = Metodologia de ajuste de histórico probabilístico para monitoramento contínuo de reservatórios." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265767.

Full text
Abstract:
Orientador: Denis José Schiozer
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências
Resumo: Este trabalho propõe uma metodologia de ajuste de histórico probabilístico em tempo real a fim de melhorar a previsão do reservatório ao longo do tempo. A metodologia proposta utiliza uma avaliação rigorosa nos modelos sincronizada com a frequência de aquisição de dados históricos. Esta avaliação contínua permite uma rápida identificação de deficiência do modelo e reação para iniciar um processo de recaracterização conforme necessário. Além disso, a metodologia inclui uma técnica de quantificação de incertezas utilizando os dados dinâmicos para reduzir as incertezas do reservatório, e um passo para incluir erros de medição e margens de tolerância para os dados históricos. O fluxo de trabalho da metodologia é composto por nove etapas. O fluxo começa com um conjunto de modelos representativos selecionados através de uma abordagem probabilística, as incertezas do reservatório, e um intervalo de aceitação dos dados históricos. Os modelos são simulados e os resultados comparados com os dados históricos. Os passos seguintes são a redução da incerteza e uma segunda avaliação do modelo para garantir um melhor ajuste de histórico. Depois, os modelos são filtrados para descartar aqueles que estejam fora da faixa de aceitação e, em seguida, usados para fazer previsões do reservatório. O último passo é a verificação de novos dados observados, que é sincronizada com a aquisição de dados. O método também apresenta uma maneira inovadora e eficiente para apoiar o monitoramento do reservatório através de indicadores gráficos da qualidade do ajuste. Um modelo de reservatório sintético foi usado em todo o trabalho a fim de controlar os resultados de todos os métodos que apoiam a metodologia proposta. Além disso, a metodologia foi aplicada no modelo UNISIM-IH, baseado no campo de Namorado, localizado na Bacia de Campos, Brasil. Os estudos de caso realizados mostraram que a metodologia proposta assimila continuamente os dados observados do reservatório, avalia o desempenho do modelo, e mantém um conjunto de modelos de reservatórios calibrados em tempo real
Abstract: This work focuses on probabilistic real-time history matching to improve reservoir forecasts over time. The proposed methodology uses a rigorous model evaluation, which is synchronized with the history data acquisition frequency. A continuous model evaluation allows quick identification of model deficiencies and a reaction to start a model reparametrization process as needed. In addition, the methodology includes an uncertainty quantification technique, which uses the dynamic data to reduce reservoir uncertainties, and a step to include measurement errors and observed data tolerance margins. The real-time history matching workflow is composed of nine steps. It starts with a set of representative models selected through a probabilistic approach, the uncertainties of the reservoir and an acceptance range for the history data. The models are run and the results compared with the history data. The following steps are uncertainty reduction and a second model evaluation to guarantee an improved history matching. The models are then filtered to discard any model outside the acceptance range, and then used to make reservoir forecasts. In the final step, the workflow searches for newly observed data. The methodology also presents a novel and efficient way to support reservoir surveillance through graphical indicators of matching quality. To better control the results of all the methods which support the proposed methodology, a synthetic reservoir model was used throughout the work. In addition, the proposed methodology was applied to the UNISIM-I-H model, which is based on the Namorado field, located in the Campos Basin, Brazil. The case studies performed show that the proposed history matching procedure continuously assimilates the observed reservoir data, evaluates model performance through quality indicators and maintains a set of calibrated reservoir models in real time
Doctorate
Reservoirs and Management
Doctor of Petroleum Sciences and Engineering
APA, Harvard, Vancouver, ISO, and other styles
46

Drouard, Vincent. "Localisation et suivi de visages à partir d'images et de sons : une approche Bayésienne temporelle et commutative." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM094/document.

Full text
Abstract:
In this thesis, we address the problem of head-pose estimation in the context of human-robot interaction. We approach this task in two steps. First, inspired by [Deleforge 15], we propose a new way of estimating the pose of a face by learning a link between two spaces: the space of pose parameters and a high-dimensional space representing the observations perceived by a camera. This link is learned with a probabilistic approach, using a mixture of affine regressions. Compared with existing head-pose estimation methods, we add new information to the pose-parameter space; these additions are needed to account for the diversity of the observations, such as different faces and expressions, but also the offsets between the detected face positions and their true positions, which makes the method robust under realistic conditions. Evaluations showed that this method gives better results than standard regression methods and results similar to state-of-the-art methods, some of which use additional information, such as depth, to estimate the pose. Second, we develop a temporal model that uses the ability of trackers to combine information from the present with that of the past. The goal is to produce a pose estimate that is smoother over time and to correct the oscillations between two consecutive independent estimates. The proposed model embeds the previous regression model in a Kalman filtering structure. This extension belongs to the family of switching dynamic models and keeps all the advantages of the previous mixture of affine regressions. Overall, the proposed temporal model yields more accurate and smoother pose estimates over a video. The switching dynamic model gives better results than a tracking model based on a standard Kalman filter. Although applied here to head-pose estimation, the model presented in this thesis is very general and can be used to solve other regression and tracking problems.
In this thesis, we address the well-known problem of head-pose estimation in the context of human-robot interaction (HRI). We accomplish this task with a two-step approach. First, we focus on the estimation of the head pose from visual features. We design features that can represent the face under different orientations and at various resolutions in the image, resulting in a high-dimensional representation of a face from an RGB image. Inspired by [Deleforge 15], we propose to solve the head-pose estimation problem by building a link between the head-pose parameters and the high-dimensional features perceived by a camera. This link is learned using a high-to-low probabilistic regression built from a probabilistic mixture of affine transformations. With respect to classic head-pose estimation methods, we extend the head-pose parameters by adding variables that account for variety in the observations (e.g. a misaligned face bounding box), so as to obtain a method that is robust under realistic conditions. Evaluation shows that our approach achieves better results than classic regression methods and results similar to state-of-the-art head-pose methods that use additional cues to estimate the head pose (e.g. depth information). Secondly, we propose a temporal model that uses the tracker's ability to combine information from both the present and the past. Our aim is to produce a smoother estimation output and to correct oscillations between two consecutive independent observations. The proposed approach embeds the previous regression into a temporal filtering framework. This extension is part of the family of switching dynamic models and keeps all the advantages of the mixture of affine regressions used. Overall, the proposed tracker gives a more accurate and smoother estimate of the head pose over a video sequence. In addition, the proposed switching dynamic model gives better results than standard tracking models such as the Kalman filter. While it is applied here to the head-pose estimation problem, the methodology presented in this thesis is very general and can be used to solve various regression and tracking problems; for example, we applied it to the tracking of a sound source in an image.
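The temporal part of the approach combines per-frame pose estimates with past information. As a simplified illustration of that idea (a plain constant-velocity Kalman filter rather than the switching dynamic model of the thesis; all parameter values are invented), the sketch below smooths a sequence of noisy yaw estimates:

    import numpy as np

    def smooth_yaw(measurements, dt=1.0 / 25.0, q=5.0, r=4.0):
        """Constant-velocity Kalman filter over a yaw sequence (degrees).
        State x = [yaw, yaw_rate]; q/r are process/measurement noise levels."""
        F = np.array([[1.0, dt], [0.0, 1.0]])                # state transition
        H = np.array([[1.0, 0.0]])                           # only yaw is observed
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        R = np.array([[r]])
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2) * 10.0
        out = []
        for z in measurements:
            x = F @ x                                        # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                              # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            out.append(float(x[0, 0]))
        return out

    # Noisy per-frame yaw estimates from a (hypothetical) frame-wise regressor.
    rng = np.random.default_rng(1)
    noisy = np.linspace(-30, 30, 100) + rng.normal(0.0, 3.0, 100)
    smoothed = smooth_yaw(noisy)

A switching model would additionally maintain several dynamic regimes (for example, a still head versus a turning head) and a discrete variable that selects among them at each frame.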
APA, Harvard, Vancouver, ISO, and other styles
47

Bertsimas, Dimitris J. "The Probabilistic Minimum Spanning Tree, Part II: Probabilistic Analysis and Asymptotic Results." Massachusetts Institute of Technology, Operations Research Center, 1988. http://hdl.handle.net/1721.1/5284.

Full text
Abstract:
In this paper, which is a sequel to [3], we perform probabilistic analysis, under the random Euclidean and the random length models, of the probabilistic minimum spanning tree (PMST) problem and of the two re-optimization strategies, in which we find the MST or the Steiner tree, respectively, among the points that are present at a particular instance. Under the random Euclidean model we prove that, with probability 1, as the number of points goes to infinity, the expected length of the PMST is the same as the expectation of the MST re-optimization strategy and within a constant of the Steiner re-optimization strategy. In the random length model, using a result of Frieze [6], we prove that with probability 1 the expected length of the PMST is asymptotically smaller than the expectation of the MST re-optimization strategy. These results add evidence that a priori strategies may offer a useful and practical method for resolving combinatorial optimization problems on modified instances. Key words: probabilistic analysis, combinatorial optimization, minimum spanning tree, Steiner tree.
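For intuition, the expectation of the MST re-optimization strategy under the random Euclidean model can be estimated numerically: sample which points are present (each independently with probability p), re-compute the MST of the present points, and average the resulting lengths. The sketch below does exactly this; it is an illustrative Monte Carlo estimate under assumed parameters, not a method from the paper.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial import distance_matrix

    def mst_reopt_expectation(points, p, trials=2000, rng=None):
        """Monte Carlo estimate of E[L_MST(S)], where each point belongs to S
        independently with probability p (the MST re-optimization strategy)."""
        if rng is None:
            rng = np.random.default_rng(0)
        total = 0.0
        for _ in range(trials):
            present = points[rng.random(len(points)) < p]
            if len(present) >= 2:
                total += minimum_spanning_tree(distance_matrix(present, present)).sum()
        return total / trials

    rng = np.random.default_rng(42)
    pts = rng.random((100, 2))      # 100 uniform points in the unit square
    print(mst_reopt_expectation(pts, p=0.5, rng=rng))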
APA, Harvard, Vancouver, ISO, and other styles
48

Gutti, Praveen. "Semistructured probabilistic object query language a query language for semistructured probabilistic data /." Lexington, Ky. : [University of Kentucky Libraries], 2007. http://hdl.handle.net/10225/701.

Full text
Abstract:
Thesis (M.S.)--University of Kentucky, 2007.
Title from document title page (viewed on April 2, 2008). Document formatted into pages; contains: vii, 42 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 39-40).
APA, Harvard, Vancouver, ISO, and other styles
49

Hohn, Jennifer Lynn. "Generalized Probabilistic Bowling Distributions." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/82.

Full text
Abstract:
Have you ever wondered if you are better than the average bowler? If so, there are a variety of ways to compute the average score of a bowling game, including methods that account for a bowler's skill level. In this thesis, we discuss several different ways to generate bowling scores randomly. For each distribution, we give results for the expected value and standard deviation of each frame's score, the expected value of the game's final score, and the correlation coefficient between the scores of the first and second rolls of a single frame. Furthermore, we shall generalize the results for each distribution to a game with an arbitrary number of frames played on an arbitrary number of pins. Additionally, we shall generalize the count of possible games for such a generalized game. Then, we shall derive the frequency distribution of each frame's score and its arithmetic mean in this generalized setting. Finally, to summarize the variety of distributions, we shall make tables that display the results obtained from each distribution used to model a particular bowler's score. We evaluate the special case of bowling 10 frames on 10 pins, which represents a standard bowling game.
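One simple way to "generate bowling scores randomly" is to let each roll knock down a uniformly random number of the pins still standing and to score the game with the standard strike and spare rules. The sketch below estimates the expected final score of a standard 10-frame, 10-pin game under this uniform-roll assumption; it illustrates only one of many possible distributions and is not the specific model developed in the thesis.

    import random

    def score(rolls):
        """Standard ten-pin scoring with strike and spare bonuses."""
        total, i = 0, 0
        for _ in range(10):
            if rolls[i] == 10:                          # strike
                total += 10 + rolls[i + 1] + rolls[i + 2]
                i += 1
            elif rolls[i] + rolls[i + 1] == 10:         # spare
                total += 10 + rolls[i + 2]
                i += 2
            else:                                       # open frame
                total += rolls[i] + rolls[i + 1]
                i += 2
        return total

    def random_game(rng):
        """One game where each roll is uniform over the pins still standing."""
        rolls = []
        for frame in range(10):
            first = rng.randint(0, 10)
            rolls.append(first)
            if frame < 9:
                if first < 10:
                    rolls.append(rng.randint(0, 10 - first))
            else:                                       # 10th frame bonus rolls
                if first == 10:
                    second = rng.randint(0, 10)
                    rolls.append(second)
                    rolls.append(rng.randint(0, 10) if second == 10
                                 else rng.randint(0, 10 - second))
                else:
                    second = rng.randint(0, 10 - first)
                    rolls.append(second)
                    if first + second == 10:
                        rolls.append(rng.randint(0, 10))
        return score(rolls)

    rng = random.Random(0)
    games = [random_game(rng) for _ in range(50000)]
    print(sum(games) / len(games))   # Monte Carlo estimate of the expected score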
APA, Harvard, Vancouver, ISO, and other styles
50

Sharkey, Michael Ian. "Probabilistic Proof-carrying Code." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22720.

Full text
Abstract:
Proof-carrying code is an application of software verification techniques to the problem of ensuring the safety of mobile code. However, previous proof-carrying code systems have assumed that mobile code will faithfully execute the instructions of the program. Realistic implementations of computing systems are susceptible to probabilistic behaviours that can alter the execution of a program in ways that can result in corruption or security breaches. We investigate the use of a probabilistic bytecode language to model deterministic programs that are executed on probabilistic computing systems. To model probabilistic safety properties, a probabilistic logic is adapted to our bytecode instruction language, and soundness is proven. A sketch of a completeness proof of the logic is also shown.
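To make the idea concrete, the toy sketch below runs a tiny stack-machine "bytecode" program on a machine where each PUSH may load a corrupted value with some small probability, and estimates by simulation how often a given safety property still holds. The instruction set, fault model, and property are invented for this illustration and are not those of the thesis, which reasons about such properties with a probabilistic logic rather than by simulation.

    import random

    # Toy stack-machine bytecode: each instruction is an (opcode, argument) pair.
    PROGRAM = [("push", 3), ("push", 4), ("add", None), ("store", 0)]

    def run(program, fault_prob, rng):
        """Execute the program; with probability fault_prob each PUSH loads a
        corrupted operand (one of its low-order bits is flipped)."""
        stack, memory = [], {}
        for op, arg in program:
            if op == "push":
                value = arg
                if rng.random() < fault_prob:
                    value ^= 1 << rng.randrange(8)
                stack.append(value)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "store":
                memory[arg] = stack.pop()
        return memory

    def safety_probability(program, holds, fault_prob=0.01, trials=100000):
        """Monte Carlo estimate of the probability that the property holds."""
        rng = random.Random(0)
        ok = sum(holds(run(program, fault_prob, rng)) for _ in range(trials))
        return ok / trials

    # Safety property: the value stored at address 0 stays in the range 0..20.
    print(safety_probability(PROGRAM, lambda mem: 0 <= mem.get(0, 0) <= 20))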
APA, Harvard, Vancouver, ISO, and other styles