Dissertations / Theses on the topic 'Probabilistik'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Probabilistik.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Proske, Dirk. "2. Dresdner Probabilistik-Symposium – Sicherheit und Risiko im Bauwesen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1218786674781-31766.
Full textProske, Dirk. "1. Dresdner Probabilistik-Symposium – Sicherheit und Risiko im Bauwesen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1218813448200-90769.
Full text
Proske, Dirk, Milad Mehdianpour, and Lucjan Gucma. "4th International Probabilistic Workshop: 12th-13th October 2006, Berlin, BAM (Federal Institute for Materials Research and Testing)." Universität für Bodenkultur Wien, 2009. https://slub.qucosa.de/id/qucosa%3A284.
Full text
PREFACE: The world today is shaped by high dynamics. A multitude of processes evolves in parallel, partly connected invisibly. Globalisation, for example, is such a process: here one can observe the exponential growth of connections, from the level of single humans to the level of cultures. Such connections lead us to the term complexity. Complexity is often understood as the product of the number of elements and the number of connections in a system. In other words, the world becomes more complex as connections increase. Complexity itself is a term for a system which is not fully understood, which is partly uncontrollable and indeterminate: exactly like humans. Growing from a single cell, a human will later show behaviour which we cannot predict in detail. After all, the human brain consists of 10^11 elements (cells). If social dynamical processes yield more complexity, we have to accept more indetermination. One has to hope that such indetermination does not affect the basis of human existence. If we look at the field of technology, we can see that indetermination or uncertainty is often dealt with explicitly there. This is valid for natural risk management, for nuclear engineering, civil engineering and for the design of ships. And however different the fields contributing to this symposium may be, one thing holds for all of them: people working in these fields have realised that a responsible use of technology requires the consideration of indetermination and uncertainty. This level has not yet been reached in the social sciences. It is the wish of the organisers of this symposium that not only civil engineers, mechanical engineers, mathematicians and ship builders take part, but also sociologists, managers and even politicians. There is therefore still a great opportunity for this symposium to grow. Indetermination does not have to be negative: it can also be seen as a chance.
Steinert, Rebecca. "Probabilistic Fault Management in Networked Systems." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144608.
Full text
Cutajar, Kurt. "Broadening the scope of gaussian processes for large-scale learning." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS063.
Full text
The renewed importance of decision making under uncertainty calls for a re-evaluation of Bayesian inference techniques targeting this goal in the big data regime. Gaussian processes (GPs) are a fundamental building block of many probabilistic kernel machines; however, the computational and storage complexity of GPs hinders their scaling to large modern datasets. The contributions presented in this thesis are two-fold. We first investigate the effectiveness of exact GP inference on a computational budget by proposing a novel scheme for accelerating regression and classification by way of preconditioning. In the spirit of probabilistic numerics, we also show how the numerical uncertainty introduced by approximate linear algebra should be adequately evaluated and incorporated. Bridging the gap between GPs and deep learning techniques remains a pertinent research goal, and the second broad contribution of this thesis is to establish and reinforce the role of GPs, and their deep counterparts (DGPs), in this setting. Whereas GPs and DGPs were once deemed unfit to compete with alternative state-of-the-art methods, we demonstrate how such models can also be adapted to the large-scale and complex tasks to which machine learning is now being applied.
Andriushchenko, Roman. "Computer-Aided Synthesis of Probabilistic Models." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417269.
Full text
Cruz de Echeverria Loebell, Nicole. "Sur le rôle de la déduction dans le raisonnement à partir de prémisses incertaines." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEP023/document.
Full text
The probabilistic approach to reasoning hypothesizes that most reasoning, both in everyday life and in science, takes place in contexts of uncertainty. The central deductive concepts of classical logic, consistency and validity, can be generalised to cover uncertain degrees of belief. Binary consistency can be generalised to coherence, where the probability judgments for two statements are coherent if and only if they respect the axioms of probability theory. Binary validity can be generalised to probabilistic validity (p-validity), where an inference is p-valid if and only if the uncertainty of its conclusion cannot be coherently greater than the sum of the uncertainties of its premises. But the fact that this generalisation is possible in formal logic does not imply that people will use deduction in a probabilistic way. The role of deduction in reasoning from uncertain premises was investigated across ten experiments and 23 inferences of differing complexity. The results provide evidence that coherence and p-validity are not just abstract formalisms, but that people follow the normative constraints set by them in their reasoning. It made no qualitative difference whether the premises were certain or uncertain, but certainty could be interpreted as the endpoint of a common scale for degrees of belief. The findings are evidence for the descriptive adequacy of coherence and p-validity as computational level principles for reasoning. They have implications for the interpretation of past findings on the roles of deduction and degrees of belief. And they offer a perspective for generating new research hypotheses in the interface between deductive and inductive reasoning.
Borges, Luís António Costa. "Probabilistic evaluation of the rotation capacity of steel joints = Avaliação probabilistica da capacidade de rotação de ligações metálicas." Master's thesis, Departamento de Engenharia Civil, 2003. http://hdl.handle.net/10316/15652.
Full text
Vu, Ngoc Tru. "Contribution à l'étude de la corrosion par carbonatation du béton armé : approche expérimentale et probabiliste." Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0008/document.
Full text
The steel corrosion induced by carbonation is a major cause of degradation of reinforced concrete structures. Two stages arise: the steel depassivation due to the decrease of the pH of the pore solution and the effective initiation, and then the propagation. A wide experimental study was carried out focusing on the first stage, in order to emphasize the effect of the exposure conditions, the type of cement and the concrete mixes, and the carbonation conditions of the concrete cover. In all, a set of 27 configurations was investigated. The free corrosion potential and the polarization resistance were measured over the course of the experiment during one year. The Tafel coefficients, along with the mass of corrosion products, were also measured regularly. The data set was analyzed in order to derive the detection thresholds for the effective onset of corrosion associated with the electrochemical parameters, from the calculation of the probabilities of good or false alarm. The threshold on the mass of corrosion products corresponding to this detection was also derived. The tests on concrete specimens (porosity, permeability, etc.) supplied data that were used to calibrate a finite element model of the onset of corrosion; this model was found to be in fairly good agreement with the experimental results.
Ayadi, Inès. "Optimisation des politiques de maintenance préventive dans un cadre de modélisation par modèles graphiques probabilistes." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1072/document.
Full text
At present, the equipment used in industry is more and more complex. It requires increased maintenance to guarantee an optimal level of service in terms of reliability and availability. Moreover, this guarantee of optimality often comes at a very high cost, which is constraining. Faced with these requirements, the management of equipment maintenance has become a major challenge: finding a maintenance policy that achieves an acceptable compromise between availability and the costs associated with maintaining the system. The work of this thesis starts, moreover, from the observation that several industrial applications need maintenance strategies that ensure both optimal safety and maximal profitability.
Shirmohammadi, Mahsa. "Qualitative analysis of synchronizing probabilistic systems." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0054/document.
Full text
Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, hence well established for modelling the interactions between a controller and its randomly responding environment. An MDP can be mathematically viewed as a one-and-a-half-player stochastic game played in rounds in which the controller chooses an action and the environment chooses a successor according to a fixed probability distribution. There are two incomparable views on the behavior of an MDP when the strategic choices are fixed. In the traditional view, an MDP is a generator of sequences of states, called the state-outcome; the winning condition of the player is thus expressed as a set of desired sequences of states that are visited during the game, e.g. a Borel condition such as reachability. The computational complexity of the related decision problems and the memory requirements of winning strategies for state-outcome conditions are well studied. Recently, MDPs have been viewed as generators of sequences of probability distributions over states, called the distribution-outcome. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that the probability mass accumulates in some (group of) state(s), possibly in the limit. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is always, eventually, weakly, or strongly p-synchronizing if respectively all, some, infinitely many, or all but finitely many distributions in the sequence are p-synchronizing. For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all epsilon > 0, a (1-epsilon)-synchronizing sequence; (iii) limit-sure winning if for all epsilon > 0, there is a strategy that produces a (1-epsilon)-synchronizing sequence. We consider the problem of deciding whether an MDP is winning, for each synchronizing and winning mode: we establish matching upper and lower complexity bounds of the problems, as well as the memory requirements for optimal winning strategies. As a further contribution, we study synchronization in probabilistic automata (PAs), a kind of MDP where controllers are restricted to use only word-strategies; i.e. they cannot observe the history of the system execution, only the number of choices made so far. The synchronizing language of a PA is then the set of all synchronizing word-strategies: we establish the computational complexity of the emptiness and universality problems for all synchronizing languages in all winning modes. We carry over results for synchronizing problems from MDPs and PAs to two-player turn-based games and non-deterministic finite state automata. Along with the main results, we establish new complexity results for alternating finite automata over a one-letter alphabet. In addition, we study different variants of synchronization for timed and weighted automata, as two instances of infinite-state systems.
Saad, Feras Ahmad Khaled. "Probabilistic data analysis with probabilistic programming." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113164.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 48-50).
Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This thesis introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
by Feras Ahmad Khaled Saad.
M. Eng.
Gendra, Casalí Bernat. "Probabilistic quantum metrology." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/371132.
Full text
Metrology is the field of research on statistical tools and the technological design of measurement devices to infer accurate information about physical parameters. The noise in a physical setup is ultimately related to that of its constituents, and at a microscopic level this is in turn dictated by the rules of quantum physics. Quantum measurements are inherently noisy and hence limit the precision that can be reached by any metrology scheme. The field of quantum metrology is devoted to the study of such limits and to the development of new tools that help to surmount them, often making use of unique quantum features such as superposition or entanglement. In the process of designing an estimation protocol, the experimentalist uses a figure of merit to optimise the performance of such protocols. Up until now most quantum metrology schemes and known bounds have been deterministic, that is, they are optimized in order to provide a valid estimate for each possible measurement outcome and to minimize the average error between the estimated and true value of the parameter. This benchmarking of a protocol by its average performance is very natural and convenient, but there can be scenarios in which this is not enough to express the concrete use that will be given to the obtained value. A central point in this thesis is that particular measurement outcomes can provide an estimate with a better precision than the average one. Notice that for this to happen there must be other imprecise outcomes so that the average does not violate the deterministic bounds. In this thesis we choose a figure of merit that reflects the maximum precision one can obtain. We optimise the precision of a set of heralded outcomes, and quantify the chance of such outcomes occurring, or in other words the probability that the protocol fails to provide an estimate. This can be understood as putting forward an extra feature that is always available to the experimentalist, namely the possibility of post-selecting the outcomes of their measurements and giving with some probability an inconclusive answer. These probabilistic protocols guarantee a minimal precision upon a heralded outcome. In quantum mechanics there are many ways in which data can be read off from a quantum system. Hence, the optimization of probabilistic schemes cannot be reduced to reinterpreting results from the canonical (deterministic) quantum metrology schemes; rather, it entails the search for completely different quantum generalized measurements. Specifically, we design probabilistic protocols for phase, direction and reference-frame estimation. We show that post-selection has two possible effects: to compensate for a bad choice of probe state or to counterbalance the negative effects of noise present in the system state or in the measurement process. In particular, we show that adding the possibility of abstaining in phase estimation in the presence of noise can produce an enhancement in precision that overtakes the ultimate bound of deterministic protocols. The bound derived is the best precision that can be obtained, and in this sense one can speak of an ultimate bound on precision.
Munch, Mélanie. "Améliorer le raisonnement dans l'incertain en combinant les modèles relationnels probabilistes et la connaissance experte." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASB011.
Full text
This thesis focuses on integrating expert knowledge to enhance reasoning under uncertainty. Our goal is to guide the learning of probabilistic relations with expert knowledge for domains described by ontologies. To do so we propose to couple knowledge bases (KBs) with an object-oriented extension of Bayesian networks, the probabilistic relational models (PRMs). Our aim is to complement statistical learning with expert knowledge in order to learn a model as close as possible to reality and to analyze it quantitatively (with probabilistic relations) and qualitatively (with causal discovery). We developed three algorithms through three distinct approaches, whose main differences lie in their degree of automation and in the integration (or not) of human expert supervision. The originality of our work is the combination of two broadly opposed philosophies: while the Bayesian approach favors the statistical analysis of the given data in order to reason with it, the ontological approach is based on the modeling of expert knowledge to represent a domain. Combining the strengths of the two improves both reasoning under uncertainty and the expert knowledge.
Asafu-Adjei, Joseph Kwaku. "Probabilistic Methods." VCU Scholars Compass, 2007. http://hdl.handle.net/10156/1420.
Full text
Baier, Christel, Benjamin Engel, Sascha Klüppelholz, Steffen Märcker, Hendrik Tews, and Marcus Völp. "A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-129917.
Full text
Weidner, Thomas. "Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208732.
Full text
Schmitt, Lucie. "Durabilité des ouvrages en béton soumis à la corrosion : optimisation par une approche probabiliste." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0009/document.
Full text
Mastering the durability of new structures and the need to extend the lifespan of existing constructions correspond to social issues of the highest order and are part of the principles of a circular economy. The durability of concrete structures thus occupies a central position in the normative context. This thesis follows the work of J. Mai-Nhu and aims at extending the field of application of the SDReaM-crete model by integrating concretes based on mineral additions and by defining a limit-state criterion based on a quantity of corroded products. An approach based on numerical optimization of predictive computations is set up to perform reliability analyses by considering the main mechanisms related to the corrosion of reinforcement, carbonation and chlorides. This model enables the optimization of the sizing of concrete covers and performances by further integrating the environmental conditions as defined by the standards.
Larsson, Emelie. "Utvärdering av osäkerhet och variabilitet vid beräkning av riktvärden för förorenad mark." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-218289.
Full text
In Sweden, approximately 80,000 contaminated areas have been identified. Some of these areas are in need of remediation to cope with the effects that the contaminants have on both humans and the environment. The Swedish Environmental Protection Agency (EPA) has published a methodology on how to perform risk assessments for contaminated soils, together with a complex model for calculating soil guideline values. The guideline value model is deterministic and calculates single guideline values for contaminants. The model does not account explicitly for uncertainty and variability in parameters but rather handles them implicitly by using safety factors and reasonable worst-case assumptions for different parameters. One method to account explicitly for uncertainty and variability in a risk assessment is to perform a probabilistic risk assessment (PRA) through Monte Carlo simulations. A benefit of this is that the parameters can be defined with probability density functions (PDFs) that account for the uncertainty and variability of the parameters. In this Master's thesis a PRA was conducted, followed by calculations of probabilistic guideline values for selected contaminants. The model was run for two sets of PDFs for the parameters: one was collected from extensive research in published articles, and another included the deterministic values set by the Swedish EPA for all parameters. The sets generated cumulative probability distributions (CPDs) of guideline values that, depending on the contaminant, corresponded to varying degrees with the deterministic guideline values that the Swedish EPA had calculated. In general, there was a stronger correlation between the deterministic guideline values and the CPDs for the sensitive land-use scenario compared to the less sensitive one. For contaminants such as dioxin and PCB-7, a lowering of the guideline values would be required to fully protect humans and the environment based on the results in this thesis. Based on a recent soil investigation that Geosigma AB has performed, a case study was also conducted. In general there was a correlation between the deterministic site-specific guideline values and the CPDs in the case study. In addition to this, a health-oriented risk assessment was performed in the thesis, where unexpected exposure pathways were found to be governing for the guideline values. For some contaminants the exposure pathway governing the guideline values in the PRA differed from the deterministic one in 70-90% of the simulations. Also, the contribution of the exposure pathways to the unadjusted health guideline values differed from the deterministic case. This indicates the need to always quantify the composition of guideline values in probabilistic risk assessments.
Faix, Marvin. "Conception de machines probabilistes dédiées aux inférences bayésiennes." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM079/document.
Full text
The aim of this research is to design computers best suited to probabilistic reasoning. The focus of the research is on the processing of uncertain data and on the computation of probability distributions. For this, new machine architectures are presented. The concept they are designed on differs from the one proposed by Von Neumann, without any fixed- or floating-point arithmetic. These architectures could replace the current processors in sensor processing and robotics. In this thesis, two types of probabilistic machines are presented. Their designs are radically different, but both are dedicated to Bayesian inference and use stochastic computing. The first deals with small-dimension inference problems and uses stochastic computing to perform the operations necessary to calculate the inference. This machine is based on the concept of a probabilistic bus and exhibits strong parallelism. The second machine can deal with intractable inference problems. It implements a particular MCMC method: the Gibbs algorithm at the binary level. In this case, stochastic computing is used for sampling the distribution of interest. An important feature of this machine is its ability to circumvent the convergence problems generally attributed to stochastic computing. Finally, an extension of this second type of machine is presented. It consists of a generic and programmable machine designed to approximate the solution to any inference problem.
Cheng, Chi Wa. "Probabilistic topic modeling and classification probabilistic PCA for text corpora." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1263.
Full text
Crubillé, Raphaëlle. "Behavioural distances for probabilistic higher-order programs." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC084/document.
Full text
The present thesis is devoted to the study of behavioral equivalences and distances for higher-order probabilistic programs. The manuscript is divided into three parts. In the first one, higher-order probabilistic languages are presented, as well as how to compare such programs with context equivalence and context distance. The second part follows an operational approach with the aim of building equivalences and metrics easier to handle than their contextual counterparts. We take as starting point the two behavioral equivalences introduced by Dal Lago, Sangiorgi and Alberti for the probabilistic lambda-calculus equipped with a call-by-name evaluation strategy: trace equivalence and bisimulation equivalence. These authors showed that for their language, trace equivalence completely characterizes context equivalence (i.e. is fully abstract), while probabilistic bisimulation is a sound approximation of context equivalence but is not fully abstract. In the operational part of the present thesis, we show that probabilistic bisimulation becomes fully abstract when we replace the call-by-name paradigm by the call-by-value one. The remainder of this part is devoted to a quantitative generalization of trace equivalence, i.e. a trace distance on programs. We introduce first a trace distance for an affine probabilistic lambda-calculus (where a function can use its argument at most once), and then for a more general probabilistic lambda-calculus where functions have the ability to duplicate their arguments. In these two cases, we show that these trace distances are fully abstract. In the third part, two denotational models of higher-order probabilistic languages are considered: the model of Danos and Ehrhard based on probabilistic coherence spaces, which interprets the language PCF enriched with discrete probabilities, and the model of Ehrhard, Pagani and Tasson based on measurable cones and measurable stable functions, which interprets PCF equipped with continuous probabilities. The present thesis establishes two results on the structure of these models. We first show that the exponential comonad of the category of probabilistic coherence spaces can be expressed using the free commutative comonoid: this is a genericity result for this category seen as a model of Linear Logic. The second result clarifies the connection between these two models: we show that the category of measurable cones and measurable stable functions is a conservative extension of the co-Kleisli category of probabilistic coherence spaces. This means that the recently introduced model of Ehrhard, Pagani and Tasson can be seen as the generalization to the continuous case of the model for PCF with discrete probabilities in probabilistic coherence spaces.
Ben Mrad, Ali. "Observations probabilistes dans les réseaux bayésiens." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0018/document.
Full text
In a Bayesian network, evidence on a variable usually signifies that this variable is instantiated, meaning that the observer can affirm with certainty that the variable is in the signaled state. This thesis focuses on other types of evidence, often called uncertain evidence, which cannot be represented by the simple assignment of variables. This thesis clarifies and studies different concepts of uncertain evidence in a Bayesian network and offers various applications of uncertain evidence in Bayesian networks. Firstly, we present a review of uncertain evidence in Bayesian networks in terms of terminology, definition, specification and propagation. It shows that the vocabulary is not clear and that some terms are used to represent different concepts. We identify three types of uncertain evidence in Bayesian networks and we propose the following terminology: likelihood evidence, fixed probabilistic evidence and not-fixed probabilistic evidence. We define them and describe updating algorithms for the propagation of uncertain evidence. Finally, we propose several examples of the use of fixed probabilistic evidence in Bayesian networks. The first example concerns evidence on a subpopulation applied in the context of a geographical information system. The second example is an organization of agent-encapsulated Bayesian networks that have to collaborate to solve a problem. The third example concerns the transformation of evidence on continuous variables into fixed probabilistic evidence. The algorithm BN-IPFP-1 has been implemented and used on medical data from CHU Habib Bourguiba in Sfax.
Qiu, Feng. "Probabilistic covering problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47567.
Full text
Taylor, Jonathan 1981. "Lax probabilistic bisimulation." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111546.
Full text
Seidel, Karen. "Probabilistic communicating processes." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306194.
Full text
Kim, Jeong-Gyoo. "Probabilistic shape models." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433472.
Full text
Power, Christopher. "Probabilistic symmetry reduction." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3493/.
Full text
Binter, Roman. "Applied probabilistic forecasting." Thesis, London School of Economics and Political Science (University of London), 2012. http://etheses.lse.ac.uk/559/.
Full text
Jones, Claire. "Probabilistic non-determinism." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/413.
Full text
Angelopoulos, Nicos. "Probabilistic finite domains." Thesis, City University London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342823.
Full text
Ranganathan, Ananth. "Probabilistic topological maps." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22643.
Full text
Committee Chair: Dellaert, Frank; Committee Member: Balch, Tucker; Committee Member: Christensen, Henrik; Committee Member: Kuipers, Benjamin; Committee Member: Rehg, Jim.
Iyer, Ranjit. "Probabilistic distributed control." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1568128211&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Full text
Chien, Yung-hsin. "Probabilistic preference modeling." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 1998.
Full text
Morris, Quaid Donald Jozef 1972. "Practical probabilistic inference." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29989.
Full text
Includes bibliographical references (leaves 157-163).
The design and use of expert systems for medical diagnosis remains an attractive goal. One such system, the Quick Medical Reference, Decision Theoretic (QMR-DT), is based on a Bayesian network. This very large-scale network models the appearance and manifestation of disease and has approximately 600 unobservable nodes and 4000 observable nodes that represent, respectively, the presence and measurable manifestation of disease in a patient. Exact inference of posterior distributions over the disease nodes is extremely intractable using generic algorithms. Inference can be made much more efficient by exploiting the QMR-DT's unique structure. Indeed, tailor-made inference algorithms for the QMR-DT efficiently generate exact disease posterior marginals for some diagnostic problems and accurate approximate posteriors for others. In this thesis, I identify a risk with using the QMR-DT disease posteriors for medical diagnosis. Specifically, I show that patients and physicians conspire to preferentially report findings that suggest the presence of disease. Because the QMR-DT does not contain an explicit model of this reporting bias, its disease posteriors may not be useful for diagnosis. Correcting these posteriors requires augmenting the QMR-DT with additional variables and dependencies that model the diagnostic procedure. I introduce the diagnostic QMR-DT (dQMR-DT), a Bayesian network containing both the QMR-DT and a simple model of the diagnostic procedure. Using diagnostic problems sampled from the dQMR-DT, I show the danger of doing diagnosis using disease posteriors from the unaugmented QMR-DT.
I introduce a new class of approximate inference methods, based on feed-forward neural networks, for both the QMR-DT and the dQMR-DT. I show that these methods, recognition models, generate accurate approximate posteriors on the QMR-DT, on the dQMR-DT, and on a version of the dQMR-DT specified only indirectly through a set of presolved diagnostic problems.
by Quaid Donald Jozef Morris.
Ph.D. in Computational Neuroscience
Mansinghka, Vikash Kumar. "Natively probabilistic computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47892.
Full text
Includes bibliographical references (leaves 129-135).
I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra.
I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence.
by Vikash Kumar Mansinghka.
Ph.D.
Conduit, Bryce David. "Probabilistic alloy design." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648162.
Full text
Cowans, Philip John. "Probabilistic document modelling." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614041.
Full text
Barbosa, Fábio Daniel Moreira. "Probabilistic propositional logic." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22198.
Full text
The term Probabilistic Logic generally refers to any logic that incorporates probabilistic concepts in a formal logic system. In this dissertation, the main focus of study is a probabilistic logic (called Exogenous Probabilistic Propositional Logic, EPPL), which is based on Classical Propositional Logic. We introduce, for this probabilistic logic, its syntax, semantics and a Hilbert calculus, proving some classical results of Probability Theory in the context of EPPL. Moreover, two important properties of a logic system are studied: soundness and completeness. We prove the soundness of EPPL in a standard way, and weak completeness using a satisfiability algorithm for a formula of EPPL. Concepts of other probabilistic logics (uncertainty and interval probabilities) and of Probability Theory (independence and conditionals) will also be considered in EPPL.
Carvalho, Elsa Cristina Batista Bento. "Probabilistic constraint reasoning." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8603.
Full text
The continuous constraint paradigm has been often used to model safe reasoning in applications where uncertainty arises. Constraint propagation propagates intervals of uncertainty among the variables of the problem, eliminating values that do not belong to any solution. However, constraint programming is very conservative: if initial intervals are wide (reflecting large uncertainty), the obtained safe enclosure of all consistent scenarios may be inadequately wide for decision support. Since all scenarios are considered equally likely, insufficient pruning leads to great inefficiency if some costly decisions may be justified by very unlikely scenarios. Even when probabilistic information is available for the variables of the problem, the continuous constraint paradigm is unable to incorporate and reason with such information. Therefore, it is incapable of distinguishing between different scenarios, based on their likelihoods. This thesis presents a probabilistic continuous constraint paradigm that associates a probabilistic space to the variables of the problem, enabling probabilistic reasoning to complement the underlying constraint reasoning. Such reasoning is used to address probabilistic queries and requires the computation of multi-dimensional integrals on possibly non-linear integration regions. Suitable algorithms for such queries are developed, using safe or approximate integration techniques and relying on methods from continuous constraint programming in order to compute safe covers of the integration region. The thesis illustrates the adequacy of the probabilistic continuous constraint framework for decision support in nonlinear continuous problems with uncertain information, namely on inverse and reliability problems, two different types of engineering problems where the developed framework is particularly adequate to support decision makers.
Tran, Vinh Phuc. "Modélisation à plusieurs échelles d'un milieu continu hétérogène aléatoire." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1159/document.
Full text
If the length scales are well separated, homogenization theory can provide a robust theoretical framework for heterogeneous materials. In this context, the macroscopic properties can be retrieved from the solution to an auxiliary problem, formulated over the representative volume element (with appropriate boundary conditions). In the present work, we focus on the homogenization of heterogeneous materials which are described at the finest scale by two different material models (both depending on a specific characteristic length), while the homogeneous medium behaves as a classical Cauchy medium in both cases. In the first part, the random microstructure of a Cauchy medium is considered. Solving the auxiliary problem on multiple realizations can be very costly due to constitutive phases exhibiting not-well-separated characteristic length scales and/or high mechanical contrasts. In order to circumvent these limitations, our study is based on a mesoscopic description of the material, combined with information theory. In the mesostructure, defined by a filtering framework, the fine-scale features are smoothed out. The second part is dedicated to gradient materials, which induce microscopic size effects due to the existence of microscopic material internal lengths. The random microstructure is described by a newly introduced stress-gradient model. Despite being conceptually similar, we show that the stress-gradient and strain-gradient models define two different classes of materials. Next, simple approaches such as mean-field homogenization techniques are proposed to better understand the assumptions underlying the stress-gradient model. The obtained semi-analytical results allow us to explore the influence of the model parameters on the homogenized properties and constitute a first step toward full-field simulations.
Wang, Xiujuan. "A Probabilistic Model of Flower Fertility and Factors Influencing Seed Production in Winter Oilseed rape (Brassica napus L.)." PhD thesis, Châtenay-Malabry, Ecole centrale de Paris, 2011. http://tel.archives-ouvertes.fr/tel-00635536.
Full text
Ammar, Moez. "Estimation du contexte par vision embarquée et schémas de commande pour l’automobile." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112425/document.
Full text
To take relevant decisions, autonomous systems have to continuously estimate their environment via embedded sensors. In the case of 'intelligent' vehicles, the estimation of the context focuses both on objects that are perfectly known, such as road signs (vertical or horizontal), and on objects that are unknown or difficult to describe due to their number and variety (pedestrians, cyclists, other vehicles, animals, any obstacles on the road, etc.). The a contrario modelling provides a formal framework adapted to the problem of detection of variable objects, by modeling the noise rather than the objects to detect. Our main contribution in this PhD work was to adapt the probabilistic NFA (Number of False Alarms) measurements to the problem of detection of objects defined simply as either having their own motion or being salient to the road plane. A highlight of the proposed algorithms is that they are free of any detection parameter, in particular thresholds. A first NFA criterion allows the identification of the sub-domain of the image (not necessarily connected pixels) whose gray-level values are the most surprising under a Gaussian noise assumption (naive model). A second NFA criterion then allows identifying the subset of maximally significant windows under a binomial hypothesis (naive model). We prove that these measurements (NFA) can also be used for the estimation of intrinsic parameters, for instance either the 6D movement of the onboard camera or a binarisation threshold. Finally, we prove that the proposed algorithms are generic and can be applied to different kinds of input images, for instance either radiometric images or disparity maps. In contrast to the a contrario approach, Markov models allow injecting a priori knowledge about the objects sought; we use them in the case of road-marking classification. From the context estimation (road signs, detected objects), the control part includes firstly a specification of the possible trajectories and secondly the laws to achieve the selected path. The possible trajectories are grouped into a bundle, and various parameters are used to set the local geometric invariants (slope, curvature). These parameters depend on the vehicle context (presence of vulnerable road users, fixed obstacles, speed limits, etc.) and allow selecting the trajectory from the bundle. A differentially flat system is indeed fully parameterized by its flat outputs and their derivatives. Another feature of this kind of system is that it can be exactly linearized by endogenous dynamic feedback and coordinate transformation. A tracking stabilizer is then trivially obtained from the linearized system.
Ugarte, Ari. "Combining machine learning and evolution for the annotation of metagenomics data." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066732/document.
Full text
Metagenomics is used to study microbial communities by analyzing DNA extracted directly from environmental samples. It allows a very extensive catalog of the genes present in the microbial communities to be established. This catalog must be compared against the genes already referenced in databases in order to find similar sequences and thus determine their function. In the course of this thesis, we have developed MetaCLADE, a new methodology that improves the detection of already-referenced protein domains in metagenomic and metatranscriptomic sequences. For the development of MetaCLADE, we modified an annotation system for protein domains developed within the Laboratory of Computational and Quantitative Biology, called CLADE (CLoser sequences for Annotations Directed by Evolution) [17]. In general, methods for the annotation of protein domains characterize protein domains with probabilistic models. These probabilistic models, called sequence consensus models (SCMs), are built from the alignment of homologous sequences belonging to different phylogenetic clades, and they represent the consensus at each position of the alignment. However, when the sequences that form the homologous set are very divergent, the signals of the SCMs become too weak to be identified and the annotation therefore fails. In order to solve this problem of annotating very divergent domains, we used an approach based on the observation that many of the functional and structural constraints in a protein are not broadly conserved among all species, but can be found locally in clades. The approach is therefore to expand the catalog of probabilistic models by creating new models that focus on the specific characteristics of each clade. MetaCLADE, a tool designed with the objective of accurately annotating sequences coming from metagenomics and metatranscriptomics studies, uses this library in order to find matches between the models and a database of metagenomic or metatranscriptomic sequences. Then, it uses a pre-computed step for the filtering of the sequences which determines the probability that a prediction is a true hit. This pre-computed step is a learning process that takes into account the fragmentation of metagenomic sequences in order to classify them. We have shown that the multi-source approach, in combination with a meta-learning strategy taking fragmentation into account, outperforms current methods.
Bertolini, André Carlos 1980. "Probabilistic history matching methodology for real-time reservoir surveillance = Metodologia de ajuste de histórico probabilístico para monitoramento contínuo de reservatórios." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265767.
Full textTese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências
Abstract: This work focuses on probabilistic real-time history matching to improve reservoir forecasts over time. The proposed methodology uses a rigorous model evaluation, which is synchronized with the history-data acquisition frequency. Continuous model evaluation allows quick identification of model deficiencies and a reaction to start a model reparametrization process as needed. In addition, the methodology includes an uncertainty quantification technique, which uses the dynamic data to reduce reservoir uncertainties, and a step to include measurement errors and an observed-data tolerance margin. The real-time history matching workflow is composed of nine steps. It starts with a set of representative models selected through a probabilistic approach, the uncertainties of the reservoir and an acceptance range for the history data. The models are run and the results compared with the history data. The following steps are uncertainty reduction and a second model evaluation to guarantee an improved history match. The models are then filtered to discard any model outside the acceptance range, and then used to make reservoir forecasts. In the final step, the workflow searches for newly observed data. The methodology also presents a novel and efficient way to support reservoir surveillance through graphical indicators of matching quality. To better control the results of all the methods supporting the proposed methodology, a synthetic reservoir model was used throughout the work. In addition, the proposed methodology was applied to the UNISIM-I-H model, which is based on the Namorado field, located in the Campos Basin, Brazil. The case studies performed showed that the proposed history matching procedure continuously assimilates the observed reservoir data, evaluates model performance through quality indicators and maintains a set of calibrated reservoir models in real time.
Doctorate
Reservoirs and Management
Doctor of Petroleum Science and Engineering
Drouard, Vincent. "Localisation et suivi de visages à partir d'images et de sons : une approche Bayésienne temporelle et commutative." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM094/document.
Full text
In this thesis, we address the well-known problem of head-pose estimation in the context of human-robot interaction (HRI). We accomplish this task in a two-step approach. First, we focus on the estimation of the head pose from visual features. We design features that can represent the face under different orientations and at various resolutions in the image. The result is a high-dimensional representation of a face from an RGB image. Inspired by [Deleforge 15], we propose to solve the head-pose estimation problem by building a link between the head-pose parameters and the high-dimensional features perceived by a camera. This link is learned using a high-to-low probabilistic regression built from a probabilistic mixture of affine transformations. With respect to classic head-pose estimation methods, we extend the head-pose parameters by adding variables that take into account variety in the observations (e.g. a misaligned face bounding box), in order to obtain a method that is robust under realistic conditions. Evaluation shows that our approach achieves better results than classic regression methods and results similar to state-of-the-art head-pose methods that use additional cues to estimate the head pose (e.g. depth information). Secondly, we propose a temporal model that uses a tracker's ability to combine information from both the present and the past. Our aim is to give a smoother estimation output and to correct oscillations between two consecutive independent observations. The proposed approach embeds the previous regression into a temporal filtering framework. This extension belongs to the family of switching dynamic models and keeps all the advantages of the mixture of affine regressions used. Overall, the proposed tracker gives a more accurate and smoother estimation of the head pose over a video sequence. In addition, the proposed switching dynamic model gives better results than standard tracking models such as the Kalman filter. While applied here to the head-pose estimation problem, the methodology presented in this thesis is quite general and can be used to solve various regression and tracking problems; for example, we applied it to the tracking of a sound source in an image.
Bertsimas, Dimitris J. "The Probabilistic Minimum Spanning Tree, Part II: Probabilistic Analysis and Asymptotic Results." Massachusetts Institute of Technology, Operations Research Center, 1988. http://hdl.handle.net/1721.1/5284.
Full text
Gutti, Praveen. "Semistructured probabilistic object query language: a query language for semistructured probabilistic data." Lexington, Ky.: University of Kentucky Libraries, 2007. http://hdl.handle.net/10225/701.
Full text
Title from document title page (viewed on April 2, 2008). Document formatted into pages; contains: vii, 42 p.: ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 39-40).
Hohn, Jennifer Lynn. "Generalized Probabilistic Bowling Distributions." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/82.
Full text
Sharkey, Michael Ian. "Probabilistic Proof-carrying Code." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22720.
Full text