
Dissertations / Theses on the topic 'OR with probabilistic transitions'



Consult the top 50 dissertations / theses for your research on the topic 'OR with probabilistic transitions.'




1

Lees, Benjamin T. "Quantum spin systems, probabilistic representations and phase transitions." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/82123/.

Full text
Abstract:
This thesis investigates properties of classical and quantum spin systems on lattices. These models have been widely studied due to their relevance to condensed matter physics. We identify the ground states of an antiferromagnetic RP2 model; these ground states are very different from those of the ferromagnetic model, and we settle a disagreement in the literature over their structure. Correlation inequalities are proved for the spin-1/2 XY model and the ground state of the spin-1 XY model. This provides fresh results in a topic that had been stagnant and allows the proof of some new results, for example the existence of some correlation functions in the thermodynamic limit. The occurrence of nematic order at low temperature in a quantum nematic model is proved using the method of reflection positivity and infrared bounds. Previous results on this nematic order were achieved indirectly via a probabilistic representation. This result is maintained in the presence of a small antiferromagnetic interaction, a case not previously covered. Probabilistic representations for quantum spin systems are introduced and some consequences are presented. In particular, Néel order is proved in a bilinear-biquadratic spin-1 system at low temperature. This result extends the famous result of Dyson, Lieb and Simon [35]. Dilute spin systems are introduced, and the occurrence of a phase transition at low temperature, characterised by preferential occupation of the even or odd sublattice of a cubic box, is proved. This result is the first of its type for such a mixed classical and quantum system. A probabilistic representation of the spin-1 Bose-Hubbard model is also presented and some consequences are proved.
APA, Harvard, Vancouver, ISO, and other styles
2

Sato, Tetsuya. "Identifying All Preorders on the Subdistribution Monad." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199080.

Full text
3

Xia, Victoria (Victoria F.). "Learning of probabilistic transition models for robotic actions via templates." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121497.

Full text
Abstract:
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 71-72). In this work we present templates as an approach for learning probabilistic transition models for actions. By constructing templates via a greedy procedure that builds up lists of deictic references selecting the relevant objects to pass to a predictor, we learn compact representations of a transition model whose training time and performance do not suffer from the presence of additional objects in more complex scenes. We present various algorithms for simultaneously separating training data into corresponding templates and learning template parameters, using clustering-based approaches for the initial assignment of samples to templates, followed by EM-like methods to further separate the data and train the templates. We evaluate templates on variants of a simulated, 3D table-top pushing task involving stacks of objects. In comparing our approach to a baseline that considers all objects in the scene, we find that the templates approach is more data-efficient in terms of the impact of the number of training samples on performance.
4

Workman, Michael. "On Probabilistic Transition Rates Used in Markov Models for Pitting Corrosion." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1396448113.

Full text
5

Delgado, Daniel Javier Casani. "Planejamento probabilístico como busca num espaço de transição de estados." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04062013-060258/.

Full text
Abstract:
One of the most widely used models for describing probabilistic planning problems, i.e., planning of actions with probabilistic effects, is the Markov Decision Process (MDP). Traditional solutions are based on dynamic programming; the most efficient are based on Real-Time Dynamic Programming (RTDP), which explores only the states reachable from a given initial state. There are also efficient solutions based on heuristic search in an AND/OR graph, where AND nodes represent the probabilistic effects of an action and OR nodes represent the choices among alternative actions. These solutions likewise explore only the states reachable from an initial state, but they maintain a partial solution subgraph and use dynamic programming to update the costs of the nodes in this subgraph. However, problems with large state spaces limit the practical use of these methods. Factored MDPs exploit the structure of the problem, representing very large MDPs compactly and thus improving the scalability of the solutions. In this dissertation, we present a comparative analysis of different solutions for MDPs, with emphasis on heuristic search methods, and compare them with solutions based on asynchronous dynamic programming, considered the state of the art for MDPs. We also propose a new heuristic search algorithm for factored MDPs based on the ILAO* algorithm and test it on problems from the International Probabilistic Planning Competition (IPPC-2011).
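As a minimal illustration of the dynamic-programming solutions this abstract refers to, the sketch below runs Gauss-Seidel value iteration on a tiny, invented cost-based MDP (the states, actions, and probabilities are hypothetical; this is not the thesis's ILAO*-based algorithm):

```python
# Toy cost-based MDP: transitions[s][a] = list of (probability, next_state, cost).
# States, actions and numbers are invented for illustration only.
transitions = {
    "s0": {"a": [(0.8, "goal", 1.0), (0.2, "s0", 1.0)],
           "b": [(1.0, "s0", 0.5)]},
    "goal": {},  # absorbing goal state: no actions, zero cost-to-go
}

def value_iteration(transitions, eps=1e-6):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:
                continue  # goal state keeps value 0
            # Bellman backup: best expected (immediate cost + cost-to-go)
            best = min(sum(p * (c + V[ns]) for p, ns, c in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(transitions)
print(V["s0"])  # converges to 1.25: the expected cost of always taking "a"
```

RTDP and ILAO* refine this idea by backing up only states reachable from the initial state under a greedy policy, rather than sweeping the whole state space.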
6

Bona, Glauber De. "Satisfazibilidade probabilística." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-02062011-181639/.

Full text
Abstract:
This work studies the Probabilistic Satisfiability problem (PSAT), reviewing its solution through linear programming and proposing new algorithms that solve it by reduction to SAT. We construct a polynomial many-one reduction from PSAT to SAT, called the Canonical Reduction, encoding rational arithmetic operations into bits as logical variables. We analyze the computational complexity of this reduction and propose a Limited Precision Canonical Reduction to mitigate that complexity. We present a Turing reduction from PSAT to SAT, based on the Simplex algorithm and on the Atomic Normal Form, which we introduce. We suggest modifications to this reduction in search of computational efficiency. Finally, we implement these reductions in order to investigate the complexity profile of PSAT; the phase-transition phenomenon is observed and the conditions for its detection are discussed.
7

Khapko, Taras. "Edge states and transition to turbulence in boundary layers." Doctoral thesis, KTH, Stabilitet, Transition, Kontroll, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186038.

Full text
Abstract:
The focus of this thesis is the numerical study of subcritical transition to turbulence in boundary-layer flows. For the most part, boundary layers with uniform suction are considered. Constant homogeneous suction counteracts the spatial growth of the boundary layer, rendering the flow parallel. This enables research approaches which are not feasible in the context of spatially developing flows. In the first part, the laminar–turbulent separatrix of the asymptotic suction boundary layer (ASBL) is investigated numerically by means of an edge-tracking algorithm. The obtained edge states exhibit recurrent dynamics, going through calm and bursting phases. The self-sustaining mechanism bears many similarities to the classical regeneration cycle of near-wall turbulence. The recurrent simple structure active during calm phases is compared to the nucleation of turbulence events in bypass transition originating from delocalised initial conditions. The implications for our understanding of the bypass-transition process and the role of the edge state are discussed. Based on this understanding, a model is constructed which predicts the position of the nucleation of turbulent spots during free-stream-turbulence-induced transition in spatially developing boundary-layer flow. This model is used together with a probabilistic cellular automaton (PCA), which captures the spatial spreading of the spots, correctly reproducing the main statistical characteristics of the transition process. The last part of the thesis is concerned with the spatio-temporal aspects of turbulent ASBL in extended numerical domains near the onset of sustained turbulence. The different behaviour observed in the ASBL, i.e. the absence of the sustained laminar–turbulent patterns that have been reported in other wall-bounded flows, is associated with the different character of the large-scale flow. In addition, an accurate quantitative estimate for the lowest Reynolds number with sustained turbulence is obtained.
8

Lancaster, Joseph Paul Jr. "Predicting the behavior of robotic swarms in discrete simulation." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/18980.

Full text
Abstract:
Doctor of Philosophy. Department of Computing and Information Sciences. David Gustafson. We use probabilistic graphs to predict the location of swarms over 100 steps in simulations in grid worlds. One graph can be used to make predictions for worlds of different dimensions. The worlds are constructed from a single 5x5 square pattern, each square of which may be either unoccupied or occupied by an obstacle or a target. Simulated robots move through the worlds avoiding the obstacles and tagging the targets. The interactions among the robots, and between the robots and the environment, lead to behavior that, even in deterministic simulations, can be difficult to anticipate. The graphs capture the local rate and direction of swarm movement through the pattern. The graphs are used to create a transition matrix which, along with an occupancy matrix, can be used to predict the occupancy of the patterns over the 100 steps using 100 matrix multiplications. In the future, the graphs could be used to predict the movement of physical swarms through patterned environments such as city blocks in applications such as disaster-response search and rescue. The predictions could assist in the design and deployment of such swarms and help rule out undesirable behavior.
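The core prediction step described above, repeated transition-matrix multiplications applied to an occupancy vector, can be sketched as follows; the 3-cell world and its transition probabilities are invented for illustration, not taken from the dissertation:

```python
import numpy as np

# Row-stochastic transition matrix capturing local movement rates between
# cells (hypothetical numbers), and an initial occupancy distribution.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])
occupancy = np.array([1.0, 0.0, 0.0])  # all robots start in cell 0

for _ in range(100):  # one matrix-vector product per simulation step
    occupancy = occupancy @ P

# After many steps the prediction approaches the chain's stationary distribution.
print(occupancy.round(3))  # approximately [0.222 0.556 0.222]
```

The same multiply-and-accumulate loop scales to larger worlds; only the size of the transition matrix changes.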
9

Stec, Mateusz. "Micromechanical modeling of cleavage fracture in polycrystalline materials." Doctoral thesis, Stockholm : Hållfasthetslära, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9773.

Full text
10

Henestroza, Anguiano Enrique. "Analyse syntaxique probabiliste en dépendances : approches efficaces à large contexte avec ressources lexicales distributionnelles." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00860720.

Full text
Abstract:
This thesis presents methods for improving probabilistic dependency parsing. We use transition-based parsing with models trained as support vector machines (Cortes and Vapnik, 1995), and our experiments are carried out on French. Transition-based parsing is fast, owing to the low complexity of the underlying algorithms, which are themselves based on a local optimization of attachment decisions. Our first line of work is therefore to widen the syntactic context used. Starting from the arc-eager transition system (Nivre, 2008), we propose a variant that simultaneously considers several candidate governors for right attachments. We also test parse correction, inspired by Hall and Novák (2005), which revises each attachment by choosing among several alternative governors in the syntactic neighbourhood. Our approaches slightly improve overall accuracy as well as the accuracy of prepositional-phrase attachment and coordination. Our second line of work explores semi-supervised approaches. We test self-training with a two-stage parser, based on McClosky et al. (2006), for the news domain as well as for adaptation to the medical domain. We then turn to corpus-based lexical modelling, with generalized lexical classes to reduce data sparseness and lexical preferences for prepositional-phrase attachment to aid disambiguation. In some cases, our approaches improve the parser's accuracy and coverage without increasing its theoretical complexity.
11

Saad, Feras Ahmad Khaled. "Probabilistic data analysis with probabilistic programming." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113164.

Full text
Abstract:
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 48-50). Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This thesis introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
12

Frohling, Krista Rose. "Transitions." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1403.

Full text
Abstract:
Transitions developed after experiencing one of the largest transitions of my life, from an autonomous being and business owner, to a pregnant woman, to a mother, all during my three-year Master of Fine Arts program at Southern Illinois University Carbondale. The first section of the show follows my emotional progression throughout pregnancy, as well as my physical form, highlighting inner conflict. An emotional conflict and progression is illustrated through the use of emotional landscapes on the exterior walls of the space. Each emotional landscape is created from 25 canvas prints that I photographed on my mobile devices. The interior walls showcase my growing pregnant torso and separated oversized heads. The second section of Transitions deals with the issues of motherhood, specifically the working mother. As a working mother and graduate student, I have had to spend a large amount of time away from my daughter, and because of this I have felt a great deal of guilt and sadness. To illustrate these feelings I created installations from empty rocking chairs and all of the milk storage bags that have been used to feed my daughter in my absence. These two sculptures bookend a 10-minute-long projection of my drive home, taken on my iPhone. Around the exterior walls of this space, images of my daughter sleeping and personal effects from her room are shown on large 36"x24" digital inkjet prints.
13

Asafu-Adjei, Joseph Kwaku. "Probabilistic Methods." VCU Scholars Compass, 2007. http://hdl.handle.net/10156/1420.

Full text
14

Sinton, Antoine. "Modèles probabilistes avec structure spatiale en physique statistique et en informatique." Paris 6, 2009. http://www.theses.fr/2009PA066690.

Full text
Abstract:
This thesis is devoted to the development and study of probabilistic models with spatial structure. The problems considered belong both to the statistical physics of disordered systems and to the field of statistical inference in computer science. The interest of spatial structure is that it allows the introduction of concrete models that are directly interpretable in various domains. We are thus led to study a variant of the XORSAT constraint-satisfaction ensemble in which interactions have finite range. This new ensemble admits a mean-field interpretation in the Kac limit. In a second limit, where the ratio between the total number of variables and the interaction range is kept fixed, the ensemble can be interpreted as a finite-size mean-field system. The combined analytical and numerical study makes it possible to exhibit a concrete divergence of the mosaic length and to present a satisfiability problem with a realistic interpretation. We have also studied a family of approximate algorithms in the field of coding. Some problems in this field possess a memory that makes the complexity of estimation algorithms exponential. A mean-field interpretation of these problems allowed us to obtain convincing results on reducing their algorithmic complexity.
15

Baier, Christel, Benjamin Engel, Sascha Klüppelholz, Steffen Märcker, Hendrik Tews, and Marcus Völp. "A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-129917.

Full text
Abstract:
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
16

Weidner, Thomas. "Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208732.

Full text
Abstract:
The classic theorems of Büchi and Kleene state the expressive equivalence of finite automata to monadic second-order logic and regular expressions, respectively. These fundamental results enjoy applications in nearly every field of theoretical computer science. Around the same time as Büchi and Kleene, Rabin investigated probabilistic finite automata. This equally well-established model has applications ranging from natural language processing to probabilistic model checking. Here, we extend Büchi's theorem and Kleene's theorem to the probabilistic setting. We obtain a probabilistic MSO logic by adding an expected second-order quantifier. In the scope of this quantifier, membership is determined by a Bernoulli process. This approach turns out to be universal and is applicable for finite and infinite words as well as for finite trees. In order to prove the expressive equivalence of this probabilistic MSO logic to probabilistic automata, we show a Nivat theorem, which decomposes a recognisable function into a regular language, homomorphisms, and a probability measure. For regular expressions, we build upon existing work to obtain probabilistic regular expressions on finite and infinite words. We show the expressive equivalence between these expressions and probabilistic Muller automata. To handle Muller acceptance conditions, we give a new construction from probabilistic regular expressions to Muller automata. Concerning finite trees, we define probabilistic regular tree expressions using a new iteration operator, called infinity-iteration. Again, we show that these expressions are expressively equivalent to probabilistic tree automata. On a second track of our research we investigate Constraint LTL over multidimensional data words with data values from the infinite tree. Such LTL formulas are evaluated over infinite words, where every position possesses several data values from the infinite tree. Within Constraint LTL one can compare these values from different positions. We show that the model checking problem for this logic is PSPACE-complete via investigating the emptiness problem of constraint Büchi automata.
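As background for the probabilistic automata discussed above: a Rabin-style probabilistic finite automaton assigns each finite word an acceptance probability via products of per-letter stochastic matrices. A minimal sketch follows; the two-state automaton and its matrices are invented for illustration:

```python
import numpy as np

# One row-stochastic matrix per letter; a word's acceptance probability is
# initial @ M[w1] @ ... @ M[wn] @ final.
M = {"a": np.array([[0.5, 0.5],
                    [0.0, 1.0]]),
     "b": np.array([[1.0, 0.0],
                    [0.3, 0.7]])}
initial = np.array([1.0, 0.0])  # start in state 0 with probability 1
final = np.array([0.0, 1.0])    # accept exactly in state 1

def accept_prob(word):
    v = initial
    for letter in word:
        v = v @ M[letter]       # propagate the distribution over states
    return float(v @ final)     # probability mass on accepting states

print(accept_prob("ab"))  # 0.35
```

A cut-point language is then obtained by thresholding this probability, e.g. all words with acceptance probability above 1/2.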
17

Cheng, Chi Wa. "Probabilistic topic modeling and classification probabilistic PCA for text corpora." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1263.

Full text
18

Qiu, Feng. "Probabilistic covering problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47567.

Full text
Abstract:
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that the probability that a set of constraints involving random coefficients with known distributions holds meet a minimum requirement. A covering constraint involves a linear inequality on non-negative variables with a greater-than-or-equal-to sign and non-negative coefficients. A variety of applications in uncertain settings, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques are significantly effective in improving solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut down solution times from as much as six days to under four hours in some instances. We also developed valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010, which cannot be solved by existing approaches.
In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of the set k-cover problem, which seeks a collection of subsets with minimal cost whose union covers each element of the set at least k times. In the stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model involves an exponential number of auxiliary variables, and is hence not useful in practice, we use two linear-programming-based bounds on the probability that at least k events occur, obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations to the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear-programming-based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multilinear terms in the approximations and develop mixed-integer linear programming formulations. We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness.
To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now only been studied under a data-independence assumption, and we also introduce a new application that can be modeled using the probabilistic k-cover model.
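The probability of k-coverage that the constraint above bounds can be computed exactly in the independent case: if each of n candidate sets covers a given element with its own Bernoulli probability, the chance of at least k successes is a Poisson-binomial tail. A small dynamic-programming sketch (the probabilities are invented for illustration, not taken from the dissertation):

```python
# dist[j] = probability that exactly j of the events seen so far occurred;
# processing each independent Bernoulli event updates the distribution.
def prob_at_least_k(probs, k):
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - p)   # this event does not occur
            new[j + 1] += q * p     # this event occurs
        dist = new
    return sum(dist[k:])

print(prob_at_least_k([0.9, 0.8, 0.7], 2))  # ≈ 0.902
```

The dissertation's ambiguous (distributionally robust) setting drops the independence assumption, which is exactly why it needs the Boolean LP bounds instead of this closed computation.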
19

Taylor, Jonathan 1981. "Lax probabilistic bisimulation." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111546.

Full text
Abstract:
Probabilistic bisimulation is a widely studied equivalence relation for stochastic systems. However, it requires the behavior of the states to match on actions with matching labels. This does not allow bisimulation to capture symmetries in the system. In this thesis we define lax probabilistic bisimulation, in which actions are only required to match within given action equivalence classes. We provide a logical characterization and an algorithm for computing this equivalence relation for finite systems. We also specify a metric on states which assigns distance 0 to lax-bisimilar states. We end by examining the use of lax bisimulation for analyzing Markov Decision Processes (MDPs) and show that it corresponds to the notion of a MDP homomorphism, introduced by Ravindran & Barto. Our metric provides an algorithm for generating an approximate MDP homomorphism and provides bounds on the quality of the best control policy that can be computed using this approximation.
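For context, ordinary (strong) probabilistic bisimulation can be computed by partition refinement: two states stay in the same block only if, for each action, they send the same probability mass into each current block. A minimal sketch follows; the lax variant studied in the thesis would additionally let actions match within given label equivalence classes, whereas here labels must match exactly, and the example system is invented:

```python
# trans[s][a][t] = probability of moving from state s to state t on action a
def bisimulation_classes(states, trans):
    partition = [set(states)]
    while True:
        def signature(s):
            # per action: the probability mass sent into each current block
            return tuple(sorted(
                (a, tuple(sum(p for t, p in succ.items() if t in block)
                          for block in partition))
                for a, succ in trans[s].items()))
        groups = {}
        for s in states:
            groups.setdefault(signature(s), set()).add(s)
        refined = list(groups.values())
        if len(refined) == len(partition):  # no block was split: stable
            return refined
        partition = refined

trans = {"x": {"a": {"z": 1.0}},   # x and y behave identically
         "y": {"a": {"z": 1.0}},
         "z": {}}                  # z has no outgoing transitions
blocks = bisimulation_classes(["x", "y", "z"], trans)
print(blocks)  # x and y end up in one class, z in its own
```

Relaxing the exact-label comparison in `signature` to compare masses per action equivalence class is the essential change the lax notion introduces.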
20

Gendra, Casalí Bernat. "Probabilistic quantum metrology." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/371132.

Full text
Abstract:
La Metrologia és el camp d’investigació sobre les eines estadístiques i el disseny tecnològic dels aparells de mesura necessaris per inferir informació precisa sobre paràmetres físics. En un sistema físic el soroll és inherent en última instància amb el de les seves parts, i per tant en un nivell microscòpic està governat per les lleis de la física quàntica. Les mesures quàntiques són intrínsecament sorolloses i per tant limiten la precisió amb la qual es pot obtenir en qualsevol escenari de metrologia. El camp de la metrologia quàntica està dedicat a l’estudi d’aquests límits i al desenvolupament de noves eines per ajudar a superar-los, sovint fent ús de les característiques exclusivament quàntiques com la superposició o l'entrellaçament.En el procés de disseny d’un protocol d’estimació és necessari utilitzar una figura de mèrit per optimitzar el rendiment d’aquests protocols. Fins ara la majoria de plantejaments de metrologia quàntica i els límits que en deriven han estat deterministes, és a dir, que estan optimitzats per tal de proporcionar una estimació vàlida per a cadascun dels possibles resultats de la mesura i minimitzar-ne l’error promig entre el valor estimat i el real del paràmetre. Aquesta avaluació dels protocols mitjançant el seu error promig és molt natural i convenient, però pot haver-hi algunes situacions en què això no sigui suficient per a expressar l’ús concret que se li donarà al valor obtingut.Un punt central d’aquesta tesi és observar que resultats concrets d’una mesura poden proporcionar una estimació amb una millor precisió que la mitjana. Perquè això succeeixi hi ha d’haver altres resultats imprecisos que compensin la mitjana perquè aquesta no violi els límits deterministes. En aquesta tesi hem escollit una figura de merit que reflecteix la màxima precisió que es pot obtenir. 
Optimitzem la precisió d’un subconjunt de resultats senyalats, i quantifiquem la probabilitat d’obtenir-ne algun d’ells, o en altres paraules, la probabilitat que el protocol proporcioni una estimació. Això pot ser entès com proposar una opció addicional que està sempre disponible per la mesura, a saber, la possibilitat de post-seleccionar els resultats i donar amb només certa probabilitat una resposta concloent. Aquests protocols probabilístics garanteixen una precisió mínima pels resultats senyalats. En la mecànica quàntica hi ha moltes maneres de poder llegir les dades d’un sistema quàntic. Per tant, l’optimització dels esquemes probabilístics no es pot reduir a la reinterpretació de resultats a partir dels esquemes (deterministes) de metrologia quàntica canònics, sinó que implica la recerca de mesures quàntiques completament diferents. Concretament, hem dissenyat protocols probabilístics per a l’estimació de fases, direccions i de sistemes de referència. Hem vist que la post-selecció té dos efectes possibles: compensar una mala elecció de l’estat inicial o contrarestar els efectes negatius del soroll presents en l’estat del sistema o en el procés de mesurament. En particular, trobem que afegir la possibilitat d’abstenció en l’estimació de fases en presència de soroll pot produir una millora en la precisió que supera la cota trobada per protocols deterministes. Trobem una cota que correspon a la millor precisió que es pot obtenir.<br>Metrology is the field of research on statistical tools and technological design of measurement devices to infer accurate information about physical parameters. The noise in a physical setup is ultimately related to that of its constituents, and at a microscopic level this is in turn dictated by the rules of quantum physics. Quantum measurements are inherently noisy and hence limit the precision that can be reached by any metrology scheme. 
The field of quantum metrology is devoted to the study of such limits and to the development of new tools that help to surmount them, often making use of unique quantum features such as superposition or entanglement. In the process of designing an estimation protocol, the experimentalist uses a figure of merit to optimise the performance of such protocols. Up until now most quantum metrology schemes and known bounds have been deterministic, that is, they are optimized in order to provide a valid estimate for each possible measurement outcome and minimize the average error between the estimated and true value of the parameter. This benchmarking of a protocol by its average performance is very natural and convenient, but there can be some scenarios in which this is not enough to express the concrete use that will be given to the obtained value. A central point in this thesis is that particular measurement outcomes can provide an estimate with a better precision than the average one. Notice that for this to happen there must be other imprecise outcomes so that the average does not violate the deterministic bounds. In this thesis we choose a figure of merit that reflects the maximum precision one can obtain. We optimise the precision of a set of heralded outcomes, and quantify the chance that such outcomes occur, or in other words the probability that the protocol provides a conclusive estimate. This can be understood as putting forward an extra feature that is always available to the experimentalist, namely the possibility of post-selecting the outcomes of their measurements and giving with some probability an inconclusive answer. These probabilistic protocols guarantee a minimal precision upon a heralded outcome. In quantum mechanics there are many ways in which data can be read off from a quantum system. 
Hence, the optimization of probabilistic schemes cannot be reduced to reinterpreting results from the canonical (deterministic) quantum metrology schemes, but rather it entails the search for completely different generalized quantum measurements. Specifically, we design probabilistic protocols for phase, direction and reference frame estimation. We show that post-selection has two possible effects: to compensate for a bad choice of probe state or to counterbalance the negative effects of noise present in the state of the system or in the measurement process. In particular, we show that adding the possibility of abstaining in phase estimation in the presence of noise can produce an enhancement in precision that overtakes the ultimate bound of deterministic protocols. The bound we derive is the best precision that can be obtained, and in this sense one can speak of an ultimate bound on precision.
APA, Harvard, Vancouver, ISO, and other styles
21

Seidel, Karen. "Probabilistic communicating processes." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kim, Jeong-Gyoo. "Probabilistic shape models :." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Power, Christopher. "Probabilistic symmetry reduction." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3493/.

Full text
Abstract:
Model checking is a technique used for the formal verification of concurrent systems. A major hindrance to model checking is the so-called state space explosion problem, where the number of states in a model grows exponentially as variables are added. This means even trivial systems can require millions of states to define and are often too large to feasibly verify. Fortunately, models often exhibit underlying replication which can be exploited to aid in verification. Exploiting this replication is known as symmetry reduction and has yielded considerable success in non-probabilistic verification. The main contribution of this thesis is to show how symmetry reduction techniques can be applied to explicit state probabilistic model checking. In probabilistic model checking the need for such techniques is particularly acute since it requires not only an exhaustive state-space exploration, but also a numerical solution phase to compute probabilities or other quantitative values. The approach we take enables the automated detection of arbitrary data and component symmetries from a probabilistic specification. We define new techniques to exploit the identified symmetry and provide efficient generation of the quotient model. We prove the correctness of our approach, and demonstrate its viability by implementing a tool to apply symmetry reduction to an explicit state model checker.
APA, Harvard, Vancouver, ISO, and other styles
24

Binter, Roman. "Applied probabilistic forecasting." Thesis, London School of Economics and Political Science (University of London), 2012. http://etheses.lse.ac.uk/559/.

Full text
Abstract:
In any actual forecast, the future evolution of the system is uncertain and the forecasting model is mathematically imperfect. Both ontic uncertainty about the future (due to true stochasticity) and epistemic uncertainty about the model (reflecting structural imperfections) complicate the construction and evaluation of probabilistic forecasts. In almost all nonlinear forecast models, the evolution of uncertainty in time is not tractable analytically and Monte Carlo approaches ("ensemble forecasting") are widely used. This thesis advances our understanding of the construction of forecast densities from ensembles, the evolution of the resulting probability forecasts and methods of establishing skill (benchmarks). A novel method of partially correcting the model error is introduced and shown to outperform a competitive approach. The properties of kernel dressing, a method of transforming ensembles into probability density functions, are investigated and the convergence of the approach is illustrated. A connection between forecasting and information theory is examined by demonstrating that kernel dressing via minimization of Ignorance implicitly leads to minimization of Kullback-Leibler divergence. The Ignorance score is critically examined in the context of other information-theoretic measures. The method of Dynamic Climatology is introduced as a new approach to establishing skill (benchmarking). Dynamic Climatology is a new, relatively simple, nearest-neighbor-based model shown to be of value in benchmarking the global circulation models of the ENSEMBLES project. ENSEMBLES is a project funded by the European Union bringing together all major European weather forecasting institutions in order to develop and test state-of-the-art seasonal weather forecasting models. By benchmarking the seasonal forecasts of the ENSEMBLES models we demonstrate that Dynamic Climatology can help us better understand the value and forecasting performance of large-scale circulation models. 
Lastly, a new approach to correcting (improving) an imperfect model is presented, an idea inspired by [63]. The main idea is based on a two-stage procedure where a second-stage 'corrective' model iteratively corrects systematic parts of the forecasting errors produced by a first-stage 'core' model. The corrector is of an iterative nature, so that at a given time t the core model forecast is corrected and then used as an input into the next iteration of the core model to generate a time t + 1 forecast. Using two nonlinear systems we demonstrate that the iterative corrector is superior to alternative approaches based on direct (non-iterative) forecasts. While the choice of the corrector model class is flexible, we use radial basis functions. Radial basis functions are frequently used in statistical learning and surface approximation and involve a number of computational aspects which we discuss in some detail.
APA, Harvard, Vancouver, ISO, and other styles
25

Jones, Claire. "Probabilistic non-determinism." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/413.

Full text
Abstract:
Much of theoretical computer science is based on use of inductive complete partially ordered sets (or ipos). The aim of this thesis is to extend this successful theory to make it applicable to probabilistic computations. The method is to construct a "probabilistic powerdomain" on any ipo to represent the outcome of a probabilistic program which has outputs in the original ipo. In this thesis it is shown that evaluations (functions which assign a probability to open sets with various conditions) form such a powerdomain. Further, the powerdomain is a monadic functor on the category Ipo. For restricted classes of ipos a powerdomain of probability distributions, or measures which only take values less than one, has been constructed (by Saheb-Djahromi). In the thesis we show that this powerdomain may be constructed for continuous ipos, where it is isomorphic to that of evaluations. The powerdomain of evaluations is shown to have a simple Stone-type duality between it and sets of upper continuous functions. This is then used to give a Hoare-style logic for an imperative probabilistic language, which is the dual of the probabilistic semantics. Finally the powerdomain is used to give a denotational semantics of a probabilistic metalanguage which is an extension of Moggi's lambda-c-calculus for the powerdomain monad. This semantics is then shown to be equivalent to an operational semantics.
APA, Harvard, Vancouver, ISO, and other styles
26

Angelopoulos, Nicos. "Probabilistic finite domains." Thesis, City University London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ranganathan, Ananth. "Probabilistic topological maps." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22643.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008.<br>Committee Chair: Dellaert, Frank; Committee Member: Balch, Tucker; Committee Member: Christensen, Henrik; Committee Member: Kuipers, Benjamin; Committee Member: Rehg, Jim.
APA, Harvard, Vancouver, ISO, and other styles
28

Iyer, Ranjit. "Probabilistic distributed control." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1568128211&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chien, Yung-hsin. "Probabilistic preference modeling /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Morris, Quaid Donald Jozef 1972. "Practical probabilistic inference." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29989.

Full text
Abstract:
Thesis (Ph. D. in Computational Neuroscience)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2003.<br>Includes bibliographical references (leaves 157-163).<br>The design and use of expert systems for medical diagnosis remains an attractive goal. One such system, the Quick Medical Reference, Decision Theoretic (QMR-DT), is based on a Bayesian network. This very large-scale network models the appearance and manifestation of disease and has approximately 600 unobservable nodes and 4000 observable nodes that represent, respectively, the presence and measurable manifestation of disease in a patient. Exact inference of posterior distributions over the disease nodes is extremely intractable using generic algorithms. Inference can be made much more efficient by exploiting the QMR-DT's unique structure. Indeed, tailor-made inference algorithms for the QMR-DT efficiently generate exact disease posterior marginals for some diagnostic problems and accurate approximate posteriors for others. In this thesis, I identify a risk with using the QMR-DT disease posteriors for medical diagnosis. Specifically, I show that patients and physicians conspire to preferentially report findings that suggest the presence of disease. Because the QMR-DT does not contain an explicit model of this reporting bias, its disease posteriors may not be useful for diagnosis. Correcting these posteriors requires augmenting the QMR-DT with additional variables and dependencies that model the diagnostic procedure. I introduce the diagnostic QMR-DT (dQMR-DT), a Bayesian network containing both the QMR-DT and a simple model of the diagnostic procedure. Using diagnostic problems sampled from the dQMR-DT, I show the danger of doing diagnosis using disease posteriors from the unaugmented QMR-DT.<br>(cont.) I introduce a new class of approximate inference methods, based on feed-forward neural networks, for both the QMR-DT and the dQMR-DT. 
I show that these methods, recognition models, generate accurate approximate posteriors on the QMR-DT, on the dQMR-DT, and on a version of the dQMR-DT specified only indirectly through a set of presolved diagnostic problems.<br>by Quaid Donald Jozef Morris.<br>Ph.D. in Computational Neuroscience
APA, Harvard, Vancouver, ISO, and other styles
31

Mansinghka, Vikash Kumar. "Natively probabilistic computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47892.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.<br>Includes bibliographical references (leaves 129-135).<br>I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra.<br>(cont.) I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. 
I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence.<br>by Vikash Kumar Mansinghka.<br>Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
32

Conduit, Bryce David. "Probabilistic alloy design." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Cowans, Philip John. "Probabilistic document modelling." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Barbosa, Fábio Daniel Moreira. "Probabilistic propositional logic." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22198.

Full text
Abstract:
Mestrado em Matemática e Aplicações<br>O termo Lógica Probabilística, em geral, designa qualquer lógica que incorpore conceitos probabilísticos num sistema lógico formal. Nesta dissertação o principal foco de estudo é uma lógica probabilística (designada por Lógica Proposicional Probabilística Exógena), que tem por base a Lógica Proposicional Clássica. São trabalhados sobre essa lógica probabilística a sintaxe, a semântica e um cálculo de Hilbert, provando-se diversos resultados clássicos de Teoria de Probabilidade no contexto da EPPL. São também estudadas duas propriedades muito importantes de um sistema lógico - correcção e completude. Prova-se a correcção da EPPL da forma usual, e a completude fraca recorrendo a um algoritmo de satisfazibilidade de uma fórmula da EPPL. Serão também considerados na EPPL conceitos de outras lógicas probabilísticas (incerteza e probabilidades intervalares) e Teoria de Probabilidades (condicionais e independência).<br>The term Probabilistic Logic generally refers to any logic that incorporates probabilistic concepts in a formal logic system. In this dissertation, the main focus of study is a probabilistic logic (called Exogenous Probabilistic Propositional Logic), which is based on Classical Propositional Logic. We introduce, for this probabilistic logic, its syntax, semantics and a Hilbert calculus, and prove some classical results of Probability Theory in the context of EPPL. Moreover, we study two important properties of a logic system - soundness and completeness. We prove the EPPL soundness in a standard way, and weak completeness using a satisfiability algorithm for a formula of EPPL. We also consider in EPPL concepts from other probabilistic logics (uncertainty and interval probabilities) and from Probability Theory (conditional probability and independence).
APA, Harvard, Vancouver, ISO, and other styles
35

Carvalho, Elsa Cristina Batista Bento. "Probabilistic constraint reasoning." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8603.

Full text
Abstract:
Dissertação apresentada para obtenção do Grau de Doutor em Engenharia Informática, pela Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia<br>The continuous constraint paradigm has been often used to model safe reasoning in applications where uncertainty arises. Constraint propagation propagates intervals of uncertainty among the variables of the problem, eliminating values that do not belong to any solution. However, constraint programming is very conservative: if initial intervals are wide (reflecting large uncertainty), the obtained safe enclosure of all consistent scenarios may be inadequately wide for decision support. Since all scenarios are considered equally likely, insufficient pruning leads to great inefficiency if some costly decisions may be justified by very unlikely scenarios. Even when probabilistic information is available for the variables of the problem, the continuous constraint paradigm is unable to incorporate and reason with such information. Therefore, it is incapable of distinguishing between different scenarios, based on their likelihoods. This thesis presents a probabilistic continuous constraint paradigm that associates a probabilistic space to the variables of the problem, enabling probabilistic reasoning to complement the underlying constraint reasoning. Such reasoning is used to address probabilistic queries and requires the computation of multi-dimensional integrals on possibly nonlinear integration regions. Suitable algorithms for such queries are developed, using safe or approximate integration techniques and relying on methods from continuous constraint programming in order to compute safe covers of the integration region. 
The thesis illustrates the adequacy of the probabilistic continuous constraint framework for decision support in nonlinear continuous problems with uncertain information, namely on inverse and reliability problems, two different types of engineering problems where the developed framework is particularly adequate to support decision makers.
APA, Harvard, Vancouver, ISO, and other styles
36

Scott, Barry Allan. "Transitions and boundaries." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3617.

Full text
Abstract:
Thesis (M.F.A.) -- University of Maryland, College Park, 2006.<br>Thesis research directed by: Dept. of Art. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
37

See, Mark. "Transitions and architecture." This title; PDF viewer required Home page for entire colleciton, 2007. http://archives.udmercy.edu:8080/dspace/handle/10429/9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Abraham, Judson Charles. "Populist Just Transitions." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104394.

Full text
Abstract:
This dissertation argues that the just transition policy framework may not vivify labor internationalism or erode support for right-wing populists if just transitions are not part of left-wing populist projects. Labor internationalism, which involves labor unions cooperating across borders to pursue common goals, is increasingly important as unions strive to work with their foreign counterparts to influence the international community's urgent efforts to address climate change. Right-wing populism is a growing threat to organized labor and climate protection efforts. Some labor activists hope that advocacy for the just transition policy framework, a set of guidelines for compensating workers in polluting industries who are laid-off as a result of environmental protections, will unite labor organizations from around the world and improve their approaches to international solidarity. Progressives hope that just transition policies will discourage voters from supporting right-wing populist candidates, who are often climate skeptics, out of fear of the job losses that accompany environmentalist reforms. However, I question the assumption that just transition policies, in and of themselves, can serve as solutions to the challenges posed by right-wing populism or overcome divisions within the global labor movement. It is possible for economic nationalism at the expense of global solidarity to continue and for right-wing populists to maintain support in decarbonizing areas where policy makers have indemnified laid-off fossil fuel workers. Integrating just transition policies into left-wing populist politics could potentially make just transitions more useful for countering the far-right and promoting labor internationalism. This dissertation looks to the political theorist Antonio Gramsci's thoughts regarding the "national popular," which Gramsci's readers often associate with left-wing populism. 
The national popular entails intellectuals from different fields (such as the academy, journalism, and manufacturing) coming together to modernize patriotism and strip it of chauvinistic nationalism. I point out that the original proposals for just transitions prioritized providing free higher education for the workers laid-off from polluting industries. The just transition framework's stress on higher education has populistic implications. Educators, particularly members of teachers' unions, may practice populism throughout the implementation of a just transition for laid-off coal workers by encouraging the displaced workers to cooperate with knowledge workers to rethink nationalism. If workers displaced from polluting industries rethink nationalism in university settings while maintaining their connections to the labor movement, then these workers may in turn reject far-right politicians and discourage organized labor from supporting trade nationalism.<br>Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
39

Ohnrich, Peter. "Transitions in Architecture." Thesis, The University of Arizona, 2001. http://hdl.handle.net/10150/596954.

Full text
Abstract:
A city is a structure of single elements that has grown over time and is characterized by political, economic, aesthetic and topographical influences. Looked at historically, a city is a logical structure in which all the single elements add up to a larger structure and to an overall character typical of each individual city. Except for some special and important buildings (mostly churches), the individual buildings interacted with the general city structure and often even with each other. It is my contention that contemporary urban design should work toward the reconstruction of sympathetic interrelationships in urban spaces and buildings. Transitions should be used for interaction to make the pieces work together as a whole. This we experience mainly spatially. But transitions can be achieved in many different layers (social, spatial, thermal...). My research shall help to examine these different layers individually and how they work together, to define them and finally apply them to a design. The site I chose for the design is the new civic plaza in Tucson, Arizona. The plaza, as part of the Rio Nuevo project, is planned to be the new main plaza for the city and its 800,000 people (as of 2000). The site is an empty lot right now; all buildings are roughly laid out in size and function, but not defined in detail. This allows starting the design with the plaza and letting the buildings react to it. Mainly public buildings are supposed to border the plaza. A hotel is located to the east, a parking structure with retail on the north and different museum buildings to the west and south. My goal for the plaza is to create several activity zones of different sizes (spaces for large gatherings such as outdoor concerts, and small ones such as private spots within the public space) and different activities (walking, sitting, resting and watching). Different things may happen simultaneously, but also change over the course of a day or even a year. 
Big events like open-air concerts should be possible, as well as small events of interaction between a few people at the same spot at different times. All these different elements should tie together spatially, supported by transitions of material, thermal comfort, light and social aspects, and form a big stage of events in a continuous scene. Transitions of different kinds could achieve a change of space without losing the connection to the greater scale. As a person, for example, walks from the plaza into a building (museum), he might experience transitions thermally (sun - shade - cooled air and shade - enclosed air - conditioned space) as well as spatially (same floor material inside and outside) or socially as space becomes more and more private (plaza - café area in front of museum - museum lobby - exhibition). As the change of space happens in little steps and each step connects to the previous one, a change of space can be achieved without losing the overall gesture. As the plaza is located in the desert, it is also important to research the climate, to be able to establish a good comfort level at specific spaces for people to rest outside within the plaza throughout the year and at different times of day.
APA, Harvard, Vancouver, ISO, and other styles
40

Bertsimas, Dimitris J. "The Probabilistic Minimum Spanning Tree, Part II: Probabilistic Analysis and Asymptotic Results." Massachusetts Institute of Technology, Operations Research Center, 1988. http://hdl.handle.net/1721.1/5284.

Full text
Abstract:
In this paper, which is a sequel to [3], we perform probabilistic analysis under the random Euclidean and the random length models of the probabilistic minimum spanning tree (PMST) problem and of the two re-optimization strategies, in which we find the MST or the Steiner tree, respectively, among the points present in a particular instance. Under the random Euclidean model we prove that with probability 1, as the number of points goes to infinity, the expected length of the PMST is the same as the expectation of the MST re-optimization strategy and within a constant of the Steiner re-optimization strategy. In the random length model, using a result of Frieze [6], we prove that with probability 1 the expected length of the PMST is asymptotically smaller than the expectation of the MST re-optimization strategy. These results add evidence that a priori strategies may offer a useful and practical method for resolving combinatorial optimization problems on modified instances. Key words: probabilistic analysis, combinatorial optimization, minimum spanning tree, Steiner tree.
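The MST re-optimization strategy compared against the PMST above can be illustrated with a small Monte Carlo sketch (not from the thesis; the point set, presence probability `p`, and trial count are made-up illustration values):

```python
import math
import random

def mst_length(points):
    """Prim's algorithm: total edge length of the Euclidean MST."""
    if len(points) < 2:
        return 0.0
    dist = {i: math.dist(points[0], points[i]) for i in range(1, len(points))}
    total = 0.0
    while dist:
        nxt = min(dist, key=dist.get)       # closest point not yet in the tree
        total += dist.pop(nxt)
        for i in dist:
            dist[i] = min(dist[i], math.dist(points[nxt], points[i]))
    return total

random.seed(0)
n, p, trials = 40, 0.5, 200
points = [(random.random(), random.random()) for _ in range(n)]

# Re-optimization strategy: each point is present independently with
# probability p; recompute the MST on every random subset and average.
est = sum(
    mst_length([q for q in points if random.random() < p]) for _ in range(trials)
) / trials
print(f"estimated E[MST re-optimization length] ~ {est:.3f}")
```

The paper's asymptotic comparison is against the a priori PMST tree, which is fixed in advance; the sketch only estimates the re-optimization side of that comparison.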
APA, Harvard, Vancouver, ISO, and other styles
41

Gutti, Praveen. "Semistructured probabilistic object query language: a query language for semistructured probabilistic data." Lexington, Ky. : [University of Kentucky Libraries], 2007. http://hdl.handle.net/10225/701.

Full text
Abstract:
Thesis (M.S.)--University of Kentucky, 2007. Title from document title page (viewed on April 2, 2008). Document formatted into pages; contains: vii, 42 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 39-40).
APA, Harvard, Vancouver, ISO, and other styles
42

Hohn, Jennifer Lynn. "Generalized Probabilistic Bowling Distributions." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/82.

Full text
Abstract:
Have you ever wondered if you are better than the average bowler? If so, there are a variety of ways to compute the average score of a bowling game, including methods that account for a bowler’s skill level. In this thesis, we discuss several different ways to generate bowling scores randomly. For each distribution, we give results for the expected value and standard deviation of each frame's score, the expected value of the game’s final score, and the correlation coefficient between the scores of the first and second rolls of a single frame. Furthermore, we shall generalize the results for each distribution to an n-frame game on m pins. Additionally, we shall generalize the number of possible games when bowling n frames on m pins. Then, we shall derive the frequency distribution of each frame’s scores and the arithmetic mean for n frames on m pins. Finally, to summarize the variety of distributions, we shall make tables that display the results obtained from each distribution used to model a particular bowler’s score. We evaluate the special case of bowling 10 frames on 10 pins, which represents a standard bowling game.
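One of the simplest random models one could try is a "uniform pinfall" assumption: each roll knocks down a uniformly random number of the pins still standing. This is an illustrative assumption, not one of the thesis's distributions; standard ten-pin scoring with strike and spare bonuses is applied on top:

```python
import random

def random_game(rng):
    """Roll list for one game under the toy uniform-pinfall model."""
    rolls = []
    for frame in range(10):
        first = rng.randint(0, 10)
        rolls.append(first)
        if frame < 9:
            if first < 10:                       # no second roll after a strike
                rolls.append(rng.randint(0, 10 - first))
        else:                                    # 10th frame: possible bonus rolls
            if first == 10:
                second = rng.randint(0, 10)
                rolls.append(second)
                rolls.append(rng.randint(0, 10) if second == 10
                             else rng.randint(0, 10 - second))
            else:
                second = rng.randint(0, 10 - first)
                rolls.append(second)
                if first + second == 10:         # spare earns one bonus roll
                    rolls.append(rng.randint(0, 10))
    return rolls

def score(rolls):
    """Standard ten-pin scoring with strike/spare bonuses."""
    total, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                       # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:      # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                    # open frame
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

rng = random.Random(1)
games = [score(random_game(rng)) for _ in range(5000)]
mean = sum(games) / len(games)
print(f"Monte Carlo mean final score under the toy model: {mean:.1f}")
```

The thesis derives such expectations in closed form per distribution; the simulation just shows what one of these models looks like operationally.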
APA, Harvard, Vancouver, ISO, and other styles
43

Sharkey, Michael Ian. "Probabilistic Proof-carrying Code." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22720.

Full text
Abstract:
Proof-carrying code is an application of software verification techniques to the problem of ensuring the safety of mobile code. However, previous proof-carrying code systems have assumed that mobile code will faithfully execute the instructions of the program. Realistic implementations of computing systems are susceptible to probabilistic behaviours that can alter the execution of a program in ways that can result in corruption or security breaches. We investigate the use of a probabilistic bytecode language to model deterministic programs that are executed on probabilistic computing systems. To model probabilistic safety properties, a probabilistic logic is adapted to our bytecode instruction language, and soundness is proven. A sketch of a completeness proof of the logic is also shown.
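The idea of a program whose instructions are only probabilistically faithful can be sketched with a toy stack machine. Everything here is an assumption for illustration (the instruction set, the fault model where an instruction is silently skipped with probability `eps`, and the example program); the thesis's actual bytecode language and logic are not reproduced:

```python
import random

def run(program, eps, rng):
    """Execute a toy stack program in which each instruction is carried
    out faithfully with probability 1 - eps and silently skipped otherwise."""
    stack = []
    for op, arg in program:
        if rng.random() < eps:
            continue                             # probabilistic fault: skipped
        if op == "push":
            stack.append(arg)
        elif op == "add" and len(stack) >= 2:    # guard against faulted stack
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1] if stack else None

# Computes 1 + 2 + 3 = 6 when every instruction executes faithfully.
program = [("push", 1), ("push", 2), ("add", None), ("push", 3), ("add", None)]
eps, rng, trials = 0.01, random.Random(0), 20000
ok = sum(run(program, eps, rng) == 6 for _ in range(trials)) / trials
print(f"P(correct result) ~ {ok:.4f};  (1 - eps)^5 = {(1 - eps) ** 5:.4f}")
```

A probabilistic safety property here would be a lower bound on the probability of a correct (or at least uncorrupted) final state, which is what a probabilistic program logic would certify.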
APA, Harvard, Vancouver, ISO, and other styles
44

Sałustowicz, Rafał. "Probabilistic incremental program evolution." [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967352746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mukhtasor. "Probabilistic ocean outfall design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0012/MQ34210.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Morad, Kamalaldin. "Probabilistic process data rectification." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/NQ49520.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Meijs, Wouter. "Probabilistic measures of coherence." [S.l. : Rotterdam : s.n.] ; Erasmus University [Host], 2005. http://hdl.handle.net/1765/6670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fallis, Don. "Goldman on Probabilistic Inference." Springer, 2002. http://hdl.handle.net/10150/105286.

Full text
Abstract:
In his latest book, Knowledge in a Social World, Alvin Goldman claims to have established that if a reasoner starts with accurate estimates of the reliability of new evidence and conditionalizes on this evidence, then this reasoner is objectively likely to end up closer to the truth. In this paper, I argue that Goldman's result is not nearly as philosophically significant as he would have us believe. First, accurately estimating the reliability of evidence, in the sense that Goldman requires, is not quite as easy as it might sound. Second, being objectively likely to end up closer to the truth, in the sense that Goldman establishes, is not quite as valuable as it might sound.
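The conditionalization setup at issue can be made concrete with a toy Bayesian update (the 0.7 reliability figure and the three independent reports are illustrative numbers, not Goldman's or Fallis's):

```python
def conditionalize(prior, reliability):
    """Posterior P(H | E) when a source of the given reliability reports
    evidence for H: P(E | H) = reliability, P(E | not-H) = 1 - reliability."""
    num = reliability * prior
    return num / (num + (1 - reliability) * (1 - prior))

# A reasoner starting at P(H) = 0.5 who conditionalizes on three
# independent reports of 70%-reliable evidence for a true hypothesis H:
p = 0.5
for _ in range(3):
    p = conditionalize(p, 0.7)
print(f"credence after three reports: {p:.3f}")
```

Goldman's point is that such updates drive credence toward the truth on average; Fallis's objection targets the premise that the 0.7 is accurately known in the first place.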
APA, Harvard, Vancouver, ISO, and other styles
49

Uwimbabazi, Aline. "Extended probabilistic symbolic execution." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85804.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2013. ENGLISH ABSTRACT: Probabilistic symbolic execution is a new approach that extends normal symbolic execution with probability calculations. This approach combines symbolic execution and model counting to estimate the number of input values that would satisfy a given path condition, and thus is able to calculate the execution probability of a path. The focus has been on programs that manipulate primitive types, such as linear integer arithmetic, in object-oriented programming languages such as Java. In this thesis, we extend probabilistic symbolic execution to handle data structures, thus adding support for reference types. Two techniques are proposed to calculate the probability of an execution when programs have structures as inputs: an approximate approach that assumes probabilities for certain choices stay fixed during the execution, and an accurate technique based on counting valid structures. We evaluate these approaches on an example of a binary search tree and compare them to the classic approach, which only takes symbolic values as input.
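The core model-counting idea behind path probabilities can be sketched for primitive integer inputs by brute-force enumeration over a small finite domain (the domain bounds and the branch condition are illustrative; real tools use dedicated model counters rather than enumeration):

```python
from itertools import product

# Inputs x, y are assumed uniform over a finite domain; a path's
# probability is (# models of its path condition) / (domain size).
DOMAIN = range(0, 100)

def path_probability(path_condition):
    models = sum(1 for x, y in product(DOMAIN, DOMAIN) if path_condition(x, y))
    return models / (len(DOMAIN) ** 2)

# The two paths of `if x + y > 120: ... else: ...`
p_then = path_probability(lambda x, y: x + y > 120)
p_else = path_probability(lambda x, y: x + y <= 120)
print(f"P(then) = {p_then:.4f}, P(else) = {p_else:.4f}")
```

The thesis's contribution is extending this from integer-valued path conditions to inputs that are data structures, where "counting models" becomes counting valid structures.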
APA, Harvard, Vancouver, ISO, and other styles
50

Meier, Armin. "Probabilistic protein homology modeling." Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-171299.

Full text
Abstract:
Searching sequence databases and building 3D models for proteins are important tasks for biologists. When the structure of a query protein is given, its function can be inferred. However, experimental methods for structure prediction are both expensive and time consuming. Fully automatic homology modeling refers to building a 3D model for a query sequence from an alignment to related homologous proteins with known structure (templates) by a computer. Current prediction servers can provide accurate models within a few hours to days. Our group has developed HHpred, which is one of the top performing structure prediction servers in the field. In general, homology based structure modeling consists of four steps: (1) finding homologous templates in a database, (2) selecting and (3) aligning templates to the query, (4) building a 3D model based on the alignment. In part one of this thesis, we will present improvements of step (2) and (4). Specifically, homology modeling has been shown to work best when multiple templates are selected instead of only a single one. Yet, current servers are using rather ad-hoc approaches to combine information from multiple templates. We provide a rigorous statistical framework for multi-template homology modeling. Given an alignment, we employ Modeller to calculate the most probable structure for a query. The 3D model is obtained by optimally satisfying spatial restraints derived from the alignment and expressed as probability density functions. We find that the query’s atomic distance restraints can be accurately described by two-component Gaussian mixtures. Moreover, we derive statistical weights to quantify the redundancy among related templates. This allows us to apply the standard rules of probability theory to combine restraints from several templates. Together with a heuristic template selection strategy, we have implemented this approach within HHpred and could significantly improve model quality. 
Furthermore, we took part in CASP, a community-wide competition for structure prediction, where we were ranked first in template-based modeling and, at the same time, were more than 450 times faster than all other top servers. Homology modeling relies heavily on detecting and correctly aligning templates to the query sequence (steps (1) and (3) from above). But remote homologies are difficult to detect and hard to align on a pure sequence level. Hence, modern tools are based on profiles instead of sequences. A profile summarizes the evolutionary history of a given sequence and consists of position-specific amino acid probabilities for each residue. In addition to the similarity score between profile columns, most methods use extra terms that compare 1D structural properties such as secondary structure or solvent accessibility. These can be predicted from local profile windows. In the second part of this thesis, we develop a new score that is independent of any predefined structural property. For this purpose, we learn a library of 32 profile patterns that are most conserved in alignments of remotely homologous, structurally aligned proteins. Each so-called “context state” in the library consists of a 13-residue sequence profile. We integrate the new context score into our HMM-HMM alignment tool HHsearch and significantly improve especially the sensitivity and precision of difficult pairwise alignments. Taken together, we introduced probabilistic methods to improve all four main steps in homology-based structure prediction.
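The combination of two-component Gaussian mixture restraints from weighted templates can be sketched numerically. All numbers here (mixture parameters, template weights, and the log-density combination rule) are assumptions for illustration; HHpred's actual restraint derivation differs in detail:

```python
import math

def gauss(x, mu, sigma):
    """Normal density N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """Two-component Gaussian mixture for one atomic distance restraint;
    components = [(weight, mu, sigma), ...] with weights summing to 1."""
    return sum(w * gauss(x, mu, sigma) for w, mu, sigma in components)

# Hypothetical restraints on one query distance, derived from two templates:
# a narrow "alignment is correct" component plus a broad error component.
template_a = [(0.8, 5.0, 0.3), (0.2, 5.0, 1.2)]
template_b = [(0.8, 5.4, 0.4), (0.2, 5.4, 1.5)]
weights = {"a": 0.6, "b": 0.4}   # redundancy-derived template weights (assumed)

def combined_log_density(x):
    # Weighted product of template densities, i.e. weighted sum of log-densities.
    return (weights["a"] * math.log(mixture_pdf(x, template_a))
            + weights["b"] * math.log(mixture_pdf(x, template_b)))

# Grid search for the most probable distance under the combined restraint.
best = max((x / 100 for x in range(400, 700)), key=combined_log_density)
print(f"most probable distance ~ {best:.2f} A")
```

The redundancy weights are the key ingredient: without them, near-identical templates would be counted as independent evidence and overwhelm the combined restraint.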
APA, Harvard, Vancouver, ISO, and other styles