
Dissertations / Theses on the topic 'Logic in Computer Science'

Consult the top 50 dissertations / theses for your research on the topic 'Logic in Computer Science.'

1

Wilkinson, Toby. "Enriched coalgebraic modal logic." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/354112/.

Abstract:
We formalise the notion of enriched coalgebraic modal logic, and determine conditions on the category V (over which we enrich), that allow an enriched logical connection to be extended to a framework for enriched coalgebraic modal logic. Our framework uses V-functors L: A → A and T: X → X, where L determines the modalities of the resulting modal logics, and T determines the coalgebras that provide the semantics. We introduce the V-category Mod(A, α) of models for an L-algebra (A, α), and show that the forgetful V-functor from Mod(A, α) to X creates conical colimits. The concepts of bisimulation, simulation, and behavioural metrics (behavioural approximations) are generalised to a notion of behavioural questions that can be asked of pairs of states in a model. These behavioural questions are shown to arise through choosing the category V to be constructed through enrichment over a commutative unital quantale (Q, ⊗, I) in the style of Lawvere (1973). Corresponding generalisations of logical equivalence and expressivity are also introduced, and expressivity of an L-algebra (A, α) is shown to have an abstract category theoretic characterisation in terms of the existence of a so-called behavioural skeleton in the category Mod(A, α). In the resulting framework every model carries the means to compare the behaviour of its states, and we argue that this implies a class of systems is not fully defined until it is specified how states are to be compared or related.
2

Coughlin, Devin. "Type-Intertwined Separation Logic." Thesis, University of Colorado at Boulder, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3704668.

Abstract:

Static program analysis can improve programmer productivity and software reliability by definitively ruling out entire classes of programmer mistakes. For mainstream imperative languages such as C, C++, and Java, static analysis about the heap---memory that is dynamically allocated at run time---is particularly challenging because heap memory acts as global, mutable state. This dissertation describes how to soundly combine two static analyses that each take vastly different approaches to reasoning about the heap: type systems and separation logic. Traditional type systems take an alias-agnostic, global view of the heap that affords both fast verification and light-weight annotation of invariants holding over the entire program. Separation logic, in contrast, provides an alias-aware, local view of the heap in which invariants can vary at each program point. In this work, I show how type systems and separation logic can be safely and efficiently combined. The result is type-intertwined separation logic, an analysis that applies traditional type-based reasoning to some regions of the program and separation logic to others---converting between analysis representations at region boundaries---and summarizes some portions of the heap with coarse type invariants and others with precise separation logic invariants. The key challenge that this dissertation addresses is the communication and preservation of heap invariants between analyses. I tackle this challenge with two core contributions. The first is type-consistent summarization and materialization, which enables type-intertwined separation logic to both leverage and selectively violate the global type invariant. This mechanism allows the analysis to efficiently and precisely verify invariants that hold almost everywhere. Second, I describe gated separating conjunction, a non-commutative strengthening of standard separating conjunction that expresses local dis-pointing relationships between sub-heaps. 
Gated separation enables local heap reasoning by permitting the separation logic to frame out portions of memory and prevent the type system from interfering with its contents---an operation that would be unsound in type-intertwined analysis with only standard separating conjunction. With these two contributions, type-intertwined separation logic combines the benefits of both type-like global reasoning and separation-logic-style local reasoning in a single analysis.

3

Tarnoff, David. "Episode 4.03 – Combinational Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/31.

Abstract:
Individual logic gates are not very practical. Their power comes when you combine them to create combinational logic. This episode takes a look at combinational logic by working through an example in order to generate its truth table.
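The exercise the episode describes is easy to reproduce in code. Below is a minimal sketch that enumerates every input combination of a small combinational circuit and records its output; the circuit `F = (A AND B) OR (NOT C)` is an assumed example for illustration, not necessarily the one worked in the episode.

```python
from itertools import product

def truth_table(f, n_inputs):
    """Enumerate all input combinations and record the circuit's output."""
    return [(bits, f(*bits)) for bits in product((0, 1), repeat=n_inputs)]

# Assumed example circuit: F = (A AND B) OR (NOT C)
def circuit(a, b, c):
    return (a & b) | (1 - c)

table = truth_table(circuit, 3)
for row, out in table:
    print(row, "->", out)
```

With three inputs the table has 2³ = 8 rows, which is exactly what the episode's pencil-and-paper method produces.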
4

Tarnoff, David. "Episode 5.02 – NAND Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/39.

5

Xu, Qing. "Optimization techniques for distributed logic simulation." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96665.

Abstract:
Gate level simulation is a necessary step to verify the correctness of a circuit design before fabrication. It is a very time-consuming application, especially in light of current circuit sizes. Since circuits are continually growing in size and complexity, there is a need for more efficient simulation techniques to keep the circuit verification time acceptably small. The use of parallel or distributed simulation is such a technique. When executed on a network of workstations, distributed simulation is also a very cost-effective technique. This research focuses on optimization techniques for Time Warp based gate-level logic simulations. The techniques which are described in this thesis are oriented towards distributed platforms. The first major contribution of this thesis was the creation of an object-oriented distributed simulator, XTW. It uses an optimistic synchronization algorithm and incorporates a number of known optimization techniques targeting different aspects of distributed logic simulation. XEQ, an O(1) event scheduling algorithm, was developed for use in XTW. XEQ enabled us to execute gate level simulations up to 9.4 times faster than the same simulator using a skip-list (O(lg n)) event queue. rb-message, a mechanism which reduces the cost of rollback in Time Warp, was also developed for use in XTW. Our experiments revealed that the rb-message mechanism reduced the number of anti-messages sent in a Time Warp based logic simulation by 76% on average. Moreover, based on the observations that (1) not all circuits should be simulated in parallel and (2) different circuits achieve their best parallel simulation performance with a different number of compute nodes, an algorithm that uses the K-NN machine learning algorithm was devised to determine the most effective software and hardware combination for a logic simulation. After an extensive training regime, it was shown to make a correct prediction 99% of the time on whether to use a parallel or sequential simulator. The predicted number of nodes to use on a parallel platform was shown to produce an average execution time which was no more than 12% greater than the smallest execution time. The configuration which resulted in the minimal execution time was picked 61% of the time. A final contribution of this thesis is an effort to link together commercial single processor simulators making use of the Verilog PLI.
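The prediction step described in this abstract can be conveyed with a from-scratch k-nearest-neighbour vote. This is a hedged illustration only: the feature choice (log₁₀ gate count and an activity fraction), the toy training points, and the `knn_predict` helper are all assumptions, not the thesis's actual model or data.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Assumed toy features: (log10 gate count, fraction of active gates).
training = [
    ((3.0, 0.10), "sequential"),
    ((3.5, 0.05), "sequential"),
    ((4.0, 0.20), "sequential"),
    ((6.0, 0.40), "parallel"),
    ((6.5, 0.30), "parallel"),
    ((7.0, 0.50), "parallel"),
]

print(knn_predict(training, (6.8, 0.45)))  # a large, busy circuit
```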
6

Kabiri, Chimeh Mozhgan. "Data structures for SIMD logic simulation." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7521/.

Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and 2 other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.

The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself; targeting GPUs, by contrast, requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron. To conclude, the two main achievements are restated as follows: the primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms; the secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
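The general idea behind SIMD logic simulation, evaluating one gate across many packed test vectors with a single bitwise operation, can be sketched in a few lines. This is an illustration of the principle only, not ZSIM's actual lock-free data structures; the 64-lane packing, the `simulate` helper, and the half-adder netlist are assumptions.

```python
MASK = (1 << 64) - 1  # 64 simulation "lanes" packed into one integer

def simulate(gates, inputs):
    """Evaluate a topologically ordered netlist over all lanes at once.
    gates: list of (output_name, op, (input_a, input_b)); inputs: name -> packed int."""
    values = dict(inputs)
    ops = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "XOR":  lambda a, b: a ^ b,
        "NAND": lambda a, b: ~(a & b) & MASK,
    }
    for out, op, (a, b) in gates:
        values[out] = ops[op](values[a], values[b])
    return values

# Two lanes of interest: lane 0 has A=1, B=0; lane 1 has A=1, B=1.
netlist = [("s", "XOR", ("A", "B")), ("c", "AND", ("A", "B"))]  # half adder
v = simulate(netlist, {"A": 0b11, "B": 0b10})
print(bin(v["s"]), bin(v["c"]))  # sum is 0b01, carry is 0b10
```

Each bitwise operation evaluates the gate in all 64 lanes simultaneously, which is the same data-parallel effect a SIMD gather-based simulator exploits at machine-vector width.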
7

Lapointe, Stéphane. "Induction of recursive logic programs." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7467.

8

Botha, Leonard. "The Bayesian Description Logic BALC." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29350.

Abstract:
Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives. This limits their application in many real world domains, which often require reasoning about uncertain or contradictory information. In this thesis we present the Bayesian Description Logic BALC, which takes existing work on Bayesian Description Logics and applies it to the classical Description Logic ALC. We define five reasoning problems for BALC: concept satisfiability (in two versions, called total and partial respectively), knowledge base consistency, subsumption (in three variants: positive subsumption, p-subsumption, and exact subsumption), instance checking, and the most likely context problem. Consistency, satisfiability, and instance checking have not previously been studied in the context of contextual Bayesian DLs, and as such this is new work. We then go on to provide algorithms that solve all of these reasoning problems, with the exception of the most likely context problem. We found that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian network is included in the size of the knowledge base. That is, all reasoning problems mentioned above (excluding most likely context) are exponential in the size of the knowledge base and the size of the Bayesian network.
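The flavour of context-based probabilistic reasoning can be conveyed with a toy sketch: axioms hold only in certain Bayesian-network contexts, and the probability of a subsumption is the probability mass of the contexts whose active axioms classically entail it. Everything below (the single network variable `x`, the distribution, and the reachability-based `entails` check over atomic subsumptions) is an assumed miniature, not the thesis's algorithms.

```python
# Assumed toy knowledge base: atomic subsumptions C ⊑ D, each labeled with
# the BN context (an assignment to variable x) in which it holds.
axioms = [
    ("Student", "Person", {"x": True}),
    ("Student", "Person", {"x": False}),
    ("Person", "Mortal", {"x": True}),
]
worlds = [({"x": True}, 0.7), ({"x": False}, 0.3)]  # assumed joint distribution

def entails(sub, sup, active):
    """Reachability over the atomic-subsumption graph (toy classical check)."""
    seen, frontier = {sub}, [sub]
    while frontier:
        c = frontier.pop()
        for a, b, _ in active:
            if a == c and b not in seen:
                seen.add(b)
                frontier.append(b)
    return sup in seen

def p_subsumes(sub, sup):
    """Probability mass of the worlds whose active axioms entail sub ⊑ sup."""
    return sum(p for w, p in worlds
               if entails(sub, sup, [ax for ax in axioms if ax[2] == w]))

print(p_subsumes("Student", "Mortal"))  # 0.7: entailed only when x is True
```

Summing over every world of the network is what makes the combined problem exponential in the size of the Bayesian network as well as the knowledge base.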
9

Xu, Qing. "XTW, a parallel and distributed logic simulator." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19631.

Abstract:
In this thesis, a new parallel synchronization mechanism, XTW, is proposed. XTW is designed for the parallel simulation of large logic circuits on a cluster of computer workstations. In XTW, a new event queue structure, XEQ, is created in order to reduce the cost of event scheduling; a new message "un-sending" mechanism, "rb-messages", is proposed to reduce the cost of un-sending previously sent messages. Both theoretical analysis and actual simulations provide evidence that XTW speeds up parallel logic simulations and provides excellent scalability versus the number of processors and the circuit size. An object-oriented parallel logic simulation software framework, XTWFM, is built upon the base of the XTW mechanism. A million-gate circuit, which cannot be simulated by our sequential simulator, is successfully simulated by XTWFM over a cluster of 6 "small" PCs. This success demonstrates that a cluster of PCs is an attractive low-cost alternative for large scale circuit simulation.
10

Phillips, Caitlin. "An algebraic approach to dynamic epistemic logic." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86767.

Abstract:
In reasoning about multi-agent systems, it is important to look beyond the realm of propositional logic and to reason about the knowledge of agents within the system, as what they know about the environment will affect how they behave. A useful tool for formalizing and analyzing what agents know is epistemic logic, a modal logic developed by philosophers in the early 1960s. Epistemic logic is key to understanding knowledge in multi-agent systems, but insufficient if one wishes to study how the agents' knowledge changes over time. To do this, it is necessary to use a logic that combines dynamic and epistemic modalities, called dynamic epistemic logic. Some formalizations of dynamic epistemic logic use Kripke semantics for the states and actions, while others take a more algebraic approach, and use order-theoretic structures in their semantics. We discuss several of these logics, but focus predominantly on the algebraic framework for dynamic epistemic logic.
Past approaches to dynamic epistemic logic have typically been focused on actions whose primary purpose is to communicate information from one agent to another. These actions are unable to alter the valuation of any proposition within the system. In fields such as security and economics, it is easy to imagine situations in which this sort of action would be insufficient. Instead, we expand the framework to include both communication actions and actions that change the state of the system. Furthermore, we propose a new modality which captures both epistemic and propositional changes that result from the agents' actions.
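For readers new to the area, the relational (Kripke-style) picture that this abstract contrasts with its algebraic approach can be sketched as a truthful public announcement: the action deletes the worlds that refute the announced fact, changing what an agent knows without changing any valuation. The two-proposition model and the indistinguishability relation below are assumed examples; the sketch shows only a communication action, not the state-changing actions the thesis adds.

```python
# Worlds are valuations; agent a's relation is indistinguishability over worlds.
worlds = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": False, "q": False},
]
# Assumption: agent a cannot distinguish worlds that agree on q.
def indist(w, v):
    return w["q"] == v["q"]

def knows(fact, world, model):
    """K_a fact holds at `world` iff fact is true in every indistinguishable world."""
    return all(v[fact] for v in model if indist(world, v))

actual = worlds[1]                       # p true, q false
before = knows("p", actual, worlds)
# Public announcement of p: delete the worlds where p is false.
after_model = [w for w in worlds if w["p"]]
after = knows("p", actual, after_model)
print(before, after)  # False True
```

Before the announcement, agent a confuses the actual world with one where p fails; afterwards every surviving indistinguishable world satisfies p, so K_a p holds.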
11

Clément, Ian. "Proof theoretical foundations for constructive Description Logic." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22027.

Abstract:
Description logics (DLs) are a family of knowledge representation languages to describe concepts in a given domain. While we can define the semantics of description logics using, for example, a translation into first-order logic, so far the proof-theoretic nature of DL has not been well investigated. In this thesis, we develop a proof theory for a constructive version of Description Logic, specifically Attributive Language with Complement (ALC), in two steps: First, we define a natural deduction system for ALC and develop a sequent calculus formulation, for which we prove cut-admissibility. We build on prior work on constructive description logic by de Paiva [2006] and modal logic by Simpson [1994], which ensures the consistency of the proposed systems for ALC. In addition, we prove soundness and completeness of this system with respect to known Kripke semantics. The study of these properties provides further evidence that it is appropriate to consider description languages as logics. Second, we adapt recent work by Andreoli [1992] on focusing systems for a variety of non-classical logics to the setting of constructive description logics. Exploiting the invertibility of certain inference rules, we design a focusing calculus suitable for backwards search, and prove its correctness via cut-admissibility. This proof-theoretic study lays the foundation for the development of a practical proof search strategy for constructive description logics.
12

Lambiri, Cristian. "Temporal logic models for distributed systems." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10056.

Abstract:
Since the beginning of the 1980s, the way computer systems are conceived has changed dramatically. This is a direct result of the appearance, on a large scale, of personal computers and engineering workstations. As a result, networks of independent systems have appeared. This thesis presents a formal specification framework that can be used in the design of distributed systems. The abstract models that are presented are based on a systemic view of distributed systems and discrete event systems. Two base abstract models, called deterministic discrete event systems (DDES) and discrete event automata (DEA), are presented. For the DEA, series and parallel composition, as well as feedback connection, are defined. Universal algebra is employed to study the parallel composition of DEAs. From the DDES/DEA an abstract model for distributed systems is obtained. Subsequently, linear time temporal logic is modified for use with the chosen abstract model of distributed systems. The logic is described in three aspects: syntax, semantics and axiomatics. The syntax is modified by the addition of two operators. The semantics of the logic is given over the abstract models. Five axioms are added to the axiomatic system for the two new operators. A programming language called TLL, based on the theoretical framework, links the theory with practice. The syntax and semantics of the programming language are presented. Finally, an example of modeling in the framework is given.
13

Ngom, Alioune. "Set logic foundation of carrier computing." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10321.

Abstract:
Set logic algebra (SLA) is a special case of multiple-valued logic algebra. As an ultra higher-valued logic system, a set-valued logic (SVL) system offers a potential and essential solution to the interconnection problems that occur in highly parallel VLSI systems. The fundamental concept inherent to a SVL system is multiplex computing, or logic value multiplexing: the simultaneous transmission of logic values. This basic concept enables the realization of superchips free from interconnection problems. Parallel processing with multiplexable information carriers makes it possible to construct large-scale highly parallel systems with reduced interconnections. Since the multiplexing of logic values increases the information density, several binary functions can be executed in parallel in a single module. Therefore a great reduction of interconnections can be achieved using an optimal multiplexing scheme. Possible approaches to the implementation of the SVL system are based on frequency multiplexing, wave multiplexing and molecule multiplexing, and are called carrier computing systems. Our research focuses on the study of completeness properties in SLA under compositions with union ($\bigcup$), intersection ($\bigcap$) and complement ($\sp-$) functions. More precisely, the question is what kind of set logic functions can be constructed from a given set of functions which includes $\bigcup,$ $\bigcap,$ and $\sp-$; i.e., whether any set logic function can be constructed from such a set. We classify the set logic functions according to their ability to participate in a base (a complete irredundant set of functions) and describe all bases once the classification is done. We also develop some algorithms (programs) for the classification and enumeration of functions and bases, which are very useful for a general completeness analysis.
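A minimal model helps fix the idea of set-valued logic: values are subsets of a carrier set, and new functions are built by composing union, intersection, and complement. The three-element carrier `R = {0, 1, 2}` and the symmetric-difference composition below are assumptions for illustration, not the thesis's classification machinery.

```python
R = frozenset({0, 1, 2})  # assumed carrier set of logic values

def union(a, b):
    return a | b

def inter(a, b):
    return a & b

def comp(a):
    return R - a

# A new set logic function built purely by composing the three primitives:
# sym_diff(a, b) = (a ∪ b) ∩ complement(a ∩ b)
def sym_diff(a, b):
    return inter(union(a, b), comp(inter(a, b)))

x, y = frozenset({0, 1}), frozenset({1, 2})
print(sorted(sym_diff(x, y)))  # [0, 2]
```

The completeness question the abstract poses is exactly which functions, like `sym_diff` here, are reachable by such compositions from a given starting set of functions.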
14

Quigley, Claire Louise. "A programming logic for Java bytecode programs." Thesis, University of Glasgow, 2004. http://theses.gla.ac.uk/3030/.

Abstract:
One significant disadvantage of interpreted bytecode languages, such as Java, is their low execution speed in comparison to compiled languages like C. The mobile nature of bytecode adds to the problem, as many checks are necessary to ensure that downloaded code from untrusted sources is rendered as safe as possible. But there do exist ways of speeding up such systems. One approach is to carry out static type checking at load time, as in the case of the Java Bytecode Verifier. This reduces the number of runtime checks that must be done and also allows certain instructions to be replaced by faster versions. Another approach is the use of a Just In Time (JIT) Compiler, which takes the bytecode and produces corresponding native code at runtime. Some JIT compilers also carry out some code optimization. There are, however, limits to the amount of optimization that can safely be done by the Verifier and JITs; some operations simply cannot be carried out safely without a certain amount of runtime checking. But what if it were possible to prove that the conditions the runtime checks guard against would never arise in a particular piece of code? In this case it might well be possible to dispense with these checks altogether, allowing optimizations not feasible at present. In addition to this, because of time constraints, current JIT compilers tend to produce acceptable code as quickly as possible, rather than producing the best code possible. By removing the burden of analysis from them it may be possible to change this. We demonstrate that it is possible to define a programming logic for bytecode programs that allows the proof of bytecode programs containing loops. The instructions available to use in the programs are currently limited, but the basis is in place to extend these. The development of this logic is non-trivial and addresses several difficult problems engendered by the unstructured nature of bytecode programs.
15

Tarnoff, David. "Episode 4.01 – Intro to Logic Gates." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/29.

Abstract:
Logic gates are the fundamental building blocks of digital circuits. In this episode, we take a look at the four most basic gates: AND, OR, exclusive-OR, and the inverter, and show how an XOR gate can be used to compare two digital values.
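The XOR-based comparison mentioned here can be sketched directly: two bit patterns are equal exactly when their bitwise XOR is zero, since each XOR output bit flags a position where the inputs differ. The helper name below is an assumption for illustration.

```python
def equal_via_xor(a, b):
    """Two bit patterns match exactly when every XOR output bit is 0."""
    return (a ^ b) == 0

print(equal_via_xor(0b1011, 0b1011))  # True
print(equal_via_xor(0b1011, 0b1001))  # False: bit 1 differs
```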
APA, Harvard, Vancouver, ISO, and other styles
16

Tibbits, Skylar J. E. "Logic matter : digital logic as heuristics for physical self-guided-assembly." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/64566.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 123-124).
Given the increasing complexity of the physical structures surrounding our everyday environment -- buildings, machines, computers and almost every other physical object that humans interact with -- the processes of assembling these complex structures are inevitably caught in a battle of time, complexity and human/machine processing power. If we are to keep up with this exponential growth in construction complexity, we need to develop automated assembly logic embedded within our material parts to aid in construction. In this thesis I introduce Logic Matter as a system of passive mechanical digital logic modules for self-guided-assembly of large-scale structures. As opposed to current systems in self-reconfigurable robotics, Logic Matter introduces scalability, robustness, redundancy and local heuristics to achieve passive assembly. I propose a mechanical module that implements digital NAND logic as an effective tool for encoding local and global assembly sequences. I then show a physical prototype that successfully demonstrates the described mechanics, encoded information and passive self-guided-assembly. Finally, I show the exciting potential of Logic Matter as a new system of computing, with applications in space/volume filling, surface construction, and 3D circuit assembly.
by Skylar J.E. Tibbits.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
17

Long, Byron L. "Validity in a variant of separation logic." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3378369.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Jul 9, 2010). Source: Dissertation Abstracts International, Volume: 70-10, Section: B, page: 6348. Adviser: Daniel Leivant.
APA, Harvard, Vancouver, ISO, and other styles
18

McKenzie, Lynn Mhairi. "Logic synthesis and optimisation using Reed-Muller expansions." Thesis, Edinburgh Napier University, 1995. http://researchrepository.napier.ac.uk/Output/4276.

Full text
Abstract:
This thesis presents techniques and algorithms which may be employed to represent, generate and optimise particular categories of Exclusive-OR Sum-Of-Products (ESOP) forms. The work documented herein concentrates on two types of Reed-Muller (RM) expressions, namely, Fixed Polarity Reed-Muller (FPRM) expansions and KROnecker (KRO) expansions (a category of mixed polarity RM expansions). Initially, the theory of switching functions is comprehensively reviewed. This includes descriptions of various types of RM expansion and ESOP forms. The structures of Binary Decision Diagrams (BDDs) and Reed-Muller Universal Logic Module (RM-ULM) networks are also examined. Heuristic algorithms for deriving optimal (sub-optimal) FPRM expansions of Boolean functions are described. These algorithms are improved forms of an existing tabular technique [1]. Results are presented which illustrate the performance of these new minimisation methods when evaluated against selected existing techniques. An algorithm which may be employed to generate FPRM expansions from incompletely specified Boolean functions is also described. This technique introduces a means of determining the optimum allocation of the Boolean 'don't care' terms so as to derive equivalent minimal FPRM expansions. The tabular technique [1] is extended to allow the representation of KRO expansions. This new method may be employed to generate KRO expansions from either an initial incompletely specified Boolean function or a KRO expansion of different polarity. Additionally, it may be necessary to derive KRO expressions from Boolean Sum-Of-Products (SOP) forms where the product terms are not minterms. A technique is described which forms KRO expansions from disjoint SOP forms without first expanding the SOP expressions to minterm forms. Reed-Muller Binary Decision Diagrams (RMBDDs) are introduced as a graphical means of representing FPRM expansions. RMBDDs are analogous to the BDDs used to represent Boolean functions.
Rules are detailed which allow the efficient representation of the initial FPRM expansions, and an algorithm is presented which may be employed to determine an optimum (sub-optimum) variable ordering for the RMBDDs. The implementation of RMBDDs as RM-ULM networks is also examined. The thesis concludes with a review of the algorithms and techniques developed during this research project. The value of these methods is discussed and suggestions are made as to how improved results could have been obtained. Additionally, areas for future work are proposed.
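The positive-polarity special case of an FPRM expansion (all variables uncomplemented) can be derived from a truth table with the standard GF(2) "butterfly" transform, which is its own inverse. The sketch below shows only that textbook transform, not the tabular technique of [1] that the thesis extends:

```python
def reed_muller_transform(truth_table):
    """XOR 'butterfly' computing positive-polarity Reed-Muller coefficients.

    truth_table: list of 0/1 values of length 2**n, indexed by the input
    bit pattern.  Returns c, where f(x) is the XOR of c[m] over all
    monomial masks m that are subsets of x.  Over GF(2) the transform is
    an involution: applying it twice returns the input.
    """
    c = list(truth_table)
    step = 1
    while step < len(c):
        for block in range(0, len(c), 2 * step):
            for j in range(block, block + step):
                c[j + step] ^= c[j]  # GF(2) subset accumulation
        step *= 2
    return c

# 3-variable parity x0 XOR x1 XOR x2: only the three linear terms survive.
parity = [bin(x).count("1") % 2 for x in range(8)]
print(reed_muller_transform(parity))  # [0, 1, 1, 0, 1, 0, 0, 0]
```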
APA, Harvard, Vancouver, ISO, and other styles
19

Alqahtani, Saeed Masaud H. "Cloud intrusion detection systems : fuzzy logic and classifications." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/45430/.

Full text
Abstract:
Cloud Computing (CC), as defined by the National Institute of Standards and Technology (NIST), is a new technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service-provider interaction. CC is a fast-growing field; yet, there are major concerns regarding the detection of security threats, which in turn have urged experts to explore solutions to improve its security performance through conventional approaches, such as the Intrusion Detection System (IDS). In the literature, the two most successful IDS tools in current worldwide use are Snort and Suricata; however, these tools cannot flexibly handle the uncertainty of intrusions. The aim of this study is to explore novel approaches to uplift the CC security performance using the Type-1 fuzzy logic (T1FL) technique with IDS when compared to IDS alone. All experiments in this thesis were performed within a virtual cloud that was built within an experimental environment. By combining the fuzzy logic technique (FL System) with the IDSs, namely SnortIDS and SuricataIDS, each detection system was used twice (with and without FL) to create four detection systems (FL-SnortIDS, FL-SuricataIDS, SnortIDS, and SuricataIDS), evaluated on the Intrusion Detection Evaluation Dataset (namely ISCX). ISCX comprised two types of traffic (normal and threats); the latter was classified into four classes: Denial of Service, User-to-Root, Root-to-Local, and Probing. Sensitivity, specificity, accuracy, false alarms and detection rate were compared among the four detection systems. Then, a Fuzzy Intrusion Detection System model (namely FIDSCC) was designed in CC based on the results of the aforementioned four detection systems.
The FIDSCC model comprises two individual systems: pre- and post-threat detection systems (pre-TDS and post-TDS). The pre-TDS was designed based on the number of threats in the aforementioned classes to assess the detection rate (DR). Based on the output of this DR and the false positives of the four detection systems, the post-TDS was designed in order to assess CC security performance. To assure the validity of the results, classifier algorithms (CAs) were introduced to each of the four detection systems and four threat classes for further comparison. The classifier algorithms were OneR, Naive Bayes, Decision Tree (DT), and K-nearest neighbour. The comparison was made based on specific measures including accuracy, incorrectly classified instances, mean absolute error, false positive rate, precision, recall, and ROC area. The empirical results showed that FL-SnortIDS was superior to FL-SuricataIDS, SnortIDS, and SuricataIDS in terms of sensitivity. However, no significant difference was found in specificity, false alarms or accuracy among the four detection systems. Furthermore, among the four CAs, the combination of FL-SnortIDS and DT was shown to be the best detection method. The results of these studies showed that the FIDSCC model can provide a better alternative for detecting threats and reducing false positive rates than the other conventional approaches.
APA, Harvard, Vancouver, ISO, and other styles
20

Hinman, Roderick Thornton. "Recovered energy logic--a logic family and power supply featuring very high efficiency." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12015.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 215-220).
by Roderick Thornton Hinman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Liang-Ting. "On a purely categorical framework for coalgebraic modal logic." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4882/.

Full text
Abstract:
A category CoLog of distributive laws is introduced to unify different approaches to modal logic for coalgebras, based merely on the presence of a contravariant functor P that maps a state space to its collection of predicates. We show that categorical constructions, including colimits, limits, and compositions of distributive laws as a tensor product, in CoLog generalise and extend existing constructions given for Set coalgebraic logics and that the framework does not depend on any particular propositional logic or state space. In the case that P establishes a dual adjunction with its dual functor S, we show that a canonically defined coalgebraic logic exists for any type of coalgebras. We further restrict our discussion to finitary algebraic logics and study equational coalgebraic logics. Objects of predicate liftings are used to characterise equational coalgebraic logics. The expressiveness problem is studied via the mate correspondence, which gives an isomorphism between CoLog and the comma category from the pre-composition to the post-composition with S. Then, the modularity of the expressiveness is studied in the comma category via the notion of factorisation system.
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Kailiang. "Circuit design for logic automata." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52781.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 143-148).
The Logic Automata model is a universal distributed computing structure which pushes parallelism to the bit-level extreme. This new model drastically differs from conventional computer architectures in that it exposes, rather than hides, the physics underlying the computation by accommodating data processing and storage in a local and distributed manner. Based on Logic Automata, highly scalable computing structures for digital and analog processing have been developed and are verified at the transistor level in this thesis. The Asynchronous Logic Automata (ALA) model is derived by adding temporal locality, i.e., asynchrony in data exchanges, to the spatial locality of the Logic Automata model. As a demonstration of this incrementally extensible, clockless structure, we designed an ALA cell library in 90 nm CMOS technology and established a "pick-and-place" design flow for fast ALA circuit layout. The workflow gracefully aligns the description of computer programs and circuit realizations, providing a simpler and more scalable solution for Application Specific Integrated Circuit (ASIC) designs, which are currently limited by global constraints such as the clock and long interconnects. The potential of the ALA circuit design flow is tested with example applications for mathematical operations. The same Logic Automata model can also be augmented by relaxing the digital states into analog ones for interesting analog computations. The Analog Logic Automata (AnLA) model is a merge of the Analog Logic principle and the Logic Automata architecture, in which efficient processing is embedded onto a scalable construction.
(cont.) In order to study the unique property of this mixed-signal computing structure, we designed and fabricated an AnLA test chip in AMI 0.5[mu]m CMOS technology. Chip tests of an AnLA Noise-Locked Loop (NLL) circuit as well as application tests of AnLA image processing and Error-Correcting Code (ECC) decoding, show large potential of the AnLA structure.
by Kailiang Chen.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
23

Maggi, Alessandro. "The DReAM framework: a logic-inspired approach to reconfigurable system modeling." Thesis, IMT Alti Studi Lucca, 2020. http://e-theses.imtlucca.it/310/1/Maggi_phdthesis.pdf.

Full text
Abstract:
Modern systems evolve in unpredictable environments and have to continuously adapt their behavior to changing conditions. The DReAM (Dynamic Reconfigurable Architecture Modeling) framework has been designed to address these requirements by offering the tools for modeling reconfigurable dynamic systems effectively. At its core, the framework allows component-based architecture design leveraging a rule-based language inspired by Interaction Logic. The expressiveness of the language allows us to define the behavior of both components and component aggregates encompassing all aspects of dynamicity, including parametric multi-modal coordination of components and reconfiguration of their structure and population. DReAM allows the description of both endogenous/modular and exogenous/centralized coordination styles and sound transformations from one style to the other, while adopting a familiar and intuitive syntax. To better model dynamic mobile systems, the framework is further extended with two structuring concepts: motifs - independent dynamic architectures coordinating components assigned to them - and maps - graph-like data structures modeling the topology of the environment and parametrizing coordination between components. The jDReAM Java project has been developed to provide an execution engine with an associated library of classes and methods that support system specifications conforming to the DReAM syntax. It makes it possible to develop runnable systems combining the expressiveness of the rule-based notation with the flexibility of this widespread programming language.
APA, Harvard, Vancouver, ISO, and other styles
24

Martínez-Mascarúa, Carlos Mario. "Syntactic and semantic structures in cocolog logic control." Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=34757.

Full text
Abstract:
The research presented in this thesis is formulated within the Conditional Observer and Controller Logic (COCOLOG) framework. COCOLOG is a family of first order languages and associated theories for the design and implementation of controllers for discrete event systems (DESs).
The opening part of this thesis presents a high level formulation of COCOLOG called Macro COCOLOG. First, we present the theory of Macro COCOLOG languages, a framework for the enhancement of the original COCOLOG language via definitional constructions. Second, we present the theory of Macro COCOLOG actions, a framework for the enhancement of COCOLOG allowing the utilisation of hierarchically aggregated control actions.
In this thesis Macro COCOLOG is applied to a pair of examples: the control of the motion of a mobile robot and the flow of water through a tank.
The next question addressed in the thesis is the possibility of expanding the original COCOLOG theories in various ways concerning the fundamental issues of the arithmetic system and the notion of reachability in DESs as expressed in COCOLOG. Specifically, the fundamental nature of the reachability predicate, Rbl(·,·,·), is explored, and found to be completely determined by notions axiomatised in subtheories of the original COCOLOG theory. This result effectively reduces the complexity of the proofs originally involving Rbl(·,·,·).
Following this line of thought, two sets of Macro languages and associated theories are developed which are shown to be as powerful (in terms of expressiveness and deductive scope) as the original COCOLOG theories and hence, necessarily, as powerful as Markovian fragment COCOLOG theories.
A final result along these lines is that the control law itself (originally expressed in a set of extra logical Conditional Control Rules) can be incorporated into the COCOLOG theories via function symbol definition.
The efficient implementation of COCOLOG controllers serves as a motivation for the final two chapters of the thesis. A basic result in this chapter is that a COCOLOG controller may itself be realized as a DES since, for any COCOLOG controller, it is shown that one may generate a finite state machine realizing that controller. This realization can then be used for real time (i.e. reactive) control. (Abstract shortened by UMI.)
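The closing result, that a COCOLOG controller can be realized as a finite state machine for real-time control, can be pictured with a toy controller for the water-tank example mentioned earlier in the abstract. The states, events, and transition table below are hypothetical, chosen only to illustrate the FSM-realization style, not taken from the thesis:

```python
# Hypothetical finite-state realization of a simple tank-level controller:
# open the valve when the tank runs low, close it when the tank is full.
TRANSITIONS = {
    ("valve_open", "full"): "valve_closed",
    ("valve_open", "low"): "valve_open",
    ("valve_closed", "low"): "valve_open",
    ("valve_closed", "full"): "valve_closed",
}

def run_controller(state, events):
    """Drive the FSM over a sequence of sensor events; return visited states."""
    trace = [state]
    for event in events:
        state = TRANSITIONS[(state, event)]
        trace.append(state)
    return trace

print(run_controller("valve_closed", ["low", "low", "full"]))
# ['valve_closed', 'valve_open', 'valve_open', 'valve_closed']
```

Because the transition table is finite and fixed, each control step is a single lookup, which is what makes such a realization suitable for reactive, real-time use.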
APA, Harvard, Vancouver, ISO, and other styles
25

Tarnoff, David. "Episode 4.04 – NAND, NOR, and Exclusive-NOR Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/32.

Full text
Abstract:
The simplest combinational logic circuits are made by inverting the output of a fundamental logic gate. Despite this simplicity, these gates are vital. In fact, we can realize any truth table using a circuit made only from AND gates with inverted outputs.
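The claim that any truth table can be realized using only AND gates with inverted outputs (i.e., NAND gates) is easy to check for the basic gates; a quick illustrative sketch in Python (not from the episode itself):

```python
def nand(a, b):
    """AND gate with inverted output: the universal gate."""
    return 1 - (a & b)

def not_(a):
    return nand(a, a)            # one NAND wired to itself

def and_(a, b):
    return not_(nand(a, b))      # NAND followed by an inverter

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: invert inputs, then NAND

def xor_(a, b):
    t = nand(a, b)               # classic four-NAND XOR arrangement
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```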
APA, Harvard, Vancouver, ISO, and other styles
26

Kang, Le. "A logic approach to conflict resolution in university timetabling." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5767.

Full text
Abstract:
A computerized timetabling system developed at the University of Ottawa is presented. The system is built on a logic programming model which uses first-order logic to define first-order and second-order constraints in timetabling. Information about courses, professors, and student programs is collected for each academic year and used in the process of constructing timetables. The time schedule produced by the system takes into account course conflicts, professor availability, professor teaching preferences, pre-assignments, classroom location choices, and many other important factors that affect the user satisfaction level. Along with the system's ability to include factors such as professor availability, professor teaching preferences and classroom location, the analysis of test results at the University of Ottawa shows a large improvement in the time utilization and seating usage of classrooms, compared to the corresponding timetables produced by the traditional manual processes. (Abstract shortened by UMI.)
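The first-order constraints mentioned above can be pictured with a toy check such as "no professor is assigned two courses in the same time slot". The data and names below are hypothetical, purely to illustrate the kind of conflict the system must resolve:

```python
# Hypothetical toy data: each assignment is (course, professor, time_slot).
schedule = [
    ("CSI1100", "Dupont", "Mon09"),
    ("CSI2200", "Dupont", "Tue10"),
    ("CSI3300", "Lee", "Mon09"),
]

def professor_conflicts(assignments):
    """Return (professor, slot) pairs booked more than once -- one of the
    first-order constraints a timetabling system must enforce."""
    seen, conflicts = set(), set()
    for _course, prof, slot in assignments:
        key = (prof, slot)
        (conflicts if key in seen else seen).add(key)
    return conflicts

print(professor_conflicts(schedule))                                  # set()
print(professor_conflicts(schedule + [("CSI4400", "Dupont", "Tue10")]))
# {('Dupont', 'Tue10')}
```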
APA, Harvard, Vancouver, ISO, and other styles
27

Ngom, Alioune. "Synthesis of multiple-valued logic functions by neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ36787.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Perera, Hemapani Srinath. "Enforcing user-defined management logic in large scale systems." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3358983.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Feb. 10, 2010). Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3611. Adviser: Dennis B. Gannon.
APA, Harvard, Vancouver, ISO, and other styles
29

Eyoh, Imo. "Interval type-2 Atanassov-intuitionistic fuzzy logic for uncertainty modelling." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/51441/.

Full text
Abstract:
This thesis investigates a new paradigm for uncertainty modelling by employing a new class of type-2 fuzzy logic system that utilises fuzzy sets with membership and non-membership functions that are intervals. Fuzzy logic systems, employing type-1 fuzzy sets, that mark a shift from computing with numbers towards computing with words have made remarkable impacts in the field of artificial intelligence. Fuzzy logic systems of type-2, a generalisation of type-1 fuzzy logic systems that utilise type-2 fuzzy sets, have created tremendous advances in uncertainty modelling. The key feature of the type-2 fuzzy logic systems, with particular reference to interval type-2 fuzzy logic systems, is that the membership functions of interval type-2 fuzzy sets are themselves fuzzy. These give interval type-2 fuzzy logic systems an advantage over their type-1 counterparts which have precise membership functions. Whilst the interval type-2 fuzzy logic systems are effective in modelling uncertainty, they are not able to adequately handle an indeterminate/neutral characteristic of a set, because interval type-2 fuzzy sets are only specified by membership functions with an implicit assertion that the non-membership functions are complements of the membership functions (lower or upper). In a real life scenario, it is not necessarily the case that the non-membership function of a set is complementary to the membership function. There may be some degree of hesitation arising from ignorance or a complete lack of interest concerning a particular phenomenon. Atanassov intuitionistic fuzzy set, another generalisation of the classical fuzzy set, captures this thought process by simultaneously defining a fuzzy set with membership and non-membership functions such that the sum of both membership and non-membership functions is less than or equal to 1. 
In this thesis, the advantages of both worlds (interval type-2 fuzzy set and Atanassov intuitionistic fuzzy set) are explored and a new and enhanced class of interval type-2 fuzzy set, namely the interval type-2 Atanassov intuitionistic fuzzy set, that enables hesitation, is introduced. The corresponding fuzzy logic system, namely the interval type-2 Atanassov intuitionistic fuzzy logic system, is rigorously and systematically formulated. In order to assess the viability and efficacy of the developed framework, the possibilities of the optimisation of the parameters of this class of fuzzy systems are rigorously examined. First, the parameters of the developed model are optimised using one of the most popular fuzzy logic optimisation algorithms, the gradient descent (first-order derivative) algorithm, and evaluated on publicly available benchmark datasets from diverse domains and characteristics. It is shown that the new interval type-2 Atanassov intuitionistic fuzzy logic system is able to handle uncertainty well through the minimisation of the error of the system compared with other approaches on the same problem instances and performance criteria. Secondly, the parameters of the proposed framework are optimised using a decoupled extended Kalman filter (second-order derivative) algorithm in order to address the shortcomings of the first-order gradient descent method.
It is shown statistically that the performance of this new framework with fuzzy membership and non-membership functions is significantly better than the classical interval type-2 fuzzy logic systems, which have only fuzzy membership functions, and its type-1 counterpart, which is specified by single membership and non-membership functions. The model is also assessed using a hybrid learning scheme combining the decoupled extended Kalman filter and gradient descent methods. The proposed framework with the hybrid learning algorithm is evaluated by comparing it with existing approaches reported in the literature on the same problem instances and performance metrics. The simulation results have demonstrated the potential benefits of using the proposed framework in uncertainty modelling. Overall, the fusion of these two concepts (interval type-2 fuzzy logic system and Atanassov intuitionistic fuzzy logic system) provides a synergistic capability in dealing with imprecise and vague information.
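The defining constraint of an Atanassov intuitionistic fuzzy set, that membership plus non-membership is at most 1, leaves a hesitation margin of 1 - mu - nu. A small illustrative check in Python (the numeric values are made up):

```python
def hesitation(mu, nu):
    """Hesitation degree of an Atanassov intuitionistic fuzzy element.

    Requires mu + nu <= 1; an ordinary fuzzy set is the special case
    nu = 1 - mu, where the hesitation collapses to 0.
    """
    if mu + nu > 1:
        raise ValueError("membership + non-membership must not exceed 1")
    return 1 - mu - nu

print(hesitation(0.6, 0.3))  # about 0.1 of the unit mass is 'undecided'
print(hesitation(0.6, 0.4))  # 0.0: nu = 1 - mu reduces to an ordinary fuzzy set
```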
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Chen-Hsiu. "A tabular propositional logic: and/or Table Translator." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2409.

Full text
Abstract:
The goal of this project is to design a tool to help users translate any logic statement into Disjunctive Normal Form and present the result as an AND/OR TABLE, which makes the logic relation easier to express by using a two-dimensional grid of values or expressions. This tool is implemented as a web-based, Java-based application. Thus, the user can utilize this tool via the World Wide Web.
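The translation such a tool performs can be sketched by brute force: evaluate the formula over all assignments and emit one AND-term per satisfying row, OR-ed together, which is exactly a Disjunctive Normal Form. An illustrative sketch in Python, not the project's Java implementation:

```python
from itertools import product

def to_dnf(variables, formula):
    """Return the DNF terms of `formula` (a function over boolean keyword args).

    Each satisfying truth-table row becomes one conjunct, e.g. "a & ~b & ~c";
    the full DNF is the OR of the returned terms.
    """
    terms = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(**env):
            literals = [v if env[v] else "~" + v for v in variables]
            terms.append(" & ".join(literals))
    return terms

# (a or b) and not c  -->  three satisfying rows, hence three conjuncts.
print(to_dnf(["a", "b", "c"], lambda a, b, c: (a or b) and not c))
# ['~a & b & ~c', 'a & ~b & ~c', 'a & b & ~c']
```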
APA, Harvard, Vancouver, ISO, and other styles
31

Zhao, Guoxing. "A complete reified temporal logic and its applications." Thesis, University of Greenwich, 2008. http://gala.gre.ac.uk/8200/.

Full text
Abstract:
Temporal representation and reasoning plays a fundamental and increasingly important role in some areas of Computer Science and Artificial Intelligence. A natural approach to represent and reason about time-dependent knowledge is to associate it with instantaneous time points and/or durative time intervals. In particular, there are various ways to use logic formalisms for temporal knowledge representation and reasoning. Based on the chosen logic frameworks, temporal theories can be classified into modal logic approaches (including propositional modal logic approaches and hybrid logic approaches) and predicate logic approaches (including temporal argument methods and temporal reification methods). Generally speaking, the predicate logic approaches are more expressive than the modal logic approaches, and among predicate logic approaches, temporal reification methods are even more expressive for representing and reasoning about general temporal knowledge. However, the current reified temporal logics are so complicated that each of them either lacks a clear definition of its syntax and semantics or lacks a sound and complete axiomatization. In this thesis, a new complete reified temporal logic (CRTL) is introduced which has a clear syntax and semantics, and a complete axiomatic system inherited from the initial first order language. This is the main improvement made to the reification approaches for temporal representation and reasoning. It is a true reified logic since some meta-predicates are formally defined that allow one to predicate and quantify over propositional terms, and it therefore provides the expressive power to represent and reason about both temporal and non-temporal relationships between propositional terms. For a special case, the temporal model of the simplified CRTL system (SCRTL) is defined as scenarios and graphically represented in terms of a directed, partially weighted or attributed, simple graph.
Therefore, the problem of matching temporal scenarios is transformed into conventional graph matching. For the scenario graph matching problem, the traditional eigen-decomposition graph matching algorithm and the symmetric polynomial transform graph matching algorithm are critically examined and improved as two new algorithms, named the meta-basis graph matching algorithm and the sort-based graph matching algorithm respectively, where the meta-basis graph matching algorithm works better for 0-1 matrices while the sort-based graph matching algorithm is more suitable for continuous real matrices. Another important contribution is the node similarity graph matching framework proposed in this thesis, based on which node similarity graph matching algorithms can be defined, analyzed and extended uniformly. We prove that all these node similarity graph matching algorithms fail to work for matching circles.
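The closing negative result is intuitive: a circle (cycle graph) is vertex-transitive, so any node-similarity signature built from local structure is identical for every node and gives the matcher nothing to distinguish them. A tiny Python illustration of that failure mode, using an iterated degree refinement as a stand-in for the thesis's similarity measures:

```python
def degree_refinement(adj, rounds=3):
    """Iteratively refine node labels by the multiset of neighbour labels
    (a 1-dimensional Weisfeiler-Lehman-style signature)."""
    labels = {v: len(nbrs) for v, nbrs in adj.items()}
    for _ in range(rounds):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
    return labels

# A 6-cycle: every node ends up with exactly the same signature, so a
# node-similarity matcher has no information to pin down a correspondence.
n = 6
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
signatures = degree_refinement(cycle)
print(len(set(signatures.values())))  # 1: all six nodes are indistinguishable
```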
APA, Harvard, Vancouver, ISO, and other styles
32

Forst, Jan Frederik. "POLIS : a probabilistic summarisation logic for structured documents." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/467.

Full text
Abstract:
As the availability of structured documents, formatted in markup languages such as SGML, RDF, or XML, increases, retrieval systems increasingly focus on the retrieval of document-elements, rather than entire documents. Additionally, abstraction layers in the form of formalised retrieval logics have allowed developers to include search facilities in numerous applications without needing detailed knowledge of retrieval models. Although automatic document summarisation has been recognised as a useful tool for reducing the workload of information system users, very few such abstraction layers have been developed for the task of automatic document summarisation. This thesis describes the development of an abstraction logic for summarisation, called POLIS, which provides users (such as developers or knowledge engineers) with high-level access to summarisation facilities. Furthermore, POLIS allows users to exploit the hierarchical information provided by structured documents. The development of POLIS is carried out in a step-by-step way. We start by defining a series of probabilistic summarisation models, which provide weights to document-elements at a user-selected level. These summarisation models are those accessible through POLIS. The formal definition of POLIS is performed in three steps. We start by providing a syntax for POLIS, through which users/knowledge engineers interact with the logic. This is followed by a definition of the logic's semantics. Finally, we provide details of an implementation of POLIS. The final chapters of this dissertation are concerned with the evaluation of POLIS, which is conducted in two stages. Firstly, we evaluate the performance of the summarisation models by applying POLIS to two test collections, the DUC AQUAINT corpus and the INEX IEEE corpus. This is followed by application scenarios for POLIS, in which we discuss how POLIS can be used in specific IR tasks.
APA, Harvard, Vancouver, ISO, and other styles
33

Shlyakhter, Ilya 1975. "Declarative symbolic pure-logic model checking." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30184.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 173-181).
Model checking, a technique for finding errors in systems, involves building a formal model that describes possible system behaviors and correctness conditions, and using a tool to search for model behaviors violating correctness properties. Existing model checkers are well-suited for analyzing control-intensive algorithms (e.g. network protocols with simple node state). Many important analyses, however, fall outside the capabilities of existing model checkers. Examples include checking algorithms with complex state, distributed algorithms over all network topologies, and highly declarative models. This thesis addresses the problem of building an efficient model checker that overcomes these limitations. The work builds on Alloy, a relational modeling language. Previous work has defined the language and shown that it can be analyzed by translation to SAT. The primary contributions of this thesis include: a modeling paradigm for describing complex structures in Alloy; significant improvements in scalability of the analyzer; and improvements in usability of the analyzer via the addition of a debugger for overconstraints. Together, these changes make model checking practical for important new classes of analyses. While the work was done in the context of Alloy, some techniques generalize to other verification tools.
by Ilya A. Shlyakhter.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Melnikoff, Stephen Jonathan. "Speech recognition in programmable logic." Thesis, University of Birmingham, 2003. http://etheses.bham.ac.uk//id/eprint/16/.

Full text
Abstract:
Speech recognition is a computationally demanding task, especially the decoding part, which converts pre-processed speech data into words or sub-word units, and which incorporates Viterbi decoding and Gaussian distribution calculations. In this thesis, this part of the recognition process is implemented in programmable logic, specifically, on a field-programmable gate array (FPGA). Relevant background material about speech recognition is presented, along with a critical review of previous hardware implementations. Designs for a decoder suitable for implementation in hardware are then described. These include details of how multiple speech files can be processed in parallel, and an original implementation of an algorithm for summing Gaussian mixture components in the log domain. These designs are then implemented on an FPGA. An assessment is made as to how appropriate it is to use hardware for speech recognition. It is concluded that while certain parts of the recognition algorithm are not well suited to this medium, much of it is, and so an efficient implementation is possible. Also presented is an original analysis of the requirements of speech recognition for hardware and software, which relates the parameters that dictate the complexity of the system to processing speed and bandwidth. The FPGA implementations are compared to equivalent software, written for that purpose. For a contemporary FPGA and processor, the FPGA outperforms the software by an order of magnitude.
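The abstract mentions an implementation of an algorithm for summing Gaussian mixture components in the log domain. The thesis's own hardware formulation is not given here, but the standard software counterpart is the log-add (log-sum-exp) recurrence, sketched below; function names are illustrative, not from the thesis.

```python
import math

def log_add(log_a, log_b):
    # log(exp(log_a) + exp(log_b)), computed without underflow by
    # factoring out the larger term before exponentiating.
    if log_a < log_b:
        log_a, log_b = log_b, log_a
    return log_a + math.log1p(math.exp(log_b - log_a))

def log_sum_mixture(log_weighted_components):
    # Accumulate the log-domain sum of weighted mixture component
    # likelihoods, as needed when evaluating a Gaussian mixture
    # inside a Viterbi decoder.
    total = log_weighted_components[0]
    for lc in log_weighted_components[1:]:
        total = log_add(total, lc)
    return total
```

A hardware version typically replaces `log1p(exp(x))` with a lookup table over a bounded range, since the correction term decays quickly as the operands diverge.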
APA, Harvard, Vancouver, ISO, and other styles
35

Boskovitz, Agnes. "Data editing and logic : the covering set method from the perspective of logic /." View thesis entry in Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gad, Soumyashree Shrikant. "Semantic Analysis of Ladder Logic." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1502740043946349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Avril, Hervé. "Clustered time warp and logic simulation." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41971.

Full text
Abstract:
In this research, the feasibility of using parallel discrete-event simulation techniques to run logic-level circuit simulations on general purpose distributed memory architectures is investigated. After studying the characteristics of VLSI models, we introduce Clustered Time Warp (CTW), a novel approach to parallel discrete event simulation. In CTW, the logical gates of a circuit are partitioned into clusters and the synchronization algorithm makes use of an optimistic approach between the clusters and a sequential approach within the clusters. We also present a new family of three space-based checkpointing algorithms for use with CTW. Results show that each checkpointing algorithm developed for CTW occupies a different point in the spectrum of possible trade-offs between memory usage and execution time.
We also present a dynamic load balancing algorithm developed for Clustered Time Warp which focuses on distributing the load of the simulation evenly among the processors and then tries to reduce inter-processor communications. We make use of a triggering technique based on the throughput of the simulation system. Performance results show that by dynamically balancing the load, the throughput of the simulation system could be improved by more than 100%. No substantial improvement was observed in the overall simulation time when trying to minimize inter-processor communications, suggesting that load distribution is the most important factor to be taken into consideration in speeding up the simulation of digital circuits.
Furthermore, we examine the impact of partitioning and mapping on the performance and behavior of the Clustered Time Warp algorithm. We show that partitioning algorithms which try to minimize the number of cutsets between the partitions do not necessarily succeed in minimizing inter-processor communications. We also show that in our environment, load imbalance has a stronger effect than rollback overhead.
Finally, we study the problem of scalability encountered when using optimistic techniques. We show that the performance of Time Warp can suffer greatly from rollback explosions or when the "dog chasing its tail" phenomenon is observed. We also show that Clustered Time Warp is less sensitive to these phenomena and, as such, is more scalable than Time Warp.
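The throughput-based triggering technique for load balancing described above can be pictured with a minimal sketch; the threshold and function names are assumptions for illustration, not taken from the thesis.

```python
def should_rebalance(events_processed, elapsed_s, prev_throughput, drop_threshold=0.9):
    # Decide whether to trigger load balancing: compute the current
    # throughput (committed events per second) and fire when it falls
    # below a fraction of the previously observed throughput.
    throughput = events_processed / elapsed_s
    triggered = (prev_throughput is not None
                 and throughput < drop_threshold * prev_throughput)
    return triggered, throughput
```

In a cluster-based simulator such as CTW, a trigger like this would prompt migrating whole clusters of gates between processors rather than individual logical processes.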
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Pu. "ATT: Execution models for logic programs." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1061906762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Nai-Wei. "Automatic complexity analysis of logic programs." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186287.

Full text
Abstract:
This dissertation describes research toward automatic complexity analysis of logic programs and its applications. Automatic complexity analysis of programs concerns the inference of the amount of computational resources consumed during program execution, and has been studied primarily in the context of imperative and functional languages. This dissertation extends these techniques to logic programs so that they can handle nondeterminism, namely, the generation of multiple solutions via backtracking. We describe the design and implementation of a (semi)-automatic worst-case complexity analysis system for logic programs. This system can conduct the worst-case analysis for several complexity measures, such as argument size, number of solutions, and execution time. This dissertation also describes an application of such analyses, namely, a runtime mechanism for controlling task granularity in parallel logic programming systems. The performance of parallel systems often starts to degrade when the concurrent tasks in the systems become too fine-grained. Our approach to granularity control is based on time complexity information. With this information, we can compare the execution cost of a procedure with the average process creation overhead of the underlying system to determine at runtime if we should spawn a procedure call as a new concurrent task or just execute it sequentially. Through experimental measurements, we show that this mechanism can substantially improve the performance of parallel systems in many cases. This dissertation also presents several source-level program transformation techniques for optimizing the evaluation of logic programs containing finite-domain constraints. These techniques are based on number-of-solutions complexity information. The techniques include planning the evaluation order of subgoals, reducing the domain of variables, and planning the instantiation order of variable values. 
This application allows us to solve a problem by starting with a more declarative but less efficient program, and then automatically transforming it into a more efficient program. Through experimental measurements we show that these program transformation techniques can significantly improve the efficiency of the class of programs containing finite-domain constraints in most cases.
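The granularity control mechanism described in this abstract reduces to a simple runtime comparison: spawn a goal as a parallel task only when its statically estimated cost exceeds the system's task-creation overhead. A minimal sketch under assumed names and units (not the thesis's implementation):

```python
SPAWN_OVERHEAD = 500  # average task-creation cost, in abstract time units (assumed)

def run_goal(goal, estimated_cost, spawn, execute):
    # estimated_cost comes from compile-time complexity analysis of the
    # procedure (e.g. as a function of argument size, evaluated at runtime).
    # Spawn a new concurrent task only when the work amortizes the
    # process-creation overhead; otherwise execute sequentially.
    if estimated_cost > SPAWN_OVERHEAD:
        return spawn(goal)
    return execute(goal)
```

The point of the scheme is that the comparison is cheap: the complexity expression is evaluated on the actual argument sizes at the call site, so fine-grained calls fall through to sequential execution with negligible overhead.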
APA, Harvard, Vancouver, ISO, and other styles
40

Agin, Ruben. "Logic simulation on a cellular automata machine." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Simon, Thomas D. (Thomas David). "Fast CMOS buffering with post-change logic." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Waldron, Niamh 1974. "InGaAs self-aligned HEMT for logic applications." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/44293.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 123-132).
As CMOS scaling approaches the end of the roadmap it has become a matter of great urgency to explore alternative options to conventional Si devices for logic applications. The high electron mobilities of III-V based compounds make them an attractive option for use as a channel material. Of these materials, InGaAs offers the best balance between a mature technology and high mobility. InGaAs high electron mobility transistors (HEMTs) have already been shown to hold great promise for logic devices but they are typically neither self-aligned nor enhancement mode, and as such are not suitable for scaled VLSI applications. In this work a novel self-aligned device architecture for InGaAs HEMT devices is proposed and demonstrated. The key feature of the process is a non-alloyed W ohmic layer that is separated from the gate by means of an air spacer. The gate to source metal distance is reduced to 60 nm, a 20x improvement over conventional designs where the source to drain distance is typically 1.5 to 2 μm. A detailed analysis of the source resistance was carried out and the heterojunction barrier resistance was determined to be the dominant resistance component. Two methods of changing the device threshold voltage are investigated. In the first, F is used to passivate Si donors in the insulator layer. In the second, the insulator is thinned by means of a dry etch. No degradation of the source resistance was observed using this method, which is an improvement over previous results using wet chemical etching. A 90 nm self-aligned enhancement-mode device with a vertically scaled insulator thickness of 5 nm was fabricated. The device has outstanding logic figures of merit with a VT of 60 mV, gm of 1.3 S/mm, SS of 71 mV/dec, DIBL of 55 mV/V and an Ion/Ileak ratio of 2×10³.
(cont.) These values are outstanding when compared to state-of-the-art Si devices. The relatively low Ion/Ileak ratio is a consequence of operating a Schottky gate device in enhancement mode. Ultimately a high-k gate dielectric solution will be required.
by Niamh Waldron.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Jianqiang Ph D. Massachusetts Institute of Technology. "InGaAs Quantum-Well MOSFETs for logic applications." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99777.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 151-161).
InGaAs is a promising candidate as an n-type channel material for future CMOS due to its superior electron transport properties. Great progress has taken place recently in demonstrating InGaAs MOSFETs for this goal. Among possible InGaAs MOSFET architectures, the recessed-gate design is an attractive option due to its scalability and simplicity. In this thesis, a novel self-aligned recessed-gate fabrication process for scaled InGaAs Quantum-Well MOSFETs (QW-MOSFETs) is developed. The device architectural design emphasizes scalability, performance and manufacturability by making extensive use of dry etching and Si-compatible materials. The fabrication sequence yields precise control of all critical transistor dimensions. This work achieved InGaAs MOSFETs with the shortest gate length (Lg=20 nm), and MOSFET arrays with the smallest contact size (Lc=40 nm) and smallest pitch size (Lp=150 nm), at the time when they were made. Using a wafer bonding technique, InGaAs MOSFETs were also integrated onto a silicon substrate. The fabricated transistors show the potential of InGaAs to yield devices with well-balanced electron transport, electrostatic integrity and parasitic resistance. A device design optimized for transport exhibits a transconductance of 3.1 mS/μm, a value that matches the best III-V high-electron-mobility transistors (HEMTs). The precise fabrication technology developed in this work enables a detailed study of the impact of channel thickness scaling on device performance. The scaled III-V device architecture achieved in this work has also enabled new device physics studies relevant for the application of InGaAs transistors for future logic. A particularly important one is OFF-state leakage. For the first time, this work has unambiguously identified band-to-band tunneling (BTBT) amplified by a parasitic bipolar effect as the cause of excess OFF-state leakage current in these transistors. This finding has important implications for future device design.
by Jianqiang Lin.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Che, Austin 1979. "Engineering RNA logic with synthetic splicing ribozymes." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47786.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 169-185).
Reusable components, such as logic gates and code libraries, simplify the design and implementation of electronic circuits and computer programs. The engineering of biological systems would benefit also from reusable components. In this thesis, I show the utility of splicing ribozymes for the biological engineer. Ribozymes allow the engineer to manipulate existing biological systems and to program self-modifying RNA systems. In addition, splicing ribozymes are easy to engineer, malleable, modular, and scalable. I used the model ribozyme from Tetrahymena to explore the principles behind engineering biological splicing systems in vivo. I show that the core ribozyme is modular and functions properly in many different contexts. Simple base pairing rules and computational RNA folding can predict splicing efficiency in bacterial cells. To test our understanding of the ribozyme, I generated synthetic ribozymes by manipulating the primary sequence while maintaining the secondary structure. Results indicate that our biochemical understanding of the ribozyme is accurate enough to support engineering. Splicing ribozymes can form core components in an all-RNA logic system. I developed biological transzystors, switches analogous to electrical transistors. Transzystors can use any trans-RNA as input and any RNA as output, allowing the genetic reading of RNA levels. I also show the ribozyme can write RNA using the trans-splicing reaction.
(cont.) Trans-splicing provides an easy mechanism to hook into an existing biological system and patch its operation. The generality of these ribozymes for a wide set of applications makes them promising tools for synthetic biology. Keywords: synthetic biology, RNA, Tetrahymena, ribozyme, splicing, transzystor.
by Austin J. Che.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
45

Ahmed, Abdulbasit. "Online network intrusion detection system using temporal logic and stream data processing." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12153/.

Full text
Abstract:
These days, the world is becoming more interconnected, and the Internet has dominated the ways to communicate or to do business. Network security measures must be taken to protect the organization environment. Among these security measures are the intrusion detection systems. These systems aim to detect the actions that attempt to compromise the confidentiality, availability, and integrity of a resource by monitoring the events occurring in computer systems and/or networks. The increasing amounts of data that are transmitted at higher and higher speed networks created a challenging problem for the current intrusion detection systems. Once the traffic exceeds the operational boundaries of these systems, packets are dropped. This means that some attacks will not be detected. In this thesis, we propose developing an online network-based intrusion detection system by the combined use of temporal logic and stream data processing. Temporal logic formalisms allow us to represent attack patterns or normal behaviour. Stream data processing is a recent database technology applied to flows of data. It is designed with high-performance features for data-intensive application processing. In this work we develop a system where temporal logic specifications are automatically translated into stream queries that run on the stream database server and are continuously evaluated against the traffic to detect intrusions. The experimental results show that this combination was efficient in using the resources of the running machines and was able to detect all the attacks in the test data. Additionally, the proposed solution provides a concise and unambiguous way to formally represent attack signatures and it is extensible, allowing attacks to be added. Also, it is scalable, as the system can benefit from using more CPUs and additional memory on the same machine, or using distributed servers.
APA, Harvard, Vancouver, ISO, and other styles
46

Maharaj, Anish. "The efficient evaluation of visual queries within a logic-based framework." Master's thesis, University of Cape Town, 1995. http://hdl.handle.net/11427/13526.

Full text
Abstract:
Bibliography: leaves 149-153.
There has been much research in the area of visual query systems in recent years. This has stemmed from the need for a more powerful database visualization and querying ability. In addition, there has been a pressing need for a more intuitive interface for the non-expert user. Systems such as Hy+, developed at the University of Toronto, provide environments that satisfy a wide range of database interaction and querying, with the advantage of maintaining a visual interface abstraction throughout. This thesis explores issues related to the translation and evaluation of visual queries, including semantic and optimization possibilities. The primary focus will be on the GraphLog query language, defined in the context of the Hy+ visualization system. GraphLog is translated to the deductive database language Datalog, which is subsequently evaluated by the CORAL logic database system. We propose graph semantics, which define the meaning of visual queries in terms of paths in a graph, for monotone GraphLog. This provides a more intuitive meaning which is not linked to any particular translation. Therefore, Datalog generated by a translation may be compared to well-defined semantics to ensure that the translation preserves the intended meaning. By examining various queries in terms of the graph semantics, we uncover a shortcoming in the existing GraphLog translation. In addition, an alternative translation to Datalog, based on the construction of a nondeterministic finite state automaton, is described for GraphLog queries. The translation has the property that visual queries containing constants are optimized using a technique known as factoring. In addition, the translation performs an optimization on queries with multiple edges that contain no constants, referred to here as variable constraining.
APA, Harvard, Vancouver, ISO, and other styles
47

Mkrtchyan, Lusine. "Alternative solutions to traditional approaches to risk analysis and decision making using fuzzy logic." Thesis, IMT Alti Studi Lucca, 2010. http://e-theses.imtlucca.it/29/1/Mkrtchyan_phdthesis.pdf.

Full text
Abstract:
Fuzzy set theory (FST) and fuzzy logic (FL) are among the main components of soft computing (SC), a collection of techniques to handle hard problems in which the application of traditional approaches fails. The father of FST and FL stated that the dominant aim of SC is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness, and low solution cost. Since its establishment the theory of fuzzy sets and fuzzy logic became very popular and received much attention, especially during the last decade, being applied in many different fields. The wide use of fuzzy controllers in many mass-produced products resulted in the increase of research in fuzzy set theory and fuzzy logic. In this thesis we use techniques that are based on FL and FST for risk analysis and risk-based decision making. There are several reasons for using FL and FST. Fuzzy logic is a true extension of conventional logic: thus anything that was built using conventional design techniques can be built with fuzzy logic. Another advantage is that it is close to human reasoning, and it is easy to understand for users who do not have strong mathematical knowledge. A fuzzy system allows the user to use and to reason with words instead of crisp numbers. In addition, FL also offers a wide range of operators to perform efficient combinations of fuzzy predicates. In this thesis we propose alternative solutions to the existing approaches that use FL and FST for risk analysis and risk-based decision making. We investigated the current approaches, and we found that only a small amount of research focuses on risk analysis using fuzzy logic. As far as we found, there are very few approaches that are generic and representative enough to be applied generally and to be used for complex problems. The existing approaches are very specific, targeting a particular area and concentrating on specific types of risks.
In this thesis we propose several different frameworks and algorithms based on FST and FL. First, we introduce two algorithms to rank the generalized fuzzy numbers. The main reason for developing a new ranking algorithm is that the existing ranking algorithms have some disadvantages that make them not suitable for risk assessment and decision making. We used our algorithms in risk-aware decision making related to the choice of alternatives. Second, we introduce a pessimistic approach to assess the impact of risk factors on the overall risk. The methods that use the fuzzy weighted average often give a lower result than the real risk especially in the case of a large amount of input variables. Furthermore, the traditional approaches of using fuzzy inference systems may give the same result for different cases depending on the choice of the defuzzification method. For the pessimistic approach we used our developed algorithms of ranking generalized fuzzy numbers. Next we propose the use of Fuzzy Bayesian Networks (FBNs) for risk assessment. While there is a considerable number of studies for Bayesian networks (BNs) for risk analysis and decision making, as far as we found there is not a study to make use of FBNs even though FBNs seem more appropriate and straightforward to use for risk analysis and risk assessment. In general, there is only a small amount of studies about FBNs, and not in many application fields. The last approach discussed in this thesis is the use of Fuzzy Cognitive Maps (FCMs) for risk analysis and decision making. We propose a new framework for group decision making in risk analysis using Extended FCMs. In addition we developed a new type of FCMs, Belief Degree Distributed FCMs, and we show its use for decision making.
APA, Harvard, Vancouver, ISO, and other styles
48

Nenzi, Laura. "A logic-based approach to specify and design spatio-temporal behaviours of complex systems." Thesis, IMT Alti Studi Lucca, 2016. http://e-theses.imtlucca.it/189/1/Nenzi_phdthesis.pdf.

Full text
Abstract:
Models of complex systems, composed of many heterogeneous interacting components, are challenging to analyse, due to the size and complexity of the network of interactions among the individual entities. The analysis becomes even more challenging when the spatio-temporal aspects of the system are to be taken into account. In this thesis, we propose a framework of efficient techniques to validate and analyse the behaviour of complex systems with spatio-temporal dynamics, both in the stochastic and deterministic cases. In particular, we define Signal Spatio-Temporal Logic (SSTL), a spatial extension of Signal Temporal Logic (STL). SSTL presents two new operators: the bounded somewhere and the bounded surround, which can be used to specify metric and topological properties in a discrete space. Given an SSTL formula, we design efficient monitoring algorithms to check its validity and compute its satisfaction (robustness) score over a spatio-temporal trace. To deal with stochastic systems, we define a stochastic version of the quantitative semantics of STL that we later extend to SSTL. We then combine it with machine learning techniques to define efficient parameter estimation and system design procedures. The specification and validation of SSTL formulae have been implemented in a Java tool, jSSTL. Finally, the expressivity of SSTL and the efficiency of the algorithms developed in this work are shown on interesting and challenging case studies, including an epidemic spreading model of a waterborne disease, a pattern formation example for reaction-diffusion systems and a French flag model of the morphogen Bicoid.
APA, Harvard, Vancouver, ISO, and other styles
49

Teslenko, Maxim. "All Around Logic Synthesis." Doctoral thesis, Stockholm : Mikroelektronik och informationsteknik, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ward, James L. "A Comparison of Fuzzy Logic Spatial Relationship Methods for Human Robot Interaction." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-12172008-125840/.

Full text
Abstract:
As the science of robotics advances, robots are interacting with people more frequently. Robots are appearing in our houses and places of work, acting as assistants in many capacities. One aspect of this interaction is determining spatial relationships between objects. People and robots simply cannot communicate effectively without references to the physical world and how those objects relate to each other. In this research fuzzy logic is used to help determine the spatial relationships between objects, as fuzzy logic lends itself to the inherent imprecision of spatial relationships. Objects are rarely absolutely in front of or to the right of another, especially when dealing with multiple objects. This research compares three fuzzy logic methods: the angle aggregation method, the centroid method, and the histogram-of-angles composition method. First we use a robot to gather real-world data on the geometries between objects, and then we adapt the fuzzy logic techniques to the geometry between objects from the robot's perspective, which is then used on the generated robot data. Last, we perform an in-depth analysis comparing the three techniques with the human survey data to determine which may predict spatial relationships most accurately under these conditions, as a human would. Previous research mainly focused on determining spatial relationships from an allocentric, or bird's eye, view, whereas here we apply some of the same techniques to determine spatial relationships from an egocentric, or observer's, point of view.
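All three methods compared in this abstract ultimately map the angle between two objects to a graded membership degree for a relation such as "to the right of". A minimal sketch of one such membership function, with a linear profile chosen purely for illustration (the thesis's actual functions are not given here):

```python
import math

def right_of_membership(ref_xy, obj_xy):
    # Fuzzy degree to which obj is "to the right of" ref, based on the
    # angle of the vector ref -> obj relative to the +x axis.
    # Membership is 1 at 0 rad and falls linearly to 0 at +/- pi/2.
    dx = obj_xy[0] - ref_xy[0]
    dy = obj_xy[1] - ref_xy[1]
    angle = math.atan2(dy, dx)
    return max(0.0, 1.0 - abs(angle) / (math.pi / 2))
```

An aggregation method would evaluate a function like this over many point pairs sampled from the two objects' extents and combine the degrees, while a centroid method applies it once to the objects' centroids.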
APA, Harvard, Vancouver, ISO, and other styles