Dissertations / Theses on the topic 'Mathematical Logic and Formal Languages'


Consult the top 50 dissertations / theses for your research on the topic 'Mathematical Logic and Formal Languages.'


1

Almeida, João Marcos de. "Logics of Formal Inconsistency." PhD thesis, Universidade Técnica de Lisboa, Instituto Superior Técnico, Departamento de Matemática, 2005. http://dited.bn.pt:80/29635.

Full text
Abstract:
According to the classical consistency presupposition, contradictions have an explosive character: whenever they are present in a theory, anything goes, and no sensible reasoning can take place. A logic is paraconsistent if it rejects this presupposition, allowing instead some inconsistent yet non-trivial theories to make perfect sense. The Logics of Formal Inconsistency, LFIs, form a particularly expressive class of paraconsistent logics in which the metatheoretical notion of consistency can be internalized at the object-language level. As a consequence, the LFIs are able to recapture consistent reasoning through the addition of appropriate consistency assumptions. The present monograph introduces the LFIs and provides several illustrations of them and of their properties, showing that such logics constitute in fact the majority of interesting paraconsistent systems in the literature. Several ways of recapturing consistent reasoning inside such inconsistent systems are also illustrated. In each case, interpretations in terms of many-valued, possible-translations, or modal semantics are provided, and the problems related to providing algebraic counterparts to such logics are surveyed. A formal abstract approach is proposed for all related definitions, and an extended investigation is made into the logical principles and the positive and negative properties of negation.
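The "gentle explosion" property described in this abstract can be checked semantically with a small script. The sketch below uses the standard three-valued tables of one well-known LFI (LFI1); the truth-functional encoding is an illustrative assumption of this sketch, not the monograph's own formalism. A contradiction {A, ¬A} fails to entail an arbitrary B, but adding the consistency assumption ∘A restores explosion.

```python
from itertools import product

# Truth values: 1 = true, H = "both" (inconsistent), 0 = false.
# Designated values (those that count as "holding"): 1 and H.
H = 0.5
VALUES = (0, H, 1)
DESIGNATED = {H, 1}

def neg(a):            # paraconsistent negation: H is a fixed point
    return {0: 1, H: H, 1: 0}[a]

def circ(a):           # consistency operator: fails exactly on H
    return 1 if a in (0, 1) else 0

def entails(premises, conclusion, atoms):
    """Semantic entailment: every valuation designating all premises
    must designate the conclusion."""
    for vals in product(VALUES, repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in DESIGNATED for p in premises):
            if conclusion(v) not in DESIGNATED:
                return False
    return True

A = lambda v: v['A']
notA = lambda v: neg(v['A'])
consA = lambda v: circ(v['A'])
B = lambda v: v['B']

print(entails([A, notA], B, ['A', 'B']))          # False: no explosion
print(entails([consA, A, notA], B, ['A', 'B']))   # True: gentle explosion
```

The counterexample to plain explosion assigns A the middle value H, which designates both A and ¬A while leaving B false; once ∘A is also assumed, no valuation designates all premises, so the entailment holds.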
APA, Harvard, Vancouver, ISO, and other styles
2

Toninho, Bernardo Parente Coutinho Fernandes. "A Logic and tool for local reasoning about security protocols." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2307.

Full text
Abstract:
Work presented within the scope of the Master's programme in Computer Engineering, as a partial requirement for the degree of Master in Computer Engineering.
This thesis tackles the problem of developing a formal logic and associated model-checking techniques to verify security properties, and its integration in the Spatial Logic Model Checker (SLMC) tool. In the areas of distributed system design and analysis, there is a substantial amount of work on the verification of correctness properties of systems, in which the work aimed at verifying security properties mostly relies on precise yet informal methods of reasoning. This work follows a line of research that applies formal methodologies to the verification of security properties in distributed systems, using formal tools originally developed for the study of concurrent and distributed systems in general. Over the years, several authors have proposed spatial logics for local and compositional reasoning about algebraic models of distributed systems known as process calculi. In this work, we present a simplification of a process calculus known as the applied π-calculus, introduced by Abadi and Fournet, designed for the study of security protocols. We then develop a spatial logic for this calculus, extended with knowledge modalities, aimed at reasoning about security protocols using the concept of local knowledge of processes. Furthermore, we show that the extensions are sound and complete with respect to their intended semantics and that they preserve decidability, under reasonable assumptions. We also present a model-checking algorithm and a proof of its completeness for a large class of processes. Finally, we present an OCaml implementation of the algorithm, integrated in the Spatial Logic Model Checker tool developed by Hugo Vieira and Luis Caires, thus producing the first tool for security protocol analysis that employs spatial logics.
3

Reis, Teofilo de Souza. "Conectivos flexíveis : uma abordagem categorial às semânticas de traduções possíveis." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/278896.

Full text
Abstract:
Advisor: Marcelo Esteban Coniglio
Master's dissertation, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: In this work we present a new formalism for the decomposition of logics, the Possible-Translations Coverings, or simply PTCs. PTCs constitute a formal version of the Possible-Translations Semantics introduced by W. Carnielli in 1990. We show how the adoption of a more general concept of morphism between propositional signatures (using multifunctions instead of functions) allows us to define a category Sig?, in which connectives enjoy great flexibility when translated from one signature to another. From Sig? we construct the category Log? of Tarskian logics and morphisms (which are functions obtained from a signature morphism, that is, from a multifunction). We study some characteristics of Sig? and Log? in order to verify that these categories can in fact accommodate the constructions we intend to present. We show how to define in Log? the set of possible translations of a formula, and from this we define the notion of a PTC for a logic L. Finally, we exhibit a concrete example of the use of this new tool, and briefly discuss possible approaches for a continuation of this work.
Abstract: We present a general study of a new formalism for the decomposition of logics, the Possible-Translations Coverings (PTCs for short), which constitute a formal version of Possible-Translations Semantics, introduced by W. Carnielli in 1990. We show how the adoption of a more general notion of propositional signature morphism allows us to define a category Sig?, in which the connectives, when translated from one signature to another, enjoy great flexibility. Essentially, Sig?-morphisms will be multifunctions instead of functions. From Sig? we construct the category Log? of Tarskian logics and morphisms between them (these are functions obtained from signature morphisms, that is, from multifunctions). We show how to define in Log? the set of possible translations of a given formula, and we define the notion of a PTC for a logic L. We analyze some properties of PTCs and give concrete examples of the above-mentioned constructions. We conclude with a discussion of the approaches to be used in a possible continuation of these investigations.
Master's degree
Master in Philosophy
4

Silvestrini, Luiz Henrique da Cruz. "Uma nova abordagem para a noção de quase-verdade." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280594.

Full text
Abstract:
Advisor: Marcelo Esteban Coniglio
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: Mikenberg, da Costa and Chuaqui (1986) introduced the notion of quasi-truth by means of the notion of partial structures, conceiving predicates as triples for that purpose. The resulting conceptual framework allows the use of partial structures in science since, in general, we do not know everything about a given domain of knowledge. We generalize the notion of predicates as triples to complex formulas. From this new approach, we obtain a definition of quasi-truth via the notion of pragmatic satisfaction of a formula A in a partial structure E. We introduce a logic underlying our new definition of quasi-truth, namely the three-valued paraconsistent logic LPT1, which has a first-order axiomatics. We relate the notion of quasi-truth to some existing paraconsistent logics. We argue that the formalization of Open Societies, introduced by Carnielli and Lima-Marques (1999), when combined with the modulated quantifiers introduced by Grácio (1999), constitutes an alternative for capturing the inductive component present in scientific activity, and we show from this that the original proposal of da Costa and collaborators can be explained in terms of the new notion of modulated societies.
Abstract: Newton da Costa and his collaborators introduced the notion of quasi-truth by means of partial structures, conceiving predicates as ordered triples: the sets of tuples that satisfy, do not satisfy, and may or may not satisfy the predicate, respectively (the last component represents lack of information). This approach provides a conceptual framework for analysing the use of (first-order) structures in science in contexts of informational incompleteness. In this thesis, the notion of predicates as triples is extended recursively to any complex formula of the first-order object language. From this, a new definition of quasi-truth via the notion of pragmatic satisfaction is obtained. We obtain the proof-theoretic counterpart of the logic underlying our new definition of quasi-truth, namely the three-valued paraconsistent logic LPT1, which is presented axiomatically in a first-order language. We relate the notion of quasi-truth to some existing paraconsistent logics. We argue that the formalization of (open) society semantics, when combined with modulated quantifiers, constitutes an alternative for capturing the inductive component present in scientific activity, and show from this that the original proposal of da Costa and collaborators can be explained in terms of the new concept of modulated societies.
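The "predicates as triples" idea in this abstract can be sketched directly: a partial predicate carries a positive set, a negative set, and an undetermined remainder, and atomic statements evaluate to one of three verdicts. All names below (the class, the toy domain) are illustrative assumptions of this sketch, not notation from the thesis.

```python
# A partial predicate over a domain is a triple: tuples known to satisfy
# it, tuples known not to, and tuples left undetermined (lack of info).
TRUE, FALSE, UNKNOWN = 'T', 'F', 'U'

class PartialPredicate:
    def __init__(self, domain, positive, negative):
        self.positive = set(positive)
        self.negative = set(negative)
        # the third component is everything not yet decided
        self.undetermined = set(domain) - self.positive - self.negative

    def holds(self, element):
        if element in self.positive:
            return TRUE
        if element in self.negative:
            return FALSE
        return UNKNOWN

# Example: a unary predicate over a small domain, with one individual
# about which our information is incomplete.
domain = {'tweety', 'rex', 'opus'}
bird = PartialPredicate(domain, positive={'tweety'}, negative={'rex'})

print(bird.holds('tweety'))  # 'T'
print(bird.holds('rex'))     # 'F'
print(bird.holds('opus'))    # 'U'  (may or may not satisfy the predicate)
```

A structure is then "partial" exactly when some predicate has a non-empty undetermined component; the thesis's contribution is to extend this three-way evaluation recursively to arbitrary complex formulas.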
Doctorate
Philosophy
Doctor of Philosophy
5

Bueno-Soler, Juliana. "Multimodalidades anodicas e catodicas : a negação controlada em logicas multimodais e seu poder expressivo." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280387.

Full text
Abstract:
Advisor: Itala Maria Loffredo D'Ottaviano
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: This work aims to investigate the role of negation in the scope of modalities, so as to clarify to what extent negation can be attenuated, controlled, or even completely eliminated in favor of better logical expressibility for certain theories, assertions, or reasonings that suffer the effects of negation. Attenuating or eliminating negation, however, has a high price: traditional methods in logic may cease to be valid, and certain results, such as completeness theorems for logical systems, may be lost. From the formal point of view, the central question we investigate here is to what extent such methods can be reestablished. To this end, we begin our study from what we call "anodic" systems (without negation) and subsequently introduce the "cathodic" element (negations, with several gradations and different characteristics) into modal systems by means of combinations with certain paraconsistent logics, the so-called logics of formal inconsistency (LFIs). All the systems treated are semantically characterized by possible-worlds semantics; incompleteness results are also obtained and discussed. We also obtain modal possible-translations semantics for several of these systems. We advance in the direction of multimodalities, investigating the so-called anodic and cathodic multimodal systems. Finally, we critically evaluate the reach and interest of the results obtained in the direction of negation-sensitive rationality.
Abstract: The present work aims to investigate the role of negations in the scope of modalities and in the reasoning expressed by modalities. The investigation starts from what we call "anodic" systems (without any form of negation) and gradually reaches the "cathodic" elements, where negations are introduced by combining modal logics with certain paraconsistent logics known as logics of formal inconsistency (LFIs). We obtain completeness results for all treated systems, and also show that certain incompleteness results can be obtained. The class of investigated systems includes all normal modal logics extended by means of the schema Gk,l,m,n due to E. J. Lemmon and D. Scott, combined with LFIs. We also tackle the question of obtaining modal possible-translations semantics for these systems. Analogous results are analyzed in the scope of multimodalities, where both anodic and cathodic logics are studied. Finally, we advance a critical evaluation of the reach and scope of the results obtained with respect to the expressibility of reasoning sensitive to negation, and critically assess those results in contrast with problems of negation-sensitive rationality.
Doctorate
Doctor of Philosophy
6

Rodrigues, Tarcísio Genaro. "Sobre os fundamentos de programação lógica paraconsistente." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/278897.

Full text
Abstract:
Advisor: Marcelo Esteban Coniglio
Master's dissertation, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: Logic Programming arises from the interaction between Logic and the foundations of Computer Science: first-order theories can be interpreted as computer programs. Logic Programming has been extensively used in branches of Artificial Intelligence such as Knowledge Representation and Commonsense Reasoning. This connection gave rise to extensive research aimed at defining paraconsistent Logic Programming systems, that is, systems in which it is possible to manipulate contradictory information. However, all existing approaches lack a clearly defined logical foundation such as the one found in classical logic programming. The basic question is to know which paraconsistent logics underlie these approaches. This dissertation aims to establish a clear and solid logical and conceptual foundation for the development of well-founded Paraconsistent Logic Programming systems. In this sense, this work can be considered the first (and successful) stage of an ambitious research program. One of the main theses of this dissertation is that the Logics of Formal Inconsistency (LFIs), which comprise a broad family of paraconsistent logics, provide such a logical basis. As a first step towards the definition of genuinely paraconsistent logic programming, we prove in this dissertation a simplified version of Herbrand's Theorem for a first-order LFI. This theorem guarantees the existence, in principle, of automated deduction methods for the (quantified) logics in which it holds, and the existence of such methods is a fundamental prerequisite for the definition of logic programming. Additionally, in order to prove Herbrand's Theorem, two quantified LFIs are formulated here in sequent calculus, and for one of them we prove the cut-elimination theorem.
We also present, as an indispensable requisite for the above results, a new proof of soundness and completeness for quantified LFIs in which we show the necessity of requiring the Substitution Lemma for their semantics.
Abstract: Logic Programming arises from the interaction between Logic and the Foundations of Computer Science: first-order theories can be seen as computer programs. Logic Programming has been broadly used in branches of Artificial Intelligence such as Knowledge Representation and Commonsense Reasoning. From this, a wide research activity has developed around defining paraconsistent Logic Programming systems, that is, systems in which it is possible to deal with contradictory information. However, none of the existing approaches has a clear logical basis. The basic question is to know which paraconsistent logics underlie such approaches. The present dissertation aims to establish a clear and solid conceptual and logical basis for developing well-founded systems of Paraconsistent Logic Programming. In that sense, this text can be considered the first (and successful) stage of an ambitious research programme. One of the main theses of the present dissertation is that the Logics of Formal Inconsistency (LFIs), which encompass a broad family of paraconsistent logics, provide such a logical basis. As a first step towards the definition of genuine paraconsistent logic programming, we show in this dissertation a simplified version of the Herbrand Theorem for a first-order LFI. This theorem guarantees the existence, in principle, of automated deduction methods for the (quantified) logics in which it holds, a fundamental prerequisite for the definition of logic programming over such logics. Additionally, in order to prove the Herbrand Theorem, we introduce sequent calculi for two quantified LFIs, and cut-elimination is proved for one of the systems. We also present, as an indispensable requisite for the above-mentioned results, a new proof of soundness and completeness for first-order LFIs in which we show the necessity of requiring the Substitution Lemma for the respective semantics.
Master's degree
Philosophy
Master in Philosophy
7

Palacios, Pastrana Florencio Edmundo. "Etude des rapports entre linguistique et logique concernant la dimension temporelle : un modèle de transition." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10273.

Full text
Abstract:
The general aim of this thesis is to develop a formal language capable of modeling certain features of natural language that have a strong relation with time. In particular, we are interested in the linguistic notion of aspect and its possible logical consequences. We base our analysis on two perspectives: linguistic and logical. For the first, we analyze the relevant concepts tied to the grammatical category of aspect, which, together with the category of grammatical tense, has a direct relation with the notion of time. For the second perspective, we analyze the logical notions at play in formal deductive systems and their relations with time: temporal logic. As is well established, formal languages based on the notions defined by Frege are not sufficient to express all the temporal components of natural language. However, there are other, extended formalisms that take certain linguistic concepts such as aspect into account. One such proposal was made by Galton, who introduces operators for some of the most common aspectual notions in English, such as perfectivity and progressivity. Our proposal introduces topological notions to represent the structure of the set over which a statement takes a certain truth value. In addition, we also treat the concept of sigma-meaning to represent certain non-set-theoretic theoretical concepts related to the meaning of statements.
8

Yim, Austin Vincent. "On Galois correspondences in formal logic." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:b47d1dda-8186-4c81-876c-359409f45b97.

Full text
Abstract:
This thesis examines two approaches to Galois correspondences in formal logic. A standard result of classical first-order model theory is the observation that models of L-theories with a weak form of elimination of imaginaries hold a correspondence between their substructures and automorphism groups defined on them. This work applies the resultant framework to explore the practical consequences of a model-theoretic Galois theory with respect to certain first-order L-theories. The framework is also used to motivate an examination of its underlying model-theoretic foundations. The model-theoretic Galois theory of pure fields and valued fields is compared to the algebraic Galois theory of pure and valued fields to point out differences that may hold between them. The framework of this logical Galois correspondence is also applied to the theory of pseudoexponentiation to obtain a sketch of the Galois theory of exponential fields, where the fixed substructure of the complex pseudoexponential field B is an exponential field with the field Qrab as its algebraic subfield. This work obtains a partial exponential analogue to the Kronecker-Weber theorem by describing the pure field-theoretic abelian extensions of Qrab, expanding upon work in the twelfth of Hilbert’s problems. This result is then used to determine some of the model-theoretic abelian extensions of the fixed substructure of B. This work also incorporates the principles required of this model-theoretic framework in order to develop a model theory over substructural logics which is capable of expressing this Galois correspondence. A formal semantics is developed for quantified predicate substructural logics based on algebraic models for their propositional or nonquantified fragments. This semantics is then used to develop substructural forms of standard results in classical first-order model theory. 
This work then uses this substructural model theory to demonstrate the Galois correspondence that substructural first-order theories can carry in certain situations.
9

Dumbravă, Ştefania-Gabriela. "Formalisation en Coq de Bases de Données Relationnelles et Déductives -et Mécanisation de Datalog." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS525/document.

Full text
Abstract:
This thesis presents a Coq formalization of the fundamental languages and algorithms of databases. It provides formal specifications stemming from the two different approaches to the definition of data models: one based on algebra and the other based on logic. As such, a first contribution of this thesis is the development of a Coq library for the relational model. This library contains formalizations of relational algebra and of conjunctive queries. It also contains a mechanization of integrity constraints and of their inference procedures. We model two of the most common kinds of constraints, namely functional dependencies and multivalued dependencies, together with their corresponding axiomatizations. We formally prove the soundness of their inference algorithms and, for functional dependencies, completeness as well. These kinds of dependencies are instances of more general constraints: equality generating dependencies (EGDs) and, respectively, tuple generating dependencies (TGDs), which in turn belong to the still wider class of general dependencies. We model the latter and their inference procedure, i.e., the chase, for which we establish soundness. Finally, we formally prove the main theorems of database theory, namely the algebraic equivalences, the homomorphism theorem, and the minimization of conjunctive queries. A second contribution is the development of a Coq/SSReflect library for logic programming, restricted to the case of Datalog. As part of this work, we give the first mechanization of a standard Datalog engine and of its extension with negation.
The library includes a formalization of their model-theoretic semantics as well as of their fixpoint semantics, implemented by a stratified evaluation procedure. The library is completed by the corresponding soundness, termination, and completeness proofs. This platform paves the way towards the certification of data-centric applications.
This thesis presents a formalization of fundamental database theories and algorithms, furthering the maturing state of the art in formal specification development in the database field, with contributions stemming from the two foundational approaches to database models: relational and logic-based. As such, a first contribution is a Coq library for the relational model. This contains a mechanization of integrity constraints and of their inference procedures. We model two of the most common dependencies, namely functional and multivalued, together with their corresponding axiomatizations. We prove soundness of their inference algorithms and, for the functional ones, also completeness. These types of dependencies are instances of equality generating and, respectively, tuple generating dependencies, which fall under the yet wider class of general dependencies. We model these and their inference procedure, i.e., the chase, for which we establish soundness. A second contribution consists of a Coq/SSReflect library for logic programming in the Datalog setting. As part of this work, we give one of the first mechanizations of the standard Datalog language and of its extension with negation. The library includes a formalization of their model-theoretic semantics and of their fixpoint semantics, implemented through bottom-up and, respectively, stratified evaluation procedures, complete with the corresponding soundness, termination, and completeness proofs. In this context, we also construct a preliminary framework for dealing with stratified programs. This work paves the way towards the certification of data-centric applications.
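The bottom-up fixpoint semantics that this library mechanizes can be sketched in a few lines: repeatedly fire every rule against the current set of facts until nothing new is derived. This is a naive evaluator for illustration only (the thesis's Coq development is far more general); the uppercase-variable convention and all names below are assumptions of the sketch.

```python
# Rules are (head, body) pairs of atoms; an atom is (pred, args).
# Convention assumed here: strings starting with an uppercase letter
# are variables, everything else is a constant.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(atom, fact, subst):
    """Match a body atom against a ground fact, extending subst."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if is_var(a):
            if s.get(a, f) != f:   # clash with an earlier binding
                return None
            s[a] = f
        elif a != f:
            return None
    return s

def ground(atom, s):
    pred, args = atom
    return (pred, tuple(s.get(a, a) for a in args))

def fixpoint(facts, rules):
    """Naive bottom-up evaluation: apply all rules until nothing new."""
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            substs = [{}]
            for atom in body:      # join body atoms left to right
                substs = [s2 for s in substs for f in facts
                          for s2 in [unify(atom, f, s)] if s2 is not None]
            for s in substs:
                new.add(ground(head, s))
        if new <= facts:           # least fixpoint reached
            return facts
        facts |= new

# Transitive closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- edge(X,Y), path(Y,Z).
edges = {('edge', ('a', 'b')), ('edge', ('b', 'c'))}
rules = [(('path', ('X', 'Y')), [('edge', ('X', 'Y'))]),
         (('path', ('X', 'Z')), [('edge', ('X', 'Y')), ('path', ('Y', 'Z'))])]
result = fixpoint(edges, rules)
print(('path', ('a', 'c')) in result)  # True
```

Termination is guaranteed because Datalog rules can only derive ground atoms built from the constants already present, so the fact set is bounded; this is the property that the formal termination proofs in such a development rest on.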
10

Cholodovskis, Ana Flávia de Faria. "Lógicas de inconsistência formal e não-monotonicidade." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/279773.

Full text
Abstract:
Advisor: Walter Alexandre Carnielli
Master's dissertation, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: There are several reasons to justify the development of non-classical logics, such as the expressivity of these languages and how they could help formalize human thought. In this sense, nonmonotonic logics were developed in order to formalize everyday reasoning, based on the premise that we should be able to retract previously obtained conclusions when confronted with new information. Some nonmonotonic logics use the notion of default reasoning to formalize everyday reasoning. On the other hand, paraconsistent logics are those logics that study non-explosive theories and were developed in order to deal with contradictions. Among the paraconsistent logics, one class of systems has shown itself to be particularly interesting: the Logics of Formal Inconsistency (LFIs). LFIs are a special kind of paraconsistent logics that are gently explosive and internalize the concept of consistency at the object-language level using the consistency operator ◦. The initial question, "Could Paraconsistency replace Nonmonotonicity?", led us to the formulation of a more specific yet more intriguing question: "Is it possible to develop a gently explosive nonmonotonic logic?". In seeking to answer this question, it is important to investigate, conceptually and philosophically, the relevance and the problems of developing such a logic. This work aims to justify the importance of a paraconsistent nonmonotonic logic based on the Logics of Formal Inconsistency, starting from an intuitive analysis of the concepts and notions involved in such formal systems, also considering possible approaches based on the so-called Adaptive Logics of Inconsistency and Modulated Logics.
Abstract: There are many reasons to justify the development of non-classical logics, such as the expressivity of those languages and how they could help to formalize human reasoning. In that sense, nonmonotonic logics were developed in order to formalize everyday reasoning, based on the premise that we should be able to retract conclusions previously obtained in the face of new information. Some nonmonotonic logics use the notion of default reasoning to formalize everyday reasoning. On the other hand, paraconsistent logics are those logics that study non-explosive theories and were developed in order to deal with contradictions. Among the paraconsistent logics, one class of systems has shown itself to be particularly interesting: the Logics of Formal Inconsistency (LFIs). LFIs are a special kind of paraconsistent logics that are gently explosive and internalize the concept of consistency at the object-language level using the consistency operator ◦. The initial question, "Can Paraconsistency replace Nonmonotonicity?", guided us to the formulation of a more specific yet intriguing question: "Is it possible to develop a gently explosive nonmonotonic logic?". In order to answer that question, it is important to investigate both the conceptual and philosophical relevance and the problems of developing such a logic. This work intends to justify the importance of a nonmonotonic paraconsistent logic based on the Logics of Formal Inconsistency, starting from an intuitive analysis of the concepts and notions involved in such formal systems, also considering possible approaches from the so-called Adaptive Logics of Inconsistency and Modulated Logics.
Master's degree
Philosophy
Master in Philosophy
11

Delaney, Aidan. "Defining star-free regular languages using diagrammatic logic." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/d1c53bda-f520-4807-9de9-8de12eda3d9e.

Full text
Abstract:
Spider diagrams are a recently developed visual logic that makes statements about relationships between sets, their members and their cardinalities. By contrast, the study of regular languages is one of the oldest active branches of computer science research. The work in this thesis examines the previously unstudied relationship between spider diagrams and regular languages. In this thesis, the existing spider diagram logic and its underlying semantic theory are extended to allow direct comparison of spider diagrams and star-free regular languages. Thus it is established that each spider diagram defines a commutative star-free regular language. Moreover, we establish that every commutative star-free regular language is definable by a spider diagram. From the study of relationships between spider diagrams and commutative star-free regular languages, an extension of spider diagrams is provided. This logic, called spider diagrams of order, increases the expressiveness of spider diagrams such that the language of every spider diagram of order is star-free and regular, but not necessarily commutative. Further results concerning the expressive power of spider diagrams of order are gained through the use of a normal form for the diagrams. Sound reasoning rules which take a spider diagram of order and produce a semantically equivalent diagram in the normal form are provided. A proof that spider diagrams of order define precisely the star-free regular languages is subsequently presented. Further insight into the structure and use of spider diagrams of order is demonstrated by restricting the syntax of the logic. Specifically, we remove spiders from spider diagrams of order. We compare the expressiveness of this restricted fragment of spider diagrams of order with the unrestricted logic.
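The claim that spider diagrams of order define exactly the star-free regular languages connects to Schützenberger's classical characterization: a regular language is star-free if and only if the transition monoid of its minimal DFA is aperiodic (contains no nontrivial group). The sketch below implements that test under stated assumptions: the DFA is given as a transition dictionary, and all names are illustrative.

```python
def transition_monoid(states, alphabet, delta):
    """All state-transformations induced by input words, as tuples
    indexed like `states`; generated by closing the letter maps."""
    identity = tuple(states)
    monoid = {identity}
    frontier = {identity}
    gens = {a: tuple(delta[(q, a)] for q in states) for a in alphabet}
    while frontier:
        nxt = set()
        for m in frontier:
            for g in gens.values():
                composed = tuple(g[states.index(q)] for q in m)
                if composed not in monoid:
                    monoid.add(composed)
                    nxt.add(composed)
        frontier = nxt
    return monoid

def is_aperiodic(monoid, states):
    """Aperiodic iff for each element m, m^k = m^(k+1) for some k
    (i.e. every cycle of powers has length 1: no nontrivial group)."""
    def compose(f, g):
        return tuple(f[states.index(q)] for q in g)
    for m in monoid:
        seen, p = [], m
        while p not in seen:        # iterate powers m, m^2, m^3, ...
            seen.append(p)
            p = compose(m, p)
        if seen[-1] != p:           # power sequence cycles with period > 1
            return False
    return True

# a* over {a,b}: star-free, since a* = complement(anything containing b).
states = ('q0', 'qd')
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'qd',
         ('qd', 'a'): 'qd', ('qd', 'b'): 'qd'}
print(is_aperiodic(transition_monoid(states, 'ab', delta), states))  # True

# (aa)* over {a}: not star-free (needs parity counting, a 2-element group).
states2 = ('even', 'odd')
delta2 = {('even', 'a'): 'odd', ('odd', 'a'): 'even'}
print(is_aperiodic(transition_monoid(states2, 'a', delta2), states2))  # False
```

So "commutative star-free" languages, as in the abstract, are those whose syntactic monoid is both commutative and aperiodic; the parity example fails only the aperiodicity half.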
APA, Harvard, Vancouver, ISO, and other styles
12

Suriadi, Suriadi. "Strengthening and formally verifying privacy in identity management systems." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/39345/1/Suriadi_Suriadi_Thesis.pdf.

Full text
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. 
Unfortunately, CLS-ACS has problems, including (1) the concentration of power to a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment. Our analysis shows that PIEMCP can both reduce the amount of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques.
All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPN and its corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPN has been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including complex cryptographic primitives (such as zero-knowledge proof protocol) modeling, attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
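As a rough sketch of the state-space analysis mentioned above: CPN tools enumerate all reachable markings of a net and check properties over them. The toy model below is an ordinary place/transition net (far simpler than the Coloured Petri nets used in the thesis, and not one of its protocol models), with a single lock token guaranteeing mutual exclusion between two processes.

```python
from collections import deque

# Toy mutual-exclusion net: two processes compete for one lock token.
# Each transition is (tokens consumed, tokens produced) per place.
TRANSITIONS = {
    "p1_enter": ({"idle1": 1, "lock": 1}, {"crit1": 1}),
    "p1_exit":  ({"crit1": 1}, {"idle1": 1, "lock": 1}),
    "p2_enter": ({"idle2": 1, "lock": 1}, {"crit2": 1}),
    "p2_exit":  ({"crit2": 1}, {"idle2": 1, "lock": 1}),
}

def successors(marking):
    """Fire every enabled transition once, yielding successor markings."""
    for pre, post in TRANSITIONS.values():
        if all(marking.get(p, 0) >= n for p, n in pre.items()):
            new = dict(marking)
            for p, n in pre.items():
                new[p] -= n
            for p, n in post.items():
                new[p] = new.get(p, 0) + n
            yield frozenset((p, n) for p, n in new.items() if n > 0)

def state_space(initial):
    """Breadth-first generation of all reachable markings."""
    first = frozenset(initial.items())
    seen, queue = {first}, deque([first])
    while queue:
        for s in successors(dict(queue.popleft())):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen

states = state_space({"idle1": 1, "idle2": 1, "lock": 1})
# Safety: no reachable marking has both processes in the critical section.
assert all(dict(m).get("crit1", 0) == 0 or dict(m).get("crit2", 0) == 0
           for m in states)
```

Verifying a real privacy protocol works the same way in principle, only with coloured (data-carrying) tokens and vastly larger state spaces.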
APA, Harvard, Vancouver, ISO, and other styles
13

Alves, Thiago de Oliveira. "Lógica formal e sua aplicação na argumentação matemática." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3248.

Full text
Abstract:
Made available in DSpace on 2017-02-07T14:05:47Z (GMT). No. of bitstreams: 1 thiagodeoliveiraalves.pdf: 655489 bytes, checksum: e3e858183683f82164e751d989a96b35 (MD5) Previous issue date: 2016-07-18
The use of Logic is of fundamental importance in the development of modern mathematical theories, which seek to deduce their entire body of theorems and consequences from axioms and primitive concepts. The aim of this dissertation is to describe the tools of Formal Logic that have immediate applications in proofs of conjectures and theorems, bringing justification and meaning to the deductive techniques and arguments commonly used in Mathematics. Beyond introductory topics on argumentation and the scope of logic, the whole work proceeds by a systematic method in search of a formal criterion that can separate valid arguments from invalid ones. It concludes that, with good initial preparation in the field of Formal Logic, the novice mathematician gains a reference for how to proceed strategically when proving conjectures, and a deeper understanding of why the theorems encountered in his or her area of study are valid.
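The formal criterion of validity the dissertation pursues can be stated truth-functionally: an argument is valid iff no assignment makes all premises true and the conclusion false. A few lines suffice to sketch this (an illustrative aside, not material from the dissertation):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counter-model: the argument is invalid
    return True

# Modus ponens (valid): from P -> Q and P, infer Q.
assert valid([lambda v: (not v["P"]) or v["Q"], lambda v: v["P"]],
             lambda v: v["Q"], ["P", "Q"])

# Affirming the consequent (invalid): from P -> Q and Q, infer P.
assert not valid([lambda v: (not v["P"]) or v["Q"], lambda v: v["Q"]],
                 lambda v: v["P"], ["P", "Q"])
```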
APA, Harvard, Vancouver, ISO, and other styles
14

Magnago, Enrico. "Facing infinity in model checking expressive specification languages." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356869.

Full text
Abstract:
Society relies on increasingly complex software and hardware systems, hence techniques capable of proving that they behave as expected are of great and growing interest. Formal verification procedures employ mathematically sound reasoning to address this need. This thesis proposes novel techniques for the verification and falsification of expressive specifications on timed and infinite-state systems. An expressive specification language allows the description of the intended behaviour of a system via compact formal statements written at an abstraction level that eases the review process. Falsifying a specification corresponds to identifying an execution of the system that violates the property (i.e. a witness). The capability of identifying witnesses is a key feature in the iterative refinement of the design of a system, since it provides a description of how a certain error can occur. The designer can analyse the witness and take correcting actions by refining either the description of the system or its specification. The contribution of this thesis is twofold. First, we propose a semantics for Metric Temporal Logic that considers four different models of time (discrete, dense, super-discrete and super-dense). We reduce its verification problem to finding an infinite fair execution (witness) for an infinite-state system with discrete time. Second, we define a novel SMT-based algorithm to identify such witnesses. The algorithm employs a general representation of such executions that is both informative to the designer and provides sufficient structure to automate the search of a witness. We apply the proposed techniques to benchmarks taken from software, infinite-state, timed and hybrid systems. The experimental results highlight that the proposed approaches compete and often outperform specific (application tailored) techniques currently used in the state of the art.
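The witnesses described above have a characteristic shape. In a finite abstraction, an infinite fair execution exists exactly when a fair state is reachable from the initial state and lies on a cycle, giving the familiar "prefix + loop" (lasso) report of model checkers. A hedged finite-state sketch follows; the thesis itself works with SMT-based reasoning over infinite-state systems, which this toy does not attempt.

```python
def reachable(src, step):
    """All states reachable from src (depth-first search)."""
    seen, stack = {src}, [src]
    while stack:
        for t in step(stack.pop()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def find_fair_lasso(init, step, fair):
    """Return a fair state witnessing an infinite fair run: one reachable
    from init that lies on a cycle (reachable from its own successors)."""
    for f in reachable(init, step):
        if fair(f) and any(f in reachable(t, step) for t in step(f)):
            return f
    return None

# Toy system 0 -> 1 -> 2 -> 1 with fair state 2: the witness is the
# lasso with prefix 0 and loop (1, 2).
step = lambda s: {0: [1], 1: [2], 2: [1]}[s]
assert find_fair_lasso(0, step, lambda s: s == 2) == 2
assert find_fair_lasso(0, step, lambda s: s == 0) is None  # 0 is not on a cycle
```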
APA, Harvard, Vancouver, ISO, and other styles
15

Loftus, John A. "Powers of words in language families." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Testa, Rafael Rodrigues 1982. "Revisão de Crenças Paraconsistente baseada em um operador formal de consistência." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/281195.

Full text
Abstract:
Orientador: Marcelo Esteban Coniglio
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Made available in DSpace on 2018-08-25T18:45:14Z (GMT). No. of bitstreams: 1 Testa_RafaelRodrigues_D.pdf: 1707390 bytes, checksum: 77a5315394cfd4052cf1fe8733d0559c (MD5) Previous issue date: 2014
Abstract: Belief Revision studies how rational agents change their beliefs when they receive new information. The AGM system, the most influential work in this area, due to Alchourrón, Gärdenfors and Makinson, postulates rationality criteria for the different types of belief change and provides explicit constructions for them; the equivalence between the postulates and the operations is called a representation theorem. Recent studies show how the AGM paradigm can be made compliant with different non-classical logics, which is called AGM-compliance; this is the case for the family of paraconsistent logics we analyse in this thesis, the Logics of Formal Inconsistency (LFIs). Despite AGM-compliance, when a new logic is taken into account its underlying rationality must be understood and its language effectively used. We therefore propose new constructions that actually capture the intuitions behind the LFIs, in what we call the AGMo system. In this way we provide a new interpretation for these logics, more in line with formal epistemology. In an alternative approach, relying on AGM-compliance alone, we show how the AGM results can be directly applied to the LFIs, resulting in the AGMp system. In both approaches, we prove the corresponding representation theorems where needed
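For readers unfamiliar with the AGM operations discussed here, a toy sketch may help. It is classical full-meet revision on a finite propositional base, not the paraconsistent AGMp/AGMo constructions of the thesis: to revise by new information, keep the maximal subsets of the base consistent with it, intersect them, and add the new formula.

```python
from itertools import combinations, product

ATOMS = ["p", "q"]

def satisfiable(formulas):
    """Brute-force propositional satisfiability over the atoms above."""
    return any(all(f(dict(zip(ATOMS, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(ATOMS)))

def revise(base, new):
    """Full-meet revision: intersect the maximal subsets of `base`
    consistent with `new`, then add `new`."""
    n = len(base)
    consistent = [frozenset(ix)
                  for k in range(n + 1)
                  for ix in combinations(range(n), k)
                  if satisfiable([base[i] for i in ix] + [new])]
    maximal = [s for s in consistent if not any(s < t for t in consistent)]
    kept = frozenset.intersection(*maximal) if maximal else frozenset()
    return [base[i] for i in sorted(kept)] + [new]

belief_p = lambda v: v["p"]                       # "p"
belief_if_p_q = lambda v: (not v["p"]) or v["q"]  # "p -> q"
new_info = lambda v: not v["q"]                   # learn "not q"

revised = revise([belief_p, belief_if_p_q], new_info)
assert satisfiable(revised)  # revision restores consistency
```

On the base {p, p -> q}, revising by not-q discards both old beliefs: the well-known drasticness of full meet, and part of why AGM introduces selection functions and why representation theorems matter.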
Doctorate
Philosophy
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
17

Konecny, Jan. "Isotone fuzzy Galois connections and their applications in formal concept analysis." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
18

Souza, Marlo Vieira dos Santos e. "Choices that make you change your mind : a dynamic epistemic logic approach to the semantics of BDI agent programming languages." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150039.

Full text
Abstract:
Given the importance of intelligent agents and multiagent systems in Computer Science and Artificial Intelligence, agent-oriented programming (AOP) emerged as a new paradigm for building complex computational systems. Over the last decades the literature on agent-oriented programming has flourished, and several programming languages following this paradigm have been proposed, such as AgentSpeak (RAO, 1996; BORDINI; HUBNER; WOOLDRIDGE, 2007), Jadex (POKAHR; BRAUBACH; LAMERSDORF, 2005), 3APL/2APL (DASTANI; VAN RIEMSDIJK; MEYER, 2005; DASTANI, 2008) and GOAL (HINDRIKS et al., 2001), among others. Agent-oriented programming is a programming paradigm proposed by Shoham (1993) in which the minimal elements of a program are agents. Shoham (1993) argues that autonomous agents and multiagent systems constitute a distinct way of organising a solution to a computational problem, so that building a multiagent system to solve a problem can itself be understood as a programming paradigm. To understand this paradigm one must understand the concept of an agent. An agent, in this context, is a computational entity described by certain attributes - called mental attitudes - which describe its internal state and its relation to the external environment. Interpreting these attributes as mental attitudes is legitimate, Shoham (1993) argues, provided they behave similarly to the mental attitudes used to describe human behaviour and are pragmatically justifiable, i.e. useful for solving the problem at hand. Understanding the meaning of terms such as 'belief', 'desire', 'intention', etc., as well as their fundamental properties, is therefore of central importance for establishing agent-oriented programming languages.
In this work we are concerned with a specific family of agent-oriented programming languages, the so-called BDI languages. BDI languages are based on the BDI theory from the Philosophy of Action, in which the mental state of an agent (and its actions) is described by its beliefs, desires and intentions. While the construction of agent-based systems and programming languages has been widely discussed in the literature, the connection between such systems and languages and the theoretical work from Artificial Intelligence and the Philosophy of Action is not yet well established. This distance between theory and practice is well recognised in the relevant literature and is commonly called the 'semantic gap'. Many works have attacked the semantic gap for specific programming languages, such as AgentSpeak (BORDINI; MOREIRA, 2004) and GOAL (HINDRIKS; VAN DER HOEK, 2008). Indeed, Rao (1996, p. 44) states that 'the holy grail of BDI agent research is to show such a one-to-one correspondence with a reasonably useful and expressive language.' A crucial limitation, in our opinion, of past attempts to connect agent-oriented programming languages and BDI logics is that they establish the interpretation of a program only at the static level. In other words, given a program state, such works attempt to establish a declarative, i.e. logic-based, interpretation of the program state, representing the mental state of the agent. It is not clear, however, how the execution of the program can be understood as changes in the agent's mental state. The reason for this, we believe, lies in the formalisms used to specify BDI agents.
Indeed, the proposed BDI logics are mostly static or unable to represent mental actions. Revising a belief, adopting a goal or changing one's mind are examples of mental actions, i.e. actions performed internally by the agent, affecting only its mental state and therefore not observable. Such actions are, in our view, intrinsically different from ontic actions, which consist of observable behaviour that possibly affects the environment external to the agent. This difference is commonly acknowledged in the study of the semantics of agent-oriented programming languages (BORDINI; HUBNER; WOOLDRIDGE, 2007; D'INVERNO et al., 1998; MENEGUZZI; LUCK, 2009), yet the formalisms available for specifying BDI reasoning do not, to our knowledge, provide the expressive resources to encode it. We believe that, to attack the semantic gap, we need a semantic toolkit that allows the specification of mental actions as well as ontic actions. Dynamic Epistemic Logics (DEL) are a family of dynamic modal logics widely used to study the phenomena of change in agents' mental states. Work on DEL was strongly influenced by the Dutch school of logic, whose main proponent is Johan van Benthem, and its 'dynamic turn', which proposes the use of dynamic logics to understand actions of mental change (VAN BENTHEM, 1996). The DEL formalism derives from several strands of the study of epistemic change, such as the work on AGM Belief Revision theory (ALCHOURRÓN; GÄRDENFORS; MAKINSON, 1985) and Bayesian Epistemology (HÁJEK; HARTMANN, 2010).
These logics adopt the approach, first proposed by Segerberg (1999), of representing epistemic change within the same language used to represent the notions of belief and knowledge, in contrast to the extra-semantic approach of AGM-style Belief Revision. Within DEL, one logic seems particularly interesting for the study of agent-oriented programming: the Dynamic Preference Logic (DPL) of Girard (2008). DPL, also known as dynamic order logic, is a dynamic logic for the study of preferences with great expressive power for encoding various mental attitudes. Indeed, it has been employed in the study of obligations (VAN BENTHEM; GROSSI; LIU, 2014), beliefs (GIRARD; ROTT, 2014), preferences (GIRARD, 2008), etc. It has strong connections with non-monotonic reasoning and with logics already proposed for the study of mental attitudes in Decision Theory (BOUTILIER, 1994b). We believe DPL is an ideal candidate semantic toolkit for studying the mental attitudes of BDI theory, as it offers great flexibility for representing such attitudes and makes it easy to represent mental actions such as belief revision, desire adoption, etc. Moreover, from the work of Liu (2011), we know there are syntactic representations of the models of this logic that can be used to reason about mental attitudes, making them natural candidates for data structures in a semantically grounded implementation of an agent-oriented programming language. In this work we thus advance on the problem of reducing the semantic gap between agent-oriented programming languages and logical formalisms for specifying BDI agents. We explore not only how to establish connections between the static structures, i.e.
a program state and a model of the logic, but also how the reasoning actions through which the formal semantics of an agent-oriented programming language is specified can be understood within the logic as dynamic operators representing the agent's mental actions. With this connection, we also provide a set of operations that can be used to implement an agent-oriented programming language and that preserve the connection between the programs of that language and the models representing an agent's mental state. Finally, with these connections, we develop a framework for studying the dynamics of mental attitudes such as beliefs, desires and intentions, and how to reproduce these properties in the semantics of programming languages.
As the notions of Agency and Multiagent System became important topics for the Computer Science and Artificial Intelligence communities, Agent Programming has been proposed as a paradigm for the development of computer systems. As such, in the last decade, we have seen the flourishing of the literature on Agent Programming with the proposal of several programming languages, e.g. AgentSpeak (RAO, 1996; BORDINI; HUBNER; WOOLDRIDGE, 2007), Jadex (POKAHR; BRAUBACH; LAMERSDORF, 2005), JACK (HOWDEN et al., 2001), 3APL/2APL (DASTANI; VAN RIEMSDIJK; MEYER, 2005; DASTANI, 2008), GOAL (HINDRIKS et al., 2001), among others. Agent Programming is a programming paradigm proposed by Shoham (1993) in which the minimal units are agents. An agent is an entity composed of mental attitudes that describe its internal state - such as its motivations and decisions - as well as its relation to the external world - its beliefs about the world, its obligations, etc. This programming paradigm stems from the work on Philosophy of Action and Artificial Intelligence concerning the notions of intentional action and formal models of agents’ mental states. As such, the meaning (and properties) of notions such as belief, desire, intention, etc. as studied in these disciplines are of central importance to the area. Particularly, we will concentrate in our work on agent programming languages influenced by the so-called BDI paradigm of agency, in which an agent is described by her beliefs, desires, intentions. While the engineering of such languages has been much discussed, the connections between the theoretical work on Philosophy and Artificial Intelligence and its implementations in programming languages are not so clearly understood yet. This distance between theory and practice has been acknowledged in the literature for agent programming languages and is commonly known as the “semantic gap”.
Many authors have attempted to tackle this problem for different programming languages, as for the case of AgentSpeak (BORDINI; MOREIRA, 2004), GOAL (HINDRIKS; VAN DER HOEK, 2008), etc. In fact, Rao (1996, p. 44) states that “[t]he holy grail of BDI agent research is to show such a one-to-one correspondence with a reasonably useful and expressive language.” One crucial limitation in the previous attempts to connect agent programming languages and BDI logics, in our opinion, is that the connection is mainly established at the static level, i.e. they show how a given program state can be interpreted as a BDI mental state. It is not clear in these attempts, however, how the execution of the program may be understood as changes in the mental state of the agent. The reason for this, in our opinion, is that the formalisms employed to construct BDI logics are usually static, i.e. cannot represent actions and change, or can only represent ontic change, not mental change. The act of revising one’s beliefs or adopting a given desire are mental actions (or internal actions) and, as such, different from performing an action over the environment (an ontic or external action). This difference is well recognized in the literature on the semantics of agent programming languages (D’INVERNO et al., 1998; BORDINI; HUBNER; WOOLDRIDGE, 2007; MENEGUZZI; LUCK, 2009), but this difference is lost when translating their semantics into a BDI logic. We believe the main reason for that is a lack of expressibility in the formalisms used to model BDI reasoning. Dynamic Epistemic Logic, or DEL, is a family of dynamic modal logics to study information change and the dynamics of mental attitudes inspired by the Dutch School on the “dynamic turn” in Logic (VAN BENTHEM, 1996). 
This formalism stems from various approaches in the study of belief change and differs from previous studies, such as AGM Belief Revision, by shifting from extra-logical characterization of changes in the agent’s attitudes to their integration within the representation language. In the context of Dynamic Epistemic Logic, the Dynamic Preference Logic of Girard (2008) seems like an ideal candidate, having already been used to study diverse mental attitudes, such as Obligations (VAN BENTHEM; GROSSI; LIU, 2014), Beliefs (GIRARD; ROTT, 2014), Preferences (GIRARD, 2008), etc. We believe Dynamic Preference Logic to be the ideal semantic framework to construct a formal theory of BDI reasoning which can be used to specify an agent programming language semantics. The reason for that is that inside this logic we can faithfully represent the static state of an agent program, i.e. the agent’s mental state, as well as the changes in the state of the agent program by means of the agent’s reasoning, i.e. by means of her mental actions. As such, in this work we go further in closing the semantic gap between agent programs and agency theories and explore not only the static connections between program states and possible worlds models, but also how the program execution of a language based on common operations - such as addition/removal of information in the already mentioned bases - may be understood as semantic transformations in the models, as studied in Dynamic Logics. With this, we provide a set of operations for the implementation of agent programming languages which are semantically safe, and we connect an agent program execution with the dynamic properties in the formal theory. Lastly, by these connections, we provide a framework to study the dynamics of different mental attitudes, such as beliefs, goals and intentions, and how to reproduce the desirable properties proposed in theories of Agency in a programming language semantics.
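One concrete dynamic operator of the Dynamic Preference Logic mentioned above is lexicographic (radical) upgrade, which models accepting new information by reordering the plausibility of worlds. A minimal sketch, with hypothetical worlds and propositions of my own choosing, not code from the thesis:

```python
def lexicographic_upgrade(ranking, phi):
    """Radical upgrade by phi: make every phi-world strictly better than
    every non-phi-world, preserving the old relative order inside each
    group.  `ranking` maps worlds to ranks; lower rank = more plausible."""
    def regroup(worlds):
        old = sorted({ranking[w] for w in worlds})
        return {w: old.index(ranking[w]) for w in worlds}
    good = regroup([w for w in ranking if phi(w)])
    bad = regroup([w for w in ranking if not phi(w)])
    offset = max(good.values(), default=-1) + 1
    return {**good, **{w: r + offset for w, r in bad.items()}}

# Worlds as truth assignments; the agent initially finds w1 most plausible.
worlds = {"w1": {"rain": False}, "w2": {"rain": False}, "w3": {"rain": True}}
ranking = {"w1": 0, "w2": 1, "w3": 2}

# Upgrading with "rain" promotes w3; beliefs are read off the best worlds,
# so the agent now believes it rains.
new_ranking = lexicographic_upgrade(ranking, lambda w: worlds[w]["rain"])
assert new_ranking == {"w3": 0, "w1": 1, "w2": 2}
```

Operators of this shape are what let program steps (belief revision, goal adoption) be mirrored as model transformations.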
APA, Harvard, Vancouver, ISO, and other styles
19

Taha, Mohamed A. M. S. "Regulated rewriting in formal language theory." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kavvos, Georgios Alexandros. "On the semantics of intensionality and intensional recursion." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:f89b46d8-b514-42fd-9321-e2803452681f.

Full text
Abstract:
Intensionality is a phenomenon that occurs in logic and computation. In the most general sense, a function is intensional if it operates at a level finer than (extensional) equality. This is a familiar setting for computer scientists, who often study different programs or processes that are interchangeable, i.e. extensionally equal, even though they are not implemented in the same way, so intensionally distinct. Concomitant with intensionality is the phenomenon of intensional recursion, which refers to the ability of a program to have access to its own code. In computability theory, intensional recursion is enabled by Kleene's Second Recursion Theorem. This thesis is concerned with the crafting of a logical toolkit through which these phenomena can be studied. Our main contribution is a framework in which mathematical and computational constructions can be considered either extensionally, i.e. as abstract values, or intensionally, i.e. as fine-grained descriptions of their construction. Once this is achieved, it may be used to analyse intensional recursion. To begin, we turn to type theory. We construct a modal λ-calculus, called Intensional PCF, which supports non-functional operations at modal types. Moreover, by adding Löb's rule from provability logic to the calculus, we obtain a type-theoretic interpretation of intensional recursion. The combination of these two features is shown to be consistent through a confluence argument. Following that, we begin searching for a semantics for Intensional PCF. We argue that 1-category theory is not sufficient, and propose the use of P-categories instead. On top of this setting we introduce exposures, which are P-categorical structures that function as abstractions of well-behaved intensional devices. We produce three examples of these structures, based on Gödel numberings on Peano arithmetic, realizability theory, and homological algebra. 
The language of exposures leads us to a P-categorical analysis of intensional recursion, through the notion of intensional fixed points. This, in turn, leads to abstract analogues of classic intensional results in logic and computability, such as Gödel's Incompleteness Theorem, Tarski's Undefinability Theorem, and Rice's Theorem. We are thus led to the conclusion that exposures are a useful framework, which we propose as a solid basis for a theory of intensionality. In the final chapters of the thesis we employ exposures to endow Intensional PCF with an appropriate semantics. It transpires that, when interpreted in the P-category of assemblies on the PCA K1, the Löb rule can be interpreted as the type of Kleene's Second Recursion Theorem.
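Kleene's Second Recursion Theorem, which the thesis interprets type-theoretically via Löb's rule, is what lets a program act on its own code. The classical diagonal construction behind it can be sketched directly; this is an illustrative aside in plain Python, not Intensional PCF:

```python
import io
import contextlib

def fixed_point_program():
    """Build a program that prints its own source, via the diagonal
    construction used to prove Kleene's theorem: apply a template to a
    quoted copy of itself."""
    template = 's = {!r}\nprint(s.format(s))'
    return template.format(template)

src = fixed_point_program()
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(src)                          # running the program...
assert buf.getvalue().strip() == src  # ...reproduces its own code
```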
APA, Harvard, Vancouver, ISO, and other styles
21

Loomis, Eric John. "Meaning, generality, and rules : language and logic in the later Wittgenstein /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Banks, Christopher Jon. "Spatio-temporal logic for the analysis of biochemical models." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10512.

Full text
Abstract:
Process algebra, formal specification, and model checking are all well studied techniques in the analysis of concurrent computer systems. More recently these techniques have been applied to the analysis of biochemical systems which, at an abstract level, have similar patterns of behaviour to concurrent processes. Process algebraic models and temporal logic specifications, along with their associated model-checking techniques, have been used to analyse biochemical systems. In this thesis we develop a spatio-temporal logic, the Logic of Behaviour in Context (LBC), for the analysis of biochemical models. That is, we define and study the application of a formal specification language which not only expresses temporal properties of biochemical models, but expresses spatial or contextual properties as well. The logic can be used to express, or specify, the behaviour of a model when it is placed into the context of another model. We also explore the types of properties which can be expressed in LBC, various algorithms for model checking LBC - each an improvement on the last, the implementation of the computational tools to support model checking LBC, and a case study on the analysis of models of post-translational biochemical oscillators using LBC. We show that a number of interesting and useful properties can be expressed in LBC and that it is possible to express highly useful properties of real models in the biochemistry domain, with practical application. Statements in LBC can be thought of as expressing computational experiments which can be performed automatically by means of the model checker. Indeed, many of these computational experiments can be higher-order meaning that one succinct and precise specification in LBC can represent a number of experiments which can be automatically executed by the model checker.
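The fixpoint style of model checking underlying such tools can be sketched on plain CTL; this is a hedged illustration only, since LBC itself adds the spatial "in context" operators and is checked over biochemical models, which this toy does not attempt:

```python
def check_EX(R, phi_set):
    """EX phi: states with some successor satisfying phi."""
    return {s for s in R if R[s] & phi_set}

def check_EU(R, a_set, b_set):
    """E[a U b]: least fixpoint - b-states, plus a-states that can reach them."""
    result = set(b_set)
    while True:
        new = result | {s for s in a_set if R[s] & result}
        if new == result:
            return result
        result = new

def check_EG(R, a_set):
    """EG a: greatest fixpoint - a-states with a path staying inside a."""
    result = set(a_set)
    while True:
        new = {s for s in result if R[s] & result}
        if new == result:
            return result
        result = new

# Two-state toggle 0 <-> 1, with proposition p holding only in state 1.
R = {0: {1}, 1: {0}}
p = {1}
assert check_EX(R, p) == {0}
assert check_EU(R, {0, 1}, p) == {0, 1}   # p is reachable from everywhere
assert check_EG(R, {0}) == set()          # no path stays inside {0}
```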
APA, Harvard, Vancouver, ISO, and other styles
23

Revenko, Artem. "Automatic Construction of Implicative Theories for Mathematical Domains." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-197794.

Full text
Abstract:
Implication is a logical connective corresponding to the rule of causality "if ... then ...". Implications allow one to organize knowledge of some field of application in an intuitive and convenient manner. This thesis explores possibilities of automatic construction of all valid implications (the implicative theory) in a given field. The main method used for constructing implicative theories is a robust active-learning technique called Attribute Exploration. Attribute Exploration extracts knowledge from existing data and offers a possibility of refining this knowledge by providing counter-examples. Within the project, implicative theories were constructed automatically for two mathematical domains: algebraic identities and parametrically expressible functions. This goal was achieved thanks both to the pragmatic approach of Attribute Exploration and to discoveries in the respective fields of application. The two diverse application fields aptly illustrate different possible usage patterns of Attribute Exploration for the automatic construction of implicative theories.
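The basic validity check underlying Attribute Exploration can be illustrated in a few lines: an implication A → B holds in a formal context exactly when every object possessing all attributes of A also possesses all attributes of B. A minimal Python sketch with an invented toy context (an illustration of the idea, not the system built in the thesis):

```python
# Validity of an implication A -> B in a formal context:
# every object that has all attributes of A must also have all of B.
# The context below (objects -> attribute sets) is an invented toy example.

def implication_holds(context, premise, conclusion):
    premise, conclusion = set(premise), set(conclusion)
    return all(conclusion <= attrs
               for attrs in context.values()
               if premise <= attrs)

# Toy context: some algebraic structures and their properties.
context = {
    "Z":  {"associative", "commutative", "has_identity", "has_inverses"},
    "N":  {"associative", "commutative", "has_identity"},
    "S3": {"associative", "has_identity", "has_inverses"},
}

print(implication_holds(context, {"has_inverses"}, {"has_identity"}))  # True
print(implication_holds(context, {"commutative"}, {"has_inverses"}))   # False (N is a counter-example)
```

Attribute Exploration iterates this check: implications valid in the current data are proposed to an expert, who either confirms them or supplies a counter-example that is added to the context.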
APA, Harvard, Vancouver, ISO, and other styles
24

Atzemoglou, George Philip. "Higher-order semantics for quantum programming languages with classical control." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:9fdc4a26-cce3-48ed-bbab-d54c4917688f.

Full text
Abstract:
This thesis studies the categorical formalisation of quantum computing, through the prism of type theory, in a three-tier process. The first stage of our investigation involves the creation of the dagger lambda calculus, a lambda calculus for dagger compact categories. Our second contribution lifts the expressive power of the dagger lambda calculus, to that of a quantum programming language, by adding classical control in the form of complementary classical structures and dualisers. Finally, our third contribution demonstrates how our lambda calculus can be applied to various well known problems in quantum computation: Quantum Key Distribution, the quantum Fourier transform, and the teleportation protocol.
APA, Harvard, Vancouver, ISO, and other styles
25

Place, Thomas. "Decidable characterizations for tree logics." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00744954.

Full text
Abstract:
In this thesis we investigate the expressive power of several logics over finite trees. In particular we want to understand precisely the expressive power of first-order logic over finite trees. Because we study many logics, we proceed by comparison to a logic that subsumes them all and serves as a yardstick: monadic second-order logic. Each logic we consider is a fragment of monadic second-order logic. MSO is linked to the theory of formal languages. To each logical formula corresponds a tree language, which is the language of trees satisfying this formula. Furthermore, given a logic we can associate a class of tree languages: the class of languages definable by a formula of this logic. In the setting of finite trees MSO corresponds exactly to the class of regular tree languages. Given a logic, we actually look for a decidable characterization of the class of languages defined in this logic. By decidable characterization, we mean an algorithm for solving the following problem: given as input a finite tree automaton, decide if the recognized language belongs to the class in question. We will actually obtain our decidable characterizations by exhibiting for each class a set of closure properties such that a language is in the class under investigation if and only if it satisfies these closure properties. Each such closure property is then shown to be decidable. Stating and proving such closure properties usually yields a solid understanding of the expressive power of the corresponding logic. The main open problem in this research area is to obtain a decidable characterization for the class of tree languages that are definable in first-order logic. We provide decidable characterizations for several fragments of FO. First we provide three decidable characterizations for classes of regular languages of trees of bounded rank. The first class we consider is the class of languages definable in the temporal logic EF+F^-1. 
It essentially navigates the trees using two modalities for moving to a descendant node or an ancestor node. The second class we consider is the class of trees of bounded rank definable using one quantifier alternation. The last class is the class of languages definable using a Boolean combination of existential first-order formulas. In the setting of forests, we investigate the class of languages definable in first-order logic using only two variables and two predicates corresponding respectively to the ancestor and following-sibling relations. We provide a characterization for this logic. The last class for which we provide a decidable characterization is the class of locally testable languages (LT). A language L is in LT if membership in L depends only on the presence or absence of neighborhoods of a certain fixed size in the tree. We define notions of LT for both unranked trees and trees of bounded rank by adapting the definition of neighborhood to each setting. Then we provide a decidable characterization for both notions of LT.
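For words, the simpler analogue of the tree case, local testability is easy to make concrete: membership in a k-locally testable language depends only on the prefix and suffix of length k-1 and the set of infixes of length k. A small Python sketch of this profile (the word case only; the thesis adapts the notion of neighborhood to unranked trees and trees of bounded rank):

```python
# Profile determining membership in any k-locally testable word language:
# the length-(k-1) prefix and suffix, plus the set of length-k infixes.

def k_profile(word, k):
    return (word[:k - 1],
            word[-(k - 1):] if k > 1 else "",
            frozenset(word[i:i + k] for i in range(len(word) - k + 1)))

# Two words with equal k-profiles cannot be separated by any
# k-locally testable language:
print(k_profile("abab", 2) == k_profile("ababab", 2))  # True
print(k_profile("ab", 2) == k_profile("ba", 2))        # False
```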
APA, Harvard, Vancouver, ISO, and other styles
26

Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Griffith University. School of Computing and Information Technology, 2000. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070228.150125.

Full text
Abstract:
In this thesis we propose the use of Dijkstra's concept of a predicate transformer [Dij75] for the determination of database transaction safety [SS89] and the generation of simple conditions to check that a transaction will not violate the integrity constraints in the case that it is not safe. The generation of this simple condition is something that can be done statically, thus providing a mechanism for generating safe transactions. Our approach treats a database as state, a database transaction as a program, and the database's integrity constraints as a postcondition in order to use a predicate transformer [Dij75] to generate a weakest precondition. We begin by introducing a set-oriented update language for relational databases for which a predicate transformer is then defined. Subsequently, we introduce a more powerful update language for deductive databases and define a new predicate transformer to deal with this language and the more powerful integrity constraints that can be expressed using recursive rules. Next we introduce a data model with object-oriented features including methods, inheritance and dynamic overriding. We then extend the predicate transformer to handle these new features. For each of the predicate transformers, we prove that they do indeed generate a weakest precondition for a transaction and the database integrity constraints. However, the weakest precondition generated by a predicate transformer still involves much redundant checking. For several general classes of integrity constraint, including referential integrity and functional dependencies, we prove that the weakest precondition can be substantially further simplified to avoid checking things we already know to be true under the assumption that the database currently satisfies its integrity constraints. In addition, we propose the use of the predicate transformer in combination with meta-rules that capture the exact incremental change to the database of a particular transaction. 
This provides a more general approach to generating simple checks for enforcing transaction safety. We show that this approach is superior to previous approaches to the problem of efficient integrity constraint checking and transaction safety for relational, deductive, and deductive object-oriented databases. Finally, we demonstrate several further applications of the predicate transformer to the problems of schema constraints, dynamic integrity constraints, and determining the correctness of methods for view updates. We also show how to support transactions embedded in procedural languages such as C.
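The semantic core of this approach can be sketched directly: treat the database as a state, a transaction as a state transformer, and define wp(T, I) as the predicate that holds exactly when running T yields a state satisfying the integrity constraint I. The sketch below evaluates wp by executing T on a copy of the state; the thesis instead derives a syntactic condition by substitution, which is what enables static simplification. The schema, transaction and constraint here are invented:

```python
# Semantic sketch of a predicate transformer for database transactions.
# wp(T, I)(db) is true iff executing T from state db yields a state
# satisfying the postcondition I.

def wp(transaction, postcondition):
    return lambda db: postcondition(transaction(db))

def insert_emp(emp, dept):
    # Transaction: insert an (employee, dept) tuple; returns a new state.
    def t(db):
        return {"emp": db["emp"] | {(emp, dept)}, "dept": set(db["dept"])}
    return t

def ref_integrity(db):
    # Referential integrity: every employee's department must exist.
    return all(d in db["dept"] for (_, d) in db["emp"])

db = {"emp": {("ann", "hr")}, "dept": {"hr", "it"}}

# Evaluate the precondition derived from the transaction instead of
# running the insert and checking afterwards:
print(wp(insert_emp("bob", "it"), ref_integrity)(db))     # True
print(wp(insert_emp("eve", "sales"), ref_integrity)(db))  # False
```

Assuming the current state already satisfies ref_integrity, this precondition simplifies to the single check that the inserted tuple's department exists, which is exactly the kind of simplification the thesis proves for referential integrity.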
APA, Harvard, Vancouver, ISO, and other styles
27

Paddock, Jeff. "Informed by silence." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ62412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Schwartzkopff, Robert. "The numbers of the marketplace : commitment to numbers in natural language." Thesis, University of Oxford, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Thesis, Griffith University, 2000. http://hdl.handle.net/10072/365511.

Full text
Abstract:
In this thesis we propose the use of Dijkstra's concept of a predicate transformer [Dij75] for the determination of database transaction safety [SS89] and the generation of simple conditions to check that a transaction will not violate the integrity constraints in the case that it is not safe. The generation of this simple condition is something that can be done statically, thus providing a mechanism for generating safe transactions. Our approach treats a database as state, a database transaction as a program, and the database's integrity constraints as a postcondition in order to use a predicate transformer [Dij75] to generate a weakest precondition. We begin by introducing a set-oriented update language for relational databases for which a predicate transformer is then defined. Subsequently, we introduce a more powerful update language for deductive databases and define a new predicate transformer to deal with this language and the more powerful integrity constraints that can be expressed using recursive rules. Next we introduce a data model with object-oriented features including methods, inheritance and dynamic overriding. We then extend the predicate transformer to handle these new features. For each of the predicate transformers, we prove that they do indeed generate a weakest precondition for a transaction and the database integrity constraints. However, the weakest precondition generated by a predicate transformer still involves much redundant checking. For several general classes of integrity constraint, including referential integrity and functional dependencies, we prove that the weakest precondition can be substantially further simplified to avoid checking things we already know to be true under the assumption that the database currently satisfies its integrity constraints. In addition, we propose the use of the predicate transformer in combination with meta-rules that capture the exact incremental change to the database of a particular transaction. 
This provides a more general approach to generating simple checks for enforcing transaction safety. We show that this approach is superior to previous approaches to the problem of efficient integrity constraint checking and transaction safety for relational, deductive, and deductive object-oriented databases. Finally, we demonstrate several further applications of the predicate transformer to the problems of schema constraints, dynamic integrity constraints, and determining the correctness of methods for view updates. We also show how to support transactions embedded in procedural languages such as C.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Computing and Information Technology
Faculty of Information and Communication Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
30

Silva, Nayara de Souza. "Aplicação de verificação formal em um sistema de segurança veicular." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/7134.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
The development of computer systems involves many stages, some more necessary than others depending on the purpose of the application. The implementation stage is indisputably always necessary, while the requirements-analysis and testing phases are sometimes neglected, and formal verification of correctness is generally reserved for few applications. Model checkers have been used to validate behavioural specifications at an appropriate level of abstraction, notably for critical systems: when human life is at stake, when errors entail huge financial loss, or when information security is involved. This work therefore applies formal verification techniques to validate the vehicular safety system Avoiding Doored System, considered critical, in order to check whether the implemented system faithfully meets its proposed requirements. The Specification and Verification System (PVS) was used as the verification tool, and all steps of the specification and formal verification process are detailed and documented.
O processo de desenvolvimento de sistemas computacionais leva em conta muitas etapas, nos quais umas são tidas mais necessárias que outras, dependendo da finalidade da aplicação. A etapa de implementação sempre é necessária, indiscutivelmente. Por vezes as fases de análise de requisitos e de testes são negligenciadas. E, geralmente, a parte de verificação formal de corretude é destinada a poucas aplicações. O uso de verificadores de modelos tem sido explorado na tarefa de validar uma especificação comportamental no seu nível adequado de abstração, sobretudo, na validação de especificações de sistemas críticos, principalmente quando estes envolvem a preservação da vida humana, quando a existência de erros acarreta enorme prejuízo financeiro ou quando tratam com a segurança da informação. Diante disso, se propõe aplicar técnicas de verificação formal na validação do sistema de segurança veicular Avoiding Doored System, tido como crítico, com o intuito de atestar se o sistema implementado atende, fielmente, os requisitos para ele propostos. Para tal, foi utilizada como ferramenta para a verificação de sua corretude o Specification and Verification System - PVS, detalhando e documentando todas as etapas empregadas no processo de especificação e verificação formal.
APA, Harvard, Vancouver, ISO, and other styles
31

Boronat, Moll Arturo. "A formal framework for model management." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1964.

Full text
Abstract:
Model-Driven Software Development is a branch of Software Engineering in which software artefacts are represented as models in order to increase productivity, quality and economic efficiency in the software development process, where a model provides an abstract representation of the final application code. In this field, the OMG-sponsored Model-Driven Architecture (MDA) initiative comprises a family of industrial standards, notably Meta-Object Facility (MOF), Unified Modeling Language (UML), Object Constraint Language (OCL), XML Metadata Interchange (XMI), and Query/Views/Transformations (QVT). These standards provide common guidelines for model-based tools and for model-driven software development processes. Their goal is to improve interoperability between executable frameworks, to automate the software development process, and to provide techniques that prevent errors during that process. The MOF standard describes a generic framework in which the abstract syntax of modelling languages can be defined. This standard pursues the definition of the basic concepts used in model-driven software development processes: what a model is, what a metamodel is, what reflection means in a MOF-based framework, and so on. However, most of these concepts lack a formal semantics in the current version of the MOF standard. In addition, OCL is used as a constraint-definition language that permits adding semantics to a MOF metamodel. Unfortunately, the relationship between a metamodel and its OCL constraints also lacks a formal semantics. This is due, in part, to the fact that metamodels can only be defined as data in a MOF-based framework. The MOF standard also provides the so-called MOF reflection facilities (MOF Reflecti
Boronat Moll, A. (2007). A formal framework for model management [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1964
APA, Harvard, Vancouver, ISO, and other styles
32

Bourdier, Tony. "Méthodes algébriques pour la formalisation et l'analyse de politiques de sécurité." Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10096.

Full text
Abstract:
Concevoir et mettre en oeuvre des méthodes pour la spécification, l'analyse et la vérification de logiciels et de systèmes sont les principaux moteurs des activités de recherche présentées dans ce manuscrit. Dans ce cadre, nos travaux se positionnent dans la catégorie dite des méthodes formelles appartenant à la communauté plus large du génie logiciel. A l'interface des travaux théoriques et applicatifs, notre objectif est de contribuer aux méthodes permettant d'assurer la correction et la sûreté des systèmes (fonctionnalité, sécurité, fiabilité, ...) en développant ou en améliorant des langages de spécification, des techniques et des outils permettant leur analyse formelle. Dans ce but, nous nous sommes attaché dans cette thèse à proposer et à étudier un cadre formel permettant la définition de politiques de sécurité et la vérification de leurs propriétés. A cet effet, nous avons proposé un cadre pour la spécification de politiques de sécurité basé sur une approche modulaire dans laquelle une politique est vue comme la composition d'un modèle de sécurité et d'une configuration. Nous avons investigué les possibilités offertes par de telles spécifications lorsque les modèles sont exprimés au moyen de contraintes du premier ordre et les configurations au moyen de programmes logiques. En particulier, nous avons proposé un algorithme permettant de transformer une politique exprimée dans un modèle donné vers une autre politique équivalente (au sens où elle engendre les mêmes autorisations) exprimée dans un autre modèle. Dans un second temps, nous nous sommes proposé de tenir compte des aspects dynamiques de la configuration d'une politique vue comme un état du système sur lequel la politique est mise en oeuvre et où chaque action est associée à une procédure de modification des états. 
Nous avons proposé un langage formel simple pour spécifier séparément les systèmes et les politiques de sécurité puis avons donné une sémantique des spécifications exprimées dans ce cadre sous la forme de systèmes de réécriture. Nous nous sommes ensuite attachés à montrer que les systèmes de réécriture obtenus permettent l'étude de propriétés de sécurité. Dans une troisième partie, nous nous sommes focalisé sur les mécanismes permettant la mise en oeuvre de politiques de sécurité dans les réseaux. Dans ce cadre, nous avons proposé une spécification des firewalls et de leurs compositions basée sur les automates d'arbres et les systèmes de réécriture puis avons montré en quoi ces spécifications nous permettent d'analyser de façon automatique les politiques de sécurité sous-jacentes
Designing and applying formal methods for specifying, analyzing and verifying software and systems are the main driving forces behind the work presented in this manuscript. In this context, our activities fall into the category of formal methods, within the wider community of software engineering. At the interface between theoretical and applied research, our aim is to contribute to the methods ensuring the correctness and safety of systems (security, reliability, ...) by developing or improving specification languages, techniques and tools allowing their formal analysis. To this end, we set out in this thesis to propose and study a formal framework allowing the specification of security policies and the verification of their properties. We first proposed a framework for specifying security policies based on a modular approach in which policies are seen as a composition of security models and configurations. We investigated the possibilities opened by such specifications when models are expressed by means of first-order constraints and configurations by means of logic programs. In particular, we proposed an algorithm allowing the transformation of a security policy expressed in a given model into another equivalent policy (in the sense that it generates the same authorizations) expressed in another model. Secondly, we took into account the dynamic aspects of policy configurations, which can be seen as states of the system on which the policy is applied, where each action is associated with a state-modification procedure. We proposed a simple formal language to specify systems and security policies separately and then gave a semantics of specifications expressed in this framework in the form of rewriting systems. We then showed that the obtained rewriting systems allow the analysis of security properties. In the third part, we focused on mechanisms enforcing security policies in networks. 
In this context, we proposed a specification of firewalls and their compositions based on tree automata and rewriting systems, and then showed how these specifications allow us to automatically analyze the underlying security policies.
APA, Harvard, Vancouver, ISO, and other styles
33

Harwath, Frederik. "On Invariant Formulae of First-Order Logic with Numerical Predicates." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19609.

Full text
Abstract:
Diese Arbeit untersucht ordnungsinvariante Formeln der Logik erster Stufe (FO) und einiger ihrer Erweiterungen, sowie andere eng verwandte Konzepte der endlichen Modelltheorie. Viele Resultate der endlichen Modelltheorie nehmen an, dass Strukturen mit einer Einbettung ihres Universums in ein Anfangsstück der natürlichen Zahlen ausgestattet sind. Dies erlaubt es, beliebige Relationen (z.B. die lineare Ordnung) und Operationen (z.B. Addition, Multiplikation) von den natürlichen Zahlen auf solche Strukturen zu übertragen. Die resultierenden Relationen auf den endlichen Strukturen werden als numerische Prädikate bezeichnet. Werden numerische Prädikate in Formeln verwendet, beschränkt man sich dabei häufig auf solche Formeln, deren Wahrheitswert auf endlichen Strukturen invariant unter Änderungen der Einbettung der Strukturen ist. Wenn das einzige verwendete numerische Prädikat eine lineare Ordnung ist, spricht man beispielsweise von ordnungsinvarianten Formeln. Die Resultate dieser Arbeit können in drei Teile unterteilt werden. Der erste Teil betrachtet die Lokalitätseigenschaften von FO-Formeln mit Modulo-Zählquantoren, die beliebige numerische Prädikate invariant nutzen. Der zweite Teil betrachtet FO-Sätze, die eine lineare Ordnung samt der zugehörigen Addition auf invariante Weise nutzen, auf endlichen Bäumen. Es wird gezeigt, dass diese dieselben regulären Baumsprachen definieren, wie FO-Sätze ohne numerische Prädikate mit bestimmten Kardinalitätsprädikaten. Für den Beweis wird eine algebraische Charakterisierung der in dieser Logik definierbaren Baumsprachen durch Operationen auf Bäumen entwickelt. Der dritte Teil der Arbeit beschäftigt sich mit der Ausdrucksstärke und der Prägnanz von FO und Erweiterungen von FO auf Klassen von Strukturen beschränkter Baumtiefe.
This thesis studies the concept of order-invariance of formulae of first-order logic (FO) and some of its extensions as well as other closely related concepts from finite model theory. Many results in finite model theory assume that structures are equipped with an embedding of their universe into an initial segment of the natural numbers. This allows one to transfer arbitrary relations (e.g. linear order) and operations (e.g. addition, multiplication) on the natural numbers to structures. The arising relations on the structures are called numerical predicates. If formulae use these numerical predicates, it is often desirable to consider only such formulae whose truth value in finite structures is invariant under changes to the embeddings of the structures. If the numerical predicates include only a linear order, such formulae are called order-invariant. We study the effect of the invariant use of different kinds of numerical predicates on the expressive power of FO and extensions thereof. The results of this thesis can be divided into three parts. The first part considers the locality and non-locality properties of formulae of FO with modulo-counting quantifiers which may use arbitrary numerical predicates in an invariant way. The second part considers sentences of FO which may use a linear order and the corresponding addition in an invariant way and obtains a characterisation of the regular finite tree languages which can be defined by such sentences: these are the same tree languages which are definable by FO-sentences without numerical predicates with certain cardinality predicates. For the proof, we obtain a characterisation of the tree languages definable in this logic in terms of algebraic operations on trees. The third part compares the expressive power and the succinctness of different extensions of FO on structures of bounded tree-depth.
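On small finite structures, order-invariance can be checked by brute force: evaluate the query under every linear order of the universe and test whether the answer ever changes. A toy Python sketch (the example queries are invented, and the enumeration is exponential, so this is only feasible for tiny universes):

```python
from itertools import permutations

# A query receives the universe as a list, i.e. under a chosen linear order.
def order_invariant(universe, query):
    results = {query(list(order)) for order in permutations(universe)}
    return len(results) == 1

# "The first element w.r.t. the order equals 0" depends on the chosen order:
print(order_invariant({0, 1, 2}, lambda order: order[0] == 0))  # False

# A query that mentions the order but only counts positions depends solely
# on the size of the universe, hence is order-invariant:
print(order_invariant({0, 1, 2, 3},
                      lambda order: sum(i % 2 == 0 for i in range(len(order))) == 2))  # True
```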
APA, Harvard, Vancouver, ISO, and other styles
34

Bourdier, Tony. "Méthodes algébriques pour la formalisation et l'analyse de politiques de sécurité." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00646401.

Full text
Abstract:
Designing and applying methods for the specification, analysis and verification of software and systems are the main driving forces behind the research activities presented in this manuscript. In this context, our work falls into the category of formal methods, within the wider community of software engineering. At the interface between theoretical and applied research, our aim is to contribute to the methods ensuring the correctness and safety of systems (functionality, security, reliability, ...) by developing or improving specification languages, techniques and tools allowing their formal analysis. To this end, we set out in this thesis to propose and study a formal framework allowing the definition of security policies and the verification of their properties. We proposed a framework for the specification of security policies based on a modular approach in which a policy is seen as the composition of a security model and a configuration. We investigated the possibilities offered by such specifications when models are expressed by means of first-order constraints and configurations by means of logic programs. In particular, we proposed an algorithm transforming a policy expressed in a given model into another equivalent policy (in the sense that it generates the same authorizations) expressed in another model. Secondly, we took into account the dynamic aspects of a policy's configuration, seen as a state of the system on which the policy is enforced, where each action is associated with a state-modification procedure. 
We proposed a simple formal language to specify systems and security policies separately, and then gave a semantics of specifications expressed in this framework in the form of rewriting systems. We then showed that the obtained rewriting systems allow the study of security properties. In a third part, we focused on mechanisms enforcing security policies in networks. In this context, we proposed a specification of firewalls and their compositions based on tree automata and rewriting systems, and showed how these specifications allow us to automatically analyze the underlying security policies.
APA, Harvard, Vancouver, ISO, and other styles
35

Dolinka, Igor. "O identitetima algebri regularnih jezika." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2000. https://www.cris.uns.ac.rs/record.jsf?recordId=5997&source=NDLTD&language=en.

Full text
Abstract:
Jezik nad Σ je proizvoljan skup reči nad Σ, tj. proizvoljan podskup slobodnog monoida Σ*. Jezici nad datom azbukom formiraju algebre jezika, sa operacijama unije, konkatenacije (dopisivanja reči), Kleene-jeve iteracije i sa 0, {Λ} kao konstantama. Regularni jezici nad Σ su elementi podalgebre algebre jezika nad Σ generisane konačnim jezicima. Ispostavlja se da algebre jezika generišu isti varijetet (i stoga zadovoljavaju iste identitete) kao i algebre binarnih relacija snabdevene operacijama unije, kompozicije, refleksivno-tranzitivnog zatvorenja i praznom relacijom i dijagonalom kao konstantama. Reč je o varijetetu Kleenejevih algebri, i slobodne algebre tog varijeteta su baš algebre regularnih jezika. Na početku disertacije, izloženi su neki aspekti algebarske teorije automata i formalnih jezika, teorije binarnih relacija i univerzalne algebre, relevantni za ispitivanje identiteta na algebrama jezika. Zatim je dat klasični rezultat (Redko, 1964.) da varijetet Kleenejevih algebri nema konačnu bazu identiteta. Ovde je prikazan dokaz Conwaya iz 1971., budući da on sadrži neke ideje koje su se pokazale korisne za dalji rad. Glave 3 i 4 sadrže originalne rezultate usmerene na profinjenje Redkovog rezultata. Pokazano je da uzroci beskonačnosti baze identiteta za Kleenejeve algebre leže u interakciji operacija konkatenacije i iteracije jezika (odnosno, kompozicije i refleksivno-tranzitivnog zatvorenja relacija). Drugim rečima, klasa redukata algebri jezika bez operacije unije nema konačnu bazu identiteta. To daje odgovor na problem D. A. Bredikhina iz 1993. godine. S druge strane, proširenjem tipa Kleenejevih algebri involutivnom operacijom inverza jezika, odnosno relacija, takođe se dolazi do beskonačno baziranih varijeteta, čime se rešava problem B. Jonssona iz 1988. godine. Analogno, komutativni jezici nad Σ su proizvoljni podskupovi slobodnog komutativnog monoida Σ⊕. 
U Glavi 5 je dokazano da se jednakosna teorija algebri komutativnih jezika poklapa sa jednakosnom teorijom algebre (regularnih) jezika nad jednoelementnim alfabetom, što daje odgovor na problem koji je još 1969. formulisao A. Salomaa u svojoj monografiji Theory of Automata. Na taj način, iz poznatih rezultata o jednakosnoj aksiomatizaciji komutativnih jezika se dobija jedna baza za algebre jezika nad jednoelementnim alfabetom, kao i veoma kratak dokaz poznate činjenice (takođe Redko, 1964.) da algebre komutativnih jezika nemaju konačnu bazu identiteta. Na kraju disertacije, identiteti Kleenejevih algebri se posmatraju u kontekstu dinamičkih algebri. Reč je o algebarskoj verziji dinamičkih logika, koje su konstruisane sedamdesetih godina kao matematički model rada računara, kada se na njima izvršava program pisan u nekom imperativnom programskom jeziku. Na primer, problemi verifikacije i ekvivalentnosti programa se lako izražavaju preko identiteta dinamičkih algebri, tako da razne njihove jednakosne osobine odgovaraju pojmovima iz teorijskog računarstva. Takođe, interesantno je da je jednakosna teorija Kleenejevih algebri „kodirana” u konačno baziranoj jednakosnoj teoriji dinamičkih algebri. Polazeći od poznatih rezultata za dvosortne dinamičke algebre (pri čemu je jedna komponenta algebra istog tipa kao i Kleenejeve algebre, dok je druga Booleova algebra), neki od tih rezultata su transformisani i prošireni za Jonssonove dinamičke algebre (jednosortne modele dinamičkih logika). Na primer, ako se Kleenejeva algebra K može predstaviti kao konačan direktan proizvod slobodnih algebri varijeteta Kleenejevih algebri generisanih Kleenejevim relacionim algebrama, tada varijetet K-dinamičkih algebri ima odlučivu jednakosnu teoriju. Odavde se izvodi da svaki varijetet Kleenejevih algebri generisan Kleenejevim relacionim algebrama takođe ima odlučivu jednakosnu teoriju.
A language over Σ is an arbitrary set of words, i.e. any subset of the free monoid Σ*. All languages over a given alphabet form the algebra of languages, which is equipped with the operations of union, concatenation, Kleene iteration, and with 0, {λ} as constants. Regular languages over Σ are the elements of the subalgebra of the algebra of languages over Σ generated by finite languages. It turns out that algebras of languages generate exactly the same variety as algebras of binary relations, endowed with union, relation composition, formation of the reflexive-transitive closure, and the empty relation and the diagonal as constants. The variety in question is the variety of Kleene algebras, and the algebras of regular languages are just its free algebras. The present dissertation starts with several aspects of the algebraic theory of automata and formal languages, the theory of binary relations, and universal algebra, which are related to problems concerning identities of language algebras. This material is followed by the classical result (Redko, 1964) claiming that the variety of Kleene algebras has no finite equational base. We present Conway's proof from 1971, since it contains some ideas which can be used for generalizations in different directions. Chapters 3 and 4 contain original results which refine the one of Redko. It is shown that the cause of the nonfinite axiomatizability of Kleene algebras lies in the superposition of the concatenation and the iteration of languages, that is, of the composition of relations and the reflexive-transitive closure. In other words, the class of +-free reducts of algebras of languages has no finite equational base, which answers in the negative a problem of D. A. Bredikhin from 1993. On the other hand, by extending the type of Kleene algebras by the involutive operation of inverse of languages (converse of relations), we also obtain a nonfinitely based variety, which solves a problem of B. Jonsson from 1988.
Analogously, commutative languages over Σ are defined as subsets of the free commutative monoid Σ⊕. It is proved in Chapter 5 that the equational theories of algebras of commutative languages and, respectively, of the algebra of (regular) languages over the one-element alphabet coincide. This result settles a thirty-year-old problem of A. Salomaa, formulated back in his well-known monograph "Theory of Automata". Thus, we obtain an equational base for the algebra of one-letter languages, and, on the other hand, a very short proof of another result of Redko from 1964, according to which there is no finite equational base for algebras of commutative languages. Finally, identities of Kleene algebras are considered in the context of dynamic algebras, which are just algebraic counterparts of dynamic logics. They were discovered in the seventies as a result of the quest for an appropriate logic for reasoning about computer programs written in an imperative programming language. For example, problems concerning program verification and equivalence can be easily translated into identities of dynamic algebras, so that many of their equational properties correspond to notions from computer science. It is also interesting that the whole equational theory of Kleene algebras is "encoded" in the finitely based equational theory of dynamic algebras. Starting with known results on two-sorted dynamic algebras (where one component is an algebra of the same signature as Kleene algebras, while the other is a Boolean algebra), some of those results are transformed and extended to Jonsson dynamic algebras (that is, one-sorted models of dynamic logics). For example, if a Kleene algebra K can be represented as a finite direct product of free algebras of varieties of Kleene algebras generated by Kleene relation algebras, then the variety of K-dynamic algebras has a decidable equational theory.
The latter yields that all varieties of Kleene algebras generated by Kleene relation algebras have decidable equational theories, too.
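Identities of the kind studied here, such as the classical Kleene-algebra law (a + b)* = (a* b*)*, can be checked empirically on algebras of languages truncated at a fixed word length. The following sketch is our own illustration, not taken from the thesis, and is only a finite approximation of the actual (infinite) algebra of languages:

```python
# Languages as finite sets of words up to a length bound, with the
# Kleene-algebra operations of union, concatenation and iteration.
BOUND = 6

def trunc(lang):
    """Keep only words within the length bound."""
    return {w for w in lang if len(w) <= BOUND}

def concat(l1, l2):
    return trunc({u + v for u in l1 for v in l2})

def star(lang):
    result = {""}      # the empty word always belongs to L*
    frontier = {""}
    while frontier:    # terminates: word lengths are bounded by BOUND
        frontier = concat(frontier, lang) - result
        result |= frontier
    return result

a, b = {"a"}, {"b"}
lhs = star(a | b)                     # (a + b)*
rhs = star(concat(star(a), star(b)))  # (a* b*)*
assert lhs == rhs  # the identity holds, checked up to length BOUND
```

Up to length 6 both sides contain all 127 words over {a, b}; of course such finite checks only refute, never prove, an identity of the full variety.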
APA, Harvard, Vancouver, ISO, and other styles
36

Monmege, Benjamin. "Spécification et vérification de propriétés quantitatives : expressions, logiques et automates." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00908990.

Full text
Abstract:
Automatic verification has become a central research area in computer science. Over more than 25 years, a rich theory has been developed, leading to numerous tools, both academic and industrial, for verifying Boolean properties, those that are either true or false. Current needs are evolving toward finer, that is, more quantitative, analyses. The extension of verification techniques to quantitative domains began 15 years ago with probabilistic systems. However, many other quantitative properties exist, such as the lifetime of a device, the energy consumption of an application, the reliability of a program, or the number of results of a query in a database. Expressing these properties requires new specification languages, together with algorithms that verify such properties over a given structure. This thesis studies several formalisms for specifying such properties, whether denotational (regular expressions, monadic logics, or temporal logics) or more operational, such as weighted automata, possibly extended with pebbles. A first objective of this manuscript is the study of expressiveness results comparing these formalisms. In particular, we give efficient translations from the denotational formalisms into the operational one. These objects, and the associated results, are presented in a unified framework of graph structures. They apply, among others, to finite words and trees, to nested words, to pictures, and to Mazurkiewicz traces. Possible applications therefore include the verification of quantitative properties of traces of (possibly recursive or concurrent) programs, queries over XML documents (modelling, for example, databases), and natural language processing.
We then turn to the algorithmic questions that these results naturally raise, such as evaluation, satisfiability, and model checking. In particular, we study the decidability and complexity of some of these problems, depending on the underlying semiring and on the structures considered (words, trees, and so on). Finally, we consider interesting restrictions of the previous formalisms. Some of them extend the class of semirings over which quantitative properties can be specified. Another is devoted to the special case of probabilistic specifications: in particular, we study syntactic fragments of our generic specification formalisms that generate only probabilistic behaviours.
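Among the operational formalisms of this kind, a weighted automaton assigns to each word a value in a semiring rather than a mere yes/no answer. As a hedged illustration (the automaton and its state names are our own toy example, not from the thesis), here is a three-state automaton over the semiring of natural numbers that counts occurrences of the factor "ab":

```python
# Weighted automaton over (N, +, x) counting "ab" factors.
# States: 0 = before the chosen occurrence, 1 = just read its 'a',
# 2 = after it.  M[c][p][q] is the weight of the transition p --c--> q.
M = {
    "a": [[1, 1, 0],
          [0, 0, 0],
          [0, 0, 1]],
    "b": [[1, 0, 0],
          [0, 0, 1],
          [0, 0, 1]],
}

def weight(word):
    """Initial vector (1,0,0), final vector (0,0,1): the weight of a word
    is the number of accepting runs, i.e. the number of 'ab' factors."""
    v = [1, 0, 0]
    for c in word:
        v = [sum(v[p] * M[c][p][q] for p in range(3)) for q in range(3)]
    return v[2]

assert weight("abab") == 2
assert weight("aab") == 1
assert weight("bbb") == 0
```

Swapping the semiring (max/plus, probabilities, ...) changes the quantity computed without changing the evaluation algorithm, which is the point of the semiring abstraction.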
APA, Harvard, Vancouver, ISO, and other styles
37

Ballis, Demis. "Rule-Based Software Verification and Correction." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1948.

Full text
Abstract:
The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are not trivial tasks, as witnessed by many case studies in the literature. In this dissertation, we face two challenging problems of verification and correction: specifically, the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data). Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms: namely, the functional logic and the functional programming paradigms. Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns as well as lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site. So, after a verification phase, one can also infer semi-automatically some possible corrections in order to fix the Web site. The methodology is based on a novel rewrit
Ballis, D. (2005). Rule-Based Software Verification and Correction [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1948
APA, Harvard, Vancouver, ISO, and other styles
38

Neron, Pierre. "A Quest for Exactness: Program Transformation for Reliable Real Numbers." Phd thesis, Ecole Polytechnique X, 2013. http://tel.archives-ouvertes.fr/tel-00924379.

Full text
Abstract:
This thesis presents an algorithm that eliminates square roots and divisions from loop-free programs used in embedded systems, while preserving their semantics. Eliminating these operations avoids rounding errors at execution time, errors which can cause completely unexpected program behaviour. The transformation respects the constraints of embedded code, in particular the requirement that the resulting program run in fixed memory. It relies on two fundamental algorithms developed in this thesis. The first eliminates square roots and divisions from Boolean expressions containing comparisons of arithmetic expressions. The second solves a particular anti-unification problem, which we call constrained anti-unification. The program transformation is defined and proved correct in the PVS proof assistant, and is also implemented as a strategy of that system. Constrained anti-unification is also used to extend the transformation to programs containing functions, which makes it possible to eliminate square roots and divisions from specifications written in PVS. The robustness of the method is demonstrated on a substantial example: the elimination of square roots and divisions in an aircraft conflict detection program.
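The core idea of removing a square root from a comparison can be sketched in a few lines (a simplification of ours, not the thesis's algorithm, which transforms whole loop-free programs): for x >= 0, sqrt(x) <= y holds exactly when y >= 0 and x <= y*y, a test that uses only multiplication and so introduces no rounding over integers or rationals:

```python
def sqrt_leq(x, y):
    """Decide sqrt(x) <= y without ever computing a square root.
    Exact for integers and rationals, where * is computed exactly."""
    assert x >= 0
    # sqrt(x) <= y  <=>  y is nonnegative and x <= y^2
    return y >= 0 and x <= y * y

assert sqrt_leq(4, 2)       # sqrt(4) = 2 <= 2
assert not sqrt_leq(5, 2)   # sqrt(5) > 2
assert not sqrt_leq(4, -2)  # a negative bound can never dominate
```

The same squaring trick fails silently if x may be negative, which is why the transformation must track sign information through the program.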
APA, Harvard, Vancouver, ISO, and other styles
39

Janin, David. "Contribution aux fondements des méthodes formelles : jeux, logique et automates." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2005. http://tel.archives-ouvertes.fr/tel-00659990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Jacquemard, Florent. "Modèles d'automates d'arbres étendus pour la vérification de systèmes infinis." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00643595.

Full text
Abstract:
This document presents a study of several models of finite-state machines, all of which extend the same formalism, namely classical tree automata, together with their application to various tasks such as the static analysis of programs or systems, typing, checking the consistency of specifications, and model checking. Trees are a natural data structure, widespread in computer science, used for example to represent hierarchical or nested data structures, in specific algorithms (binary search trees, distributed algorithms), as an abstract model of the semi-structured data used for information exchange on the Web, for an algebraic presentation of recursive processes, as terms in logic, and so on. When reasoning about systems that manipulate trees, or that are modelled by trees, it is crucial to have a finite representation of infinite sets of trees. Tree automata are finite-state machines that provide such a representation. They have proved well suited to reasoning tasks: they have a well-established theoretical model, closely related to logic, and they enjoy good composition properties and efficient decision algorithms. In particular, tree automata are used at the core of formal verification systems and automated deduction tools. Tree automata nevertheless suffer from severe limitations in expressiveness. For example, they cannot perform non-linear pattern matching or express integrity constraints such as keys in databases. Several extensions have been proposed to improve the model while trying to preserve its good properties. In this document we present several such extensions, their properties, and their use in the symbolic verification of systems and programs.
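As a minimal illustration of the classical model being extended here (our own toy example, not from the memoir), a bottom-up tree automaton with states {0, 1} can recognise the Boolean expression trees that evaluate to true:

```python
def run(tree):
    """Bottom-up run of a tree automaton on a Boolean expression tree.
    A tree is either a leaf '0'/'1' or a tuple (op, left, right)."""
    if isinstance(tree, str):
        return {"0": 0, "1": 1}[tree]      # leaf transitions
    op, left, right = tree
    delta = {"and": lambda a, b: a & b,    # inner-node transition table
             "or":  lambda a, b: a | b}
    return delta[op](run(left), run(right))

# The automaton "accepts" exactly the trees whose root state is 1:
assert run(("or", "0", ("and", "1", "1"))) == 1
assert run(("and", "0", "1")) == 0
```

The limitations mentioned in the abstract show up already here: this device cannot, for instance, check that two sibling subtrees are equal, which is what the extended models are designed to express.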
APA, Harvard, Vancouver, ISO, and other styles
41

Bourreau, Pierre. "Jeux de typage et analyse de lambda-grammaires non-contextuelles." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00733964.

Full text
Abstract:
Abstract categorial grammars (or λ-grammars) are a formalism based on the simply typed λ-calculus. They can be seen as grammars generating such terms, and were introduced to model the syntax-semantics interface of natural language, bringing together two fundamental ideas: the distinction, due to Curry, between the tectogrammar (the deep structure of an utterance) and the phenogrammar (the surface representation of an utterance) of a language; and an algebraic modelling of the principle of compositionality to account for the semantics of sentences, due to Montague. One of the main advantages of this formalism is that parsing an abstract categorial grammar solves both the problem of text analysis and that of text generation. Efficient parsing algorithms have been found for abstract categorial grammars of linear and quasi-linear terms, whereas the parsing problem is non-elementary in its most general form. We propose to study classes of terms for which parsing remains solvable in polynomial time. These results rest mainly on two typing theorems: the coherence theorem, stating that a given λ-term is the unique inhabitant of a certain typing; and the subject expansion theorem, stating that two β-equivalent terms inhabit the same typings. To carry out this study, we use an abstract representation of λ-terms and typings in the form of games. In particular, we rely heavily on this notion to prove the coherence theorem for new families of λ-terms and typings. Thanks to these results, we show that it is possible to construct, in a direct way, a recognizer in the Datalog language for abstract categorial grammars of quasi-affine λ-terms.
APA, Harvard, Vancouver, ISO, and other styles
42

Piloni, Diego. "Representación semántica de lenguaje natural en el dominio de fórmulas lógicas." Bachelor's thesis, 2018. http://hdl.handle.net/11086/10761.

Full text
Abstract:
This undergraduate thesis starts from the assumption that learning the symbolic language of formal logic is problematic (Oller, 2006). Some aspects of this difficulty are analysed by the authors of "Language, Proof and Logic" using empirical data (Barker-Plummer, 2008). Although there are didactic proposals with software support for learning logic (Barrionuevo, 2008), most focus either on deductive systems or on the formal semantics of logical formulas. It appears, therefore, that digital tools are lacking that make transparent the difficulties inherent in translating natural languages into symbolic language. The main goal of this work is to develop a didactic software tool that facilitates the acquisition of the formal language of first-order logic.
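A tool of this kind might, for instance, map schematic sentence patterns to first-order formulas. The following sketch is purely hypothetical (a single hard-coded pattern of our own), meant only to show the flavour of such a translation:

```python
def all_are(a, b):
    """Translate the schematic sentence 'all A are B' into a
    first-order formula over unary predicates A and B."""
    return f"∀x ({a}(x) → {b}(x))"

assert all_are("Man", "Mortal") == "∀x (Man(x) → Mortal(x))"
```

A real didactic tool would of course need a grammar covering many such patterns, plus feedback explaining why a student's formula differs from the expected one.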
APA, Harvard, Vancouver, ISO, and other styles
43

Johnson, Donald Gordon. "An improved theorem prover by using the semantics of structure." 1985. http://hdl.handle.net/2097/27463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ras, Charl John. "Automatically presentable structures." Thesis, 2012. http://hdl.handle.net/10210/6838.

Full text
Abstract:
M.Sc.
In this thesis we study some of the properties of a class of structures called automatic structures. Automatic structures are structures that can be encoded (in some defined way) into a set of regular languages. This encoding allows one to prove many interesting properties about automatic structures, including decidability results.
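A standard example of such an encoding (our illustration, not from the thesis): the graph of addition on the natural numbers is recognised by a finite automaton that reads the binary digits of the triple (x, y, z) in lockstep, least significant bit first, keeping only the carry as its state:

```python
def add_automaton(x, y, z):
    """Accept exactly the triples with x + y = z, simulating the
    two-state (carry 0 / carry 1) synchronous automaton on the
    binary digit sequences of x, y and z."""
    carry = 0
    while x or y or z or carry:
        a, b, c = x & 1, y & 1, z & 1
        if (a + b + carry) % 2 != c:
            return False               # digit mismatch: reject
        carry = (a + b + carry) // 2
        x, y, z = x >> 1, y >> 1, z >> 1
    return True                        # all digits consumed, no carry left

assert add_automaton(5, 7, 12)
assert not add_automaton(5, 7, 13)
```

Because the relation is regular in this sense, first-order questions about (N, +) can be answered by automata constructions, which is the engine behind the decidability results mentioned above.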
APA, Harvard, Vancouver, ISO, and other styles
45

Gallasch, Guy Edward. "Parametric verification of the class of stop-and-wait protocols." 2007. http://arrow.unisa.edu.au:8081/1959.8/29552.

Full text
Abstract:
This thesis investigates a method for tackling the verification of parametric systems, systems whose behaviour may depend on the value of one or more parameters. The range of allowable values for such parameters may, in general, be large or unknown. This results in a large number of instances of a system that require verification, one instance for each allowable combination of parameter values. When one or more parameters are unbounded, the family of systems that require verification becomes infinite. Computer protocols are one example of such parametric systems. They may have parameters such as the maximum sequence number or the maximum number of retransmissions. Traditional protocol verification approaches usually only analyse and verify properties of a parametric system for a small range of parameter values. It is impossible to verify in this way every concrete instance of an infinite family of systems. Also, the number of reachable states tends to increase dramatically with increasing parameter values, and thus the well-known state explosion phenomenon also limits the range of parameters for which the system can be analysed. In this thesis, we concentrate on the parametric verification of the Stop-and-Wait Protocol (SWP), an elementary flow control protocol. We have used Coloured Petri Nets (CPNs) to model the SWP, operating over an in-order but lossy medium, with two unbounded parameters: the maximum sequence number; and the maximum number of retransmissions. A novel method has been used for symbolically representing the parametric reachability graph of our parametric SWP CPN model. This parametric reachability graph captures exactly the infinite family of reachability graphs resulting from the infinite family of SWP CPNs. The parametric reachability graph is represented symbolically as a set of closed-form algebraic expressions for the nodes and arcs of the reachability graph, expressed in terms of the two parameters.
By analysing the reachability graphs of the SWP CPN model for small parameter values, structural regularities in the reachability graphs were identified and exploited to develop the appropriate algebraic expressions for the parametric reachability graph. These expressions can be analysed and manipulated directly, thus the properties that are verified from these expressions are verified for all instances of the system. Several properties of the SWP that are able to be verified directly from the parametric reachability graph have been identified. These include a proof of the size of the parametric reachability graph in terms of both parameters, absence of deadlocks (undesired terminal states), absence of livelocks (undesirable cycles of behaviour from which the protocol cannot escape), absence of dead transitions (actions that can never occur) and the upper bounds on the content of the underlying communication channel. These are verified from the algebraic expressions and thus hold for all parameter values. Significantly, language analysis is also carried out on the parametric SWP. The parametric reachability graph is translated into a parametric Finite State Automaton (FSA), capturing symbolically the infinite set of protocol languages (i.e. sequences of user observable events) by means of similar algebraic expressions to those of the parametric reachability graph. Standard FSA reduction techniques were applied in a symbolic fashion directly to the parametric FSA, firstly to obtain a deterministic representation of the parametric FSA, then to obtain an equivalent minimised FSA. It was found that the determinisation procedure removed the effect of the maximum number of retransmissions parameter, and the minimisation procedure removed the effect of the maximum sequence number parameter. Conformance of all instances of the SWP over both parameters to its desired service language is proved. 
The development of algebraic expressions to represent the infinite class of Stop-and-Wait Protocols, and the verification of properties (including language analysis) directly from these algebraic expressions, has demonstrated the potential of this method for the verification of more general parametric systems. This thesis provides a significant contribution toward the development of a general parametric verification methodology.
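The protocol under study is itself simple to sketch. The following toy simulation is our own (it bears no resemblance to the thesis's Coloured Petri Net model and assumes, for brevity, lossless acknowledgements): a sender retransmits each frame until it gets through, up to a retransmission bound, alternating a sequence bit:

```python
def stop_and_wait(messages, losses, max_retrans):
    """Deliver messages over a channel that drops the i-th transmission
    when losses[i] is True.  Returns the received list, or None if some
    message exceeds max_retrans retransmissions and the sender gives up."""
    losses = iter(losses)
    received, seq = [], 0
    for msg in messages:
        for _attempt in range(max_retrans + 1):
            if not next(losses, False):   # frame got through
                received.append(msg)      # receiver acknowledges (seq, msg)
                break
        else:
            return None                   # retransmission bound exceeded
        seq ^= 1  # alternate sequence bit (used to discard duplicate frames)
    return received

assert stop_and_wait(["a", "b"], [True, False, False], 2) == ["a", "b"]
assert stop_and_wait(["a"], [True, True, True], 2) is None
```

The two parameters of the thesis (maximum sequence number and maximum number of retransmissions) appear here in miniature as the modulus of `seq` and the bound `max_retrans`; the thesis verifies the protocol for all values of both at once.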
APA, Harvard, Vancouver, ISO, and other styles
46

Bollen, A. W. "Conditional logic programming." Phd thesis, 1988. http://hdl.handle.net/1885/136790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Hai Feng. "Model checking concurrent object oriented scoop programs /." 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR38783.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Computer Science.
Typescript. Includes bibliographical references (leaves 153-157). Also available on the Internet.
APA, Harvard, Vancouver, ISO, and other styles
48

Revenko, Artem. "Automatic Construction of Implicative Theories for Mathematical Domains." Doctoral thesis, 2015. https://tud.qucosa.de/id/qucosa%3A29254.

Full text
Abstract:
Implication is a logical connective corresponding to the rule of causality "if ... then ...". Implications allow one to organize knowledge of a field of application in an intuitive and convenient manner. This thesis explores possibilities of automatically constructing all valid implications (the implicative theory) of a given field. As the main method for constructing implicative theories, a robust active learning technique called Attribute Exploration was used. Attribute Exploration extracts knowledge from existing data and offers the possibility of refining this knowledge by providing counter-examples. Within the project, implicative theories were constructed automatically for two mathematical domains: algebraic identities and parametrically expressible functions. This goal was achieved thanks both to the pragmatic approach of Attribute Exploration and to discoveries in the respective fields of application. The two diverse application fields favourably illustrate different possible usage patterns of Attribute Exploration for the automatic construction of implicative theories.
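The basic operation underlying implicative theories is the closure of an attribute set under a list of implications. A minimal sketch of this standard algorithm (our own code, not from the thesis):

```python
def closure(attrs, implications):
    """Smallest superset of attrs respecting every implication,
    given as (premise, conclusion) pairs of attribute sets."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # fire the implication if its premise holds but its
            # conclusion is not yet fully present
            if premise <= closed and not conclusion <= closed:
                closed |= set(conclusion)
                changed = True
    return closed

imps = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
assert closure({"a"}, imps) == {"a", "b"}
assert closure({"a", "c"}, imps) == {"a", "b", "c", "d"}
```

Attribute Exploration repeatedly asks whether a candidate implication holds in the domain; a counter-example refutes it and enlarges the data, and the closure operator above is what makes the resulting implication set usable for inference.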
APA, Harvard, Vancouver, ISO, and other styles
49

Diemert, Simon. "A mathematical basis for medication prescriptions and adherence." Thesis, 2017. https://dspace.library.uvic.ca//handle/1828/8458.

Full text
Abstract:
Medication prescriptions constitute an important type of clinical intervention. Medication adherence is the degree to which a patient consumes their medication as agreed upon with a prescriber. Despite many years of research, medication non-adherence continues to be a problem of note, partially due to its multi-faceted nature. Numerous interventions have attempted to improve adherence, but none have emerged as definitive. A significant sub-problem is the lack of consensus regarding definitions and measurement of adherence. Several recent reviews indicate that discrepancies in definitions, measurement techniques, and study methodologies make it impossible to draw strong conclusions via meta-analyses of the literature. Technological interventions aimed at improving adherence have been the subject of ongoing research. Due to the increasing prevalence of the Internet of Things, technology can be used to provide a continuous stream of data regarding a patient's behaviour. To date, several researchers have proposed interventions that leverage data from the Internet of Things; however, none have established an acceptable means of analyzing and acting upon this wealth of data. This thesis introduces a computational definition for adherence that can be used to support continued development of technological adherence interventions. A central part of the proposed definition is a formal language for specifying prescriptions that uses fuzzy set theory to accommodate imprecise concepts commonly found in natural-language medication prescriptions. A prescription specified in this language can be transformed into an evaluation function which can be used to score the adherence of a given medication-taking behaviour. Additionally, the evaluator function is applied to the problem of scheduling medication administrations. A compiler for the proposed language was implemented and had its breadth of expression and clinical accuracy evaluated.
The results indicate that the proposed computational definition of adherence is acceptable as a proof of concept and merits further work.
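The fuzzy-set idea can be sketched as follows (hypothetical names and numbers of ours, not the thesis's specification language): an imprecise phrase such as "take at around 8:00" becomes a triangular membership function that scores each administration time instead of judging it simply on-time or late:

```python
def around(target, tolerance):
    """Triangular fuzzy membership: 1.0 at target, decreasing linearly
    to 0.0 at target +/- tolerance (times in minutes after midnight)."""
    def membership(t):
        return max(0.0, 1.0 - abs(t - target) / tolerance)
    return membership

score = around(480, 60)   # "around 8:00", full credit at 8:00, none at +/- 1h
assert score(480) == 1.0  # exactly on time
assert score(510) == 0.5  # half an hour off
assert score(540) == 0.0  # a full hour off scores zero
```

Aggregating such per-dose scores over a behaviour record then yields a graded adherence measure rather than a binary adherent/non-adherent label.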
APA, Harvard, Vancouver, ISO, and other styles
50

(9896033), R. Hamdan El Madi. "In what ways can a methodology for standardization of many different logic control languages be implemented?" Thesis, 2012. https://figshare.com/articles/thesis/In_what_ways_can_a_methodology_for_standardization_of_many_different_logic_control_languages_be_implemented_/13459178.

Full text
Abstract:
"This research is about a proposal to pursue a study concerns a method of an agreed-upon standard for logic control programs in the context of discrete part manufacturing systems. Therefore, the aim of this research is to find a standard logic control language that is suitable for all most important logic control programs. The purpose is not to eliminate the various existing programming languages, but to work around them. The objective of the researcher is to make these various languages convert into the new standard language ... By situating this research at the Industrial Electronics Department at Chullora campus, it will be contextualized within the education system in TAFE (NSW)"--Abstract.
APA, Harvard, Vancouver, ISO, and other styles
