
Dissertations / Theses on the topic 'Graph theory. Formal languages'


Consult the top 50 dissertations / theses for your research on the topic 'Graph theory. Formal languages.'


1

Reutter, Juan L. "Graph patterns : structure, query answering and applications in schema mappings and formal language theory." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8931.

Full text
Abstract:
Graph data appears in a variety of application domains, and many uses of it, such as querying, matching, and transforming data, naturally result in incompletely specified graph data, i.e., graph patterns. Queries need to be posed against such data, but techniques for querying patterns are generally lacking, and even simple properties of graph patterns, such as the languages needed to specify them, are not well understood. In this dissertation we present several contributions in the study of graph patterns. We analyze how to query them and how to use them as queries. We also analyze some of their applications in two different contexts: schema mapping specification and data exchange for graph databases, and formal language theory. We first identify key features of patterns, such as node and label variables and edges specified by regular expressions, and define a classification of patterns based on them. Next we study how to answer standard graph queries over graph patterns, and give precise characterizations of both data and combined complexity for each class of patterns. If complexity is high, we do further analysis of features that lead to intractability, as well as lower-complexity restrictions that guarantee tractability. We then turn to the study of schema mappings for graph databases. As for relational and XML databases, our mapping languages are based on patterns. They subsume all previously considered mapping languages for graph databases, and are capable of expressing many data exchange scenarios in the graph database context. We study the problems of materializing solutions and query answering for data exchange under these mappings, analyze their complexity, and identify relevant classes of mappings and queries for which these problems can be solved efficiently. We also introduce a new model of automata that is based on graph patterns, and define two modes of acceptance for them.
We show that this model has applications not only in graph databases but in several other contexts. We study the basic properties of such automata, and the key computational tasks associated with them.
APA, Harvard, Vancouver, ISO, and other styles
2

Dorman, Andrei. "Concurrency in Interaction Nets and Graph Rewriting." Phd thesis, Université Paris-Nord - Paris XIII, 2013. http://tel.archives-ouvertes.fr/tel-00937224.

Full text
Abstract:
This work is an in-depth study of concurrency in the non-deterministic extensions of Lafont's interaction nets (a graphical language that represents functional computation). These extensions come in three kinds: multirule, multiport, and multiwire nets, whose combinations yield seven types of nets. A first task is to determine a good semantics for comparing these extensions. We define a structural operational semantics on nets based on known graph rewriting techniques, in particular the "double-pushout with borrowed contexts" approach. From this method we derive a labelling system for transitions, given by derivation rules in the style of process calculi, the main paradigm for studying concurrent computational systems. We further define an observational semantics on nets based on a parametric notion of barb, which finally makes it possible to give a precise notion of translation between systems. One extension is considered more expressive than another if every language of the latter can be translated into a language of the former. This allows us to classify the extensions hierarchically into three groups according to the possibility of translating one net system into another. From strongest to weakest: nets containing multiports; then those containing multiwires; finally, multirule nets. This yields a universal language for nets whose study offers a fresh view of the fundamental building blocks of concurrency.
3

Kwon, Ky-Sang. "Multi-layer syntactical model transformation for model based systems engineering." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42835.

Full text
Abstract:
This dissertation develops a new model transformation approach that supports engineering model integration, which is essential to support contemporary interdisciplinary system design processes. We extend traditional model transformation, which has been primarily used for software engineering, to enable model-based systems engineering (MBSE) so that the model transformation can handle more general engineering models. We identify two issues that arise when applying the traditional model transformation to general engineering modeling domains. The first is instance data integration: the traditional model transformation theory does not deal with instance data, which is essential for executing engineering models in engineering tools. The second is syntactical inconsistency: various engineering tools represent engineering models in a proprietary syntax. However, the traditional model transformation cannot handle this syntactic diversity. In order to address these two issues, we propose a new multi-layer syntactical model transformation approach. For the instance integration issue, this approach generates model transformation rules for instance data from the result of a model transformation that is developed for user model integration, which is the normal purpose of traditional model transformation. For the syntactical inconsistency issue, we introduce the concept of the complete meta-model for defining how to represent a model syntactically as well as semantically. Our approach addresses the syntactical inconsistency issue by generating necessary complete meta-models using a special type of model transformation.
4

Ngô, Van Chan. "Formal verification of a synchronous data-flow compiler : from Signal to C." Phd thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01067477.

Full text
Abstract:
Synchronous languages such as Signal, Lustre and Esterel are dedicated to designing safety-critical systems. Their compilers are large and complicated programs that may be incorrect in some contexts and may silently produce faulty compiled code from correct source programs. Such faulty compiled code can invalidate safety properties that were guaranteed on the source programs by formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between them expressing that the semantics of the source program is preserved in the compiled code.
5

Diener, Glendon. "Formal languages in music theory." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59610.

Full text
Abstract:
In this paper, the mathematical theory of languages is used to investigate and develop computer systems for music analysis, composition, and performance. Four prominent research projects in the field are critically reviewed. An original grammar-type for the computer representation of music is introduced, and a computer system for music composition and performance based on that grammar is described. A user's manual for the system is provided as an appendix.
6

Duboc, Christine. "Commutations dans les monoïdes libres : un cadre théorique pour l'étude du parallélisme." Rouen, 1986. http://www.theses.fr/1986ROUES003.

Full text
7

Sezinando, Helena Maria da Encarnação. "Formal languages and idempotent semigroups." Thesis, University of St Andrews, 1991. http://hdl.handle.net/10023/13724.

Full text
Abstract:
The structure of the lattice LB of varieties of idempotent semigroups or bands (as universal algebras) was determined by Birjukov, Fennemore and Gerhard. Wismath determined the structure of a related lattice: the lattice LBM of varieties of band monoids. In the first two parts we study several questions about these varieties. In Part I we compute the cardinalities of the Green classes of the free objects in each variety of LB [LBM]. These cardinalities constitute a useful piece of information in the study of several questions about these varieties, and some of the conclusions obtained here are used in Parts II and III. Part II concerns expansions of bands [band monoids]. More precisely, we compute here the cut-down to generators of the Rhodes expansions of the free objects in the varieties of LB. We define the Rhodes expansion of a monoid and its cut-down to generators, and we compute the cut-down to generators of the Rhodes expansions of the free objects in the varieties of LBM. In Part III we deal with Eilenberg varieties of band monoids. The last chapter is particularly concerned with the description of the varieties of languages corresponding to these varieties.
8

Emerson, Guy Edward Toh. "Functional distributional semantics : learning linguistically informed representations from a precisely annotated corpus." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/284882.

Full text
Abstract:
The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors - but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about 'truth', a crucial concept in logic and formal semantics. In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a 'function', mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations. I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role. I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models.
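The thesis's core move, representing a word's meaning not as a vector but as a function from entities to probabilities of truth, can be illustrated with the simplest classifier that fits the interface. The sketch below is only an illustration of that interface, not the probabilistic graphical model of the thesis; the features and weights are invented:

```python
import math

def make_predicate(weights, bias):
    """A word meaning as a classifier: map an entity's feature vector to
    the probability that the word is true of that entity (logistic model)."""
    def prob_true(entity):
        z = sum(w * x for w, x in zip(weights, entity)) + bias
        return 1.0 / (1.0 + math.exp(-z))
    return prob_true

# Invented toy features: [furriness, number_of_wheels]
cat_entity = [0.9, 0.0]
car_entity = [0.0, 4.0]
furry = make_predicate([4.0, -1.0], -1.0)   # invented weights for "furry"
print(furry(cat_entity) > furry(car_entity))  # True
```

Because each predicate returns a probability of truth rather than a similarity score, it can be interpreted both as a machine-learned classifier and as a truth-conditional function in the formal-semantic sense.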
9

Akkara, Pinto. "Applying DNA Self-assembly in Formal Language Theory." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368014016.

Full text
10

Taha, Mohamed A. M. S. "Regulated rewriting in formal language theory." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/910.

Full text
11

Péladeau, Pierre. "Some combinatorial and algebraic problems related to subwords." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65409.

Full text
12

Ada, Anil. "Non-deterministic communication complexity of regular languages." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112367.

Full text
Abstract:
The notion of communication complexity was introduced by Yao in his seminal paper [Yao79]. In [BFS86], Babai, Frankl and Simon developed a rich structure of communication complexity classes to understand the relationships between various models of communication complexity. This made it apparent that communication complexity is a self-contained mini-world within complexity theory. In this thesis, we study the place of regular languages within this mini-world. In particular, we are interested in the non-deterministic communication complexity of regular languages.
We show that a regular language has either O(1) or O(log n) non-deterministic complexity. We obtain several linear lower bound results which cover a wide range of regular languages having linear non-deterministic complexity. These lower bound results also imply a result in semigroup theory: we obtain sufficient conditions for not being in the positive variety Pol(Com).
To obtain our results, we use algebraic techniques. In the study of regular languages, the algebraic point of view pioneered by Eilenberg ([Eil74]) has led to many interesting results. Viewing a semigroup as a computational device that recognizes languages has proven to be prolific from both the semigroup-theoretic and the formal-language perspectives. In this thesis, we provide further instances of such mutualism.
13

Fransson, Tobias. "Simulators for formal languages, automata and theory of computation with focus on JFLAP." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-18351.

Full text
Abstract:
This report discusses simulators in automata theory and which one is best suited for laboratory assignments. Currently, the Formal Languages, Automata and Theory of Computation course (FABER) at Mälardalen University uses the JFLAP simulator for extra exercises. To see whether any other simulators would be useful, either alongside JFLAP or standalone, tests were made with nine programs that can graphically simulate automata and formal languages. This thesis work started with an overview of the simulators currently available. After the reviews, it became clear to the author that JFLAP is the best choice in the majority of cases. JFLAP is also the most popular simulator in automata theory courses worldwide. To support the use of JFLAP in the course, a manual and course assignments were created to help students get started with JFLAP. The assignments are expected to replace the current material in the FABER course and to help the uninitiated user get more out of JFLAP.
14

Lai, Catherine. "A formal framework for linguistic tree query /." Connect to thesis, 2005. http://eprints.unimelb.edu.au/archive/00001594.

Full text
15

Ibarra, Louis Walter. "Dynamic algorithms for chordal and interval graphs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58573.pdf.

Full text
16

Almeida, João Marcos de 1974. "Logics of Formal Inconsistency." Phd thesis, Instituições portuguesas -- UTL-Universidade Técnica de Lisboa -- IST-Instituto Superior Técnico -- -Departamento de Matemática, 2005. http://dited.bn.pt:80/29635.

Full text
Abstract:
According to the classical consistency presupposition, contradictions have an explosive character: Whenever they are present in a theory, anything goes, and no sensible reasoning can thus take place. A logic is paraconsistent if it disallows such presupposition, and allows instead for some inconsistent yet non-trivial theories to make perfect sense. The Logics of Formal Inconsistency, LFIs, form a particularly expressive class of paraconsistent logics in which the metatheoretical notion of consistency can be internalized at the object-language level. As a consequence, the LFIs are able to recapture consistent reasoning by the addition of appropriate consistency assumptions. The present monograph introduces the LFIs and provides several illustrations of them and of their properties, showing that such logics constitute in fact the majority of interesting paraconsistent systems in the literature. Several ways of performing the recapture of consistent reasoning inside such inconsistent systems are also illustrated. In each case, interpretations in terms of many-valued, possible-translations, or modal semantics are provided, and the problems related to providing algebraic counterparts to such logics are surveyed. A formal abstract approach is proposed to all related definitions and an extended investigation is made into the logical principles and the positive and negative properties of negation.
17

Foufa, Aouaouche Fazileit. "Some results on systolic tree automata as acceptors." Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/8167.

Full text
18

Bennett, Daniel. "On plausible counterexamples to Lehnert's conjecture." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/15631.

Full text
Abstract:
A group whose co-word problem is a context-free language is called coCF. Lehnert's conjecture states that a group G is coCF if and only if G embeds as a finitely generated subgroup of R. Thompson's group V. In this thesis we explore a class of groups, Faug, proposed by Berns-Zieze, Fry, Gillings, Hoganson, and Mathews to contain potential counterexamples to Lehnert's conjecture. We create infinite and finite presentations for such groups and go on to prove that a certain subclass of Faug consists of groups that do embed into V. By a theorem of Anisimov, a group has a regular word problem if and only if it is finite. It is also known that a group G is finite if and only if there exists an embedding of G into V such that its natural action on C₂ := {0,1}^ω is free on the whole space. We show that the class of groups with a context-free word problem, the class of CF groups, is precisely the class of finitely generated demonstrable groups for V. A demonstrable group for V is a group G which is isomorphic to a subgroup of V whose natural action on C₂ acts freely on an open subset. Thus our result extends the correspondence between language-theoretic properties of groups and dynamical properties of subgroups of V. Additionally, our result shows that the final condition of the four known closure properties of the class of coCF groups also holds for the set of finitely generated subgroups of V.
19

Schmid, Markus L. "On the membership problem for pattern languages and related topics." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10304.

Full text
Abstract:
In this thesis, we investigate the complexity of the membership problem for pattern languages. A pattern is a string over the union of the alphabets A and X, where X := {x_1, x_2, x_3, ...} is a countable set of variables and A is a finite alphabet containing terminals (e.g., A := {a, b, c, d}). Every pattern, e.g., p := x_1 x_2 a b x_2 b x_1 c x_2, describes a pattern language, i.e., the set of all words that can be obtained by uniformly substituting the variables in the pattern by arbitrary strings over A. Hence, u := cacaaabaabcaccaa is a word of the pattern language of p, since substituting cac for x_1 and aa for x_2 yields u. On the other hand, there is no way to obtain the word u' := bbbababbacaaba by substituting the occurrences of x_1 and x_2 in p by words over A. The problem to decide for a given pattern q and a given word w whether or not w is in the pattern language of q is called the membership problem for pattern languages. Consequently, (p, u) is a positive instance and (p, u') is a negative instance of the membership problem for pattern languages. For the unrestricted case, i.e., for arbitrary patterns and words, the membership problem is NP-complete. In this thesis, we identify classes of patterns for which the membership problem can be solved efficiently. Our first main result in this regard is that the variable distance, i.e., the maximum number of different variables that separate two consecutive occurrences of the same variable, substantially contributes to the complexity of the membership problem for pattern languages. More precisely, for every class of patterns with a bounded variable distance the membership problem can be solved efficiently. 
The second main result is that the same holds for every class of patterns with a bounded scope coincidence degree, where the scope coincidence degree is the maximum number of intervals that cover a common position in the pattern, where each interval is given by the leftmost and rightmost occurrence of a variable in the pattern. The proof of our first main result is based on automata theory. More precisely, we introduce a new automata model that is used as an algorithmic framework in order to show that the membership problem for pattern languages can be solved in time that is exponential only in the variable distance of the corresponding pattern. We then take a closer look at this automata model and subject it to a sound theoretical analysis. The second main result is obtained in a completely different way. We encode patterns and words as relational structures and we then reduce the membership problem for pattern languages to the homomorphism problem of relational structures, which allows us to exploit the concept of the treewidth. This approach turns out to be successful, and we show that it has potential to identify further classes of patterns with a polynomial time membership problem. Furthermore, we take a closer look at two aspects of pattern languages that are indirectly related to the membership problem. Firstly, we investigate the phenomenon that patterns can describe regular or context-free languages in an unexpected way, which implies that their membership problem can be solved efficiently. In this regard, we present several sufficient conditions and necessary conditions for the regularity and context-freeness of pattern languages. Secondly, we compare pattern languages with languages given by so-called extended regular expressions with backreferences (REGEX).
The membership problem for REGEX languages is very important in practice and since REGEX are similar to pattern languages, it might be possible to improve algorithms for the membership problem for REGEX languages by investigating their relationship to patterns. In this regard, we investigate how patterns can be extended in order to describe large classes of REGEX languages.
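The membership problem described in this abstract can be sketched as a naive backtracking search. This is not the thesis's algorithm (which exploits bounded variable distance and treewidth); it is a minimal illustration of the problem itself, using the non-erasing convention that variables are substituted by non-empty words, and the abstract's own example pattern:

```python
def match(pattern, word):
    """Naive membership test for (non-erasing) pattern languages.

    pattern: list of symbols; names starting with 'x' are variables,
    everything else is a terminal letter. Variables are substituted
    uniformly by non-empty words. The general problem is NP-complete,
    so exponential backtracking is expected for arbitrary patterns.
    """
    def go(i, j, binding):
        if i == len(pattern):                      # pattern exhausted:
            return j == len(word)                  # word must be, too
        sym = pattern[i]
        if not sym.startswith('x'):                # terminal: literal match
            return j < len(word) and word[j] == sym and go(i + 1, j + 1, binding)
        if sym in binding:                         # bound variable: repeat value
            val = binding[sym]
            return word.startswith(val, j) and go(i + 1, j + len(val), binding)
        for k in range(j + 1, len(word) + 1):      # free variable: try every
            binding[sym] = word[j:k]               # non-empty substitution
            if go(i + 1, k, binding):
                return True
            del binding[sym]
        return False
    return go(0, 0, {})

p = "x1 x2 a b x2 b x1 c x2".split()
print(match(p, "cacaaabaabcaccaa"))  # True  (x1 -> cac, x2 -> aa)
print(match(p, "bbbababbacaaba"))    # False
```

The two calls reproduce the positive and negative instances from the abstract; the thesis's contribution is showing when this exponential search can be replaced by an efficient algorithm.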
20

Rahm, Ludwig. "Generating functions and regular languages of walks with modular restrictions in graphs." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138117.

Full text
Abstract:
This thesis examines the problem of counting and describing walks in graphs, in particular walks with modular restrictions on how many times they visit each vertex. For the special cases of the path graph, the cycle graph, the grid graph and the cylinder graph, generating functions and regular languages for their walks, with and without modular restrictions, are constructed. At the end of the thesis, a theorem is proved that connects the generating function for walks in a graph to the generating function for walks in a covering graph.
21

Ericson, Petter. "Complexity and expressiveness for formal structures in Natural Language Processing." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-135014.

Full text
Abstract:
The formalized and algorithmic study of human language within the field of Natural Language Processing (NLP) has motivated much theoretical work in the related field of formal languages, in particular the subfields of grammar and automata theory. Motivated and informed by NLP, the papers in this thesis explore the connections between expressibility – that is, the ability for a formal system to define complex sets of objects – and algorithmic complexity – that is, the varying amount of effort required to analyse and utilise such systems. Our research studies formal systems working not just on strings, but on more complex structures such as trees and graphs, in particular syntax trees and semantic graphs. The field of mildly context-sensitive languages concerns attempts to find a useful class of formal languages between the context-free and context-sensitive. We study formalisms defining two candidates for this class; tree-adjoining languages and the languages defined by linear context-free rewriting systems. For the former, we specifically investigate the tree languages, and define a subclass and tree automaton with linear parsing complexity. For the latter, we use the framework of parameterized complexity theory to investigate more deeply the related parsing problems, as well as the connections between various formalisms defining the class. The field of semantic modelling aims towards formally and accurately modelling not only the syntax of natural language statements, but also the meaning. In particular, recent work in semantic graphs motivates our study of graph grammars and graph parsing. To the best of our knowledge, the formalism presented in Paper III of this thesis is the first graph grammar where the uniform parsing problem has polynomial parsing complexity, even for input graphs of unbounded node degree.
22

Gaconnet, Christopher James Tarau Paul. "Force-directed graph drawing and aesthetics measurement in a non-strict pure functional programming language." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/ark:/67531/metadc12125.

Full text
23

Renata, Vaderna. "Algoritmi i jezik za podršku automatskom raspoređivanju elemenata dijagrama." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107524&source=NDLTD&language=en.

Full text
Abstract:
This thesis presents research aimed at the problem of automatically laying out the elements of a diagram. An analysis of existing solutions showed that there is room for improvement, especially regarding the variety of available algorithms; moreover, none of the existing solutions offers the possibility of automatically choosing an appropriate graph layout algorithm. Within the research, a large number of different algorithms for graph drawing and analysis were studied, implemented, and, in some cases, enhanced. A method for automatically choosing the best available layout algorithm based on the properties of a graph was defined. Additionally, a domain-specific language for specifying a graph's layout was designed, which helps users of graphical editors choose a layout algorithm and lets programmers invoke a desired algorithm with less code.
24

Gaconnet, Christopher James. "Force-Directed Graph Drawing and Aesthetics Measurement in a Non-Strict Pure Functional Programming Language." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12125/.

Full text
Abstract:
Non-strict pure functional programming often requires redesigning algorithms and data structures to work more effectively under new constraints of non-strict evaluation and immutable state. Graph drawing algorithms, while numerous and broadly studied, have no presence in the non-strict pure functional programming model. Additionally, there is currently no freely licensed standalone toolkit used to quantitatively analyze aesthetics of graph drawings. This thesis addresses two previously unexplored questions. Can a force-directed graph drawing algorithm be implemented in a non-strict functional language, such as Haskell, and still be practically usable? Can an easily extensible aesthetic measuring tool be implemented in a language such as Haskell and still be practically usable? The focus of the thesis is on implementing one of the simplest force-directed algorithms, that of Fruchterman and Reingold, and comparing its resulting aesthetics to those of a well-known C++ implementation of the same algorithm.
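The Fruchterman-Reingold scheme the thesis implements can be summarised in a few dozen lines. The sketch below is an imperative Python rendition for orientation only, not the thesis's Haskell implementation: repulsion k²/d between all pairs, attraction d²/k along edges, and a cooling temperature cap (constants are illustrative; no frame clamping or grid optimisation):

```python
import math
import random

def fruchterman_reingold(nodes, edges, iters=50, width=1.0, height=1.0):
    """Minimal Fruchterman-Reingold layout sketch."""
    k = math.sqrt(width * height / len(nodes))          # ideal edge length
    pos = {v: [random.uniform(0, width), random.uniform(0, height)]
           for v in nodes}
    t = width / 10                                      # initial temperature
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                                 # repulsion: all pairs
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = max(math.hypot(dx, dy), 1e-9)
                f = k * k / d
                disp[u][0] += dx / d * f
                disp[u][1] += dy / d * f
        for u, v in edges:                              # attraction: edges only
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = max(math.hypot(dx, dy), 1e-9)
            f = d * d / k
            disp[u][0] -= dx / d * f
            disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f
            disp[v][1] += dy / d * f
        for v in nodes:                                 # move, capped by temperature
            dx, dy = disp[v]
            d = max(math.hypot(dx, dy), 1e-9)
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.95                                       # cool down
    return pos

layout = fruchterman_reingold("abcd", [("a", "b"), ("b", "c"),
                                       ("c", "d"), ("d", "a")])
```

The mutable `pos` and `disp` dictionaries updated in place are precisely the kind of state that must be redesigned for a non-strict, immutable setting, which is the thesis's point of departure.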
25

Nelson, Andrew P. "Funqual: User-Defined, Statically-Checked Call Graph Constraints in C++." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1848.

Full text
Abstract:
Static analysis tools can aid programmers by reporting potential programming mistakes prior to the execution of a program. Funqual is a static analysis tool that reads C++17 code "in the wild" and checks that the function call graph follows a set of rules which can be defined by the user. This sort of analysis can help the programmer to avoid errors such as accidentally calling blocking functions in time-sensitive contexts or accidentally allocating memory in heap-sensitive environments. To accomplish this, we create a type system whereby functions can be given user-defined type qualifiers and where users can define their own restrictions on the call graph based on these type qualifiers. We demonstrate that this tool, when used with hand-crafted rules, can catch certain types of errors which commonly occur in the wild. We claim that this tool can be used in a production setting to catch certain kinds of errors in code before that code is even run.
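The core idea, checking a user-defined reachability rule over a call graph of qualified functions, can be sketched independently of C++ and of Funqual's actual rule syntax. The qualifier names, functions, and rule shape below are invented for illustration:

```python
def violations(calls, qualifiers, rule):
    """Report breaches of a 'no reachable call' rule over a call graph.

    calls: dict mapping each function to the list of functions it calls.
    qualifiers: dict mapping each function to its set of type qualifiers.
    rule: a pair (src_qual, dst_qual) forbidding any call path from a
    src_qual-qualified function to a dst_qual-qualified one.
    """
    src_q, dst_q = rule
    bad = []
    for f, quals in qualifiers.items():
        if src_q not in quals:
            continue
        seen, stack = set(), [f]                 # DFS over the call graph
        while stack:
            g = stack.pop()
            if g in seen:
                continue
            seen.add(g)
            if g != f and dst_q in qualifiers.get(g, set()):
                bad.append((f, g))
            stack.extend(calls.get(g, ()))
    return bad

calls = {"isr": ["log"], "log": ["malloc"], "main": ["malloc"]}
quals = {"isr": {"realtime"}, "log": set(), "main": set(),
         "malloc": {"blocking"}}
print(violations(calls, quals, ("realtime", "blocking")))
# [('isr', 'malloc')] -- isr reaches the blocking call through log
```

The real tool extracts `calls` from C++17 sources and attaches qualifiers via annotations, but the check itself reduces to this kind of reachability query.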
26

Filho, Reginaldo Inojosa da Silva. "Uma nova formulação algébrica para o autômato finito adaptativo de segunda ordem aplicada a um modelo de inferência indutiva." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05092012-163421/.

Full text
Abstract:
O objetivo deste trabalho é apresentar o modelo dos autômatos adaptativos de segunda ordem e mostrar a forte conexão desse modelo com o aprendizado indutivo no limite. Tal modelo é definido com a utilização de um conjunto de transformações sobre autômatos finitos não - determinísticos e a conexão com o aprendizado no limite á estabelecida usando o conceito de mutação composta, onde uma hipótese inicial dá início ao processo de aprendizagem, produzindo, após uma sequência de transformações sofridas por essa primeira hipótese, um modelo final que é o resultado correto do aprendizado. Será apresentada a prova de que um autômato adaptativos de segunda ordem, usado como um aprendiz, pode realizar o processo de aprendizado no limite. O formalismo dos autômatos adaptativos de segunda ordem é desenvolvido sobre o modelo dos autômatos adaptativos de primeira ordem, uma extensão natural do modelo dos autômatos adaptativos clássicos. Embora tenha o mesmo poder computacional, o autômato adaptativo de primeira ordem apresenta uma notação mais simples e rigorosa que o seu antecessor, permitindo derivar novas propriedades. Uma dessas propriedades é justamente sua capacidade de aprendizado. Como consequência, o modelo dos autômatos adaptativos de segunda ordem aumenta a expressividade computacional dos dispositivos adaptativos através da sua notação recursiva, e também através do seu potencial para o uso em aplicações de aprendizado de máquina, ilustrados nesta tese. Uma arquitetura de aprendizado de máquina usando os autômatos adaptativos de segunda ordem é proposto e um modelo de identificação no limite, aplicado em processos de inferência para linguagens livre de contexto, é apresentado.
The purpose of this work is to present the second-order adaptive automaton under an automata-transformation approach and to show the strong connection between this model and learning in the limit. The connection is established through composite adaptive mutations, in which any initial hypothesis can start a learning process and, after a sequence of step-by-step transformations performed by a second-order adaptive automaton, produces a correct final model. We prove that a second-order adaptive automaton, used as a learner, can carry out learning in the limit. The formalism is developed on top of the first-order adaptive automaton, a natural and unified extension of the classical adaptive automaton. Although both formulations, the original and the new one, have the same computational power, the first-order adaptive automaton has a considerably simpler and more rigorous notation than its predecessor, which allows simpler theorem proofs and generalisations, as verified in this work. As a result, the second-order adaptive automaton enhances the computational expressiveness of adaptive devices through its recursive notation, and its suitability for machine learning applications is illustrated here. An architecture for machine learning based on second-order adaptive automata is proposed, and a model of identification in the limit, applied to inference processes for context-free languages, is presented.
APA, Harvard, Vancouver, ISO, and other styles
27

Brunet, Paul. "Algebras of Relations : from algorithms to formal proofs." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1198/document.

Full text
Abstract:
Les algèbres de relations apparaissent naturellement dans de nombreux cadres, en informatique comme en mathématiques. Elles constituent en particulier un formalisme tout à fait adapté à la sémantique des programmes impératifs. Les algèbres de Kleene constituent un point de départ : ces algèbres jouissent de résultats de décidabilités très satisfaisants, et admettent une axiomatisation complète. L'objectif de cette thèse a été d'étendre les résultats connus sur les algèbres de Kleene à des extensions de celles-ci.Nous nous sommes tout d'abord intéressés à une extension connue : les algèbres de Kleene avec converse. La décidabilité de ces algèbres était déjà connue, mais l'algorithme prouvant ce résultat était trop compliqué pour être utilisé en pratique. Nous avons donné un algorithme plus simple, plus efficace, et dont la correction est plus facile à établir. Ceci nous a permis de placer ce problème dans la classe de complexité PSpace-complete.Nous avons ensuite étudié les allégories de Kleene. Sur cette extension, peu de résultats étaient connus. En suivant des résultats sur des algèbres proches, nous avons établi l'équivalence du problème d'égalité dans les allégories de Kleene à l'égalité de certains ensembles de graphes. Nous avons ensuite développé un modèle d'automate original (les automates de Petri), basé sur les réseaux de Petri, et avons établi l'équivalence de notre problème original avec le problème de comparaison de ces automates. Nous avons enfin développé un algorithme pour effectuer cette comparaison dans le cadre restreint des treillis de Kleene sans identité. Cet algorithme utilise un espace exponentiel. Néanmoins, nous avons pu établir que la comparaison d'automates de Petri dans ce cas est ExpSpace-complète. Enfin, nous nous sommes intéressés aux algèbres de Kleene Nominales. Nous avons réalisé que les descriptions existantes de ces algèbres n'étaient pas adaptées à la sémantique relationnelle des programmes. 
Nous les avons donc modifiées pour nos besoins, et ce faisant avons trouvé diverses variations naturelles de ce modèle. Nous avons donc étudié en détails et en Coq les ponts que l'on peut établir entre ces variantes, et entre le modèle “classique” et notre nouvelle version
Algebras of relations appear naturally in many contexts, in computer science as well as in mathematics. They constitute a framework well suited to the semantics of imperative programs. Kleene algebras are a starting point: these algebras enjoy very strong decidability properties and a complete axiomatisation. The goal of this thesis was to export known results from Kleene algebra to some of its extensions. We first considered a known extension: Kleene algebras with converse. Decidability of these algebras was already known, but the algorithm witnessing this result was too complicated to be practical. We proposed a simpler, more efficient algorithm whose correctness is easier to establish, and used it to prove that this problem is PSpace-complete. Then we studied Kleene allegories. Few results were known about this extension. Following results about closely related algebras, we established the equivalence between equality in Kleene allegories and equality of certain sets of graphs. We then developed an original automaton model (so-called Petri automata), based on Petri nets, and proved the equivalence between the original problem and comparing these automata. In the restricted setting of identity-free Kleene lattices, we also provided an algorithm performing this comparison. This algorithm uses exponential space. However, we proved that the problem of comparing Petri automata is ExpSpace-complete. Finally, we studied nominal Kleene algebras. We realised that existing descriptions of these algebras were not suited to the relational semantics of programming languages. We thus modified them accordingly, and doing so uncovered several natural variations of this model. We then studied formally the bridges one can build between these variations, and between the existing model and our new version of it. This study was conducted using the proof assistant Coq.
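The relational operations underlying the thesis, composition, converse, and the Kleene star as reflexive-transitive closure, can be sketched over finite relations. This is a generic illustration, not code from the thesis:

```python
def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def converse(r):
    """The converse relation: swap every pair."""
    return {(b, a) for (a, b) in r}

def star(r, universe):
    """Reflexive-transitive closure of r over a finite carrier set,
    computed as a least fixed point."""
    result = {(x, x) for x in universe} | set(r)
    while True:
        nxt = result | compose(result, result)
        if nxt == result:
            return result
        result = nxt
```

Equality of expressions built from these operations is exactly the kind of question whose decidability and complexity the thesis studies.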
APA, Harvard, Vancouver, ISO, and other styles
28

Caron, Pascal. "Langages rationnels et automates : de la théorie à la programmation." Rouen, 1997. http://www.theses.fr/1997ROUES079.

Full text
Abstract:
This thesis is a starting point for the implementation of a computer algebra system for automata, semigroups and rational languages. It gives a characterisation of the automata produced by Glushkov's algorithm, as well as characterisations of families of testable languages in terms of their minimal automata. The AGL software gathers a set of Maple packages for automata, semigroups and rational languages, in which all the algorithms derived from these characterisations are implemented. This software is a prototype for a computer algebra system dedicated to automata, semigroups and rational languages.
APA, Harvard, Vancouver, ISO, and other styles
29

Tadonki, Claude. "High Performance Computing as a Combination of Machines and Methods and Programming." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00832930.

Full text
Abstract:
High Performance Computing (HPC) aims at providing reasonably fast computing solutions to both scientific and real-life technical problems. Many efforts have indeed been made on the way to powerful supercomputers, in both generic and customized configurations. However, whatever their current and future breathtaking capabilities, supercomputers work by brute force and deterministic steps, while the human mind works by a few strokes of brilliance. Thus, in order to take significant advantage of hardware advances, we need powerful methods to solve problems together with highly skillful programming efforts and relevant frameworks. The advent of multicore architectures is noteworthy in HPC history, because it has brought the underlying concept of multiprocessing into common consideration and has changed the landscape of standard computing. At a larger scale, there is a keen desire to build or host frontline supercomputers. The yearly Top500 ranking nicely illustrates and orchestrates this supercomputer saga. For many years, computers have been falling in price while gaining processing power, often strengthened by specialized accelerator units. We clearly see that what commonly springs to mind when it comes to HPC is computer capability. However, this availability of increasingly fast computers has changed the rules of scientific discovery and has motivated the consideration of challenging applications. Thus, we are routinely at the door of large-scale problems, and most of the time, the speed of calculation by itself is no longer sufficient. Indeed, the real concern of HPC users is the time-to-output. Thus, we need to study each important aspect in the critical path between inputs and outputs, and keep striving to reach the expected level of performance. This is the main concern of the viewpoints and the achievements reported in this book. The document is organized into five chapters articulated around our main contributions.
The first chapter depicts the landscape of supercomputers, comments on the need for tremendous processing speed, and analyses the main trends in supercomputing. The second chapter deals with solving large-scale combinatorial problems through a mixture of continuous and discrete optimization methods; we describe the main generic approaches and present an important framework on which we have been working so far. The third chapter is devoted to the topic of accelerated computing; we discuss the motivations and the issues, and describe three case studies from our contributions. In chapter four, we address the topic of energy minimization in a formal way and present our method based on a mathematical programming approach. Chapter five discusses hybrid supercomputing; we examine technical issues with hierarchical shared memories and illustrate hybrid coding through a large-scale linear algebra implementation on a supercomputer.
APA, Harvard, Vancouver, ISO, and other styles
30

Bhat, Sooraj. "Syntactic foundations for machine learning." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47700.

Full text
Abstract:
Machine learning has risen in importance across science, engineering, and business in recent years. Domain experts have begun to understand how their data analysis problems can be solved in a principled and efficient manner using methods from machine learning, with its simultaneous focus on statistical and computational concerns. Moreover, the data in many of these application domains has exploded in availability and scale, further underscoring the need for algorithms which find patterns and trends quickly and correctly. However, most people actually analyzing data today operate far from the expert level. Available statistical libraries and even textbooks contain only a finite sample of the possibilities afforded by the underlying mathematical principles. Ideally, practitioners should be able to do what machine learning experts can do--employ the fundamental principles to experiment with the practically infinite number of possible customized statistical models as well as alternative algorithms for solving them, including advanced techniques for handling massive datasets. This would lead to more accurate models, the ability in some cases to analyze data that was previously intractable, and, if the experimentation can be greatly accelerated, huge gains in human productivity. Fixing this state of affairs involves mechanizing and automating these statistical and algorithmic principles. This task has received little attention because we lack a suitable syntactic representation that is capable of specifying machine learning problems and solutions, so there is no way to encode the principles in question, which are themselves a mapping between problem and solution. This work focuses on providing the foundational layer for enabling this vision, with the thesis that such a representation is possible. 
We demonstrate the thesis by defining a syntactic representation of machine learning that is expressive, promotes correctness, and enables the mechanization of a wide variety of useful solution principles.
APA, Harvard, Vancouver, ISO, and other styles
31

Berglund, Martin. "Complexities of Parsing in the Presence of Reordering." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-54643.

Full text
Abstract:
The work presented in this thesis discusses various formalisms for representing the addition of order-controlling and order-relaxing mechanisms to existing formal language models. An immediate example is shuffle expressions, which can represent not only all regular languages (a regular expression is a shuffle expression), but also features additional operations that generate arbitrary interleavings of its argument strings. This defines a language class which, on the one hand, does not contain all context-free languages, but, on the other hand contains an infinite number of languages that are not context-free. Shuffle expressions are, however, not themselves the main interest of this thesis. Instead we consider several formalisms that share many of their properties, where some are direct generalisations of shuffle expressions, while others feature very different methods of controlling order. Notably all formalisms that are studied here have a semi-linear Parikh image, are structured so that each derivation step generates at most a constant number of symbols (as opposed to the parallel derivations in for example Lindenmayer systems), feature interesting ordering characteristics, created either by derivation steps that may generate symbols in multiple places at once, or by multiple generating processes that produce output independently in an interleaved fashion, and are all limited enough to make the question of efficient parsing an interesting and reasonable goal. This vague description already hints towards the formalisms considered; the different classes of mildly context-sensitive devices and concurrent finite-state automata. This thesis will first explain and discuss these formalisms, and will then primarily focus on the associated membership problem (or parsing problem). Several parsing results are discussed here, and the papers in the appendix give a more complete picture of these problems and some related ones.
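The shuffle operation mentioned in the abstract can be illustrated directly. This small sketch (ours, not the thesis's) enumerates all interleavings of two words, each word's letter order being preserved:

```python
def shuffle(u, v):
    """All interleavings of u and v that preserve the internal
    order of each word (the shuffle of two strings)."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the first letter of u or the first letter of v comes first.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

For words over disjoint alphabets of lengths m and n there are C(m+n, m) interleavings, e.g. C(4, 2) = 6 for "ab" and "cd".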
APA, Harvard, Vancouver, ISO, and other styles
32

Penczek, Frank. "Static guarantees for coordinated components : a statically typed composition model for stream-processing networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9046.

Full text
Abstract:
Does your program do what it is supposed to be doing? Without running the program providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place only certain errors will be detected: compilers can only second-guess the programmer’s intention. But, type based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question if a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach does not only help programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. 
Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: Components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented and hands-on examples demonstrate how the language is used in practice.
APA, Harvard, Vancouver, ISO, and other styles
33

Slama, Franck. "Automatic generation of proof terms in dependently typed programming languages." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/16451.

Full text
Abstract:
Dependent type theories are a kind of mathematical foundation investigated both for the formalisation of mathematics and for reasoning about programs. They are implemented as the kernel of many proof assistants and programming languages with proofs (Coq, Agda, Idris, Dedukti, Matita, etc). Dependent types allow us to encode elegantly and constructively the universal and existential quantifications of higher-order logics and are therefore well adapted for writing logical propositions and proofs. However, their usage is not limited to the area of pure logic. Indeed, some recent work has shown that they can also be powerful for driving the construction of programs. Using more precise types not only helps to gain confidence about the program built, but it can also help its construction, giving rise to a new style of programming called Type-Driven Development. However, one difficulty with reasoning and programming with dependent types is that proof obligations arise naturally once programs become even moderately sized. For example, implementing an adder for binary numbers indexed over their natural number equivalents naturally leads to proof obligations for equalities of expressions over natural numbers. The need for these equality proofs comes, in intensional type theories (like CIC and ML), from the fact that in a non-empty context, the propositional equality allows us to prove as equal (with the induction principles) terms that are not judgementally equal, which implies that the typechecker can't always obtain equality proofs by reduction. As far as possible, we would like to solve such proof obligations automatically, and we absolutely need it if we want dependent types to be used more broadly, and perhaps one day to become the standard in functional programming. In this thesis, we show one way to automate these proofs by reflection in the dependently typed programming language Idris.
However, the method that we follow is independent from the language being used, and this work could be reproduced in any dependently-typed language. We present an original type-safe reflection mechanism, where reflected terms are indexed by the original Idris expression that they represent, and show how it allows us to easily construct and manipulate proofs. We build a hierarchy of correct-by-construction tactics for proving equivalences in semi-groups, monoids, commutative monoids, groups, commutative groups, semi-rings and rings. We also show how each tactic reuses those from simpler structures, thus avoiding duplication of code and proofs. Finally, and as a conclusion, we discuss the trust we can have in such machine-checked proofs.
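The proof-by-reflection idea the abstract describes, deciding an algebraic equation by normalising reflected syntax, can be caricatured outside a dependently typed setting. The following sketch (ours, in Python rather than Idris, with hypothetical constructors `var`, `unit` and `app`) normalises reflected monoid expressions and compares normal forms:

```python
def flatten(e):
    """Normalise a reflected monoid expression to a flat list of
    variable indices: the unit disappears and append associates away.
    Expressions are tuples: ("unit",), ("var", i), ("app", l, r)."""
    tag = e[0]
    if tag == "unit":
        return []
    if tag == "var":
        return [e[1]]
    return flatten(e[1]) + flatten(e[2])  # ("app", l, r)

def monoid_equal(e1, e2):
    """Two expressions denote equal elements in every monoid iff
    their normal forms coincide."""
    return flatten(e1) == flatten(e2)
```

In the dependently typed version, the normaliser additionally carries a proof that normalisation preserves the denotation, which is what makes the tactic correct by construction.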
APA, Harvard, Vancouver, ISO, and other styles
34

Dahlström, Magnus. "Mängdlära och kardinalitet : Cantors paradis." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-6.

Full text
Abstract:

This paper covers basic set theory and the cardinalities of infinite sets. One of the results is that the line R and the plane R2 contain exactly the same number of points. Because set theory is described with a formal language, the paper has an appendix about formal languages.


Denna uppsats behandlar grundläggande mängdlära och inriktar sig sedan på kardinaliteter för oändliga mängder. Bland de resultat som redovisas finns bland annat resultatet som säger att linjen R och planet R2 innehåller precis lika många punkter. Då mängdläran beskrivs av ett formellt språk så innehåller uppsatsen en bilaga om formella språk.
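The equinumerosity of the line and the plane mentioned in both abstracts is classically shown by interleaving decimal expansions. The sketch below (ours; the full proof must handle non-unique expansions such as 0.0999... = 0.1 with care, which this toy ignores) illustrates the pairing on finite digit strings:

```python
def interleave(x_digits, y_digits):
    """Pair two decimal fraction expansions into one by alternating
    digits: (0.x1x2..., 0.y1y2...) -> 0.x1y1x2y2..."""
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def split(z_digits):
    """Inverse of interleave: even positions give x, odd give y."""
    return z_digits[0::2], z_digits[1::2]
```

Since the pairing is invertible, a point of the plane carries no more information than a point of the line.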

APA, Harvard, Vancouver, ISO, and other styles
35

Degorre, Aldric. "Langages formels : Quelques aspects quantitatifs." Phd thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00665462.

Full text
Abstract:
Formal languages are sequences over a discrete set of symbols called an alphabet. They are often specified by formulas in some logic, by rational expressions, or by discrete automata of various kinds. Current theory is mainly qualitative: its objects are sequences over discrete, non-metric time; acceptance of a sequence by an automaton depends on whether or not an accepting state is visited; and languages are more often compared in terms of inclusion than in terms of quantitative measures. This thesis contributes to the study of these often neglected aspects by presenting fundamental results for three new classes of quantitative problems on formal languages. In the first part, we study a class of scheduling problems that combines the structural aspects associated with task dependencies with the dynamic aspects arising from a stream of requests that keeps arriving during execution. We show that, in this class of problems, some request streams, although admissible in the sense that the requests do not represent more work than the machines can handle, cannot be scheduled with bounded latency. However, we develop a scheduling policy that guarantees a bounded backlog for every admissible request stream, even without knowing it in advance. We show that if the streams are sub-critical, this same policy guarantees bounded latency. In quantitative verification, the states and transitions of a system can be assigned costs, and these can be used to associate mean costs with infinite behaviours. In the second part, we propose to define omega-languages by Boolean queries over mean costs.
Specifications concerning averages, such as "the mean message-loss rate is below a given threshold", are not omega-regular but are expressible in our model. We therefore study the expressiveness and Borel complexity of such specifications. We show that closure under intersection requires multi-dimensional costs. We establish that, in the general case, acceptance conditions bear on the set of accumulation points of the sequence of mean costs of the prefixes of a run, and we give a precise characterisation of such sets. We propose a class of multi-threshold mean-cost languages, comparing the minimal and maximal coordinates of the points of this set with constants, and show that this class is closed under Boolean operations and analysable. Finally, in the last part, we define two measures for a timed language: the volume of its sublanguages of words with a fixed number of events, and the entropy (growth rate), an asymptotic measure for an unbounded number of events. These measures can be used for the quantitative comparison of languages, and the entropy can be viewed as the amount of information per event in a typical word of the timed language. For languages accepted by deterministic timed automata, we give an exact formula for the volume. We then characterise the entropy, using methods of functional analysis, as the logarithm of the spectral radius of a positive integral operator. We establish several methods to compute the entropy: a symbolic one for automata that we call "one-and-a-half clock" automata, and two numerical ones, one using functional-analysis techniques and the other based on discretisation. We give an information-theoretic interpretation of the entropy in terms of Kolmogorov complexity.
APA, Harvard, Vancouver, ISO, and other styles
36

Schwoon, Stefan. "Efficient verification of sequential and concurrent systems." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00927066.

Full text
Abstract:
Formal methods provide means for rigorously specifying the desired behaviour of a hardware or software system, making a precise model of its actual behaviour, and then verifying whether that actual behaviour corresponds to the specification.

My habilitation thesis reports on various contributions to this realm, where my main interest has been in algorithmic aspects. This is motivated by the observation that asymptotic worst-case complexity, often used to characterize the difficulty of algorithmic problems, is only loosely related to the difficulty encountered in solving those problems in practice.

The two main types of system I have been working on are pushdown systems and Petri nets. Both are fundamental notions of computation, and both offer, in my opinion, particularly nice opportunities for combining theory and algorithmics.

Pushdown systems are finite automata equipped with a stack; since the height of the stack is not bounded, they represent a class of infinite-state systems that model programs with (recursive) procedure calls. Moreover, we shall see that specifying authorizations is another, particularly interesting application of pushdown systems.

While pushdown systems are primarily suited to express sequential systems, Petri nets model concurrent systems. My contributions in this area all concern unfoldings. In a nutshell, the unfolding of a net N is an acyclic version of N in which loops have been unrolled. Certain verification problems, such as reachability, have a lower complexity on unfoldings than on general Petri nets.
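The Petri-net model mentioned above can be illustrated with a minimal marking-graph exploration (a generic sketch, not the unfolding algorithms the thesis studies; `canon`, `fire` and `reachable` are our names):

```python
def canon(m):
    """Canonical form of a marking: drop empty places, sort."""
    return tuple(sorted((p, n) for p, n in m.items() if n))

def fire(marking, pre, post):
    """Fire a transition if enabled: `pre` and `post` map places to the
    token counts consumed and produced. Returns the new marking or None."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(m0, transitions, limit=10000):
    """Breadth-first search of the reachability set. This is finite only
    for bounded nets; `limit` guards against an infinite state space."""
    seen = {canon(m0)}
    frontier = [m0]
    while frontier and len(seen) < limit:
        nxt = []
        for m in frontier:
            for pre, post in transitions:
                m2 = fire(m, pre, post)
                if m2 is not None and canon(m2) not in seen:
                    seen.add(canon(m2))
                    nxt.append(m2)
        frontier = nxt
    return seen
```

Unfoldings avoid exactly the interleaving blow-up that this naive marking-graph exploration suffers from on concurrent nets.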
APA, Harvard, Vancouver, ISO, and other styles
37

Ivanov, Sergiu. "On the Power and Universality of Biologically-inspired Models of Computation." Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1012/document.

Full text
Abstract:
Cette thèse adresse les problèmes d'universalité et de complétude computationelle pour plusieurs modèles de calcul inspirés par la biologie. Il s'agit principalement des systèmes d'insertion/effacement, réseaux de processeurs évolutionnaires, ainsi que des systèmes de réécriture de multi-ensembles. Les résultats décrits se classent dans deux catégories majeures : l'étude de la puissance de calcul des opérations d'insertion et d'effacement avec ou sans mécanismes de contrôle, et la construction des systèmes de réécriture de multi-ensembles universels de petite taille. Les opérations d'insertion et d'effacement consistent à rajouter ou supprimer une sous-chaîne dans une chaîne de caractères dans un contexte donné. La motivation pour l'étude de ces opérations vient de la biologie, ainsi que de la linguistique et de la théorie des langages formels. Dans la première partie de ce manuscrit nous examinons des systèmes d'insertion/effacement correspondant à l'édition de l'ARN, un processus qui insère ou supprime des fragments de ces molécules. Une particularité importante de l'édition de l'ARN est que le endroit auquel se font les modifications est déterminé par des séquences de nucléotides se trouvant toujours du même côté du site de modification. En termes d'insertion et d'effacement, ce phénomène se modéliserait par des règles possédant le contexte uniquement d'un seul côté. Nous montrons qu'avec un contexte gauche de deux caractères il est possible d'engendrer tous les langages rationnels. D'autre part, nous prouvons que des contextes plus longs n'augmentent pas la puissance de calcul du modèle. Nous examinons aussi les systèmes d’insertion/effacement utilisant des mécanismes de contrôle d’application des règles et nous montrons l'augmentation de la puissance d'expression. Les opérations d'insertion et d'effacement apparaissent naturellement dans le domaine de la sécurité informatique. 
Comme exemple on peut donner le modèle des grammaires gauchistes (leftist grammar), qui ont été introduites pour l'étude des systèmes critiques. Dans cette thèse nous proposons un nouvel instrument graphique d'analyse du comportement dynamique de ces grammaires. La deuxième partie du manuscrit s'intéresse au problème d'universalité qui consiste à trouver un élément concret capable de simuler le travail de n'importe quel autre dispositif de calcul. Nous commençons par le modèle de réseaux de processeurs évolutionnaires, qui abstrait le traitement de l'information génétique. Nous construisons des réseaux universels ayant un petit nombre de règles. Nous nous concentrons ensuite sur les systèmes de réécriture des multi-ensembles, un modèle qui peut être vu comme une abstraction des réactions biochimiques. Pour des raisons historiques, nous formulons nos résultats en termes de réseaux de Petri. Nous construisons des réseaux de Petri universels et décrivons des techniques de réduction du nombre de places, de transitions et d'arcs inhibiteurs, ainsi que du degré maximal des transitions. Une bonne partie de ces techniques repose sur une généralisation des machines à registres introduite dans cette thèse et qui permet d'effectuer plusieurs tests et opérations en un seul changement d'état
The present thesis considers the problems of computational completeness and universality for several biologically inspired models of computation: insertion-deletion systems, networks of evolutionary processors, and multiset rewriting systems. The results presented fall into two major categories: the study of the expressive power of the operations of insertion and deletion, with and without control, and the construction of universal multiset rewriting systems of low descriptional complexity. Insertion and deletion operations consist in adding or removing a subword from a given string if this subword is surrounded by given contexts. The motivation for studying these operations comes from biology, as well as from linguistics and the theory of formal languages. In the first part of the present work we focus on insertion-deletion systems closely related to RNA editing, which essentially consists in inserting or deleting fragments of RNA molecules. An important feature of RNA editing is that the locus at which the operations are carried out is determined by certain sequences of nucleotides, which are always situated on the same side of the editing site. In terms of formal insertion and deletion, this phenomenon is modelled by rules which can only check their context on one side and not on the other. We show that allowing one-symbol insertion and deletion rules to check a two-symbol left context enables them to generate all regular languages. Moreover, we prove that allowing longer insertion and deletion contexts does not increase the computational power. We further consider insertion-deletion systems with additional control over rule application and show that computational completeness can be achieved by systems with very small rules. The motivation for studying insertion-deletion systems also comes from the domain of computer security, for the purposes of which a special kind of insertion-deletion system, called leftist grammars, was introduced.
In this work we propose a novel graphical instrument for visual analysis of the dynamics of such systems. The second part of the present thesis is concerned with the universality problem, which consists in finding a fixed element able to simulate the work of any other computing device. We start by considering networks of evolutionary processors (NEPs), a computational model inspired by the way genetic information is processed in the living cell, and construct universal NEPs with very few rules. We then focus on multiset rewriting systems, which model the chemical processes running in the biological cell. For historical reasons, we formulate our results in terms of Petri nets. We construct a series of universal Petri nets and give several techniques for reducing the number of places, transitions, and inhibitor arcs, as well as the maximal transition degree. Some of these techniques rely on a generalisation of conventional register machines, proposed in this thesis, which allows multiple register checks and operations to be performed in a single state transition.
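As a toy illustration of the context-sensitive operations described above (our own sketch, not taken from the thesis), an insertion rule (u, x, v) inserts x wherever context u ends and context v begins, and a deletion rule removes x when it occurs between u and v. The rule shapes and names below are illustrative assumptions:

```python
def insertions(word, rule):
    """Apply an insertion rule (u, x, v): insert x wherever u and v are adjacent."""
    u, x, v = rule
    results = set()
    for i in range(len(word) + 1):
        # The rule applies at position i if u ends just before i and v starts at i.
        if word[:i].endswith(u) and word[i:].startswith(v):
            results.add(word[:i] + x + word[i:])
    return results

def deletions(word, rule):
    """Apply a deletion rule (u, x, v): delete x when surrounded by u and v."""
    u, x, v = rule
    results = set()
    for i in range(len(word) + 1):
        if (word[:i].endswith(u) and word[i:].startswith(x)
                and word[i + len(x):].startswith(v)):
            results.add(word[:i] + word[i + len(x):])
    return results

# A one-sided rule, as in the RNA-editing model: two-symbol left context,
# empty right context, one-symbol insertion.
print(sorted(insertions("abab", ("ab", "c", ""))))  # ['ababc', 'abcab']
print(sorted(deletions("abcab", ("ab", "c", ""))))  # ['abab']
```

Rules with an empty right context, as here, are exactly the one-sided rules whose power the thesis investigates.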
APA, Harvard, Vancouver, ISO, and other styles
38

Chatain, Thomas. "Concurrency in Real-Time Distributed Systems, from Unfoldings to Implementability." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00926306.

Full text
Abstract:
Formal methods offer a way to deal with the complexity of information systems. They are suited to a variety of tasks such as design, verification, model-checking, testing and supervision. But information systems are also increasingly distributed, first because of the generalisation of information networks, but also because inside a single device, such as a computer, the numerous components run concurrently. The problem is that concurrency is known to be a major difficulty for the use of formal methods, because it causes a combinatorial explosion of the state space of the systems. This difficulty is sometimes accompanied by another one due to time, when time plays an important role in the behaviour of the systems, for instance when the execution time is a critical parameter. These two difficulties, concurrency and real time, have guided my research. Sometimes I have tackled one of the two aspects separately, but in many of my works I have dealt with the problems that arise when one studies systems that are both concurrent and real-time. In my habilitation thesis, I give an overview of my recent research on dependencies between events in real-time distributed systems and on implementability issues for these systems.
APA, Harvard, Vancouver, ISO, and other styles
39

Jeandel, Emmanuel. "Propriétés structurelles et calculatoires des pavages." Habilitation à diriger des recherches, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00653343.

Full text
Abstract:
The work presented here concerns colourings of the discrete plane. This geometrically inspired model is intrinsically linked to models of computation, and its study is carried out here along two complementary axes: computability and combinatorics. In particular, we show how many recent results can be expressed naturally through the concept of bases (properties satisfied by at least one point of every set of colourings) and of antibases (counterexamples to this concept). We then examine the various encodings of computation by tile sets, and in particular exhibit a new sparse encoding that makes it possible to characterise the Turing degrees of sets of colourings. Finally, we return to the origins by studying tilings from the point of view of logic, characterising the main families of sets of colourings by fragments of monadic second-order logic.
APA, Harvard, Vancouver, ISO, and other styles
40

Lombardy, Sylvain. "Approche structurelle de quelques problèmes de la théorie des automates." Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 2001. http://tel.archives-ouvertes.fr/tel-00737830.

Full text
Abstract:
The work developed in this thesis follows three main directions. First, we carry out a careful study of the properties of the universal automaton of a rational language. This finite automaton (introduced in a slightly different form by J.H. Conway) accepts the language and has the particularity of containing a morphic image of every equivalent automaton. We give an algorithm to build it from the minimal automaton. Exploiting the properties of the universal automaton of a reversible language, we show that the equivalent universal automaton contains a quasi-reversible subautomaton (from which a reversible automaton can easily be built). Moreover, there exists such a subautomaton on which one can compute a rational expression representing the language with minimal star height. Second, we give an algorithm to decide the sequentiality of a (max,+) or (min,+) series realised by an automaton over a one-letter alphabet. The complexity of this algorithm depends only on the structure of the automaton and not on the values of the coefficients. We also present an algorithm that directly determinises an automaton realising a sequential series and, if the series is not sequential, yields an equivalent unambiguous automaton. This last point matches a result of Stéphane Gaubert showing that an unambiguous expression (and hence an unambiguous automaton) can be obtained for every rational (max,+) series over one letter. Finally, we propose an algorithm to build, from a rational expression with multiplicity, an automaton representing the same series. This algorithm, which generalises the work of Antimirov, explicitly yields a finite set of expressions representing a generating set of the semimodule containing the quotients of the rational series.
APA, Harvard, Vancouver, ISO, and other styles
41

Léchenet, Jean-Christophe. "Certified algorithms for program slicing." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC056/document.

Full text
Abstract:
Program slicing is a technique that extracts, given a program and a criterion consisting of one or several instructions of this program, a simpler program, called a slice, that has the same behaviour as the initial program with respect to the criterion. Program analysis techniques focus on establishing properties of a program. These techniques are costly, and their complexity increases with the size of the program. It would therefore be interesting to apply them to slices rather than to the initial program, but this requires theoretical foundations for interpreting the results obtained on the slices. This thesis provides this justification for runtime error detection. In this context, two questions arise. If an error is detected in the slice, does this mean that it can also be triggered in the initial program? Conversely, if the slice is proved to be error-free, does this mean that the initial program is error-free too? We model this problem using a small representative imperative language featuring errors and non-termination, and establish the link between the semantics of the initial program and that of its slice, which allows us to give a precise answer to the two questions raised above. To apply these results in a more general context, we focus on the first step towards a language-independent slicer: an algorithm computing control dependence. We formalise an elegant theory of control dependence on arbitrary finite directed graphs taken from the literature and improve the proposed algorithm. To ensure high confidence in the results, we prove them in the Coq proof assistant or in the Why3 proof platform.
APA, Harvard, Vancouver, ISO, and other styles
42

Janodet, Jean-Christophe. "L'Inférence Grammaticale au pays des Apprentissages Automatiques : Discussions sur la coexistence de deux disciplines." Habilitation à diriger des recherches, Université Jean Monnet - Saint-Etienne, 2010. http://tel.archives-ouvertes.fr/tel-00659482.

Full text
Abstract:
When one tries to situate Grammatical Inference in the research landscape, it is readily placed within Machine Learning, which is itself readily placed within Artificial Intelligence. Thus, in their reference book, Laurent Miclet and Antoine Cornuéjols prefer to speak of "Apprentissage Artificiel" rather than "Apprentissage Automatique", and devote a full chapter to Grammatical Inference. This hierarchy is explained by the history of Machine Learning. Yet, in 2010, it is not always easy to justify: how many Machine Learning researchers know the identification-in-the-limit paradigm? And how many Grammatical Inference researchers master the regularisation theory used in optimisation? Attending conferences such as ICGI or ECML is enough to see that the communities are different, both in their motivations and in their scientific cultures. Moreover, studying the history of the two fields reveals points of divergence going back a long time. On the other hand, several elements support this hierarchy. Indeed, all identification algorithms ultimately produce grammars that accept the positive data and reject the negative data, so grammars can be seen as a kind of classifier, and a Grammatical Inference algorithm as a learner aiming to solve a classification problem. Likewise, the goal of Stochastic Grammatical Inference is to identify probability distributions, a topic also found in Machine Learning. In this manuscript, we have therefore chosen to study, in the light of our work, the relations between Grammatical Inference and Supervised Classification.
APA, Harvard, Vancouver, ISO, and other styles
43

Rispal, Chloé. "Automates sur les ordres linéaires : Complémentation." Phd thesis, Université de Marne la Vallée, 2004. http://tel.archives-ouvertes.fr/tel-00720658.

Full text
Abstract:
This thesis deals with rational sets of words indexed by linear orderings, and in particular with the problem of closure under complementation. In a seminal 1956 paper, Kleene initiated language theory by showing that automata on finite words and rational expressions have the same expressive power. Since then, this result has been extended to many structures such as infinite words (Büchi, Muller), bi-infinite words (Beauquier, Nivat, Perrin), words indexed by ordinals (Büchi, Bedon), traces, trees... More recently, Bruyère and Carton introduced automata accepting words indexed by linear orderings, together with corresponding rational expressions. These linear structures include infinite words, words indexed by ordinals, and their mirrors. Kleene's theorem has been generalised to words indexed by countable scattered linear orderings, i.e. orderings containing no subordering isomorphic to Q. For most structures, the class of rational sets forms a Boolean algebra, a property that is necessary to translate a logic into automata. Closure under complementation remained an open problem. In this thesis, we solve this problem positively: we show that the complement of a rational set of words indexed by scattered linear orderings is rational. The classical way to obtain an automaton accepting the complement of a rational set is determinisation. We show that this method cannot be applied in our case: an automaton is not necessarily equivalent to a deterministic one. We therefore used other approaches. First, we generalise Büchi's proof, based on a congruence on words, and thus obtain closure under complementation in the case of linear orderings of finite rank.
To obtain the result in the general case, we use the algebraic approach. We develop an algebraic structure that extends classical recognition by finite semigroups: semigroups are replaced by diamond-semigroups, which possess a generalised product. We prove that a set is rational if and only if it is recognised by a finite diamond-semigroup. We also show that a canonical diamond-semigroup, called the syntactic diamond-semigroup, can be associated with every rational set. Our proof of complementation is effective. Schützenberger's theorem states that a set of finite words is star-free if and only if its syntactic semigroup is finite and aperiodic. Finally, we partially extend this result to the case of orderings of finite rank.
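For finite words, the classical complementation route that fails for linear orderings does work: determinise the automaton by the subset construction, then swap accepting and non-accepting states. A minimal illustrative sketch (our own, not from the thesis), with a hypothetical NFA for words containing the factor 'ab':

```python
def determinize(states, alphabet, delta, init, finals):
    """Subset construction. delta maps (state, letter) -> set of states."""
    start = frozenset(init)
    dstates, ddelta, todo = {start}, {}, [start]
    while todo:
        s = todo.pop()
        for a in alphabet:
            t = frozenset(q2 for q in s for q2 in delta.get((q, a), ()))
            ddelta[(s, a)] = t
            if t not in dstates:
                dstates.add(t)
                todo.append(t)
    # A subset state is accepting iff it contains an accepting NFA state.
    dfinals = {s for s in dstates if s & frozenset(finals)}
    return dstates, ddelta, start, dfinals

def complement_accepts(word, dstates, ddelta, start, dfinals):
    """Run the DFA and accept iff the original automaton rejects."""
    s = start
    for a in word:
        s = ddelta[(s, a)]
    return s not in dfinals

# NFA over {a, b} accepting words that contain 'ab' as a factor.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2},
         (2, 'a'): {2}, (2, 'b'): {2}}
dfa = determinize({0, 1, 2}, 'ab', delta, {0}, {2})
print(complement_accepts("bba", *dfa))   # True: 'bba' has no 'ab'
print(complement_accepts("bab", *dfa))   # False: 'bab' contains 'ab'
```

The thesis shows precisely that this swap-the-finals trick is unavailable for words over scattered linear orderings, which is why the congruence-based and algebraic proofs are needed.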
APA, Harvard, Vancouver, ISO, and other styles
44

Hélouët, Loïc. "Automates d'ordres : théorie et applications." Habilitation à diriger des recherches, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926742.

Full text
Abstract:
Order automata, better known as Message Sequence Charts (MSCs), have enjoyed enormous popularity since the 1990s, both academic and industrial. The reasons for this success are manifold: the model is simple and quickly learned; moreover, it is more expressive than finite automata and raises difficult problems. The apparent simplicity of MSCs is in fact deceptive, and many algorithmic manipulations quickly turn out to be undecidable. In this document, we look back on 10 years of research on Message Sequence Charts, and more generally on scenario languages, and draw some conclusions from the work carried out. We revisit the formal properties of Message Sequence Charts, their decidability, and the subclasses of the language for which particular problems become decidable. The classical approach to a problem on MSCs is to find the largest possible class on which the problem is decidable. Another challenge is to increase the expressive power of MSCs without losing decidability. We propose several extensions of this kind, allowing dynamic process creation or the definition of sliding-window protocols. Like any formal model, MSCs can hardly grow beyond a critical size past which a user can no longer really understand the diagram in front of them. To overcome this limit, one solution is to work on smaller behavioural modules and then assemble them to obtain larger sets of behaviours. We study several mechanisms for composing MSCs, and the robustness of the known scenario subclasses under composition.
The conclusion of this part is rather negative: scenarios compose with difficulty, and when a composition is feasible, few properties of the composed models are preserved. We then contribute to the automatic synthesis of distributed programs from specifications given as order automata. This question answers a practical need and situates a possible role for scenarios in the design of distributed software. We show that automatic synthesis is possible for a reasonable subset of order automata. In the second part of this document, we study possible applications of MSCs. We consider, among others, model-checking algorithms that can uncover errors at the time a distributed system is specified by MSCs. The second application considered is diagnosis, which uses a model to explain the behaviours of an instrumented real system. Finally, we look at the use of MSCs for finding security flaws in a system. These two applications show realistic domains of use for scenarios. To conclude, we draw some lessons about scenarios in the light of this document and of the last 10 years of work, and propose some research perspectives.
APA, Harvard, Vancouver, ISO, and other styles
45

Igor, Dolinka. "O identitetima algebri regularnih jezika." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2000. https://www.cris.uns.ac.rs/record.jsf?recordId=5997&source=NDLTD&language=en.

Full text
Abstract:
A language over Σ is an arbitrary set of words, i.e. any subset of the free monoid Σ*. All languages over a given alphabet form the algebra of languages, equipped with the operations of union, concatenation and Kleene iteration, and with ∅ and {λ} as constants. Regular languages over Σ are the elements of the subalgebra of the algebra of languages over Σ generated by the finite languages. It turns out that algebras of languages generate exactly the same variety as algebras of binary relations, endowed with union, relation composition, formation of the reflexive-transitive closure, and the empty relation and the diagonal as constants. The variety in question is the variety of Kleene algebras, and the algebras of regular languages are just its free algebras. The present dissertation starts with several aspects of the algebraic theory of automata and formal languages, the theory of binary relations and universal algebra, which are related to problems concerning identities of language algebras. This material is followed by the classical result (Redko, 1964) claiming that the variety of Kleene algebras has no finite equational base. We present the proof of Conway from 1971, since it contains some ideas which can be used for generalisations in different directions. Chapters 3 and 4 contain original results which refine that of Redko. It is shown that the cause of the non-finite axiomatisability of Kleene algebras lies in the superposition of the concatenation and the iteration of languages, that is, of composition of relations and reflexive-transitive closure. In other words, the class of +-free reducts of algebras of languages has no finite equational base, which answers in the negative a problem of D. A. Bredikhin from 1993. On the other hand, by extending the type of Kleene algebras with the involutive operation of inverse of languages (converse of relations), we also obtain a non-finitely based variety, which solves a problem of B. Jonsson from 1988.
Analogously, commutative languages over Σ are defined as subsets of the free commutative monoid over Σ. It is proved in Chapter 5 that the equational theories of algebras of commutative languages and of the algebra of (regular) languages over the one-element alphabet coincide. This result settles a thirty-year-old problem of A. Salomaa, formulated in his well-known monograph Theory of Automata. Thus, we obtain an equational base for the algebra of one-letter languages and, on the other hand, a very short proof of another of Redko's results from 1964, according to which there is no finite equational base for algebras of commutative languages. Finally, identities of Kleene algebras are considered in the context of dynamic algebras, which are just algebraic counterparts of dynamic logics. They were discovered in the seventies as a result of the quest for an appropriate logic for reasoning about computer programs written in an imperative programming language. For example, problems concerning program verification and equivalence can be easily translated into identities of dynamic algebras, so that many of their equational properties correspond to notions from computer science. It is also interesting that the whole equational theory of Kleene algebras is "encoded" in the finitely based equational theory of dynamic algebras. Starting with known results on two-sorted dynamic algebras (where one component is an algebra of the same signature as Kleene algebras, while the other is a Boolean algebra), some of those results are transformed and extended to Jonsson dynamic algebras (that is, one-sorted models of dynamic logics). For example, if a Kleene algebra K can be represented as a finite direct product of free algebras of varieties of Kleene algebras generated by Kleene relation algebras, then the variety of K-dynamic algebras has a decidable equational theory.
The latter yields that all varieties of Kleene algebras generated by Kleene relation algebras have decidable equational theories, too.
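Since the algebras of regular languages are the free Kleene algebras, an identity holds in all Kleene algebras exactly when its two sides denote the same regular language. As a rough illustration (our own, not from the dissertation), one can compare the languages of two regular expressions on all words up to a bounded length; a genuine decision procedure would instead compare minimal automata:

```python
import re
from itertools import product

def language(pattern, alphabet, max_len):
    """Words over `alphabet` of length <= max_len matched exactly by `pattern`."""
    rx = re.compile(pattern)
    return {''.join(w)
            for n in range(max_len + 1)
            for w in product(alphabet, repeat=n)
            if rx.fullmatch(''.join(w))}

# The identity (a+b)* = (a*b)*a* holds in all Kleene algebras ...
lhs = language(r'(a|b)*', 'ab', 6)
rhs = language(r'(a*b)*a*', 'ab', 6)
print(lhs == rhs)  # True (up to length 6; full equality of the regular
                   # languages would be checked on minimal automata)

# ... whereas (ab)* = a*b* fails: 'aab' separates the two sides.
print(language(r'(ab)*', 'ab', 3) == language(r'a*b*', 'ab', 3))  # False
```

Redko's theorem, discussed above, says that no finite set of such valid identities axiomatises all of them.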
APA, Harvard, Vancouver, ISO, and other styles
46

Dinh, Trong Hiêu. "Grammaires de graphes et langages formels." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00665732.

Full text
Abstract:
This thesis makes several contributions to the field of formal languages. Our first piece of work was to show the relevance of graph grammars as a tool for proving fundamental results on context-free languages. We thus reformulated, from a geometric point of view, the proofs of the iteration-pairs (pumping) lemma and of Parikh's lemma. We then extended basic algorithms on finite graphs to regular graphs, in particular for computing shortest-path problems. These extensions were obtained by computing least fixed points on graph grammars. Finally, we characterised general families of word rewriting systems whose derivation preserves regularity or context-freeness. These families were obtained by decomposing the derivation into a regular substitution followed by the derivation of the Dyck system.
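On a finite graph, the least-fixed-point view of shortest paths mentioned above amounts to iterating the Bellman update d(v) = min(d(v), d(u) + w) over all edges (u, v, w) until nothing changes. A minimal sketch on a finite weighted digraph (our own illustration; the thesis performs this computation on graph grammars describing infinite regular graphs):

```python
def shortest_distances(edges, source):
    """Least fixed point of the Bellman update over a weighted digraph.
    edges: list of (u, v, w) triples with non-negative weights w."""
    INF = float('inf')
    nodes = {u for u, v, w in edges} | {v for u, v, w in edges} | {source}
    dist = {v: INF for v in nodes}
    dist[source] = 0
    changed = True
    while changed:               # iterate the update until the fixed point
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
    return dist

edges = [('s', 'a', 2), ('s', 'b', 5), ('a', 'b', 1), ('b', 'c', 2)]
print(sorted(shortest_distances(edges, 's').items()))
# [('a', 2), ('b', 3), ('c', 5), ('s', 0)]
```

The interesting step in the thesis is that the same monotone iteration still converges when applied to the finite set of grammar rules generating a regular graph, rather than to an explicit (possibly infinite) edge list.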
APA, Harvard, Vancouver, ISO, and other styles
47

Renaud, Fabien. "Les ressources explicites vues par la théorie de la réécriture." Phd thesis, Université Paris-Diderot - Paris VII, 2011. http://tel.archives-ouvertes.fr/tel-00697408.

Full text
Abstract:
This thesis revolves around the management of explicit resources in functional languages, with an emphasis on properties of calculi with explicit substitutions refining the lambda-calculus. In a first part, we study the property of preservation of strong beta-normalisation (PSN) for the calculus lambda s. In a second part, we study the confluence property for a large set of calculi with explicit substitutions. After giving a generic proof of confluence based on a series of axioms that a calculus must satisfy, we focus on the metaconfluence of lambda j, a calculus in which the substitution propagation mechanism uses the notion of multiplicity instead of that of structure. In the third part of the thesis, we define a prism of resources that generalises the lambda-calculus parametrically, in the sense that not only substitution but also contraction and weakening may be explicit. This yields a set of eight calculi, placed on the vertices of the prism, for which we uniformly prove several good-behaviour properties, such as simulation of beta-reduction, PSN, confluence, and strong normalisation of typed terms. In the last part of the thesis, we open up towards more practical domains. We first consider the complexity of a calculus with substitutions, presenting research tools and conjecturing upper bounds. Finally, we give a formal specification of the calculus lambda j in the Coq proof assistant.
APA, Harvard, Vancouver, ISO, and other styles
48

Groz, Benoit. "Vues de sécurité XML: requêtes, mises à jour et schémas." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00745581.

Full text
Abstract:
XML security views: queries, updates, and schemas. Technological evolution has established web services and online data storage alongside traditional databases. These evolutions ease access to data, but in return raise new security issues. Enforcing appropriate access control policies is one approach to reducing these risks. We study access control policies at the level of an XML document, policies that we model by (non-materialised) XML security views, following Fan et al. Thanks to the absence of arithmetic or restructuring operators, these views can easily be represented by tree alignments. Our objective is therefore to examine how to manipulate this kind of view efficiently, using formal methods, and more specifically query rewriting techniques and the theory of tree automata. Our research followed three main directions. We first devised algorithms to evaluate the expressiveness of a view, in terms of the queries that can be expressed through it. It turns out that, in general, one cannot decide whether a view allows a particular query to be expressed, but this becomes possible when the view satisfies general hypotheses. Second, we considered the problems raised by updating the document through a view. Finally, we propose solutions for automatically building a schema of the view; in particular, we present different techniques for approximating the set of view documents by means of a DTD.
APA, Harvard, Vancouver, ISO, and other styles
49

Monmege, Benjamin. "Spécification et vérification de propriétés quantitatives : expressions, logiques et automates." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00908990.

Full text
Abstract:
Automatic verification has become a central research area in computer science. Over more than 25 years, a rich theory has been developed, leading to numerous tools, both academic and industrial, for verifying Boolean properties -- those that are either true or false. Current needs are evolving towards a finer, that is, more quantitative, analysis. The extension of verification techniques to quantitative domains began 15 years ago with probabilistic systems. However, many other quantitative properties exist, such as the lifetime of a device, the energy consumption of an application, the reliability of a program, or the number of results of a query over a database. Expressing such properties requires new specification languages, as well as algorithms checking these properties over a given structure. This thesis studies several formalisms for specifying such properties, whether denotational -- regular expressions, monadic logics, or temporal logics -- or more operational, such as weighted automata, possibly extended with pebbles. A first objective of this manuscript is the study of expressiveness results comparing these formalisms. In particular, we give efficient translations from the denotational formalisms to the operational one. These objects, together with the associated results, are presented in a unified framework of graph structures. They apply, among others, to finite words and trees, nested words, pictures, and Mazurkiewicz traces. Possible applications therefore include the verification of quantitative properties of traces of (possibly recursive or concurrent) programs, queries over XML documents (modeling, for example, databases), and natural language processing.
We then turn to the algorithmic questions these results naturally raise, such as evaluation, satisfiability, and model checking. In particular, we study the decidability and complexity of some of these problems, depending on the underlying semiring and on the structures considered (words, trees, ...). Finally, we consider interesting restrictions of the previous formalisms. Some make it possible to extend the class of semirings over which quantitative properties can be specified. Another is dedicated to the special case of probabilistic specifications: in particular, we study syntactic fragments of our generic specification formalisms that generate only probabilistic behaviors.
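The core object of this thesis, a weighted automaton over a semiring, can be illustrated by a minimal sketch (not taken from the thesis; all names are illustrative): the weight of a word is the initial vector multiplied by one transition matrix per letter, then by the final vector, with sums and products taken in the chosen semiring.

```python
from functools import reduce

class WeightedAutomaton:
    """Weighted automaton over a semiring given as (zero, one, add, mul)."""
    def __init__(self, init, trans, final, semiring):
        self.init, self.trans, self.final = init, trans, final
        self.zero, self.one, self.add, self.mul = semiring

    def _vec_mat(self, v, m):
        # Vector-matrix product, using the semiring's add/mul.
        return [reduce(self.add,
                       (self.mul(v[i], m[i][j]) for i in range(len(v))),
                       self.zero)
                for j in range(len(m[0]))]

    def weight(self, word):
        v = self.init
        for letter in word:
            v = self._vec_mat(v, self.trans[letter])
        # Scalar product with the final vector.
        return reduce(self.add,
                      (self.mul(v[i], self.final[i]) for i in range(len(v))),
                      self.zero)

# Over the counting semiring (N, +, *), this two-state automaton assigns to
# each word the number of 'a's it contains.
nat = (0, 1, lambda x, y: x + y, lambda x, y: x * y)
count_a = WeightedAutomaton(
    init=[1, 0],
    trans={'a': [[1, 1], [0, 1]], 'b': [[1, 0], [0, 1]]},
    final=[0, 1],
    semiring=nat)

print(count_a.weight("abab"))  # -> 2
```

Swapping in another semiring (e.g. (min, +) for costs, or ([0,1], +, *) for probabilities) changes the quantitative interpretation without changing the evaluation algorithm, which is the point of the semiring parametrization.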
APA, Harvard, Vancouver, ISO, and other styles
50

Picard, Celia. "Représentation coinductive des graphes." Phd thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00862507.

Full text
Abstract:
We are interested in the representation of graphs in the Coq proof assistant. We chose to represent them by coinductive types, whose use we wanted to explore. These make the representation succinct and elegant, and yield navigability by construction. We had to work around the guard condition, whose purpose is to ensure the validity of operations performed on coinductive objects. Its implementation in Coq is restrictive and sometimes forbids semantically correct definitions. A canonical formalization of graphs thus exceeds the direct expressiveness of Coq. We therefore proposed a solution respecting these limitations, and then defined a relation on graphs that yields the same notion of equivalence as a classical representation while keeping the advantages of coinduction. We show that it is equivalent to a relation based on finite observations.
APA, Harvard, Vancouver, ISO, and other styles