Dissertations / Theses on the topic 'Calculo de sequentes'

Consult the top 45 dissertations / theses for your research on the topic 'Calculo de sequentes.'

1

Moura, José Eduardo de Almeida. "Um estudo de C omega em calculo de sequentes e dedução natural." [s.n.], 2001. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280385.

Abstract:
Advisor: Itala Maria Loffredo D'Ottaviano
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Filosofia e Ciencias Humanas
Starting from Raggio's papers of 1968 and 1978 on the systems Cn (1 ≤ n ≤ ω), this thesis develops an analysis of Cω in sequent calculus and natural deduction, presenting the Cut Elimination and Strong Normalization Theorems as its main results. Relevant features are the treatment given to negation and the permissiveness of the definition of normal proof.
Doctorate in Philosophy
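For orientation, this is the cut rule whose eliminability such theorems establish, in generic two-sided sequent notation (the standard textbook rule, not the specific presentation used for Cω):

\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}\ (\mathrm{cut})
\]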
2

Quispe Cruz, Marcela. "Em direção aos N-Grafos intuicionistas." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1938.

Abstract:
N-Graphs were introduced by De Oliveira in 2001. They form a proof system whose logical rules are represented graphically by means of digraphs. These proof graphs draw on Gentzen's natural deduction and sequent calculus, combining ideas from four geometric approaches established in the proof-theory literature: development tables (Kneale, 1957), proof-nets (Girard, 1987), logical flow graphs (Buss, 1991) and, above all, proofs-as-graphs (Statman, 1974). This dissertation continues the study of N-Graphs, which were proposed for classical propositional logic and had no version for intuitionistic propositional logic. We review calculi for intuitionistic logic, highlighting Gentzen's work of the 1930s as well as the later multiple-conclusion versions, such as the system LJ' (Maehara, 1954), the systems proposed by Kleene (1964) and the system FIL (De Paiva and Pereira, 2005). The aim of the dissertation is thus to study the difficulties in, and possible solutions for, building an N-Graphs-style proof system for intuitionistic logic, and to present a proposal for N-Graphs for propositional intuitionistic logic.
3

Herbelin, Hugo. "Sequents qu'on calcule : de l'interpretation du calcul des sequents comme calcul de lambda-termes et comme calcul de strategies gagnantes." Paris 7, 1995. https://tel.archives-ouvertes.fr/tel-00382528.

Abstract:
This thesis studies formal systems in the style of Gentzen's LJ and LK (commonly called sequent calculi) in their relation to computability. Computation in these systems proceeds by cut elimination. Two interpretations are considered. The lambda-calculus supports the first interpretation: we establish a Curry-Howard correspondence between LJ and a syntactic variant of the lambda-calculus with an explicit substitution operator (of the let-in kind). A confluent and strongly terminating normalization/cut-elimination procedure is given, and the correspondence is extended to LK by adding the mu operator of Parigot's lambda-mu-calculus. Game theory supports the second interpretation: sequent calculus proofs are viewed as winning strategies for certain kinds of two-player games (dialogues) disputing the validity of the proved formula. We give two results. First, we show that it suffices to consider restrictions LJQ of LJ and LKQ of LK to establish, in the propositional case, a bijection between the proofs of these systems and the intuitionistic and classical E-dialogues defined by Lorenzen as a game-theoretic foundation of provability. This refines and generalizes a result of Felscher on the equivalence between the existence of a proof of a formula A in LJ and the existence of a winning strategy for the first player in an E-dialogue about A. Second, starting from a variable-free infinitary propositional logic considered by Coquand, we define a provably terminating interaction between proofs viewed as winning strategies, and we show an operational correspondence between this interaction procedure and weak head cut elimination, the latter being independently proved terminating.
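The Curry-Howard reading of cut mentioned here can be sketched with a standard term-annotated cut rule, where the explicit substitution operator is a let-in binding (a generic sketch in the spirit of the thesis, not Herbelin's exact calculus):

\[
\frac{\Gamma \vdash u : A \qquad \Gamma, x : A \vdash t : B}{\Gamma \vdash \mathtt{let}\ x = u\ \mathtt{in}\ t : B}\ (\mathrm{cut})
\]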
4

Johnson-Freyd, Philip. "Properties of Sequent-Calculus-Based Languages." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23191.

Abstract:
Programmers don't just have to write programs; they also have to reason about them. Programming languages aren't just tools for instructing computers what to do; they are tools for reasoning. And it isn't just programmers who reason about programs: compilers and other tools reason similarly as they transform from one language into another, or optimize an inefficient program into a better one. Languages, both surface languages and intermediate ones, therefore need to be efficiently implementable and to support effective logical reasoning. However, these goals often seem to be in conflict.

This dissertation studies programming language calculi inspired by the Curry-Howard correspondence, which relates programming languages to proof systems. Our focus is on calculi corresponding logically to classical sequent calculus and connected computationally to abstract machines. We prove that these calculi have desirable properties that help bridge the gap between reasoning and implementation.

Firstly, we explore a persistent conflict between extensionality and effects for lazy functional programs that manifests in a loss of confluence. Building on prior work, we develop a new rewriting theory for lazy functions and control, which we first prove corresponds to the desired equational theory and then prove, by way of reductions into a smaller system, to be confluent. Next, we turn to the inconsistency between weak-head normalization and extensionality. Using ideas from our study of confluence, we develop a new operational semantics and a series of abstract machines for head reduction, which show how to retain weak-head reduction's ease of implementation.

After demonstrating the limitations of this approach for call-by-value or for types other than functions, we turn to typed calculi, showing how a type system can be used for mixing not only different kinds of data but also different evaluation strategies in a single program. Building on variations of the reducibility candidates method such as biorthogonality and symmetric candidates, we present a uniform proof of strong normalization for our mixed-strategy system, which works as long as all the strategies used satisfy criteria we isolate.

This dissertation includes previously published co-authored material.
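A tiny Haskell example of the extensionality/effects tension mentioned above: in a lazy language, eta-expansion is already observable through seq, so extensional reasoning and such effects cannot coexist naively (our illustration, not an example from the dissertation):

f, g :: Int -> Int
f = undefined          -- diverges when forced
g = \x -> f x          -- the eta-expansion of f; a lambda, hence already in WHNF

main :: IO ()
main = putStrLn (g `seq` "the eta-expansion is in weak head normal form")
-- Replacing g by f above crashes the program: f `seq` ... forces undefined.
-- So f and its eta-expansion are not contextually equal, and naive eta rules
-- break equational reasoning in the presence of seq-like effects.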
5

Johnson-Freyd, Philip Alden. "Properties of Sequent-Calculus-Based Languages." Thesis, University of Oregon, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10684255.

6

Nigam, Vivek. "Exploiting non-canonicity in the sequent calculus." Phd thesis, Ecole Polytechnique X, 2009. http://pastel.archives-ouvertes.fr/pastel-00005487.

7

Brotherston, James. "Sequent calculus proof systems for inductive definitions." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/1458.

Abstract:
Inductive definitions are the most natural means by which to represent many families of structures occurring in mathematics and computer science, and their corresponding induction/recursion principles provide the fundamental proof techniques by which to reason about such families. This thesis studies formal proof systems for inductive definitions, as needed, e.g., for inductive proof support in automated theorem proving tools. The systems are formulated as sequent calculi for classical first-order logic extended with a framework for (mutual) inductive definitions.

The default approach to reasoning with inductive definitions is to formulate the induction principles of the inductively defined relations as suitable inference rules or axioms, which are incorporated into the reasoning framework of choice. Our first system, LKID, adopts this direct approach to inductive proof, with the induction rules formulated as rules for introducing atomic formulas involving inductively defined predicates on the left of sequents. We show this system to be sound and cut-free complete with respect to a natural class of Henkin models; as a corollary, we obtain cut-admissibility for LKID.

The well-known method of infinite descent à la Fermat, which exploits the fact that there are no infinite descending chains of elements of well-ordered sets, provides an alternative approach to reasoning with inductively defined relations. Our second proof system, LKIDω, formalises this approach. In this system, the left-introduction rules for formulas involving inductively defined predicates are not induction rules but simple case-distinction rules, and an infinitary, global soundness condition on proof trees, formulated in terms of "traces" on infinite paths in the tree, is required to ensure soundness. This condition essentially ensures that, for every infinite branch in the proof, there is an inductive definition that is unfolded infinitely often along the branch. By an infinite descent argument based upon the well-foundedness of inductive definitions, the infinite branches of the proof can thus be disregarded, whence the remaining portion of the proof is well-founded and hence sound. We show this system to be cut-free complete with respect to standard models, and again infer the admissibility of cut.

The infinitary system LKIDω is unsuitable for formal reasoning. However, it has a natural restriction to proofs given by regular trees, i.e. to those proofs representable by finite graphs. This restricted "cyclic" proof system, CLKIDω, is suitable for formal reasoning since proofs have finite representations and the soundness condition on proofs is thus decidable. We show how the formulation of our systems LKIDω and CLKIDω can be generalised to obtain soundness conditions for a general class of infinite proof systems and their corresponding cyclic restrictions. We provide machinery for manipulating and analysing the structure of proofs in these essentially arbitrary cyclic systems, based primarily on viewing them as generating regular infinite trees, and we show that any proof can be converted into an equivalent proof with a restricted cycle structure. For proofs in this "cycle normal form", a finitary, localised soundness condition exists that is strictly stronger than the general, infinitary soundness condition, but provides more explicit information about the proof.

Finally, returning to the specific setting of our systems for inductive definitions, we show that any LKID proof can be transformed into a CLKIDω proof that, in fact, satisfies the finitary soundness condition. We conjecture that the two systems are in fact equivalent, i.e. that proof by induction is equivalent to regular proof by infinite descent.
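To illustrate the kind of definitions involved: a predicate N for the natural numbers is given by the productions "N(0) holds" and "N(s t) holds whenever N(t) does", and the infinitary system replaces the induction rule by the corresponding case-distinction rule (a sketch in the spirit of such systems, with y fresh; not the thesis's exact rule format):

\[
\frac{\Gamma,\ t = 0 \vdash \Delta \qquad \Gamma,\ t = s\,y,\ N(y) \vdash \Delta}{\Gamma,\ N(t) \vdash \Delta}\ (\mathrm{case}\ N)
\]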
8

Rothenberg, Robert. "On the relationship between hypersequent calculi and labelled sequent calculi for intermediate logics with geometric Kripke semantics." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1350.

Abstract:
In this thesis we examine the relationship between hypersequent calculi and some types of labelled sequent calculi for a subset of intermediate logics (logics between intuitionistic logic, Int, and classical logic) that have geometric Kripke semantics, which we call Int∗/Geo. We introduce a novel calculus for a fragment of first-order classical logic, which we call partially-shielded formulae (PSF for short), that is adequate for expressing the semantic validity of formulae in Int∗/Geo, and we apply techniques from correspondence theory to provide translations of hypersequents, simply labelled sequents and relational sequents (simply labelled sequents with relational formulae) into PSF. Using these translations, we show that hypersequents and simply labelled sequents for calculi in Int∗/Geo share the same models. We also use these translations to justify various techniques that we introduce for translating simply labelled sequents into relational sequents and vice versa. In particular, we introduce a technique called "transitive unfolding" for translating relational sequents into simply labelled sequents (and, by extension, hypersequents) which preserves linear models in Int∗/Geo. We introduce syntactic translations between hypersequent calculi and simply labelled sequent calculi. We apply these translations to a novel hypersequent framework HG3ipm∗ for some logics in Int∗/Geo to obtain a corresponding simply labelled sequent framework LG3ipm∗, and to an existing simply labelled calculus for Int from the literature to obtain a novel hypersequent calculus for Int. We introduce methods for translating a simply labelled sequent calculus into a corresponding relational calculus, and apply these methods to LG3ipm∗ to obtain a novel relational framework RG3ipm∗ that bears similarities to existing calculi from the literature. We use transitive unfolding to translate proofs in RG3ipm∗ into proofs in LG3ipm∗ and HG3ipm∗ with the communication rule, which corresponds to the semantic restriction to linear models.
9

Birštunas, Adomas. "Sequent calculi with an efficient loop-check for BDI logics." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100302_095327-67575.

Abstract:
Sequent calculi for BDI logics are the research object of this thesis. BDI logics are widely used for describing and implementing agent systems. Agents are autonomous systems that act in some environment and strive to achieve preassigned goals. Implementing decision making is the main and most complicated part of implementing agent systems, and logical calculi may be used for it; this thesis investigates sequent calculi for BDI logics for that purpose. Sequent calculi for BDI logics, like sequent calculi for other modal logics, use a loop-check technique to obtain decidability, and an inefficient loop-check consumes a major share of the resources used for derivation. For some modal logics, loop-check-free sequent calculi, or calculi with an efficient loop-check, are known. This thesis presents a loop-check-free sequent calculus for the logic KD45, which is the main fragment of the BDI logics. The introduced calculus not only eliminates loop-check but also simplifies sequent derivation. For branching-time logic (another fragment of the BDI logics) a sequent calculus with an efficient loop-check is presented. The obtained results are adapted to create sequent calculi for single-agent and multi-agent BDI logics. The introduced calculi use only a restricted loop-check; moreover, loop-check is eliminated entirely for some types of loops. These results enable the creation of more efficient agent systems based on BDI logics.
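To make "loop-check" concrete: backward proof search must notice when a goal sequent repeats on a branch and stop expanding it. A schematic Haskell sketch of history-based loop-checking follows (our own toy illustration; Sequent, expand and the encoding are hypothetical, and the thesis's contribution is precisely to restrict or remove this history test):

import qualified Data.Set as Set

-- A goal sequent, kept in a canonical (e.g. sorted) form so that repetitions
-- on a branch can be recognized by equality.
type Sequent = ([String], [String])   -- (antecedent, succedent)

-- Backward proof search parametrized by 'expand', which returns, for a goal,
-- the premise lists of all applicable rule instances (an empty premise list
-- means the goal is an axiom).
provable :: (Sequent -> [[Sequent]]) -> Sequent -> Bool
provable expand = go Set.empty
  where
    go hist goal
      | goal `Set.member` hist = False   -- loop detected: give up on this branch
      | otherwise =
          let hist' = Set.insert goal hist
          in any (all (go hist')) (expand goal)

Carrying hist on every branch is what makes naive loop-checking expensive; a calculus like the one proposed for KD45 arranges its rules so that this history can be bounded or dropped.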
10

Lellmann, Björn. "Sequent calculi with context restrictions and applications to conditional logic." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/18059.

Abstract:
In this thesis we consider generic tools and techniques for the proof-theoretic investigation of not necessarily normal modal logics based on minimal, intuitionistic or classical propositional logic. The underlying framework is that of ordinary symmetric or asymmetric two-sided sequent calculi without additional structural connectives, and the points of interest are the logical rules in such a system. We introduce the format of a sequent rule with context restrictions and the slightly weaker format of a shallow rule. The format of a rule with context restrictions is expressive enough to capture most normal modal logics in the S5 cube, standard systems for minimal, intuitionistic and classical propositional logic, and a wide variety of non-normal modal logics. For systems given by such rules we provide sufficient criteria for cut elimination and decidability, together with generic complexity results. We also explore the expressivity of such systems with the cut rule in terms of axioms in a Hilbert-style system by exhibiting a corresponding syntactically defined class of axioms along with automatic translations between axioms and rules. This enables us to show a number of limitative results concerning, amongst others, the modal logic S5. As a step towards a generic construction of cut-free and tractable sequent calculi we then introduce the notion of cut trees as representations of rules constructed by absorbing cuts. With certain limitations this allows the automatic construction of a cut-free and tractable sequent system from a finite number of rules. For cases where such a system is to be constructed by hand we introduce a graphical representation of rules with context restrictions which simplifies this process. Finally, we apply the developed tools and techniques to construct new cut-free sequent systems for a number of Lewis' conditional logics extending the logic V. The systems yield purely syntactic decision procedures of optimal complexity and proofs of the Craig interpolation property for the logics at hand.
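As a point of reference, the standard sequent rule for the modal logic K is a typical rule whose conclusion restricts the context: only boxed formulas survive into the conclusion (an illustrative textbook instance, not a rule taken from the thesis):

\[
\frac{\Gamma \vdash A}{\Box\Gamma \vdash \Box A}\ (\mathrm{K})
\]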
11

Roddick, Cheryl Diane. "A comparison study of students from two calculus sequences on their achievement in calculus-dependent courses." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487947501133527.

12

Zunic, Dragisa. "Computing with sequents and diagrams in classical logic - calculi *X, dX and ©X." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2007. http://tel.archives-ouvertes.fr/tel-00265549.

Abstract:
This doctoral thesis studies the computational interpretation of classical logic proofs. It presents three calculi reflecting three different approaches to the question, and is accordingly composed of three parts.

The first part introduces the *X calculus, whose terms represent proofs in the classical sequent calculus. The reduction rules of the *X calculus capture most of the features of cut elimination in the sequent calculus. This calculus introduces terms enabling an implicit implementation of erasure and duplication; as far as we know, it is the first such calculus for classical logic.

The second part studies the possibility of representing classical computation by means of diagrams. We present the dX calculus, the diagrammatic calculus of classical logic, whose diagrams originate from *X-terms. The main difference is that dX works at a higher level of abstraction: it captures the essence of sequent calculus proofs as well as the essence of classical cut elimination.

The third part connects the first two. It presents the ©X calculus, a one-dimensional version of the diagrammatic calculus. We start from *X and explicitly identify the terms that should be identified: these are the terms encoding sequent proofs that are equivalent modulo permutation of independent inference rules, and they also have the same diagrammatic representation. This identification induces a congruence relation on terms. The reduction relation is defined modulo the congruence, and the reduction rules correspond to those of the dX calculus.
13

Espírito, Santo José Carlos Soares. "Conservative extensions of the λ-calculus for the computational interpretation of sequent calculus." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/27984.

Abstract:
This thesis offers a study of the Curry-Howard correspondence for a certain fragment (the canonical fragment) of sequent calculus, based on an investigation of the relationship between cut elimination in that fragment and normalisation. The output of this study may be summarised in a new assignment, Theta, to proofs in the canonical fragment, of terms from certain conservative extensions of the lambda-calculus. This assignment is, in a sense, an optimal improvement over the traditional assignment phi, in that it is an isomorphism both in the sense of a sound bijection of proofs and in the sense of an isomorphism of normalisation procedures. First, a systematic definition of calculi of cut-elimination for the canonical fragment is carried out. We study various right protocols, i.e. cut-elimination procedures which give priority to right permutation, paying particular attention to the issue of which parts of the procedure are to be implicit, that is, performed by meta-operators in the style of natural deduction. Next, a comprehensive study of the relationship between normalisation and these calculi of cut-elimination is done, producing several new insights of independent interest, particularly concerning a generalisation of Prawitz's mapping of normal natural deduction proofs into sequent calculus. This study suggests the definition of conservative extensions of natural deduction (and of the lambda-calculus) based on the idea of a built-in distinction between applicative term and application, and also between head and tail application. These extensions offer perfect counterparts to the calculi in the canonical fragment, as established by the mapping Theta. Conceptual rearrangements in proof theory deriving from these extensions of natural deduction are discussed. Finally, we argue that, computationally, both the canonical fragment and natural deduction (in the extended sense introduced here) correspond to extensions of the lambda-calculus with applicative terms, and that what distinguishes them is the way applicative terms are structured: in the canonical fragment, the head application of an applicative term is "focused". This, in turn, explains the following observation: some reduction rules of calculi in the canonical fragment may be interpreted as transition rules for abstract call-by-name machines.
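The closing observation about abstract call-by-name machines can be made concrete with the classic Krivine machine, sketched here in Haskell with de Bruijn indices (standard textbook material offered as background, not the calculi of the thesis):

-- Krivine abstract machine for call-by-name lambda-calculus.
data Term    = Var Int | Lam Term | App Term Term
data Closure = Closure Term [Closure]   -- a term paired with its environment
type Stack   = [Closure]                -- pending (unevaluated) arguments

step :: (Closure, Stack) -> Maybe (Closure, Stack)
step (Closure (App t u) env, s)     = Just (Closure t env, Closure u env : s) -- push argument
step (Closure (Lam t)   env, c : s) = Just (Closure t (c : env), s)           -- pop into environment
step (Closure (Var n)   env, s)     =
  case drop n env of
    c : _ -> Just (c, s)                -- look the variable up, by name
    []    -> Nothing                    -- open term: stuck
step _ = Nothing                        -- weak head normal form reached

eval :: Term -> (Closure, Stack)
eval t = loop (Closure t [], [])
  where loop st = maybe st loop (step st)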
14

Joinet, Jean-Baptiste. "Etude de la normalisation du calcul des sequents classique a travers la logique lineaire." Paris 7, 1993. http://www.theses.fr/1993PA077066.

Abstract:
This is a contribution to the study of the computational problems of classical logic. Its central topic is the normalization of the intuitionistic and, above all, classical sequent calculi, with linear logic as the main tool. The first part presents results on the correctness and normalization of proof nets. The second part describes the generation of two constructive systems of classical sequent calculus from uniform embeddings of the classical calculus into linear logic. It ends with the construction of minimal (proof-to-proof) embeddings of classical logic into linear logic.
15

COLOMAR, MARIA FERNANDA PALLARES. "MULTIPLE SUCCEDENT SEQUENT CALCULUS FOR INTUITIONISTIC FIRST-ORDER LOGIC." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11144@1.

Abstract:
The first sequent calculus was presented by Gerhard Gentzen in the 1930s. In this kind of system, the difference between the intuitionistic and classical versions lies in the cardinality of the succedent. The multiple succedent was traditionally considered the element that expressed the classical aspect of the system, while intuitionistic sequents could have at most one formula in the succedent. In the following decades, several multiple-succedent intuitionistic calculi were formulated that weakened this strong restriction on the cardinality of the succedent. In the 1990s, the cardinality restriction was replaced by a dependency relation between the formula occurrences in the antecedent and in the succedent of a sequent, in order to ensure the intuitionistic character of the system. We review the intuitionistic sequent systems and some of their applications, and present a version of the system FIL (given for the propositional case by De Paiva and Pereira) for intuitionistic first-order logic, proving that it is sound and complete and satisfies the cut-elimination theorem.
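For orientation, the usual way a multiple-succedent calculus keeps its intuitionistic character is a restriction on the right rule for implication, as in Maehara's LJ': the premise must discard the remaining succedent Δ (a standard rule recalled for context; FIL instead replaces this restriction by a dependency relation):

\[
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B, \Delta}\ (\to R)
\]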
16

LUSTOSA, CECILIA REIS ENGLANDER. "ON SOME RELATIONS BETWEEN NATURAL DEDUCTION AND SEQUENT CALCULUS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24302@1.

Abstract:
Segerberg presented a general completeness proof for propositional logics. For this purpose, a natural deduction system was defined whose rules are rules for an arbitrary boolean operator in a given propositional logic; each rule corresponds to a row of the operator's truth table. In the first part of this thesis we extend Segerberg's idea to finite-valued propositional logics and to non-deterministic logics. We keep the idea of defining a deductive system whose rules correspond to rows of truth tables, but instead of having one kind of rule for each truth value, we use a bivalent representation based on the technique of separating formulas defined by Carlos Caleiro and João Marcos. The resulting system has so many rules that it may be laborious to work with. We believe that a sequent calculus system defined analogously would be more intuitive. Motivated by this observation, in the second part of this thesis we work out translations between sequent calculus and natural deduction, searching for a better bijective relationship than those already known.
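To make the row-to-rule idea concrete: the row of the conjunction truth table in which A and B are both true yields the familiar introduction rule (an illustrative instance of the reading, not an example taken from the thesis):

\[
\frac{A \qquad B}{A \land B}
\]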
17

Farooque, Mahfuza. "Automated Reasoning Techniques as Proof-search in Sequent Calculus." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/96/13/44/PDF/Farooque.pdf.

Abstract:
Computer-aided reasoning plays a great role in computer science and mathematical logic, from logic programming to automated deduction, via interactive proof assistants. The general aim of this thesis is to design a general framework in which various techniques of computer-aided reasoning can be implemented, so that they can collaborate, be generalised, and be implemented in a safe and trusted way. The framework I propose is a sequent calculus called LKp(T), which generalises an older calculus from the literature to the presence of an arbitrary background theory for which we have a decision procedure, such as linear arithmetic. The thesis develops the meta-theory of LKp(T), including its logical completeness. We then show how it specifies a proof-search procedure that can emulate a well-known technique from the field of Satisfiability Modulo Theories, namely DPLL(T). Finally, clause and connection tableaux are other widely used techniques of automated deduction, of a rather different nature from DPLL. This thesis also describes how such tableaux techniques can be seen as bottom-up proof search in LKp(T). The simulation is given for both propositional and first-order logic, opening up new perspectives of generalisation and collaboration between tableaux techniques and DPLL, even in the presence of a background theory.
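For readers unfamiliar with DPLL: the propositional core that LKp(T) is shown to emulate can be sketched in a few lines of Haskell (a toy satisfiability procedure without theory reasoning, backjumping or clause learning; the representation is ours):

-- A literal is a nonzero Int (negative = negated variable); a clause is a
-- list of literals; a CNF formula is a list of clauses.
type Clause = [Int]

sat :: [Clause] -> Bool
sat cs
  | null cs     = True        -- no clauses left: satisfied
  | any null cs = False       -- empty clause: conflict, backtrack
  | otherwise   = sat (assign l cs) || sat (assign (negate l) cs)
  where
    l = case [u | [u] <- cs] of   -- prefer a unit clause (unit propagation)
          u : _ -> u
          []    -> head (head cs) -- otherwise split on some literal

-- assign l: drop clauses satisfied by l, delete the literal -l elsewhere.
assign :: Int -> [Clause] -> [Clause]
assign l = map (filter (/= negate l)) . filter (l `notElem`)

For example, sat [[1,2],[-1],[-2,3]] is True; the thesis's point is that this search can be presented as root-first proof construction in LKp(T), with the theory handled by the background decision procedure.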
18

Rizk, Guillaume. "Parallelization on graphic hardware : contributions to RNA folding and sequence alignment." Rennes 1, 2011. https://ecm.univ-rennes1.fr/nuxeo/site/esupversions/df86b1c1-46f8-4fe8-ac6c-fb4920b31b84.

Abstract:
Bioinformatics requires the analysis of large amounts of data. With the recent advent of next-generation sequencing technologies generating data at low cost, the computational power needed has increased dramatically. Graphics Processing Units (GPUs) are now programmable beyond simple graphics computations, providing cheap high performance for general-purpose applications. This thesis explores the use of GPUs for bioinformatics applications. First, it focuses on the computation of secondary structures of RNA sequences. This is traditionally done with a dynamic programming algorithm, which poses significant challenges for a GPU implementation. We introduce a new tiled implementation providing good data locality and therefore very efficient GPU code. Our algorithmic modification also enables tiling and subsequent vectorization of the CPU program, allowing a fair CPU-GPU comparison. Second, this work addresses the short-sequence alignment problem. We present an attempt at GPU parallelization using the seed-and-extend paradigm. Since this attempt is unsuccessful, we then focus on the development of a program running on the CPU. Our main contribution is a new algorithm that quickly filters candidate alignment locations, based on the precomputation of tiles of the dynamic programming matrix. This new algorithm proved to be more effective in a sequential CPU program and led to an efficient new CPU aligner. Our work thus provides examples of both a successful and an unsuccessful attempt at GPU parallelization; the two points of view allow us to evaluate the efficiency of GPUs and the role they can play in bioinformatics.
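As background, RNA secondary structure prediction by dynamic programming follows recurrences of the following shape; this is the classic base-pair-maximization recurrence of Nussinov, a simplified stand-in for the energy-based recurrences the thesis parallelizes:

\[
M(i,j) = \max\left( M(i+1,\,j),\ \max_{\substack{i < k \le j \\ (i,k)\ \text{can pair}}} \bigl( M(i+1,\,k-1) + M(k+1,\,j) + 1 \bigr) \right)
\]

Each cell of the upper-triangular matrix depends on cells along its row, column and inner diagonal, which is why a naive traversal has poor memory locality and why tiling the matrix is the key step for both GPU and vectorized CPU code.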
19

Ivetić, Jelena. "Intersection types and resource control in the intuitionistic sequent lambda calculus." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2013. http://dx.doi.org/10.2298/NS20131009IVETIC.

Abstract:
This thesis studies computational interpretations of the intuitionistic sequent calculus with implicit and explicit structural rules, with a focus on systems with intersection types. The contributions of the thesis are grouped into three parts. In the first part, intersection types are introduced into the lambda Gentzen calculus. The second part presents an extension of the lambda Gentzen calculus to a term calculus with resource control, i.e. with explicit operators for contraction and weakening, together with an appropriate intersection type assignment system that characterises strong normalisation in the proposed calculus. In the third part, both previously studied calculi are integrated into one framework by introducing the notion of the resource control cube.
20

Downen, Paul. "Sequent Calculus: A Logic and a Language for Computation and Duality." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22659.

Abstract:
Truth and falsehood, questions and answers, construction and deconstruction; most things come in dual pairs. Duality is a mirror that reveals the new from the old via opposition. This idea appears pervasively in logic, where duality inverts "true" with "false" and "and" with "or." However, even though programming languages are closely connected to logics, this kind of strong duality is not so apparent in practice. Sum types (disjoint tagged unions) and product types (structures) are dual concepts, but in the realm of programming, natural biases obscure their duality. To better understand the role of duality in programming, we shift our perspective. Our approach is based on the Curry-Howard isomorphism which says that programs following a specification are the same as proofs for mathematical theorems. This thesis explores Gentzen's sequent calculus, a logic steeped in duality, as a model for computational duality. By applying the Curry-Howard isomorphism to the sequent calculus, we get a language that combines dual programming concepts as equal opposites: data types found in functional languages are dual to co-data types (interface-based objects) found in object-oriented languages, control flow is dual to information flow, induction is dual to co-induction. This gives a duality-based semantics for reasoning about programs via orthogonality: checking safety and correctness based on a comprehensive test suite. We use the language of the sequent calculus to apply ideas from logic to issues relevant to program compilation. The idea of logical polarity reveals a symmetric basis of primitive programming constructs that can faithfully represent all user-defined data and co-data types. We reflect the lessons learned back into a core language for functional languages, at the cost of symmetry, with the relationship between the sequent calculus and natural deduction. This relationship lets us derive a pure lambda calculus with user-defined data and co-data which we further extend by bringing out the implicit control-flow in functional programs. Explicit control-flow lets us share and name control the same way we share and name data, enabling a direct representation of join points, which are essential for tractable optimization and compilation.
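The data/co-data duality at the heart of the dissertation can be approximated in Haskell: data is defined by its constructors and consumed by pattern matching, while co-data is defined by the observations it answers. Haskell can only encode the co-data side with lazy record fields, but the contrast is visible (our illustration, not the dissertation's sequent syntax):

-- Data: defined by how values are built; consumed by pattern matching.
data Tree = Leaf Int | Node Tree Tree

sumT :: Tree -> Int
sumT (Leaf n)   = n
sumT (Node l r) = sumT l + sumT r

-- Co-data: defined by the observations it answers ('hd' and 'tl'); built by
-- giving a response to every observation, e.g. an infinite stream.
data Stream a = Stream { hd :: a, tl :: Stream a }

nats :: Stream Int
nats = go 0 where go n = Stream n (go (n + 1))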
21

Roh, Kyeong Hah. "College students' intuitive understanding of the concept of limit and their level of reverse thinking." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1124365986.

Thesis (Ph.D.), Ohio State University, 2005.
22

Maurer, Luke. "The Design of Intermediate Languages in Optimizing Compilers." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23903.

Abstract:
Every compiler passes code through several stages, each a sort of mini-compiler of its own. Thus each stage may deal with the code in a different representation, which may have little to do with the source or target language. We can describe these in-memory representations as languages in their own right, which we call intermediate languages. Each intermediate language is designed to accommodate the stage of compilation that handles it. Those toward the end of the compilation pipeline, for instance, tend to have features expressing low-level details of computation. A subtler case is that of the optimization stage, whose role is to transform the program so that it runs faster, uses less memory, and so forth. The optimizer faces tradeoffs: the language should provide enough information to guide optimization algorithms, but all of this information must be kept up to date as the program is transformed. Also, establishing invariants in the language can be helpful both in implementing algorithms and in debugging the implementation, but each invariant may complicate desirable transformations or rule them out altogether. Finally, a language where the invariants are obviously correct may have a form too awkward or otherwise unsuited to the compiler's needs. Given the properties and invariants that we would like the language to provide, we can approach the design task in a way that gives these features without necessarily sacrificing implementability: begin with a formal language that makes the desired properties obvious, then translate it to one more suitable for implementation. We can even translate theorems about valid transformations in the formal language to derive correct algorithms in the implementation language. This dissertation explores the connections between different intermediate languages and how they can be interderived, then demonstrates how such translation led to an improvement to the Glasgow Haskell Compiler's optimization engine. This dissertation includes previously published coauthored material.
23

Ritter, Gerd. "Verification formelle d'equivalence des systemes digitaux sequentiels par simulation symbolique." Université Joseph Fourier (Grenoble), 2001. http://www.theses.fr/2001GRE10044.

Abstract:
We propose a new symbolic simulation methodology for verifying the equivalence of sequential circuits described at different levels of abstraction. An automatic formal verification tool is used to show the equivalence between a structural description, which specifies implementation details, and its behavioral specification. Gate-level descriptions produced by a commercial synthesis tool were compared with behavioral and structural register-transfer-level specifications. The specification need not be synthesizable, nor equivalent to the implementation at every clock cycle; the method can later also be applied to property verification. Symbolic simulation follows paths whose logical consistency the tool guarantees. We obtain a good trade-off between precision and speed by detecting equivalences with an extensible set of techniques. Ordered binary decision diagrams (OBDDs) are used to detect equivalences in certain special cases. Combinatorial explosion is avoided by reusing the results of the other detection techniques and by representing only a small part of the verification problem with decision diagrams. The cooperation of all these techniques, and the generation of traces for debugging, are made possible by associating relations with equivalence classes instead of manipulating symbolic expressions.
24

Khouzam, Bassem. "Neural networks as cellular computing models for temporal sequence processing." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0007/document.

Abstract:
The thesis proposes a sequence-learning approach that uses fine-grained self-organization mechanisms. The manuscript first situates this effort in the perspective of contributing to the promotion of the cellular computing paradigm in computer science. Computation within this paradigm is divided into a large number of elementary calculations carried out in parallel by computing cells, with information exchanged between them. In addition to their fine-grained nature, the cellular character of such architectures lies in the spatial topology of the connections between cells, which complies with the constraints of the future technological evolution of hardware. The manuscript examines most of the distributed architectures known in computer science from this perspective, finding that very few of them strictly fall within the cellular paradigm. We are interested in the learning capacity of these architectures, because of the importance of this notion in the related domain of neural networks, for example, without forgetting that cellular systems are by construction complex dynamical systems. This inevitable dynamical component motivated our focus on the learning of temporal sequences, for which we review the different models in the domains of supervised neural networks and self-organizing maps. Finally, we propose an architecture that contributes to the promotion of cellular computing in the sense that it exhibits self-organization properties for extracting a representation of the states of the dynamical system that provides its inputs, even when those inputs are ambiguous and only partially reflect the state. Thanks to access to a computing cluster for our simulations, we were able to implement a complex architecture and observe new emergent phenomena. Based on these results, we develop a critical discussion that sets out perspectives for future work.
25

Straßburger, Lutz. "Linear Logic and Noncommutativity in the Calculus of Structures." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2003. http://nbn-resolving.de/urn:nbn:de:swb:14-1063208959250-72937.

Abstract:
In this thesis I study several deductive systems for linear logic, its fragments, and some noncommutative extensions. All systems will be designed within the calculus of structures, which is a proof theoretical formalism for specifying logical systems, in the tradition of Hilbert's formalism, natural deduction, and the sequent calculus. Systems in the calculus of structures are based on two simple principles: deep inference and top-down symmetry. Together they have remarkable consequences for the properties of the logical systems. For example, for linear logic it is possible to design a deductive system, in which all rules are local. In particular, the contraction rule is reduced to an atomic version, and there is no global promotion rule. I will also show an extension of multiplicative exponential linear logic by a noncommutative, self-dual connective which is not representable in the sequent calculus. All systems enjoy the cut elimination property. Moreover, this can be proved independently from the sequent calculus via techniques that are based on the new top-down symmetry. Furthermore, for all systems, I will present several decomposition theorems which constitute a new type of normal form for derivations.
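As a taste of deep inference, here is the switch rule of multiplicative linear logic in calculus-of-structures style, where S{ } is an arbitrary context and the rule may be applied arbitrarily deep inside a structure (recalled from the deep-inference literature; notation varies across presentations):

\[
\mathrm{s}\;\frac{S\{(A \parr B) \otimes C\}}{S\{A \parr (B \otimes C)\}}
\]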
26

Arisaka, Ryuta. "Proof-theoretical observations of BI and BBI base-logic interactions, and development of phased sequence calculus to define logic combinations." Thesis, Teesside University, 2013. http://hdl.handle.net/10149/315552.

Abstract:
I study sequent calculus of combined logics in this thesis. Two specific logics are looked at: Logic BI, which combines intuitionistic logic and multiplicative intuitionistic linear logic, and Logic BBI, which combines classical logic and multiplicative linear logic. A proof-theoretical study into logical combinations themselves then follows. To consolidate intuition about what this thesis is all about, let us suppose that we know about two different logics, Logic A developed for reasoning about Purpose A and Logic B developed for reasoning about Purpose B. Logic A serves Purpose A very well, but not Purpose B. Logic B serves Purpose B very well, but not Purpose A. We wish to fulfil both Purpose A and Purpose B, but presently we can only afford to let one logic guide our reasoning. What shall we do? One option is to be content with having Logic A, with which we handle Purpose A efficiently and Purpose B rather inefficiently. Another option is to choose Logic B instead. But there is yet another option: we combine Logic A and Logic B to derive a new logic, Logic C, which is still one logic but which serves both Purpose A and Purpose B efficiently. The combined logic synthesises the strengths of the more basic logics (Logic A and Logic B). As it nicely takes care of our requirements, it may be the best choice among all those considered so far. Yet this is not the end of the story. Depending on the manner in which Logic A and Logic B combine, Logic C may have extensions serving more purposes than just Purpose A and Purpose B. The following problem ensues: we know about Logic A and Logic B, but we may not know about the combined logics of the base logics. To understand the combined logics, we need to understand the extensions in which the base logics interact with each other. Analysis of the interesting parts tends to be non-trivial, however, and the two specific combined logics BI and BBI are no exception; their proof-theoretical development has been particularly slow. It has remained obscure how to properly handle the base-logic interactions of the combined logics as they appear syntactically. As one objective of this thesis, I provide an analysis of the syntactic phenomena of the BI and BBI base-logic interactions within sequent calculus, to augment this knowledge. For BI, I deliver, through appropriate methodologies for reasoning about the syntactic phenomena of the base-logic interactions, the first BI sequent calculus free of any structural rules. Given its positive consequences for efficient proof search, this is a significant step towards further maturity of BI proof theory. Based on the calculus, I prove decidability of a fragment of BI purely syntactically. For BBI, which is closely connected to application via separation logic, I develop adequate sequent calculus conventions and consider the implication of the underlying semantics for the syntax. The result is sound BBI sequent calculi with a closer syntax-semantics correspondence than previously envisaged. From them, adaptation to separation logic is also considered. To promote the knowledge of combined logics in general within computer science, it is also important that we be able to study logical combinations themselves. Towards this direction of generalisation, I present the concept of phased sequent calculus: sequent calculus which physically separates base logics, and in which a specific manner of logical combination between them can actually be developed and analysed. For a demonstration, the said decidable BI fragment is formulated in phased sequent calculus, and the sense of logical combination in effect is analysed. A decision procedure is presented for the fragment.
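To make the base-logic interaction concrete, recall (a standard illustration of BI's design, not a quotation from the thesis) that BI keeps two conjunctions with different context-management disciplines: the additive, intuitionistic rule shares the context, while the multiplicative, linear rule splits it:

\[
\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}\;(\wedge\mathrm{R})
\qquad\qquad
\frac{\Gamma \vdash A \qquad \Delta \vdash B}{\Gamma , \Delta \vdash A \ast B}\;(\ast\mathrm{R})
\]

In BI proper, contexts are therefore not lists but "bunches": trees built from two context-forming operations, a semicolon (additive, admitting weakening and contraction) and a comma (multiplicative, admitting neither). Managing the structural rules of such bunched contexts is precisely what makes a structural-rule-free BI calculus a non-trivial achievement.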
APA, Harvard, Vancouver, ISO, and other styles
27

Chapman, Peter. "Tools and techniques for formalising structural proof theory." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/933.

Full text
Abstract:
Whilst results from Structural Proof Theory can be couched in many formalisms, it is the sequent calculus which is the most amenable of the formalisms to metamathematical treatment. Constructive syntactic proofs are filled with bureaucratic details; rarely are all cases of a proof completed in the literature. Two intermediate results can be used to drastically reduce the amount of effort needed in proofs of Cut admissibility: Weakening and Invertibility. Indeed, whereas there are proofs of Cut admissibility which do not use Invertibility, Weakening is almost always necessary. Use of these results simply shifts the bureaucracy, however; Weakening and Invertibility, whilst easier to prove, are still not trivial. We give a framework under which sequent calculi can be codified and analysed, which then allows us to prove general conditions: for a calculus to admit Weakening, and for a rule to be invertible in a calculus. For the latter, even though many calculi are investigated, the general condition is simple and easily verified. The results have been applied to G3ip, G3cp, G3s, G3-LC and G4ip. Invertibility is important in another respect: that of proof search. Should all rules in a calculus be invertible, then terminating root-first proof search gives a decision procedure for formulae without the need for backtracking. To this end, we present some results about the manipulation of rule sets. It is shown that the transformations do not affect the expressiveness of the calculus, yet may render more rules invertible. These results can guide the design of efficient calculi. When using interactive proof assistants, every case of a proof, however complex, must be addressed and proved before one can declare the result formalised. To do this in a human-readable way adds a further layer of complexity; most proof assistants give output which is legible only to a skilled user of that proof assistant. We give human-readable formalisations of Cut admissibility for G3cp and G3ip, Contraction admissibility for G4ip and Craig's Interpolation Theorem for G3i using the Isar vernacular of Isabelle. We also formalise the new invertibility results, in part using the package for reasoning about first-order languages, Nominal Isabelle. Examples are given showing the effectiveness of the formalisation. The formal proof of invertibility using the new methods is drastically shorter than the traditional, direct method.
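For orientation, the two intermediate results are usually proved in height-preserving form; for a G3-style calculus the statements run roughly as follows (a standard formulation, not quoted from the thesis):

\[
\text{(hp-Weakening)}\quad \vdash_n \Gamma \Rightarrow \Delta \ \text{ implies } \ \vdash_n \Gamma, A \Rightarrow \Delta,
\]
\[
\text{(hp-Invertibility of } \wedge L\text{)}\quad \vdash_n \Gamma, A \wedge B \Rightarrow \Delta \ \text{ implies } \ \vdash_n \Gamma, A, B \Rightarrow \Delta,
\]

where \(\vdash_n\) means derivability by a derivation of height at most \(n\). Height preservation is what lets these lemmas be applied inside the inductive cases of the Cut admissibility proof without disturbing the induction measure.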
APA, Harvard, Vancouver, ISO, and other styles
28

Chaux, Pierre-Yves. "Formalisation de la cohérence et calcul des séquences de coupe minimales pour les systèmes binaires dynamiques et réparables." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00910331.

Full text
Abstract:
Predictive risk analysis of a complex system today relies on modelling the dynamics of the system with respect to the failures and repairs of its components. The qualitative analysis of such a system consists in finding and analysing the scenarios that lead to system failure. Because of their number, it is common to consider only the most characteristic ones, the Minimal Cut Sequences (MCS). The absence of a formalisation of these MCS has produced either definitions specific to particular modelling tools or informal definitions. The work presented in this thesis proposes: i) a framework and a formal definition of minimal cut sequences, both independent of the reliability modelling tool used; ii) a method for computing them, based on properties deduced from their definition; iii) the extension of the first definitions to multi-mode components. This framework allows the computation of MCS for installations described with Boolean logic Driven Markov Processes (BDMP). Under the hypothesis that the set of scenarios implicitly represented by the safety model can be described by a finite automaton, this work defines the notion of coherence for dynamic repairable systems, and a way to obtain a minimal representation of the set of scenarios leading to system failure.
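For intuition, under one common definition (an illustrative sketch, not the formalisation developed in the thesis), a cut sequence is an ordered sequence of component failure events leading to system failure, and it is minimal when no proper subsequence is itself a cut sequence. A brute-force minimality filter then looks like this in Python:

def is_subsequence(s, t):
    """True if sequence s occurs in t in order (not necessarily contiguously)."""
    it = iter(t)
    return all(e in it for e in s)  # 'in' consumes the iterator up to each match

def minimal_cut_sequences(cut_sequences):
    """Keep only the sequences with no proper subsequence that is also a cut sequence."""
    cs = [tuple(s) for s in cut_sequences]
    return [s for s in cs
            if not any(t != s and is_subsequence(t, s) for t in cs)]

# Example: if both (A, B) and (A, C, B) lead to failure, only (A, B) is minimal.
print(minimal_cut_sequences([("A", "B"), ("A", "C", "B")]))  # [('A', 'B')]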
APA, Harvard, Vancouver, ISO, and other styles
29

Fonseca, Maycon Odailson dos Santos da. "Proposta de tarefas para um estudo inicial de derivadas." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2499.

Full text
Abstract:
Accompanied by: Task booklet: a proposal of tasks for an initial study of derivatives
CNPq
This dissertation presents a proposal of tasks for the initial study of derivatives in the teaching of differential and integral calculus (CDI) in higher education, in regular classes of an Engineering programme at the Federal University of Technology - Paraná (UTFPR), Londrina campus. The general objective of the research is to propose tasks that give students the opportunity to explore the ideas needed to understand the concept of derivative, in particular tasks to be applied at the moments when the study of derivatives, in its more formal approach, begins. Since this is a professional master's degree, the intention was to build a task booklet (the educational product): after two research cycles, three tasks were selected to compose the final product of the research. The analyses showed the need to adapt and reformulate the tasks between cycles and, for task 3 in particular, the intention of a further reformulation and application in a new research cycle.
APA, Harvard, Vancouver, ISO, and other styles
30

Birštunas, Adomas. "Sekvenciniai skaičiavimai BDI logikoms su efektyvia ciklų paieška." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100302_095338-77193.

Full text
Abstract:
Sequent calculi for BDI logics are the research object of this thesis. BDI logics are widely used for agent system description and implementation. Agents are autonomous systems that act in some environment and strive to achieve preassigned goals. Implementing decision making is the most important and most complicated part of implementing agent systems, and logical calculi may be used for this purpose. This thesis therefore investigates sequent calculi for BDI logics. Sequent calculi for BDI logics, like sequent calculi for other modal logics, use a loop-check technique to obtain decidability. An inefficient loop-check takes a major part of the resources used for derivation. For some modal logics, loop-check-free sequent calculi, or calculi with an efficient loop-check, are known. In this thesis, a loop-check-free sequent calculus is presented for KD45 logic, which is the main fragment of the BDI logics. The introduced calculus not only eliminates loop-check but also simplifies sequent derivation. For branching-time logic (another BDI logic fragment) a sequent calculus with an efficient loop-check is presented. The obtained results are adapted to create sequent calculi for single-agent and multi-agent BDI logics. The introduced calculi use only a restricted loop-check; moreover, loop-check is eliminated entirely for some types of loops. These results enable the creation of more efficient agent systems based on BDI logics.
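As a rough illustration of why loop-check matters (a generic sketch, not the calculi of the thesis): naive root-first proof search must remember every sequent seen on the current branch and stop expanding when a sequent repeats, and this bookkeeping is exactly what loop-check-free calculi avoid. Here `rules` and `is_axiom` are user-supplied placeholders, and sequents are assumed hashable:

def provable(sequent, rules, is_axiom, seen=frozenset()):
    """Root-first proof search with a naive loop-check.
    `rules(sequent)` returns a list of premise lists, one per applicable rule
    instance; `is_axiom(sequent)` recognises initial sequents."""
    if is_axiom(sequent):
        return True
    if sequent in seen:       # loop-check: this sequent already occurred on the branch,
        return False          # so expanding it again could only repeat the same work
    seen = seen | {sequent}
    return any(all(provable(p, rules, is_axiom, seen) for p in premises)
               for premises in rules(sequent))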
APA, Harvard, Vancouver, ISO, and other styles
31

Menezes, Daniel Brandão. "O ensino do cálculo diferencial e integral na perspectiva da Sequência Fedathi: caracterização do comportamento de um bom professor." Universidade Federal do Ceará, 2017. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=20559.

Full text
Abstract:
Degree programmes in the exact sciences, and in particular the mathematics teaching degrees in Ceará, still face many challenges with the course Differential and Integral Calculus, concerning the methodological aspects developed in the didactic sessions, which remain a source of dissatisfaction and a reason for dropout and failure among students. In view of this problem, this thesis reports research framed by the research and teaching methodology Fedathi Sequence (SF), whose purpose was to investigate how its relation to the Theory of Advanced Mathematical Thinking (AMT) can underpin the teaching of Differential and Integral Calculus (CDI) for the students of a study group at the Universidade Estadual Vale do Acaraú, answering how this contributes to the learning of concepts and procedures in that course, in particular the content of rates of change, and how the teacher can be characterised on the basis of these concepts. As preliminary theoretical support, studies of the Fedathi Sequence, of the Theory of Advanced Mathematical Thinking and of the computational resource Geogebra were used to contribute to improving the teaching-learning process. To achieve these objectives, the didactic sessions were designed and conducted with the Fedathi Sequence as the methodology for teaching the content. The research is qualitative in nature, designed as participant research, and followed the Fedathi Sequence as scientific method, described and studied throughout the work. As field and subjects of the investigation, the study took place in a study group created within the mathematics programme of the Universidade Estadual Vale do Acaraú; the subjects were the enrolled students and the teacher who mediated the meetings. During the experimentation, the research questions were answered and results were gathered that served as a basis for characterising good teachers and good students. The research methodology proved to be a rigorous method to be promoted scientifically, correctly directing each stage of the experiment and the methodological instruments needed for data collection. Throughout the practices and the subsequent analysis, it was possible to establish a relation between the SF and AMT in the tests applied with the students. In addition, it was concluded how beneficial the use of the computational resource was for the comprehension of the contents, with contextualised questions used as problems in the Taking-a-position phase; that is, it contributed understandings for the teaching of Differential and Integral Calculus, advanced research in Mathematics Education and, above all, showed how the teacher's behaviour influenced the students' emotions towards mathematics and the conduct of the Fedathi Sequence.
APA, Harvard, Vancouver, ISO, and other styles
32

Giedra, Haroldas. "Proof system for logic of correlated knowledge." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141230_152734-55494.

Full text
Abstract:
An automated proof system for the logic of correlated knowledge is presented in the dissertation. The system consists of the sequent calculus GS-LCK and the proof search procedure GS-LCK-PROC. The sequent calculus is sound and complete, and satisfies the properties of invertibility of rules and admissibility of weakening, contraction and cut. The procedure GS-LCK-PROC is terminating and allows one to check whether a sequent is provable. Decidability of the logic of correlated knowledge has also been proved: using the terminating procedure GS-LCK-PROC, the validity of any formula of the logic of correlated knowledge can be checked.
APA, Harvard, Vancouver, ISO, and other styles
33

Giedra, Haroldas. "Įrodymų sistema koreliatyvių žinių logikai." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141230_152716-86030.

Full text
Abstract:
An automated proof system for the logic of correlated knowledge is presented in the dissertation. The system consists of the sequent calculus GS-LCK and the proof search procedure GS-LCK-PROC. The sequent calculus is sound and complete, and satisfies the properties of invertibility of rules and admissibility of weakening, contraction and cut. The procedure GS-LCK-PROC is terminating and allows one to check whether a sequent is provable. Decidability of the logic of correlated knowledge has also been proved: using the terminating procedure GS-LCK-PROC, the validity of any formula of the logic of correlated knowledge can be checked.
APA, Harvard, Vancouver, ISO, and other styles
34

Bezerra, Cristina Alves. "Proposta de abordagem para as técnicas de integração usando o software Geogebra." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14598.

Full text
Abstract:
This work proposes an approach to the Techniques of Integration that complements the treatment given by the authors of Differential and Integral Calculus (CDI) textbooks. Its general objective is to structure and propose teaching situations supported by digital technology, more precisely the software Geogebra, concerning the Techniques of Integration, in which the graphic-geometric patterns related to the integrand functions and their antiderivatives are explored. The organisation of the research followed the two initial phases of Didactical Engineering: Preliminary Analyses and A Priori Analysis. The structuring of the didactic sessions, involving differentiated problem situations, respected the phases of the Fedathi Sequence: Taking a position, Maturation, Solution and Proof. We began the work by surveying the problem: we identified that the content, Techniques of Integration, is treated purely algebraically in the CDI textbooks cited in the Didactic Unit Programme (PUD) of the Mathematics Teaching Degree of IFCE - Juazeiro do Norte. To record this observation, we commented on the approaches of the authors Stewart (2010), Guidorizzi (2011) and Leithold (1994), noting that there is indeed a limitation in the exploration of the graphic-geometric patterns related to the techniques of substitution of variables, integration by parts, partial fractions and trigonometric substitution. As an educational product, a website was developed on which we make available the video lessons and the corresponding didactic sessions.
APA, Harvard, Vancouver, ISO, and other styles
35

Brage, Jens. "A Natural Interpretation of Classical Proofs." Doctoral thesis, Stockholm : Dept. of mathematics, Stockholm university, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Tête, Claire. "Profondeur, dimension et résolutions en algèbre commutative : quelques aspects effectifs." Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT2288/document.

Full text
Abstract:
This commutative algebra thesis focuses mainly on depth theory. We try to provide an approach free of noetherian hypotheses, in order to escape prime ideals and to handle only basic and explicit concepts. We study the algebraic complexes of Koszul and Cech and their cohomological properties by using simple results on the cohomology of the totalization of a bicomplex. In the Cech cohomology context we establish the long exact sequence of Mayer-Vietoris with a treatment based only on the elements. Another important concept is that of Krull dimension. Its characterization in terms of boundary monoids allows us to prove expeditiously the vanishing theorem of Grothendieck in Cech cohomology. We also provide an algorithm to complete a homogeneous polynomial into an h.s.o.p. Depth is closely related to the theory of finite free/projective resolutions. We report a generalization of the Ferrand-Vasconcelos theorem due to Jouanolou. In addition, we review some results involving the depth of the ideals of expected ranks in a finite free resolution. We revisit, in a particular case, a construction due to Tate; this allows us to give an effective projective resolution of the ideal of a point of a smooth hypersurface. Finally, we discuss regularity theory in dimension 1 by studying invertible ideals, and provide an algorithm implemented in Magma computing the ring of integers of a number field.
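One way to see why the Koszul complex carries depth information (a classical fact recalled here for orientation, stated in the usual noetherian, finitely generated setting; part of the thesis's point is to rework such statements without those hypotheses): if \(\underline{x} = x_1, \dots, x_n\) generates the ideal \(I\) of \(A\) and \(M\) is an \(A\)-module, the homology of the Koszul complex measures the grade of \(I\) on \(M\):

\[
\operatorname{grade}(I, M) \;=\; n \,-\, \sup\{\, i \;\mid\; H_i(\underline{x}; M) \neq 0 \,\}.
\]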
APA, Harvard, Vancouver, ISO, and other styles
37

Ben, Nsira Nadia. "Algorithme de recherche incrémentale d'un motif dans un ensemble de séquences d'ADN issues de séquençages à haut débit." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR143/document.

Full text
Abstract:
In this thesis we are interested in the problem of on-line pattern matching in highly similar sequences arising from Next Generation Sequencing (NGS) technologies. These sequences differ from one another only by a very small number of variations and thus exhibit a very high level of similarity. There is therefore a strong need for efficient algorithms for performing fast pattern matching in such specific sets of sequences. We develop new algorithms to address this problem. This thesis is divided into five parts. In the first part, we present a state of the art of the most popular pattern matching algorithms and the related indexes. Then, in the three following parts, we develop three algorithms directly dedicated to the on-line search for patterns in a set of highly similar sequences. Finally, in the fifth part, we conduct an experimental study of these algorithms. This study shows that our algorithms are efficient in practice in terms of computation time.
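The core idea can be sketched as follows (a simplified illustration under the assumption of substitution-only variants, not the algorithms of the thesis): search the reference sequence once with any classical algorithm, then, for each highly similar sequence, re-examine only the windows that overlap its few variation positions.

def find_all(pattern, text):
    """Naive exact pattern matching; returns all start positions."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if text[i:i + m] == pattern]

def search_similar(pattern, reference, variants):
    """`variants` maps a sequence name to a list of (position, new_char)
    substitutions relative to `reference`."""
    m = len(pattern)
    base_hits = set(find_all(pattern, reference))   # computed once
    results = {}
    for name, edits in variants.items():
        seq = list(reference)
        for pos, ch in edits:
            seq[pos] = ch
        seq = "".join(seq)
        hits = set(base_hits)
        # Only occurrences whose window overlaps an edited position can change.
        for pos, _ in edits:
            for i in range(max(0, pos - m + 1), min(len(seq) - m, pos) + 1):
                if seq[i:i + m] == pattern:
                    hits.add(i)
                else:
                    hits.discard(i)
        results[name] = sorted(hits)
    return results

print(search_similar("ACG", "TTACGTT", {"v1": [(3, "A")]}))  # {'v1': []}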
APA, Harvard, Vancouver, ISO, and other styles
38

Ngo, Thoi-Nhan. "Contrôle optimal en temps discret et en horizon infini." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E062/document.

Full text
Abstract:
This thesis contains original contributions to optimal control theory in the discrete-time framework and in infinite horizon, following the viewpoint of Pontryagin. There are five chapters. In Chapter 1, we recall preliminary results on sequence spaces and on differential calculus in normed linear spaces. In Chapter 2, we study a single-objective optimal control problem in the discrete-time framework and in infinite horizon, with an asymptotic constraint and an autonomous system. We take a functional-analytic approach, translating the problem into an optimization problem in Banach (sequence) spaces; a weak Pontryagin principle is then established using a classical multiplier rule in Banach spaces. In Chapter 3, we establish a strong Pontryagin principle for the problems considered in Chapter 2, using a result of Ioffe and Tihomirov. Chapter 4 is devoted to more general discrete-time, infinite-horizon optimal control problems with several different criteria. The method used is the reduction to finite horizon, initiated by J. Blot and H. Chebbi in 2000. The problems considered are governed by difference equations or difference inequations. A new weak Pontryagin principle is established using a recent result of J. Blot on Fritz John multipliers. Chapter 5 deals with multicriteria optimal control problems in the discrete-time framework and infinite horizon. New weak and strong Pontryagin principles are established, again using recent optimization results, under lighter assumptions than those of existing results.
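As a point of reference, a discrete-time weak Pontryagin principle for the system \(x_{t+1} = f_t(x_t, u_t)\) with criterion \(\sum_{t \ge 0} \phi_t(x_t, u_t)\) typically asserts (a generic textbook form for interior controls, not a statement quoted from the thesis) the existence of adjoint vectors \((p_{t+1})_{t \ge 0}\) such that, along an optimal process \((\hat{x}, \hat{u})\),

\[
p_t = D_x H_t(\hat{x}_t, \hat{u}_t, p_{t+1}), \qquad D_u H_t(\hat{x}_t, \hat{u}_t, p_{t+1}) = 0,
\qquad \text{where } H_t(x,u,p) = \phi_t(x,u) + \langle p, f_t(x,u) \rangle.
\]

The "strong" versions replace the stationarity condition \(D_u H_t = 0\) by a genuine maximization of \(H_t\) over the control set.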
APA, Harvard, Vancouver, ISO, and other styles
39

Reysset, Aurelien. "Conception préliminaire d'actionneurs électromécaniques - outils d'aide à la spécification et à la génération de procédures de dimensionnement pour l'optimisation." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0003/document.

Full text
Abstract:
The aim of this thesis is to provide a package of software tools within a whole methodology for mechatronic system design. It comes as an add-on to work already carried out at the laboratory in the field of the new generation of aircraft actuation systems: electromechanical actuators (EMA). This technology raises new problems, leading engineers to modify their development process as early as the specification phase, when mission profiles have to be generated/transformed/analyzed in order to simplify the design and ensure the validation step. A Simulink toolbox has therefore been created to meet the need for an information translator working as an intermediate between airframer and system supplier. As for all embedded systems, the designer has to face performance-lifetime-integration trade-offs, which can be treated as an optimization problem described by a set of equations and constraints. Particular attention is paid here to the conditioning of those explicit equations, in order to obtain a standardized calculation sequence adapted to many optimization algorithms. The method and the implemented software, both based on graph theory, interact with the designer to inform them of possible singularity and algebraic-loop issues, providing leads for their resolution. Finally, some preliminary sizing studies of landing gear and primary flight control surface (aileron and spoiler) actuation systems are presented to highlight the possibilities brought by this innovative approach: integrated design with complex kinematics, decoupled collaborative multi-partner design, and the use of response surfaces to speed up the optimization.
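At its core, the sequencing problem described here is a topological ordering of a directed computation graph, and a cycle corresponds to an algebraic loop. A minimal sketch using Kahn's algorithm (hypothetical variable names; not the tool developed in the thesis):

from collections import defaultdict, deque

def order_calculations(deps):
    """`deps` maps each computed variable to the set of variables its explicit
    calculation needs. Returns an evaluation order, or raises when a
    dependency cycle (an algebraic loop) is detected."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    indeg = {n: 0 for n in nodes}
    users = defaultdict(set)
    for v, ds in deps.items():
        for d in ds:
            users[d].add(v)
            indeg[v] += 1
    ready = deque(n for n in nodes if indeg[n] == 0)  # free inputs/parameters
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for u in users[n]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(order) != len(nodes):
        loop = sorted(n for n in nodes if indeg[n] > 0)
        raise ValueError(f"algebraic loop among: {loop}")
    return order

# e.g. torque needs speed and inertia; inertia needs mass:
print(order_calculations({"torque": {"speed", "inertia"}, "inertia": {"mass"}}))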
APA, Harvard, Vancouver, ISO, and other styles
40

Kosowski, Adrian. "Time and Space-Efficient Algorithms for Mobile Agents in an Anonymous Network." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00867765.

Full text
Abstract:
Computing with mobile agents is rapidly becoming a topic of mainstream research in the theory of distributed computing. The main research questions undertaken in this study concern the feasibility of solving fundamental tasks in an anonymous network, subject to limitations on the resources available to the agent. The considered challenges include: exploring a graph by means of an agent with limited memory, discovery of the network topology, and attempting to meet another agent in the network (rendezvous). The constraints imposed on the agent include the number of moves which the agent is allowed to perform in the network, the amount of state memory available to the agent, the ability of the agent to communicate with other agents, as well as its a priori knowledge of the network topology or of global parameters.
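As an elementary baseline for the exploration task (a well-known folklore result, not a contribution of the habilitation): a memoryless random walk already explores any connected graph, with expected cover time bounded by O(nm), since the agent need only pick a random port at each step.

import random

def random_walk_cover(adj, start, rng=random.Random(0)):
    """Walk until every vertex of the connected graph has been visited.
    `adj` maps a vertex to its list of neighbours (ports). Returns the step
    count; the `visited` set is for measurement only, the agent keeps no state."""
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        v = rng.choice(adj[v])
        visited.add(v)
        steps += 1
    return steps

# Cycle on 5 vertices:
print(random_walk_cover({i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}, 0))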
APA, Harvard, Vancouver, ISO, and other styles
41

ALVES, Francisco Regis Vieira. "Aplicações da sequência Fedathi na promoção do raciocínio intuitivo no cálculo a várias variáveis." www.teses.ufc.br, 2011. http://www.repositorio.ufc.br/handle/riufc/3166.

Full text
Abstract:
ALVES, Francisco Regis Vieira. Aplicações da sequência Fedathi na promoção do raciocínio intuitivo no cálculo a várias variáveis. 2011. 398f. Tese (Doutorado em Educação) – Universidade Federal do Ceará, Faculdade de Educação, Programa de Pós-Graduação em Educação Brasileira, Fortaleza-CE, 2011.
Made available in DSpace on 2012-07-11T15:04:21Z (GMT). No. of bitstreams: 1 2011_Tese_ FRVALVES.pdf: 14201518 bytes, checksum: 22b6bb75fce50eeb927d7cdec1d5d361 (MD5) Previous issue date: 2011
This study deals with the teaching and learning of Differential and Integral Calculus of Several Variables (CVV). Its general objective was the identification and description of the categories of intuitive reasoning throughout the teaching phases of the methodology named the Fedathi Sequence. The structuring and design of didactic teaching situations, involving problem situations that differ from the algorithmic rituals identified in CVV textbooks, were achieved on the basis of a complementarity between the Theory of Semiotic Representations and the categories of intuitive reasoning described by Fischbein (1987), explored in the four phases of the Fedathi Sequence. We began the work with a survey and comprehension of the teaching and learning of Calculus of One Real Variable (CUV) and of the few scientific studies developed, in Brazil and abroad, on the teaching of CVV, with a final emphasis on describing the internal transition from CUV to CVV, which is not observed in academic studies. Then, with the intention of delineating, characterising, discussing and understanding the nature of the main reasoning we intended to record, we discussed the epistemological, philosophical and psychological nature of intuitive reasoning, its categories (affirmative intuition, conjectural intuition and anticipatory intuition) and other psychic faculties linked to it, namely perception and insight. After characterising a teaching of CVV supported by belief and mathematical certainty, we presented and discussed the main elements of the Fedathi Sequence and of the theories proposed by Fischbein (1987) and Duval (1991; 1995a). Next, regarding the development of the research and the field investigation, supported by the complementarity of these theories, we analysed recognised CVV textbooks, which serve as study references, with the intention of identifying and overcoming possible obstacles in the elaboration of the activities applied to the students. The empirical data were obtained from documents produced by a group of eight students chosen from a total sample of 80 students of the Mathematics Teaching Degree of the Federal Institute of Education, Science and Technology (IFCE), Fortaleza, in 2009/2010, enrolled in the course Calculus III, by means of semi-structured interviews carried out during and after the activities, individually and with visual recording of the moment in which they developed their strategies; for purposes of discussion in the body of the thesis, however, only eight students are presented. At the end of this study, we can say that the didactic exploration of the categories of intuitive reasoning (affirmative, conjectural and anticipatory intuition), based on a didactic mediation involving the exploration of registers of semiotic representation, can promote the evolution of the student's knowledge of the main concepts of CVV. To this end, computational support, employing software such as Geogebra and the CAS Maple, can indicate more significant elements concerning the internal transition from CUV to CVV. Another relevant point concerns the importance of stimulating the elaboration of mental images produced by teaching that encourages mathematical intuition, the production of metaphors and the perceptual apprehension of the 3D objects of CVV and, in this way, the evolution of non-contradictory epistemic beliefs and values relative to the formal properties of CVV.
APA, Harvard, Vancouver, ISO, and other styles
42

LIN, BING-XIN, and 林秉心. "A study of operational calculus and sequence equations." Thesis, 1987. http://ndltd.ncl.edu.tw/handle/24494868935946924832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chvalovský, Karel. "Nerozhodnutelnost některých substrukturálních logik." Doctoral thesis, 2015. http://www.nusl.cz/ntk/nusl-333791.

Full text
Abstract:
This thesis deals with the algorithmic undecidability (unsolvability) of provability in some non-classical logics. There are in fact two natural variants of this problem: fixing a logic, we can study its set of theorems or its consequence relation, the latter being the more general problem. It is well known that both these problems can be undecidable already for propositional logics, and we provide further examples of such logics in this thesis. In particular, we study propositional substructural logics obtained from the sequent calculus LJ for intuitionistic logic by dropping structural rules. Our main results are the following. First, (finite) consequence relations in some basic non-associative substructural logics are shown to be undecidable. Second, we prove that a basic associative substructural logic with the contraction rule, which is notorious for being hard to handle, has an undecidable set of theorems. Since the studied logics have natural algebraic semantics, we also obtain corresponding algebraic results which are interesting in their own right.
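To fix notation, the structural rules in question, which LJ admits and which substructural systems drop in various combinations, are weakening, contraction and exchange:

\[
\frac{\Gamma, \Delta \Rightarrow C}{\Gamma, A, \Delta \Rightarrow C}\;(\mathrm{w})
\qquad
\frac{\Gamma, A, A, \Delta \Rightarrow C}{\Gamma, A, \Delta \Rightarrow C}\;(\mathrm{c})
\qquad
\frac{\Gamma, A, B, \Delta \Rightarrow C}{\Gamma, B, A, \Delta \Rightarrow C}\;(\mathrm{e})
\]

Dropping exchange, and even the associativity implicit in reading contexts as sequences, leads to the non-associative systems of the first result; keeping contraction while dropping the others is what gives the logic of the second result its notoriously delicate behaviour.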
APA, Harvard, Vancouver, ISO, and other styles
44

Šmídová, Kristýna. "Reedukace formálních poznatků z oblasti matematické analýzy u studentů vysoké školy." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-434555.

Full text
Abstract:
The topic of this thesis is the didactics of mathematical analysis. The thesis describes selected observations from the reeducation, in an individual tutoring environment, of formal knowledge of university students in the field of calculus. The aim of the thesis is to describe what formal knowledge appeared, to describe and evaluate selected reeducation interventions and, on this basis, to formulate appropriate methodological recommendations. In the first chapter we deal with the contradiction between a definition and students' conception of the concept, we outline how to convey to students the purpose of definitions, and we suggest how to teach students to work with definitions properly, including understanding quantified propositions. In the second chapter we present the theory of process and concept together with the theory of generic models. In the third chapter we explain the methods of working with students and the methods of analysing the tutoring videos. In the fourth chapter we analyse cognitive processes behind the concept of the limit of a sequence. KEYWORDS: reeducation, individual tutoring, mechanical knowledge, calculus, definitions, quantified proposition, infinity, sequence, limit
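The quantified proposition central to this material is, of course, the definition of the limit of a sequence, whose nested quantifiers are exactly what such reeducation must help students unpack:

\[
\lim_{n \to \infty} a_n = L \;\iff\; \forall \varepsilon > 0 \;\exists N \in \mathbb{N} \;\forall n \ge N : |a_n - L| < \varepsilon.
\]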
APA, Harvard, Vancouver, ISO, and other styles
45

Laurin-Lemay, Simon. "Modélisation des biais mutationnels et rôle de la sélection sur l’usage des codons." Thesis, 2019. http://hdl.handle.net/1866/24481.

Full text
Abstract:
The acquisition of genomic data continues to grow, as does the appetite to interpret them. But determining the processes that shaped the evolution of coding sequences (and their relative importance) is a scientific challenge that requires the development of statistical models of evolution that increasingly take into account heterogeneities in mutation and selection processes. Identifying selection is a task that typically requires comparing two models: a null model that does not allow for an adaptive evolutionary regime and an alternative model that allows it. When a test between these two models rejects the null, we consider to have detected the presence of adaptive evolution. The task is all the more difficult as the signal is weak and confounded with various heterogeneities neglected by the models. The detection of selection on codon usage is controversial, particularly in Vertebrates. There are several reasons for this controversy: (1) there is a sociological bias in seeing selection as the main driver of evolution, to such an extent that heterogeneities relating to mutation processes are historically neglected; (2) according to the principles of population genetics, the small effective size of vertebrate populations limits the power of selection over synonymous mutations conferring a minimal advantage; (3) On the other hand, selection on the use of codons could be very localized along the coding sequences, at specific sites, subject to selective constraints related to DNA patterns used by the splicing machinery, for example. Phylogenetic mutation-selection models are the preferred tools to address these issues, as they explicitly model mutation processes and selective constraints. All the heterogeneities neglected by the mutation-selection models of Yang and Nielsen [2008] can generate false positives, ranging from 20% (site-specific amino acid preference) to 100% (hypermutability of transitions in CpG context)[Laurin-Lemay et al., 2018b]. In particular, the hypermutability of transitions in the CpG context alone can explain the selection on codon usage detected by Yang and Nielsen [2008]. However, modelling phenomena that take into account data interdependencies (e.g., hypermutability of the CpG context) greatly increases the complexity of the likelihood function. On the other hand, today’s sophisticated models require high-dimensional parameter vectors to model the heterogeneity of the processes studied, in our case selective constraints on the protein. Approximate Bayesian Computation (ABC) is used to bypass the calculation of the likelihood function. This approach differs from the Markov Chain Monte Carlo (MCMC) sampling commonly used to approximate the posterior distribution. We explored the idea of combining these approaches for a specific problem involving high-dimensional parameters and new parameters taking into account dependencies between sites. Under certain conditions, when the high dimensionality parameters are weakly correlated to the new parameters of interest, it is possible to infer the high dimensionality parameters with the MCMC method, and then the parameters of interest using the ABC. This new approach is called Conditional Approximate Bayesian Computation (CABC) [Laurin-Lemay et al., 2018a]. We were able to verify the effectiveness of the CABC method in a case study, namely the hypermutability of transitions in the CpG context within Eutheria [Laurin-Lemay et al.,2018a]. We find that 100% of the 137 genes tested have significant hypermutability of transitions. 
We have also shown that models incorporating hypermutability of transitions in CpG contexts predict a codon usage closer to that of the genes studied. This suggests that a significant part of codon usage can be explained by mutational processes alone. Finally, we explore several avenues of research emanating from our methodological developments: the application of hypermutability detection of transitions in CpG contexts to the Vertebrate scale; the expansion of the model to recognize contexts other than only CpG (e.g., hypermutability of transitions and transversions in CpG and TpA context); and methodological perspectives to improve the performance of the CABC approach.
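The ABC idea invoked here can be conveyed in a few lines (a generic rejection-ABC sketch, not the CABC machinery of the thesis): draw parameters from the prior, simulate data under them, and keep the draws whose simulated summary statistics fall close to the observed ones, thereby never evaluating the likelihood.

import random

def rejection_abc(observed, prior, simulate, summary, tol, n_draws,
                  rng=random.Random(0)):
    """Return accepted parameter draws. `prior` and `simulate` are
    user-supplied functions; `summary` maps a dataset to a single real-valued
    statistic here, for simplicity."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        s_sim = summary(simulate(theta, rng))
        if abs(s_sim - s_obs) <= tol:   # keep draws whose simulations match
            accepted.append(theta)
    return accepted

# Toy example: infer the rate of a Bernoulli process from the number of successes.
obs = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
post = rejection_abc(
    observed=obs,
    prior=lambda rng: rng.random(),                               # Uniform(0, 1)
    simulate=lambda p, rng: [int(rng.random() < p) for _ in obs],
    summary=sum,
    tol=0, n_draws=2000)
print(sum(post) / len(post))   # rough posterior mean of the rate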
APA, Harvard, Vancouver, ISO, and other styles