
Dissertations / Theses on the topic 'Natural logic'



Consult the top 50 dissertations / theses for your research on the topic 'Natural logic.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ishtiaq, Samin. "A relevant analysis of natural deduction." Thesis, Queen Mary, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sanz, Wagner de Campos. "Uma investigação acerca das regras para a negação e o absurdo em dedução natural." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280089.

Full text
Abstract:
Advisor: Marcelo Esteban Coniglio
Doctoral thesis (doutorado), Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
The purpose of this thesis is to propose an elucidation of negation and absurdity within natural deduction systems for intuitionistic and classical logic. Our study can be seen as a development of a proposal presented by Russell over a hundred years ago, one he apparently abandoned later on. We focus first on negation and then, as a consequence of the claims we make for negation, on the absurdity constant. Our starting point is in fact a problem of a conceptual nature: we question the correctness and adequacy of the analysis of negation and absurdity that prevails in the Gentzen-style natural deduction literature. That questioning takes the concept of hypothesis as its focal point. The concept of hypothesis is a central notion for natural deduction systems, and our analysis of it supports the formulation of elucidatory proposals for negation and absurdity within natural deduction systems.
Doctorate
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
3

Schoter, Andreas. "The computational application of bilattice logic to natural reasoning." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/434.

Full text
Abstract:
Chapter 1 looks at natural reasoning. It begins by considering the inferences that people make, particularly in terms of how those inferences differ from what is sanctioned by classical logic. I then consider the role of logic in relation to psychology and compare this relationship with the competence/performance distinction from syntax. I discuss four properties of natural reasoning that I believe are key to any theory: specifically partiality, paraconsistency, relevance and defeasibility. I then discuss whether these are semantic properties or pragmatic ones, and conclude by describing a new view of logic and inference prevalent in some contemporary writings. Chapter 2 looks at some of the existing formal approaches to the four properties. For each property I present the basic idea in formal terms, and then discuss a number of systems from the literature. Each section concludes with a brief discussion of the importance of the given property in the field of computation. Chapter 3 develops the formal system used in this thesis. This is an evidential, bilattice-based logic (EBL). I begin by presenting the mathematical preliminaries, and then show how the four properties of natural reasoning can be captured. The details of the logic itself are presented, beginning with the syntax and then moving on to the semantics. The role of pragmatic inferences in the logic is considered and a formal solution is advanced. I conclude by comparing EBL to some of the logics discussed in Chapter 2. Chapter 4 rounds off Part 1 by considering the implementation of the logic and some of its computational properties. It begins by considering the application of evidential bilattice logic to logic programming; it extends Fitting's work in this area to construct a programming language, QLOG2. I give some examples of this language in use. The QLOG2 language is then used as a part of the implementation of the EBL system itself: I describe the details of this implementation and then give some examples of the system in use. The chapter concludes by giving an informal presentation of some basic complexity results for logical closure in EBL, based on the given implementation. Chapter 5 presents some interesting data from linguistics that reflects some of the principles of natural reasoning; in particular I concentrate on implicatures and presupposition. I begin by describing the data and then consider a number of approaches from both the logical and the linguistic literature. Chapter 6 uses the logic developed in Chapter 3 to analyse the data presented in Chapter 5. I consider the basic inference cases, and then move on to more complex examples involving contextual interactions. The results are quite successful, and add weight to Mercer's quest for a common logical semantics for entailment and presupposition. All of the examples considered in this chapter can be handled by the implemented system described in Chapter 4. Finally, Chapter 7 rounds off by presenting some further areas of research that have been raised by this investigation. In particular, the issues of quantification and modality are discussed.
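EBL itself is specified in the thesis; as a rough illustration of the bilattice idea it builds on (my sketch, using Belnap's classic four-valued bilattice FOUR rather than Schoter's richer evidential lattice), the values N (no evidence), T, F and B (conflicting evidence) support a negation and a conjunction/disjunction given by meet and join in the truth ordering:

    # Belnap's FOUR: the truth ordering has F at the bottom, T at the top,
    # and N (neither) and B (both) incomparable in the middle.

    def neg(v):
        # Negation swaps T and F; N and B are fixed points.
        return {'T': 'F', 'F': 'T', 'N': 'N', 'B': 'B'}[v]

    def conj(a, b):
        # Meet in the truth ordering.
        if 'F' in (a, b): return 'F'
        if a == 'T': return b
        if b == 'T': return a
        return a if a == b else 'F'      # N meet B = F

    def disj(a, b):
        # Join in the truth ordering.
        if 'T' in (a, b): return 'T'
        if a == 'F': return b
        if b == 'F': return a
        return a if a == b else 'T'      # N join B = T

    print(conj('T', 'B'), disj('N', 'B'))   # prints: B T

Paraconsistency falls out directly: a contradiction evaluates to B rather than exploding into every conclusion, and partiality is represented by N.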
APA, Harvard, Vancouver, ISO, and other styles
4

Pareschi, Remo. "Type-driven natural language analysis." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/19215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pizer, Ian. "On a natural construction of real closed subfields of the reals." Thesis, University of Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Brage, Jens. "A Natural Interpretation of Classical Proofs." Doctoral thesis, Stockholm : Dept. of mathematics, Stockholm university, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hug, Joachim Josef [Verfasser]. "Exploring the biosynthetic logic of myxobacterial natural products / Joachim Josef Hug." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1221129422/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sturla, Giancarlo (Giancarlo F. ). "A two-phased approach for natural language parsing into formal logic." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113294.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 53-56).
Natural language is an intuitive medium for a human to communicate with a robot. Additionally, there are many tasks in areas such as manufacturing, the military, and disaster response where communication among the agents performing these tasks is limited. Because of this limited communication, we focus on a protocol where most of the communication is done before and after the mission execution. As a first step in analyzing the effectiveness of this protocol, this thesis presents a two-phased approach to parsing natural language into an arbitrary formal logic. In the first phase, we aim to learn the generic structure of the logical expression associated with a natural language utterance. For example, if the sentence "Approach the target from the west" were to be parsed into the expression Approach(target;west), then the first phase would output a generic structure such as f(c0; c1), where f, c0, and c1 are placeholders for the actual values Approach, target, and west, respectively. In the second phase, we aim to learn how to assign the intended values to these placeholders. The method developed in this thesis achieves an accuracy of 46% and 78% for the first and second phases of our natural language parser, respectively. With the help of our natural language parser, the output logical expressions can be used in future work to help analyze the success or failure of a mission execution.
by Giancarlo Sturla.
M. Eng.
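As a toy sketch of the two-phased pipeline described in the abstract above (my stand-in; the thesis learns both phases from data rather than hard-coding them):

    # Phase 1 predicts the generic shape of the logical form; phase 2
    # fills the placeholders. Both are hard-coded stand-ins here.

    def phase1_structure(utterance):
        """Return a generic template such as 'f(c0; c1)'."""
        return "f(c0; c1)"

    def phase2_fill(utterance):
        """Assign concrete values to the placeholders f, c0 and c1."""
        words = utterance.lower().rstrip(".").split()
        f = words[0].capitalize()                       # "approach" -> "Approach"
        args = [w for w in words[1:] if w not in {"the", "from"}]
        return "%s(%s)" % (f, "; ".join(args))

    print(phase1_structure("Approach the target from the west"))  # f(c0; c1)
    print(phase2_fill("Approach the target from the west"))       # Approach(target; west)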
APA, Harvard, Vancouver, ISO, and other styles
9

Taing, Austin. "Application of Boolean Logic to Natural Language Complexity in Political Discourse." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/77.

Full text
Abstract:
Press releases serve as a major influence on public opinion of a politician, since they are a primary means of communicating with the public and directing discussion. Thus, the public's ability to digest them is an important factor for politicians to consider. This study employs several well-studied measures of linguistic complexity, and proposes a new one, to examine whether politicians change their language to become more or less difficult to parse in different situations. It uses 27,500 press releases from the US Senate between 2004 and 2008, and examines election cycles and natural disasters, namely hurricanes, as situations where politicians' language may change. We calculate the syntactic complexity measures clauses per sentence, T-unit length, and complex-T ratio, as well as the Automated Readability Index and Flesch Reading Ease of each press release. We also propose a proof-of-concept measure called logical complexity, to find out whether classical Boolean logic can be applied as a practical linguistic complexity measure. We find that language becomes more complex in coastal senators' press releases concerning hurricanes, but see no significant change during election cycles. Our measure produces results similar to the well-established ones, suggesting that logical complexity is a useful lens for measuring linguistic complexity.
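For reference, the two readability scores named above have standard published definitions (a direct transcription; the thesis's tokenisation and syllable counting may differ):

    # Standard formulas. chars counts letters and digits; syllables are
    # usually estimated heuristically in practice.

    def automated_readability_index(chars, words, sentences):
        return 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43

    def flesch_reading_ease(words, sentences, syllables):
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)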
APA, Harvard, Vancouver, ISO, and other styles
10

Mercer, Robert Ernest. "A default logic approach to the derivation of natural language presuppositions." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/27457.

Full text
Abstract:
A hearer's interpretation of the meaning of an utterance consists of more than what is conveyed by the sentence itself. Other parts of the meaning are produced as inferences from three knowledge sources: the sentence itself, knowledge about the world, and knowledge about language use. One inference of this type is the natural language presupposition. This category of inference is distinguished by a number of features: the inferences are generated only, but not necessarily, if certain lexical or syntactic environments are present in the uttered sentence; normal interpretations of these presuppositional environments in the scope of a negation in a simple sentence produce the same inferences as the unnegated environment; and the inference can be cancelled by information in the conversational context. We propose a method for deriving presuppositions of natural language sentences that has its foundations in an inference-based concept of meaning. Whereas standard (monotonic) forms of reasoning are able to capture portions of a sentence's meaning, such as its entailments, non-monotonic forms of reasoning are required to derive its presuppositions. Gazdar's idea that presuppositions must be consistent with the context, and the usual connection of presuppositions with lexical and syntactic environments, motivate the use of Default Logic as the formal nonmonotonic reasoning system. Not only does the default logic approach provide a natural means to represent presuppositions, but a single (slightly restricted) default proof procedure is all that is required to generate them. The naturalness and simplicity of this method contrast with the traditional projection methods. Also available to the logical approach is the proper treatment of 'or' and 'if ... then ...', which is not available to any of the projection methods. The default logic approach is compared with four others: three projection methods and one non-projection method. As well as demonstrating empirical and methodological difficulties with the other methods, the detailed investigation provides the motivation for the topics discussed in connection with the default logic approach. Some of the difficulties have been solved using the default logic method, while possible solutions for others have only been sketched. A brief discussion of a new method for providing corrective answers to questions is presented. The novelty of this method is that the corrective answers are viewed as correcting presuppositions of the answer rather than of the question.
Science, Faculty of
Computer Science, Department of
Graduate
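As an illustration of the kind of rule involved (my example, not necessarily Mercer's exact formalisation), a factive trigger such as 'regret' yields a normal default: from 'x regrets that P', conclude the presupposition P whenever P is consistent with the context.

    regret(x, P) : P
    ----------------
           P

The justification ': P' is what makes the inference cancellable: if the conversational context already entails not-P, the default is blocked and no presupposition is derived, which is exactly the cancellation behaviour described in the abstract.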
APA, Harvard, Vancouver, ISO, and other styles
11

Roberts, Lesley. "Towards a probabilistic semantics for natural language /." [St. Lucia, Qld.], 2003. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18482.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Dongier, François. "ND, a rule-based implementation of natural deduction : design of the theorem-prover and tutoring system." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Guo, Xiao. "Lexical vagueness handling using fuzzy logic in human robot interaction." Thesis, University of Bedfordshire, 2011. http://hdl.handle.net/10547/294440.

Full text
Abstract:
Lexical vagueness is a ubiquitous phenomenon in natural language. Most previous work in natural language processing (NLP) treats lexical ambiguity, rather than lexical vagueness, as the main problem in natural language understanding. Lexical vagueness is usually considered a solution rather than a problem in natural language understanding, since precise information often cannot be provided in conversation. It is, however, an obstacle in human robot interaction (HRI), since robots are expected to understand their users' utterances precisely in order to provide reliable services. This research aims to develop novel lexical vagueness handling techniques that enable service robots to understand their users' utterances precisely, so that they can provide reliable services. A novel integrated system to handle lexical vagueness is proposed, based on an in-depth understanding of lexical ambiguity and lexical vagueness: why they exist, how they are presented, how they differ, and the mainstream techniques for handling each. The integrated system consists of two blocks: a block for lexical ambiguity handling and a block for lexical vagueness handling. The first removes syntactic ambiguity and lexical ambiguity; the second then models and removes lexical vagueness. Experimental results show that robots endowed with the developed integrated system are able to understand their users' utterances and can therefore provide reliable services.
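A standard fuzzy-logic treatment of a vague lexical item (a generic sketch under my own assumptions, not the system developed in the thesis) gives the term a membership function and defuzzifies it to a crisp value a robot can act on:

    # A vague term such as "near": triangular membership plus centroid
    # defuzzification. Thresholds are illustrative.

    def mu_near(d, full=0.0, zero=2.0):
        """Membership of distance d (metres) in the vague set 'near'."""
        if d <= full: return 1.0
        if d >= zero: return 0.0
        return (zero - d) / (zero - full)

    def centroid(mu, lo, hi, steps=1000):
        """Crisp value for a fuzzy set: centre of gravity of mu over [lo, hi]."""
        xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
        return sum(x * mu(x) for x in xs) / sum(mu(x) for x in xs)

    print(centroid(mu_near, 0.0, 2.0))   # ~0.67 m: one usable reading of "near"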
APA, Harvard, Vancouver, ISO, and other styles
14

Porter, Harry H. "A logic-based grammar formalism incorporating feature-structures and inheritance /." Full text open access at:, 1988. http://content.ohsu.edu/u?/etd,181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Mukhopadhyay, Trisha. "A Flexible, Natural Deduction, Automated Reasoner for Quick Deployment of Non-Classical Logic." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7862.

Full text
Abstract:
Automated Theorem Provers (ATP) are software programs which carry out inferences over logico-mathematical systems, often with the goal of finding proofs of some given theorem. ATP systems are enormously powerful computer programs, capable of solving immensely difficult problems. Many automated theorem provers currently exist, such as E, Vampire, SPASS, ACL2, and Coq. However, all the available theorem provers share some common problems: (1) current ATP systems tend not to find proofs entirely on their own; they need help from human experts to supply lemmas, guide the proof, and so on. (2) No single proof system provides a fully automated platform for both First-Order Logic (FOL) and Higher-Order Logics (HOL). (3) Finally, current proof systems have no easy way to quickly deploy and reason over new logical systems that a logic researcher may want to test. In response to these problems, I introduce the MATR framework. MATR is a platform-independent, codelet-based (independently operating processes) proof system with an easy-to-use Graphical User Interface (GUI), where multiple codelets can be selected based on the formal system desired. MATR provides a platform for different proof strategies, such as deduction and backward reasoning, along with different formal systems such as non-classical logics. It enables users to design their own proof system by selecting from the list of codelets, without needing to write an ATP from scratch.
APA, Harvard, Vancouver, ISO, and other styles
16

Lämmermann, Sven. "Runtime Service Composition via Logic-Based Program Synthesis." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Sharpe, Dean. "The acquisition of natural language negation : a logical resources approach." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ44581.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ambrose, Jennifer Marie. "Geographies of responsibility: the cultural logic of 21st century weather emergencies." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/2178.

Full text
Abstract:
Geographies of Responsibility: The Cultural Logic of 21st Century Weather Emergencies analyzes the role of narrative in contemporary severe weather events. The speed and diversity of media through which we now communicate "the weather" significantly shape how U.S. communities experience these events and their possible social, cultural, and political meanings. This project explores four weather emergencies, covering physical geographies of the far northwest, Great Plains, mid-Atlantic, and Caribbean, examining the range of media, from newspapers to television, social, and new media, through which these events were circulated and reframed, who discussed them, and to what ends. Chapter 1 examines reporting on the 2004 Alaska wildfires directed at U.S. national and Alaska state communities to explore the continuing relevance of the "nation" as a relative spatial scale. Chapter 2 investigates the 2007 Greensburg tornado and the subsequent "green" (re)development of the town. Chapter 3 analyzes the 2010 "Snowmageddon" blizzards in Washington, D.C., which initiated "playful" acts that highlighted how urban economic realities and historical social geographies of race are embedded in particular urban sites. Chapter 4 explores the 2010 Haiti earthquake, which evoked economies of responsibility across multiple scales of mobilization, reiterating the cultural and historical "weather map" laid down by Hurricane Katrina. These mass-mediated weather events each mobilized attention and response through narratives that presented an emergency to communities across multiple geographic scales, put into relationships with one another through storylines far more complex than an analysis of how "global" and local weather systems co-create each other.
APA, Harvard, Vancouver, ISO, and other styles
19

Reul, Quentin H. "Role of description logic reasoning in ontology matching." Thesis, University of Aberdeen, 2012. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=186278.

Full text
Abstract:
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second relates to the role of description logic reasoning; in particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, to compute the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of description logic in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
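The abstract does not give the definition, but one plausible shape for a similarity-aware overlap (my assumption, for illustration; not necessarily the thesis's exact coefficient) lets every element count its best match in the other set instead of requiring exact equality:

    # A similarity-aware set overlap, sketched from the description above.
    # sim(a, b) is any element-level similarity in [0, 1].

    def soft_overlap(A, B, sim):
        """Average best-match similarity, symmetrised over both sets."""
        if not A or not B:
            return 0.0
        ab = sum(max(sim(a, b) for b in B) for a in A) / len(A)
        ba = sum(max(sim(a, b) for a in A) for b in B) / len(B)
        return (ab + ba) / 2

    # Example with a toy character-overlap similarity on strings:
    def sim(a, b):
        return len(set(a) & set(b)) / len(set(a) | set(b))

    print(soft_overlap({"colour"}, {"color"}, sim))   # 0.8

With exact string equality these two sets would score 0; a best-match formulation avoids the pessimistic value the abstract criticises.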
APA, Harvard, Vancouver, ISO, and other styles
20

Roettenbacher, Christian Wolfgang. "A framework for specifying business rules based on logic with a syntax close to natural language." Thesis, De Montfort University, 2017. http://hdl.handle.net/2086/15326.

Full text
Abstract:
The systematic interaction of software developers with business domain experts, who are usually not software developers themselves, is crucial to software system creation and maintenance, and has surfaced as the big challenge of modern software engineering. Existing frameworks promoting typical programming languages with artificial syntax are suitable for processing by computers but do not cater to domain experts, who are used to documents written in natural language as a means of interaction. Other frameworks that claim to be fully automated, such as those using natural language processing, are too imprecise to handle the typical requirements documents written in heterogeneous natural language flavours. In this thesis, a framework is proposed that supports the specification of business rules in a form that is, on the one hand, understandable for non-programmers and, on the other hand, semantically founded, which enables computer processability. This is achieved by the novel language Adaptive Business Process and Rule Integration Language (APRIL). Specifications in APRIL can be written in a style close to natural language and are thus suitable for humans, which was empirically evaluated with a representative group of test persons. A useful and uncommon feature of APRIL is the ability to define reusable abstract mixfix operators as sentence patterns that can mimic natural language. The semantic underpinning of the mixfix operators is achieved by customizable atomic formulas, allowing APRIL to be tailored to specific domains. Atomic formulas are underpinned by a denotational semantics based on Tempura (an executable subset of Interval Temporal Logic (ITL)) to describe behaviour, and the Object Constraint Language (OCL) to describe invariants and pre- and postconditions. APRIL statements can be used as the basis for automatically generating test code for software systems. An additional aspect of enhancing the quality of specification documents comes with a novel formal method technique (ISEPI), applicable to behavioural business rules, semantically based on Propositional Interval Temporal Logic (PITL) and complying with the newly discovered 2-to-1 property. This work shows how the ISE subset of ISEPI can be used to express complex behavioural business rules in a more concise and understandable way. The evaluation of ISE is done through an example specification taken from the car industry describing system behaviour, using the tools MONA and PITL2MONA. Finally, a methodology is presented that helps to guide a continuous transformation, starting from a purely natural language business rule specification to the APRIL specification, which can then be transformed into test code. The methodologies, language concepts, algorithms, tools and techniques devised in this work are part of the APRIL framework.
APA, Harvard, Vancouver, ISO, and other styles
21

Arruda, Alexandre Matos. "A infinitary system of the logic of least fixed-point." Universidade Federal do Ceará, 2007. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1325.

Full text
Abstract:
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico
The notion of the least fixed-point of an operator is widely applied in computer science, for instance in the context of query languages for relational databases. Some extensions of first-order logic (FOL) with fixed-point operators on finite structures, such as least fixed-point logic (LFP), were proposed to deal with problems related to the expressivity of FOL. LFP captures the complexity class PTIME over the class of finite ordered structures. The descriptive characterization of computational classes is a central issue within finite model theory (FMT). Trakhtenbrot's theorem, considered the starting point of FMT, states that validity over finite models is not recursively enumerable, that is, completeness fails over finite models. This result is based on an underlying assumption that any deductive system is of a finite nature. However, we can relax that assumption, as is done in the scope of proof theory for arithmetic. Proof theory has roots in Hilbert's programme. Proof-theoretical consequences are, for instance, related to normalization theorems, consistency, decidability, and complexity results. The proof theory for arithmetic is also motivated by Gödel's incompleteness theorems; it aims to offer an example of a true, mathematically meaningful principle not derivable in first-order arithmetic. One way of presenting this proof is based on the definition of a proof system with an infinitary rule, the ω-rule, which establishes the consistency of first-order arithmetic through a proof-theoretical perspective. Motivated by this proof, here we propose an infinitary proof system for LFP that allows us to investigate proof-theoretical properties. With such an infinitary deductive system, we aim to present a proof theory for a logic traditionally defined within the scope of FMT. It opens up an alternative way of proving results already obtained within FMT, and also new results, through a proof-theoretical perspective. Moreover, we propose a normalization procedure with some restrictions on the rules, so that this deductive system can be used in a theorem prover to compute queries on relational databases.
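For reference, the ω-rule mentioned above is the standard infinitary rule of arithmetic: a separate proof of φ(n̄) for every numeral n̄ licenses the universal conclusion.

    ⊢ φ(0̄)    ⊢ φ(1̄)    ⊢ φ(2̄)    ...
    ------------------------------------
                ⊢ ∀x φ(x)

Because the rule has infinitely many premises, derivations become infinite (well-founded) trees; this is exactly the 'finite nature' assumption about deductive systems that the thesis relaxes in order to sidestep the failure of completeness over finite models.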
APA, Harvard, Vancouver, ISO, and other styles
22

Curiel, Diaz Arturo Tlacaélel. "Using formal logic to represent sign language phonetics in semi-automatic annotation tasks." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30308/document.

Full text
Abstract:
This thesis presents a formal framework for the representation of Signed Languages (SLs), the languages of Deaf communities, in semi-automatic recognition tasks. SLs are complex visuo-gestural communication systems; by using corporal gestures, signers achieve the same level of expressivity held by sound-based languages like English or French. However, unlike these, SL morphemes correspond to complex sequences of highly specific body postures, interleaved with postural changes: during signing, signers use several parts of their body simultaneously in order to combinatorially build phonemes. This situation, paired with an extensive use of three-dimensional space, makes SLs difficult to represent with tools already existent in Natural Language Processing (NLP) of vocal languages. For this reason, the current work presents the development of a formal representation framework, intended to transform SL video repositories (corpora) into an intermediate representation layer where automatic recognition algorithms can work under better conditions. The main idea is that corpora can be described with a specialized Labeled Transition System (LTS), which can then be annotated with logic formulae for its study. A multi-modal logic was chosen as the basis of the formal language: Propositional Dynamic Logic (PDL). This logic was originally created to specify and prove properties of computer programs. In particular, PDL uses the modal operators [a] and ⟨a⟩ to denote necessity and possibility, respectively. For SLs, a particular variant based on the original formalism was developed: the PDL for Sign Language (PDLSL). With PDLSL, body articulators (like the hands or head) are interpreted as independent agents; each articulator has its own set of valid actions and propositions, and executes them without influence from the others. The simultaneous execution of different actions by several articulators yields distinct situations, which can be searched over an LTS with formulae, using the semantic rules of the logic. Together, the use of PDLSL and the proposed specialized data structures could help curb some of the current problems in SL study, notably the heterogeneity of corpora and the lack of automatic annotation aids. In the same vein, this may not only increase the size of the available datasets, but even extend previous results to new corpora; the framework inserts an intermediate representation layer which can serve to model any corpus, regardless of its technical limitations. With this, annotation becomes possible by defining the characteristics to annotate as formulae; a formal verification algorithm can then find those features in corpora, as long as they are represented as consistent LTSs. Finally, the development of the formal framework led to the creation of a semi-automatic annotator based on the presented theoretical principles. Broadly, the system receives an untreated corpus video, converts it automatically into a valid LTS (by way of some predefined rules), and then verifies human-created PDLSL formulae over the LTS. The final product is an automatically generated sub-lexical annotation, which can later be corrected by human annotators for use in other areas such as linguistics.
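For readers unfamiliar with PDL, the two operators have the standard relational reading over states and labelled transitions (textbook PDL, independent of the PDLSL extension):

    s ⊨ ⟨a⟩φ   iff   some a-transition from s leads to a state satisfying φ
    s ⊨ [a]φ   iff   every a-transition from s leads to a state satisfying φ

    so that [a]φ ≡ ¬⟨a⟩¬φ.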
APA, Harvard, Vancouver, ISO, and other styles
23

Daly, Helen. "Vagueness and Borderline Cases." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145428.

Full text
Abstract:
Vagueness is ubiquitous in natural language. It seems incompatible with classical, bivalent logic, which tells us that every statement is either true or false, and none is vaguely true. Yet we do manage to reason using vague natural language. In fact, the majority of our day-to-day reasoning involves vague terms and concepts. There is a puzzle here: how do we perform this remarkable feat of reasoning? I argue that vagueness is a kind of semantic indecision. In short, that means we cannot say exactly who is bald and who is not because we have never decided the precise meaning of the word 'bald'--there are some borderline cases in the middle, which might be bald or might not. That is a popular general strategy for addressing vagueness. Those who use it, however, do not often say what they mean by 'borderline case'. It is most frequently used in a loose way to refer to in-between items: those people who are neither clearly bald nor clearly not bald. But under that loose description, the notion of borderline cases is ambiguous, and some of its possible meanings create serious problems for semantic theories of vagueness.Here, I clarify the notion of a borderline case, so that borderline cases can be used profitably as a key element in a successful theory of vagueness. After carefully developing my account of borderline cases, I demonstrate its usefulness by proposing a theory of vagueness based upon it. My theory, vagueness as permission, explains how classical logic can be used to model even vague natural language.
APA, Harvard, Vancouver, ISO, and other styles
24

Schwartzkopff, Robert. "The numbers of the marketplace : commitment to numbers in natural language." Thesis, University of Oxford, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Riedel, Sebastian. "Efficient prediction of relational structure and its application to natural language processing." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4167.

Full text
Abstract:
Many tasks in Natural Language Processing (NLP) require us to predict a relational structure over entities. For example, in Semantic Role Labelling we try to predict the 'semantic role' relation between a predicate verb and its argument constituents. Often NLP tasks not only involve related entities but also relations that are stochastically correlated. For instance, in Semantic Role Labelling the roles of different constituents are correlated: we cannot assign the agent role to one constituent if we have already assigned this role to another. Statistical Relational Learning (also known as First Order Probabilistic Logic) allows us to capture this nature of NLP tasks because it is based on the notions of entities, relations and stochastic correlations between relationships. It is therefore often straightforward to formulate an NLP task using a First Order probabilistic language such as Markov Logic. However, the generality of this approach comes at a price: the process of finding the relational structure with highest probability, also known as maximum a posteriori (MAP) inference, is often inefficient, if not intractable. In this work we seek to improve the efficiency of MAP inference for Statistical Relational Learning. We propose a meta-algorithm, namely Cutting Plane Inference (CPI), that iteratively solves small subproblems of the original problem using any existing MAP technique, and inspects parts of the problem that are not yet included in the current subproblem but could potentially lead to an improved solution. Our hypothesis is that this algorithm can dramatically improve the efficiency of existing methods while remaining at least as accurate. We frame the algorithm in Markov Logic, a language that combines First Order Logic and Markov Networks. Our hypothesis is evaluated using two tasks: Semantic Role Labelling and Entity Resolution. It is shown that the proposed algorithm improves the efficiency of two existing methods by two orders of magnitude and leads an approximate method to more probable solutions. We also show that CPI, at convergence, is guaranteed to be at least as accurate as the method used within its inner loop. Another core contribution of this work is a theoretical and empirical analysis of the boundary conditions of Cutting Plane Inference. We describe cases when Cutting Plane Inference will definitely be difficult (because it instantiates large networks or needs many iterations) and when it will be easy (because it instantiates small networks and needs only few iterations).
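A schematic of the Cutting Plane Inference loop as described in the abstract (a sketch from that description, with hypothetical helper names; the actual algorithm and its guarantees are given in the thesis):

    # 'solve_map' and 'violated' are hypothetical helpers standing in for
    # an existing MAP solver and a checker that grounds first-order
    # formulae and returns those violated by the current solution.

    def cutting_plane_inference(formulae, solve_map, violated):
        active = set()                    # ground constraints considered so far
        solution = solve_map(active)      # MAP over the (initially empty) subproblem
        while True:
            new = violated(formulae, solution) - active
            if not new:                   # nothing outside the subproblem can
                return solution           # improve the solution: converged
            active |= new                 # grow the subproblem and re-solve
            solution = solve_map(active)

The point of the scheme is that `active` typically stays far smaller than the full grounding, which is where the reported two-orders-of-magnitude speed-up comes from.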
APA, Harvard, Vancouver, ISO, and other styles
26

Dingerson, Lynne M. "Predicting Future Shoreline Condition Based on Land Use Trends, Logistic Regression, and Fuzzy Logic." W&M ScholarWorks, 2005. http://www.vims.edu/library/Theses/Dingerson05.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kouri, Teresa. "Logical Instrumentalism." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1472751856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Cheng. "Development of Intelligent Energy Management System Using Natural Computing." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1341375203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Vaillette, Nathan. "Logical specification of finite-state transductions for natural language processing." Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1072058657.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xv, 253 p.; also includes graphics. Includes abstract and vita. Advisor: Chris Brew, Dept. of Linguistics. Includes bibliographical references (p. 245-253).
APA, Harvard, Vancouver, ISO, and other styles
30

Hallett, Joseph. "Capturing mobile security policies precisely." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31341.

Full text
Abstract:
The security policies of mobile devices that describe how we should use these devices are often informally specified. Users have preferences for some apps over others. Some users may avoid apps which can access large amounts of their personal data, whilst others may not care. A user is unlikely to write down these policies or describe them using a formal policy language. This is unfortunate, as without a formal description of a policy we cannot precisely reason about it. We cannot help users to pick the apps they want if we cannot describe their policies. Companies have mobile security policies that define how an employee should use smartphone devices and tablet computers from home at work. A company might describe the policy in a natural language document for employees to read and agree to. They might also use some software installed on employees' devices to enforce the company rules. Without a link between the specification of the policy in the natural language document and the implementation of the policy with the tool, understanding how they are related can be hard. This thesis looks at developing an authorisation logic, called AppPAL, to capture the informal security policies of the mobile ecosystem, which we define as the interactions surrounding the use of mobile devices in a particular setting. This includes the policies of the users, the devices, the app stores, and the environments the users bring the devices into. Whilst earlier work has looked at checking and enforcing policies with low-level controls, this work aims to capture these informal policies' intents and the trust relationships within them, separating the policy specification from its enforcement. This allows us to analyse the informal policies precisely and reason about how they are used. We show how AppPAL instantiates SecPAL, a policy language designed for access control in distributed environments. We describe AppPAL's implementation as an authorisation logic for mobile ecosystems. We show how we can check AppPAL policies for common errors. Using AppPAL, we show that policies describing users' privacy preferences do not seem to match the apps users install. We explore the differences between app stores and how to create new ones based on policy. We look at five BYOD policies and discover previously unexamined idioms within them. This suggests aspects of BYOD policies not managed by current BYOD tools.
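To give a flavour of the style of statement such a logic expresses, here is a SecPAL-style assertion (my own illustrative pseudo-policy, not a statement from AppPAL's actual corpus): a user delegates to a store the judgement of which apps are installable, subject to a local privacy check.

    'user' says 'store' can-say app isInstallable.
    'user' says app isRunnable
        if app isInstallable
        where noLocationTracking(app) = true.

The 'says' and delegation ('can-say') constructs are what let the logic separate who asserts a policy from who enforces it, which is the specification/enforcement split the thesis argues for.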
APA, Harvard, Vancouver, ISO, and other styles
31

Palacios, Pastrana Florencio Edmundo. "Etude des rapports entre linguistique et logique concernant la dimension temporelle : un modèle de transition." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10273.

Full text
Abstract:
The general aim of this thesis is to develop a formal language capable of modelling certain features of natural language that bear a strong relation to time. In particular, we are interested in the linguistic notion of aspect and its possible logical consequences. We base our analysis on two perspectives: linguistic and logical. For the first, we analyse the relevant concepts linked to the grammatical category of aspect, which, together with the category of grammatical tense, is directly related to the notion of time. For the second, we analyse the logical notions at play in deductive formal systems and their relation to time: temporal logic. As is well established, formal languages based on the notions defined by Frege are not sufficient to express all the temporal components of natural language. However, there are extended formalisms that take certain linguistic concepts, such as aspect, into account. One such proposal was made by Galton, who introduced operators for some of the most common aspectual notions in English, such as perfectivity and progressivity. Our proposal introduces topological notions to represent the structure of the set over which a statement takes a given truth value. In addition, we also treat the concept of sigma-meaning, to represent certain non-set-theoretic theoretical concepts related to the meaning of statements.
APA, Harvard, Vancouver, ISO, and other styles
32

Patel, Purvag. "MODELING AND IMPLEMENTATION OF Z-NUMBER." OpenSIUC, 2015. https://opensiuc.lib.siu.edu/dissertations/995.

Full text
Abstract:
Computing with words (CW) provides a symbolic and semantic methodology for dealing with the imprecise information associated with natural language. The CW paradigm, rooted in fuzzy logic, offers, when coupled with an expert system, a general methodology for computation with fuzzy variables and a fusion of natural language propositions for this purpose. Fuzzy variables encode the semantic knowledge, and hence the system can understand the meaning of the symbols. The use of words not only simplifies the knowledge acquisition process, but can also eliminate the need for a human knowledge engineer. CW encapsulates various fuzzy logic techniques developed in past decades and formalizes them. The Z-number is an emerging paradigm that has been utilized in computing with words, among other constructs. The concept of the Z-number is intended to provide a basis for computation with numbers that deals with reliability and likelihood. Z-numbers are a confluence of the two most prominent approaches to uncertainty, probability and possibility, and allow computations on complex statements. Certain computations related to Z-numbers are ambiguous and complicated, leading to their slow adoption in areas such as computing with words. Moreover, as acknowledged by Zadeh, there does not exist a unique solution to these problems. The biggest contributing factor to the complexity is the use of probability distributions in the computations. This dissertation seeks to provide an applied model of the Z-number based on certain realistic assumptions about the probability distributions. Algorithms are presented to implement this model and integrate it into an expert system shell for computing with words called CWShell. CWShell is a software tool that abstracts the underlying computation required for computing with words and provides a convenient way to represent and reason on unstructured natural language.
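For reference, Zadeh's definition, on which the dissertation builds: a Z-number is an ordered pair Z = (A, B) of fuzzy numbers associated with a real-valued variable X, read "X is A, with reliability B". The underlying (hidden) probability density pX of X is constrained only by

    Prob(X is A)  =  ∫ μA(u) · pX(u) du   is   B

that is, the probability that X is A must itself be compatible with the fuzzy number B. Since many densities pX satisfy this constraint, computation with Z-numbers is underdetermined; the dissertation's "realistic assumptions" about the probability distributions are what make the model computable.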
APA, Harvard, Vancouver, ISO, and other styles
33

Torres, Parra Jimena Cecilia. "A Perception Based Question-Answering Architecture Derived from Computing with Words." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797581&sid=1&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hjálmarsson, Guðmundur Andri. "What if? : an enquiry into the semantics of natural language conditionals." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/949.

Full text
Abstract:
This thesis is essentially a portfolio of four disjoint yet thematically related articles, each dealing with some semantic aspect of natural language conditionals. The thesis opens with a brief introductory chapter that offers a short yet opinionated historical overview and a theoretical background to several important semantic issues concerning conditionals. The second chapter deals with the issue of truth values and conditions of indicative conditionals. So-called Gibbard Phenomenon cases have been used to argue that indicative conditionals construed in terms of the Ramsey Test cannot have truth values. Since that conclusion is somewhat incredible, several alternative options are explored. Finally, a contextualised revision of the Ramsey Test is offered which successfully avoids the threats of the Gibbard Phenomenon. The third chapter deals with the question of where to draw the so-called indicative/subjunctive line. Natural language conditionals are commonly believed to be of two semantically distinct types: indicative and subjunctive. Although this distinction is central to many semantic analyses of natural conditionals, there seems to be no consensus on the details of its nature. While trying to uncover the grounds for the distinction, we argue our way through several plausible proposals found in the literature. Upon discovering that none of these proposals seems entirely suited, we reconsider our position and make several helpful observations about the nature of conditional sentences. Finally, in light of our observations, we propose and argue for plausible grounds for the indicative/subjunctive distinction. The fourth chapter offers semantics for modal and amodal natural language conditionals based on the distinction proposed in the previous chapter. First, the nature of modal and amodal suppositions is explored. Armed with an analysis of modal and amodal suppositions, the corresponding conditionals are examined further. Consequently, the syntax of conditionals in English is uncovered for the purpose of providing input for our semantics. And finally, compositional semantics in generative grammar is offered for modal and amodal conditionals. The fifth and final chapter defends Modus Ponens from alleged counterexamples. In particular, the chapter offers a solution to McGee's infamous counterexamples. First, the solutions hitherto offered to the counterexamples are argued to be inadequate. After a couple of observations on the nature of the counterexamples, a solution is offered and demonstrated. The solution suggests that the semantics of embedded natural language conditionals is more sophisticated than their surface syntax indicates. The heart of the solution is a translation function from the surface form of natural language conditionals to their logical form. Finally, the thesis ends with a conclusion that briefly summarises the main conclusions drawn in the preceding chapters.
APA, Harvard, Vancouver, ISO, and other styles
35

Ricou, Charles. "Conception d’un indicateur prédictif évaluant les effets des pratiques agricoles sur la diversité floristique et ses services en grandes cultures à l’échelle de la bordure de champ." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0107/document.

Full text
Abstract:
Developing cropping systems that support biodiversity is an important goal for agronomists in the current societal context. To achieve this goal, they need predictive methods that assess the effects of cropping practices on biodiversity in order to characterize and evaluate cropping systems. Among ecologists, there is growing agreement that biodiversity should be addressed not only as species richness but also through ecosystem services. There are numerous proposals for biodiversity indicators, but these are based either on diversity measurements within taxonomic groups or on management variables, and are not predictive. The objective of the thesis is to design a predictive indicator to assess the effects of cropping practices on biodiversity and its services. To achieve this, we decided to address plant diversity and its associated services: conservation value, a stake for society, and pollination, an important stake for agriculture. We structured the thesis in three steps. First, we selected cropping practices having combined effects on biodiversity and its services, identified their nature, and assessed the range of their effects at the field margin scale. In a second step, we integrated this knowledge, by expert judgement, into an operational model. Following the design stage, we evaluated the sensitivity and the predictive quality of the model by comparing model outputs with field measurements carried out during the thesis or obtained from other studies. Last, we transformed the outputs of the model (presence probabilities for 338 species) into an indicator by aggregating them into a synthetic value. This value was then calibrated against selected reference values on an easily understandable scale from 0 (unfavorable) to 10 (favorable). The development of this operational predictive indicator will enable agronomists to assess the positive and negative effects of cropping practices and to identify innovative practices supporting biodiversity and its services. The indicator can be used with other environmental, economic, and social indicators to assess the sustainability of arable farming systems
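As a purely illustrative aside on the last step described above, a minimal sketch of aggregating per-species presence probabilities into a 0–10 score might look as follows; the function name, the averaging choice, and the reference values are our assumptions, not the thesis's actual aggregation and calibration:

    # Hypothetical sketch: aggregate per-species presence probabilities into
    # a single 0-10 indicator, calibrated between an unfavorable and a
    # favorable reference value. All names and numbers are illustrative.
    def indicator_score(presence_probs, ref_unfavorable, ref_favorable):
        synthetic = sum(presence_probs) / len(presence_probs)  # synthetic value
        # linear rescaling onto the 0 (unfavorable) .. 10 (favorable) scale
        span = ref_favorable - ref_unfavorable
        score = 10 * (synthetic - ref_unfavorable) / span
        return max(0.0, min(10.0, score))

    # e.g. the first few of 338 per-species probabilities from the model
    probs = [0.12, 0.48, 0.05, 0.91]
    print(indicator_score(probs, ref_unfavorable=0.05, ref_favorable=0.60))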
APA, Harvard, Vancouver, ISO, and other styles
36

Ibeh, Lawrence [Verfasser], and Wolfram [Akademischer Betreuer] Mauser. "A transdisciplinary-based coupled approach for vulnerability assessment in the context of natural resource-based conflicts using remote sensing, spatial statistics and fuzzy logic adapted models. / Lawrence Ibeh ; Betreuer: Wolfram Mauser." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1194835597/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Teske, Alexander. "Automated Risk Management Framework with Application to Big Maritime Data." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38567.

Full text
Abstract:
Risk management is an essential tool for ensuring the safety and timeliness of maritime operations and transportation. Some of the many risk factors that can compromise the smooth operation of maritime activities include harsh weather and pirate activity. However, identifying and quantifying the extent of these risk factors for a particular vessel is not a trivial process. One challenge is that processing the vast amounts of automatic identification system (AIS) messages generated by the ships requires significant computational resources. Another is that the risk management process partially relies on human expertise, which can be time-consuming and error-prone. In this thesis, an existing Risk Management Framework (RMF) is augmented to address these issues. A parallel/distributed version of the RMF is developed to efficiently process large volumes of AIS data and assess the risk levels of the corresponding vessels in near-real-time. A genetic fuzzy system is added to the RMF's Risk Assessment module in order to automatically learn the fuzzy rule base governing the risk assessment process, thereby reducing the reliance on human domain experts. A new weather risk feature is proposed, and an existing regional hostility feature is extended to automatically learn about pirate activity by ingesting unstructured news articles and incident reports. Finally, a geovisualization tool is developed to display the position and risk levels of ships at sea. Together, these contributions pave the way towards truly automatic risk management, a crucial component of modern maritime solutions. The outcomes of this thesis will contribute to enhancing Larus Technologies' Total::Insight, a risk-aware decision support system successfully deployed in maritime scenarios.
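To make the fuzzy risk assessment idea concrete, here is a deliberately simplified sketch of the kind of rule base a genetic fuzzy system could learn; the membership breakpoints, feature names, and the two hand-written rules are invented for illustration and are not Larus Technologies' actual rules:

    # Two-input fuzzy risk rules of the kind a genetic algorithm would evolve.
    def tri(x, a, b, c):
        """Triangular membership function over [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def risk_level(weather_severity, hostility):
        # rule 1: severe weather AND low hostility -> risk
        r1 = min(tri(weather_severity, 0.4, 0.7, 1.0), tri(hostility, 0.0, 0.3, 0.6))
        # rule 2: mild weather AND high hostility -> risk
        r2 = min(tri(weather_severity, 0.0, 0.3, 0.6), tri(hostility, 0.4, 0.7, 1.0))
        return max(r1, r2)  # Mamdani-style max aggregation over the rules

    print(risk_level(weather_severity=0.8, hostility=0.2))

In the actual framework the rule base is not hand-written as above: a genetic algorithm searches over candidate rules and memberships, which is what removes the dependence on human experts.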
APA, Harvard, Vancouver, ISO, and other styles
38

BUENO, REGIS C. "Detecção de contornos em imagens de padrões de escoamento bifásico com alta fração de vazio em experimentos de circulação natural com o uso de processamento inteligente." reponame:Repositório Institucional do IPEN, 2016. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26817.

Full text
Abstract:
This work developed a new method for edge detection in digital images containing objects of interest that are very close together and background complexities such as abrupt intensity variation and illumination oscillation. The developed method uses fuzzy logic and the standard deviation of the slope (Fuzzy Slope Standard Deviation - FuzDec) for image processing and edge detection. Edge detection is an important task for estimating two-phase flow characteristics through the segmentation of bubble images, in order to obtain parameters such as void fraction and bubble diameter. FuzDec was applied to experimentally acquired images of natural circulation instabilities. Image acquisition was carried out using the Natural Circulation Circuit (CCN) of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). This circuit is built entirely of glass tubes, which allows the visualization and imaging of single-phase and two-phase flow in natural circulation cycles under low pressure. The results showed that the proposed detector improved edge identification efficiently in comparison with classical edge detectors, without resorting to smoothing algorithms and without human intervention.
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
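As a rough, assumption-laden sketch of the idea described above (local standard deviation of the intensity slope mapped through a fuzzy membership to an edge degree), one might write the following; the window size and the membership scaling are our placeholders, not the FuzDec parameters:

    import numpy as np

    def edge_degree(img, win=3):
        # intensity slopes along each axis, combined into a slope magnitude
        gy, gx = np.gradient(img.astype(float))
        slope = np.hypot(gx, gy)
        pad = win // 2
        padded = np.pad(slope, pad, mode='edge')
        out = np.zeros_like(slope)
        h, w = slope.shape
        for i in range(h):
            for j in range(w):
                # local standard deviation of the slope in a win x win window
                std = padded[i:i + win, j:j + win].std()
                # crude fuzzy "is an edge" degree in [0, 1]
                out[i, j] = min(1.0, std / 10.0)
        return out

    img = np.zeros((8, 8)); img[:, 4:] = 255.0   # a vertical step edge
    print(edge_degree(img).round(2))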
APA, Harvard, Vancouver, ISO, and other styles
39

Andersson, Andreas. "State and Process Tomography : In Spekkens' Toy Model." Thesis, Linköpings universitet, Informationskodning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163156.

Full text
Abstract:
In 2004 Robert W. Spekkens introduced a toy theory designed to make a case for the epistemic view of quantum mechanics. But how does Spekkens’ toy model differ from quantum theory? While some differences are well-established, we attempt to approach this question from a tomographic point of view. More specifically, we provide experimentally viable procedures which enable us to completely characterize the states and gates that are available in the toy model. We show that, in contrast to quantum theory, decompositions of transformations in the toy model must be done in a non-linear fashion.
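For intuition about what state tomography means in this setting, here is a hedged sketch under our own modelling assumptions: the elementary toy system has four ontic states, the three allowed two-outcome measurements pair them as {1,2}/{3,4}, {1,3}/{2,4}, {1,4}/{2,3}, and linear inversion of the observed outcome frequencies recovers the epistemic state (a distribution over the ontic states). The thesis's actual procedures may differ:

    import numpy as np

    A = np.array([[1, 1, 0, 0],    # P(outcome {1,2})
                  [1, 0, 1, 0],    # P(outcome {1,3})
                  [1, 0, 0, 1],    # P(outcome {1,4})
                  [1, 1, 1, 1]])   # normalization constraint
    freqs = np.array([0.5, 0.5, 0.5, 1.0])   # illustrative measured frequencies
    state = np.linalg.solve(A, freqs)
    print(state)                              # -> [0.25 0.25 0.25 0.25]

The interesting point the abstract makes is about process (gate) tomography: unlike the quantum case, decomposing toy-model transformations cannot be done by such linear means alone.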
APA, Harvard, Vancouver, ISO, and other styles
40

Nyström, Thomas. "Adaptive Design for Circular Business Models in the Automotive Manufacturing Industry." Licentiate thesis, Viktoria, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-40566.

Full text
Abstract:
The vision of a circular economy (CE) promises both profitability and eco-sustainability to industries and can, from a material and energy resource flow perspective, be operationalized by combining three business and design strategies: closing resource loops through material recycling; narrowing resource flows by improving resource efficiency; and slowing resource flows by extending product life through reuse, upgrades and remanufacturing. These three strategies are straightforward ways for industries to radically reduce their use of virgin resources. From a product design perspective, it is doable. However, from a business perspective, it is no less than a revolution that is asked for, as most Original Equipment Manufacturers (OEMs) have, over time, designed their organizations for capturing value from selling goods in linear, flow-based business models. This thesis aims to contribute to the discourse about CE by exploring practical routes for operationalizing circular product design in a “stock-based” circular business model (CBM). The approach is three-fold. Firstly, the role of design as a solution provider for existing business models is explored and illustrated by case studies and interviews from the automotive industry. Secondly, challenges and possibilities for manufacturing firms to embrace all three strategies for circularity are explored. Thirdly, implications for designing products suited to stock-based CBMs are discussed. In spite of the vast interest in business model innovation, a circular economy, and how to design for a circular economy, there are still many practical, real-life barriers preventing adoption. This is especially true for designing products that combine all three of the circular strategies, and with regard to the risk of premature obsolescence of products owned by an OEM in a stock-based business model. Nevertheless, if products are designed to adapt to future needs and wants, business risks could be reduced. The main findings are that CE practices have already been implemented in some respects in the automotive industry, but those practices result in very low resource productivity. Substantial economic and material values are being lost due to the dominant business and design logic of keeping up resource flows into products sold. The primary challenge for incumbent OEMs is to manage, in parallel, both a process for circular business model innovation and a design process for future-adaptable products.

These licentiate studies have been financed by the Swedish Energy Agency. The appended Paper I is part of the research project Future-adaptivity for more energy-efficient vehicles, a collaboration between RISE VIKTORIA and the Academy of Design & Crafts, University of Gothenburg.

APA, Harvard, Vancouver, ISO, and other styles
41

Santos, Rafael Messias. "Fundamentos de lógica, conjuntos e números naturais." Universidade Federal de Sergipe, 2015. https://ri.ufs.br/handle/riufs/6488.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The present work has as its main objective to approach the fundamentals of logic and the notions of sets in a close and elementary way, culminating in the construction of the natural numbers. We present, as naturally and intuitively as possible, the concepts of propositions and open propositions, and their use in the specification of sets, in accordance with the axiom of specification. We also present the logical connectives of open propositions and logical equivalences, relating them to sets. We introduce the concept of a theorem, as well as some forms of writing and proving statements in the context of sets, and we use properties and relations of sets in the proof techniques. Our study ends with the construction of the natural numbers and some of their properties, for example, the order relation.
O presente trabalho tem como principal objetivo abordar os fundamentos de lógica e as noções de conjuntos de maneira estreita e elementar, culminando na construção dos números naturais. Apresentamos, e progredimos na medida do possível, de forma natural e/ou intuitiva, os conceitos de proposições e proposições abertas, e o uso destes nas especificações de conjuntos, de acordo com o axioma da especificação. Apresentamos também os conectivos lógicos de proposições abertas e as equivalências lógicas, relacionando-os aos conjuntos. Mostramos o conceito de Teorema, bem como algumas formas de escritas e demonstrações no âmbito dos conjuntos, e utilizamos propriedades e relações de conjuntos nas técnicas de demonstração. Encerramos nosso estudo com a construção dos números naturais e algumas das suas principais propriedades, como por exemplo, a Relação de Ordem.
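As a compact illustration of the construction this abstract culminates in, the standard set-theoretic (von Neumann) definitions of the natural numbers and of their order relation read as follows; the thesis's own presentation may differ in detail:

    % Each natural number is the set of its predecessors:
    \[
      0 = \emptyset, \qquad n + 1 = n \cup \{n\}
    \]
    % hence 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ...
    % and the order relation is simply membership:
    \[
      m < n \iff m \in n
    \]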
APA, Harvard, Vancouver, ISO, and other styles
42

Moyse, Gilles. "Résumés linguistiques de données numériques : interprétabilité et périodicité de séries." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066526/document.

Full text
Abstract:
Nos travaux s'inscrivent dans le domaine des résumés linguistiques flous (RLF) qui permettent la génération de phrases en langage naturel, descriptives de données numériques, et offrent ainsi une vision synthétique et compréhensible de grandes masses d'information. Nous nous intéressons d'abord à l'interprétabilité des RLF, capitale pour fournir une vision simplement appréhendable de l'information à un utilisateur humain et complexe du fait de sa formulation linguistique. En plus des travaux existant à ce sujet sur les composants élémentaires des RLF, nous proposons une approche globale de l'interprétabilité des résumés vus comme un ensemble de phrases et nous intéressons plus spécifiquement à la question de leur cohérence. Afin de la garantir dans le cadre de la logique floue standard, nous introduisons une formalisation originale de l'opposition entre phrases de complexité croissante. Ce formalisme nous permet de démontrer que les propriétés de cohérence sont vérifiables par le choix d'un modèle de négation spécifique. D'autre part, nous proposons sur cette base un cube en 4 dimensions mettant en relation toutes les oppositions possibles entre les phrases d'un RLF et montrons que ce cube généralise plusieurs structures d'opposition logiques existantes. Nous considérons ensuite le cas de données sous forme de séries numériques et nous intéressons à des résumés linguistiques portant sur leur périodicité : les phrases que nous proposons indiquent à quel point une série est périodique et proposent une formulation linguistique appropriée de sa période. La méthode d’extraction proposée, nommée DPE pour Detection of Periodic Events, permet de segmenter les données de manière adaptative et sans paramètre utilisateur, en utilisant des outils issus de la morphologie mathématique. Ces segments sont ensuite utilisés pour calculer la période de la série temporelle ainsi que sa périodicité, calculée comme un degré de qualité sur le résultat renvoyé mesurant à quel point la série est périodique. Enfin, DPE génère des phrases comme « Environ toutes les 2 heures, l'afflux de client est important ». Des expériences sur des données artificielles et réelles confirment la pertinence de l'approche. D’un point de vue algorithmique, nous proposons une implémentation incrémentale et efficace de DPE, basée sur l’établissement de formules permettant le calcul de mises à jour des variables. Cette implémentation permet le passage à l'échelle de la méthode ainsi que l'analyse en temps réel de flux de données. Nous proposons également une extension de DPE basée sur le concept de périodicité locale permettant d'identifier les sous-séquences périodiques d'une série temporelle par l’utilisation d’un test statistique original. La méthode, validée sur des données artificielles et réelles, génère des phrases en langage naturel permettant d’extraire des informations du type « Toutes les deux semaines sur le premier semestre de l'année, les ventes sont élevées »
Our research is in the field of fuzzy linguistic summaries (FLS), which allow the generation of natural language sentences describing very large amounts of numerical data, providing concise and intelligible views of these data. We first focus on the interpretability of FLS, crucial to provide end-users with an easily understandable text, but hard to achieve due to its linguistic form. Beyond existing works on that topic, based on the basic components of FLS, we propose a general approach to the interpretability of summaries, considering them globally as groups of sentences. We focus more specifically on their consistency. In order to guarantee it in the framework of standard fuzzy logic, we introduce a new model of oppositions between increasingly complex sentences. The model allows us to show that these consistency properties can be satisfied by selecting a specific negation approach. Moreover, based on this model, we design a 4-dimensional cube displaying all the possible oppositions between sentences in an FLS and show that it generalises several existing logical opposition structures. We then consider the case of data in the form of numerical series and focus on linguistic summaries about their periodicity: the sentences we propose indicate the extent to which the series are periodic and offer an appropriate linguistic expression of their periods. The proposed extraction method, called DPE, standing for Detection of Periodic Events, splits the data in an adaptive manner and without any prior information, using tools from mathematical morphology. The segments are then exploited to compute the period and the periodicity, measuring the quality of the estimation and the extent to which the series is periodic. Lastly, DPE returns descriptive sentences of the form “Approximately every 2 hours, the customer arrival is important”. Experiments with artificial and real data show the relevance of the proposed DPE method. From an algorithmic point of view, we propose an incremental and efficient implementation of DPE, based on established update formulas. This implementation makes DPE scalable and allows it to process real-time streams of data. We also present an extension of DPE based on the local periodicity concept, allowing the identification of local periodic subsequences in a numerical series, using an original statistical test. The method, validated on artificial and real data, returns natural language sentences that extract information of the form “Every two weeks during the first semester of the year, sales are high”
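As an illustrative aside, a drastically simplified sketch of the DPE pipeline (segment, estimate the period from the gaps, score the regularity, verbalise) could look like this; the thresholding below merely stands in for the mathematical-morphology segmentation, which we do not reproduce:

    def dpe_sketch(series, threshold):
        # crude "segmentation": indices where the series is salient
        events = [i for i, v in enumerate(series) if v > threshold]
        gaps = [b - a for a, b in zip(events, events[1:])]
        if not gaps:
            return None
        period = sum(gaps) / len(gaps)                      # estimated period
        var = sum((g - period) ** 2 for g in gaps) / len(gaps)
        periodicity = 1.0 / (1.0 + var)                     # 1.0 = perfectly regular
        return (f"Approximately every {period:.0f} steps, the value is high "
                f"(periodicity degree {periodicity:.2f})")

    print(dpe_sketch([0, 9, 0, 0, 8, 0, 0, 9, 0], threshold=5))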
APA, Harvard, Vancouver, ISO, and other styles
43

Arruda, Alexandre Matos. "Um sistema infinitário para a lógica de menor ponto fixo." reponame:Repositório Institucional da UFC, 2007. http://www.repositorio.ufc.br/handle/riufc/16927.

Full text
Abstract:
ARRUDA, Alexandre Matos. Um sistema infinitário para a lógica de menor ponto fixo. 2007. 91 f. : Dissertação (mestrado) - Universidade Federal do Ceará, Departamento de Computação, Fortaleza-CE, 2007.
The notion of the least fixed-point of an operator is widely applied in computer science as, for instance, in the context of query languages for relational databases. Some extensions of FOL with fixed-point operators on finite structures, such as the least fixed-point logic (LFP), were proposed to deal with problems related to the expressivity of FOL. LFP captures the complexity class PTIME over the class of finite ordered structures. The descriptive characterization of computational classes is a central issue within finite model theory (FMT). Trakhtenbrot's theorem, considered the starting point of FMT, states that validity over finite models is not recursively enumerable, that is, completeness fails over finite models. This result is based on an underlying assumption that any deductive system is of finite nature. However, we can relax such an assumption, as done in the scope of proof theory for arithmetic. Proof theory has roots in Hilbert's programme. Proof-theoretical consequences are, for instance, related to normalization theorems, consistency, decidability, and complexity results. The proof theory for arithmetic is also motivated by Gödel's incompleteness theorems. It aims to offer an example of a true, mathematically meaningful principle not derivable in first-order arithmetic. One way of presenting this proof is based on the definition of a proof system with an infinitary rule, the ω-rule, that establishes the consistency of first-order arithmetic through a proof-theoretical perspective. Motivated by this proof, here we propose an infinitary proof system for LFP that will allow us to investigate proof-theoretical properties. With such an infinitary deductive system, we aim to present a proof theory for a logic traditionally defined within the scope of FMT. It opens up an alternative way of proving results already obtained within FMT, and also new results, through a proof-theoretical perspective. Moreover, we will propose a normalization procedure with some restrictions on the rules, such that this deductive system can be used in a theorem prover to compute queries on relational databases.
A noção de menor ponto-fixo de um operador é amplamente aplicada na ciência da computação como, por exemplo, no contexto das linguagens de consulta para bancos de dados relacionais. Algumas extensões da Lógica de Primeira-Ordem (FOL) com operadores de ponto-fixo em estruturas finitas, como a lógica de menor ponto-fixo (LFP), foram propostas para lidar com problemas relacionados à expressividade de FOL. A LFP captura as classes de complexidade PTIME sobre a classe das estruturas finitas ordenadas. A caracterização descritiva de classes computacionais é uma abordagem central em Teoria dos Modelos Finitos (FMT). O teorema de Trakhtenbrot, considerado o ponto de partida para FMT, estabelece que a validade sobre modelos finitos não é recursivamente enumerável, isto é, a completude falha sobre modelos finitos. Este resultado é baseado na hipótese de que qualquer sistema dedutivo é de natureza finita. Entretanto, nós podemos relaxar tal hipótese como foi feito no escopo da teoria da prova para aritmética. A teoria da prova tem raízes no programa de Hilbert. Conseqüências teóricas da noção de prova são, por exemplo, relacionadas a teoremas de normalização, consistência, decidibilidade, e resultados de complexidade. A teoria da prova para aritmética também é motivada pelos teoremas de incompletude de Gödel, cujo alvo foi fornecer um exemplo de um princípio matemático verdadeiro e significativo que não é derivável na aritmética de primeira-ordem. Um meio de apresentar esta prova é baseado na definição de um sistema de prova com uma regra infinitária, a ω-rule, que estabelece a consistência da aritmética de primeira-ordem através de uma perspectiva de teoria da prova. Motivados por esta prova, iremos propor aqui um sistema infinitário de prova para LFP que nos permitirá investigar propriedades em teoria da prova. Com tal sistema dedutivo infinito, pretendemos apresentar uma teoria da prova para uma lógica tradicionalmente definida no escopo de FMT. Permanece aberto um caminho alternativo de provar resultados já obtidos com FMT e também novos resultados do ponto de vista da teoria da prova. Além disso, iremos propor um procedimento de normalização com restrições para este sistema dedutivo, que pode ser usado em um provador de teoremas para computar consultas em banco de dados relacionais.
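The infinitary rule referred to above is easy to state; in its arithmetical form, which the proposed LFP system adapts, it has one premise per numeral:

    % The omega-rule: infinitely many premises, one for each numeral n
    \[
      \frac{\vdash \varphi(0) \qquad \vdash \varphi(1) \qquad \vdash \varphi(2) \qquad \cdots}{\vdash \forall x\, \varphi(x)}
    \]
    % Derivations become infinite, well-founded trees rather than finite ones,
    % which is precisely how the finite-nature assumption behind
    % Trakhtenbrot-style limitations is relaxed.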
APA, Harvard, Vancouver, ISO, and other styles
44

Motezuki, Fabio Kenji. "Um estudo sobre a simulação computacional da ventilação cruzada em habitações e sua aplicação no projeto arquitetônico." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3146/tde-15092009-163547/.

Full text
Abstract:
Nos últimos anos, devido à crescente preocupação com a sustentabilidade, foram despendidos mundialmente grandes esforços para a redução do consumo de energia pelos sistemas prediais. Em países tropicais como o Brasil, a ventilação natural é uma maneira efetiva e econômica para melhorar o conforto térmico dentro de habitações. Ela contribui para diminuir o uso do condicionamento de ar e renovar o ar da edificação, ajudando a reduzir as chances de se ter a Síndrome do Edifício Doente (Sick Building Syndrome SBS) e melhorando a qualidade do ar interno (Indoor Air Quality IAQ). Para tirar proveito destas vantagens da ventilação natural, o comportamento do fluxo de ar dentro da edificação deve ser analisado considerando o clima local. Existem diversos códigos computacionais baseados na Dinâmica de Fluidos Computacional (Computational Fluid Dynamics CFD) que podem ser utilizados para esta finalidade, no entanto, o CFD é um campo que requer conhecimentos altamente especializados e experiência prática para se obter bons resultados e este conhecimento geralmente está além da formação da maioria dos engenheiros e arquitetos. Com base nas dificuldades listadas e na necessidade de complementar a formação de engenheiros e arquitetos nesta área do conhecimento, este trabalho está focado em dois objetivos: o primeiro é implementar um simulador numérico computacional baseado no algoritmo Solution Algorithm for Transient Fluid Flows SOLA e as condições de contorno necessárias para a simulação da ventilação, sendo que a validação do simulador foi realizada por comparação com resultados numéricos e experimentais existentes na literatura. O segundo objetivo é propor uma ferramenta prática para a análise da ventilação natural na fase de projeto, com uma abordagem baseada na teoria de sistemas nebulosos, para identificar as melhores configurações de aberturas para um dado leiaute. Para isto, adotou-se a idéia utilizada por Givoni em seu estudo experimental: o espaço interno de uma sala é dividido em subdomínios onde a velocidade média do ar, sob diversas configurações de aberturas, é registrada. Como as velocidades médias refletem bem a eficácia da ventilação no subdomínio, elas formam a base para a definição espacial da função de pertinência para boa circulação de ar dentro da sala, considerando cada configuração de abertura. No entanto, ao invés de usar resultados experimentais, uma série de simulações computacionais baseadas em CFD, foram executadas para compor um banco de dados para avaliação das funções de pertinência. Por outro lado, temos o leiaute, que é produzido durante a concepção do projeto. Na medida em que o leiaute provê as informações para elaborar os requisitos do fluxo de ar, a função de pertinência relacionada ao fluxo de ar em cada subdomínio deve ser avaliada baseada no leiaute e nos requisitos do usuário. Acertando os requisitos providos pelo leiaute com a eficácia do fluxo de ar provido pela configuração de abertura, pode ser identificada a configuração que melhor se adapta ao leiaute. Nos casos analisados neste trabalho, o método mostrou-se promissor, indicando a configuração típica que melhor atende aos requisitos de projeto com uma boa conformidade com os resultados obtidos pela simulação da sala completa, incluindo a mobília.
In recent years, due to concerns about sustainability, a great effort in energy saving in building systems has been carried out worldwide. In tropical countries such as Brazil, natural ventilation is an effective and economical option to improve thermal comfort inside dwellings, to avoid the use of costly HVAC systems, and to renew the indoor air, contributing to mitigating the Sick Building Syndrome (SBS) and to improving the Indoor Air Quality (IAQ). In order to take advantage of natural ventilation, the behavior of the airflow inside the buildings must be analyzed considering the local climate. There are many computer simulation codes based on Computational Fluid Dynamics (CFD) that may be adopted for this purpose; however, CFD is a field that requires highly specialized knowledge and experience to achieve good results, and this expertise, which is needed to obtain reliable numerical results, is generally beyond the training of most architects and engineers. Owing to these difficulties and to the need to train engineers and architects in this area of knowledge, this work is focused on two main objectives. The first one is to implement a numerical computational simulation program based on the Solution Algorithm for Transient Fluid Flows (SOLA) and the boundary conditions needed to simulate ventilation. The validation of the code is made by comparing the numerical results with results obtained using numerical or experimental methods published by other authors. The second objective is to propose a practical tool for the analysis of natural ventilation in the design of dwellings, with an approach based on the concepts of fuzzy systems theory, to identify the best configurations of the openings for a given layout. For this, the idea used in Givoni's experimental study is adopted: the inner space of a room is divided into sub-domains whose mean air velocities under different opening configurations are recorded. As the mean velocities reflect very well the effectiveness of the ventilation in the sub-domain, they form the basis for the definition of the spatial distribution of the membership function for good air circulation inside the room concerning each opening configuration. However, instead of experimental measurements, a series of computer simulations were carried out to build a database for the assessment of the membership functions. On the other hand, we have the sketch of the layout, which is produced during the conceptual stage of design. As the layout provides the information about the requirements for the airflow, the membership function regarding the desirable airflow for each sub-domain can be assessed based on the layout and considering the user's requirements. By matching the requirements provided by the layout with the effectiveness of the airflow provided by the opening configurations, the opening configuration that best fits the layout can be identified. In the cases analyzed in this work, the method shows promising results, indicating the typical configuration that best meets the design requirements, in good agreement with the results obtained by simulating the full room, including the furniture.
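To illustrate the matching step only (not the CFD part), here is a toy sketch under invented numbers: each opening configuration carries, per sub-domain, a membership degree for good air circulation, and the layout supplies required degrees; the satisfaction measure and all values below are our assumptions:

    def match_score(config_membership, layout_requirement):
        # per sub-domain satisfaction: full credit when the configuration
        # meets the requirement, partial credit otherwise; aggregate with
        # min(), the usual fuzzy "and" across sub-domains
        scores = [1.0 if c >= r else c / r
                  for c, r in zip(config_membership, layout_requirement)
                  if r > 0]
        return min(scores) if scores else 1.0

    configs = {
        "window_A_door_B":   [0.9, 0.4, 0.7],
        "window_A_window_C": [0.6, 0.8, 0.5],
    }
    requirement = [0.8, 0.6, 0.3]   # e.g. desk, bed, hallway sub-domains
    best = max(configs, key=lambda k: match_score(configs[k], requirement))
    print(best)                     # -> window_A_window_C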
APA, Harvard, Vancouver, ISO, and other styles
45

MacCartney, Bill. "Natural language inference /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

De, La Quintana Bruggemann Pablo Javier. "Automated reasoning for modal logics : a natural deduction based approach." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rosa, João Luis Garcia. "Redes neurais e logica formal em processamento de linguagem natural." [s.n.], 1993. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259553.

Full text
Abstract:
Orientador: Marcio Luiz de Andrade Netto
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica
Resumo: Esta dissertação de mestrado é sobre Processamento de Linguagem Natural (PLN). O PLN consiste de uma série de tarefas que a máquina deve executar para analisar um texto. Na literatura existem vários trabalhos em diversas abordagens. Este trabalho faz uma combinação de abordagens baseadas em lógica e de abordagens conexionistas. O trabalho proposto tem três partes. A primeira parte faz a análise sintática de frases da língua portuguesa. É baseada em lógica. A segunda parte faz a análise semântica, ou a verificação do significado das palavras numa frase. Isto é feito através de redes neurais artificiais, que "aprendem" a representação binária das palavras (suas microcaracterísticas semânticas). Esta abordagem é chamada de conexionismo. Sua grande vantagem é a habilidade de generalização, ou seja, a rede é capaz de reconhecer uma palavra, mesmo que esta não tenha sido mostrada a ela. A terceira, e última, parte deste trabalho trata da utilização de redes recorrentes para análise de frases. Este tipo de rede serve para "ligar" as palavras em uma frase, pois a rede recorrente tem memória. Ela é capaz de "lembrar" da última palavra vista numa seqüência. É útil para ligar as palavras em uma sentença, por exemplo, o sujeito com o objeto, o objeto com o complemento, etc. Isto torna a frase uma entidade única a ser analisada
Abstract: This dissertation is about Natural Language Processing (NLP). NLP consists of a series of tasks the machine should carry out in analysing a text. In the literature, there are papers taking different approaches. This work combines two approaches: one based on logic and one on connectionism. The proposed work is divided into three parts. The first performs the parsing, or syntactic analysis, of sentences in the Portuguese language, based on logic. The second part takes care of the semantic analysis, or the verification of the meaning of words in a sentence. This is achieved through artificial neural networks that "learn" the binary representation of the words (their semantic microfeatures). This approach is called connectionism. Its major advantage is the ability to generalize, i.e., it is able to recognize a word even if it has not been presented to the nets. The third, and last, part of this work is about the use of recurrent networks in text analysis. This kind of network is used to "link" the words in a sentence, because the recurrent net is given memory, which makes it able to "remember" the last word seen in a sequence. This is useful for linking the words in a sentence, for example, the subject to the object, the object to the complement, etc. This makes a sentence an entire item to be analysed.
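As a generic illustration of the recurrent-memory idea in the third part, here is a minimal Elman-style step; the sizes, the random weights, and the use of numpy are our choices, not the dissertation's setup:

    import numpy as np

    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(8, 5))     # input (word microfeatures) -> hidden
    W_rec = rng.normal(size=(8, 8))    # previous hidden -> hidden (the memory)

    def step(word_vec, hidden):
        # the new hidden state mixes the current word with a trace of the past
        return np.tanh(W_in @ word_vec + W_rec @ hidden)

    hidden = np.zeros(8)
    for word_vec in rng.normal(size=(4, 5)):   # a 4-word "sentence"
        hidden = step(word_vec, hidden)
    print(hidden.round(2))

Because the hidden state is fed back at every step, the final vector depends on the whole word sequence, which is what lets such a network relate, say, a subject to a later object.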
Mestrado
Mestre em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
48

Hughes, Cameron A. "Epistemic Structures of Interrogative Domains." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1227285777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Beranová, Michaela. "Aspekty zásob v maloobchodě: modely přirozených úbytků zásob a ztratného." Doctoral thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-233730.

Full text
Abstract:
The dissertation thesis deals with the problem of calculating the relevant volume of the quota for natural shrinkage and accidental losses of retail stock. Within the thesis, the factors affecting the extent of accidental shortage of inventories in retail business are investigated, and possible approaches to calculating the relevant volume of such a quota are identified as well. By its scope, the dissertation thesis reacts to a problem that has existed within the income tax law since 1995 but for which a conceptual solution is still missing. This current problem, felt especially in retail business, is precisely the calculation of the relevant volume of a quota of natural shrinkage and accidental losses. The dissertation thesis is based on extensive research carried out both in retail businesses and on the side of the tax administration. On the basis of the outcomes of this research, the main factors affecting the extent of accidental losses of retail stock have been determined. These factors, and an evaluation of their influence, then became the building blocks of two mathematical models for calculating the relevant volume of a quota of natural shrinkage and accidental losses of inventories in retail business: a model based on the statistical method of multiple regression, and a model based on fuzzy logic, or more precisely on fuzzy mathematics. In the conclusion of the dissertation thesis, both models are discussed in terms of their relevance as well as their practical application. The theoretical and practical contributions of the dissertation thesis are also summarized, along with an outline of possible future research in this area.
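Purely as an illustration of the shape of the regression-based model, a sketch follows; the factor names and coefficients are placeholders, since the actual model was estimated from the thesis's survey data:

    # Hypothetical multiple-regression quota: percentage of turnover lost to
    # natural shrinkage, predicted from a few illustrative store factors.
    def shrinkage_quota(turnover_mczk, sales_area_m2, self_service_share):
        b0, b1, b2, b3 = 0.15, 0.02, 0.001, 0.40   # illustrative coefficients
        return (b0 + b1 * turnover_mczk
                   + b2 * sales_area_m2
                   + b3 * self_service_share)

    print(shrinkage_quota(turnover_mczk=12.0,
                          sales_area_m2=400,
                          self_service_share=0.9))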
APA, Harvard, Vancouver, ISO, and other styles
50

Shankar, Arunprasath. "ONTOLOGY-DRIVEN SEMI-SUPERVISED MODEL FOR CONCEPTUAL ANALYSIS OF DESIGN SPECIFICATIONS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1401706747.

Full text
APA, Harvard, Vancouver, ISO, and other styles