
Dissertations / Theses on the topic 'Parsnip'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Parsnip.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chappell, Lauren. "Characterisation of parsnip canker pathogens and identification of plant resistance." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/91132/.

Full text
Abstract:
Parsnips (Pastinaca sativa) are a speciality crop, covering 3000 hectares across the UK, with a 93,000-tonne production and economic value of greater than £31M annually. Currently, the major constraints to production are losses associated with root canker disease, caused by a range of fungal pathogens. With no specific fungicides, development of long-term, sustainable resistance to parsnip canker is highly desirable. This work characterises the pathogens responsible, and develops tools to facilitate breeding for quantitative resistance to root canker diseases. Isolations and molecular characterisation of pathogens responsible for parsnip canker highlighted a range of fungal species, whilst canker symptoms were found to be clearly associated with certain pathogens. Cylindrocarpon destructans, Mycocentrospora acerina and to a lesser extent Itersonilia pastinacae were identified as the primary pathogens responsible for causing parsnip canker in the UK. Itersonilia spp. isolates from a range of hosts were found to infect parsnip roots and leaves, and produce both chlamydospores and ballistospores at a range of temperatures; furthermore, molecular characterisation failed to differentiate between species. For these reasons, Itersonilia should be described as a single species. For both C. destructans and M. acerina, isolates showed minimal variation in pathogenicity on parsnip roots and seedlings, and exhibited mycelial growth even at low temperatures. Phylogenetic analysis identified a species complex for both pathogens that could not be resolved by the ITS (Internal transcribed spacer) alone. Finally, parsnip root and seedling assays were developed to determine resistance to I. pastinacae, M. acerina and C. destructans within parsnip populations. QTL analysis of a parsnip genotyping population identified a significant QTL conferring resistance to M. acerina for use in a marker assisted breeding programme. The understanding of the pathology gained in this project will facilitate selection of resistant varieties, benefitting breeders, growers and through reduction in control mechanisms, society in general.
APA, Harvard, Vancouver, ISO, and other styles
2

Salim, N. "A study on virus diseases of parsnip (Pastinaca sativa L.)." Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sheppard, Andrew W. "Insect herbivore competition and the population dynamics of Heracleum sphondylium L. (Umbelliferae)." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/46560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kilby, Michael. "British literary travellers of the thirties : from Auden and Isherwood to Parsnip and Pimpernell." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Pawan. "INVESTIGATION ON THE UPTAKE AND PHYSIOLOGICAL EFFECTS OF CuO OR ZnO ENGINEERED NANOPARTICLES IN CARROT, PARSNIP AND WHEAT." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1574.

Full text
Abstract:
Rapid production and usage of engineered nanomaterials (ENPs) in the recent years has highlighted the need to address their potential impact to biological systems. The interaction of ENPs with plant systems is not well known or understood. As plants are the base of all ecosystems, there is a possible threat of ENPs entering food webs and the human food supply. In addition, the mechanism of nanoparticle uptake and toxicity in plant systems is not fully understood so there is considerable opportunity for investigation in this area. To provide an assessment of the potential risk of ENPs to plants, the proposed research has examined the interaction of CuO and ZnO ENPs with carrot, parsnip and wheat. This research was completed in three phases in which different aspects of nanoparticle uptake and toxicity were addressed. Phase I analyzed the uptake and accumulation of CuO and ZnO ENPs in carrot and parsnip and gave information regarding physiological impact. Phase II was aimed at the evaluation of CuO and ZnO ENPs as potential micronutrient fertilizers using nutritionally altered carrots. Phase III was focused on comparing the uptake characteristics of CuO or ZnO ENPs to Cu2+ or Zn2+ ions using specific membrane transport inhibitors in wheat. To accomplish goals of phase I, a preliminary experiment was performed to evaluate the behavior of CuO and ZnO ENPs in aqueous suspensions and get information on the extent of dissolution at increasing concentrations of ENPs. For the main experiment, 192 d old hydroponically grown carrot or parsnip plants were treated for 10 d with increasing concentrations of one of the two ENPs with increasing concentrations of the corresponding metal (i.e. 0, 10, 100, 500, or 1000 mg L-1) in deionized water. An ionic treatment was also included, based on preliminary experiment, to distinguish between ion related and nanoparticle-specific toxicity. Plants of both species accumulated the corresponding metals from ENPs in their storage organs. The metal concentration changed in a dose dependent manner and showed saturation at higher concentration treatments. The outer layer of storage organs or peels for both the plant species accumulated significantly higher concentrations of metal as compared to the flesh. There was a significant difference in the total amount of water transpired during the treatment period, suggesting an imbalance in water relations. Overall, this study has provided information describing the impact of ENPs on these two plants and the accumulation of Cu or Zn from CuO or ZnO ENPs. In Phase II, carrot plants were hydroponically grown for 158 d under normal conditions and subsequently subjected to nutrient regime change in a 30 d pre-treatment. Based on the initial level of Cu or Zn present in nutrient solution, plants were grown in similar nutrient solution except for increasing concentrations of CuSO4.5H2O (0, 0.5 and 2.5 μM) or ZnSO4.7H2O (0, 1 and 5 μM) to generate Deprived, Replete and Surplus sets of plants in pre-treatments. The plants from each of the sets were divided into three subsets to get control, ionic or ENP treated plants and treatments were enforced. Control plants received no Cu or Zn whereas ionic and ENP treated plants received 50 mg Cu L-1 or 100 mg Zn L-1 in sulfate salt and nanoparticle form during the 7d treatment period. The visual results show that ZnO ENPs had similar performance as compared to the ion group whereas in CuO ENPs showed a slightly greater phytotoxic effect(s) as compared to ion treatments. 
The plant’s Zn nutritional status had a significant effect on uptake of Zn in all analyzed tissues, whereas the effect on the plant’s Cu nutritional status had a mixed response to Cu source and trends showed significant variability. The presence of plants significantly promoted dissolution of ENPs as compared to controls, but the change in nutrient regime had no significant effect on the dissolution behavior of ENPs. Overall, our results show that use ZnO ENPs can be a promising way of providing Zn to plants, but the same may not be true in case of CuO ENPs for the supply of Cu. In Phase III, 25 d hydroponically grown wheat seedlings were exposed to specific membrane transport inhibitors (carbonylcyanide-m-chlorphenylhydrazole, vanadate, lanthanum) in the presence of either ions (Cu2+ or Zn2+) or ENPs (CuO or ZnO) in a 3 d treatment period. Separate treatments consisting of either membrane transport inhibitors or ions or ENPs were also included to serve as a reference along with controls which received deionized water only. In the Cu experiment, elemental analysis of whole plant showed significant difference only in case of CCCP and vanadate with a decrease in Cu concentration in CuO treatments which suggests the transport of CuO ENPs is reduced by the presence of a protonophore or P-type ATPase. The result of Zn experiment found significant differences in Zn accumulation only in case of CCCP in Zn2+ treatments with a decrease in Zn uptake which implies only Zn2+ ions were affected by the presence of a protonophore (CCCP). The results of these experiments provided a mechanistic insight to the uptake of ENPs and ions in the wheat plant. To sum up, the results gathered in this dissertation has provided a comprehensive understanding of the metal uptake characteristics from the corresponding ENPs, given information on the physiological impact of ENPs to selected plants and analyzed a potential use of ENPs to address a global nutritional problem. The results of this study have potential implications for agriculture, food safety, environment, and human health.
APA, Harvard, Vancouver, ISO, and other styles
6

Fanselow, Gisbert, Matthias Schlesewsky, Damir Cavar, and Reinhold Kliegl. "Optimal parsing: syntactic parsing preferences and optimality theory." Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2011/5716/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kindermann, Jörg. "Experten-Parsing : Parsing und Wissensrepräsentation im texttheoretischen Rahmen /." Hamburg : H. Buske, 1987. http://catalogue.bnf.fr/ark:/12148/cb35566984z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Andrei, Ştefan. "Bidirectional parsing." [S.l. : s.n.], 2000. http://www.sub.uni-hamburg.de/disse/134/Disse.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aycock, John Daniel. "Faster Tomita parsing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ34479.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hocking, Ian. "Resources and parsing." Thesis, University of Exeter, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bhalerao, Rohit Dinesh. "Parallel XML parsing." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pan, Yinfei. "Parallel XML parsing." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
13

Economopoulos, Giorgios Robert. "Generalised LR parsing algorithms." Thesis, Royal Holloway, University of London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Auli, Michael. "Integrated supertagging and parsing." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7636.

Full text
Abstract:
Parsing is the task of assigning syntactic or semantic structure to a natural language sentence. This thesis focuses on syntactic parsing with Combinatory Categorial Grammar (CCG; Steedman 2000). CCG allows incremental processing, which is essential for speech recognition and some machine translation models, and it can build semantic structure in tandem with syntactic parsing. Supertagging solves a subset of the parsing task by assigning lexical types to words in a sentence using a sequence model. It has emerged as a way to improve the efficiency of full CCG parsing (Clark and Curran, 2007) by reducing the parser’s search space. This has been very successful and it is the central theme of this thesis. We begin by an analysis of how efficiency is being traded for accuracy in supertagging. Pruning the search space by supertagging is inherently approximate and to contrast this we include A* in our analysis, a classic exact search technique. Interestingly, we find that combining the two methods improves efficiency but we also demonstrate that excessive pruning by a supertagger significantly lowers the upper bound on accuracy of a CCG parser. Inspired by this analysis, we design a single integrated model with both supertagging and parsing features, rather than separating them into distinct models chained together in a pipeline. To overcome the resulting complexity, we experiment with both loopy belief propagation and dual decomposition approaches to inference, the first empirical comparison of these algorithms that we are aware of on a structured natural language processing problem. Finally, we address training the integrated model. We adopt the idea of optimising directly for a task-specific metric such as is common in other areas like statistical machine translation. We demonstrate how a novel dynamic programming algorithm enables us to optimise for F-measure, our task-specific evaluation metric, and experiment with approximations, which prove to be excellent substitutions. Each of the presented methods improves over the state-of-the-art in CCG parsing. Moreover, the improvements are additive, achieving a labelled/unlabelled dependency F-measure on CCGbank of 89.3%/94.0% with gold part-of-speech tags, and 87.2%/92.8% with automatic part-of-speech tags, the best reported results for this task to date. Our techniques are general and we expect them to apply to other parsing problems, including lexicalised tree adjoining grammar and context-free grammar parsing.
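As a concrete illustration of the pruning idea discussed in this abstract, here is a minimal sketch in Python of the standard adaptive supertagging recipe of Clark and Curran that the thesis builds on (my own illustration, not code from the thesis; the beta schedule and the parse_fn hook are assumptions): for each word, keep only the categories whose probability is within a factor beta of that word's best category, and relax beta when the parser fails to find an analysis.

```python
from typing import Dict, List

def prune_supertags(dists: List[Dict[str, float]], beta: float) -> List[List[str]]:
    """dists[i] maps CCG categories to probabilities for word i; keep categories
    whose probability is at least beta times the best probability for that word."""
    kept = []
    for dist in dists:
        best = max(dist.values())
        kept.append([cat for cat, p in dist.items() if p >= beta * best])
    return kept

def parse_with_backoff(dists, parse_fn, betas=(0.075, 0.03, 0.01, 0.005, 0.001)):
    """Try successively looser category beams until parse_fn returns an analysis."""
    for beta in betas:
        result = parse_fn(prune_supertags(dists, beta))
        if result is not None:
            return result
    return None

# Toy distributions for a two-word sentence.
dists = [{"NP": 0.70, "N": 0.25, "S/S": 0.05},
         {"(S\\NP)/NP": 0.60, "S\\NP": 0.35, "N": 0.05}]
print(prune_supertags(dists, 0.075))   # [['NP', 'N'], ['(S\\NP)/NP', 'S\\NP', 'N']]
```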
APA, Harvard, Vancouver, ISO, and other styles
15

Van, Delden Sebastian Alexander. "Larger-first partial parsing." Doctoral diss., University of Central Florida, 2003. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/2038.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
Larger-first partial parsing is a primarily top-down approach to partial parsing that is opposite to current easy-first, or primarily bottom-up, strategies. A rich partial tree structure is captured by an algorithm that assigns a hierarchy of structural tags to each of the input tokens in a sentence. Part-of-speech tags are first assigned to the words in a sentence by a part-of-speech tagger. A cascade of Deterministic Finite State Automata then uses this part-of-speech information to identify syntactic relations primarily in a descending order of their size. The cascade is divided into four specialized sections: (1) a Comma Network, which identifies syntactic relations associated with commas; (2) a Conjunction Network, which partially disambiguates phrasal conjunctions and fully disambiguates clausal conjunctions; (3) a Clause Network, which identifies non-comma-delimited clauses; and (4) a Phrase Network, which identifies the remaining base phrases in the sentence. Each automaton is capable of adding one or more levels of structural tags to the tokens in a sentence. The larger-first approach is compared against a well-known easy-first approach. The results indicate that this larger-first approach is capable of (1) producing a more detailed partial parse than an easy-first approach; (2) providing better containment of attachment ambiguity; (3) handling overlapping syntactic relations; and (4) achieving a higher accuracy than the easy-first approach. The automata of each network were developed by an empirical analysis of several sources and are presented here in detail.
Ph.D.
Doctorate;
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering and Computer Science
215 p.
xiv, 212 leaves, bound : ill. ; 28 cm.
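To make the cascade idea from the abstract above concrete, here is a small toy sketch in Python (my own illustration, not the automata from the dissertation): each stage scans the part-of-speech sequence and records one extra layer of structural tags, with a larger clause-level pattern applied before a smaller base-phrase pattern. The patterns, labels and example sentence are invented, and regular expressions over space-joined tags stand in for the deterministic finite-state automata.

```python
import re
from typing import List

def chunk(pos: List[str], layers: List[List[str]], pattern: str, label: str) -> None:
    """Mark every non-overlapping match of `pattern` (matched over the
    space-joined POS tags, so patterns should align with whole tags) with
    `label` in a fresh annotation layer."""
    text = " ".join(pos)
    layer = ["O"] * len(pos)
    for m in re.finditer(pattern, text):
        start = text[:m.start()].count(" ")   # token index where the match begins
        width = len(m.group().split())        # number of tokens covered
        for i in range(start, start + width):
            layer[i] = label
    layers.append(layer)

pos = ["DT", "JJ", "NN", "VBD", "IN", "DT", "NN"]   # "the old parser ran in the lab"
layers: List[List[str]] = []
chunk(pos, layers, r"VBD IN DT NN", "VP")   # larger, clause-level relation first
chunk(pos, layers, r"DT( JJ)? NN", "NP")    # smaller base phrases afterwards
for token_and_tags in zip(pos, *layers):
    print(token_and_tags)
```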
APA, Harvard, Vancouver, ISO, and other styles
16

Tepper, Jonathan Andrew. "Corpus-based connectionist parsing." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Le, Ngoc Luyen. "French language DRS parsing." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0202.

Full text
Abstract:
With the rise of the internet, user-generated content from social networking services has become a giant source of information that can be useful to businesses, since users can be viewed as existing or potential customers. Exploiting user-generated texts can help identify users' feelings and intentions, or reduce the effort of the agents who are responsible for collecting or receiving information on social networking services. In this thesis, texts such as speeches, statements and conversations from interactive communication on social media platforms are the main data of our study. We analyse the structures and components of sentences in these texts on the basis of Combinatory Categorial Grammar (CCG) and Discourse Representation Structure (DRS). We propose a method for extracting a CCG tree from the dependency structure of a sentence, and a general architecture that bridges the syntax and semantics of French sentences. As a result, our study obtains representations of natural-language texts in the form of first-order logic or DRS boxes.
APA, Harvard, Vancouver, ISO, and other styles
18

Pate, John Kenton. "Parsing with Local Context." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243880542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lewis, William. "Clitics, Scrambling and Parsing." University of Arizona Linguistics Circle, 2000. http://hdl.handle.net/10150/226611.

Full text
Abstract:
The purpose of this paper is to devise optimal algorithms for parsing linguistic structures that contain P2 (Wackernagel) clitics. Since many languages that have P2 clitics also allow scrambling, any algorithms for parsing P2 clitics must also contain algorithms for parsing scrambled structures. Most of the energy of this paper, however, will be focused on P2 parsing. Although many languages have P2 clitics, I have focused most of my attention on Native American languages (with some exceptions). There is one major reason for this: languages of the Americas are almost entirely ignored by the computational and parsing literature, which focuses on languages of the Indo-European language family (and almost always on English, at that). By doing so, researchers deprive themselves of data and linguistic structural diversity that can help in devising more widely applicable parsing algorithms. This is a computational paper, the intention of which is to develop parsing procedures. Little attention will be paid to a specific syntactic/morphological theory, nor will much attention be paid to the form of the output. These are concerns that can be addressed in a later stage of parser design. What is an "optimal" parsing algorithm? I shall define the optimality of a given solution by the criteria in (1) below: (1) 1) The optimal solution is one which uses devices and formalisms whose generative capacity is as low as possible on the Chomsky hierarchy. 2) The optimal solution uses as few "rules" or "devices" as possible. Obviously, it will be necessary to strike a balance between these two criteria. For this reason, the issue of optimality may be somewhat flexible, depending on how much weight is given to each criterion. The most optimal solutions might require the power of context-sensitive rules, but these may be used in concert with context-free or even finite-state rules.
APA, Harvard, Vancouver, ISO, and other styles
20

Reddy, Goli Venkata Sivakumar. "Syntax-mediated semantic parsing." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29521.

Full text
Abstract:
Querying a database to retrieve an answer, telling a robot to perform an action, or teaching a computer to play a game are tasks requiring communication with machines in a language interpretable by them. Semantic parsing is the task of converting human language to a machine interpretable language. While human languages are sequential in nature with latent structures, machine interpretable languages are formal with explicit structures. The computational linguistics community have created several treebanks to understand the formal syntactic structures of human languages. In this thesis, we use these to obtain formal meaning representations of languages, and learn computational models to convert these meaning representations to the target machine representation. Our goal is to evaluate if existing treebank syntactic representations are useful for semantic parsing. Existing semantic parsing methods mainly learn domain-specific grammars which can parse human languages to machine representation directly. We deviate from this trend and make use of general-purpose syntactic grammar to help in semantic parsing. We use two syntactic representations: Combinatory Categorial Grammar (CCG) and dependency syntax. CCG has a well established theory on deriving meaning representations from its syntactic derivations. But there are no CCG treebanks for many languages since these are difficult to annotate. In contrast, dependencies are easy to annotate and have many treebanks. However, dependencies do not have a well established theory for deriving meaning representations. In this thesis, we propose novel theories for deriving meaning representations from dependencies. Our evaluation task is question answering on a knowledge base. Given a question, our goal is to answer it on the knowledge base by converting the question to an executable query. We use Freebase, the knowledge source behind Google’s search engine, as our knowledge base. Freebase contains millions of real world facts represented in a graphical format. Inspired from the Freebase structure, we formulate semantic parsing as a graph matching problem, i.e., given a natural language sentence, we convert it into a graph structure from the meaning representation obtained from syntax, and find the subgraph of Freebase that best matches the natural language graph. Our experiments on Free917, WebQuestions and GraphQuestions semantic parsing datasets conclude that general-purpose syntax is more useful for semantic parsing than induced task-specific syntax and syntax-agnostic representations.
APA, Harvard, Vancouver, ISO, and other styles
21

Reis, Leonardo Vieira dos Santos. "Adaptable parsing expression grammars." Universidade Federal de Minas Gerais, 2014. http://hdl.handle.net/1843/ESBF-9TELDQ.

Full text
Abstract:
Parser generators have been used for more than 50 years. Tools like YACC can automatically build a parser from a formal definition of the syntax of a language, usually based on context-free grammars (CFGs). The main motivation for automatic parser generation is compiler correctness and recognition completeness, since with a manual implementation it is very difficult to guarantee that all programs in a given language will be correctly analysed. Despite the advantages mentioned above, the technology of automatic parser generation is still not available for languages that allow on-the-fly modifications to their own set of grammar rules. There is a lack of appropriate formal models for describing the syntax of these languages, so efficient parsers cannot be generated automatically and must be written by hand. In this thesis, we present the design and formal definition of Adaptable Parsing Expression Grammars (APEG), an extension of the Parsing Expression Grammar (PEG) model that allows production rules to be modified during the analysis of an input string. We also show that APEG is capable of defining languages that require on-the-fly modifications, and that it allows the automatic generation of parsers that are efficient enough to be used in practice.
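The core idea of an adaptable grammar can be sketched in a few lines. The toy below (my own Python illustration, assuming nothing about the actual APEG formalism or its implementation) is a PEG-style recursive-descent recogniser whose rule table is an ordinary dictionary, so a semantic action can install a new rule while the same input is still being parsed; here a declaration adds a rule that only accepts keywords declared earlier in the string.

```python
from typing import Callable, Dict, Optional

Rule = Callable[[str, int], Optional[int]]   # returns new position, or None on failure
rules: Dict[str, Rule] = {}

def lit(s: str) -> Rule:
    """Match a literal string."""
    return lambda text, i: i + len(s) if text.startswith(s, i) else None

def seq(*parts) -> Rule:
    """Match a sequence of rules; rule names are looked up lazily in `rules`,
    so rules installed during parsing are visible to later steps."""
    def run(text, i):
        for part in parts:
            rule = rules[part] if isinstance(part, str) else part
            i = rule(text, i)
            if i is None:
                return None
        return i
    return run

def declaration(text, i):
    """Parse 'let <name>' and install, on the fly, a rule that accepts 'use <name>'."""
    j = lit("let ")(text, i)
    if j is None:
        return None
    k = j
    while k < len(text) and text[k].isalpha():
        k += 1
    name = text[j:k]
    rules["use"] = seq(lit("use "), lit(name))   # on-the-fly rule update
    return k

rules["decl"] = declaration
rules["program"] = seq("decl", lit("; "), "use")

print(rules["program"]("let foo; use foo", 0))   # 16: whole string consumed
print(rules["program"]("let foo; use bar", 0))   # None: undeclared keyword rejected
```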
APA, Harvard, Vancouver, ISO, and other styles
22

Jaf, Sardar. "The application of constraint rules to data-driven parsing." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/the-application-of-constraint-rules-to-datadriven-parsing(fe7b983d-e5ec-4e86-8f97-05066c1455b1).html.

Full text
Abstract:
The process of determining the structural relationships between words in both natural and machine languages is known as parsing. Parsers are used as core components in a number of Natural Language Processing (NLP) applications such as online tutoring applications, dialogue-based systems and textual entailment systems. They have been used widely in the development of machine languages. In order to understand the way parsers work, we will investigate and describe a number of widely used parsing algorithms. These algorithms have been utilised in a range of different contexts such as dependency frameworks and phrase structure frameworks. We will investigate and describe some of the fundamental aspects of each of these frameworks, which can function in various ways including grammar-driven approaches and data-driven approaches. Grammar-driven approaches use a set of grammatical rules for determining the syntactic structures of sentences during parsing. Data-driven approaches use a set of parsed data to generate a parse model which is used for guiding the parser during the processing of new sentences. A number of state-of-the-art parsers have been developed that use such frameworks and approaches. We will briefly highlight some of these in this thesis. There are three specific important features that it is important to integrate into the development of parsers. These are efficiency, accuracy, and robustness. Efficiency is concerned with the use of as little time and computing resources as possible when processing natural language text. Accuracy involves maximising the correctness of the analyses that a parser produces. Robustness is a measure of a parser’s ability to cope with grammatically complex sentences and produce analyses of a large proportion of a set of sentences. In this thesis, we present a parser that can efficiently, accurately, and robustly parse a set of natural language sentences. Additionally, the implementation of the parser presented here allows for some trading-off between different levels of parsing performance. For example, some NLP applications may emphasise efficiency/robustness over accuracy while some other NLP systems may require a greater focus on accuracy. In dialogue-based systems, it may be preferable to produce a correct grammatical analysis of a question, rather than incorrectly analysing the grammatical structure of a question or quickly producing a grammatically incorrect answer for a question. Alternatively, it may be desirable that document translation systems translate a document into a different language quickly but less accurately, rather than slowly but highly accurately, because users may be able to correct grammatically incorrect sentences manually if necessary. The parser presented here is based on data-driven approaches but we will allow for the application of constraint rules to it in order to improve its performance.
APA, Harvard, Vancouver, ISO, and other styles
23

Adams, Allison. "Dependency Parsing and Dialogue Systems : an investigation of dependency parsing for commercial application." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324859.

Full text
Abstract:
In this thesis, we investigate dependency parsing for commercial application, namely for future integration in a dialogue system. To do this, we conduct several experiments on dialogue data to assess parser performance on this domain, and to improve this performance over a baseline. This work makes the following contributions: first, the creation and manual annotation of a gold-standard data set for dialogue data; second, a thorough error analysis of the data set, comparing neural network parsing to traditional parsing methods on this domain; and finally, various domain adaptation experiments show how parsing on this data set can be improved over a baseline.  We further show that dialogue data is characterized by questions in particular, and suggest a method for improving overall parsing on these constructions.
APA, Harvard, Vancouver, ISO, and other styles
24

Loftsson, Hrafn. "Tagging and parsing Icelandic text." Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487602.

Full text
Abstract:
Natural language processing (NLP) is a very young discipline in Iceland. Therefore, there is a lack of publicly available basic tools for processing the morphologically complex Icelandic language. In this thesis, we investigate the effectiveness and viability of using (mainly) rule-based methods for analysing the syntax of Icelandic text. For this purpose, and because our work has a practical focus, we develop an NLP toolkit, IceNLP. The toolkit consists of a tokeniser, the morphological analyser IceMorphy, the part-of-speech tagger IceTagger, and the shallow parser IceParser. The task of the tokeniser is to split a sequence of characters into linguistic units and identify where one sentence ends and another one begins. IceMorphy is used for guessing part-of-speech tags for unknown words and filling in tag profile gaps in a dictionary. IceTagger is a linguistic rule-based tagger which achieves considerably higher tagging accuracy than previously reported results using taggers based on data-driven techniques. Furthermore, by using several tagger integration and combination methods, we substantially increase the tagging accuracy of Icelandic text with regard to previous work. Our shallow parser, IceParser, is an incremental finite-state parser, the first parser published for the Icelandic language. It produces shallow syntactic annotation, using an annotation scheme specifically developed in this work. Furthermore, we create a grammar definition corpus, a representative collection of sentences annotated using the annotation scheme. The development of our toolkit is a step towards the goal of building a Basic Language Resource Kit (BLARK) for the Icelandic language. Our toolkit has been made available for use in the research community, and should therefore encourage further research and development of NLP tools.
APA, Harvard, Vancouver, ISO, and other styles
25

Bourdages, Johanne S. "Parsing gaps: Evidence from French." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5893.

Full text
Abstract:
This dissertation investigates the parsing of filler-gap dependencies in French, more specifically those found in the restrictive relative construction. The primary goal of this research is to examine a question raised by Stowe (1984; 1986) concerning whether the parser has access to islandhood knowledge. Stowe's (1984;1986) results are compatible with the view that this type of grammatical information is readily accessible for the parser's initial analysis. However, her results are ambiguous since in her study, islandhood was confounded with subject position, where she failed to find gap-location effect. This study will examine the CNP-constraint in object position. A second goal is to examine whether the parsing mechanism postulates a gap in any potential position or only when the position is also a potential end of the sentence. Most of the experiments in the field use sentences where false gaps are located in a position which can correspond to a potential end of a sentence. Thus, the "surprise" effect attributed to false gap filling could also be attributed to the parsing mechanism finding that it did not reach the end of the sentence as expected. This dissertation provides evidence compatible with the view that island constraints, in this case the Complex NP constraint, are readily available to the parser; however, it also provides indications that the potential end of the sentence effect is a factor which has to be considered in identifying a gap location.
APA, Harvard, Vancouver, ISO, and other styles
26

Ohno, Tomohiro, and Shigeki Matsubara. "Corpus-based Speech Monologue Parsing." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

MATSUBARA, Shigeki, and Yoshihide KATO. "Incremental Parsing with Adjoining Operation." Institute of Electronics, Information and Communication Engineers, 2009. http://hdl.handle.net/2237/15001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Honsberger, Michael. "Parsing memory structure with reconsolidation." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104703.

Full text
Abstract:
Neural populations in the basolateral amygdala (BLA) have been shown to be an essential substrate for associative fear memories. However, it remains unclear whether distinct associative memories are mediated by independent or overlapping populations of BLA neurons. The focus of this dissertation is to describe efforts undertaken to use reconsolidation to advance this issue. I used the fact that initiation of reconsolidation is dependant on the reactivation of a particular memory. Therefore, if two memories are stored by the same overlapping population within the BLA, then interference through blockade of reconsolidation of one should lead to impairments in both. Conversely, if the two memories are stored independently of each other within the BLA, then blockade of reconsolidation of one memory should leave the second intact. I have investigated this with two protocols that each result in two distinct fear memories. Using a single tone-shock, I investigated the relationship of auditory and contextual fear memory. My findings demonstrate a complex functional interaction between these two memories. I designed a novel 2-tone protocol with which I attempted to use the selective blockade of reconsolidation to test a model of fear memory acquisition. This attempt illuminated the need for further knowledge regarding the boundary conditions of reconsolidation. I also describe an unbiased approach to detect molecular mechanisms unique to either consolidation or reconsolidation. One candidate from this screen was validated for its role in consolidation in the BLA. The experiments described in this dissertation provide a unique view of how fear memory representations are organized in the amygdala and how manipulations of reconsolidation can be used to understand the structure of memory.
APA, Harvard, Vancouver, ISO, and other styles
29

Çakici, Ruket. "Wide-coverage parsing for Turkish." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3807.

Full text
Abstract:
Wide-coverage parsing is an area that attracts much attention in natural language processing research. This is due to the fact that it is the first step to many other applications in natural language understanding, such as question answering. Supervised learning using human-labelled data is currently the best performing method. Therefore, there is great demand for annotated data. However, human annotation is very expensive, and the amount of annotated data is always much less than is needed to train well-performing parsers. This is the motivation behind making the best use of data available. Turkish presents a challenge both because syntactically annotated Turkish data is relatively small and Turkish is highly agglutinative, hence unusually sparse at the whole word level. METU-Sabancı Treebank is a dependency treebank of 5620 sentences with surface dependency relations and morphological analyses for words. We show that including even the crudest forms of morphological information extracted from the data boosts the performance of both generative and discriminative parsers, contrary to received opinion concerning English. We induce word-based and morpheme-based CCG grammars from the Turkish dependency treebank. We use these grammars to train a state-of-the-art CCG parser that predicts long-distance dependencies in addition to the ones that other parsers are capable of predicting. We also use the correct CCG categories as simple features in a graph-based dependency parser and show that this improves the parsing results. We show that a morpheme-based CCG lexicon for Turkish is able to solve many problems such as conflicts of semantic scope, recovering long-range dependencies, and obtaining smoother statistics from the models. CCG handles linguistic phenomena, i.e. local and long-range dependencies, more naturally and effectively than other linguistic theories while potentially supporting semantic interpretation in parallel. Using morphological information and a morpheme-cluster based lexicon improves the performance both quantitatively and qualitatively for Turkish. We also provide an improved version of the treebank which will be released by kind permission of METU and Sabancı.
APA, Harvard, Vancouver, ISO, and other styles
30

Schepman, Astrid Helena Baltina Catherina. "Prosody and on-line parsing." Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

De, Brito e. Cunha Goncalo. "Incremental parsing of coordinate constructions." Thesis, University of Sussex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Yuan Ph D. Massachusetts Institute of Technology. "Parsing with sparse annotated resources." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82180.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 67-73).
This thesis focuses on algorithms for parsing within the context of sparse annotated resources. Despite recent progress in parsing techniques, existing methods require significant resources for training. Therefore, current technology is limited when it comes to parsing sentences in new languages or new grammars. We propose methods for parsing when annotated resources are limited. In the first scenario, we explore an automatic method for mapping language-specific part-of-speech (POS) tags into a universal tagset. Universal tagsets play a crucial role in cross-lingual syntactic transfer of multilingual dependency parsers. Our central assumption is that a high-quality mapping yields POS annotations with coherent linguistic properties which are consistent across source and target languages. We encode this intuition in an objective function. Given the exponential size of the mapping space, we propose a novel method for optimizing the objective over mappings. Our results demonstrate that automatically induced mappings rival their manually designed counterparts when evaluated in the context of multilingual parsing. In the second scenario, we consider the problem of cross-formalism transfer in parsing. We are interested in parsing constituency-based grammars such as HPSG and CCG using a small amount of data annotated in the target formalisms and a large quantity of coarse CFG annotations from the Penn Treebank. While the trees annotated in all of the target formalisms share a similar basic syntactic structure with the Penn Treebank CFG, they also encode additional constraints and semantic features. To handle this apparent difference, we design a probabilistic model that jointly generates CFG and target formalism parses. The model includes features of both parses, enabling transfer between the formalisms, and preserves parsing efficiency. Experimental results show that across a range of formalisms, our model benefits from the coarse annotations.
by Yuan Zhang.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
33

Koo, Terry (Terry Y. ). "Advances in discriminative dependency parsing." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60102.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 167-176).
Achieving a greater understanding of natural language syntax and parsing is a critical step in producing useful natural language processing systems. In this thesis, we focus on the formalism of dependency grammar as it allows one to model important head modifier relationships with a minimum of extraneous structure. Recent research in dependency parsing has highlighted the discriminative structured prediction framework (McDonald et al., 2005a; Carreras, 2007; Suzuki et al., 2009), which is characterized by two advantages: first, the availability of powerful discriminative learning algorithms like log-linear and max-margin models (Lafferty et al., 2001; Taskar et al., 2003), and second, the ability to use arbitrarily-defined feature representations. This thesis explores three advances in the field of discriminative dependency parsing. First, we show that the classic Matrix-Tree Theorem (Kirchhoff, 1847; Tutte, 1984) can be applied to the problem of non-projective dependency parsing, enabling both log-linear and max-margin parameter estimation in this setting. Second, we present novel third-order dependency parsing algorithms that extend the amount of context available to discriminative parsers while retaining computational complexity equivalent to existing second-order parsers. Finally, we describe a simple but effective method for augmenting the features of a dependency parser with information derived from standard clustering algorithms; our semi-supervised approach is able to deliver consistent benefits regardless of the amount of available training data.
by Terry Koo.
Ph.D.
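For readers unfamiliar with the Matrix-Tree Theorem mentioned in the abstract above, the short sketch below (my own Python/NumPy illustration, not code from the thesis) computes the log-partition function over non-projective dependency trees as the determinant of a minor of the directed Laplacian built from exponentiated arc scores. This basic variant lets the artificial root take several dependents; the single-root refinement described by Koo et al. (2007) replaces one row of the matrix with the root scores.

```python
import numpy as np

def nonprojective_log_partition(scores):
    """Log of the sum, over all non-projective dependency trees, of the product
    of exponentiated arc scores, via the Matrix-Tree Theorem.

    scores: (n+1) x (n+1) array of arc log-scores; row 0 is the artificial root
    and scores[h, m] scores the arc h -> m. The diagonal and column 0 are ignored.
    """
    w = np.exp(scores)                    # arc weights
    np.fill_diagonal(w, 0.0)              # no self-loops
    w[:, 0] = 0.0                         # no arcs into the root
    L = -w.copy()                         # directed Laplacian: L[i, j] = -w[i, j] ...
    np.fill_diagonal(L, w.sum(axis=0))    # ... and L[j, j] = total weight into j
    sign, logdet = np.linalg.slogdet(L[1:, 1:])   # minor with root row/column removed
    return logdet

# Toy example: three words plus the artificial root.
rng = np.random.default_rng(0)
print(nonprojective_log_partition(rng.normal(size=(4, 4))))
```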
APA, Harvard, Vancouver, ISO, and other styles
34

Ackerman, Lauren Marie. "Influences on Parsing Ambiguity." Thesis, Northwestern University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3741393.

Full text
Abstract:

The primary goal of this dissertation is to characterize the relative strength of two of the influences on the parser’s behavior during ambiguity resolution: coreference dependency formation and verb frame preference. I find that coreference dependency formation exerts a stronger influence on the parser than does verb frame preference, even when verb frame preference is maximized in transitively biased frames.

Previous studies have shown local attachment bias initially directs the parser to an embedded object analysis in sentences like (1), in which the DP Annie’s melody is locally ambiguous between the embedded object (EO)/matrix subject (MS) analyses (Ferreira and Henderson, 1990).

(1) Whenever she was trying to casually hum Annie’s melody was beautiful.

Additionally, (1) contains a cataphoric pronoun she which triggers an active search for an antecedent, whereby the parser seeks the antecedent only in grammatically sanctioned positions, such as where the antecedent is not c-commanded by the pronoun (Kazanina et al., 2007; van Gompel and Liversedge, 2003). In (1), the closest potential antecedent is Annie. However, it can be the antecedent only if the DP that contains it is analyzed as the MS, thus outside the whenever-clause and not c-commanded by she. A bias toward an early cataphoric dependency formation could lead the parser to analyze the ambiguous DP as the MS. In (1), there is a bias toward a MS analysis from the antecedent search in addition to a bias toward the local attachment EO analysis.

I find that, regardless of the transitivity bias of the verb in the position of hums, the parser forms a dependency between the pronoun she and Annie. This indicates that dependency formation can supersede verb frame preferences and any default preference the parser may have toward local attachment (Phillips and Gibson, 1997). Moreover, I also observe effects attributable to both the MS and EO parses. This suggests that the parser builds both alternatives and maintains them in parallel. From this, I conclude that the parser prioritizes information from an ongoing dependency search over lexical properties during ambiguity resolution.

APA, Harvard, Vancouver, ISO, and other styles
35

Rudmann, Emily. "Parsing the Streptococcus pneumoniae virulome." Thesis, Boston College, 2020. http://hdl.handle.net/2345/bc-ir:108795.

Full text
Abstract:
Thesis advisor: Tim van Opijnen
Streptococcus pneumoniae is a prominent gram-positive commensal and opportunistic pathogen which possesses a large pan-genome. Significant strain-to-strain variability in genomic content drives the use of varied pathways to perform similar processes between strains. Considering this variation, we employ a set of 36 strains, representative of 78% of total pan-genome diversity, with which to perform functional studies. We previously determined the set of genes required by 22 of the 36 strains to maintain successful infection in a host, or the virulome. In this work, we sought to parse from the virulome the genes required specifically for nasopharyngeal adhesion, a crucial step in S. pneumoniae colonization and transmission, and often a precursor to invasive disease, as well as gene requirements for subversion of the macrophage. We performed in vitro attachment Tn-seq in the 22 strains to D562 human nasopharyngeal epithelial cells, identifying thirteen factors that exhibit requirements for adhesion, and preliminarily validated a proposed universal requirement for survival of the macrophage by a killing assay using J774A.1 murine migratory macrophages
Thesis (BS) — Boston College, 2020
Submitted to: Boston College. College of Arts and Sciences
Discipline: A&S Honors
Discipline: Biology
APA, Harvard, Vancouver, ISO, and other styles
36

He, Haoyu. "Deep learning based human parsing." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24262.

Full text
Abstract:
Human parsing, or human body part semantic segmentation, has been an active research topic due to its wide potential applications. Although prior works have made significant progress by introducing large-scale datasets and deep learning to solve the problem, there are still two challenges remain unsolved. Firstly, to better exploit the existing parsing annotations, prior methods learn a knowledge-sharing mechanism to improve semantic structures in cross-dataset human parsing. However, the modeling for such mechanism remains inefficient for not considering classes' granularity difference in different domains. Secondly, the trained models are limited to parsing humans into classes pre-defined in the training data, which lacks the generalization ability to the unseen fashion classes. Targeting at improving feature representations from multi-domain annotations more efficiently, in this thesis, we propose a novel GRAph PYramid Mutual Learning (Grapy-ML) method to address the cross-dataset human parsing problem, where we model the granularity difference through a graph pyramid. Starting from the prior knowledge of the human body hierarchical structure, we devise a graph pyramid module (GPM) by stacking three levels of graph structures from coarse granularity to fine granularity subsequently. Specifically, the network weights of the first two levels are shared to exchange the learned coarse-granularity information across different datasets. At each level, GPM utilizes the self-attention mechanism to model the correlations between context nodes. Then, it adopts a top-down mechanism to progressively refine the hierarchical features through all the levels. GPM also enables efficient mutual learning. By making use of the multi-granularity labels, Grapy-ML learns a more discriminative feature representation and achieves state-of-the-art performance, which is demonstrated by extensive experiments on the three popular benchmarks, e.g., CIHP dataset. To bridge the generalizability gap, in this thesis, we propose a new problem named one-shot human parsing (OSHP) that requires to parse human into an open set of reference classes defined by any single reference example. During training, only base classes defined in the training set are exposed, which can overlap with part of reference classes. In this thesis, we devise a novel Progressive One-shot Parsing network (POPNet) to address two critical challenges in this problem, i.e., testing bias and small size. POPNet consists of two collaborative metric learning modules named Attention Guidance Module (AGM) and Nearest Centroid Module (NCM), which can learn representative prototypes for base classes and quickly transfer the ability to the unseen classes during testing, thereby reducing the testing bias. Moreover, POPNet adopts a progressive human parsing framework that can incorporate the learned knowledge of parent classes at the coarse granularity to help recognize the unseen descendant classes at the fine granularity, thereby handling the small size issue. Experiments on the ATR-OS benchmark tailoring for OSHP, demonstrate POPNet outperforms other representative one-shot segmentation models by large margins and establishes a strong baseline for the new problem.
APA, Harvard, Vancouver, ISO, and other styles
37

Perret, Jérémy. "Parsing dialogue and argumentative structures." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30350/document.

Full text
Abstract:
This work presents novel techniques for parsing the structures of multi-party dialogue and argumentative texts. Finding the structure of extended texts and conversations is a critical step towards the extraction of their underlying meaning. The task is notoriously hard, as discourse is a high-level description of language, and multi-party dialogue involves many complex linguistic phenomena. Historically, representation of discourse moved from local relationships, forming unstructured collections, towards trees, then constrained graphs. Our work uses the latter framework, through Segmented Discourse Representation Theory. We base our research on a annotated corpus of English chats from the board game The Settlers of Catan. Per the strategic nature of the conversation and the freedom of online chat, these dialogues exhibit complex discourse units, interwoven threads, among other features which are mostly overlooked by the current parsing literature. We discuss two corpus-related experiments. The first expands the definition of the Right Frontier Constraint, a formalization of discourse coherence principles, to adapt it to multi-party dialogue. The second demonstrates a data extraction process giving a strategic advantage to an artificial player of Settlers by inferring its opponents' assets from chat negotiations. We propose new methods to parse dialogue, using jointly machine learning, graph algorithms and linear optimization, to produce rich and expressive structures with greater accuracy than previous attempts. We describe our method of constrained discourse parsing, first on trees using the Maximum Spanning Tree algorithm, then on directed acyclic graphs using Integer Linear Programming with a number of original constraints. We finally apply these methods to argumentative structures, on a corpus of English and German texts, jointly annotated in two discourse representation frameworks and one argumentative. We compare the three annotation layers, and experiment on argumentative parsing, achieving better performance than similar works
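The decoding strategy sketched in the abstract (score candidate attachments between discourse units, then extract a tree) can be pictured with a deliberately simplified example; the greedy head selection below stands in for true Maximum Spanning Tree decoding, and the scoring function and toy discourse units are placeholders, not material from the thesis.

```python
def greedy_attach(units, score):
    """Attach each discourse unit to its highest-scoring preceding unit.

    `units` is a list of elementary discourse units (index 0 acts as root);
    `score(head, dep)` returns a learned attachment score. Restricting heads
    to preceding units keeps the result acyclic; real MST decoding
    (Chu-Liu/Edmonds) searches over all candidate heads and repairs cycles.
    """
    heads = {0: None}  # the root has no head
    for dep in range(1, len(units)):
        heads[dep] = max(range(dep), key=lambda head: score(head, dep))
    return heads

# Toy usage: a scorer that simply prefers adjacent attachments.
edus = ["ROOT", "I'll trade wheat.", "For sheep?", "Deal."]
print(greedy_attach(edus, score=lambda h, d: -(d - h)))
# {0: None, 1: 0, 2: 1, 3: 2}
```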
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Hang Ph D. Massachusetts Institute of Technology. "Visual and auditory scene parsing." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122101.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-132).
Scene parsing is a fundamental topic in computer vision and computational audition, where computational approaches are developed to match the human perceptual system's ability to understand scenes, e.g. to group the visual regions of an image into objects and to segregate sound components in a noisy environment. This thesis investigates fully-supervised and self-supervised machine learning approaches to parse visual and auditory signals, including images, videos, and audio. Visual scene parsing refers to the dense grouping and labeling of image regions into object concepts. First, I build the MIT scene parsing benchmark based on ADE20K, a large-scale, densely annotated dataset. This benchmark, together with the state-of-the-art models we open-source, offers a powerful tool for the research community to solve semantic and instance segmentation tasks. Then I investigate the challenge of parsing a large number of object categories in the wild. An open-vocabulary scene parsing model which combines a convolutional neural network with a structured knowledge graph is proposed to address the challenge. Auditory scene parsing refers to recognizing and decomposing sound components in complex auditory environments. I propose a general audio-visual self-supervised learning framework that learns from a large amount of unlabeled internet videos. The learning process discovers the natural synchronization of vision and sounds without human annotation. The learned model achieves the capability to localize sound sources in videos and separate them from the mixture. Furthermore, I demonstrate that motion cues in videos are tightly associated with sounds, which helps in solving sound localization and separation problems.
by Hang Zhao.
Ph. D. in Mechanical Engineering and Computation
Massachusetts Institute of Technology, Department of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Patterson, Jamie L. "Parsing of Natural Language Requirements." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1147.

Full text
Abstract:
The purpose of this thesis was to automate verification of the software requirements for an implantable cardioverter defibrillator with minimal manual rework. The requirements were written in plain English with only loose stylistic constraints. While full automation proved infeasible, many significant advances were made towards solving the problem, including a framework for storing requirements, a program which translates most of the natural language requirements into the framework, and a novel approach to part-of-speech analysis.
APA, Harvard, Vancouver, ISO, and other styles
40

Ng, Dominick. "Evaluating Parsers with Dependency Constraints." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14550.

Full text
Abstract:
Many syntactic parsers now score over 90% on English in-domain evaluation, but the remaining errors have been challenging to address and difficult to quantify. Standard parsing metrics provide a consistent basis for comparison between parsers, but do not illuminate what errors remain to be addressed. This thesis develops a constraint-based evaluation for dependency and Combinatory Categorial Grammar (CCG) parsers to address this deficiency. We examine the constrained and cascaded impact, representing the direct and indirect effects of errors on parsing accuracy. This identifies errors that are the underlying source of problems in parses, compared to those which are a consequence of those problems. Kummerfeld et al. (2012) propose a static post-parsing analysis to categorise groups of errors into abstract classes, but this cannot account for cascading changes resulting from repairing errors, or limitations which may prevent the parser from applying a repair. In contrast, our technique is based on enforcing the presence of certain dependencies during parsing, whilst allowing the parser to choose the remainder of the analysis according to its grammar and model. We draw constraints for this process from gold-standard annotated corpora, grouping them into abstract error classes such as NP attachment, PP attachment, and clause attachment. By applying constraints from each error class in turn, we can examine how parsers respond when forced to correctly analyse each class. We show how to apply dependency constraints in three parsers: the graph-based MSTParser (McDonald and Pereira, 2006) and the transition-based ZPar (Zhang and Clark, 2011b) dependency parsers, and the C&C CCG parser (Clark and Curran, 2007b). Each is widely used and influential in the field, and each generates some form of predicate-argument dependencies. We compare the parsers, identifying common sources of error and differences in the distribution of errors between constrained and cascaded impact. Our work allows us to contrast the implementations of each parser and how they respond to constraint application. Using our analysis, we experiment with new features for dependency parsing, which encode the frequency of proposed arcs in large-scale corpora derived from scanned books. These features are inspired by and extend the work of Bansal and Klein (2011). We target these features at the most notable errors, and show how they address some, but not all, of the difficult attachments across newswire and web text. CCG parsing is particularly challenging, as different derivations do not always generate different dependencies. We develop dependency hashing to address semantically redundant parses in n-best CCG parsing, and demonstrate its necessity and effectiveness. Dependency hashing substantially improves the diversity of n-best CCG parses, and improves a CCG reranker when used for creating training and test data. We show the intricacies of applying constraints to C&C, and describe instances where applying constraints causes the parser to produce a worse analysis. These results illustrate how algorithms which are relatively straightforward for constituency and dependency parsers are non-trivial to implement in CCG. This work has explored dependencies as constraints in dependency and CCG parsing. We have shown how dependency hashing can efficiently eliminate semantically redundant CCG n-best parses, and presented a new evaluation framework based on enforcing the presence of dependencies in the output of the parser. By otherwise allowing the parser to proceed as it would have, we avoid the assumptions inherent in other work. We hope this work will provide insights into the remaining errors in parsing, and target efforts to address those errors, creating better syntactic analysis for downstream applications.
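The core mechanism described here, forcing a given dependency to appear while letting the parser decide everything else, can be sketched abstractly; the decoder below is a hypothetical greedy head selector (with no cycle or tree-wellformedness check), not the actual constraint machinery added to MSTParser, ZPar or C&C.

```python
def decode_with_constraints(n_words, score, constraints):
    """Choose a head for every word, forcing any arc given as a constraint.

    `score(head, dep)` is the parser's arc score (head 0 is the artificial
    root) and `constraints` maps a dependent to its required head. Free words
    still receive their argmax head, so the rest of the analysis follows the
    model. Note: no cycle check is done, so this is not a full tree decoder.
    """
    heads = {}
    for dep in range(1, n_words + 1):
        if dep in constraints:
            heads[dep] = constraints[dep]  # enforced gold dependency
        else:
            heads[dep] = max(
                (h for h in range(n_words + 1) if h != dep),
                key=lambda h: score(h, dep),
            )
    return heads

# Toy usage: force word 3 to attach to word 1, leave the rest to the scorer.
print(decode_with_constraints(3, score=lambda h, d: -abs(h - d), constraints={3: 1}))
# {1: 0, 2: 1, 3: 1}
```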
APA, Harvard, Vancouver, ISO, and other styles
41

Flodin, Daniel. "A Comparison Between Packrat Parsing and Conventional Shift-Reduce Parsing on Real-World Grammars and Inputs." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233403.

Full text
Abstract:
Packrat parsing is a top-down, recursive-descent parsing technique that uses backtracking and has a guaranteed linear parse time. Conventional backtracking parsers suffer from exponential parse times in the worst case because they re-evaluate redundant results; packrat parsers avoid this with memoization. However, memoization causes a packrat parser's memory consumption to be linearly proportional to the length of the input string, as opposed to linearly proportional to the maximum recursion depth for conventional parsing techniques. The objective of this thesis is to implement a packrat parser generator and compare it with an existing and well-known parser generator combination, Lex/Yacc, which produces shift-reduce parsers. The comparison consists of pure performance measurements, such as memory consumption and parsing time, as well as a more general comparison between the two parsing techniques. The conclusion drawn from the comparison is that packrat parsing can be a viable option due to its ability to compose modular and extendible grammars more easily than Lex/Yacc. From a performance perspective, however, the Lex/Yacc combination proved superior. In addition, the results indicate that similar performance for a packrat parser is hard to achieve on grammars similar to those used in this thesis.
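The memoization that gives packrat parsing its linear-time guarantee is easy to illustrate with a toy PEG-style recogniser; the grammar below is invented for the example and has nothing to do with the parser generator built in the thesis.

```python
from functools import lru_cache

TEXT = "1+1+1"

@lru_cache(maxsize=None)          # the packrat memo table: (rule, position) -> result
def expr(pos):
    # Expr <- Term '+' Expr / Term   (ordered choice with backtracking)
    end = term(pos)
    if end is not None and end < len(TEXT) and TEXT[end] == "+":
        rest = expr(end + 1)
        if rest is not None:
            return rest
    return term(pos)              # fall back to the second alternative

@lru_cache(maxsize=None)
def term(pos):
    # Term <- [0-9]
    return pos + 1 if pos < len(TEXT) and TEXT[pos].isdigit() else None

print(expr(0) == len(TEXT))       # True: the whole input is recognised
```

Because every (rule, position) pair is computed at most once, the number of memo entries, and hence the memory footprint, grows linearly with the input, which is exactly the trade-off the abstract describes.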
APA, Harvard, Vancouver, ISO, and other styles
42

Gieseking, Kathrin. "Frequenzbasiertes Parsing als Modell menschlicher Syntaxanalyse." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=961123885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Schröder, Ingo. "Natural language parsing with graded constraints." [S.l. : s.n.], 2002. http://www.sub.uni-hamburg.de/disse/754/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ononiwu, Gordon, and Twaha Mlwilo. "Parsing a Portable Stream Programming Language." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-658.

Full text
Abstract:

Portable stream programming language (PSPL) is a language for baseband application programming on reconfigurable architectures. The first step in its development has been completed: a parser has been provided for the front end of the PSPL compiler. The syntax of the language has been fixed to allow for easy parsing. The scanner and the parser were generated using automatic tools (scanner and parser generators), which rely on complex mathematical algorithms. Abstract syntax (data structures that preserve the source program so that program structure is evident) was implemented for the parser using a syntax-separate-from-interpretation style of programming. Tests were carried out to ensure that the correct data structures were generated. The final outcome is a parser that other phases of the compiler can depend on for onward transmission of the source program in an unambiguous manner. The development of subsequent phases of the compiler will form the next logical step in the process of transforming PSPL into a stand-alone language.
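The "abstract syntax" mentioned above, data structures that preserve program structure while carrying no interpretation, can be pictured with a tiny sketch; the node types are invented for illustration and are unrelated to PSPL's actual syntax.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    """A numeric literal node."""
    value: int

@dataclass
class BinOp:
    """A binary operation node; structure only, no evaluation semantics."""
    op: str
    left: "Expr"
    right: "Expr"

Expr = Union[Num, BinOp]

# The parser's only job is to build such trees from the token stream;
# interpretation is deferred to later compiler phases ("syntax separate
# from interpretation"). For instance, "1 + 2 * 3" might become:
tree = BinOp("+", Num(1), BinOp("*", Num(2), Num(3)))
```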

APA, Harvard, Vancouver, ISO, and other styles
45

Nilsson, Jens. "Tree Transformations in Inductive Dependency Parsing." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1205.

Full text
Abstract:

This licentiate thesis deals with automatic syntactic analysis, or parsing, of natural languages. A parser constructs the syntactic analysis, which it learns by looking at correctly analyzed sentences, known as training data. The general topic concerns manipulations of the training data in order to improve the parsing accuracy.

Several studies using constituency-based theories for natural languages in such automatic and data-driven syntactic parsing have shown that training data, annotated according to a linguistic theory, often needs to be adapted in various ways in order to achieve an adequate, automatic analysis. A linguistically sound constituent structure is not necessarily well-suited for learning and parsing using existing data-driven methods. Modifications to the constituency-based trees in the training data, and corresponding modifications to the parser output, have successfully been applied to increase the parser accuracy. The topic of this thesis is to investigate whether similar modifications in the form of tree transformations to training data, annotated with dependency-based structures, can improve accuracy for data-driven dependency parsers. In order to do this, two types of tree transformations are in focus in this thesis.

The first one concerns non-projectivity. The full potential of dependency parsing can only be realized if non-projective constructions are allowed, which pose a problem for projective dependency parsers. On the other hand, non-projective parsers tend, among other things, to be slower. In order to maintain the benefits of projective parsing, a tree transformation technique to recover non-projectivity while using a projective parser is presented here.

The second type of transformation concerns linguistic phenomena that are possible but hard for a parser to learn, given a certain choice of dependency analysis. This study has concentrated on two such phenomena, coordination and verb groups, for which tree transformations are applied in order to improve parsing accuracy, in case the original structure does not coincide with a structure that is easy to learn.

Empirical evaluations are performed using treebank data from various languages and more than one dependency parser. The results show that the benefit of these tree transformations, used in preprocessing and postprocessing, is to a large extent independent of language, treebank and parser.
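To make the first type of transformation concrete: the property at stake, projectivity, is simply the absence of crossing arcs in a dependency tree, which a few lines can check. This sketch only detects the constructions in question and is not the pseudo-projective encoding-and-recovery procedure studied in the thesis.

```python
def is_projective(heads):
    """Return True if no two dependency arcs cross.

    `heads[i]` is the head of word i+1 (words are 1-based, 0 is the root).
    """
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for l1, r1 in arcs:
        for l2, r2 in arcs:
            if l1 < l2 < r1 < r2:   # spans overlap without nesting => crossing arcs
                return False
    return True

print(is_projective([2, 0, 2]))     # True: a simple projective chain
print(is_projective([3, 4, 0, 3]))  # False: the arc 3->1 crosses the arc 4->2
```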

APA, Harvard, Vancouver, ISO, and other styles
46

Skrzypczak, Piotr. "Parallel parsing of context-free grammars." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2958.

Full text
Abstract:
During the last decade, increasing interest in parallel programming can be observed. It is caused by the tendency to develop microprocessors as multicore units that can perform instructions simultaneously. A popular and widely used example of such a platform is the graphics processing unit (GPU). Its ability to perform calculations simultaneously is being investigated as a way of improving the performance of complex algorithms. Therefore, GPUs now have architectures that allow programmers and software developers to use their computational power in the same way as a CPU's. One such architecture is the CUDA platform, developed by nVidia. The aim of this thesis is to implement the parallel CYK algorithm, one of the most popular parsing algorithms, for the CUDA platform, so as to gain a significant speed-up in comparison with the sequential CYK algorithm. The thesis presents a review of existing parallelisations of the CYK algorithm, descriptions of the implemented algorithms (a basic version and a few modifications), and an experimental stage that tests these versions on various inputs in order to establish which version gives the best performance. Three versions of the algorithm are presented, of which one was selected as the best (giving about 10 times better performance on the longest input instances). A limited version of the algorithm is also presented, which gives the best performance (even 100 times better in comparison with the non-limited sequential version) but requires certain conditions to be fulfilled by the grammar. The motivation for the thesis is to use the developed algorithm in GCS.
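For orientation, the sequential CYK recogniser that the GPU versions are measured against fits in a few lines (a toy grammar in Chomsky normal form; the cells on each diagonal of the chart are independent of one another, which is exactly what a CUDA implementation can exploit). This is a generic textbook sketch, not code from the thesis.

```python
from itertools import product

def cyk(words, lexical, binary, start="S"):
    """CYK recognition for a grammar in Chomsky normal form.

    `lexical` maps a terminal to the set of non-terminals deriving it;
    `binary` maps a pair (B, C) to the set of non-terminals A with A -> B C.
    table[i][j] holds every non-terminal spanning words[i..j].
    """
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                      # split point
                for b, c in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary.get((b, c), set())
    return start in table[0][n - 1]

lexical = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP", "V"}}
binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cyk("she eats fish".split(), lexical, binary))   # True
```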
APA, Harvard, Vancouver, ISO, and other styles
47

Mehl, Stephan. "Combining preferences in parsing corpus items." Gerhard-Mercator-Universitaet Duisburg, 2001. http://www.ub.uni-duisburg.de/ETD-db/theses/available/duett-12272001-090738/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Habib, Allen N. "Justification, reliabilism and the parsing problem." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0015/MQ47945.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Matsubara, Shigeki, and Yoshihide Kato. "Incremental Parsing with Monotonic Adjoining Operation." ACL(Association for computational linguistics), 2009. http://aclweb.org/anthology/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ross, Candace Cheronda. "Grounded semantic parsing using captioned videos." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118036.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 45-47).
We develop a semantic parser which is trained in a grounded setting using pairs of videos captioned with sentences. This setting is both data-efficient, requiring little annotation, and far more similar to the experience of children, who observe their environment and listen to speakers. The semantic parser recovers the meaning of English sentences despite not having access to any annotated sentences, and despite the ambiguity inherent in vision, where a sentence may refer to any combination of objects, object properties, relations or actions taken by any agent in a video. We introduce a new corpus for grounded language acquisition. Learning to understand language, i.e. to turn sentences into logical forms, by using captioned video will significantly expand the range of data that parsers can be trained on, lower the effort of training a semantic parser, and ultimately lead to a better understanding of child language acquisition.
by Candace Cheronda Ross.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
