
Dissertations / Theses on the topic 'Parseren'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Parseren.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Willmer, David. "Theatricality, mediation, and public space : the legacy of Parsi theatre in South Asian cultural history /." Online version, 1999. http://bibpurl.oclc.org/web/21701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Preußer, Thomas. "The wisent Parser Generator." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-98686.

Full text
Abstract:
Objective: This document is not an introduction to parser generators; readers should already have a good understanding of them, especially of LALR parser generators. It merely describes the main aspects of the implementation of an LALR parser generator named wisent, and is divided into a concise description of the programming interface and an overview of the data structures and implementation specifics of the wisent parser generator.
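The LR item-set machinery that LALR generators such as wisent compute internally can be illustrated with a minimal sketch of the closure operation; the toy grammar and the (rule index, dot position) item encoding are assumptions of this example, not wisent's actual interface.

```python
# Grammar: rules as (lhs, (rhs symbols...)); nonterminals are uppercase.
GRAMMAR = [
    ("S", ("E",)),
    ("E", ("E", "+", "T")),
    ("E", ("T",)),
    ("T", ("id",)),
]

def closure(items):
    """Expand a set of LR(0) items (rule_index, dot_position):
    whenever the dot stands before a nonterminal, add that
    nonterminal's rules with the dot at position 0."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for rule_idx, dot in list(items):
            _, rhs = GRAMMAR[rule_idx]
            if dot < len(rhs) and rhs[dot].isupper():  # dot before a nonterminal
                for i, (head, _) in enumerate(GRAMMAR):
                    if head == rhs[dot] and (i, 0) not in items:
                        items.add((i, 0))
                        changed = True
    return items

start_state = closure({(0, 0)})  # closure of the item S -> . E
```

An LALR generator repeats this closure/goto construction to build the full automaton, then merges states with identical cores before computing lookaheads.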
APA, Harvard, Vancouver, ISO, and other styles
3

Preußer, Thomas. "The wisent Parser Generator." Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A26195.

Full text
Abstract:
Objective: This document is not an introduction to parser generators; readers should already have a good understanding of them, especially of LALR parser generators. It merely describes the main aspects of the implementation of an LALR parser generator named wisent, and is divided into a concise description of the programming interface and an overview of the data structures and implementation specifics of the wisent parser generator.
APA, Harvard, Vancouver, ISO, and other styles
4

Odegaard, Gunn Pedersen. "Forbikjøringsspor på IC-parsellen Sørli-Brumunddal : Utredning av konsepter." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for bygg, anlegg og transport, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-21767.

Full text
Abstract:
The Intercity development is one of the country's largest railway investments in recent times, and in connection with it the Norwegian National Rail Administration (Jernbaneverket) wishes to have different concepts for the placement of overtaking tracks evaluated. Through a thorough literature search, this thesis makes a general assessment of three concepts and carries out a case study of the Stange overtaking track. The concepts are: an overtaking track at a station; an overtaking track adjacent to a station; and an overtaking track at an arbitrary location along the line. The need for the study arises because no overtaking tracks exist in Norway today, so there are no guidelines on how they should be designed or which requirements must be considered when planning them. It further became clear that Jernbaneverket wants to place the overtaking tracks at the stations, while the municipalities prefer to place them outside the town centres, since those areas are often of great value to the municipality. The thesis is largely based on the decisions in the concept evaluation for Intercity Oslo-Lillehammer (KVU-IC), completed in 2012. The KVU-IC determined that there shall be mixed traffic on the Oslo-Lillehammer line and that the speed on the Sørli-Brumunddal section in question shall be 250 km/h, with overtaking tracks desired every 10 km. In connection with the Intercity development it has been decided that the maximum speed at which trains may pass a platform is 200 km/h (Nordli, 2013); this is taken as a premise in the assessment of the concepts. A literature study was carried out to examine how other countries, in situations both similar to and different from Norway's, solve these challenges, together with a review of various Norwegian reports. Its main focus was capacity challenges, safety, and technical solutions for the overtaking tracks.
Based on the literature search and the KVU-IC decisions, six criteria were defined for evaluating the concepts: capacity and reliability, travel time, safety, land use, costs, and technical design. Since the thesis is based on the KVU-IC, capacity, travel time and safety weighed most heavily in the evaluation. The analysis clearly shows that the at-station concept comes out best. The main reason is that the two other concepts incur a travel-time penalty because trains must pass through the station at reduced speed, and a speed reduction through the station is judged a major weakness when building a new railway for the future. Had the speed been set to 200 km/h, the result would have been different: both capacity and travel time would also have favoured the adjacent-station and arbitrary-placement concepts. This shows how the requirements set for a project, and small adjustments to them, can affect the choice of concept. The Stange overtaking track was designed and two placement concepts were assessed, at the station and adjacent to it; the analysis identified the at-station concept as the best, and because of the travel-time penalty of the adjacent placement it is recommended to place the overtaking track at Stange station. The design work showed that the overtaking tracks will mainly lie within Jernbaneverket's property boundaries and thus take little of the municipality's land. The work concludes that it is difficult to create a general guideline for where overtaking tracks should be placed; every project must be assessed individually, based on the requirements for the line as a whole and local conditions at the site.
Nevertheless, the thesis highlights important elements that should be considered when planning new overtaking tracks.
APA, Harvard, Vancouver, ISO, and other styles
5

Xiao, Wei. "A dependency-based parser evaluation method." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23555.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hiyakumoto, Laurie Satsue 1969. "PoEM : a parser of emotion metaphors." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/61537.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (leaves 77-80). Although metaphor is generally recognized as an integral component of everyday language, very few computational systems capable of understanding metaphoric utterances exist today. This thesis describes one approach to the problem and presents PoEM, a prototype system which recognizes and interprets metaphoric descriptions of emotions and mental states in single-sentence input. Building upon previous work in knowledge-based metaphor comprehension, this research adopts a goal-driven approach which assumes each metaphor is selected by a speaker for its aptness at serving a particular communicative goal. To identify these goals, an empirical analysis of metaphor distribution in song lyrics was performed, and typical communicative intentions and surface patterns were identified for the top five most frequently occurring metaphor groups. These intentions and surface patterns have been implemented as a set of metaphor templates and interpretation rules in PoEM, using the WordNet lexical database for supplemental semantic information. Evaluation of PoEM demonstrates fairly high accuracy but low recall.
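The template-and-rule idea the abstract describes can be sketched minimally: surface patterns for a metaphor group are paired with an interpretation label. The patterns and labels below are illustrative assumptions of this sketch, not PoEM's actual templates, and WordNet lookup is omitted.

```python
import re

# Hypothetical surface templates for emotion metaphors, each mapped to a
# communicative-intention label (both invented for this illustration).
TEMPLATES = [
    (re.compile(r"drowning in (\w+)"), "overwhelmed-by"),
    (re.compile(r"my (\w+) is on fire"), "intense-emotion"),
]

def interpret(sentence):
    """Return (intention label, matched content word) for the first
    template that fires, or None if no template matches."""
    for pattern, label in TEMPLATES:
        m = pattern.search(sentence)
        if m:
            return (label, m.group(1))
    return None
```

A full system would disambiguate the captured word semantically (PoEM uses WordNet for this) rather than returning it raw.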
APA, Harvard, Vancouver, ISO, and other styles
7

Crowfoot, Norman C. "A visual aid for designing regular expression parsers /." Online version of thesis, 1988. http://hdl.handle.net/1850/10446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cerecke, Carl. "Locally least-cost error repair in LR parsers." Thesis, University of Canterbury. Computer Science, 2003. http://hdl.handle.net/10092/5492.

Full text
Abstract:
This thesis presents some methods for improving the efficiency and effectiveness of locally least-cost error repair algorithms for an LR-based parser. Three different algorithms for reducing the search space are described and compared using a collection of 59,643 incorrect Java programs collected from novice programmers. Two of the algorithms prove particularly effective at reducing the search space. Also presented is a more efficient priority queue implementation for storing transformations of the input string. The effect on repairs of different grammars describing the same language is investigated, and a comparison of different methods of assigning costs to edit operations is performed.
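The priority-queue search over transformations of the input string that the abstract mentions can be sketched as best-first search: candidate edits are explored in order of accumulated cost until an acceptable string is found. The balanced-parenthesis "language" and the unit insertion/deletion costs below are illustrative assumptions, not the thesis's Java setting.

```python
import heapq

def balanced(s):
    """Membership test for the toy language of balanced parentheses."""
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def least_cost_repair(s, max_cost=3):
    """Best-first search over edited strings, cheapest first.
    Returns (cost, repaired string), or None if no repair is found
    within max_cost edits."""
    seen = {s}
    queue = [(0, s)]                      # (cost so far, candidate string)
    while queue:
        cost, cur = heapq.heappop(queue)
        if balanced(cur):
            return cost, cur
        if cost == max_cost:
            continue
        candidates = []
        for i in range(len(cur) + 1):     # insertions at every position
            candidates.append(cur[:i] + "(" + cur[i:])
            candidates.append(cur[:i] + ")" + cur[i:])
        for i in range(len(cur)):         # deletions of every symbol
            candidates.append(cur[:i] + cur[i + 1:])
        for cand in candidates:
            if cand not in seen:
                seen.add(cand)
                heapq.heappush(queue, (cost + 1, cand))
    return None
```

The thesis's contribution lies in pruning this search space and in a faster queue implementation; the sketch shows only the unpruned baseline.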
APA, Harvard, Vancouver, ISO, and other styles
9

Gross, Jason S. "An extensible framework for synthesizing efficient, verified parsers." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101581.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (pages 73-75).<br>Parsers have a long history in computer science. This thesis proposes a novel approach to synthesizing efficient, verified parsers by refinement, and presents a demonstration of this approach in the Fiat framework by synthesizing a parser for arithmetic expressions. The benefits of this framework may include more flexibility in the parsers that can be described, more control over the low-level details when necessary for performance, and automatic or mostly automatic correctness proofs.<br>by Jason S. Gross.<br>S.M.
APA, Harvard, Vancouver, ISO, and other styles
10

Marti, Christof. "Automatic contract extraction developing a CIL parser /." Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Professur für Software Engineering /Chair of Software Engineering, 2003. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Alvestad, Gaute Odin, Ole Martin Gausnes, and Ole-Jakob Kråkenes. "Development of a Demand Driven Dom Parser." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9311.

Full text
Abstract:
XML is a tremendously popular markup language for internet applications as well as a storage format. XML documents are often accessed through an API, and perhaps the most important of these is the W3C DOM. The W3C recommendation defines a number of interfaces through which a developer can access and manipulate XML documents, but it does not prescribe the implementation behind those interfaces. A problem with the W3C DOM approach, however, is that documents are typically loaded into memory as a node tree of objects representing the structure of the XML document. This tree is memory-consuming and can take up 4-10 times the document size. Lazy processing has been proposed, building the node tree as new parts of the document are accessed, but once the whole document has been accessed, the overhead compared to traditional parsers, in both memory usage and performance, is high. In this thesis a new approach is introduced. Combining well-known indexing schemes for XML, basic techniques for reducing memory consumption, and principles for memory handling in operating systems, it uses a memory cache repository for DOM nodes together with lazy processing, giving the implementation full control over memory consumption. The proposed prototype is called the Demand Driven Dom Parser, D3P. It removes the least recently used nodes from memory when the cache has exceeded its memory limit, allowing it to process documents with low memory requirements. An advantage of this approach is that the parser can process documents that exceed the size of main memory, which is impossible with traditional approaches. The implementation is evaluated and compared with other implementations, both lazy parsers and traditional parsers that build everything in memory on load.
The proposed implementation performs well when the bottleneck is memory usage, because the user can set the desired amount of memory to be used by the XML node tree. On the other hand, as coverage of the document increases, the time spent processing the node tree grows beyond that of traditional approaches.
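The memory-capped node cache at the heart of the D3P idea, evicting least-recently-used nodes and re-materialising them on demand, can be sketched with an ordered dictionary; the node representation and the `load` callback are assumptions of this illustration, not D3P's actual design.

```python
from collections import OrderedDict

class NodeCache:
    """LRU cache of materialised DOM nodes with a hard capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = OrderedDict()           # node_id -> materialised node

    def get(self, node_id, load):
        """Return the node, re-parsing it via `load` if it was evicted."""
        if node_id in self.nodes:
            self.nodes.move_to_end(node_id)  # mark as most recently used
        else:
            self.nodes[node_id] = load(node_id)
            if len(self.nodes) > self.capacity:
                self.nodes.popitem(last=False)  # evict least recently used
        return self.nodes[node_id]
```

Because eviction only discards cached objects, not the underlying document, the tree can be arbitrarily larger than the cache, which is the property that lets D3P handle documents exceeding main memory.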
APA, Harvard, Vancouver, ISO, and other styles
12

King, Laurie Anne Smith. "Code generation using a backtracking LR parser." W&M ScholarWorks, 1992. https://scholarworks.wm.edu/etd/1539623820.

Full text
Abstract:
Although the parsing phase of the modern compiler has been automated in a machine-independent fashion, the diversity of computer architectures inhibits automating the code generation phase. During code generation, some intermediate representation of a source program is transformed into actual machine instructions. The need for portable compilers has driven research towards the automatic generation of code generators. This research investigates the use of a backtracking LR parser that treats code generation as a series of tree transformations.
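The view of code generation as tree transformation can be sketched minimally: an intermediate-representation tree is walked and tiled into instructions. The IR node shapes and the pseudo-instruction names below are illustrative assumptions, and the backtracking LR machinery the thesis actually uses to drive the matching is not reproduced here.

```python
def emit(tree, out):
    """Recursively transform an expression tree into a list of
    pseudo-instructions; registers are numbered by emission order."""
    op = tree[0]
    if op == "const":                       # leaf: load a constant
        out.append(f"LOADI r{len(out)}, {tree[1]}")
    elif op == "add":                       # interior node: emit children, then op
        emit(tree[1], out)
        emit(tree[2], out)
        out.append(f"ADD r{len(out)}")
    return out

code = emit(("add", ("const", 1), ("const", 2)), [])
```

A retargetable generator replaces this fixed walk with machine-specific tree patterns, which is where parsing techniques (here, a backtracking LR parser over linearised trees) come in.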
APA, Harvard, Vancouver, ISO, and other styles
13

Macias, Benjamin. "An incremental parser for government-binding theory." Thesis, University of Cambridge, 1991. https://www.repository.cam.ac.uk/handle/1810/251511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Medeiros, David P. "ULTRA: Universal Grammar as a Universal Parser." FRONTIERS MEDIA SA, 2018. http://hdl.handle.net/10150/627118.

Full text
Abstract:
A central concern of generative grammar is the relationship between hierarchy and word order, traditionally understood as two dimensions of a single syntactic representation. A related concern is directionality in the grammar. Traditional approaches posit process-neutral grammars, embodying knowledge of language, put to use with infinite facility both for production and comprehension. This has crystallized in the view of Merge as the central property of syntax, perhaps its only novel feature. A growing number of approaches explore grammars with different directionalities, often with more direct connections to performance mechanisms. This paper describes a novel model of universal grammar as a one-directional, universal parser. Mismatch between word order and interpretation order is pervasive in comprehension; in the present model, word order is language-particular and interpretation order (i.e., hierarchy) is universal. These orders are not two dimensions of a unified abstract object (e.g., precedence and dominance in a single tree); rather, both are temporal sequences, and UG is an invariant real-time procedure (based on Knuth's stack-sorting algorithm) transforming word order into hierarchical order. This shift in perspective has several desirable consequences. It collapses linearization, displacement, and composition into a single performance process. The architecture provides a novel source of brackets (labeled unambiguously and without search), which are understood not as part-whole constituency relations, but as storage and retrieval routines in parsing. It also explains why neutral word order within single syntactic cycles avoids 213-like permutations. The model identifies cycles as extended projections of lexical heads, grounding the notion of phase. This is achieved with a universal processor, dispensing with parameters. The empirical focus is word order in noun phrases. 
This domain provides some of the clearest evidence for 213-avoidance as a cross-linguistic word order generalization. Importantly, recursive phrase structure "bottoms out" in noun phrases, which are typically a single cycle (though further cycles may be embedded, e.g., relative clauses). By contrast, a simple transitive clause plausibly involves two cycles (vP and CP), embedding further nominal cycles. In the present theory, recursion is fundamentally distinct from structure-building within a single cycle, and different word order restrictions might emerge in larger domains like clauses.
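The Knuth stack-sorting procedure that the abstract grounds its parsing model in can be sketched in a few lines; the integer encoding of word/interpretation order is an assumption of this illustration. Note that the classical single-stack result is usually stated as avoidance of the pattern 231; the abstract's 213 presumably reflects its own ordering conventions.

```python
def stack_sort(seq):
    """One pass of Knuth's stack sort: push each element, popping to the
    output whenever the stack top is smaller than the incoming element."""
    stack, out = [], []
    for x in seq:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:
        out.append(stack.pop())
    return out

# Permutations fully sorted in one pass are exactly those avoiding a single
# forbidden length-3 pattern; e.g. (2, 3, 1) contains it and is not sorted:
stack_sort([2, 3, 1])
stack_sort([3, 1, 2])
```

In the paper's terms, the input sequence plays the role of language-particular word order and the popped output the universal interpretation order.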
APA, Harvard, Vancouver, ISO, and other styles
15

Szydlowski, Barbara. "Complexity analysis and monadic specification of memoized functional parsers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0009/MQ31006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Holm, Johan, and Mats Gustavsson. "XML Parsers - A comparative study with respect to adaptability." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15698.

Full text
Abstract:
Data migration is common, as information needs to be moved and transformed between services and applications. Performance, in the sense of speed, is important and may have a crucial impact on the handling of data. Information can be sent in varying formats, and XML is one of the more commonly used. The structure of the information sent can change from time to time, and these changes need to be handled; a parser's ability to handle them is described as the property of "adaptability". XML files are transformed using parsing techniques, which take different approaches, for example event-based or memory-based, each with its pros and cons. The aim of this study is to research how three different parsers handle XML documents with varying structures, in terms of performance. The chosen parsing techniques are SAX, DOM and VTD: SAX uses an event-based approach, while DOM and VTD use a memory-based one. The parsers were implemented to extract information from XML documents and add it to an ArrayList. The results of this study show that the parsers differ in performance: DOM is the slowest overall, while SAX and VTD perform roughly equally, although the differences between the parsers depend on what changes are made to the XML document.
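The event-based versus memory-based contrast the study measures can be illustrated with Python's standard-library SAX and DOM parsers performing the same extraction task (the study itself uses Java and an ArrayList); the sample document is an assumption of this sketch.

```python
import xml.sax
from xml.dom.minidom import parseString

XML = "<root><item>a</item><item>b</item></root>"

class ItemHandler(xml.sax.ContentHandler):
    """SAX: react to events as the document streams past,
    never materialising a tree."""
    def __init__(self):
        super().__init__()
        self.items, self.in_item = [], False
    def startElement(self, name, attrs):
        self.in_item = name == "item"
    def characters(self, content):
        if self.in_item:
            self.items.append(content)
    def endElement(self, name):
        self.in_item = False

handler = ItemHandler()
xml.sax.parseString(XML.encode(), handler)

# DOM: materialise the whole tree in memory, then walk it.
dom_items = [n.firstChild.data
             for n in parseString(XML).getElementsByTagName("item")]
```

The SAX handler must be rewritten when the document structure changes (the "adaptability" cost), whereas the DOM query changes more locally but pays the tree-building overhead.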
APA, Harvard, Vancouver, ISO, and other styles
17

Maneck, Susan Stiles. "The death of Ahriman: Culture, identity and theological change among the Parsis of India." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186760.

Full text
Abstract:
This study examines the history of the Parsi community from their arrival in India until the middle of the nineteenth century, giving special emphasis to the impact that other religious and social systems existing in India had on the formation of Parsi identity and also to the continuing influence of communication with co-religionists remaining in Iran. This study argues that Indian Zoroastrians, in attempting to formulate their identity in terms of the religious communities which surrounded them, have completely transformed those things that have generally been regarded as their religion's distinguishing features. The impact of modernity continued the transformations already in process. Hinduism primarily affected Parsi practices and social systems. Like Hindu castes, Parsis refused to interdine or intermarry with those outside their community and prohibited the conversion of outsiders. But caste never defined the Parsis' choice of vocation as it did those of other Indians. This versatility allowed Parsis to involve themselves in all phases of production and distribution and gave them an edge over other merchant groups. The impact of Islam was felt primarily in the area of theology. Parsis utilized Islamic terminologies and came to share Muslim monotheistic presuppositions. Zoroastrians down-played their veneration of the elements and even came to regard Ahriman, once seen as the malignant twin to God Himself, as a lesser being wholly dependent on God's sovereignty. Periods of economic prosperity created conflicts in which laymen sought to undermine priestly authority by appealing to Zoroastrian priests residing in Iran. When Iranian Zoroastrians proved unable to mediate these disputes any longer, the community began to turn towards other outsiders as final authorities, at first Islamic, and later European ones. Zoroastrianism's confrontation with Christianity, which coincided with the introduction of the printing press, brought further changes in Parsi beliefs. 
Parsis embraced Enlightenment thought, utilizing it to defend themselves against missionary attacks. Although rejecting Christian theology, they eventually adopted the methodologies of higher criticism by which Europeans studied Zoroastrianism and largely accepted their findings.
APA, Harvard, Vancouver, ISO, and other styles
18

Barnbrook, Geoff. "The language of definition : a Cobuild sublanguage parser." Thesis, University of Birmingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Randall, Belinda Jane. "Morphology in the lexicon and in the parser." Thesis, Birkbeck (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.411734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kuhlmann, Marco. "Towards a constraint parser for categorial type logics." [S.l. : s.n.], 2004. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11244178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lohnstein, Horst. "Projektion und linking : ein prinzipienbasierter Parser fürs Deutsche /." Tübingen : M. Niemeyer, 1993. http://catalogue.bnf.fr/ark:/12148/cb35599443b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Souter, C. "A corpus-trained parser for systemic-functional syntax." Thesis, University of Leeds, 1996. http://etheses.whiterose.ac.uk/1268/.

Full text
Abstract:
This thesis presents a language engineering approach to the development of a tool for the parsing of relatively unrestricted English text, as found in spoken natural language corpora. Parsing unrestricted English requires large-scale lexical and grammatical resources, and an algorithm for combining the two to assign syntactic structures to utterances of the language. The grammatical theory adopted for this purpose is systemic functional grammar (SFG), despite the fact that it is traditionally used for natural language generation. The parser will use a probabilistic systemic functional syntax (Fawcett 1981, Souter 1990), which was originally employed to hand-parse the Polytechnic of Wales corpus (Fawcett and Perkins 1980, Souter 1989), a 65,000 word transcribed corpus of children's spoken English. Although SFG contains mechanisms for representing semantic as well as syntactic choice in NL generation, the work presented here focuses on the parallel task of obtaining syntactic structures for sentences, and not on retrieving a full semantic interpretation. The syntactic language model can be extracted automatically from the Polytechnic of Wales corpus in a number of formalisms, including 2,800 simple context-free rules (Souter and Atwell 1992). This constitutes a very large formal syntax language, but still contains gaps in its coverage. Some of these are accounted for by a mechanism for expanding the potential for co-ordination and subordination beyond that observed in the corpus. However, at the same time the set of syntax rules can be reduced in size by allowing optionality in the rules. 
Alongside the context-free rules (which capture the largely horizontal relationships between the mother and daughter constituents in a tree), a vertical trigram model is extracted from the corpus, controlling the vertical relationships between possible grandmothers, mothers and daughters in the parse tree, which represent the alternating layers of elements of structure and syntactic units in SFG. Together, these two models constitute a quasi-context-sensitive syntax. A probabilistic lexicon also extracted from the POW corpus proved inadequate for unrestricted English, so two alternative part-of-speech tagging approaches were investigated. Firstly, the CELEX lexical database was used to provide a large-scale word tagging facility. To make the lexical database compatible with the corpus-based grammar, a hand-crafted mapping was applied to the lexicon's theory neutral grammatical description. This transformed the lexical tags into systemic functional grammar labels, providing a harmonised probabilistic lexicon and grammar. Using the CELEX lexicon, the parser has to do the work of lexical disambiguation. This overhead can be removed with the second approach: The Brill tagger trained on the POW corpus can be used to assign unambiguous labels (with over 92% success rate) to the words to be parsed. While tagging errors do compromise the success rate of the parser, these are outweighed by the search time saved by introducing only one tag per word. A probabilistic chart parsing program which integrated the reduced context-free syntax, the vertical trigram model, with either the SFG lexicon or the POW trained Brill tagger was implemented and tested on a sample of the corpus. Without the vertical trigram model and using CELEX lexical look-up, results were extremely poor, with combinatorial explosion in the syntax preventing any analyses being found for sentences longer than five words within a practical time span. 
The seemingly unlimited potential for vertical recursion in a context-free rule model of systemic functional syntax is a severe problem for a standard chart parser. However, with addition of the Brill tagger and vertical trigram model, the performance is markedly improved. The parser achieves a reasonably creditable success rate of 76%, if the criteria for success are liberally set at at least one legitimate SF syntax tree in the first six produced for the given test data. While the resulting parser is not suitable for real-time applications, it demonstrates the potential for the use of corpus-derived probabilistic syntactic data in parsing relatively unrestricted natural language, including utterances with ellipted elements, unfinished constituents, and constituents without a syntactic head. With very large syntax models of this kind, the problem of multiple solutions is common, and the modified chart parser presented here is able to produce correct or nearly correct parses in the first few it finds. Apart from the implementation of a parser for systemic functional syntax, the re-usable method by which the lexical look-up, syntactic and parsing resources were obtained is a significant contribution to the field of computational linguistics.
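The probabilistic chart-parsing idea the thesis builds on can be illustrated with a minimal CKY-style sketch: spans of the input are filled bottom-up, keeping the best-scoring analysis per label. The toy grammar and probabilities are assumptions of this example, not the POW corpus model, and the vertical trigram filter is not reproduced.

```python
# Rules in Chomsky normal form: (lhs, rhs) -> probability.
RULES = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("kim",)): 0.5,
    ("NP", ("sandy",)): 0.5,
    ("VP", ("sleeps",)): 1.0,
}

def cky(words):
    """Return the probability of the best S-rooted parse, or 0.0."""
    n = len(words)
    best = {}                                   # (i, j, label) -> best prob
    for i, w in enumerate(words):               # width-1 spans: lexical rules
        for (lhs, rhs), p in RULES.items():
            if rhs == (w,):
                best[(i, i + 1, lhs)] = max(best.get((i, i + 1, lhs), 0.0), p)
    for span in range(2, n + 1):                # wider spans: binary rules
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, rhs), p in RULES.items():
                    if len(rhs) == 2:
                        left = best.get((i, k, rhs[0]), 0.0)
                        right = best.get((k, j, rhs[1]), 0.0)
                        if left and right:
                            cand = p * left * right
                            if cand > best.get((i, j, lhs), 0.0):
                                best[(i, j, lhs)] = cand
    return best.get((0, n, "S"), 0.0)
```

With 2,800 rules instead of four, the chart fills with competing analyses; the thesis's vertical trigram model prunes grandmother-mother-daughter label sequences that the corpus never attests, which is what tames the combinatorial explosion reported above.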
APA, Harvard, Vancouver, ISO, and other styles
23

Dain, Julia Anne. "Automatic error recovery for LR parsers in theory and practice." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/4162/.

Full text
Abstract:
This thesis argues the need for good syntax error handling schemes in language translation systems such as compilers, and for the automatic incorporation of such schemes into parser-generators. Syntax errors are studied in a theoretical framework, and practical methods for handling syntax errors are presented. The theoretical framework consists of a model for syntax errors based on the concept of a minimum prefix-defined error correction, a sentence obtainable from an erroneous string by performing edit operations at prefix-defined (parser-defined) errors. It is shown that for an arbitrary context-free language, it is undecidable whether a better than arbitrary choice of edit operations can be made at a prefix-defined error. For common programming languages, it is shown that minimum-distance errors and prefix-defined errors do not necessarily coincide, and that there exists an infinite number of programs that differ in a single symbol only; sets of equivalent insertions are exhibited. Two methods for syntax error recovery are presented. The methods are language-independent and suitable for automatic generation. The first method consists of two stages, local repair followed if necessary by phrase-level repair. The second method consists of a single stage in which a locally minimum-distance repair is computed. Both methods are developed for use in the practical LR parser-generator yacc, requiring no additional specifications from the user. A scheme for the automatic generation of diagnostic messages in terms of the source input is presented. Performance of the methods in practice is evaluated using a formal method based on minimum-distance and prefix-defined error correction. The methods compare favourably with existing methods for error recovery.
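The minimum-distance notion underlying this framework is ordinary edit distance: the cheapest sequence of symbol insertions, deletions and replacements turning one string into another. The unit costs and the candidate-correction example below are illustrative assumptions of this sketch.

```python
def edit_distance(a, b):
    """Classic dynamic program over prefixes of a and b,
    keeping only the previous row of the table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # replace or keep
        prev = cur
    return prev[-1]

# A minimum-distance repair picks, among candidate corrections,
# one at minimal edit distance from the erroneous input:
repair = min(["while", "whole"], key=lambda w: edit_distance("whlie", w))
```

The thesis's prefix-defined variant restricts where edits may occur, namely at the point the LR parser first detects the error, which is what makes the repair computable during parsing.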
APA, Harvard, Vancouver, ISO, and other styles
24

Archibald, Jacqueline M. "The use of deterministic parsers on sublanguage for machine translation." Thesis, Aston University, 1992. http://publications.aston.ac.uk/10675/.

Full text
Abstract:
For more than forty years, research has been ongoing into the use of the computer in the processing of natural language. During this period methods have evolved, with various parsing techniques and grammars coming to prominence. Problems still exist, not least in the field of Machine Translation; however, one of the successes in this field is the translation of sublanguage. The present work reports on Deterministic Parsing, a relatively new parsing technique, and its application to the sublanguage of an aircraft maintenance manual for Machine Translation. The aim has been to investigate the practicability of using Deterministic Parsers in the analysis stage of a Machine Translation system. Machine Translation, Sublanguage and Parsing are described in general terms, with a review of Deterministic Parsing systems pertinent to this research presented in detail. The interaction between Machine Translation, Sublanguage and Parsing, including Deterministic Parsing, is also highlighted. Two types of Deterministic Parser have been investigated: a Marcus-type parser, based on the basic design of the original Deterministic Parser (Marcus, 1980), and an LR-type Deterministic Parser for natural language, based on the LR parsing algorithm. In total, four Deterministic Parsers have been built and are described in the thesis. Two of the Deterministic Parsers are prototypes from which the remaining two parsers, to be used on sublanguage, have been developed. This thesis reports the results of parsing by the prototypes, a Marcus-type parser and an LR-type parser which have a similar grammatical and linguistic range to the original Marcus parser. The Marcus-type parser uses a grammar of production rules, whereas the LR-type parser employs a Definite Clause Grammar (DCG).
APA, Harvard, Vancouver, ISO, and other styles
25

Schemainda, Christoph. "Implementierung eines Earley-Parsergenerators." [S.l.] : Universität Stuttgart , Fakultät Informatik, 1998. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kannapinn, Sönke. "Eine Rekonstruktion der LR-Theorie zur Elimination von Redundanz mit Anwendung auf den Bau von ELR-Parsern." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=962360651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Konitzer, Marius [Verfasser], and Eberhard [Gutachter] Bertsch. "Laufzeitanalyse und Optimierung von Parsern für LR-reguläre Grammatiken / Marius Konitzer ; Gutachter: Eberhard Bertsch ; Fakultät für Mathematik." Bochum : Ruhr-Universität Bochum, 2013. http://d-nb.info/1230628592/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Karol, Sven, and Steffen Zschaler. "Providing Mainstream Parser Generators with Modular Language Definition Support." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-79339.

Full text
Abstract:
The composition and reuse of existing textual languages is a frequently recurring problem. One possibility for composing textual languages lies at the level of parser specifications, which are mainly based on context-free grammars and regular expressions. Unfortunately, most mainstream parser generators provide proprietary specification languages and usually do not provide strong abstractions for reuse. Newer forms of parser generators do support modular language development, but they often cannot be easily integrated with existing legacy applications. To support modular language development based on mainstream parser generators, in this paper we apply the Invasive Software Composition (ISC) paradigm to parser specification languages by using our Reuseware framework. Our approach is grounded in a platform-independent metamodel and thus does not rely on a specific parser generator.
APA, Harvard, Vancouver, ISO, and other styles
29

Manasfi, Hassan. "An object-oriented parser generator for LL(1) grammars." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/MQ47849.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Yu, Henry 1961. "HPSIMC: AHPL SIMULATOR IMPLEMENTED IN C LANGUAGE (PARSER, SOFTWARE)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sayeed, Asad B. "Developing a minimalist parser for free word order languages." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/27031.

Full text
Abstract:
We propose a parser for free word order languages, based on ideas from the Minimalist Program. The parser simulates aspects of a human listener who necessarily begins sentence analysis before all the words have become available. We first sketch the problems that free word order languages pose. One such problem is discontinuous noun phrase constituency. Languages like Latin permit verbs, adjectives and so on to split noun phrases. We assume that the human parser assembles syntactic structures in the process of understanding a sentence; what happens to noun phrase fragments that arrive later in the derivation? Those that arrive earlier enter the existing syntactic structures, so they become less accessible. What mechanism best incorporates later fragments without undoing structures already built? We show how difficult it is to make existing frameworks for minimalist parsing work for free word order languages and simulate realistic syntactic conditions. We briefly describe a formalism and a parsing algorithm that elegantly overcome these difficulties, and we illustrate them with detailed Latin examples. Previous formalisms for both minimalist generation and parsing tended to use cancellation of features as the primary mechanism for checking whether syntactic structures are compatible for merging into larger units. This is how words and phrases are marked as compatible and added to a larger structure. Instead, our formalism uses feature sets and unification-based operations in order to allow larger structures to acquire features from the smaller structures within them. They can then expose these features to discontinuous elements that arrive later in the derivation. In addition to the examples we provide for Latin, we provide English examples to demonstrate that this parsing algorithm can also be used with languages that require a more fixed order. After that, we discuss an implementation of this parsing algorithm written in Prolog. 
We then discuss an extension to this formalism that allows it to handle pro-drop languages, and we show how this can be elegantly extended to further enhance the scope of linguistic phenomena this parser can handle beyond pro-drop. Finally, we present a corpus study that justifies some of the limitations of this parser.
APA, Harvard, Vancouver, ISO, and other styles
32

Lopez-Rojas, Edgar Alonso. "OMCCp : A MetaModelica Based Parser Generator Applied to Modelica." Thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68863.

Full text
Abstract:
The OpenModelica Compiler-Compiler parser generator (OMCCp) is an LALR(1) parser generator implemented in the MetaModelica language with parsing tables generated by the tools Flex and GNU Bison. The code generated for the parser is in MetaModelica 2.0 language which is the OpenModelica compiler implementation language and is an extension of the Modelica 3.2 language. OMCCp uses as input an LALR(1) grammar that specifies the Modelica language. The generated Parser can be used inside the OpenModelica Compiler (OMC) as a replacement for the current parser generated by the tool ANTLR from an LL(k) Modelica grammar. This report explains the design and implementation of this novel Lexer and Parser Generator called OMCCp. Modelica and its extension MetaModelica are both languages used in the OpenModelica environment. Modelica is an Object-Oriented Equation-Based language for Modeling and Simulation.
APA, Harvard, Vancouver, ISO, and other styles
33

Peake, Ian David. "Enabling meta-level support for language design and implementation through modular parsers /." [St. Lucia, Qld.], 2000. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16313.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Scarlett, Elizabeth A. "An evaluation of a rule-based parser of English sentences." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0020/MQ58501.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Anderson, Alasdair J. "Analysis of musical structures : an approach utilising monadic parser combinators." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2353/.

Full text
Abstract:
The work of this thesis seeks to further the use of computation in musical analysis. To a lesser extent, it is hoped that it will provide some evidence of a new angle on creating analytic elements through inference, and cast light on some areas where analysis may be used anew. Parsers for musical information are few in number; none has been implemented in a functional language, nor using monadic combination techniques. Few analytic systems are capable of representing semantic ambiguity, or even consider it necessary to do so, and this is even more true of parsing systems. The work presented here provides a system of unique monadic parsers built on combination that is capable of delivering several different types and depths of results. Many computational-analytic systems are based on theories of similarity. The work presented here instead provides for analytic structures to be created through inference, i.e. in the absence of known structures. This is believed to be the first instance of this type of structure generation in the field of music.
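For readers unfamiliar with monadic parser combinators, a minimal sketch helps. The Python fragment below is illustrative only (the thesis's parsers are far richer and target musical structures): a parser is modelled as a function from input to a list of (result, remainder) pairs, so that `choice` naturally returns every reading of an ambiguous input, the property the abstract emphasises.

```python
# Toy monadic parser combinators: a parser maps a string to a list of
# (value, remaining-input) pairs; multiple pairs encode ambiguity.

def unit(value):
    return lambda s: [(value, s)]

def fail(s):
    return []

def bind(p, f):
    # Sequence p, then feed its result to f to obtain the next parser.
    return lambda s: [pair for (v, rest) in p(s) for pair in f(v)(rest)]

def item(s):
    return [(s[0], s[1:])] if s else []

def satisfy(pred):
    return bind(item, lambda c: unit(c) if pred(c) else fail)

def char(c):
    return satisfy(lambda x: x == c)

def choice(p, q):
    # Keep the results of both alternatives rather than committing to one.
    return lambda s: p(s) + q(s)

# 'a' followed by 'b', returning the pair:
ab = bind(char('a'), lambda x: bind(char('b'), lambda y: unit((x, y))))
print(ab("abc"))  # [(('a', 'b'), 'c')]
print(ab("xbc"))  # []
```

In a lazy functional language such as Haskell the same combinators read more naturally with do-notation, which is the setting monadic parsing originated in.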
APA, Harvard, Vancouver, ISO, and other styles
36

Brown, Michelle M. "Hardware study on the H.264/AVC video stream parser /." Online version of thesis, 2008. http://hdl.handle.net/1850/7766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Townsend, Duncan Clarke McIntire. "Using a symbolic language parser to Improve Markov language models." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100621.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 31-32). This thesis presents a hybrid approach to natural language processing that combines an n-gram (Markov) model with a symbolic parser. In concert these two techniques are applied to the problem of sentence simplification. The n-gram system comprises a relational database backend with a frontend application that presents a homogeneous interface for both direct n-gram lookup and Markov approximation. The query language exposed by the frontend also applies lexical information from the START natural language system to allow queries based on part of speech. Using the START natural language system's parser, English sentences are transformed into a collection of structural, syntactic, and lexical statements that are uniquely well-suited to the process of simplification. After reducing the parse of the sentence, the resulting expressions can be processed back into English. These reduced sentences are ranked by likelihood by the n-gram model. By Duncan Clarke McIntire Townsend, M. Eng.
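The n-gram side of such a hybrid can be sketched in a few lines. The fragment below is an invented toy, not the thesis's database-backed system or the START parser: it builds add-alpha-smoothed bigram counts from a tiny corpus and uses them to rank candidate simplifications by likelihood.

```python
# Toy bigram (Markov) language model for ranking sentences by likelihood.
# Corpus and candidate sentences are invented examples.
from collections import Counter
import math

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def log_likelihood(sentence, alpha=1.0):
    """Add-alpha-smoothed log-probability of a whitespace-tokenised sentence."""
    tokens = sentence.split()
    vocab = len(unigrams)
    score = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        score += math.log((bigrams[(prev, cur)] + alpha) /
                          (unigrams[prev] + alpha * vocab))
    return score

# Rank candidate outputs (e.g. reduced sentences) by model likelihood:
candidates = ["the cat sat on the mat", "the mat sat on the cat"]
best = max(candidates, key=log_likelihood)
print(best)  # the cat sat on the mat
```

A production system would back the counts with a database and interpolate higher-order n-grams, but the ranking step is the same comparison of log-likelihoods.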
APA, Harvard, Vancouver, ISO, and other styles
38

Watson, Rebecca Frances. "Optimising the speed and accuracy of a statistical GLR parser." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Pankaj. "The Design and Implementation of a Prolog Parser Using Javacc." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3251/.

Full text
Abstract:
Operatorless Prolog text is LL(1) in nature, and any standard LL parser generator tool can be used to parse it. However, Prolog text that conforms to the ISO Prolog standard allows the definition of dynamic operators. Since Prolog operators can be defined at run time, operator symbols are not present in the grammar rules of the language. Unless the parser generator allows some flexibility in the specification of the grammar rules, it is very difficult to generate a parser for such text. In this thesis we discuss existing parsing methods and modified versions of them for parsing languages with dynamic operator capabilities. Implementation details of a parser using Javacc as a parser generator tool to parse standard Prolog text are provided. The output of the parser is an "Abstract Syntax Tree" that reflects the correct precedence and associativity rules among the various operators (static and dynamic) of the language. Empirical results show that a Prolog parser generated by a parser generator like Javacc is comparable in efficiency to a hand-coded parser.
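The difficulty described here, operators defined at run time, can be illustrated with a precedence-climbing loop over a mutable operator table. This Python sketch is purely illustrative, not the thesis's Javacc parser; note also that ISO Prolog priorities bind tighter at *lower* numbers, whereas the toy below uses the more familiar "higher precedence binds tighter" convention.

```python
# Precedence climbing over a runtime-mutable operator table, in the spirit
# of Prolog's op/3 directives. Operand tokens are treated as atoms.

ops = {}  # operator name -> (precedence, associativity)

def op(name, prec, assoc="left"):
    """Register an operator at run time, as Prolog's op/3 does."""
    ops[name] = (prec, assoc)

def parse(tokens):
    pos = 0
    def expr(min_prec):
        nonlocal pos
        left = tokens[pos]; pos += 1
        while pos < len(tokens) and tokens[pos] in ops:
            name = tokens[pos]
            prec, assoc = ops[name]
            if prec < min_prec:
                break
            pos += 1
            right = expr(prec + 1 if assoc == "left" else prec)
            left = (name, left, right)  # build the syntax-tree node
        return left
    return expr(0)

op("+", 1); op("*", 2)
print(parse(["a", "+", "b", "*", "c"]))  # ('+', 'a', ('*', 'b', 'c'))
op("@", 3)                               # operator added "at run time"
print(parse(["a", "+", "b", "@", "c"]))  # ('+', 'a', ('@', 'b', 'c'))
```

Because the grammar never mentions concrete operator symbols, adding `@` after the parser is built requires no regeneration, which is exactly the flexibility dynamic operators demand of a generated parser.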
APA, Harvard, Vancouver, ISO, and other styles
40

Shi, Lei. "A general purpose semantic parser using FrameNet and WordNet®." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4483/.

Full text
Abstract:
Syntactic parsing is one of the best understood language processing applications. Since language and grammar have been formally defined, it is easy for computers to parse the syntactic structure of natural language text. Does meaning have structure as well? If it has, how can we analyze the structure? Previous systems rely on a one-to-one correspondence between syntactic rules and semantic rules. But such systems can only be applied to limited fragments of English. In this thesis, we propose a general-purpose shallow semantic parser which utilizes a semantic network (WordNet), and a frame dataset (FrameNet). Semantic relations recognized by the parser are based on how human beings represent knowledge of the world. Parsing semantic structure allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing, moving the automation of understanding natural language text to a higher level.
APA, Harvard, Vancouver, ISO, and other styles
41

Simard, Michel. "Integration of finite-state morphological models to top-down syntactic parsers in prolog." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59440.

Full text
Abstract:
Finite-state morphological models are formalisms for describing the set of valid word-forms of a natural language. Being well-suited for computer implementations, they have typically been used for creating systems that efficiently recognize and generate isolated word-forms. DCGs (Definite-clause Grammars) on the other hand, like many other syntactic formalisms closely connected with the programming language Prolog, are less suitable for describing morphological and orthographical phenomena (or phonological phenomena, in speech). This thesis describes a general method for combining the DCG syntactic formalism and a finite-state morphological model within a single Prolog implementation. The resulting system is a natural language processing environment that employs distinct formalisms for the descriptions of the syntax, morphology and orthography. The proposed method can readily be extended to other morphological models and syntactic formalisms (Metamorphosis Grammars, Extraposition Grammars). Modifications are discussed to improve the performance of the system, and a complete example illustrating the application of the proposed method is presented.
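A finite-state morphological model of the kind the thesis integrates can be illustrated with a tiny recognizer. The transition table below is invented for illustration (the thesis works in Prolog with DCGs); the recognizer accepts a word-form iff some segmentation of it traces a path through the transitions.

```python
# Toy finite-state recognizer for English-like verb forms; the transition
# table is an invented example, not a real morphological description.

TRANSITIONS = {
    ("start", "walk"): "stem",
    ("start", "talk"): "stem",
    ("stem", ""): "end",       # bare form
    ("stem", "s"): "end",      # third-person singular
    ("stem", "ed"): "end",     # past
    ("stem", "ing"): "end",    # progressive
}

def recognize(word):
    """Accept word iff some segmentation follows the transition table."""
    def step(state, rest):
        if state == "end":
            return rest == ""
        # Try every transition out of the current state whose segment
        # is a prefix of the remaining input (the empty segment always is).
        return any(step(nxt, rest[len(seg):])
                   for (st, seg), nxt in TRANSITIONS.items()
                   if st == state and rest.startswith(seg))
    return step("start", word)

print(recognize("walked"))   # True
print(recognize("walkeds"))  # False
```

In a combined system such as the one described, the syntactic grammar would call a recognizer like this for each surface word instead of listing every inflected form in the lexicon.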
APA, Harvard, Vancouver, ISO, and other styles
42

Aufrant, Lauriane. "Training parsers for low-resourced languages : improving cross-lingual transfer with monolingual knowledge." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS089/document.

Full text
Abstract:
As a result of the recent blossoming of Machine Learning techniques, the Natural Language Processing field faces an increasingly thorny bottleneck: the most efficient algorithms rely entirely on the availability of large training data. These technological advances consequently remain unavailable for the 7,000 languages in the world, out of which most are low-resourced. One way to bypass this limitation is cross-lingual transfer, whereby resources available in another (source) language are leveraged to help build accurate systems in the desired (target) language. However, despite promising results in research settings, standard transfer techniques lack the flexibility regarding cross-lingual resources needed to be fully usable in real-world scenarios: exploiting very sparse resources, or assorted arrays of resources. This limitation strongly diminishes the applicability of the approach. This thesis consequently proposes to combine multiple sources and resources for transfer, with an emphasis on selectivity: can we estimate which resource of which language is useful for which input? This strategy is put into practice in the framework of transition-based dependency parsing. To this end, a new transfer framework with a cascading architecture is designed: it enables the desired combination while ensuring better targeted exploitation of each resource, down to the level of the word. 
Empirical evaluation indeed dampens enthusiasm for the purely cross-lingual approach (it generally remains preferable to annotate just a few target sentences) but also highlights its complementarity with other approaches. Several metrics are developed to characterize precisely cross-lingual similarities, syntactic idiosyncrasies, and the added value of cross-lingual information compared to monolingual training. The substantial benefits of typological knowledge are also explored. The whole study relies on a series of technical improvements to the parsing framework: this work includes the release of new open-source software, PanParser, which revisits the so-called dynamic oracles to extend their use cases. Several purely monolingual contributions complete this work, including an exploration of monolingual cascading, which can for instance treat all the easy dependencies first and only then the hard ones.
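The transition-based dependency parsing this work builds on can be shown in miniature. The sketch below is generic arc-standard parsing, not PanParser, and the example sentence and gold heads are invented: instead of a learned classifier, a gold-head lookup chooses between SHIFT, LEFT-ARC and RIGHT-ARC to derive the arcs of a projective tree.

```python
# Arc-standard transition-based dependency parsing in miniature, driven by
# a gold-head oracle (works for projective trees; illustrative only).

def arc_standard(words, head_of):
    """Derive (dependent, head) arcs; token 0 is the artificial root."""
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []
    def has_pending_children(node):
        attached = {d for d, _ in arcs}
        return any(head_of[i] == node and i not in attached
                   for i in range(1, len(words) + 1))
    while buffer or len(stack) > 1:
        if len(stack) >= 2:
            top, below = stack[-1], stack[-2]
            # RIGHT-ARC: attach top under below, once top's children are done.
            if head_of.get(top) == below and not has_pending_children(top):
                arcs.append((top, below)); stack.pop()
                continue
            # LEFT-ARC: attach below under top (never the root).
            if below != 0 and head_of.get(below) == top:
                arcs.append((below, top)); stack.pop(-2)
                continue
        stack.append(buffer.pop(0))  # SHIFT
    return arcs

# "the cat sleeps": the <- cat, cat <- sleeps, sleeps <- root (0)
print(arc_standard(["the", "cat", "sleeps"], {1: 2, 2: 3, 3: 0}))
# [(1, 2), (2, 3), (3, 0)]
```

In a trained parser the gold-head lookup is replaced by a classifier scoring the three transitions, and dynamic oracles (as revisited in PanParser) define the optimal transition even from states where earlier errors were made.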
APA, Harvard, Vancouver, ISO, and other styles
43

Arrieta, Aitor. "FB-Environment in Wise-Shop Floor : Algorithm parser and code generation." Thesis, Högskolan i Skövde, Forskningscentrum för Virtuella system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-6594.

Full text
Abstract:
IEC (International Electrotechnical Commission) is the authority that publishes international standards in the fields of electrical and electronics engineering. In the area of manufacturing, a new standard has been demanded to better fulfil dynamic requirements. IEC 61499, issued by the IEC, offers interoperability, portability, configurability and distributed control applications for manufacturing processes. However, this standard is not a replacement for IEC 61131-3, one of the most used standards in industry; instead, it is a complement to it. The basic software units of IEC 61499 are named Function Blocks (FBs), which can be described as blocks that encapsulate functionality. By combining FBs together, it is possible to solve complex problems. The objective of this project (in close cooperation with another project) is to develop a software environment in the Java language. It follows the requirements of IEC 61499 and implements a Function Block designer and a runtime execution environment as part of an existing Wise-ShopFloor framework. The scope of this project covers: (1) FB algorithm editor: each FB has one or more algorithms, which can be defined in the algorithm editor using IEC 61131-3 or Java. (2) FB serialization: opening and saving the configuration of FBs in a Java Class file is one of the tasks of this project. As soon as the configuration is saved, the Java code of the FB can be generated; Java code is generated because compiled Java allows execution of the FB. Saving in a Java Class file permits portability, i.e. the saved configuration can be opened on any JVM system, and vice versa. (3) Case study: a simulation of an assembly station using an ABB IRB 140 robot is studied and implemented using the runtime simulator of the Java platform, for which some basic FBs have also been created in a library. 
This project also includes: (1) implementation of the user interface and (2) FB serialization in XML. It is anticipated that the developed environment will be able to save and open FB configurations either in XML or in Java Class files, following the specification of IEC 61499. This will allow portability and reusability. Because of the portability, the so-designed FBs can be validated using another FB environment such as FBDK (Function Block Development Kit).
APA, Harvard, Vancouver, ISO, and other styles
44

Kapfer, Jörg [Verfasser], and Roland [Akademischer Betreuer] Hausser. "Inkrementelles und oberflächenkompositionales Parsen von Koordinationsellipsen / Jörg Kapfer. Betreuer: Roland Hausser." Erlangen : Universitätsbibliothek der Universität Erlangen-Nürnberg, 2011. http://d-nb.info/1015474454/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Bagherli, J. "High level techniques and software tools for custom VLSI parser design." Thesis, University of Kent, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bartolini, Filippo. "Analisi e implementazione di un parser neurale per la lingua italiana." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22898/.

Full text
Abstract:
This thesis focuses on the task of dependency parsing for the Italian language through the use of UmBERTo, a recent contextual word embedding derived from BERT. First, the results obtained by applying UmBERTo's linguistic representations alone are described and analysed. Next, the benefits obtainable from integrating UmBERTo into the recent PaT (Parsing As Tagging) parser, which treats dependency parsing as a sequence-tagging problem, were tested. Finally, taking inspiration from the state-of-the-art parser for Italian, some modifications to the PaT parser integrated with UmBERTo were experimented with, obtaining some, albeit modest, improvements. All experiments used a general-domain Italian corpus from the Universal Dependencies.
APA, Harvard, Vancouver, ISO, and other styles
47

Bertero, González Aldo Sebastián. "Incorporación de un Parser XML - XMI para modelamiento de procesos computacionales." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/111950.

Full text
Abstract:
Ingeniero Civil en Computación. The ADAPTE project aims to optimise the business processes of small and medium-sized enterprises. To achieve this, it is necessary to know both the process itself and the context of each company. The Eclipse Process Framework Composer (EPF) tool is used to define software development processes in a standard way and provides all the functionality needed for this. Eclipse Modeling Tools (EMT), on the other hand, is used to model the context; the transformations needed to adapt the (generic) process to a particular context are also executed in this tool. The problem with the above is that EPF delivers the process in an XML document using the UMA metamodel, whereas EMT only accepts a file with the XMI extension, which moreover uses a different metamodel called SPEM. This thesis proposes an application to convert XML files with the UMA metamodel into XMI files with the SPEM metamodel, and vice versa. Two solutions are proposed. The first is a Java application that performs the conversion in both directions, using object-oriented programming together with libraries for handling documents with XML structure. The second uses TCS to define the concrete syntaxes of both metamodels and then ATL to perform the transformation between those languages. Three experiments were run per solution, with three documents representing processes defined by real companies. For the first solution, the XML document is fed in and the application is run to obtain the XMI document, which is opened in EMT to verify its correctness; the conversion back to XML is then performed, opening the resulting document with EPF to repeat the check. 
For the second solution, the document is opened with TCS, expecting the application to recognise the valid syntaxes. The results show the Java solution to be the more complete of the two: the XML-to-XMI conversion succeeded but lost a significant amount of information, which was subsequently recovered during the XMI-to-XML conversion from the original XML file (checking that those elements had not originally been incorporated into the XMI), yielding a process whose information is consistent with the original (not necessarily identical, because the XMI is adapted to the context) and that displays all the information originally entered by the user. It is concluded that the objectives originally proposed were fully met: although the TCS solution did not satisfy part of them, the Java solution did, resolving the compatibility problem between applications and, in turn, refining the solution provided by ADAPTE to the problem of optimising organisational processes.
APA, Harvard, Vancouver, ISO, and other styles
48

Pasalic, Senad, and Anders Olsson. "Utveckling av en parser för integrering av verktyg för testning och kravhantering." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-98699.

Full text
Abstract:
This is a Bachelor of Engineering thesis carried out at Enea/Xdin. The work can be seen as a pre-study and aims to describe how different development tools can be integrated with one another. Since tool integration simplifies the use of tool chains, this is a current and interesting topic for today's developers. The thesis is worth 15 ECTS credits and ran for 10 weeks. The work is part of the ARTEMIS/iFEST research project, which aims to make product development more efficient by creating a standardised tool chain, saving both time and money. The goal of this project is to demonstrate a working chain of tools. As a concrete example, Enea's existing test tool Farkle and an arbitrary second tool are used. In general, tools speak different languages, which creates problems. A translator, a so-called parser, is implemented between the tools to establish communication. A test case whose task is to "walk" through the entire chain and affect a target under test is used to demonstrate the principle of tool integration. The thesis resulted in a successful tool integration, and it is now possible to use any tool together with the test tool Farkle.
APA, Harvard, Vancouver, ISO, and other styles
49

Pulli, V. (Ville). "Design of a process parser for a system assisting people with dementia." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201512082272.

Full text
Abstract:
Dementia is a growing problem in today’s society. Life expectancies increase, while the population is skewing towards older age groups due to reduced birth rates. Dementia resulting from diseases such as Alzheimer’s disease becomes much more likely as age increases. These factors combined means that the amount of people with dementia is increasing constantly. This thesis looks at the automated generation of process models, which are detailed instruction sets for cooking tasks sourced from freely available cooking recipes. The process models would be used in an ambient assisted living environment to aid the independent functioning of people with dementia. This would also lessen the workload of caregivers, while also providing some mental and physical exercise for the dementia sufferer. A process model was created by hand as a simulation of the automated process. This process model was then validated and used as the instruction set for user tests with dementia patients in a cooking task. The generated instructions were relayed with verbal prompts in a controlled environment. Tests were recorded and transcribed. Test results were used to further assess the validity of the simulated process model conversion. Results were used in determining issues that should be acknowledged during the development of an automated process model generation system. A basic architecture for the system is discussed.
APA, Harvard, Vancouver, ISO, and other styles
50

RODRIGUES, C. A. S. "Descrição de Formalização de Verbos de Ação-Processo para Elaboração de Parser." Universidade Federal do Espírito Santo, 2009. http://repositorio.ufes.br/handle/10/3706.

Full text
Abstract:
Chafe (1970) developed a research programme that gave rise to six semantic subcategories for classifying verbs, among them action-process verbs. However, the literature on the subject provides a rather concise theoretical-methodological framework, both with respect to the semantic properties of the subcategory in question and with respect to its syntactic properties. In order to expand the amount of syntactic-semantic information on action-process verbs, the present research followed a programme aimed at identifying verbal valencies, as proposed by Borba (1996) and Welker (2005). Four types of verbal valency were thus investigated: logical (Tesnière, 1959; Helbig and Schenkel, 1975); syntactic (Borba, 1996; Ignácio, 2001); semantic; and syntactic-semantic (Fillmore, 1968; Travaglia, 1985; Dik, 1989; Dowty, 1989). At the end of this stage of linguistic investigation, it was possible to confirm the heterogeneity of the subcategory of action-process verbs, which could be divided into nine subgroups. In addition, it was possible to make explicit both the argument structures belonging to the subgroups and the elements that represent the actants composing such syntactic configurations. Finally, the linguistic knowledge obtained in this research made it possible to build three linguistic resources that provide a basis for the construction of computational resources for natural language processing: (i) a lexicon-grammar table containing the morphosyntactic-semantic properties of the verbs and their actants; (ii) a lexical database with the morphosyntactic-semantic properties of the analysed verbs; and (iii) the argument structures identified in each subgroup.
APA, Harvard, Vancouver, ISO, and other styles