Theses on the topic "Generative lexicon"

Consult the 48 best theses for your research on the topic "Generative lexicon".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Thalji, Abdullah Abdel-Majeed. "Systematic polysemy in Arabic : a generative lexicon-based account". Thesis, University of Essex, 2018. http://repository.essex.ac.uk/22121/.

Full text
Abstract
This thesis is the first of its kind to study the (linguistic) phenomenon of systematic polysemy and examine its pervasiveness in Arabic (both Modern Standard Arabic (MSA) and Jordanian Arabic (JA)). Systematic polysemy in this study is defined as the case where a lexeme has more than one distinct sense and the relationship between the senses is predictable by rules in language. In the narrow sense, however, this phenomenon refers only to the productive type of regular polysemy, which is defined vis-à-vis Apresjan’s (1974) notion of totality of scope (e.g. the content/container type). The integral function of this research is to (i) identify the major (as well as the minor) patterns of regular polysemy in Arabic in the major lexical categories of nouns, verbs, and adjectives; (ii) determine the extent to which these patterns converge with or diverge from the already explored patterns, mainly in English; and (iii) test the applicability of Pustejovsky’s (1995) Generative Lexicon (the GL) in accounting for the various Arabic data on polysemy. The study found that nearly every regular polysemous pattern observed in English was also present in Arabic, albeit with a few attested differences. For example, the regular pattern of the mass-to-count alternation (e.g. coffee—a coffee) is very rarely encountered in Arabic. In addition, the animal/meat alternation in English behaves rather differently in Arabic in the way the language elicits a non-countable (mass) meaning from a countable counterpart. With respect to lexicography, this study adds to the already studied patterns in Atkins and Rundell (2008). The dissertation also raises additional questions for the GL framework with respect to property nominalizations, nominalized adjectives, and generic collective nouns.
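The content/container and animal/meat alternations mentioned above are the kind of systematic polysemy that Pustejovsky's framework handles with complex (dot) types and qualia structure. As a purely illustrative sketch, not drawn from the thesis, the following Python fragment shows one way such a lexical entry and facet selection could be encoded; all type names, qualia values and predicates are invented for the example.

```python
# Minimal, illustrative sketch of a Generative Lexicon-style entry (not from the thesis).
# Type names, qualia labels and the selection rule are invented for exposition.

from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    lemma: str
    dot_types: tuple            # facets of a complex (dot) type, e.g. (phys_obj, information)
    qualia: dict = field(default_factory=dict)

BOOK = LexicalEntry(
    lemma="book",
    dot_types=("phys_obj", "information"),
    qualia={
        "FORMAL": "phys_obj",        # what kind of thing it is
        "CONSTITUTIVE": "pages",     # what it is made of
        "TELIC": "read",             # its purpose
        "AGENTIVE": "write",         # how it comes into being
    },
)

# Each predicate declares the facet it selects for; composition picks out that facet.
PREDICATE_SELECTS = {"burn": "phys_obj", "believe": "information"}

def compose(predicate: str, entry: LexicalEntry) -> str:
    wanted = PREDICATE_SELECTS[predicate]
    if wanted in entry.dot_types:
        return f"{predicate}({entry.lemma}) selects the '{wanted}' facet"
    return f"type mismatch: {predicate} wants {wanted}"

if __name__ == "__main__":
    print(compose("burn", BOOK))      # physical facet
    print(compose("believe", BOOK))   # informational facet
```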
2

Martinez, Jorge Matadamas. "AXEL : a framework to deal with ambiguity in three-noun compounds". Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4774.

Full text
Abstract
Cognitive Linguistics has been widely used to deal with the ambiguity generated by words in combination. Although this domain offers many solutions to address this challenge, not all of them can be implemented in a computational environment. The Dynamic Construal of Meaning framework is argued to have this ability because it describes an intrinsic degree of association of meanings, which, in turn, can be translated into computational programs. A limitation of a computational approach, however, has been the lack of syntactic parameters. This research argues that this limitation could be overcome with the aid of the Generative Lexicon Theory (GLT). Specifically, this dissertation formulated possible means to marry the GLT and Cognitive Linguistics in a novel rapprochement between the two. This bond between opposing theories provided the means to design a computational template (the AXEL System) by realising syntax and semantics at the software level. An instance of the AXEL system was created using a Design Research approach. Planned iterations were involved in the development to improve artefact performance. Such iterations boosted performance in accounting for the degree of association of meanings in three-noun compounds. This dissertation delivered three major contributions on the brink of a so-called turning point in Computational Linguistics (CL). First, the AXEL system was used to disclose hidden lexical patterns of ambiguity. These patterns are difficult, if not impossible, to identify without automatic techniques. This research claimed that these patterns can help linguists review lexical knowledge from a software-based viewpoint. Following linguistic awareness, the second result advocated the adoption of improved resources by decreasing the electronic storage space of Sense Enumerative Lexicons (SELs). The AXEL system generated interpretations "at the moment of use", optimising the space needed for lexical storage. Finally, this research introduced a subsystem of metrics to characterise the ambiguous degree of association of three-noun compounds, enabling ranking methods. Weighting methods delivered mechanisms for classifying meanings towards Word Sense Disambiguation (WSD). Overall, these results attempted to tackle difficulties in understanding studies of Lexical Semantics via software tools.
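As a deliberately simplified illustration of ranking readings of a three-noun compound by degrees of association (this is not the AXEL system, whose degrees of association come from GL types and the Dynamic Construal of Meaning), a classical baseline compares pairwise association scores to choose between left and right bracketing; the scores below are invented.

```python
# Toy bracketing of a three-noun compound by pairwise association scores.
# The scores are invented; AXEL derives degrees of association from GL-style
# types and Dynamic Construal of Meaning, not from this heuristic.

# association(w1, w2): higher means the pair forms a tighter unit.
assoc = {
    ("computer", "science"): 8.2,
    ("science", "department"): 5.1,
    ("computer", "department"): 1.3,
}

def bracket(n1, n2, n3):
    left = assoc.get((n1, n2), 0.0)    # [[n1 n2] n3]
    right = assoc.get((n2, n3), 0.0)   # [n1 [n2 n3]]
    if left >= right:
        return f"[[{n1} {n2}] {n3}]", left - right
    return f"[{n1} [{n2} {n3}]]", right - left

reading, margin = bracket("computer", "science", "department")
print(reading, f"(margin {margin:.1f})")   # [[computer science] department]
```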
3

Romeo, Lauren Michele. "The Structure of the lexicon in the task of the automatic acquisition of lexical information". Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/325420.

Full text
Abstract
Lexical semantic class information for nouns is critical for a broad variety of Natural Language Processing (NLP) tasks including, but not limited to, machine translation, discrimination of referents in tasks such as event detection and tracking, question answering, named entity recognition and classification, automatic construction and extension of ontologies, textual inference, etc. One approach to solve the costly and time-consuming manual construction and maintenance of large-coverage lexica to feed NLP systems is the Automatic Acquisition of Lexical Information, which involves the induction of a semantic class related to a particular word from distributional data gathered within a corpus. This is precisely why current research on methods for the automatic production of high-quality, information-rich, class-annotated lexica, such as the work presented here, is expected to have a high impact on the performance of most NLP applications. In this thesis, we address the automatic acquisition of lexical information as a classification problem. For this reason, we adopt machine learning methods to generate a model representing vectorial distributional data which, grounded on known examples, allows for predictions about other, unknown words. The main research questions we investigate in this thesis are: (i) whether corpus data provides sufficient distributional information to build efficient word representations that result in accurate and robust classification decisions and (ii) whether automatic acquisition can also handle polysemous nouns. To tackle these problems, we conducted a number of empirical validations on English nouns. Our results confirmed that the distributional information obtained from corpus data is indeed sufficient to automatically acquire lexical semantic classes, demonstrated by an average overall F1-Score of almost 0.80 using diverse count-context models and on different-sized corpus data. Nonetheless, both the State of the Art and the experiments we conducted highlighted a number of challenges of this type of model, such as reducing vector sparsity and accounting for nominal polysemy in distributional word representations. In this context, Word Embedding (WE) models maintain the “semantics” underlying the occurrences of a noun in corpus data by mapping it to a feature vector. With this choice, we were able to overcome the sparse data problem, demonstrated by an average overall F1-Score of 0.91 for single-sense lexical semantic noun classes, through a combination of reduced dimensionality and “real” numbers. In addition, the WE representations obtained a higher performance in handling the asymmetrical occurrences of each sense of regular polysemous complex-type nouns in corpus data. As a result, we were able to directly classify such nouns into their own lexical-semantic class with an average overall F1-Score of 0.85. The main contribution of this dissertation consists of an empirical validation of different distributional representations used for nominal lexical semantic classification along with a subsequent expansion of previous work, which results in novel lexical resources and data sets that have been made freely available for download and use.
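A rough sketch of the classification setup described in the abstract, not the author's actual pipeline, data or features: train a linear classifier on distributional vectors and predict the semantic class of an unseen noun. The toy vectors and class labels are fabricated for the example.

```python
# Illustrative sketch of treating lexical acquisition as supervised classification
# over distributional vectors. Toy vectors and labels are invented; a real setup
# would use corpus-derived count-context models or word embeddings.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Pretend 5-dimensional "embeddings" for a handful of nouns.
vectors = {
    "dog":     [0.9, 0.1, 0.0, 0.2, 0.1],
    "cat":     [0.8, 0.2, 0.1, 0.1, 0.0],
    "sparrow": [0.7, 0.3, 0.0, 0.2, 0.2],
    "table":   [0.1, 0.9, 0.8, 0.0, 0.1],
    "chair":   [0.0, 0.8, 0.9, 0.1, 0.0],
    "lamp":    [0.2, 0.7, 0.8, 0.1, 0.1],
}
labels = {
    "dog": "ANIMAL", "cat": "ANIMAL", "sparrow": "ANIMAL",
    "table": "ARTIFACT", "chair": "ARTIFACT", "lamp": "ARTIFACT",
}

X = np.array([vectors[w] for w in vectors])
y = np.array([labels[w] for w in vectors])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=3, scoring="f1_macro")
print("macro-F1 per fold:", scores)

# Classify an unseen noun by looking up (or computing) its vector.
clf.fit(X, y)
print(clf.predict([[0.85, 0.15, 0.05, 0.2, 0.1]]))  # expected: ANIMAL
```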
4

Marruche, Vanessa de Sales. "Uma análise do verbo poder do português brasileiro à luz da HPSG e do léxico gerativo". Universidade Federal do Amazonas, 2012. http://tede.ufam.edu.br/handle/tede/2375.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This study presents a syntactic and semantic analysis of the verb poder in Brazilian Portuguese. To achieve this goal, we started with a literature review comprising works dedicated to the study of auxiliarity and modality, in order to determine what these issues imply and what is usually taken into account when classifying the verb under investigation as an auxiliary and/or modal verb. As foundations of this study, we used two theories, namely HPSG (Head-Driven Phrase Structure Grammar, Gramática de Estruturas Sintagmáticas Orientadas pelo Núcleo), a model of surface-oriented generative grammar consisting of a phonological, a syntactic and a semantic component, and the GL (The Generative Lexicon, O Léxico Gerativo), a lexicalist model of semantic interpretation of natural language, which is proposed to deal with problems such as compositionality, semantic creativity, and logical polysemy. Because these models, as originally proposed, are unable to handle the verb poder of Brazilian Portuguese, it was necessary to use the GL to make some modifications to HPSG, in order to semantically enrich this model of grammar so that it can cope with the logical polysemy of the verb poder, its behavior as a raising and a control verb, and the saturation of its internal argument, as well as to identify when it is an auxiliary verb. The analysis showed that: (a) poder has four meanings inherent to it, namely CAPACITY, ABILITY, POSSIBILITY and PERMISSION; (b) to saturate the internal argument of poder, the phrase that is a candidate to saturate that argument must be of type [proposition] and the core of that phrase must be of type [event]; in case those types are not identical, type coercion is applied in order to recover the type requested by the verb; (c) poder is a raising verb when it means POSSIBILITY, in which case it selects no external argument, that is, it accepts as its subject whatever the subject of its VP-complement is; (d) poder is a control verb when it means CAPACITY, ABILITY and/or PERMISSION, and in this case it requires that the saturator of its internal argument be of type [entity] when poder means CAPACITY, or of type [animal] when it means ABILITY and/or PERMISSION; (e) poder is an auxiliary verb only when it is a raising verb, because only in this situation does it impose no selectional restrictions on the external argument; and (f) poder is considered a modal verb because it can express an epistemic notion (possibility) and at least three non-epistemic notions of modality (capacity, ability and permission).
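The typing facts reported in (a)-(f) can be pictured with a small illustrative check, which is not the dissertation's HPSG/GL formalization; the toy type hierarchy, sense inventory and coercion table are invented for the example.

```python
# Illustrative check of the selectional facts reported for "poder" above.
# The type hierarchy, the sense inventory and the coercion table are
# simplified inventions, not the dissertation's HPSG/GL analysis.

SUBTYPES = {"human": "animal", "animal": "entity", "event": "proposition"}

def is_a(t, target):
    while t is not None:
        if t == target:
            return True
        t = SUBTYPES.get(t)
    return False

# Subject (controller) requirement per sense; None marks the raising use,
# which imposes no restriction of its own.
PODER_SENSES = {
    "POSSIBILITY": None,
    "CAPACITY": "entity",
    "ABILITY": "animal",
    "PERMISSION": "animal",
}

# Type coercion: a non-event complement mapped to an associated event reading.
COERCIONS = {"entity": "event"}

def licenses(sense, subject_type, complement_type):
    # Internal argument: a proposition headed by an event (coerce if possible).
    comp = complement_type if is_a(complement_type, "event") else COERCIONS.get(complement_type)
    if comp is None:
        return False
    requirement = PODER_SENSES[sense]
    return requirement is None or is_a(subject_type, requirement)

print(licenses("ABILITY", "human", "event"))       # True: animal-type subject
print(licenses("ABILITY", "entity", "event"))      # False: needs an animal
print(licenses("POSSIBILITY", "entity", "event"))  # True: raising, no restriction
```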
5

Mangcunyana, Mteteleli Nelson. "Uhlalutyo lwesemantiki yelekhisikoni yesenzi sentshukumo u-hamba kwisiXhosa". Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/1684.

Full text
Abstract
Thesis (MA (African Languages))--University of Stellenbosch, 2007.
This study explores the lexical semantic analysis of the motion verb -hamba in isiXhosa. In chapter 1 I state the aim of the study. I discuss properties related to the lexical semantic analysis of the verb -hamba as well as Pustejovsky's theory of the Generative Lexicon. The theoretical framework and the organization of the study are also discussed in this chapter. Chapter 2 addresses in more detail the type system for semantics. A generative theory of the lexicon includes multiple levels of representation for the different types of lexical information needed. These levels include Argument Structure, Event Structure, Qualia Structure and Lexical Inheritance Structure. This chapter also gives a more detailed account of the structure of the qualia and the role they play in distributing the functional behavior of words and phrases in composition. In chapter 3 I examine the lexical semantic analysis of the verb -hamba to account for the range of selectional properties of the NP subject argument of the verb -hamba and the various interpretations that arise in composition with its complement arguments. The polysemous behavior of the verb -hamba is examined in sentence alternation constructions with respect to the properties of the event structure. I also investigate the lexical representation of the verb -hamba in different sentences in terms of argument structure and event structure. Chapter 4 is the conclusion, summarizing the findings of all the previous chapters of this study on the lexical semantic analysis of the motion verb -hamba in isiXhosa. It is followed by word lists that give the meanings of words in the context in which they are used.
6

Dias, Márcio de Souza. "Análise de nomes da química Orgânica à luz da teoria do Léxico Gerativo - da análise sintático-semântica à geração das estruturas químicas através dos combinadores de Parser". Universidade Federal de Uberlândia, 2006. https://repositorio.ufu.br/handle/123456789/12559.

Full text
Abstract
This work proposes an automatic system of analysis for Organic Chemistry compound names, aiming to generate drawings of their chemical structures. In order to accomplish this, the system receives an organic compound name, analyzes it syntactically and semantically and, if it represents a correct chemical compound, generates a visual output for the corresponding structure. An advance of the system over other systems that deal with the same problem is its ability to analyse both compound names that satisfy the current official nomenclature constraints and those that, although they do not respect them, represent correct organic compounds (in this case, the system will have solved a nomenclature ambiguity problem). The syntactic and semantic analyses are guided by the types of the components of the chemical names, which motivated an implementation that fits the Generative Lexicon Theory (GLT) formalism. Furthermore, the type-guided analyses justified the choice of parser combinators and the Clean functional programming language as adequate and efficient implementation tools. The implemented system is a valuable tool as an automatic Organic Chemistry instructor.
Master's degree in Computer Science
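To give a flavour of the parser-combinator style of analysis the abstract refers to, here is a minimal sketch in Python rather than Clean, covering only a toy fragment of alkane names; the grammar and component inventory are invented and bear no relation to the system's actual coverage of IUPAC nomenclature.

```python
# Minimal parser-combinator sketch (Python, not Clean) parsing a tiny fragment of
# alkane nomenclature, to illustrate "analysis guided by name components".
# The grammar covers only a few toy cases, not real IUPAC rules.

def token(t):
    def parse(s):
        return (t, s[len(t):]) if s.startswith(t) else None
    return parse

def alt(*parsers):
    def parse(s):
        for p in parsers:
            r = p(s)
            if r is not None:
                return r
        return None
    return parse

def seq(*parsers):
    def parse(s):
        out = []
        for p in parsers:
            r = p(s)
            if r is None:
                return None
            value, s = r
            out.append(value)
        return out, s
    return parse

ROOTS = {"meth": 1, "eth": 2, "prop": 3, "but": 4}   # root -> number of carbons
root = alt(*(token(r) for r in ROOTS))
suffix = token("ane")

def alkane(name):
    r = seq(root, suffix)(name)
    if r is None or r[1]:            # parse failed or trailing characters left
        return None
    (stem, _sfx), _rest = r
    return {"name": name, "carbons": ROOTS[stem], "hydrogens": 2 * ROOTS[stem] + 2}

print(alkane("propane"))   # {'name': 'propane', 'carbons': 3, 'hydrogens': 8}
print(alkane("pentane"))   # None: not covered by the toy grammar
```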
7

Msibi, Phakamile Innocentia. "Ucwaningo lwesimantikhi yelekhizikhoni yesenzo u-phuma esizulwini". Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5377.

Full text
Abstract
Thesis (MA (African Languages))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: The main concern of this thesis relates to an investigation of the lexical-semantic nature of the motion verb –phuma (exit, go out) in isiZulu within the framework of Generative Lexicon Theory. In particular, the thesis explores the event structure and aspectual verb class properties in the locative-subject alternation with the verb –phuma in isiZulu. Chapter one presents a general introduction to the study, stating the purpose and aims of the research, giving a broad perspective of the theoretical framework adopted, and outlining the organisation of the investigation of the lexical-semantic properties of –phuma. Chapter two presents a detailed discussion of Generative Lexicon Theory, which centrally concerns accounting for polysemy phenomena across various nominal and verbal expressions. The four dimensions of lexical-semantic representation that constitute the central theoretical properties in Generative Lexicon Theory are reviewed, i.e. Argument structure, Event structure, Qualia structure and Lexical Inheritance structure. In addition, the various facets of meaning of Qualia structure, namely the Formal, Constitutive, Telic and Agentive facets, are described in relation to their theoretical significance in accounting for word meaning and polysemy. Chapter three examines in a systematic and comprehensive way the range of locative-subject alternation possibilities with the verb –phuma. In particular, the range of semantic types of the NP subject argument of –phuma taking a locative complement is explored to determine whether all these sentences permit a corresponding locative-alternation construction. In addition, the aspectual verb class properties of the two variants in the alternation are analysed with regard to a range of diagnostics associated with stative events, activity events, achievement events and accomplishment events. It is shown that the two variants in the alternation can be distinguished in terms of their aspectual verb class properties. Chapter four summarises the main findings of the study and presents the conclusion.
8

Mirzapour, Mehdi. "Modeling Preferences for Ambiguous Utterance Interpretations". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS094/document.

Full text
Abstract
The problem of automatic logical meaning representation for ambiguous natural language utterances has been the subject of interest among researchers in the domain of computational and logical semantics. Ambiguity in natural language may arise at the lexical/syntactic/semantic level of meaning construction, or it may be caused by other factors such as ungrammaticality and lack of the context in which the sentence is actually uttered. The traditional Montagovian framework and the family of its modern extensions have tried to capture this phenomenon by providing models that enable the automatic generation of logical formulas as the meaning representation. However, there is a line of research which has not yet been investigated in depth: ranking the interpretations of ambiguous utterances based on the real preferences of language users. This gap suggests a new direction for study, which is partially carried out in this dissertation by modeling meaning preferences in alignment with some of the well-studied human preferential performance theories available in the linguistics and psycholinguistics literature. In order to fulfill this goal, we suggest using and extending Categorial Grammars for our syntactical analysis and Categorial Proof Nets as our syntactic parse. We also use the Montagovian Generative Lexicon for deriving a multi-sorted logical formula as our semantic meaning representation. This paves the way for our five-fold contributions, namely, (i) ranking multiple-quantifier scopings by means of the underspecified Hilbert epsilon operator and categorial proof nets; (ii) modeling the semantic gradience in sentences that have implicit coercions in their meanings, using a framework called the Montagovian Generative Lexicon; our task here is to introduce a procedure for incorporating types and coercions using crowd-sourced lexical data gathered by a serious game called JeuxDeMots; (iii) introducing new locality-based, referent-sensitive metrics for measuring linguistic complexity by means of Categorial Proof Nets; (iv) introducing algorithms for sentence completion with different linguistically motivated metrics to select the best candidates; (v) and finally, integrating the different computational metrics for ranking preferences into a single model.
9

Salazar, Burgos Hada Rosabel. "Descripción y representación de los adjetivos deverbales de participio en el discurso especializado". Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/41720.

Full text
Abstract
The goal of this thesis is to pinpoint the grammatical information that is necessary to determine which Spanish verb stems give rise to an adjectival participle (AP). This information will allow us to describe the linguistic indicators that, within the domain of economy, activate a specialized meaning in those terms that have the structure AP+noun. These minimal syntactic constructions are highly productive in specialized discourse. Nevertheless, the hybrid nature of the participial form invokes many conflicts in Natural Language Processing (NLP) applications. This descriptive approach to adjectival participles is linguistic in nature, is based on the Communicative Theory of Terminology (CTT), and intends to be the point of contact between theory and application.
10

Matos, Ely Edison da Silva. "LUDI: um framework para desambiguação lexical com base no enriquecimento da semântica de frames". Universidade Federal de Juiz de Fora, 2014. https://repositorio.ufjf.br/jspui/handle/ufjf/695.

Full text
Abstract
While in the field of Syntax the techniques, algorithms and applications of Natural Language Processing are well known and relatively well established, the same situation does not hold for the field of Semantics. Aiming at contributing to the studies in Computational Semantics, this work implements ideas and insights offered by Cognitive Linguistics, which is itself an alternative to Generative Linguistics. We attempt to bring together contributions from the computational domain (Databases, Graph Theory, Ontologies, inference mechanisms, Connectionist Models), the linguistic domain (Frame Semantics and the Generative Lexicon), and the application domain (FrameNet and the SIMPLE Ontology) in order to address semantic issues more flexibly. The object of study is the process of disambiguation of Lexical Units. The results of the research are embodied in the form of a computer application, called Framework LUDI (Lexical Unit Discovery through Inference), composed of algorithms and data structures used for Lexical Unit disambiguation. The framework is a Natural Language Understanding application, which can be integrated into information retrieval and summarization tools, as well as into processes of Semantic Role Labeling (SRL).
11

Mery, Bruno. "Modélisation de la Sémantique Lexicale dans le cadre de la théorie des types". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00627432.

Full text
Abstract
This manuscript constitutes the written part of the doctoral work carried out by Bruno Mery under the supervision of Christian Bassac and Christian Retoré between 2006 and 2011, on the subject "Modélisation de la sémantique lexicale dans la théorie des types" (modelling lexical semantics within type theory). It is a computer science thesis situated in the field of natural language processing, aiming to provide a formal framework for taking into account, during the semantic analysis of a sentence, the information contributed by each word. After situating the subject, the thesis reviews the many works that preceded it and places itself in the tradition of the generative lexicon. It presents examples of the phenomena to be handled and proposes a computational system based on second-order logic. It then examines the validity of this proposal against the examples and against other approaches that have already been formalized, and reports an implementation of the system. Finally, it offers a brief discussion of the questions that remain open.
12

Bandhakavi, Anil. "Domain-specific lexicon generation for emotion detection from text". Thesis, Robert Gordon University, 2018. http://hdl.handle.net/10059/3103.

Full text
Abstract
Emotions play a key role in effective and successful human communication. Text is popularly used on the internet and social media websites to express and share emotions, feelings and sentiments. However, useful applications and services built to understand emotions from text are limited in effectiveness due to their reliance on general-purpose emotion lexicons that have static vocabularies and on sentiment lexicons that can only interpret emotions coarsely. Thus emotion detection from text calls for methods and knowledge resources that can deal with challenges such as dynamic and informal vocabulary, domain-level variations in emotional expressions and other linguistic nuances. In this thesis we demonstrate how labelled (e.g. blogs, news headlines) and weakly-labelled (e.g. tweets) emotional documents can be harnessed to learn word-emotion lexicons that can account for dynamic and domain-specific emotional vocabulary. We model the characteristics of real-world emotional documents to propose a generative mixture model, which iteratively estimates the language models that best describe the emotional documents using expectation maximization (EM). The proposed mixture model has the ability to model both emotionally charged words and emotion-neutral words. We then generate a word-emotion lexicon using the mixture model to quantify word-emotion associations in the form of probability vectors. Secondly, we introduce novel feature extraction methods to utilize the emotion-rich knowledge captured by our word-emotion lexicon. The extracted features are used to classify text into emotion classes using machine learning. Further, we also propose hybrid text representations for emotion classification that use the knowledge of lexicon-based features in conjunction with other representations such as n-grams, part-of-speech and sentiment information. Thirdly, we propose two different methods which jointly use an emotion-labelled corpus of tweets and the emotion-sentiment mapping proposed in psychology to learn word-level numerical quantification of sentiment strengths over a positive to negative spectrum. Finally, we evaluate all the proposed methods in this thesis through a variety of emotion detection and sentiment analysis tasks on benchmark data sets covering domains from blogs to news articles to tweets and incident reports.
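The lexicon-generation idea can be illustrated with a much simpler count-based stand-in for the EM-trained mixture model described above (this sketch is not the author's model): estimate, for each word, a probability vector over emotion labels from a small labelled corpus. The corpus and labels below are invented.

```python
# Toy word-emotion lexicon: P(emotion | word) estimated by counting how often a
# word appears in documents labelled with each emotion. The thesis instead fits
# a generative mixture model with EM; this is only a simplified illustration.

from collections import Counter, defaultdict

corpus = [
    ("what a wonderful sunny day", "joy"),
    ("i love this wonderful gift", "joy"),
    ("this delay makes me furious", "anger"),
    ("stuck in traffic again furious and tired", "anger"),
    ("i miss her so much", "sadness"),
]

counts = defaultdict(Counter)          # word -> Counter over emotions
for text, emotion in corpus:
    for word in set(text.split()):     # document-level presence counts
        counts[word][emotion] += 1

emotions = sorted({e for _, e in corpus})

def emotion_vector(word, smoothing=0.1):
    c = counts.get(word, Counter())
    raw = [c[e] + smoothing for e in emotions]
    total = sum(raw)
    return {e: round(v / total, 3) for e, v in zip(emotions, raw)}

print(emotions)
print("wonderful ->", emotion_vector("wonderful"))   # skewed towards joy
print("furious   ->", emotion_vector("furious"))     # skewed towards anger
```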
13

Pereira, Dennis V. "Automatic Lexicon Generation for Unsupervised Part-of-Speech Tagging Using Only Unannotated Text". Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/10094.

Full text
Abstract
With the growing number of textual resources available, the ability to understand them becomes critical. An essential first step in understanding these sources is the ability to identify the parts-of-speech in each sentence. The goal of this research is to propose, improve, and implement an algorithm capable of finding terms (words in a corpus) that are used in similar ways--a term categorizer. Such a term categorizer can be used to find a particular part-of-speech, i.e. nouns in a corpus, and generate a lexicon. The proposed work is not dependent on any external sources of information, such as dictionaries, and it shows a significant improvement (~30%) over an existing method of categorization. More importantly, the proposed algorithm can be applied as a component of an unsupervised part-of-speech tagger, making it truly unsupervised, requiring only unannotated text. The algorithm is discussed in detail, along with its background, and its performance. Experimentation shows that the proposed algorithm performs within 3% of the baseline, the Penn-TreeBank Lexicon.
Master of Science
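As a minimal sketch of the general idea of categorizing terms by the contexts they share, and not the algorithm proposed in the thesis, the following fragment builds context vectors from unannotated text and clusters them; the toy corpus and parameter choices are invented for the example.

```python
# Toy term categorizer: words are represented by the words occurring next to them
# in raw text, then clustered. The thesis's algorithm is more elaborate; this only
# illustrates "categorization by similar usage" from unannotated text.

from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans

text = ("the dog chased the cat . the cat chased the mouse . "
        "a dog saw a mouse . the mouse saw the dog .").split()

window = 1
contexts = defaultdict(Counter)
for i, w in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if j != i:
            contexts[w][text[j]] += 1

targets = ["dog", "cat", "mouse", "chased", "saw"]
vocab = sorted({c for w in targets for c in contexts[w]})
X = np.array([[contexts[w][c] for c in vocab] for w in targets], dtype=float)
X /= np.maximum(X.sum(axis=1, keepdims=True), 1)   # normalise rows

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for w, label in zip(targets, km.labels_):
    print(w, "-> cluster", label)   # nouns and verbs should tend to separate
```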
14

Abeyruwan, Saminda Wishwajith. "PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/28.

Full text
Abstract
An ontology is a formal, explicit specification of a shared conceptualization. Formalizing an ontology for a domain is a tedious and cumbersome process. It is constrained by the knowledge acquisition bottleneck (KAB). There exists a large number of text corpora that can be used for classification in order to create ontologies with the intention to provide better support for the intended parties. In our research we provide a novel unsupervised bottom-up ontology generation method. This method is based on lexico-semantic structures and Bayesian reasoning to expedite the ontology generation process. This process also provides evidence to domain experts to build ontologies based on top-down approaches.
15

Kozlowski, Raymond. "Uniform multilingual sentence generation using flexible lexico-grammatical resources". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 0.93 Mb., 213 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3200536.

Full text
16

Caink, Andrew David. "The lexical interface : closed class items in south Slavic and English". Thesis, Durham University, 1998. http://etheses.dur.ac.uk/5026/.

Full text
Abstract
This thesis argues for a minimalist theory of dual lexicalization. It presents a unified analysis of South Slavic and English auxiliaries and accounts for the distribution of South Slavic clitic clusters. The analysis moves much minor cross-linguistic variation out of the syntax into the lexicon and the level of Phonological Form. Following a critique of various approaches to lexical insertion in Chomskyan models, we adapt Emonds' (1994, 1997) theory of syntactic and phonological lexicalization for a model employing bare phrase structure. We redefine 'extended projection' in this theory, and revise the mechanism of 'Alternative Realization', whereby formal features associated with (possibly null) XP may be realised on another node. Pronominal clitics are one example of Alternative Realization. We claim that the Serbian/Croatian/Bosnian clitic cluster is phonologically lexicalized on the highest head in the extended projection. The clitic auxiliaries in SCB are not auxiliaries, but the alternative realization of features in Iº without categorial specification, hence the distribution of the clitic cluster as a whole. We show how a verb's extended projection may be extended by 'restructuring' verbs, allowing clitic climbing. In Bulgarian/Macedonian, the clausal clitic cluster appears on the highest [+V] head in the extended projection, determined by the categorial specifications of the auxiliaries. In the DP, the possessive dative clitic forms a clitic cluster with the determiner, its distribution determined by the realization of the Dº feature. SCB and Bulgarian clitic clusters require a phonological host in the domain of lexicalization: phonological lexicalization into the Wackernagel Position occurs as a 'last resort'. The treatment of auxiliaries and restructuring verbs in English and South Slavic derives from their lexical entries. Dual lexicalization and bracketing of features in the lexicon allows variation in trace licensing, optional word orders, and minor language-specific phonological idiosyncrasies.
17

Arapinis, Alexandra. "Le Mot et la Chose Revisités: le Cas de la Polysémie Systématique". Phd thesis, Université Panthéon-Sorbonne - Paris I, 2009. http://tel.archives-ouvertes.fr/tel-00614536.

Full text
Abstract
Systematic polysemy, which has occupied a growing place in lexical semantics debates since the 1990s, appears to bring back to the agenda the fundamental question of the relationship between words and things. Starting from the observation that these multi-sense phenomena do not involve a genuine change of reference, but on the contrary seem to bring into play different parts or aspects of the same referent, this work proposes a metaphysical rereading of two typed models of systematic polysemy (Pustejovsky's Generative Lexicon and Asher's Type Composition Logic), aiming to clarify the notions of aspect, part and constituent of an object that are mobilized in formulating the compositional rules that generate contextual meanings.
18

Thwaites, Peter. "Lexical and distributional influences on word association response generation". Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/119182/.

Full text
Abstract
This thesis is the result of an attempt to investigate the determinants of word association responses. The aim of this work was to identify those properties of words - their frequency, grammatical class, and textual distribution, for example - which influence the generation of word association responses, and to align these effects with wider psycholinguistic views of the mental lexicon. The experimental work in the early chapters focuses on grammatical influences on word association. In particular, it is demonstrated that both grammatical class and verb transitivity influence the type of response most likely to be selected by participants. The immediately following chapters ask why this would be so. The analysis of several models of word association suggests that the development of a clearer understanding of the way in which a word's textual distribution impacts upon associative response patterns may be an important stepping stone towards a coherent model of associative response generation. In the later part of the thesis, a series of novel experiments is conducted comparing word association response patterns with corpus-derived data. This work in turn lays the foundation for the development of a new usage-based model of word association, which is shown, in the penultimate chapter, to be capable of explaining a wide range of research findings, including not only the grammatical class and transitivity-related findings described above, but also earlier findings relating to the influence of lexical variables on the structure of the associative network, and to the discovery of individual and age-related response patterns in word association.
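As a minimal sketch of the kind of corpus-derived quantity such a usage-based account can be compared against (not the thesis's actual measures), one can estimate how often a candidate response word occurs near a cue word in raw text; the toy corpus and window size are invented.

```python
# Toy forward-association estimate from a corpus: the share of a cue word's
# window-based neighbours taken up by a candidate response. The thesis evaluates
# richer, linguistically motivated distributional measures; this is illustrative only.

from collections import Counter

corpus = ("the black cat sat on the mat . the cat drank warm milk . "
          "dogs often chase the cat . people drink hot coffee with milk .").split()

def near_counts(cue, window=3):
    hits = Counter()
    for i, w in enumerate(corpus):
        if w == cue:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            hits.update(corpus[lo:i] + corpus[i + 1:hi])
    return hits

def association(cue, response, window=3):
    hits = near_counts(cue, window)
    total = sum(hits.values())
    return hits[response] / total if total else 0.0

print(association("cat", "milk"))    # non-zero: "milk" occurs near "cat"
print(association("cat", "coffee"))  # 0.0 in this tiny corpus
```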
19

Chiu, Pei-Wen Andy. "From Atoms to the Solar System: Generating Lexical Analogies from Text". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2943.

Full text
Abstract
A lexical analogy is two pairs of words (w1, w2) and (w3, w4) such that the relation between w1 and w2 is identical or similar to the relation between w3 and w4. For example, (abbreviation, word) forms a lexical analogy with (abstract, report), because in both cases the former is a shortened version of the latter. Lexical analogies are of theoretic interest because they represent a second order similarity measure: relational similarity. Lexical analogies are also of practical importance in many applications, including text-understanding and learning ontological relations.

This thesis presents a novel system that generates lexical analogies from a corpus of text documents. The system is motivated by a well-established theory of analogy-making, and views lexical analogy generation as a series of three processes: identifying pairs of words that are semantically related, finding clues to characterize their relations, and generating lexical analogies by matching pairs of words with similar relations. The system uses a dependency grammar to characterize semantic relations, and applies machine learning techniques to determine their similarities. Empirical evaluation shows that the system performs remarkably well, generating lexical analogies at a precision of over 90%.
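The matching step can be pictured with a deliberately simplified stand-in for the system's dependency-based relation features: represent each word pair by a difference of word vectors and compare pairs by cosine similarity. The vectors below are toy values, and the actual system characterizes relations with dependency-grammar clues and machine learning rather than vector offsets.

```python
# Toy relational-similarity check for candidate lexical analogies.
# Real systems (including the one described above) characterize the relation
# between two words with richer clues such as dependency paths; the offsets of
# fabricated embedding vectors below only illustrate "similar relations".

import numpy as np

emb = {   # fabricated 4-dimensional vectors
    "abbreviation": np.array([0.9, 0.1, 0.3, 0.0]),
    "word":         np.array([0.2, 0.1, 0.3, 0.0]),
    "abstract":     np.array([0.8, 0.4, 0.1, 0.2]),
    "report":       np.array([0.1, 0.4, 0.1, 0.2]),
    "puppy":        np.array([0.1, 0.9, 0.2, 0.5]),
    "dog":          np.array([0.1, 0.8, 0.2, 0.1]),
}

def relation(w1, w2):
    return emb[w1] - emb[w2]          # crude stand-in for a relation representation

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy_score(pair_a, pair_b):
    return cosine(relation(*pair_a), relation(*pair_b))

print(analogy_score(("abbreviation", "word"), ("abstract", "report")))  # high
print(analogy_score(("abbreviation", "word"), ("puppy", "dog")))        # lower
```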
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Mullen, Dana Shirley. "Issues in the morphology and phonology of Amharic : the lexical generation of pronominal clitics". Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/5402.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Schwanhäußer, Barbara. "Lexical tone perception and production : the role of language and musical background". View thesis, 2007. http://handle.uws.edu.au:8081/1959.7/31791.

Texto completo
Resumen
Thesis (Ph.D.) -- University of Western Sydney, 2007.
"A thesis submitted to the University of Western Sydney, College of Arts, MARCS Auditory Laboratories in fulfilment of the requirements for the degree of Doctor of Philosophy." Includes bibliography.
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Booth, Hannah. "Expletives and clause structure : syntactic change in Icelandic". Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/expletives-and-clause-structure-syntactic-change-in-icelandic(7907d61b-4404-4964-bf8d-ce304c0fab8d).html.

Texto completo
Resumen
This thesis examines the historical development of the expletive það in Icelandic, from the earliest texts to the present day. This development is set against the backdrop of Icelandic clause structure, with particular attention to verb-second, information structure and the left periphery. The study combines corpus linguistic data and quantitative techniques with theoretical analysis, conducted within Lexical Functional Grammar. I show that Icelandic underwent three syntactic developments in the period 1750-present and argue that these all reflect one overall change: the establishment of það as a structural placeholder for the topic position (the clause-initial prefinite position). I claim that það functions as a topic position placeholder in the earliest attested stage of Icelandic (1150-1350), but is restricted to a specific context: topicless subjectless constructions with a clausal object, where það has cataphoric reference. The three changes in the period 1750-present represent the establishment of this topic position placeholder in new contexts: (1) það generalises to all types of topicless subjectless construction, beyond those with a clausal object; (2) það emerges in presentational constructions (which inherently lack a topic), out-competing the earlier expletive form þar; (3) in cataphoric contexts with a clausal subject, það begins to transition from subject to topic position placeholder. The majority of these contexts exhibit at least a short period in which það - or alternatively þar - behaves like a subject. Icelandic thus exhibits the emergence of a topic position placeholder expletive from an earlier subject-like element. This shift towards prefinite expletives, which sets Icelandic apart from e.g. Mainland Scandinavian, happens relatively late in the diachrony (1750-present). Moreover, the Icelandic development challenges the standard claim in the literature on Germanic expletives, which assumes that subject expletives emerge from prefinite expletives.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Walter, Sebastian [Verfasser] y Philipp [Akademischer Betreuer] Cimiano. "Generation of multilingual ontology lexica with M-ATOLL : a corpus-based approach for the induction of ontology lexica / Sebastian Walter ; Betreuer: Philipp Cimiano". Bielefeld : Universitätsbibliothek Bielefeld, 2017. http://d-nb.info/1123723729/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Hamed, Osama Amin [Verfasser] y Torsten [Akademischer Betreuer] Zesch. "Automatic generation of lexical recognition tests using natural language processing / Osama Amin Hamed ; Betreuer: Torsten Zesch". Duisburg, 2019. http://d-nb.info/1198111313/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Laroui, Abdellatif. "Le composant lexical médiateur entre le composant conceptuel et le composant linguistique dans le cadre de la génération multilingue". Paris 6, 1993. http://www.theses.fr/1993PA066142.

Texto completo
Resumen
This thesis falls within the paradigm of artificial intelligence and addresses the problem of multilingual generation in the context of natural language understanding by machine. Our aim is to show the interdependence between a linguistic component (built around Meaning-Text Theory) and a conceptual component. We realize this interdependence through the mediation of a lexical component. To do so, we rely, on the one hand, on the organization of the lexicon; the use of the notion of context to specify the conditions for word selection; the effect of feedback involving the linguistic component; and the application of lexical functions. On the other hand, we optimize the system by computing the largest discriminating factor and by applying the holographic principle.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Cruz, Adilson Góis da. "A expressão do argumento dativo no português escrito: um estudo comparativo entre o português brasileiro e o português europeu". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/8/8142/tde-27112009-140208/.

Texto completo
Resumen
Esta dissertação estuda, em uma perspectiva comparativa entre o português brasileiro (PB) e o português europeu (PE), a representação do argumento dativo de terceira pessoa em um corpus de língua escrita formal, constituído pelas traduções brasileira e lusitana feitas diretamente do espanhol do romance Cem anos de solidão de Gabriel Garcia Marques. A análise detém-se ao comportamento de três variantes do dativo o clítico lhe/lhes, os PPs a/para ele(s)/ela(s) e o pronome nulo nos contextos de predicados ditransitivos, inacusativos, causativos, incoativos e inergativos. Dentro do quadro teórico da Teoria Gerativa e da Teoria da Variação, pretende-se explicitar diferenças entre o PB e o PE que possam corroborar, ou não, a hipótese de que essas duas variedades do português apresentam gramáticas distintas.
This dissertation discusses, in a comparative perspective between Brazilian Portuguese (BP) and European Portuguese (EP), the expression of the third-person dative argument in a formal written corpus constituted by the Brazilian and European translations, made directly from Spanish, of the book One Hundred Years of Solitude, by Gabriel García Márquez. The analysis considers the behaviour of three dative variants (the clitic lhe/lhes, the PPs a/para ele(s)/ela(s) and the null pronoun) in ditransitive, unaccusative, causative, inchoative and unergative predicates. In the context of Generative Theory and Variation Theory, the goal is to show differences between BP and EP that can confirm, or not, the hypothesis that the two varieties of Portuguese reveal distinct grammars.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

IRAQUI, (ép SINACEUR) ZAKIA. "Etude lexicale des parlers arabes marocains". Paris 3, 1986. http://www.theses.fr/1986PA030068.

Texto completo
Resumen
Etude lexicale des parlers arabes marocains a partir d'un important corpus, le fichier de g. S. Colin, qui contient plus de 5000 racines, et d'enquetes orales. La structure de l'arabe marocain repose sur le croisement scheme-racine et l'utilisation d'un ensemble de suffixes generatifs de formes nouvelles. Examen de la racine trilitere avec tous les schemes qui se sont degages du depouillement systematique de onze lettres du fichier. L'etude de chaque scheme se refere aux structures de l'arabe classique considere comme norme. Le lexique de l'arabe marocain est constitue de mots classiques qui ont obei a certaines lois d'evolution linguistique. Les contacts de ce parler avec d'autres langues ont entraine l'apparition de nombreux termes etrangers. Les emprunts berberes, turcs, espagnols et francais ont ete parfaitement assimiles, coules dans des moules arabes et plies aux lois morphologiques de la langue d'accueil: derivation, formation de pluriels et de diminutifs. Ils ont parfois donne naissance a de nouvelles racines. On peut degager du lexique des ensembles de schemes correspondants a des categories determinees: masdars, adjectifs, noms de metiers, pluriels, diminutifs. L'arabe marocain est en pleine evolution lexicale et meme phonologique sous l'influence des mass-media et de l'enseignement arabise
Lexical study of Moroccan Arabic on the basis of an important corpus, the G. S. Colin file containing more than 5000 roots, and some oral research. The structure of Moroccan Arabic is based on root-pattern intercrossing and the use of a set of suffixes generating new forms. Examination of triliteral roots with all the patterns that have been detected by the systematic analysis of the eleven letters of the file. The study of each pattern refers to Classical Arabic, considered as a standard. The lexicon of Moroccan Arabic consists of classical words which have followed some of the laws of linguistic evolution. The contact of the dialect with other languages has resulted in the appearance of many foreign terms. Berber, Turkish, Spanish and French borrowings have been perfectly assimilated, cast into Arabic moulds and submitted to the morphological laws of the receiving language: derivation, formation of plurals, diminutives. They have sometimes given birth to new roots. Sets of patterns corresponding to specific categories (masdars, adjectives, participles, trade nouns, plurals and diminutives) can be brought to light in the lexicon. Moroccan Arabic is undergoing a major change, not only lexical but also phonological, under the influence of mass media and Arabicized education.
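The root-pattern intercrossing described in this abstract can be illustrated with a tiny Python sketch that slots the consonants of a triliteral root into the C-positions of a pattern template. The root and patterns below are generic textbook-style examples, not items taken from the Colin file.

# Minimal illustration of root-pattern intercrossing: the consonants of a
# triliteral root are slotted into the C positions of a pattern template.
# The root and patterns are illustrative, not data from the thesis corpus.
def interdigitate(root, pattern):
    """Replace C1, C2, C3 in the pattern with the root consonants."""
    out = pattern
    for i, consonant in enumerate(root, start=1):
        out = out.replace(f"C{i}", consonant)
    return out

root = ("k", "t", "b")          # a common triliteral root (illustrative)
patterns = ["C1C2eC3", "C1C2aC3a", "meC1C2uC3"]
for p in patterns:
    print(p, "->", interdigitate(root, p))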
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Rein, Kellyn [Verfasser]. "I believe it's possible it might be so.... Exploiting Lexical Clues for the Automatic Generation of Evidentiality Weights for Information Extracted from English Text / Kellyn Rein". Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1119803217/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Mazón, Larson Erik Ramón. "Diferencias léxicas entreinmigrantes de distintageneración : Un estudio piloto sobre el cambiointergeneracional de conocimientos de español". Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-61018.

Texto completo
Resumen
This pilot project studies the differences in knowledge of the Spanish language between immigrants from Spanish-speaking countries and their descendants in Växjö, a small town in southern Sweden. The extended use of Swedish terms and structures among immigrants has been noticed. This phenomenon has been studied by other researchers as an example of language displacement, where users of a minority language, in our case Spanish, gradually exchange their vocabulary and grammar for that of a language in more common use in the community, which in Växjö is Swedish. Unlike most of them, we suggest that in the case of new users of the minority language, that is, descendants of the original immigrants, there is no displacement but rather a lack of learning of the minority language, due to limited access to Spanish resources, which is compensated for with the use of Swedish terms. This would result in an apparent language displacement. Based on different theories, among which we should name Bloomfield's model of language learning, Sarmiento's division of identity into three aspects (language, traditions and appearance), and Esquivel's findings about adjacent languages, we have developed these questions: what effect does the genealogical distance between the first immigrants and each successive generation have on the Spanish language? What other variables can influence language differences? And where can these differences be found? The results of our study show a tendency towards the impoverishment of the Spanish language with every generation after the migration, although the results are not statistically significant, which makes it interesting to continue the study at a bigger scale. We have also found a correlation between Spanish language knowledge, language identity and the perception of acceptance by the Swedish community. Among the studied areas of knowledge, the ones that present the biggest gaps are technical subjects.
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Santos, Paola Junqueira Pinto dos. "Orações infinitivas : da seleção ao controle". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/21564.

Texto completo
Resumen
In Portuguese, the subject of non-inflected infinitival clauses is filled by the empty category PRO, which, according to Generative Theory, has a mixed nature, behaving sometimes like a pronoun, with free reference, and sometimes like an anaphor, with its reference bound to some argument of the immediately higher clause. This work studies two basic aspects of infinitival clauses: (1) which verbs select them, and (2) whether the same classes of verbs condition the way control over PRO takes place. To this end, a study of complementation in Portuguese was necessary in order to observe which verbs select a subordinate infinitive and how they do so. Finally, we seek to establish whether control is a syntactic phenomenon, as Chomsky (1981/1982) claims, or a semantic one, involving the interpretation of the basic predicates behind control verbs, as Culicover and Jackendoff (2003/2005) observe. This work also aims to contribute to linguistic studies through the description, analysis and explanation of a phenomenon still little explored in Brazilian Portuguese.
En el idioma portugués, el sujeto de las oraciones no flexionado es ocupado por la categoría vacía PRO, que tiene, de acuerdo a la Teoría Generativa, naturaleza mixta, comportándose como un pronombre, con referencia libre; o como una anáfora, con referencia vinculada a algún argumento de la oración inmediatamente superior. Esta investigación tiene por objeto estudiar dos aspectos básicos de las oraciones infinitivas: (1) cuáles verbos las seleccionan, y (2) si las mismas clases de verbos condicionan la forma con la cual ocurre el control del PRO. Para eso, fue necesario un estudio sobre la complementarización en portugués, a fin de observar cuáles son los verbos que seleccionan infinitivo subordinado y cómo lo hacen. Finalmente, se busca establecer si el control es un fenómeno de orden sintáctico, como afirma Chomsky (1981/1982), o de orden semántico, involucrando la interpretación de los predicados básicos detrás de los verbos de control, como observan Culicover e Jackendoff (2003/2005). Con esta investigación, si tiene por objeto, también, contribuir con los estudios lingüísticos a través de la descripción, análisis y explicación de un fenómeno aún poco explorado en el portugués de Brasil.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Kyjovská, Linda. "Syntaktická analýza založená na multigenerování". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235439.

Texto completo
Resumen
This work deals with syntax analysis based on multi-generation. The basic idea is to create a computer program which transforms one input string into n-1 output strings. The input of the program is a plain-text file created by the user, which contains n grammars. Exactly one grammar from the input file is marked as the input grammar and the other n-1 grammars are output grammars. For an input string, the program builds the list of input-grammar rules used and applies the corresponding output-grammar rules to create the n-1 output strings. The program is written in C++ and Bison.
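The multi-generation idea can be pictured as follows: the parser records which input-grammar rules derived the input string, and the generator replays the same-numbered rules of each output grammar. The minimal Python sketch below shows only the replay step, with two invented parallel grammars and a hard-coded rule sequence standing in for the parser's output; it is not the thesis's C++/Bison implementation.

# Sketch of the generation half of multi-generation: given the sequence of
# input-grammar rule numbers used to derive the input string (assumed here to
# have been produced by the parser), replay the same-numbered rules of an
# output grammar to build the corresponding output string.
# Grammars are context-free rules (lhs, rhs); UPPERCASE symbols in the rhs are
# nonterminals. All rules below are invented for illustration.

input_grammar = {
    1: ("S", ["NP", "VP"]),
    2: ("NP", ["dog"]),
    3: ("VP", ["barks"]),
}
output_grammar = {          # parallel grammar for a second language
    1: ("S", ["NP", "VP"]),
    2: ("NP", ["pes"]),
    3: ("VP", ["štěká"]),
}

def generate(grammar, rule_sequence):
    """Rebuild a string by applying numbered rules as a leftmost derivation."""
    sentential = ["S"]
    for n in rule_sequence:
        lhs, rhs = grammar[n]
        i = sentential.index(lhs)          # leftmost occurrence of the lhs
        sentential[i:i + 1] = rhs
    return " ".join(sentential)

used_rules = [1, 2, 3]                      # as recorded when parsing the input
print(generate(output_grammar, used_rules)) # -> "pes štěká"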
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Dolíhal, Luděk. "Syntaktická analýza založená na řadě metod". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236688.

Texto completo
Resumen
The main goal of this work is to analyze the creation of a composite compiler. A composite compiler is, in this case, a system which consists of several cooperating parts. This compiler is special because its syntactic analyser consists of two parts. The work focuses on the construction of the parser's parts and on their cooperation and communication. I will try to sketch the theoretical background of this solution, which is to be done by means of grammar systems. Then I will try to justify whether it is necessary and suitable to create such a parser. Last but not least, I will analyse the language whose syntactic analyser is to be implemented by the chosen method.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

LIU, CHIUNG-YI y 劉瓊怡. "Dynamic Generative Lexicon". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/26194439541953200466.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Duann, Ren-feng y 段人鳯. "When Embodiment Meets Generative Lexicon: The Human Body Part Metaphors in Taiwan Presidential Speeches". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6v826m.

Texto completo
Resumen
PhD dissertation
National Taiwan University
Graduate Institute of Linguistics
103
This dissertation integrates embodiment with generative lexicon. By analyzing the metaphorically/metonymically used human body part terminology in the Taiwan Presidential Corpus, a representative sample of the Taiwanese leadership rhetoric, we reveal how these two theories complement each other on the one hand, and disclose how the changing political context leads to the discriminated uses of the corporeal terms on the other hand. We argue that the two theories can complement each other: Embodiment strengthens generative lexicon by spelling out the cognitive reasons which motivate meaning generation; and generative lexicon, specifically the qualia structure, reinforces embodiment by accounting for the reason underlying the selection of a particular body part for metaphorization. Choosing to analyze how the four body parts—血 xie ‘blood’, 肉 rou ‘flesh’, 骨 gu ‘bone’, 脈 mai ‘meridian’—behave in the Taiwan Presidential Corpus, this dissertation aims to answer the following questions: (1) How do embodiment and generative lexicon interact? Does the qualia role influence the metaphorical/metonymical use of the body part terms? Or does the metaphorical/metonymical use of the body part terms facilitate the retrieval of the qualia role? (2) What is the significance of qualia structure in constraining the selection of body parts for metaphorical/metonymical use? (3) What is the significance of the qualia structure and the generative mechanisms in the formulation and comprehension of the conceptual pairings involving body parts? (4) How are political ideas conceptualized by the country leadership’s use of corporeal terminology? In other words, how can we establish the association between the activation of certain body parts and a certain political context? This dissertation, built on the potentiality to incorporate embodiment and generative lexicon, investigates the body part metaphors/metonyms used in the leadership rhetoric in Taiwan. We hypothesize that different body parts are activated in different ways in political speeches due to their distinctive features and functions, and the visibility and telicity of a body part are the major reasons why the body part is chosen for metaphorical/metonymical use. Moreover, different political agenda are likely to be reflected in the particular uses of corporeal terms, and the change of the socio-political context should lead to the diverging uses of an identical body part referred to in the speeches. This dissertation will contribute to research on conceptual metaphor, generative lexicon, as well as political discourse. Methodologically, this research, modifying the metaphor identification procedure (Pragglejaz Group 2007), provides a better solution for metaphor identification in Chinese data. With the incorporation of generative lexicon, it furthermore facilitates the researcher to more accurately formulate the conceptual mappings involving body part terms, and to better comprehend metaphorically used body parts. Theoretically, taking generative lexicon into consideration, it establishes correlation between qualia roles and the conceptual mappings. Based on the findings, it also predicts that the visibility and telicity of a body part are the most dominant reasons which activate the choice of a body part for metaphorical/metonymical use. 
In the light of political discourse, it systematically analyzes how human body parts are woven into the country's leadership rhetoric, revealing the influence exerted by the political context upon the use of corporeal terminology.
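Since the dissertation relies on the qualia structure of Generative Lexicon (the formal, constitutive, telic and agentive roles), a minimal sketch of how such an entry might be encoded is given below. The role values chosen for 'blood' are illustrative guesses on my part, not the analysis argued for in the dissertation.

# Toy encoding of a Generative Lexicon qualia structure for a body-part noun.
# Role labels follow Pustejovsky (1995); the values for 'blood' are
# illustrative placeholders, not the dissertation's analysis.
blood = {
    "lemma": "血 (blood)",
    "qualia": {
        "formal": "liquid",                      # what kind of thing it is
        "constitutive": "part of the body",      # part-whole / composition
        "telic": "sustain life by circulating",  # purpose or function
        "agentive": "produced by the body",      # how it comes into being
    },
}

def telic(entry):
    """Retrieve the telic role, the function often exploited in metaphor."""
    return entry["qualia"]["telic"]

print(telic(blood))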
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Šindlerová, Jana. "Slovesná valence v srovnávacím pohledu". Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-391348.

Texto completo
Resumen
Verbal Valency in a Cross-Linguistic Perspective. Jana Šindlerová. Abstract: In this thesis, we look at differences in the argument structure of verbs in Czech and English. In the first part, we describe the process of building the CzEngVallex lexicon. In the second part, based on the aligned data of the Prague Czech-English Dependency Treebank, we compare the valencies of verbal translation equivalents and comment on their differences. We classify the differences according to their underlying causes. The causes can be based in the linguistic structure of the two languages, they can include translatological reasons, or they can be grounded in the character of the descriptive linguistic theory used.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

SUN, CHONG-TENG y 孫崇騰. "Lexicon-driven generation in machine translation". Thesis, 1991. http://ndltd.ncl.edu.tw/handle/76322452626244669374.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Yen-Jen Tai y 戴延任. "Automatic Domain-Specific Sentiment Lexicon Generation with Label Propagation". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/02422995122929581401.

Texto completo
Resumen
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
101
Nowadays, the advance of social media has led to the explosive growth of opinion data, and sentiment analysis has therefore attracted a lot of attention. Current sentiment analysis applications follow two main approaches, the lexicon-based approach and the machine-learning approach. However, both of them face the challenge of obtaining a large amount of human-labeled training data and corpora. The lexicon-based approach requires a sentiment lexicon (sentiment dictionary) to determine opinion polarity. There are many existing benchmark sentiment lexicons, but they cannot cover all domain-specific word meanings. Thus, automatic generation of a domain-specific sentiment lexicon becomes an important task. In this paper, we propose a framework to automatically generate a sentiment lexicon. First, we determine the semantic similarity between two words in the entire unlabeled corpus. We treat the words as nodes and similarities as weighted edges to construct word graphs. A graph-based semi-supervised label propagation method finally assigns a polarity to unlabeled words through the proposed propagation process. Experiments conducted on microblog data from Twitter show that our approach leads to better performance than baseline approaches and general-purpose sentiment dictionaries.
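The label propagation step described above can be pictured with a minimal Python sketch: seed words carry fixed polarities, and every other word repeatedly takes the weighted average of its neighbours' scores over the word graph. The words, edge weights and iteration count are invented; this is a simplified stand-in, not the paper's exact formulation.

# Minimal label propagation over a word graph. Seed words have fixed polarity
# (+1 positive, -1 negative); other words repeatedly take the weighted average
# of their neighbours' scores. Words and edge weights are invented examples.
graph = {
    "good":    {"awesome": 0.8, "nice": 0.9},
    "awesome": {"good": 0.8, "epic": 0.7},
    "nice":    {"good": 0.9},
    "epic":    {"awesome": 0.7, "fail": 0.2},
    "bad":     {"fail": 0.9},
    "fail":    {"bad": 0.9, "epic": 0.2},
}
seeds = {"good": 1.0, "bad": -1.0}
scores = {w: seeds.get(w, 0.0) for w in graph}

for _ in range(20):                         # propagate until roughly stable
    for w, neighbours in graph.items():
        if w in seeds:
            continue                        # clamp the seed labels
        total = sum(neighbours.values())
        scores[w] = sum(wt * scores[n] for n, wt in neighbours.items()) / total

for w, s in sorted(scores.items(), key=lambda x: -x[1]):
    print(f"{w:8s} {s:+.2f}")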
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Hsieh-Wei Chen y 陳謝瑋. "Hierarchical Multi-Dimensional Subjectivity-Lexicon Generation Model for Opinion Analysis". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/96129427338689528486.

Texto completo
Resumen
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
98
Opinion mining and sentiment analysis, an emerging area of information retrieval and natural language processing that aims at opinion retrieval as well as subjectivity classification and clustering, has recently been attracting more and more attention from academia and industry. Traditional approaches mainly focus on polarity classification, whose limitations are addressed in this thesis. Because of these limitations, the well-studied polarity approaches are not adequate for criticism analysis, which requires more refined analysis techniques and modeling. The five major contributions of this thesis are: first, a Multi-Dimensional Opinion Analysis (MDOA) framework for criticism analysis; second, an unsupervised Multi-Dimensional Subjectivity-Lexicon (MDSL) generation scheme; third, a semi-supervised Hierarchical MDSL (H-MDSL) generation model; fourth, a modified Semi-Supervised Kernel k-Means clustering algorithm; and fifth, an evaluation scheme requiring no human intervention, based on constraint agreement and violation quantification. The MDOA framework consists of four major steps: first, creating a dataset by crawling blog posts of reviews; second, creating a "subjectivity-term to object" matrix, with each subjectivity term modeled as a vector in a high-dimensional space; third, transforming each subjectivity term into a new feature space to create the final MDSL, in which the feature space should represent the subjectivity terms well; and fourth, employing the learned MDSL for opinion analysis. In the experiments, first, the limitations of traditional polarity opinion analysis are addressed. Second, an entropy analysis of the learned MDSL and H-MDSL in the transformed feature space is performed; it shows that the improvement from the feature transformation can be up to 31% in terms of the entropy of the learned features. Third, the constraint agreement and violation evaluation of the proposed models and algorithms is performed, which shows that the proposed model outperforms the others by at least 21% in error rate and hit rate. Fourth, a comparison with traditional polarity approaches is also presented, showing that the proposed framework is not only capable of traditional polarity classification but also more capable of providing meaningful semantic information in criticism analysis.
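The second and third steps of the MDOA framework (building a subjectivity-term-to-object matrix and transforming it so each term becomes a vector) can be sketched as follows. The counts are invented, and a simple cosine-similarity comparison stands in for the modified Semi-Supervised Kernel k-Means clustering proposed in the thesis.

import numpy as np

# Toy "subjectivity-term x reviewed-object" count matrix: how often each
# subjective term occurs in reviews of each object. All counts are invented.
terms = ["delicious", "tasty", "boring", "dull"]
objects = ["restaurant", "movie", "book"]
counts = np.array([
    [9.0, 0.0, 1.0],   # delicious
    [8.0, 1.0, 0.0],   # tasty
    [0.0, 5.0, 6.0],   # boring
    [0.0, 4.0, 7.0],   # dull
])

# Row-normalise so each term becomes a unit vector in the object space (a
# simple stand-in for the feature-space transformation used in the thesis).
features = counts / np.linalg.norm(counts, axis=1, keepdims=True)

# Pairwise cosine similarities between subjectivity terms; terms used with the
# same objects end up close, which is what the later clustering exploits.
similarity = features @ features.T
for i, t in enumerate(terms):
    nearest = max((j for j in range(len(terms)) if j != i),
                  key=lambda j: similarity[i, j])
    print(f"{t:10s} is closest to {terms[nearest]}")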
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Dorr, Bonnie J. "Lexical Conceptual Structure and Generation in Machine Translation". 1989. http://hdl.handle.net/1721.1/6018.

Texto completo
Resumen
This report introduces an implemented scheme for generating target-language sentences using a compositional representation of meaning called lexical conceptual structure. Lexical conceptual structure facilitates two crucial operations associated with generation: lexical selection and syntactic realization. The compositional nature of the representation is particularly valuable for these two operations when semantically equivalent source-and-target-language words and phrases are structurally or thematically divergent. To determine the correct lexical items and syntactic realization associated with the surface form in such cases, the underlying lexical-semantic forms are systematically mapped to the target-language syntactic structures. The model described constitutes a lexical-semantic extension to UNITRAN.
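The kind of divergence-handling generation described here can be pictured with a small sketch: a single simplified lexical conceptual structure is realized differently in two languages whose lexical entries assign the experiencer to different grammatical positions. The encoding and the toy bilingual lexicon are my own illustration, not the actual UNITRAN representation.

# Simplified lexical conceptual structure (LCS) for a "liking" event and a toy
# bilingual lexicon illustrating thematic divergence: English realizes the
# experiencer as subject, while Spanish 'gustar' realizes it as an indirect
# object. This encoding is an invented illustration, not UNITRAN's.
lcs = {"event": "LIKE", "experiencer": "SPEAKER", "theme": "MUSIC"}

lexicon = {
    "en": {"LIKE": {"verb": "like",
                    "subject": "experiencer", "object": "theme"}},
    "es": {"LIKE": {"verb": "gusta",
                    "subject": "theme", "indirect_object": "experiencer"}},
}
words = {"en": {"SPEAKER": "I", "MUSIC": "music"},
         "es": {"SPEAKER": "me", "MUSIC": "la música"}}

def realize(lcs, lang):
    """Lexical selection plus a crude linearization from the LCS roles."""
    entry = lexicon[lang][lcs["event"]]
    w = words[lang]
    if lang == "en":
        return f'{w[lcs[entry["subject"]]]} {entry["verb"]} {w[lcs[entry["object"]]]}'
    # Spanish: clitic experiencer + verb + theme subject
    return f'{w[lcs[entry["indirect_object"]]]} {entry["verb"]} {w[lcs[entry["subject"]]]}'

print(realize(lcs, "en"))   # I like music
print(realize(lcs, "es"))   # me gusta la música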
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Teixeira, Joana Alexandra Vaz. "L2 Acquisition at the interfaces: Subject-verb inversion in L2 English and its pedagogical implications". Doctoral thesis, 2018. http://hdl.handle.net/10362/54381.

Texto completo
Resumen
The present PhD thesis deals with two kinds of interfaces that have recently become key areas of interest in generative second language acquisition research (GenSLA): (i) linguistic interfaces – the syntax-discourse interface (our main focus of research) and the lexicon-syntax interface in adult second language (L2) acquisition –, and (ii) an interdisciplinary interface – the interface between the domains of GenSLA and L2 pedagogy. The thesis seeks to shed new light on four general questions which are still a matter of debate in GenSLA: (i) Are narrow syntactic and lexical-syntactic properties unproblematic at the end state of L2 acquisition, as the Interface Hypothesis (IH) (Sorace & Filiaci, 2006; Sorace, 2011b) predicts? (ii) Are properties at the syntax-discourse interface necessarily problematic at the end state of L2 acquisition, as the IH proposes? (iii) What are the roles of cross-linguistic influence, input and processing factors in L2 acquisition at the syntax-discourse interface? (iv) Can explicit instruction help L2 learners/speakers (L2ers) overcome persistent problems in the acquisition of syntactic and syntax-discourse properties? With a view to investigating these questions, the thesis focuses on a linguistic phenomenon that has been little researched in GenSLA: subject-verb inversion (SVI) in L2 English. Three types of SVI are considered here: (i) “free” inversion (and their correlation with null subjects), (ii) locative inversion and (iii) presentational there-constructions (i.e., there-constructions with verbs other than be). The first is ungrammatical in English due to a purely syntactic factor: this language fixes the null subject parameter at a negative value. The last two types of SVI, on the other hand, are possible in English under certain lexical, syntactic and discourse conditions. The thesis comprises two experimental studies: (i) a study on the acquisition of the lexical, syntactic and discourse properties of SVI by advanced and near-native L2ers of English who are native speakers of French (a language similar to English in the relevant respects) and European Portuguese (a language different from English in the relevant respects), and (ii) a study on the impact of explicit grammar instruction on the acquisition of “narrow” syntactic and syntax-discourse properties of SVI by intermediate and low advanced Portuguese L2ers of English. The former study tests participants by means of three types of tasks: untimed drag-and-drop tasks, syntactic priming tasks, and speeded acceptability judgement tasks. Their results confirm that, as predicted by the IH, the properties of SVI that are purely (lexical-)syntactic are unproblematic at the end state of L2 acquisition, but those which involve the interface between syntax and discourse are a locus of permanent optionality, even when the first language (L1) is similar to the L2. Results are, moreover, consistent with the prediction of the IH that the optionality found at the syntax-discourse interface is primarily caused by processing inefficiencies associated with bilingualism. 
In addition to presenting new experimental evidence in favour of the IH, this study reveals that the degree of optionality L2ers exhibit at the syntax-discourse interface is moderated by the following variables, which have not been (sufficiently) considered in previous work on the IH: (i) construction frequency (very rare construction → more optionality), (ii) the quantity and/or distance of the pieces of contextual information the speaker needs to process (many pieces of contextual information in an inter-sentential context → more optionality), (iii) the level of proficiency in the L2 (lower level of proficiency → more optionality), and (iv) the (dis)similarity between the L1 and the L2 (L1≠L2 → more optionality). The study which concentrates on the impact of explicit grammar instruction on L2 acquisition follows a pre-test, treatment, post-test and delayed post-test design and tests participants by means of speeded acceptability judgement tasks. This study shows that explicit grammar instruction results in durable gains for L2ers, but its effectiveness is moderated by two factors: (i) the type of linguistic domain(s) involved in the target structure and (ii) whether or not L2ers are developmentally ready to acquire the target structure. Regarding factor (i), research findings indicate that the area that has been found to be a locus of permanent optionality in L2 acquisition – the syntax-discourse interface – is much less permeable to instructional effects than “narrow” syntax. Regarding factor (ii), results suggest that explicit instruction only benefits acquisition when L2ers are developmentally ready to acquire the target property. As these findings are relevant not only to GenSLA theory, but also to L2 teaching, the thesis includes an analysis of the relevance and potential implications of its findings for L2 grammar teaching.
A presente tese aborda dois tipos de interfaces que se tornaram recentemente áreas de interesse centrais na investigação desenvolvida em aquisição de língua segunda (L2) numa perspetiva generativista: (i) interfaces linguísticas – a interface sintaxe-discurso (o nosso foco principal de investigação) e a interface léxico-sintaxe na aquisição de L2 por adultos –, e (ii) uma interface interdisciplinar – a interface entre os domínios de aquisição e didática de L2. A tese pretende lançar nova luz sobre quatro questões que continuam a gerar muito debate no domínio de aquisição de L2: (i) Serão as propriedades “puramente” (léxico-)sintáticas completamente adquiríveis no estádio final de aquisição de L2, como a Hipótese de Interface (HI) (Sorace & Filiaci, 2006, Sorace, 2011b) propõe? (ii) Serão as propriedades na interface entre sintaxe e discurso necessariamente um locus de opcionalidade no estádio final de aquisição de L2, como a HI prediz? (iii) Quais são os papéis da influência da língua materna (L1), do input e de fatores de processamento na aquisição de L2 na interface sintaxe-discurso? (iv) Será que o ensino explícito ajuda os falantes de L2 a ultrapassarem problemas persistentes na aquisição de propriedades sintáticas e de sintático-discursivas? A fim de investigar estas questões, a tese debruça-se sobre um fenómeno linguístico ainda pouco investigado no domínio de aquisição de L2: a inversão sujeito-verbo (ISV) em inglês L2. Três tipos de ISV são considerados aqui: (i) a inversão “livre” (e sua correlação com sujeitos nulos), (ii) a inversão locativa e (iii) construções com there com verbos que não be (‘ser/estar’). A primeira é agramatical em inglês por um fator estritamente sintático: esta língua fixa o valor negativo para o parâmetro do sujeito nulo. Os dois últimos tipos de ISV, por seu lado, são possíveis em inglês em certas condições (léxico-)sintáticas e discursivas. A tese compreende dois estudos experimentais: (i) um estudo sobre a aquisição das propriedades lexicais, sintáticas e discursivas da ISV por falantes avançados e quase nativos de inglês que têm como L1 o francês (uma língua semelhante ao inglês nos aspetos relevantes) e o português europeu (uma língua diferente do inglês nos aspetos relevantes) e (ii) um estudo sobre o impacto do ensino explícito de gramática na aquisição de propriedades “estritamente” sintáticas e sintático-discursivas da ISV por falantes de português europeu com um nível intermédio e avançado em inglês L2. No primeiro estudo, os participantes são testados através de três tipos de tarefas: tarefas drag and drop não temporizadas, tarefas de priming sintático e tarefas de juízos de aceitabilidade rápidos. Em conjunto, os resultados destas tarefas confirmam que, como predito pela HI, as propriedades da ISV que são puramente (léxico-)sintáticas não são problemáticas no estádio final da aquisição de L2, mas aquelas que envolvem a interface entre sintaxe e discurso são um locus de opcionalidade permanente, mesmo quando a L1 é semelhante à L2. Os resultados são, além disso, consistentes com a proposta da HI de que a opcionalidade encontrada na interface sintaxe-discurso é causada (principalmente) por ineficiências de processamento associadas ao bilinguismo. 
Além de apresentar nova evidência experimental a favor da HI, este estudo mostra que o grau de opcionalidade que os falantes de L2 exibem na interface sintaxe-discurso é moderado pelas seguintes variáveis, que não têm sido (suficientemente) consideradas na literatura sobre a HI: (i) a frequência da construção na língua alvo (construção muito rara → mais opcionalidade), (ii) a quantidade e/ou distância das informações contextuais que o falante precisa processar (muitas informações contextuais no contexto inter-frásico → mais opcionalidade), (iii) o nível de proficiência na L2 (menor nível de proficiência → mais opcionalidade), e (iv) a (dis)semelhança entre a L1 e a L2 (L1 ≠ L2 → mais opcionalidade). O estudo de intervenção didática compreende um pré-teste e dois pós-testes após a intervenção e testa os participantes através de tarefas de juízos de aceitabilidade rápidos. Este estudo mostra que o ensino explícito da gramática pode resultar em ganhos duradouros para os aprendentes de L2, mas a sua eficácia é moderada por dois fatores: (i) o tipo de domínio(s) linguístico(s) em que propriedade alvo se situa e (ii) o grau de developmental readiness dos aprendentes para adquirirem a propriedade alvo. Em relação ao fator (i), os resultados deste estudo indicam que a área que constitui um locus de opcionalidade permanente na aquisição de L2 – a interface sintaxe-discurso – é muito menos permeável a efeitos de ensino do que a sintaxe “pura”. Em relação ao fator (ii), os resultados sugerem que o ensino explícito facilita a aquisição de L2 apenas quando os aprendentes atingiram um estádio de desenvolvimento em que já lhes é possível adquirir a propriedade alvo. Como estes resultados são relevantes não só para a teoria de aquisição de L2, mas também para o ensino de L2, a tese inclui uma análise da relevância e potenciais implicações dos seus resultados para o ensino da gramática em L2.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Dorr, Bonnie J. "A Lexical Conceptual Approach to Generation for Machine Translation". 1988. http://hdl.handle.net/1721.1/6482.

Texto completo
Resumen
Current approaches to generation for machine translation make use of direct-replacement templates, large grammars, and knowledge-based inferencing techniques. Not only are rules language-specific, but they are too simplistic to handle sentences that exhibit more complex phenomena. Furthermore, these systems are not easily extendable to other languages because the rules that map the internal representation to the surface form are entirely dependent on both the domain of the system and the language being generated. Finally, an adequate interlingual representation has not yet been discovered; thus, knowledge-based inferencing is necessary and syntactic cross-linguistic generalization cannot be exploited. This report introduces a plan for the development of a theoretically based computational scheme of natural language generation for a translation system. The emphasis of the project is the mapping from the lexical conceptual structure of sentences to an underlying or "base" syntactic structure called deep structure. This approach tackles the problems of thematic and structural divergence, i.e., it allows generation of target-language sentences that are not thematically or structurally equivalent to their conceptually equivalent source-language counterparts. Two other, more secondary tasks, construction of a dictionary and mapping from deep structure to surface structure, will also be discussed. The generator operates on a constrained grammatical theory rather than on a set of surface-level transformations. If the endeavor succeeds, there will no longer be a need for large, detailed grammars; general knowledge-based inferencing will not be necessary; lexical selection and syntactic realization will be facilitated; and the model will be general enough for extension to other languages.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Rao, Leela A. "Verbal fluency as a measure of lexico-semantic access and cognitive control in bilingual aphasia". Thesis, 2018. https://hdl.handle.net/2144/31113.

Texto completo
Resumen
The research on bilingual language processing explores two main avenues of relevance to the present study: lexico-semantic access and cognitive control. Lexico-semantic access research investigates the manner in which bilingual individuals retrieve single words from their lexical system. Healthy bilingual individuals can manipulate their lexico-semantic access to accommodate settings in which code- or language-switching is expected. Alternatively, they can manipulate their lexico-semantic access to speak only their first (L1) or second (L2) languages. Cognitive control, also known as executive functioning, is closely related to lexico-semantic access. Specifically, bilingual individuals maintain and switch between their languages through a mechanism known as cognitive control. Both cognitive control and lexico-semantic access are important for language processing in healthy bilingual individuals as well as bilingual persons with aphasia (BPWA). However, the extent to which BPWA utilize each of these processes in the production of single words is still unknown. The present study used a method of verbal fluency in the form of a novel modified category generation task to assess the relative contributions of lexico-semantic access and cognitive control in bilingual healthy controls and BPWA.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Schwanhäußer, Barbara, University of Western Sydney, College of Arts y MARCS Auditory Laboratories. "Lexical tone perception and production : the role of language and musical background". 2007. http://handle.uws.edu.au:8081/1959.7/31791.

Texto completo
Resumen
This thesis is concerned with the perception and production of lexical tone. In the first experiment, categorical perception of asymmetric synthetic tone continua was examined in speakers of tonal (Thai, Mandarin, and Vietnamese) and non-tonal (Australian English) languages. It was observed that perceptual strategies for categorisation depend on language background. Specifically, Mandarin and Vietnamese listeners tended to use the central tone to divide the continuum, whereas Thai and Australian English listeners used a flat no-contour tone as a perceptual anchor; a split based not on tonal vs. non-tonal language background, but rather on the specific language. In the second experiment, tonal (Thai) and non-tonal (Australian English) language speaking musicians and non-musicians were tested on categorical perception of two differently shaped synthetic tone continua. Results showed that, independently of language background, musicians learn to identify tones more quickly, show steeper identification functions, and display higher discrimination accuracy than non-musicians. Experiment three concerns the influence of language aptitude, musical aptitude, musical memory, and musical training on Australian English speakers' perception and production of non-native (Thai) tones, consonants, and vowels. The results showed that musicians were better than non-musicians at perceiving and producing tones and consonants; a ceiling effect was observed for vowel perception. Musical training per se did not determine acquisition of novel speech sounds, rather, musicians' higher accuracy was explained by a combination of inherent abilities - language and musical aptitude for consonants, and musical aptitude and musical memory for tones. It is concluded that tone perception is language dependent and strongly influenced by musical expertise - musical aptitude and musical memory, not musical training as such.
Doctor of Philosophy (PhD)
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Chang, Ren-Fen y 張仁芬. "Lexical Selection and Sentence Generation in an English-Chinese Machine Translation System: A Corpus-Based Approach". Thesis, 1994. http://ndltd.ncl.edu.tw/handle/48999564144185957888.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

McKinney, Kellin Lee. "Lexical errors produced during category generation tasks by bilingual adults and bilingual typically developing and language-impaired seven to nine-year-old children". Thesis, 2009. http://hdl.handle.net/2152/ETD-UT-2009-12-562.

Texto completo
Resumen
The development of category knowledge is in part a function of one's experiences with the world. The types of errors produced during category generation tasks may reveal the boundaries of these experiences and the ways in which they are organized into lexical networks. Examining the errors made by bilingual children with and without language impairment (LI) and bilingual adults may help to distinguish the effects of ability versus experience on the development and organization of lexical-semantic categories. The purpose of this study was to examine the types of errors made by bilingual (Spanish-English) children with (n=37) and without (n=35) LI and bilingual adults (n=26) on category generation tasks in both their languages and at two category levels: taxonomic and slot-filler. Results revealed a main effect for level (taxonomic vs. slot-filler) and error type (semantic vs. other) and suggest that bilingual seven to nine-year-old children's and adults' proportions and types of errors produced on category generation tasks differ significantly based on ability (i.e., TD or LI) but not on experience (i.e., TD or Adults).
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Bílka, Ondřej. "Pattern matching in compilers". Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-305136.

Texto completo
Resumen
Title: Pattern matching in compilers Author: Ondřej Bílka Department: Department of Applied Mathematics Supervisor: Jan Hubička, Department of Applied Mathematics Abstract: In this thesis we develop tools for effective and flexible pattern matching. We introduce a new pattern matching system called amethyst. Amethyst is not only a generator of parsers of programming languages, but can also serve as an alternative to tools for matching regular expressions. Our framework also produces dynamic parsers. Its intended use is in the context of an IDE (accurate syntax highlighting and error detection on the fly). Amethyst offers pattern matching of general data structures. This makes it a useful tool for implementing compiler optimizations such as constant folding, instruction scheduling, and dataflow analysis in general. The parsers produced are essentially top-down parsers. Linear time complexity is obtained by introducing the novel notion of structured grammars and regularized regular expressions. Amethyst uses techniques known from compiler optimizations to produce effective parsers. Keywords: Packrat parsing, dynamic parsing, structured grammars, functional programming
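The packrat parsing mentioned in the keywords rests on memoizing every (rule, position) result, which is what yields the linear-time guarantee. A minimal Python sketch of that idea for two invented PEG rules is given below; it is not amethyst's implementation.

from functools import lru_cache

# Minimal packrat-style matcher: each (rule, position) pair is memoized, which
# is what gives packrat parsing its linear-time guarantee. The two toy rules
# below are invented for illustration; this is not the amethyst grammar.
TEXT = "aaab"

@lru_cache(maxsize=None)
def match(rule, pos):
    """Return the end position of a match for `rule` at `pos`, or None."""
    if rule == "A":                       # A <- 'a' A / 'b'
        if pos < len(TEXT) and TEXT[pos] == "a":
            end = match("A", pos + 1)
            if end is not None:
                return end
        if pos < len(TEXT) and TEXT[pos] == "b":
            return pos + 1
        return None
    if rule == "S":                       # S <- A, then require end of input
        end = match("A", pos)
        return end if end == len(TEXT) else None
    raise ValueError(f"unknown rule {rule!r}")

print(match("S", 0))                      # 4: the whole input matches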
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Goláňová, Hana. "Nářeční slovník jihozápadního Vsetínska". Doctoral thesis, 2013. http://www.nusl.cz/ntk/nusl-322634.

Texto completo
Resumen
There are only a few dialect regions within the Czech language territory where systematic dialect research can still be carried out, since the traditional territorial dialects are irreversibly disappearing. In the northern part of the East-Moravian dialect region lies the Valachian region, where the dialect is still kept alive, and where the lexicon collected in the Dialect Dictionary of the Southwest Part of the Vsetín Region (hereinafter DDSV) originates. During the field survey I managed to collect rich lexical material, on the basis of which the DDSV, consisting of 2,027 basic and 427 referential lexical units, was created. The DDSV is a local, alphabetical and differential dictionary. The aim of this work is mainly to capture and elaborate the lexicon collected in the DDSV on the basis of contemporary lexicographical methods. A differential approach was used in selecting the phraseology, meaning that the collected lexical material was thoroughly compared with Czech explanatory dictionaries. The dialect research for the DDSV was carried out among members of the oldest generation (over 60 years of age). Among the youngest generation (under 30 years), a survey was conducted for an inter-generational comparison of the lexicon of the semantic field of meals, drinks and smoking, focused on the sub-groups poléfka, máčka, kaša, placka...
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Lambrey, Florie. "Implémentation des collocations pour la réalisation de texte multilingue". Thèse, 2016. http://hdl.handle.net/1866/18769.

Texto completo
Resumen
La génération automatique de texte (GAT) produit du texte en langue naturelle destiné aux humains à partir de données non langagières. L’objectif de la GAT est de concevoir des générateurs réutilisables d’une langue à l’autre et d’une application à l’autre. Pour ce faire, l’architecture des générateurs automatiques de texte est modulaire : on distingue entre la génération profonde qui détermine le contenu du message à exprimer et la réalisation linguistique qui génère les unités et structures linguistiques exprimant le message. La réalisation linguistique multilingue nécessite de modéliser les principaux phénomènes linguistiques de la manière la plus générique possible. Or, les collocations représentent un de ces principaux phénomènes linguistiques et demeurent problématiques en GAT, mais aussi pour le Traitement Automatique des Langues en général. La Théorie Sens-Texte analyse les collocations comme des contraintes de sélection lexicale. Autrement dit, une collocation est composée de trois éléments : (i) la base, (ii) le collocatif, choisi en fonction de la base et (iii) d’une relation sémantico-lexicale. Il existe des relations sémantico-lexicales récurrentes et systématiques. Les fonctions lexicales modélisent ces relations. En effet, des collocations telles que peur bleue ou pluie torrentielle instancient une même relation, l’intensification, que l’on peut décrire au moyen de la fonction lexicale Magn : Magn(PEUR) = BLEUE, Magn(PLUIE) = TORRENTIELLE, etc. Il existe des centaines de fonctions lexicales. Ce mémoire présente la méthodologie d’implémentation des collocations dans un réalisateur de texte multilingue, GÉCO, à l’aide des fonctions lexicales standard syntagmatiques simples et complexes. Le cœur de la méthodologie repose sur le regroupement des fonctions lexicales ayant un fonctionnement similaire dans des patrons génériques. Au total, plus de 26 000 fonctions lexicales ont été implémentées, représentant de ce fait une avancée considérable pour le traitement des collocations en réalisation de texte multilingue.
Natural Language Generation (NLG) produces text in natural language from non-linguistic content. NLG aims at developing generators that are reusable across languages and applications. In order to do so, these systems’ architecture is modular: while the deep generation module determines the content of the message to be expressed, the text realization module maps the message into its most appropriate linguistic form. Multilingual text realization requires modelling the core linguistic phenomena that one finds in language. Collocations represent one of the core linguistic phenomena that remain problematic not only in NLG, but also in Natural Language Processing in general. The Meaning-Text theory analyses collocations as constraints on lexical selection. In other words, a collocation is made up of three constituents: (i) the base, (ii) the collocate, chosen according to (iii) a semantico-lexical relation. Some of these semantico-lexical relations are systematic and shared by many collocations. Lexical functions are a system for modeling these relations. In fact, collocations such as heavy rain or strong preference instantiate the same relation, intensity, which can be described with the lexical function Magn: Magn(RAIN) = HEAVY, Magn(PREFERENCE) = STRONG, etc. There are hundreds of lexical functions. Our work presents a methodology for the implementation of collocations in a multilingual text realization engine, GÉCO, that relies on simple and complex syntagmatic standard lexical functions. The principal aspect of the methodology consists of regrouping lexical functions that show a similar behavior into generic patterns. As a result, 26 000 lexical functions have been implemented, which represents considerable progress in the treatment of collocations in multilingual text realization.
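A lexical function such as Magn can be pictured as a two-argument table from (function, base) to collocate, which is one simple way a realizer could look collocations up. The minimal sketch below uses the Magn values cited in the abstract plus one invented support-verb entry; a system like GÉCO encodes many thousands of such values rather than a hand-written table.

# A lexical function maps a base lexeme to its collocate for a given
# semantico-lexical relation. The Magn values for RAIN and PREFERENCE come
# from the abstract; the Oper1 entry is an invented illustration.
LEXICAL_FUNCTIONS = {
    ("Magn", "RAIN"): "HEAVY",
    ("Magn", "PREFERENCE"): "STRONG",
    ("Oper1", "DECISION"): "MAKE",     # support-verb collocate (illustrative)
}

def apply_lf(function, base):
    """Return the collocate realizing `function` applied to `base`, if known."""
    try:
        return LEXICAL_FUNCTIONS[(function, base)]
    except KeyError:
        raise KeyError(f"no value recorded for {function}({base})") from None

print(apply_lf("Magn", "RAIN"))          # HEAVY -> 'heavy rain'
print(apply_lf("Magn", "PREFERENCE"))    # STRONG -> 'strong preference'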
Los estilos APA, Harvard, Vancouver, ISO, etc.