
Dissertations / Theses on the topic 'Natural language texts'



Consult the top 50 dissertations / theses for your research on the topic 'Natural language texts.'




1

Eliasson, Christopher. "Natural Language Generation for descriptive texts in interactive games." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5651.

Abstract:
Context. Game development is a costly process, and with today's advanced hardware customers are asking for more playable content at higher quality. For many years such content has been provided procedurally for level creation, modeling, and animation. However, some games require content in other forms, such as executable quests that progress the game forward. Quests have been procedurally generated to some extent, but not in enough detail to be usable for game development without a handwritten description of the quest. Objectives. In this study we combine a procedural content generation structure for quests with a natural language generation approach to generate a descriptive summarized text for each quest, and examine whether the resulting texts are viable as quest prototypes for use in game development. Methods. A number of articles in the area of natural language generation are used to determine an appropriate way of validating the generated texts, leading to the conclusion that a user case study evaluating each text against a set of statements is appropriate. Results. 30 texts were generated and evaluated from ten different quest structures, and the majority of the texts were found to be good enough to be used for game development purposes. Conclusions. We conclude that quests can be procedurally generated in more detail by incorporating natural language generation. However, the quest structure used for this study needs to be expanded in more detail at certain structure components in order to fully support an automated system in a flexible manner. Furthermore, because semantics and grammar are key components in the flow and usability of a text, a more sophisticated system needs to be implemented using more advanced natural language generation techniques.
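The abstract above pairs a quest structure with surface text generation. Below is a minimal, hypothetical sketch of template-based realisation of a quest description in Python; the quest fields and templates are invented for this illustration and are not taken from the thesis.

```python
import random

# Hypothetical quest structure; the component names are illustrative only.
quest = {
    "giver": "the village elder",
    "action": "retrieve",
    "object": "a stolen amulet",
    "location": "the northern ruins",
    "motivation": "to lift the curse on the village",
}

# A few surface templates; a fuller NLG pipeline would add content selection,
# aggregation and referring-expression generation around this step.
TEMPLATES = [
    "{giver_cap} asks you to {action} {object} from {location} {motivation}.",
    "Travel to {location} and {action} {object}; {giver_cap} needs it {motivation}.",
]

def realise(q):
    """Fill a randomly chosen template with the quest components."""
    slots = {k: v for k, v in q.items() if k != "giver"}
    slots["giver_cap"] = q["giver"].capitalize()
    return random.choice(TEMPLATES).format(**slots)

print(realise(quest))
```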
2

Chen, Michelle W. M. Eng Massachusetts Institute of Technology. "Comparison of natural language processing algorithms for medical texts." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100298.

Abstract:
With the large corpora of clinical texts now available, natural language processing (NLP) is increasingly being explored as a way to extract useful patient information. NLP applications in clinical medicine are especially important in domains where the clinical observations are crucial to defining and diagnosing the disease. A variety of systems attempt to match words and word phrases to medical terminologies. Because of differences in annotation datasets and the lack of common conventions, many of these systems yield conflicting results. The purpose of this thesis project is (1) to create a visual representation of how different concepts compare to each other when using various annotators and (2) to improve upon the NLP methods to yield terms with better fidelity to what the clinicians are trying to express.
3

Marcu, Daniel. "The rhetorical parsing, summarization, and generation of natural language texts." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35238.pdf.

4

Cramer, Marcos [Verfasser]. "Proof-checking mathematical texts in controlled natural language / Marcos Cramer." Bonn : Universitäts- und Landesbibliothek Bonn, 2013. http://d-nb.info/1045276626/34.

5

Avenberg, Anna. "Automatic language identification of short texts." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421032.

Abstract:
The world is growing more connected through the use of online communication, exposing software and humans to all the world's languages. While devices are able to understand and share the raw data between themselves and with humans, the information itself is not expressed in a monolithic format. This causes issues both in the human to computer interaction and human to human communication. Automatic language identification (LID) is a field within artificial intelligence and natural language processing that strives to solve a part of these issues by identifying languages from text, sign language and speech. One of the challenges is to identify the short pieces of text that can be found online, such as messages, comments and posts on social media. This is due to the small amount of information they carry. The goal of this thesis has been to build a machine learning model that can identify the language for these short pieces of text. A long short-term memory (LSTM) machine learning model was built and benchmarked towards Facebook's fastText model. The results show how the LSTM model reached an accuracy of around 95% and the fastText model used as comparison reached an accuracy of 97%. The LSTM model struggled more when identifying texts shorter than 50 characters than with longer text. The classification performance of the LSTM model was also relatively poor in cases where languages were similar, like Croatian and Serbian. Both the LSTM model and the fastText model reached accuracy's above 94% which can be considered high, depending on how it is evaluated. There are however many improvements and possible future work to be considered; looking further into texts shorter than 50 characters, evaluating the model's softmax output vector values and how to handle similar languages.
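As a point of reference for the task described above, here is a minimal character n-gram baseline for language identification of short texts, sketched with scikit-learn. It is not the LSTM or fastText model evaluated in the thesis, and the tiny training set is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["hello, how are you today?", "this is a short message",
               "hej, hur mår du idag?", "det här är ett kort meddelande",
               "hallo, wie geht es dir heute?", "das ist eine kurze nachricht"]
train_langs = ["en", "en", "sv", "sv", "de", "de"]

# Character n-grams (2-4) are robust for short, noisy inputs where word-level
# features are too sparse.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_langs)

# Likely ['sv' 'de'] given the character overlap with the training examples.
print(model.predict(["hur är läget?", "wie spät ist es?"]))
```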
6

Wong, Ping-wai. "Semantic annotation of Chinese texts with message structures based on HowNet." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38212389.

7

Antici, Francesco. "Advanced techniques for cross-language annotation projection in legal texts." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23884/.

Abstract:
Nowadays, the majority of the services we benefit from are provided online, and their use is regulated by the users' acceptance of the terms of service. All our data are handled in accordance with the clauses of such documents, and all our behaviours must comply with them. It would therefore be very useful to find automated techniques to ensure the fairness of these documents or to inform users about possible threats. The focus of this work is to create resources aimed at the development of such tools in languages other than English, which may lack linguistic resources and annotated corpora. The enormous breakthroughs of recent years in Natural Language Processing techniques have made it possible to create such tools through automated and unsupervised processes. One of the means to achieve that is annotation projection between two parallel corpora. The difficulty and cost of creating ad hoc resources for every language make it necessary to find another way of achieving this goal. This work investigates a cross-language annotation projection technique based on sentence embeddings and similarity metrics to find matches between sentences. Several combinations of methods and algorithms are compared, including monolingual and multilingual neural embedding models. The experiments are conducted on two datasets, where the reference language is always English and the projections are evaluated on Italian, German and Polish. The results obtained provide a robust and reliable technique for the task and a good starting point for building multilingual tools.
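The core matching step described above can be sketched roughly as follows: embed the sentences of both sides with a multilingual sentence encoder and project labels across the most similar pairs. The sentence-transformers package, the checkpoint name, the clause labels and the 0.7 threshold are all assumptions of this sketch, not choices documented in the thesis.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

en_sentences = ["The provider may terminate the service at any time.",
                "Users retain ownership of the content they upload."]
en_labels = ["unilateral_termination", "content_ownership"]   # hypothetical clause labels
it_sentences = ["Il fornitore può interrompere il servizio in qualsiasi momento.",
                "Gli utenti mantengono la proprietà dei contenuti caricati."]

# Multilingual encoder (assumed checkpoint); English and Italian sentences are
# mapped into the same vector space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
en_vecs = model.encode(en_sentences)
it_vecs = model.encode(it_sentences)

sim = cosine_similarity(it_vecs, en_vecs)
for i, row in enumerate(sim):
    j = int(np.argmax(row))      # closest English sentence
    if row[j] > 0.7:             # similarity threshold, tunable
        print(it_sentences[i], "->", en_labels[j])
```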
8

Gaona, Miguel Angel Rios. "Methods for measuring semantic similarity of texts." Thesis, University of Wolverhampton, 2014. http://hdl.handle.net/2436/346894.

Abstract:
Measuring semantic similarity is a task needed in many Natural Language Processing (NLP) applications. For example, in Machine Translation evaluation, semantic similarity is used to assess the quality of the machine translation output by measuring the degree of equivalence between a reference translation and the machine translation output. The problem of semantic similarity (Corley and Mihalcea, 2005) is defined as measuring and recognising semantic relations between two texts. Semantic similarity covers different types of semantic relations, mainly bidirectional and directional. This thesis proposes new methods to address the limitations of existing work on both types of semantic relations. Recognising Textual Entailment (RTE) is a directional relation where a text T entails the hypothesis H (entailment pair) if the meaning of H can be inferred from the meaning of T (Dagan and Glickman, 2005; Dagan et al., 2013). Most RTE methods rely on machine learning algorithms. de Marneffe et al. (2006) propose a multi-stage architecture where a first stage determines an alignment between the T-H pairs, to be followed by an entailment decision stage. A limitation of such approaches is that instead of recognising a non-entailment, an alignment that fits an optimisation criterion will be returned, but the alignment by itself is a poor predictor of non-entailment. We propose an RTE method following a multi-stage architecture, where both stages are based on semantic representations. Furthermore, instead of using simple similarity metrics to predict the entailment decision, we use a Markov Logic Network (MLN). The MLN is based on rich relational features extracted from the output of the predicate-argument alignment structures between T-H pairs. This MLN learns to reward pairs with similar predicates and similar arguments, and penalise pairs otherwise. The proposed methods show promising results. A source of errors was found to be the alignment step, which has low coverage. However, we show that when an alignment is found, the relational features improve the final entailment decision. The task of Semantic Textual Similarity (STS) (Agirre et al., 2012) is defined as measuring the degree of bidirectional semantic equivalence between a pair of texts. The STS evaluation campaigns use datasets that consist of pairs of texts from NLP tasks such as Paraphrasing and Machine Translation evaluation. Methods for STS are commonly based on computing similarity metrics between the pair of sentences, where the similarity scores are used as features to train regression algorithms. Existing methods for STS achieve high performance over certain tasks, but poor results over others, particularly on unknown (surprise) tasks. Our solution to alleviate this unbalanced performance is to model STS in the context of Multi-task Learning using Gaussian Processes (MTL-GP) (Álvarez et al., 2012) and state-of-the-art STS features (Šarić et al., 2012). We show that the MTL-GP outperforms previous work on the same datasets.
9

Wong, Ping-wai, and 黃炳蔚. "Semantic annotation of Chinese texts with message structures based on HowNet." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38212389.

10

Frunza, Oana Magdalena. "Personalized Medicine through Automatic Extraction of Information from Medical Texts." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22724.

Abstract:
The wealth of medical-related information available today gives rise to a multidimensional source of knowledge. Research discoveries published in prestigious venues, electronic health record data, discharge summaries, clinical notes, etc., all represent important medical information that can assist in the medical decision-making process. The challenge that comes with accessing and using such vast and diverse sources of data lies in the ability to distil and extract reliable and relevant information. Computer-based tools that use natural language processing and machine learning techniques have proven helpful in addressing such challenges. The current work proposes automatic, reliable solutions for tasks that can help achieve personalized medicine, a medical practice that brings together general medical knowledge and case-specific medical information. Phenotypic medical observations, along with data coming from test results, are not enough when assessing and treating a medical case. Genetic, lifestyle, background and environmental data also need to be taken into account in the medical decision process. This thesis's goal is to prove that natural language processing and machine learning techniques represent reliable solutions for solving important medical-related problems. Of the numerous research problems that need to be answered when implementing personalized medicine, the scope of this thesis is restricted to four, as follows: 1. Automatic identification of obesity-related diseases by using only textual clinical data; 2. Automatic identification of relevant abstracts of published research to be used for building systematic reviews; 3. Automatic identification of gene functions based on textual data of published medical abstracts; 4. Automatic identification and classification of important medical relations between medical concepts in clinical and technical data. This investigation into automatic solutions for achieving personalized medicine through information identification and extraction focused on individual problems that can later be linked together in a puzzle-building manner. A diverse representation technique that follows a divide-and-conquer methodological approach proves to be the most reliable solution for building automatic models that solve the above-mentioned tasks. The methodologies that I propose are supported by in-depth research experiments and thorough discussions and conclusions.
11

Moncla, Ludovic. "Automatic Reconstruction of Itineraries from Descriptive Texts." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3029/document.

Abstract:
This PhD thesis is part of the research project PERDIDO, which aims at extracting and retrieving displacements from textual documents. This work was conducted in collaboration with the LIUPPA laboratory of the university of Pau (France), the IAAA team of the university of Zaragoza (Spain) and the COGIT laboratory of IGN (France). The objective of this PhD is to propose a method for establishing a processing chain to support the geoparsing and geocoding of text documents describing events strongly linked with space. We propose an approach for the automatic geocoding of itineraries described in natural language. Our proposal is divided into two main tasks. The first task aims at identifying and extracting information describing the itinerary in texts, such as spatial named entities and expressions of displacement or perception. The second task deals with the reconstruction of the itinerary. Our proposal combines local information extracted using natural language processing and physical features extracted from external geographical sources such as gazetteers or datasets providing digital elevation models. The geoparsing part is a Natural Language Processing approach which combines the use of part-of-speech tagging and syntactico-semantic combined patterns (cascades of transducers) for the annotation of spatial named entities and expressions of displacement or perception. The main contribution in the first task of our approach is toponym disambiguation, which represents an important issue in Geographical Information Retrieval (GIR). We propose an unsupervised geocoding algorithm that takes advantage of clustering techniques to provide a solution for disambiguating the toponyms found in gazetteers, and at the same time estimating the spatial footprint of those other fine-grained toponyms not found in gazetteers. We propose a generic graph-based model for the automatic reconstruction of itineraries from texts, where each vertex represents a location and each edge represents a path between locations. Our model is original in that, in addition to taking into account the classic elements (paths and waypoints), it allows the representation of the other elements describing an itinerary, such as features seen or mentioned as landmarks. To build this graph-based representation of the itinerary automatically, our approach computes an informed spanning tree on a weighted graph. Each edge of the initial graph is weighted using a multi-criteria analysis approach combining qualitative and quantitative criteria. Criteria are based on information extracted from the text and information extracted from geographical sources. For instance, we compare information given in the text, such as spatial relations describing orientation (e.g., going south), with the geographical coordinates of locations found in gazetteers.
Finally, according to the definition of an itinerary and the information used in natural language to describe itineraries, we propose a markup language for encoding spatial and motion information based on the Text Encoding and Interchange guidelines (TEI), which define a standard for the representation of texts in digital form. Additionally, the rationale of the proposed approach has been verified with a set of experiments on a corpus of multilingual hiking descriptions (French, Spanish and Italian).
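To make the graph-based reconstruction step above concrete, here is a toy sketch using networkx: locations become vertices, candidate links become weighted edges, and a spanning tree of minimum total weight keeps the most plausible connections. The place names and weights are invented; in the thesis the weights come from a multi-criteria analysis of textual and geographical evidence, and the tree is an informed spanning tree rather than this plain minimum spanning tree.

```python
import networkx as nx

G = nx.Graph()
# weight = cost of linking two places (lower = more plausible, e.g. consistent
# with a stated direction of travel and with gazetteer coordinates)
edges = [
    ("Pau", "Lescar", 0.2),
    ("Pau", "Gan", 0.4),
    ("Lescar", "Gan", 0.9),
    ("Gan", "Rébénacq", 0.3),
    ("Lescar", "Rébénacq", 0.8),
]
G.add_weighted_edges_from(edges)

# Keep the cheapest set of edges that still connects every location.
itinerary = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(itinerary.edges(data="weight")))
```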
12

Dail, Mathias. "Clustering unstructured life sciences experiments with unsupervised machine learning : Natural language processing for unstructured life sciences texts." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-265549.

Abstract:
The purpose of this master's thesis is to analyse different types of document representations in the context of improving, in an unsupervised manner, the searchability of unstructured textual life sciences experiments by clustering similar experiments together. The challenge is to produce, analyse and compare different representations of the life sciences data using traditional and advanced unsupervised machine learning models. The text data analysed in this work is noisy and very heterogeneous, as it comes from a real-world Electronic Lab Notebook. Clustering unstructured and unlabeled text experiments is challenging: it requires the creation of representations based only on the relevant information existing in an experiment. This work studies statistical and generative techniques, word embeddings and some of the most recent deep learning models in Natural Language Processing to create the various representations of the studied data. It explores the possibility of combining multiple techniques and using external life sciences knowledge bases to create richer representations before applying clustering algorithms. Different types of analysis are performed, including an assessment done by experts, to evaluate and compare the scientific relevance of the clusters of experiments created by the different data representations. The results show that traditional statistical techniques can still produce good baselines. Modern deep learning techniques have been shown to model the studied data well and create rich representations. Combining multiple techniques with external knowledge (biomedical and life-science-related ontologies) has been shown to produce the best results in grouping similar relevant experiments together. The techniques studied model different and complementary aspects of a text; combining them is therefore key to significantly improving the clustering of unstructured data.
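The clustering setup itself can be sketched in a few lines: embed the experiment texts (here with plain TF-IDF rather than the richer representations studied in the thesis) and group them with k-means. The example experiment descriptions are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

experiments = [
    "Western blot of protein X after 24h incubation",
    "Protein X expression measured by western blot",
    "HPLC purity analysis of compound 42",
    "Compound 42 purity checked with HPLC gradient method",
]

# TF-IDF document vectors, then k-means into two clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(experiments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(experiments, labels):
    print(label, text)   # similar experiments should share a cluster id
```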
13

Lenas, Erik. "Prerequisites for Extracting Entity Relations from Swedish Texts." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281275.

Abstract:
Natural language processing (NLP) is a vibrant area of research with many practical applications today, such as sentiment analysis, text labeling, question answering, machine translation and automatic text summarization. At the moment, research is mainly focused on the English language, although many other languages are trying to catch up. This work focuses on an area within NLP called information extraction, and more specifically on relation extraction, that is, extracting relations between entities in a text. The aim of this work is to use machine learning techniques to build a Swedish language processing pipeline with part-of-speech tagging, dependency parsing, named entity recognition and coreference resolution, to use as a base for later relation extraction from archival texts. The obvious difficulty lies in the scarcity of annotated Swedish datasets. For example, no sufficiently large Swedish dataset for coreference resolution exists today. An important part of this work, therefore, is to create a Swedish coreference solver using distantly supervised machine learning, which means creating a Swedish dataset by applying an English coreference solver to an unannotated bilingual corpus, then using a word aligner to translate this machine-annotated English dataset into a Swedish dataset, and finally training a Swedish model on this dataset. Using AllenNLP's end-to-end coreference resolution model, both for creating the Swedish dataset and for training the Swedish model, this work achieves an F1-score of 0.5. For named entity recognition this work uses the Swedish BERT models released by the Royal Library of Sweden in February 2020 and achieves an overall F1-score of 0.95. To put all of these NLP models within a single language processing pipeline, spaCy is used as a unifying framework.
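The projection idea at the heart of the distantly supervised approach can be illustrated schematically: token-level mention labels on an English sentence are copied onto the Swedish sentence through a word alignment. The tokens, labels and alignment below are hand-made toy data; in the thesis the English side is annotated automatically and the alignment is produced by a word aligner.

```python
en_tokens = ["Anna", "said", "she", "was", "tired"]
sv_tokens = ["Anna", "sa", "att", "hon", "var", "trött"]

# Mention labels on the English side: token index -> coreference cluster id
en_mentions = {0: "cluster_1", 2: "cluster_1"}   # "Anna" and "she" corefer

# Word alignment as (english_index, swedish_index) pairs
alignment = [(0, 0), (1, 1), (2, 3), (3, 4), (4, 5)]

# Copy each labeled English token's cluster id to its aligned Swedish token.
sv_mentions = {}
for en_i, sv_i in alignment:
    if en_i in en_mentions:
        sv_mentions[sv_i] = en_mentions[en_i]

for i, tok in enumerate(sv_tokens):
    print(tok, sv_mentions.get(i, "-"))
```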
14

Kärde, Wilhelm. "Tool for linguistic quality evaluation of student texts." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186434.

Abstract:
Spell checkers are nowadays a common feature of most editors, so a student writing an essay in school will often have a spell checker available. However, the feedback from a spell checker seldom correlates with the feedback from a teacher. One reason is that the teacher evaluates a text on more aspects: as opposed to the spell checker, the teacher will judge a text on aspects such as genre adaptation, structure and word variation. This thesis evaluates how well those aspects translate to NLP (Natural Language Processing) and implements those that translate well in a rule-based solution called Granska.
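One of the aspects named above, word variation, can be approximated by a very simple indicator such as the type-token ratio; the sketch below is only an illustration and not the rules actually implemented in Granska.

```python
import re

def type_token_ratio(text: str) -> float:
    """Share of distinct word forms among all word tokens."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

varied = "The storm tore roofs from houses while frightened villagers hid in cellars."
repetitive = "The dog is a good dog and the dog likes the dog park."

print(round(type_token_ratio(varied), 2))      # higher ratio: more word variation
print(round(type_token_ratio(repetitive), 2))  # lower ratio: more repetition
```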
15

Liebscher, Robert Aubrey. "Temporal, categorical, and bibliographical context of scientific texts : interactions and applications /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3207704.

16

Tabassum, Binte Jafar Jeniya. "Information Extraction From User Generated Noisy Texts." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1606315356821532.

17

AMENDUNI, FRANCESCA. "Defining and Assessing Critical Thinking: toward an automatic analysis of HiEd students’ written texts." Doctoral thesis, Università di Foggia, 2021. https://hdl.handle.net/11369/425168.

Abstract:
The main goal of this PhD thesis is to test, through two empirical studies, the reliability of a method aimed at automatically assessing Critical Thinking (CT) manifestations in Higher Education students' written texts. The empirical studies were based on a critical review aimed at proposing a new classification for systematising different CT definitions and their related theoretical approaches. The review also investigates the relationship between the different adopted CT definitions and CT assessment methods. The review highlights the need to focus on open-ended measures for CT assessment and to develop automatic tools based on Natural Language Processing (NLP) techniques to overcome the current limitations of open-ended measures, such as reliability and scoring costs. Based on a rubric developed and implemented by the Center for Museum Studies – Roma Tre University (CDM) research group for the evaluation and analysis of CT levels within open-ended answers (Poce, 2017), an NLP prototype for the automatic measurement of CT indicators was designed. The first empirical study was carried out on a group of 66 university teachers. The study showed satisfactory reliability levels of the CT evaluation rubric, while the evaluation carried out by the prototype was not yet sufficiently reliable. The results were used to understand how and under what conditions the model works better. The second empirical investigation was aimed at understanding which NLP features are most associated with six CT sub-dimensions as assessed by human raters in essays written in Italian. The study used a corpus of 103 pre-post essays by students who attended a Master's Degree module in "Experimental Education and School Assessment". Within the module, we proposed two activities to stimulate students' CT: Open Educational Resources (OERs) assessment (mandatory and online) and OERs design (optional and blended). The essays were assessed both by expert evaluators, considering six CT sub-dimensions, and by an algorithm that automatically calculates different kinds of NLP features. The study shows positive internal reliability and a medium to high inter-coder agreement. Students' CT levels improved significantly in the post-test. Three NLP indicators correlate significantly with the CT total score: corpus length, syntax complexity, and an adapted measure of term frequency-inverse document frequency (tf-idf). The results collected during this PhD have both theoretical and practical implications for CT research and assessment. From a theoretical perspective, this thesis shows unexplored similarities among different CT traditions, perspectives, and study methods. These similarities could be exploited to open up an interdisciplinary dialogue among experts and build a shared understanding of CT. Automatic assessment methods can enhance the use of open-ended measures for CT assessment, especially in online teaching. Indeed, they can support teachers and researchers in dealing with the growing presence of linguistic data produced within educational platforms. To this end, it is pivotal to develop automatic methods for the evaluation of large amounts of data which would be impossible to analyse manually, providing teachers and evaluators with support for monitoring and assessing the competences students demonstrate online.
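The three indicators mentioned above (corpus length, syntax complexity and a tf-idf-based weight) are the kind of surface features that can be extracted in a few lines of code. In the sketch below the essays are placeholders, and average sentence length is used as a stand-in for syntax complexity; that proxy is an assumption of this sketch, not the measure used in the study.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

essays = [
    "Open educational resources should be assessed for accuracy, clarity and possible bias.",
    "I liked the course. It was nice. The videos were nice too.",
]

def surface_indicators(text):
    tokens = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "corpus_length": len(tokens),                        # total number of word tokens
        "avg_sentence_length": len(tokens) / len(sentences), # crude proxy for syntax complexity
    }

tfidf = TfidfVectorizer().fit_transform(essays).toarray()
for essay, row in zip(essays, tfidf):
    features = surface_indicators(essay)
    features["mean_tfidf"] = row[row > 0].mean()   # mean tf-idf weight of the essay's terms
    print(features)
```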
18

Yang, Seungwon. "Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25111.

Abstract:
Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries and then browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic tagging, which is labor-intensive, computer-based solutions have been developed. This dissertation describes the design and development of a topic identification approach, in this case applied to disaster events. In a sense, this study represents the marriage of research analysis with an engineering effort in that it combines inspiration from Cognitive Informatics with a practical model from Information Retrieval. One of the design constraints, however, is that the Web was used as a universal knowledge source, which was essential in accessing the required information for inferring topics from texts. Retrieving specific information of interest from such a vast information source was achieved by querying a search engine's application programming interface. Specifically, the information gathered was processed mainly by incorporating the Vector Space Model from the Information Retrieval field. As a proof of concept, we subsequently developed and evaluated a prototype tool, Xpantrac, which is able to run in batch mode to automatically process text documents. A user interface for Xpantrac was also constructed to support an interactive semi-automatic topic tagging application, which was subsequently assessed via a usability study. Throughout the design, development, and evaluation of these study components, we detail how the hypotheses and research questions of this dissertation have been supported and answered. We also show that our overarching goal, the identification of topics in a human-comparable way without depending on a large training set or a corpus, has been achieved.
19

Tang, Anfu. "Leveraging linguistic and semantic information for relation extraction from domain-specific texts." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG081.

Abstract:
This thesis aims to extract relations from scientific documents in the biomedical domain, i.e. to transform unstructured texts into structured, machine-readable data. As a task in the domain of Natural Language Processing (NLP), the extraction of semantic relations between textual entities makes explicit and formalizes the underlying structures. Current state-of-the-art methods rely on supervised learning, more specifically the fine-tuning of pre-trained language models such as BERT. Supervised learning requires a large number of examples that are expensive to produce, especially in specific domains such as the biomedical domain. BERT variants such as PubMedBERT have been successful on NLP tasks involving biomedical texts. We hypothesize that injecting external information such as syntactic information or factual knowledge into such BERT variants can compensate for the reduced amount of annotated training data. To this end, this thesis proposes several neural architectures based on PubMedBERT that exploit linguistic information obtained by syntactic parsers or domain knowledge from knowledge bases.
20

Candadai, Vasu Madhavun. "ANSWER : A Cognitively-Inspired System for the Unsupervised Detection of Semantically Salient Words in Texts." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439305439.

21

Karmakar, Saurav. "Syntactic and Semantic Analysis and Visualization of Unstructured English Texts." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/cs_diss/61.

Abstract:
People have complex thoughts, and they often express those thoughts with complex sentences in natural language. This complexity may facilitate efficient communication among an audience with the same knowledge base, but for a different or new audience such compositions become cumbersome to understand and analyze. Analyzing such compositions using syntactic or semantic measures is a challenging job and forms the base step for natural language processing. In this dissertation I explore and propose a number of new techniques to analyze and visualize the syntactic and semantic patterns of unstructured English texts. The syntactic analysis is done through a proposed visualization technique which categorizes and compares different English compositions based on their reading complexity metrics. For the semantic analysis I use Latent Semantic Analysis (LSA) to analyze the hidden patterns in complex compositions; I have used this technique to analyze comments from a social visualization web site and detect irrelevant ones (e.g., spam). The patterns of collaboration are also studied through statistical analysis. Word sense disambiguation is used to identify the correct sense of a word in a sentence or composition. Using textual similarity measures, based on different word similarity measures and word sense disambiguation, on collaborative text snippets from a social collaborative environment reveals a direction for untangling the complex hidden patterns of collaboration.
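As a compact illustration of the LSA step described above, the sketch below builds TF-IDF vectors and reduces them with truncated SVD using scikit-learn; the comments are invented examples, not data from the dissertation.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

comments = [
    "Interesting chart, the trend after 2008 is striking",
    "The 2008 drop in the chart really stands out",
    "Buy cheap watches at my site",
    "Great visualization of the recession trend",
    "Cheap watches and bags, visit my site now",
]

# TF-IDF followed by truncated SVD is the standard LSA construction.
lsa = make_pipeline(TfidfVectorizer(stop_words="english"),
                    TruncatedSVD(n_components=2, random_state=0))
topic_space = lsa.fit_transform(comments)   # each comment as a point in a 2-D latent space

for comment, coords in zip(comments, topic_space):
    # Nearby points share latent topics; spam comments tend to land apart
    # from the on-topic ones.
    print([round(c, 2) for c in coords], comment)
```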
22

Plavin, T. V. "Comparison and Search of Texts Using Vector Space Model." Thesis, Sumy State University, 2016. http://essuir.sumdu.edu.ua/handle/123456789/47132.

Abstract:
The article deals with the issue of copying of information. It outlines a technique for comparing texts and searching for similar texts. The core of this technique is the use of the vector space model. A way of obtaining a quantitative evaluation of the similarity of two texts and of finding matching sentences is presented.
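The vector space model mentioned above can be demonstrated with a bare-bones example: term-frequency vectors and cosine similarity, using only the Python standard library. The sentences are invented.

```python
import math
import re
from collections import Counter

def tf_vector(text: str) -> Counter:
    """Term-frequency vector of a text."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

t1 = "The vector space model represents texts as term vectors."
t2 = "Texts are represented as vectors of terms in the vector space model."
t3 = "Quantum entanglement links the states of distant particles."

print(round(cosine(tf_vector(t1), tf_vector(t2)), 2))  # high: similar texts
print(round(cosine(tf_vector(t1), tf_vector(t3)), 2))  # low: unrelated texts
```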
23

Rodríguez, Penagos Carlos. "Metalinguistic information extraction from specialized texts to enrich computational lexicons." Doctoral thesis, Universitat Pompeu Fabra, 2005. http://hdl.handle.net/10803/7580.

Abstract:
This work presents an empirical study of the use and function of metalanguage in expert scientific knowledge and special-domain languages, with special focus on how each field's terminology is established, modified and negotiated within the group of experts. Through discourse statements called Explicit Metalinguistic Operations, the dynamic nature of conceptual structures and the sublanguages that embody them are formalized and analyzed. The work also presents a system implementation for the automatic extraction of metalinguistic information from specialized texts. The Metalinguistic Operation Processor (MOP) system extracts metalinguistic statements and definitions from special-domain documents, using finite-state machinery and machine-learning algorithms. The system creates semi-structured databases called Metalinguistic Information Databases (MID), useful for specialized lexicography, Natural Language Processing, and the empirical study of scientific knowledge, among other applications.
24

Ilisei, Iustina-Narcisa. "A machine learning approach to the identification of translational language : an inquiry into translationese learning models." Thesis, University of Wolverhampton, 2012. http://hdl.handle.net/2436/299371.

Abstract:
In the world of Descriptive Translation Studies, translationese refers to the specific traits that characterise the language used in translations. While translationese has been often investigated to illustrate that translational language is different from non-translational language, scholars have also proposed a set of hypotheses which may characterise such differences. In the quest for the validation of these hypotheses, embracing corpus-based techniques had a well-known impact in the domain, leading to several advances in the past twenty years. Despite extensive research, however, there are no universally recognised characteristics of translational language, nor universally recognised patterns likely to occur within translational language. This thesis addresses these issues, with a less used approach in the field of Descriptive Translation Studies, by investigating the nature of translational language from a machine learning perspective. While the main focus is on analysing translationese, this thesis investigates two related sub-hypotheses: simplification and explicitation. To this end, a multilingual learning framework is designed and implemented for the identification of translational language. The framework is modelled as a categorisation task, the learning techniques having the major goal to automatically learn to distinguish between translated and non-translated texts. The second and third major goals of this research are the retrieval of the recurring patterns that are revealed in the process of solving the task of categorisation, as well as the ranking of the most influential characteristics used to accomplish the learning task. These aims are fulfilled by implementing a system that adopts the machine learning methodology proposed in this research. The learning framework proves to be an adaptable multilingual framework for the investigation of the nature of translational language, its adaptability being illustrated in this thesis by applying it to the investigation of two languages: Spanish and Romanian. In this thesis, different research scenarios and learning models are experimented with in order to assess to what extent translated texts can be differentiated from non-translated texts in certain contexts. The findings show that machine learning algorithms, aggregating a large set of potentially discriminative characteristics for translational language, are able to differentiate translated texts from non-translated ones with high scores. The evaluation experiments report performance values such as accuracy, precision, recall, and F-measure on two datasets. The present research is situated at the confluence of three areas, more precisely: Descriptive Translation Studies, Machine Learning and Natural Language Processing, justifying the need to combine these fields for the investigation of translationese and translational hypotheses.
25

Jen, Chun-Heng. "Exploring Construction of a Company Domain-Specific Knowledge Graph from Financial Texts Using Hybrid Information Extraction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291107.

Abstract:
Companies do not exist in isolation. They are embedded in structural relationships with each other. Mapping a given company's relationships with other companies in terms of competitors, subsidiaries, suppliers, and customers is key to understanding its major risk factors and opportunities. Conventionally, obtaining and staying up to date with this key knowledge was achieved by having highly skilled specialists, such as financial analysts, read financial news and reports. However, with the development of Natural Language Processing (NLP) and graph databases, it is now possible to systematically extract and store structured information from unstructured data sources. The current go-to method for effective information extraction uses supervised machine learning models, which require a large amount of labeled training data. The data labeling process is usually time-consuming, and labeled data is hard to obtain in a domain-specific area. This project explores an approach to constructing a company domain-specific Knowledge Graph (KG) that contains company-related entities and relationships extracted from U.S. Securities and Exchange Commission (SEC) 10-K filings, by combining a pre-trained general NLP model with rule-based patterns for Named Entity Recognition (NER) and Relation Extraction (RE). This approach eliminates the time-consuming data-labeling task of the statistical approach; evaluated on ten 10-K filings, the model achieves an overall recall of 53.6%, precision of 75.7%, and F1-score of 62.8%. The results show that it is possible to extract company information using the hybrid method without a large amount of labeled training data, although the approach does require a time-consuming process of finding lexical patterns in sentences to extract company-related entities and relationships.
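The hybrid recipe described above (a pre-trained NLP pipeline plus hand-written lexical patterns) can be pictured with a short sketch. The use of spaCy, the en_core_web_sm model, the single "subsidiary of" pattern, and the example sentence are all illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of hybrid extraction: pre-trained NER plus a rule-based relation pattern.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")            # general-purpose pre-trained pipeline
matcher = Matcher(nlp.vocab)

# Lexical anchor for one relation type; a real system needs many such patterns.
matcher.add("SUBSIDIARY_OF", [[{"LOWER": "subsidiary"}, {"LOWER": "of"}]])

def extract_subsidiary_relations(text):
    doc = nlp(text)
    triples = []
    for _, start, end in matcher(doc):
        # nearest ORG entity before the anchor = subsidiary, nearest after = parent
        before = [e for e in doc.ents if e.label_ == "ORG" and e.end <= start]
        after = [e for e in doc.ents if e.label_ == "ORG" and e.start >= end]
        if before and after:
            triples.append((before[-1].text, "subsidiary_of", after[0].text))
    return triples

print(extract_subsidiary_relations(
    "Instagram, LLC is a wholly owned subsidiary of Meta Platforms, Inc."))
# exact output depends on the pre-trained NER model's entity predictions
```

The extracted (head, relation, tail) triples would then be loaded into a graph database to form the knowledge graph.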
APA, Harvard, Vancouver, ISO, and other styles
26

Mazyad, Ahmad. "Contribution to automatic text classification : metrics and evolutionary algorithms." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0487/document.

Full text
Abstract:
This thesis deals with natural language processing and text mining, at the intersection of machine learning and statistics. We are particularly interested in Term Weighting Schemes (TWS) in the context of supervised learning, and specifically the Text Classification (TC) task. In TC, the multi-label classification task has gained a lot of interest in recent years. Multi-label classification of textual data appears in many modern applications, such as news classification, where the task is to find the categories that a newswire story belongs to (e.g., politics, Middle East, oil) based on its textual content; music genre classification (e.g., jazz, pop, oldies, traditional pop) based on customer reviews; film classification (e.g., action, crime, drama); and product classification (e.g., electronics, computers, accessories). Traditional classification algorithms are generally binary classifiers and are not suited to multi-label classification. The multi-label classification task is therefore transformed into multiple single-label binary tasks. However, this transformation introduces several issues. First, term distributions are only considered with respect to the positive and the negative categories (i.e., information on the correlations between terms and categories is lost). Second, it fails to consider any label dependency (i.e., information on existing correlations between classes is lost). Finally, since all categories but one are grouped into a single negative category, the newly created tasks are imbalanced. This information is commonly used by supervised TWS to improve the effectiveness of the classification system. Hence, after presenting the process of multi-label text classification, and more particularly the TWS, we make an empirical comparison of these methods applied to the multi-label text classification task. We find that the superiority of the supervised methods over the unsupervised methods is still not clear. We then show that these methods are not fully adapted to the multi-label classification problem and that they ignore much statistical information that could be used to improve the classification results. Thus, we propose a new TWS based on information gain. This new method takes into consideration the term distribution not only with respect to the positive and the negative categories but also with respect to all other classes. Finally, aiming to find specialized TWS that also address the issue of imbalanced tasks, we study the benefits of using genetic programming to generate TWS for the text classification task. Unlike previous studies, we generate formulas by combining statistical information at a microscopic level (e.g., the number of documents that contain a specific term) instead of using complete TWS. Furthermore, we make use of categorical information (e.g., the number of categories in which a term occurs). Experiments are carried out to measure the impact of these methods on the performance of the model, and we show through these experiments that the results are positive.
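As a rough illustration of a supervised term weighting scheme of the kind discussed above, the sketch below weights a term by its information gain with respect to the class labels and scales the raw term frequency by that weight. The toy corpus and the exact combination (tf times IG) are assumptions for illustration; the thesis's own scheme also accounts for the distribution over all categories rather than a single positive/negative split.

```python
# Toy supervised term weighting: weight = term frequency * information gain of the term.
import math
from collections import Counter

# Toy labelled corpus: (document text, class label)
docs = [("oil prices rise", "economy"),
        ("election results announced", "politics"),
        ("oil exports fall", "economy"),
        ("new tax policy vote", "politics")]

def entropy(label_counts):
    total = sum(label_counts.values())
    return -sum(c / total * math.log2(c / total) for c in label_counts.values() if c)

def information_gain(term):
    with_t = Counter(label for text, label in docs if term in text.split())
    without_t = Counter(label for text, label in docs if term not in text.split())
    n, n_t, n_not = len(docs), sum(with_t.values()), sum(without_t.values())
    conditional = 0.0
    if n_t:
        conditional += n_t / n * entropy(with_t)
    if n_not:
        conditional += n_not / n * entropy(without_t)
    return entropy(Counter(label for _, label in docs)) - conditional

def supervised_weight(term, document):
    # frequent terms that are concentrated in few classes get large weights
    return document.split().count(term) * information_gain(term)

print(round(information_gain("oil"), 3))                        # 1.0: "oil" signals one class
print(round(supervised_weight("oil", "oil and oil exports"), 3))  # 2.0
```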
APA, Harvard, Vancouver, ISO, and other styles
27

Al-Khonaizi, Mohammed Taqi. "Natural Arabic language text understanding." Thesis, University of Greenwich, 1999. http://gala.gre.ac.uk/6096/.

Full text
Abstract:
The most challenging part of natural language understanding is the representation of meaning. Current representation techniques are not sufficient to resolve ambiguities, especially when the meaning is to be used for interrogation at a later stage. The Arabic language represents a challenging field for Natural Language Processing (NLP) because of its rich eloquence and free word order, but at the same time it is a good platform for capturing understanding because of its rich computational, morphological and grammatical rules. Among different representation techniques, Lexical Functional Grammar (LFG) theory is found to be best suited for this task because of its structural approach. LFG lays down a computational approach towards NLP, especially the constituent and the functional structures, and models the completeness of relationships among the contents of each structure internally, as well as among the structures externally. The introduction of Artificial Intelligence (AI) techniques, such as knowledge representation and inferencing, enhances the capture of meaning by utilising domain-specific common sense knowledge embedded in the model of the domain of discourse and the linguistic rules captured from Arabic grammar. This work has achieved the following results: (i) It is the first attempt to apply the LFG formalism to a full Arabic declarative text that consists of more than one paragraph. (ii) It extends the semantic structure of the LFG theory by incorporating a representation based on the thematic-role frames theory. (iii) It extends the LFG theory to represent domain-specific common sense knowledge. (iv) It automates the production process of the functional and semantic structures. (v) It automates the production process of the domain-specific common sense knowledge structure, which enhances the understanding ability of the system and resolves most ambiguities in subsequent question-answer sessions.
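To make the notion of a functional structure concrete, here is a toy LFG-style f-structure for a simple Arabic sentence together with a naive completeness check. The sentence, the attribute names and the check are illustrative assumptions, not the thesis's actual formalisation.

```python
# Toy LFG-style f-structure for "qara'a al-waladu al-kitaaba" ("the boy read the book").
f_structure = {
    "PRED": "qara'a <SUBJ, OBJ>",      # predicate with its governed grammatical functions
    "TENSE": "past",
    "SUBJ": {"PRED": "walad", "DEF": True, "NUM": "sg", "CASE": "nom"},
    "OBJ":  {"PRED": "kitaab", "DEF": True, "NUM": "sg", "CASE": "acc"},
}

def is_complete(f):
    """Completeness: every grammatical function the PRED subcategorizes for is present."""
    governed = f["PRED"].split("<")[1].rstrip(">").replace(" ", "").split(",")
    return all(gf in f for gf in governed)

print(is_complete(f_structure))   # True: both SUBJ and OBJ are realized
```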
APA, Harvard, Vancouver, ISO, and other styles
28

Botha, Gerrti Reinier. "Text-based language identification for the South African languages." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-090942008-133715/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Faille, Juliette. "Data-Based Natural Language Generation : Evaluation and Explainability." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.

Full text
Abstract:
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct, which makes them sound natural. Although the semantics of the texts are right in most cases, even state-of-the-art NLG models still produce texts with partially incorrect meanings.
In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graph verbalization and conversational question generation. First, we focus on the task of RDF verbalization and on omissions and hallucinations of RDF entities, i.e. cases where an automatically generated text does not mention all the input RDF entities or mentions entities other than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models and propose a metric based on omission and hallucination counts to quantify the semantic adequacy of NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct, and we show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we analyze the encoder of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities. Finally, we propose a T5-based conversational question generation model that, in addition to generating a question based on an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure automatically assessing coherence with the conversation context and semantic adequacy with the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models.
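A much simplified version of the omission/hallucination detection idea can be sketched with plain surface matching: an input entity that never appears in the generated text counts as omitted, and a known entity outside the input graph that does appear counts as hallucinated. The helper names and the example triples below are assumptions; the thesis uses a more careful matching procedure than literal string lookup.

```python
# Naive detector for omitted and hallucinated RDF entities in a generated verbalization.
def normalize(entity):
    return entity.replace("_", " ").lower()

def omissions_and_hallucinations(input_entities, all_known_entities, generated_text):
    text = generated_text.lower()
    omitted = [e for e in input_entities if normalize(e) not in text]
    # hallucination here = a known entity outside the input graph that shows up in the text
    extra = [e for e in all_known_entities
             if e not in input_entities and normalize(e) in text]
    return omitted, extra

triples = [("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
           ("Alan_Bean", "occupation", "Test_pilot")]
input_entities = {s for s, _, _ in triples} | {o for _, _, o in triples}
known = input_entities | {"Apollo_12"}

text = "Alan Bean, who flew on Apollo 12, worked as a test pilot."
omitted, hallucinated = omissions_and_hallucinations(input_entities, known, text)
print(omitted)       # ['Wheeler,_Texas']  -> input entity never mentioned
print(hallucinated)  # ['Apollo_12']       -> mentioned entity not in the input graph
```

A semantic adequacy metric of the kind described above could then be computed from these counts across a test set.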
APA, Harvard, Vancouver, ISO, and other styles
30

Sætre, Rune. "GeneTUC: Natural Language Understanding in Medical Text." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-545.

Full text
Abstract:
Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to molecular biology and medical literature. BioNLU is needed, since almost 2000 new articles are published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists.
The work in this PhD thesis is a “proof of concept”. It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, after the essential challenge of unknown entities is solved. The core contribution is a system that discovers and classifies unknown entities and relations between them automatically. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems, but the advantage of this approach is that it is much simpler and requires less manual labor than any of the other comparable systems.
The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses machine learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is substituted with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, such as genes, proteins, and processes. After getting promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system and shows that the method scales well to a larger set of entities.
The final paper concludes the “proof of concept” research and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001 to 50% in 2006. This is still not good enough to create a commercial system, but it is believed that another 40% performance gain can be achieved by importing more verb templates into GeneTUC, just as nouns were imported during this work. Work on this has already begun, in the form of a local Master's thesis.
APA, Harvard, Vancouver, ISO, and other styles
31

Jarman, Jay. "Combining Natural Language Processing and Statistical Text Mining: A Study of Specialized Versus Common Languages." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3166.

Full text
Abstract:
This dissertation focuses on developing and evaluating hybrid approaches for analyzing free-form text in the medical domain. This research draws on natural language processing (NLP) techniques that are used to parse and extract concepts based on a controlled vocabulary. Once important concepts are extracted, additional machine learning algorithms, such as association rule mining and decision tree induction, are used to discover classification rules for specific targets. This multi-stage pipeline approach is contrasted with traditional statistical text mining (STM) methods based on term counts and term-by-document frequencies. The aim is to create effective text analytic processes by adapting and combining individual methods. The methods are evaluated on an extensive set of real clinical notes annotated by experts to provide benchmark results. There are two main research questions for this dissertation. First, can information (specialized language) be extracted from clinical progress notes that will represent the notes without loss of predictive information? Secondly, can classifiers be built for clinical progress notes that are represented by specialized language? Three experiments were conducted to answer these questions by investigating specific challenges in extracting information from unstructured clinical notes and classifying documents that are so important in the medical domain. The first experiment addresses the first research question by focusing on whether relevant patterns within clinical notes reside more in the highly technical medically-relevant terminology or in the passages expressed by common language. The results from this experiment informed the subsequent experiments. It also shows that predictive patterns are preserved by preprocessing text documents with a grammatical NLP system that separates specialized language from common language, and that this is an acceptable method of data reduction for the purpose of STM. Experiments two and three address the second research question. Experiment two focuses on applying rule-mining techniques to the output of the information extraction effort from experiment one, with the ultimate goal of creating rule-based classifiers. There are several contributions of this experiment. First, it uses a novel approach to create classification rules from specialized language and to build a classifier: the data is split by classification and then rules are generated. Secondly, several toolkits were assembled to create the automated process by which the rules were created. Third, this automated process created interpretable rules, and finally, the resulting model provided good accuracy. The resulting performance was slightly lower than that of the classifier from experiment one but had the benefit of interpretable rules. Experiment three focuses on using decision tree induction (DTI) as a rule discovery approach to classification, which also addresses the second research question. DTI is another rule-centric method for creating a classifier. The contributions of this experiment are that DTI can be used to create an accurate and interpretable classifier using specialized language. Additionally, the resulting rule sets are simple and easily interpretable, as well as created using a highly automated process.
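The decision tree induction step in the third experiment can be pictured with a small sketch: represent each note by its extracted concepts, fit a shallow tree, and print the learned rules. The toy concepts, labels and use of scikit-learn are invented for illustration; only the general rule-centric setup mirrors the description above.

```python
# Illustrative DTI over extracted concepts, with the learned rules printed for inspection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Pretend these strings are the concepts an NLP pipeline extracted from progress notes.
notes_concepts = [
    "dyspnea edema orthopnea",
    "wheezing albuterol cough",
    "edema furosemide dyspnea",
    "cough fever sputum",
]
labels = ["heart_failure", "asthma", "heart_failure", "infection"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(notes_concepts)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# export_text prints the induced rules, which is the interpretability payoff of DTI
print(export_text(tree, feature_names=list(vectorizer.get_feature_names_out())))
```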
APA, Harvard, Vancouver, ISO, and other styles
32

Hatier, Sylvain. "Identification et analyse linguistique du lexique scientifique transdisciplinaire. Approche outillée sur un corpus d'articles de recherche en SHS." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAL027/document.

Full text
Abstract:
In this dissertation we study the French cross-disciplinary scientific lexicon (CSL), a lexicon that falls within the genre of the research article in the humanities and social sciences. As the CSL is commonly used in scientific texts, it provides a useful entry point for exploring this genre. This lexicon also has practical applications in the fields of automatic term identification and foreign language teaching in academic settings. To this end, we apply a corpus-driven approach in order to extract and structure the CSL lexical units, which are complex to circumscribe, situated between general-language vocabulary and terminology. The method relies on the criteria of cross-disciplinarity and specificity and on the lexico-syntactic properties of the CSL lexical units. As a result, we designed a lexical resource that includes lexical, syntactic and semantic information. By analyzing the combinatorial properties extracted from a parsed corpus of scientific articles, we characterize the CSL according to its genre-specific use. Following the same approach, we identify cross-disciplinary meanings for the CSL nouns and design a corpus-based nominal semantic classification. This two-level typology allows us to explore the rhetorical and phraseological dimensions of the CSL by identifying frequent syntactico-semantic patterns.
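A rough sketch of the two selection criteria might look as follows: specificity compares a term's frequency in research articles with its frequency in a general corpus, and cross-disciplinarity counts the disciplines in which the term is reasonably frequent. The counts, thresholds and scoring formulas below are toy assumptions, not the thesis's actual measures.

```python
# Toy scoring of candidate CSL terms on specificity and cross-disciplinarity.
import math

# occurrences per discipline in a (toy) research-article corpus vs. a general corpus
term_counts = {"analyse": {"ling": 120, "socio": 90, "hist": 60},
               "phonème": {"ling": 200, "socio": 1,  "hist": 0}}
general_freq = {"analyse": 0.00002, "phonème": 0.00001}     # relative frequency, general corpus
corpus_size = {"ling": 1_000_000, "socio": 1_000_000, "hist": 1_000_000}

def specificity(term):
    # log ratio of the term's frequency in research articles to its general-corpus frequency
    rel = sum(term_counts[term].values()) / sum(corpus_size.values())
    return math.log2(rel / general_freq[term])

def cross_disciplinarity(term, min_per_million=10):
    # number of disciplines where the term is reasonably frequent
    return sum(1 for d, c in term_counts[term].items()
               if c / corpus_size[d] * 1_000_000 >= min_per_million)

for term in term_counts:
    print(term, round(specificity(term), 2), cross_disciplinarity(term))
# "analyse" is frequent across disciplines; "phonème" behaves like discipline-specific terminology
```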
APA, Harvard, Vancouver, ISO, and other styles
33

Moncecchi, Guillermo. "Recognizing speculative language in research texts." Paris 10, 2013. http://www.theses.fr/2013PA100039.

Full text
Abstract:
This thesis presents a methodology to solve certain classification problems, particularly those involving sequential classification for Natural Language Processing tasks. It proposes the use of an iterative, error-based approach to improve classification performance, suggesting the incorporation of expert knowledge into the learning process through the use of knowledge rules. We applied and evaluated the methodology on two tasks related to the detection of hedging in scientific articles: hedge cue identification and hedge cue scope detection. Results are promising: for the first task, we improved baseline results by 2.5 points in terms of F-score by incorporating cue co-occurrence information, while for scope detection, the incorporation of syntax information and rules for syntactic scope pruning allowed us to improve classification performance from an F-score of 0.712 to a final figure of 0.835. Compared with state-of-the-art methods, the results are competitive, suggesting that the approach of improving classifiers based only on the errors committed on a held-out corpus could be successfully used in other, similar tasks. Additionally, this thesis proposes a class schema for representing sentence analyses in a single structure, including the results of different linguistic analyses. This allows us to better manage the iterative process of classifier improvement, where different attribute sets for learning are used in each iteration. We also propose storing attributes in a relational model, instead of the traditional text-based structures, to facilitate the analysis and manipulation of learning data.
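Hedge cue identification of the kind described above can be framed as per-token classification. The sketch below uses a handful of invented training sentences, very simple window features and scikit-learn; the thesis relies on richer features and an iterative, error-driven refinement loop rather than this minimal setup.

```python
# Toy per-token hedge-cue classifier with word-window features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def token_features(tokens, i):
    return {"word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>"}

train = [(["This", "suggests", "a", "role", "in", "apoptosis"], [0, 1, 0, 0, 0, 0]),
         (["The", "protein", "may", "bind", "DNA"],             [0, 0, 1, 0, 0]),
         (["We", "measured", "the", "expression", "level"],     [0, 0, 0, 0, 0])]

X, y = [], []
for tokens, labels in train:
    for i, label in enumerate(labels):
        X.append(token_features(tokens, i))
        y.append(label)

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

test = ["These", "results", "may", "indicate", "a", "new", "pathway"]
feats = vec.transform([token_features(test, i) for i in range(len(test))])
for token, p in zip(test, clf.predict_proba(feats)[:, 1]):
    print(f"{token:10s} {p:.2f}")   # "may" should receive the highest hedge-cue probability
```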
APA, Harvard, Vancouver, ISO, and other styles
34

Ramachandran, Venkateshwaran. "A temporal analysis of natural language narrative text." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-03122009-040648/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yeates, Stuart Andrew. "Text Augmentation: Inserting markup into natural language text with PPM Models." The University of Waikato, 2006. http://hdl.handle.net/10289/2600.

Full text
Abstract:
This thesis describes a new optimisation and new heuristics for automatically marking up XML documents, and CEM, a Java implementation, using PPM models. CEM is significantly more general than previous systems, marking up large numbers of hierarchical tags, using n-gram models for large n and a variety of escape methods. Four corpora are discussed, including the bibliography corpus of 14682 bibliographies laid out in seven standard styles using the BibTeX system and marked up in XML with every field from the original BibTeX. Other corpora include the ROCLING Chinese text segmentation corpus, the Computists' Communique corpus and the Reuters' corpus. A detailed examination is presented of the methods of evaluating markup algorithms, including computational complexity measures and correctness measures from the fields of information retrieval, string processing, machine learning and information theory. A new taxonomy of markup complexities is established and the properties of each taxon are examined in relation to the complexity of marked-up documents. The performance of the new heuristics and the optimisation is examined using the four corpora.
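The core idea of using character-level models to decide which markup to insert can be approximated with a toy example: train one smoothed character bigram model per tag and label an unseen string with the tag whose model encodes it in the fewest bits. This is only a sketch of the principle; CEM itself uses proper PPM models with escape methods and hierarchical tags, and add-one smoothing here is merely a stand-in for PPM's escape mechanism.

```python
# Pick the tag whose character model compresses the field best.
import math
from collections import defaultdict

def train(samples):
    counts = defaultdict(lambda: defaultdict(int))
    for s in samples:
        for a, b in zip("^" + s, s):        # bigrams with a start symbol
            counts[a][b] += 1
    return counts

def bits(model, s, alphabet_size=64):
    total = 0.0
    for a, b in zip("^" + s, s):
        ctx = model[a]
        p = (ctx[b] + 1) / (sum(ctx.values()) + alphabet_size)   # add-one smoothed
        total += -math.log2(p)
    return total

models = {
    "year":   train(["1999", "2003", "2011"]),
    "author": train(["Knuth, D.", "Witten, I.", "Cleary, J."]),
}

for field in ["2006", "Bell, T."]:
    best = min(models, key=lambda tag: bits(models[tag], field))
    print(f"{field!r} -> <{best}>{field}</{best}>")
# '2006' -> <year>2006</year>
# 'Bell, T.' -> <author>Bell, T.</author>
```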
APA, Harvard, Vancouver, ISO, and other styles
36

Johansson, Richard. "Natural language processing methods for automatic illustration of text /." Lund : Department of Computer Science, Lund Institute of Technology, Lund University, 2006. http://www.df.lth.se/~richardj/pdf/richard-lic.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Miller, Daniel. "A System for Natural Language Unmarked Clausal Transformations in Text-to-Text Applications." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/137.

Full text
Abstract:
A system is proposed that separates clauses in complex sentences into simpler stand-alone sentences. This is useful as an initial step on raw text: the resulting processed text may be fed into text-to-text applications such as Automatic Summarization, Question Answering, and Machine Translation, for which complex sentences are difficult to process. Grammatical natural language transformations provide a possible method of simplifying complex sentences to enhance the results of text-to-text applications. Using shallow parsing, this system improves on existing systems in identifying and separating marked and unmarked embedded clauses in complex sentence structures, resulting in syntactically simplified source text for further processing.
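A simplified version of clause separation can be sketched with a dependency parse instead of shallow parsing: a coordinated verb that has its own subject is split off as a stand-alone sentence. The spaCy model, the heuristic and the example sentence are assumptions for illustration, not the thesis's system.

```python
# Split a coordinated clause with its own subject into a separate sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

def split_coordinated_clauses(sentence):
    doc = nlp(sentence)
    root = [t for t in doc if t.dep_ == "ROOT"][0]
    # coordinated verbs that bring their own subject start a new stand-alone sentence
    conj_heads = [t for t in root.children
                  if t.dep_ == "conj" and any(c.dep_.startswith("nsubj") for c in t.children)]
    conj_tokens = {t.i for head in conj_heads for t in head.subtree}
    main = [t.text for t in doc if t.i not in conj_tokens and t.dep_ not in ("cc", "punct")]
    clauses = [" ".join(main) + "."]
    for head in conj_heads:
        words = [t.text for t in head.subtree if t.dep_ not in ("cc", "punct")]
        clauses.append(" ".join(words) + ".")
    return clauses

print(split_coordinated_clauses(
    "Salinas was arrested for murder in 1995 and his lawyers began monitoring his accounts."))
# expected (parse permitting): ['Salinas was arrested for murder in 1995.',
#                               'his lawyers began monitoring his accounts.']
```

Capitalisation and pronoun handling are deliberately ignored here; a full system would also need rules for subordinate and relative clauses.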
APA, Harvard, Vancouver, ISO, and other styles
38

Sunil, Kamalakar FNU. "Automatically Generating Tests from Natural Language Descriptions of Software Behavior." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23907.

Full text
Abstract:
Behavior-Driven Development (BDD) is an emerging agile development approach where all stakeholders (including developers and customers) work together to write user stories in structured natural language to capture a software application's functionality in terms of required "behaviors". Developers then manually write "glue" code so that these scenarios can be executed as software tests. This glue code represents individual steps within unit and acceptance test cases, and tools exist that automate the mapping from scenario descriptions to manually written code steps (typically using regular expressions). Instead of requiring programmers to write manual glue code, this thesis investigates a practical approach to convert natural language scenario descriptions into executable software tests fully automatically. To show feasibility, we developed a tool called Kirby that uses natural language processing techniques, code information extraction and probabilistic matching to automatically generate executable software tests from structured English scenario descriptions. Kirby relieves the developer from the laborious work of writing code for the individual steps described in scenarios, so that both developers and customers can focus on the scenarios as pure behavior descriptions (understandable to all, not just programmers). Results from assessing the performance and accuracy of this technique are presented.
Master of Science
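The matching step can be pictured with a minimal sketch that resolves an English scenario step to the most similar step implementation by comparing the step text against each function's name and docstring. The step functions and the similarity measure are assumptions; Kirby's actual matching is considerably more sophisticated.

```python
# Resolve a natural language scenario step to the closest step implementation.
import difflib
import inspect

def add_item_to_cart(item):
    """the user adds an item to the shopping cart"""
    print(f"adding {item} to cart")

def checkout():
    """the user proceeds to checkout"""
    print("checking out")

STEP_FUNCTIONS = [add_item_to_cart, checkout]

def resolve_step(step_text):
    def score(func):
        description = func.__name__.replace("_", " ") + " " + (inspect.getdoc(func) or "")
        return difflib.SequenceMatcher(None, step_text.lower(), description.lower()).ratio()
    return max(STEP_FUNCTIONS, key=score)

step = "When the user adds a book to the shopping cart"
print(resolve_step(step).__name__)   # -> add_item_to_cart
```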
APA, Harvard, Vancouver, ISO, and other styles
39

Konstas, Ioannis. "Joint models for concept-to-text generation." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8926.

Full text
Abstract:
Much of the data found on the world wide web is in numeric, tabular, or other non-textual format (e.g., weather forecast tables, stock market charts, live sensor feeds), and thus inaccessible to non-experts or laypersons. However, most conventional search engines and natural language processing tools (e.g., summarisers) can only handle textual input. As a result, data in non-textual form remains largely inaccessible. Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input, and holds promise for rendering non-linguistic data widely accessible. Several successful generation systems have been produced in the past twenty years. They mostly rely on human-crafted rules or expert-driven grammars, implement a pipeline architecture, and usually operate in a single domain. In this thesis, we present several novel statistical models that take as input a set of database records and generate a description of them in natural language text. Our unique idea is to combine the processes of structuring a document (document planning), deciding what to say (content selection) and choosing the specific words and syntactic constructs specifying how to say it (lexicalisation and surface realisation) in a uniform joint manner. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). This joint representation allows individual processes (i.e., document planning, content selection, and surface realisation) to communicate and influence each other naturally. We recast generation as the task of finding the best derivation tree for a set of input database records and our grammar, and describe several algorithms for decoding in this framework that allow us to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. We implement our generators using the hypergraph framework. Contrary to traditional systems, we learn all the necessary document, structural and linguistic knowledge from unannotated data. Additionally, we explore a discriminative reranking approach on the hypergraph representation of our model, by including more refined content selection features. Central to our approach is the idea of porting our models to various domains; we experimented on four widely different domains, namely sportscasting, weather forecast generation, booking flights, and troubleshooting guides. The performance of our systems is competitive with, and often superior to, state-of-the-art systems that use domain-specific constraints, explicit feature engineering or labelled data.
APA, Harvard, Vancouver, ISO, and other styles
40

McDonald, Daniel Merrill. "Combining Text Structure and Meaning to Support Text Mining." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/194015.

Full text
Abstract:
Text mining methods strive to make unstructured text more useful for decision making. As part of the mining process, language is processed prior to analysis. Processing techniques have often focused primarily on either text structure or text meaning in preparing documents for analysis. As approaches have evolved over the years, increases in the use of lexical semantic parsing usually have come at the expense of full syntactic parsing. This work explores the benefits of combining structure and meaning, or syntax and lexical semantics, to support the text mining process. Chapter two presents the Arizona Summarizer, which includes several processing approaches to automatic text summarization. Each approach has varying usage of structural and lexical semantic information. The usefulness of the different summaries is evaluated in the finding stage of the text mining process. The summary produced using structural and lexical semantic information outperforms all others in the browse task. Chapter three presents the Arizona Relation Parser, a system for extracting relations from medical texts. The system is a grammar-based system that combines syntax and lexical semantic information in one grammar for relation extraction. The relation parser attempts to capitalize on the high precision performance of semantic systems and the good coverage of the syntax-based systems. The parser performs in line with the top reported systems in the literature. Chapter four presents the Arizona Entity Finder, a system for extracting named entities from text. The system greatly expands on the combination grammar approach from the relation parser. Each tag is given a semantic and syntactic component and placed in a tag hierarchy. Over 10,000 tags exist in the hierarchy. The system is tested on multiple domains and is required to extract seven additional types of entities in the second corpus. The entity finder achieves a 90 percent F-measure on the MUC-7 data and an 87 percent F-measure on the Yahoo data, where additional entity types were extracted. Together, these three chapters demonstrate that combining text structure and meaning in algorithms to process language has the potential to improve the text mining process. A lexical semantic grammar is effective at recognizing domain-specific entities and language constructs. Syntax information, on the other hand, allows a grammar to generalize its rules when possible. Balancing performance and coverage in light of the world's growing body of unstructured text is important.
APA, Harvard, Vancouver, ISO, and other styles
41

Tonoike, Masatsugu. "Natural language processing exploiting topics in the Web text archive." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/135956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

George, Gloria. "Natural language processing context-dependent error correction in English text." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq21064.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Glinos, Demetrios George. "An intelligent editor for natural language processing of unrestricted text." Master's thesis, University of Central Florida, 1999. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/16913.

Full text
Abstract:
University of Central Florida College of Arts and Sciences Thesis
The understanding of natural language by computational methods has been a continuing and elusive problem in artificial intelligence. In recent years there has been a resurgence in natural language processing research. Much of this work has been on empirical or corpus-based methods which use a data-driven approach to train systems on large amounts of real language data. Using corpus-based methods, the performance of part-of-speech (POS) taggers, which assign to the individual words of a sentence their appropriate part of speech category (e.g., noun, verb, preposition), now rivals human performance levels, achieving accuracies exceeding 95%. Such taggers have proved useful as preprocessors for such tasks as parsing, speech synthesis, and information retrieval. Parsing remains, however, a difficult problem, even with the benefit of POS tagging. Moreover, as sentence length increases, there is a corresponding combinatorial explosion of alternative possible parses. Consider the following sentence from a New York Times online article: After Salinas was arrested for murder in 1995 and lawyers for the bank had begun monitoring his accounts, his personal banker in New York quietly advised Salinas' wife to move the money elsewhere, apparently without the consent of the legal department. To facilitate parsing and other tasks, we would like to decompose this sentence into the following three shorter sentences which, taken together, convey the same meaning as the original: 1. Salinas was arrested for murder in 1995. 2. Lawyers for the bank had begun monitoring his accounts. 3. His personal banker in New York quietly advised Salinas' wife to move the money elsewhere, apparently without the consent of the legal department. This study investigates the development of heuristics for decomposing such long sentences into sets of shorter sentences without affecting the meaning of the original sentences. Without parsing or semantic analysis, heuristic rules were developed based on: (1) the output of a POS tagger (Brill's tagger); (2) the punctuation contained in the input sentences; and (3) the words themselves. The heuristic algorithms were implemented in an intelligent editor program which first augmented the POS tags and assigned tags to punctuation, and then tested the rules against a corpus of 25 New York Times online articles containing approximately 1,200 sentences and over 32,000 words, with good results. Recommendations are made for improving the algorithms and for continuing this line of research.
M.S.
Computer Science
Arts and Sciences
220 p.
xii, 220 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
44

Harrington, Brian. "ASKNet : automatically creating semantic knowledge networks from natural language text." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:1c7154d3-f7d1-493e-b521-4e5ceb540038.

Full text
Abstract:
This thesis details the creation of ASKNet (Automated Semantic Knowledge Network), a system for creating large-scale semantic networks from natural language texts. Using ASKNet as an example, we will show that by using existing natural language processing (NLP) tools, combined with a novel use of spreading activation theory, it is possible to efficiently create high-quality semantic networks on a scale never before achievable. The ASKNet system takes naturally occurring English text (e.g., newspaper articles) and processes it using existing NLP tools. It then uses the output of those tools to create semantic network fragments representing the meaning of each sentence in the text. Those fragments are then combined by a spreading activation based algorithm that attempts to decide which portions of the networks refer to the same real-world entity. This allows ASKNet to combine the small fragments into a single cohesive resource, which has more expressive power than the sum of its parts. Systems aiming to build semantic resources have typically either overlooked information integration completely, or else dismissed it as being AI-complete, and thus unachievable. In this thesis we will show that information integration is both an integral component of any semantic resource, and achievable through a combination of NLP technologies and novel applications of spreading activation theory. While extraction and integration of all knowledge within a text may be AI-complete, we will show that by processing large quantities of text efficiently, we can compensate for minor processing errors and missed relations with volume and creation speed. If relations are too difficult to extract, or we are unsure which nodes should integrate at any given stage, we can simply leave them to be picked up later when we have more information or come across a document which explains the concept more clearly. ASKNet is primarily designed as a proof-of-concept system. However, this thesis will show that it is capable of creating semantic networks larger than any existing similar resource in a matter of days, and furthermore that the networks it creates are of sufficient quality to be used for real-world tasks. We will demonstrate that ASKNet can be used to judge the semantic relatedness of words, achieving results comparable to the best state-of-the-art systems.
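The spreading activation component can be illustrated schematically: activation pulses flow outwards from a source node along weighted edges, decaying with distance, and highly activated nodes become candidates for merging with a new, co-referring fragment. The graph, decay factor and threshold below are toy assumptions, not ASKNet's actual parameters or update rule.

```python
# Toy spreading activation over a small weighted semantic network.
edges = {
    "Paris":        [("France", 0.9), ("Eiffel Tower", 0.8)],
    "France":       [("Paris", 0.9), ("Europe", 0.7)],
    "Eiffel Tower": [("Paris", 0.8)],
    "Europe":       [("France", 0.7), ("UK", 0.6)],
    "UK":           [("Europe", 0.6), ("London", 0.9)],
    "London":       [("UK", 0.9)],
}

def spread_activation(source, decay=0.5, threshold=0.05):
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour, weight in edges.get(node, []):
                pulse = activation[node] * weight * decay
                # only propagate pulses that are strong enough and actually raise activation
                if pulse > threshold and pulse > activation.get(neighbour, 0.0):
                    activation[neighbour] = pulse
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

for node, value in sorted(spread_activation("Paris").items(), key=lambda kv: -kv[1]):
    print(f"{node:14s} {value:.3f}")   # activation falls off with distance from the source
```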
APA, Harvard, Vancouver, ISO, and other styles
45

Omar, Mussa. "Semi-automated development of conceptual models from natural language text." Thesis, University of Huddersfield, 2018. http://eprints.hud.ac.uk/id/eprint/34665/.

Full text
Abstract:
The process of converting natural language specifications into conceptual models requires detailed analysis of natural language text, and designers frequently make mistakes when undertaking this transformation manually. Although many approaches have been used to help designers translate natural language text into conceptual models, each approach has its limitations. One of the main limitations is the lack of a domain-independent ontology that can be used as a repository for entities and relationships, thus guiding the transition from natural language processing to a conceptual model. Such an ontology is not currently available because it would be very difficult and time-consuming to produce. In this thesis, a semi-automated system for mapping natural language text into conceptual models is proposed. The model, which is called SACMES, combines a linguistic approach with an ontological approach and human intervention to achieve the task. The model learns from the natural language specifications that it processes, and stores the information that is learnt in a conceptual model ontology and a user history knowledge database. It then uses the stored information to improve performance and reduce the need for human intervention. The evaluation conducted on SACMES demonstrates that (1) designers' creation of conceptual models is improved when using the system compared with not using any system, and that (2) the performance of the system is improved by processing more natural language requirements, and thus the need for human intervention decreases. However, these advantages may be improved further through development of the learning and retrieval techniques used by the system.
APA, Harvard, Vancouver, ISO, and other styles
46

Lazic, Marko. "Using Natural Language Processing to extract information from receipt text." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279302.

Full text
Abstract:
The ability to automatically read, recognize, and extract different information from unstructured text is of key importance to many areas. Most research in this area has been focused on scanned invoices. This thesis investigates the feasibility of using natural language processing to extract information from receipt text. Three different machine learning models, BiLSTM, GCN, and BERT, were trained to extract a total of 7 different data points from a dataset consisting of 790 receipts. In addition, a simple rule-based model was built to serve as a baseline. These four models were then compared on how well they perform on the different data points. The best-performing machine learning model was BERT, with an overall F1 score of 0.455. The second-best machine learning model was BiLSTM, with an F1 score of 0.278, while GCN had an F1 score of 0.167. These F1 scores are strongly affected by the low performance on the product list, which was observed with all three models. BERT showed promising results on vendor name, date, tax rate, price, and currency. However, a simple rule-based method was able to outperform the BERT model on all data points except vendor name and tax rate. Receipt images from the dataset were often blurred, rotated, and crumpled, which introduced a high OCR error rate. This error then propagated through all of the steps and was most likely the main reason why the machine learning models, especially BERT, were not able to perform well. It is concluded that there is potential in using natural language processing for this information extraction problem. However, further research is needed if it is to outperform the rule-based models.
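In the spirit of the rule-based baseline mentioned above, a few regular expressions already recover some fields from OCR'd receipt text. The patterns and the example receipt are assumptions for illustration, not the baseline actually used in the thesis.

```python
# Minimal rule-based extraction of date, total and currency from receipt text.
import re

RECEIPT = """ICA Supermarket
2020-03-14  14:32
Mjolk          12.50
Brod           24.90
TOTAL          37.40 SEK"""

DATE = re.compile(r"\b(\d{4}-\d{2}-\d{2}|\d{2}[./]\d{2}[./]\d{4})\b")
TOTAL = re.compile(r"(?:TOTAL|SUMMA)\s+(\d+[.,]\d{2})", re.IGNORECASE)
CURRENCY = re.compile(r"\b(SEK|EUR|USD|kr)\b")

print("date:",     DATE.search(RECEIPT).group(1))       # 2020-03-14
print("total:",    TOTAL.search(RECEIPT).group(1))      # 37.40
print("currency:", CURRENCY.search(RECEIPT).group(1))   # SEK
```

Fields like the itemized product list are much harder to capture with such rules, which matches the observation above that all models struggled with the product list.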
APA, Harvard, Vancouver, ISO, and other styles
47

Bothma, Bothma. "Ontology learning from Swedish text." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-245334.

Full text
Abstract:
Ontology learning from text generally consists roughly of NLP, knowledge extraction and ontology construction. While NLP and information extraction for Swedish are approaching the level available for English, these methods have not been assembled into a full ontology learning pipeline. This means that there is currently very little automated support for using knowledge from Swedish literature in semantically enabled systems. This thesis demonstrates the feasibility of using some existing OL methods for Swedish text and elicits proposals for further work toward building and studying open-domain ontology learning systems for Swedish and perhaps multiple languages. This is done by building a prototype ontology learning system based on the state-of-the-art architecture of such systems, using the Korp NLP framework for Swedish text and the GATE system for corpus and annotation management, and embedding it as a self-contained plugin in the Protege ontology engineering framework. The prototype is evaluated similarly to other OL systems. As expected, it is found that while sufficient for demonstrating feasibility, the ontology produced in the evaluation is not usable in practice, since many more methods and fewer cascading errors are necessary to richly and accurately model the domain. In addition to simply implementing more methods to extract more ontology elements, a framework for programmatically defining knowledge extraction and ontology construction methods and their dependencies is recommended to enable more effective research and application of ontology learning.
APA, Harvard, Vancouver, ISO, and other styles
48

LA, QUATRA MORENO. "Deep Learning for Natural Language Understanding and Summarization." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Neto, Georges Basile Stávracas. "Reescrita sentencial baseada em traços de personalidade." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-09052018-203241/.

Full text
Abstract:
Natural Language Generation systems attempt to produce texts in an automated fashion. In systems of this kind, it is desirable to produce texts realistically - or at least in a psychologically plausible way - so as to increase reader engagement. One way to achieve this goal is to generate texts that reflect a target personality profile. For example, an extroverted individual would use simpler words, and their texts would contain more interjections and traces of orality. This work proposes a Brazilian Portuguese sentence rewrite model based on the personality traits of a target speaker. To this end, a corpus of text samples and personality inventories was collected and, based on a preliminary analysis of these data, strong correlations were found between personality factors and the observed features of Brazilian Portuguese texts. Three lexicalization models were generated, covering adjectives, nouns and verbs. These lexicalization models were then used in the proposed sentence rewrite model to select the words most appropriate to the target personality. Results show that the use of personality information allows the generated text to come closer to human performance when compared with a baseline system that makes the most frequent lexical choices.
APA, Harvard, Vancouver, ISO, and other styles
50

Leopold, Henrik, Jan Mendling, and Artem Polyvyanyy. "Supporting Process Model Validation through Natural Language Generation." Institute of Electrical and Electronics Engineers (IEEE), 2014. http://dx.doi.org/10.1109/TSE.2014.2327044.

Full text
Abstract:
The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
APA, Harvard, Vancouver, ISO, and other styles