Dissertations / Theses on the topic 'English language — Named Entities'

Consult the top 15 dissertations / theses for your research on the topic 'English language — Named Entities.'

1. Ringland, Nicola. "Structured Named Entities." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14558.

Abstract:
The names of people, locations, and organisations play a central role in language, and named entity recognition (NER) has been widely studied, and successfully incorporated, into natural language processing (NLP) applications. The most common variant of NER involves identifying and classifying proper noun mentions of these and miscellaneous entities as linear spans in text. Unfortunately, this version of NER is no closer to a detailed treatment of named entities than chunking is to a full syntactic analysis. NER, so construed, reflects neither the syntactic nor semantic structure of NE mentions, and provides insufficient categorical distinctions to represent that structure. Representing this nested structure, where a mention may contain mention(s) of other entities, is critical for applications such as coreference resolution. The lack of this structure creates spurious ambiguity in the linear approximation. Research in NER has been shaped by the size and detail of the available annotated corpora. The existing structured named entity corpora are either small, in specialist domains, or in languages other than English. This thesis presents our Nested Named Entity (NNE) corpus of named entities and numerical and temporal expressions, taken from the WSJ portion of the Penn Treebank (PTB, Marcus et al., 1993). We use the BBN Pronoun Coreference and Entity Type Corpus (Weischedel and Brunstein, 2005a) as our basis, manually annotating it with a principled, fine-grained, nested annotation scheme and detailed annotation guidelines. The corpus comprises over 279,000 entities over 49,211 sentences (1,173,000 words), including 118,495 top-level entities. Our annotations were designed using twelve high-level principles that guided the development of the annotation scheme and difficult decisions for annotators. We also monitored the semantic grammar that was being induced during annotation, seeking to identify and reinforce common patterns to maintain consistent, parsimonious annotations. The result is a scheme of 118 hierarchical fine-grained entity types and nesting rules, covering all capitalised mentions of entities, and numerical and temporal expressions. Unlike many corpora, we have developed detailed guidelines, including extensive discussion of the edge cases, in an ongoing dialogue with our annotators which is critical for consistency and reproducibility. We annotated independently from the PTB bracketing, allowing annotators to choose spans which were inconsistent with the PTB conventions and errors, and only refer back to it to resolve genuine ambiguity consistently. We merged our NNE with the PTB, requiring some systematic and one-off changes to both annotations. This allows the NNE corpus to complement other PTB resources, such as PropBank, and inform PTB-derived corpora for other formalisms, such as CCG and HPSG. We compare this corpus against BBN. We consider several approaches to integrating the PTB and NNE annotations, which affect the sparsity of grammar rules and visibility of syntactic and NE structure. We explore their impact on parsing the NNE and merged variants using the Berkeley parser (Petrov et al., 2006), which performs surprisingly well without specialised NER features. We experiment with flattening the NNE annotations into linear NER variants with stacked categories, and explore the ability of a maximum entropy and a CRF NER system to reproduce them. The CRF performs substantially better, but is infeasible to train on the enormous stacked category sets. 
The flattened output of the Berkeley parser is almost competitive with the CRF. Our results demonstrate that the NNE corpus is feasible for statistical models to reproduce. We invite researchers to explore new, richer models of (joint) parsing and NER on this complex and challenging task. Our nested named entity corpus will improve a wide range of NLP tasks, such as coreference resolution and question answering, allowing automated systems to understand and exploit the true structure of named entities.
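
The flattening experiment the abstract mentions can be illustrated with a short sketch: nested spans are projected onto tokens and their labels joined into stacked categories. The span format and label-joining convention below are assumptions for illustration, not the NNE corpus's actual scheme.

```python
# A minimal sketch of flattening nested entity spans into linear,
# stacked-category BIO labels. The span representation and the "/"
# joining convention are assumptions, not the corpus's actual format.

def flatten_nested_spans(tokens, spans):
    """tokens: list of strings; spans: (start, end, label), end exclusive."""
    stacked = [[] for _ in tokens]
    # Wider (outer) spans first, so labels stack outermost-first.
    for start, end, label in sorted(spans, key=lambda s: (s[0], -(s[1] - s[0]))):
        for i in range(start, end):
            stacked[i].append(label)
    bio, prev = [], None
    for labels in stacked:
        if not labels:
            bio.append("O")
            prev = None
        else:
            cat = "/".join(labels)
            # Simplification: adjacent distinct entities of the same
            # stacked category would merge under this scheme.
            bio.append(("B-" if cat != prev else "I-") + cat)
            prev = cat
    return bio

tokens = ["University", "of", "Sydney", "researchers"]
spans = [(0, 3, "ORG"), (2, 3, "LOC")]   # LOC "Sydney" nested in the ORG
print(flatten_nested_spans(tokens, spans))
# ['B-ORG', 'I-ORG', 'B-ORG/LOC', 'O']
```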

2. Radford, William Edward John. "Linking named entities to Wikipedia." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12850.

Abstract:
Natural language is fraught with problems of ambiguity, including name reference. A name in text can refer to multiple entities, just as an entity can be known by different names. This thesis examines how a mention in text can be linked to an external knowledge base (KB), in our case Wikipedia. The named entity linking (NEL) task requires systems to identify the KB entry, or Wikipedia article, that a mention refers to, or, if the KB does not contain the correct entry, to return NIL. Entity linking systems can be complex, and we present a framework for analysing their different components, which we use to analyse three seminal systems evaluated on a common dataset; we show the importance of precise search for linking. The Text Analysis Conference (TAC) is a major venue for NEL research, and we report on our submissions to the entity linking shared task in 2010, 2011 and 2012. The information required to disambiguate entities is often found in the text, close to the mention. We explore apposition, a common way for authors to provide information about entities. We model syntactic and semantic restrictions with a joint model that achieves state-of-the-art apposition extraction performance. We generalise from apposition to examine local descriptions specified close to the mention. We add local description to our state-of-the-art linker by using patterns to extract the descriptions and matching against this restricted context. Not only does this make for a more precise match, it also allows us to model failure to match. Local descriptions help disambiguate entities, further improving our state-of-the-art linker. The work in this thesis seeks to link textual entity mentions to knowledge bases. Linking is important for any task where external world knowledge is used, and resolving ambiguity is fundamental to advancing research into these problems.
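
The three-stage pipeline the thesis analyses (candidate search, disambiguation, NIL detection) can be sketched in a few lines. The toy knowledge base and word-overlap scoring below are purely illustrative, not any of the analysed systems.

```python
# A minimal sketch of the generic entity-linking pipeline: candidate
# search, disambiguation, and NIL detection. The toy KB and the naive
# overlap score are assumptions for illustration only.

KB = {
    "Sydney": ["Sydney", "Sydney, Nova Scotia", "University of Sydney"],
    "ABC": ["American Broadcasting Company", "Australian Broadcasting Corporation"],
}

def link(mention, context_words):
    candidates = KB.get(mention, [])        # search: precision matters here
    if not candidates:
        return "NIL"                        # KB has no entry for this mention
    # Disambiguate: score candidates by word overlap with the context.
    def score(cand):
        return len(set(cand.lower().split()) & {w.lower() for w in context_words})
    best = max(candidates, key=score)
    return best if score(best) > 0 else "NIL"   # no evidence -> NIL

print(link("ABC", ["The", "broadcaster", "aired", "Australian", "news"]))
# Australian Broadcasting Corporation
```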

3. Perkins, Drew. "Separating the Signal from the Noise: Predicting the Correct Entities in Named-Entity Linking." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412556.

Abstract:
In this study, I constructed a named-entity linking system that maps between contextual word embeddings and knowledge graph embeddings to predict correct entities. To establish the system, I first applied named-entity recognition to identify the entities of interest. I then performed candidate generation via locality sensitive hashing (LSH), where a candidate group of potential entities was created for each identified entity. Afterwards, a named-entity disambiguation component selected the most probable candidate. By concatenating contextual word embeddings and knowledge graph embeddings in the disambiguation component, I present a novel approach to named-entity linking. I conducted the experiments with the Kensho-Derived Wikimedia Dataset and the AIDA CoNLL-YAGO Dataset; the former was used for deployment and the latter is a benchmark dataset for entity linking tasks. Three deep learning models were evaluated on the named-entity disambiguation component with different context embeddings. The evaluation was treated as a classification task, where I trained the models to select the correct entity from a list of candidates. Optimized this way, the entire system can be used in recommendation engines, reaching a high F1 of 86% on the former dataset. On the benchmark dataset, the proposed method achieves an F1 of 79%.
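
The concatenation idea at the heart of the disambiguation component can be sketched as follows; the dimensions, architecture and random inputs are assumptions, not the thesis's actual setup.

```python
# A sketch of scoring candidates by concatenating a mention's
# contextual embedding with each candidate's knowledge-graph embedding.
# Dimensions, data and architecture are illustrative assumptions.
import torch
import torch.nn as nn

CTX_DIM, KG_DIM = 768, 200        # e.g. BERT-base context, KG embedding size

class CandidateScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CTX_DIM + KG_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),               # one logit per (mention, candidate)
        )

    def forward(self, ctx_emb, kg_embs):
        # ctx_emb: (CTX_DIM,); kg_embs: (n_candidates, KG_DIM)
        pairs = torch.cat(
            [ctx_emb.expand(kg_embs.size(0), -1), kg_embs], dim=-1)
        return self.net(pairs).squeeze(-1)   # (n_candidates,) scores

scorer = CandidateScorer()
ctx = torch.randn(CTX_DIM)                   # stand-in contextual embedding
cands = torch.randn(5, KG_DIM)               # 5 LSH-generated candidates
print(scorer(ctx, cands).argmax().item())    # index of the predicted entity
```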

4. Ruan, Wei. "Topic Segmentation and Medical Named Entities Recognition for Pictorially Visualizing Health Record Summary System." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39023.

Abstract:
Medical Information Visualization makes optimized use of digitized medical record data, e.g. the Electronic Medical Record. This thesis extends the Pictorial Information Visualization System (PIVS) developed by Yongji Jin (Jin, 2016) and Jiaren Suo (Suo, 2017), a graphical visualization system that picturizes a patient's medical history summary, depicting the patient's medical information so that patients and doctors can easily grasp past and present conditions. The summary information has previously been entered manually into the interface, where it can be taken from clinical notes. This study proposes a methodology for automatically extracting medical information from patients' clinical notes using Natural Language Processing techniques, in order to produce a medical history summarization from past medical records. We develop a Named Entity Recognition system to extract information about medical imaging procedures (performance date, human body location, imaging results and so on) and medications (medication names, frequencies and quantities) by applying a conditional random fields model with three main feature families: word-based, part-of-speech, and MetaMap semantic features. Adding MetaMap semantic features is a novel idea which raised accuracy compared to previous studies; our evaluation shows that our model achieves higher accuracy than others on medication extraction as a case study. To enhance the accuracy of entity extraction, we also propose a Topic Segmentation methodology for clinical notes that detects boundaries from differences in the classification probabilities of subsequent sequences, unlike traditional Topic Segmentation approaches such as TextTiling, TopicTiling and the Beeferman statistical model. With Topic Segmentation combined with Named Entity Extraction, we observed higher accuracy for medication extraction compared to the case without segmentation. Finally, we present a prototype that integrates our information extraction system with PIVS by simply building a database of interface coordinates and terms for human body parts.
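
The three feature families the abstract names (word-based, part-of-speech, and MetaMap semantic features) can be sketched in the dictionary format used by sklearn-crfsuite; the semantic-type lookup below is a stub standing in for MetaMap output, and all feature names are illustrative.

```python
# A sketch of CRF feature extraction with word-based, POS and
# semantic-type features. The SEMANTIC_TYPES dict is a stub for
# MetaMap (a separate UMLS tool); nothing here is the thesis's
# actual feature set.

SEMANTIC_TYPES = {"ibuprofen": "pharmacologic_substance",
                  "chest": "body_location"}          # stand-in for MetaMap

def token_features(tokens, pos_tags, i):
    word = tokens[i]
    feats = {
        "word.lower": word.lower(),                  # word-based features
        "word.isdigit": word.isdigit(),
        "word.suffix3": word[-3:],
        "pos": pos_tags[i],                          # part-of-speech feature
        "semtype": SEMANTIC_TYPES.get(word.lower(), "NONE"),  # semantic feature
    }
    if i > 0:
        feats["-1:word.lower"] = tokens[i - 1].lower()
        feats["-1:pos"] = pos_tags[i - 1]
    else:
        feats["BOS"] = True
    return feats

tokens = ["Take", "ibuprofen", "200", "mg", "twice", "daily"]
pos = ["VB", "NN", "CD", "NN", "RB", "RB"]
X = [[token_features(tokens, pos, i) for i in range(len(tokens))]]
# X, paired with BIO label sequences, could then train
# sklearn_crfsuite.CRF(algorithm="lbfgs") for medication extraction.
```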

5. Hairston, Dorian. "Pretend the Ball Is Named Jim Crow." UKnowledge, 2018. https://uknowledge.uky.edu/english_etds/78.

Abstract:
The poems that form this collection, titled Pretend the Ball Is Named Jim Crow, are written in the persona of Negro League Baseball's Josh Gibson (1911-1947) and those closest to him. Gibson is credited with hitting over 800 home runs in his career and was the first Negro League Baseball player to be inducted into Major League Baseball's Hall of Fame without ever playing an inning of Major League Baseball.

6. Bauer, Christian. "Stereotypical Gender Roles and their Patriarchal Effects in A Streetcar Named Desire." Thesis, Högskolan i Halmstad, Sektionen för humaniora (HUM), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-17170.

Abstract:
Stereotypical gender roles have probably existed as long as human culture and are such a natural part of our lives that we barely take notice of them. Nevertheless, images of what we perceive as typically masculine and feminine in appearance and behavior depend on the individual's perception. Within each gender one can find different stereotypes. A commonly assumed idea is that men are hard and tough, while women are soft and vulnerable. I find it interesting how stereotypes function and how they are preserved almost without our awareness. Once I started reading and researching the topic of stereotypes, it became clear to me that literature contains many stereotypes. The intention of this essay is to critically examine the stereotypical gender roles in the play A Streetcar Named Desire, written by Tennessee Williams in 1947. It is remarkable how the author portrays the three main characters: Stanley, Stella and Blanche. The sharp contrasts and the dynamics between them are fascinating.

7. Yoshida, Etsuko. "Patterns of use of referring expressions in English and Japanese dialogues." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/4036.

Abstract:
The main aim of the thesis is to investigate how discourse entities are linked with topic chaining and discourse coherence by showing that the choice and the distribution of referring expressions are correlated with the center transition patterns in the centering framework. The thesis provides an integrated interpretation of the behaviour of referring expressions in discourse by considering the relation between referential choice and the local and global coherence of discourse. The thesis has three stages: (1) to provide a semantic and pragmatic perspective in a contrastive study of referring expressions in English and Japanese spontaneous dialogues, (2) to analyse the way anaphoric and deictic expressions can contribute to discourse organisation in structuring and focusing the specific discourse segment, and (3) to investigate the choice and the distribution of referring expressions in the Map Task Corpus and to clarify the way the participants collaborate to judge the most salient entity in the current discourse against their common ground. Significantly, despite the grammatical differences in the form of reference between the two languages, the ways of discourse development in both data sets show distinctive similarities in the process by which the topic entities are introduced, established, and shifted away to the subsequent topic entities. Comparing and contrasting the choice and the distribution of referring expressions across the four transition patterns of centers, the crucial factors in the correspondence between English and Japanese referring expressions appear in the finding that topic chains of noun phrases are constructed and treated like proper names in discourse. This suggests that full noun phrases play a major role when the topic entity is established in the course of discourse. Since the existing centering model cannot handle the topic chain of noun phrases in the anaphoric relations in terms of the local focus of discourse, centering must be integrated with a model of global focus to account for both pronouns and full noun phrases that can be used for continuations across segment boundaries. Based on Walker's cache model, I argue that the forms of anaphors are not always shorter, and that the focus of attention is maintained by the chain of noun phrases rather than by (zero) pronouns, both within a discourse segment and over discourse segment boundaries. These processes are predicted and likely to underlie other uses of language as well. The results can modify the existing perspective that the focus of attention is normally represented by attenuated forms of reference and that full noun phrases always signal focus-shift. In addition, a necessary extension to the global coherence of discourse can link these anaphoric relations with deictic expressions over discourse segment boundaries. Finally, I argue that the choice and the distribution of referring expressions in the Map Task Corpus depend on the way the participants collaborate to judge the most salient entity in the current discourse against their common ground.
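
The four center transition patterns the abstract refers to are easy to state as code in one standard formulation (Brennan et al., 1987); this sketch assumes the backward-looking center (Cb) and preferred center (Cp) of each utterance are already computed.

```python
# A worked sketch of the four centering transitions: the transition is
# determined by whether Cb is maintained across utterances and whether
# it equals the current preferred center Cp.

def transition(cb_prev, cb_curr, cp_curr):
    """cb_prev: Cb of the previous utterance (None if undefined);
    cb_curr / cp_curr: Cb and Cp of the current utterance."""
    if cb_curr == cb_prev or cb_prev is None:
        return "CONTINUE" if cb_curr == cp_curr else "RETAIN"
    else:
        return "SMOOTH-SHIFT" if cb_curr == cp_curr else "ROUGH-SHIFT"

# "Nicola wrote a thesis. She annotated a corpus. The corpus was large."
print(transition(None, "Nicola", "Nicola"))      # CONTINUE
print(transition("Nicola", "corpus", "corpus"))  # SMOOTH-SHIFT
```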

8. Ek, Adam. "Extracting social networks from fiction: Imaginary and invisible friends: Investigating the social world of imaginary friends." Thesis, Stockholms universitet, Institutionen för lingvistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-145659.

Abstract:
This thesis develops an approach to extracting the social relations between characters in literary text to create a social network. The approach uses co-occurrences of named entities, keywords associated with the named entities, and the dependency relations that exist between the named entities to construct the network. Literary texts contain a large number of pronouns representing the named entities; to resolve the antecedents of these pronouns, a pronoun resolution system is implemented based on a standard pronoun resolution algorithm. The results indicate that the pronoun resolution system finds the correct named entity in 60.4% of all cases. The social network is evaluated by comparing character importance rankings based on graph properties with independently human-generated importance rankings. The generated social networks correlate moderately to strongly with the independent character ranking.
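
The co-occurrence step of the approach can be sketched as follows; the per-sentence entity lists are illustrative, and the thesis's keyword and dependency features are omitted.

```python
# A minimal sketch of building a co-occurrence social network: named
# entities appearing in the same sentence are linked, with edge weights
# counting co-occurrences. Sentence data here are toy examples.
from collections import Counter
from itertools import combinations

sentences = [
    ["Anna", "Tom"],            # entities detected per sentence
    ["Anna", "Tom", "Maria"],
    ["Anna", "Maria"],
]

edges = Counter()
for ents in sentences:
    for a, b in combinations(sorted(set(ents)), 2):
        edges[(a, b)] += 1      # one co-occurrence per sentence pair

# Rank characters by weighted degree, a simple graph-based importance
# measure of the kind the evaluation compares against human rankings.
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w
print(degree.most_common())
# [('Anna', 4), ('Tom', 3), ('Maria', 3)]
```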

9. Tang, Ling-Xiang. "Link discovery for Chinese/English cross-language web information retrieval." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/58416/1/Ling-Xiang_Tang_Thesis.pdf.

Abstract:
Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This can pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-language link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed, comprising topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments, carried out as part of the study, on better automatic generation of cross-lingual links. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and framework described in this thesis were used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
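
The n-gram mutual information idea for Chinese segmentation can be sketched with pointwise mutual information between adjacent characters: a boundary is placed where their association in the corpus is weak. The toy corpus and threshold below are illustrative; the thesis's exact formulation may differ.

```python
# A sketch of PMI-based boundary detection for Chinese segmentation.
# Corpus, threshold, and the bigram-only restriction are toy choices.
import math
from collections import Counter

corpus = "信息检索信息检索语言信息处理语言处理"
unigrams = Counter(corpus)
bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
N = len(corpus)

def pmi(a, b):
    p_ab = bigrams[a + b] / (N - 1)
    p_a, p_b = unigrams[a] / N, unigrams[b] / N
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

def segment(text, threshold=1.7):       # threshold tuned to the toy corpus
    words, word = [], text[0]
    for prev, curr in zip(text, text[1:]):
        if pmi(prev, curr) > threshold:
            word += curr                 # strongly associated: same word
        else:
            words.append(word)           # weak association: word boundary
            word = curr
    words.append(word)
    return words

print(segment("信息检索语言处理"))
# ['信息检索', '语言', '处理']
```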

10. Amancio, Marcelo Adriano. "Elaboração textual via definição de entidades mencionadas e de perguntas relacionadas aos verbos em textos simplificados do português" [Textual elaboration via definition of named entities and of questions related to verbs in simplified Portuguese texts]. Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-31082011-122100/.

Abstract:
This research addresses the topic of Textual Elaboration for low-literacy readers, i.e. people at the rudimentary and basic literacy levels according to the National Indicator of Functional Literacy (INAF, 2009). Text Elaboration consists of a set of techniques that add redundant material to texts, traditionally definitions, synonyms, antonyms, or any external information that assists in text understanding. The main goal of this research was to propose two methods of Textual Elaboration: (1) the use of short definitions for Named Entities in texts, and (2) the assignment of wh-questions related to verbs in text. The first method used the Rembrandt named entity recognition system and short definitions from Wikipedia, and was implemented in the PorSimples project's web-based Educational Facilita tool. It was preliminarily evaluated with a small group of low-literacy readers; the results were positive, indicating that the aid made reading easier for the users in the evaluation. The assignment of wh-questions related to verbs is a new task that was defined, studied, implemented and assessed during this research. Its evaluation was conducted with NLP researchers rather than with low-literacy readers; they evaluated the method positively and indicated which errors most harm the quality of the automatically generated questions. There is good evidence that the elaboration methods developed here can be useful in improving reading comprehension for the target audience, people with low literacy levels.
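
The first elaboration method can be sketched as follows: each recognized entity gets a short definition appended. Here the Rembrandt recognizer is replaced by a given entity list, and the definition comes from Wikipedia's public REST summary endpoint; both substitutions are assumptions for illustration.

```python
# A sketch of entity-definition elaboration: append a short Wikipedia
# definition after each recognized named entity. The entity list is
# assumed given (the thesis used the Rembrandt recognizer).
import requests

def short_definition(entity, lang="pt"):
    title = entity.replace(" ", "_")
    url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    if resp.ok:
        extract = resp.json().get("extract", "")
        return extract.split(". ")[0]        # keep only the first sentence
    return ""

def elaborate(text, entities):
    for ent in entities:
        definition = short_definition(ent)
        if definition:
            text = text.replace(ent, f"{ent} ({definition})", 1)
    return text

print(elaborate("A USP fica em São Paulo.", ["USP"]))
```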

11. Andreani, Vanessa. "Immersion dans des documents scientifiques et techniques : unités, modèles théoriques et processus" [Immersion in scientific and technical documents: units, theoretical models and processes]. PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00662668.

Abstract:
This thesis addresses the problem of accessing the scientific and technical information conveyed by large document collections. To allow users to find the information relevant to them, we worked on defining a model that meets the flexibility requirements of our industrial application context; to that end, we postulate the need to segment the information drawn from the documents into ontological planes. The resulting model enables documentary immersion through the combination of three complementary types of processes: endogenous processes (exploiting the corpus to analyse the corpus), exogenous processes (calling on external resources) and anthropogenic processes (in which the user's skills are treated as a resource). All of them contribute to giving the user a central place in the system, as an agent who interprets the information and constructs his or her own knowledge, once placed in an industrial or specialised context.

12. Watanabe, Willian Massami. "Auxílio à leitura de textos em português facilitado: questões de acessibilidade" [Support for reading texts in facilitated Portuguese: accessibility issues]. Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22092010-164526/.

Abstract:
The large capacity of the Web for providing information leads to multiple possibilities and opportunities for its users. The development of high-performance networks and ubiquitous devices allows users to retrieve content from any location and in the different scenarios or situations they might face in their lives. Unfortunately, the possibilities offered by the Web are not currently available to all. Individuals who do not have fully compliant software or hardware able to deal with the latest technologies, or who have some kind of physical or cognitive disability, find it difficult to interact with web pages, depending on the page structure and the ways in which the content is made available. Considering cognitive disabilities specifically, users classified as functionally illiterate face severe difficulties accessing web content. The heavy use of text in interface design creates an accessibility barrier for those who cannot read fluently in their mother tongue, due to both text length and linguistic complexity. In this context, this work aims at developing assistive technologies that assist functionally illiterate users in reading and understanding the textual content of websites. These assistive technologies make use of natural language processing (NLP) techniques that maximize reading comprehension: syntactic simplification, automatic summarization, lexical elaboration and named entity recognition. The techniques are used with the goal of automatically adapting textual content available on the Web for users with low literacy levels. This work describes the accessibility characteristics incorporated into the two resulting applications (Facilita and Educational Facilita), which focus on the limitations low-literacy users face in computer usage and experience. This work contributed the identification of accessibility requirements for low-literacy users, an accessibility model for automating WCAG conformance, and accessible solutions in the user-agent layer of web applications.

13. Nouvel, Damien. "Reconnaissance des entités nommées par exploration de règles d'annotation - Interpréter les marqueurs d'annotation comme instructions de structuration locale" [Named entity recognition by mining annotation rules: interpreting annotation markers as local structuring instructions]. PhD thesis, Université François Rabelais - Tours, 2012. http://tel.archives-ouvertes.fr/tel-00788630.

Abstract:
Over recent decades, the considerable development of information and communication technologies has profoundly changed the way we access knowledge. Faced with the influx of data and its diversity, it is necessary to develop efficient and robust technologies for searching it for information. Named entities (persons, places, organisations, dates, numerical expressions, brands, functions, etc.) are used to categorise, index or, more generally, manipulate content. Our work concerns their recognition and annotation within transcriptions of radio and television broadcasts, in the context of the Ester2 and Etape evaluation campaigns. In the first part, we address the problem of automatic named entity recognition. We describe the analyses generally conducted to process natural language, discuss various considerations about named entities (a retrospective of the notions covered, typologies, evaluation and annotation) and survey the state of the art of automatic approaches for recognising them. Through a characterisation of their linguistic nature and an interpretation of annotation as local structuring, we propose an instruction-based approach, founded on annotation markers (tags), whose originality consists in considering these elements in isolation (the beginning or the end of an annotation). In the second part, we review the data-mining work that inspires us and present a formal framework for exploring the data. Utterances are represented as sequences of enriched items (morphosyntax, lexicons), while preserving ambiguities at this stage. We propose an alternative segment-based formulation, which limits combinatorial explosion during exploration. Patterns correlated with one or more annotation markers are extracted as annotation rules, which can then be used by models to annotate texts. The last part describes the experimental setting, some specifics of the system implementation (mXS) and the results obtained. We show the value of extracting annotation rules broadly, even those with lower confidence. We experiment with segment patterns, which perform well when it comes to structuring the data in depth. More generally, we provide quantitative results on the system's performance from various points of view and in various configurations. They show that the proposed approach is competitive and that it opens up prospects for the observation of natural languages and for automatic annotation using data-mining techniques.
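
The marker-as-instruction idea can be reduced to a toy sketch: begin and end markers are mined independently, and the tokens that precede them become annotation rules scored by confidence. This is a deliberate simplification, not the mXS system's actual pattern mining.

```python
# A toy reduction of mining annotation rules from isolated markers:
# count how often each token immediately precedes a begin or end
# marker, and score "token -> marker" rules by confidence.
from collections import Counter

annotated = [
    "according to <pers> Mr Smith </pers> the plan failed".split(),
    "a report by <pers> Mr Jones </pers> confirmed it".split(),
    "the city of <loc> Paris </loc> hosted it".split(),
]

before_marker = Counter()   # (token, marker): token precedes marker
token_counts = Counter()    # token occurrences in the plain text
for utt in annotated:
    prev = None
    for tok in utt:
        if tok.startswith("<"):                 # a begin or end marker
            if prev is not None:
                before_marker[(prev, tok)] += 1
        else:
            token_counts[tok] += 1
            prev = tok

# Keep all rules, including low-confidence ones, as the abstract
# recommends extracting rules broadly.
rules = {(t, m): n / token_counts[t] for (t, m), n in before_marker.items()}
for (t, m), conf in sorted(rules.items(), key=lambda kv: -kv[1]):
    print(f"{t!r} -> {m}  (confidence {conf:.2f})")
# e.g. 'to' -> <pers>  (confidence 1.00); every toy rule here reaches 1.00
```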

14. Rafaj, Filip. "Pojmenované entity a ontologie metodami hlubokého učení" [Named Entities and Ontologies with Deep Learning Methods]. Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-438029.

Abstract:
In this master's thesis we describe a method for linking named entities in a given text to a knowledge base, i.e. Named Entity Linking. Using a deep neural architecture together with BERT contextualized word embeddings, we created a semi-supervised model that jointly performs Named Entity Recognition and Named Entity Disambiguation. The model outputs a Wikipedia ID for each entity detected in an input text. To compute contextualized word embeddings we used pre-trained BERT without making any changes to it (no fine-tuning). We experimented with components of our model and various versions of BERT embeddings, and tested several different ways of using the contextual embeddings. Our model is evaluated using standard metrics and surpasses the scores of models that established the state of the art before the spread of pre-trained contextualized models. Its scores are comparable to those of current state-of-the-art models.
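
The frozen-BERT step can be sketched with the HuggingFace transformers library; the checkpoint, the subword-averaging pooling and the example mention are assumptions, not the thesis's exact configuration.

```python
# A sketch of computing contextualized embeddings from a frozen,
# pre-trained BERT (no fine-tuning), then pooling a mention's subword
# vectors. Checkpoint and pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()                                     # frozen: no gradient updates

sentence = "Prague is the capital of the Czech Republic."
inputs = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
offsets = inputs.pop("offset_mapping")[0]        # (seq_len, 2) char spans
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)

# Average the subword vectors covering the mention "Prague"; this
# vector could then feed a joint recognition/disambiguation network.
start = sentence.find("Prague")
end = start + len("Prague")
idx = [i for i, (s, e) in enumerate(offsets.tolist())
       if s >= start and e <= end and e > s]
mention_emb = hidden[0, idx].mean(dim=0)
print(mention_emb.shape)                         # torch.Size([768])
```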

15. Dias, Mariana Rebelo. "Discovery of sensitive data with natural language processing." Master's thesis, 2019. http://hdl.handle.net/10071/20905.

Abstract:
The process of protecting sensitive data is continually growing in importance, especially as a result of the directives and laws imposed by the European Union. The effort to create automatic systems is continuous, but in most cases the processes behind them are still manual or semi-automatic. In this work, we developed a component that can extract and classify sensitive data from unstructured text in European Portuguese. The objective was to create a system that allows organizations to understand their data and comply with legal and security requirements. We studied a hybrid approach to the problem of Named Entity Recognition for the Portuguese language, combining several techniques: rule-based/lexicon-based models, machine learning algorithms and neural networks. The rule-based and lexicon-based approaches were used only for a set of specific classes. For the remaining entity classes, the SpaCy and Stanford NLP tools were tested, two statistical models (Conditional Random Fields and Random Forest) were implemented, and finally a Bidirectional-LSTM approach was tested. The best results were achieved with the Stanford NER model (86.41%) from the Stanford NLP tool. Among the statistical models, Conditional Random Fields obtained the best results, with an f1-score of 65.50%. The Bi-LSTM approach achieved an f1-score of 83.01%. The corpora used for training and testing were the HAREM Golden Collection, the SIGARRA News Corpus and the DataSense NER Corpus.
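
The rule/lexicon layer used for the specific classes can be sketched with regular expressions and a small gazetteer; the class inventory and patterns below are assumptions, not the thesis's actual rules.

```python
# A sketch of a rule/lexicon NER layer for pattern-like sensitive
# classes; the statistical models would handle the remaining classes.
# Class names, patterns and gazetteer entries are illustrative.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "POSTAL_CODE": re.compile(r"\b\d{4}-\d{3}\b"),      # Portuguese format
}
GAZETTEER = {"lisboa": "LOCATION", "porto": "LOCATION"}

def rule_based_ner(text):
    entities = []
    for label, pat in PATTERNS.items():
        entities += [(m.group(), label) for m in pat.finditer(text)]
    for word in re.findall(r"\w+", text):
        if word.lower() in GAZETTEER:
            entities.append((word, GAZETTEER[word.lower()]))
    return entities

print(rule_based_ner("Contacte ana@mail.pt, 1000-001 Lisboa"))
# [('ana@mail.pt', 'EMAIL'), ('1000-001', 'POSTAL_CODE'), ('Lisboa', 'LOCATION')]
```

In the hybrid setup the abstract describes, output from a layer like this would be merged with the statistical models' predictions to form the final classification.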