
Dissertations / Theses on the topic 'Document Summarization'


Consult the top 50 dissertations / theses for your research on the topic 'Document Summarization.'



1

Tohalino, Jorge Andoni Valverde. "Extractive document summarization using complex networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24102018-155954/.

Full text
Abstract:
Due to the large amount of textual information available on the Internet, the task of automatic document summarization has gained significant importance. Document summarization is important because it focuses on developing techniques for finding relevant and concise content in large volumes of information without changing its original meaning. The purpose of this Master's work is to use network theory concepts for extractive document summarization, for both Single Document Summarization (SDS) and Multi-Document Summarization (MDS). In this work, documents are modeled as networks in which sentences are represented as nodes, with the aim of extracting the most relevant sentences through ranking algorithms. The edges between nodes are established in different ways. The first approach for edge calculation is based on the number of common nouns between two sentences (network nodes). Another approach creates an edge based on the similarity between two sentences. To calculate this similarity, we used the vector space model based on Tf-Idf weighting and word embeddings for the vector representation of the sentences. We also distinguish between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer) by using multilayer network models for the Multi-Document Summarization task. In this approach, each network layer represents a document of the document set to be summarized. In addition to measurements typically used in complex networks, such as node degree, clustering coefficient and shortest paths, the network characterization is also guided by dynamical measurements of complex networks, including symmetry, accessibility and absorption time. The generated summaries were evaluated using different corpora for both Portuguese and English. The ROUGE-1 metric was used to validate the generated summaries. The results suggest that simpler models, such as noun-based and Tf-Idf-based networks, achieved better performance than models based on word embeddings. Excellent results were also achieved by using the multilayer representation of documents for MDS. Finally, we concluded that several measurements can be used to improve the characterization of networks for the summarization task.
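To make the network-based pipeline described in this abstract concrete, the following is a minimal sketch, not the thesis implementation (the library choices and parameters are assumptions): sentences become nodes, edges are weighted by Tf-Idf cosine similarity, and a ranking algorithm (PageRank here) selects the top sentences.

```python
# Hedged sketch of a sentence-network extractive summarizer, assuming
# scikit-learn for Tf-Idf vectors and networkx for the ranking step.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_summary(sentences, n_keep=3):
    # Vector space model with Tf-Idf weighting for each sentence.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    # Build the sentence network; only positive similarities become edges.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > 0:
                graph.add_edge(i, j, weight=float(sim[i, j]))
    # Rank sentences and return the best ones in their original order.
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(top)]
```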
2

Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.

Full text
Abstract:
With the rapid growth of the World Wide Web and online information services, more and more information is available and accessible online. Automatic summarization is an indispensable solution to reduce the information overload problem. Multi-document summarization is useful to provide an overview of a topic and allow users to zoom in for more details on aspects of interest. This paper reports three types of multi-document summaries generated for a set of research abstracts, using different summarization approaches: a sentence-based summary generated by a MEAD summarization system that extracts important sentences using various features, another sentence-based summary generated by extracting research objective sentences, and a variable-based summary focusing on research concepts and relationships. A user evaluation was carried out to compare the three types of summaries. The evaluation results indicated that the majority of users (70%) preferred the variable-based summary, while 55% of the users preferred the research objective summary, and only 25% preferred the MEAD summary.
3

Huang, Fang. "Multi-document summarization with latent semantic analysis." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419255.

Full text
4

Grant, Harald. "Extractive Multi-document Summarization of News Articles." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158275.

Full text
Abstract:
Publicly available data grows exponentially through web services and technological advancements. Multi-document summarization (MDS) can be used to comprehend large data streams. In this research, the area of multi-document summarization is investigated. Multiple systems for extractive multi-document summarization are implemented using modern techniques, in the form of the pre-trained BERT language model for word embeddings and sentence classification. This is combined with well-proven techniques: the TextRank ranking algorithm, the Waterfall architecture and anti-redundancy filtering. The systems are evaluated on the DUC-2002, 2006 and 2007 datasets using the ROUGE metric. The results show that the BM25 sentence representation implemented in the TextRank model, using the Waterfall architecture and an anti-redundancy technique, outperforms the other implementations and provides results competitive with other state-of-the-art systems. A cohesive model is derived from the leading system and tried in a user study using a real-world application: a real-time news detection application with users from the news domain. The study shows a clear preference for cohesive summaries in the case of extractive multi-document summarization, with the cohesive summary preferred in the majority of cases.
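The anti-redundancy filtering mentioned in this abstract can be pictured with a small, hypothetical sketch (not the thesis code): after ranking, sentences are added greedily and any candidate too similar to an already selected one is skipped.

```python
# Hedged sketch of greedy anti-redundancy filtering; `ranked` is a list of
# (sentence, score) pairs sorted by descending score and `sim` is any pairwise
# sentence-similarity function (e.g. cosine similarity of sentence vectors).
def filter_redundant(ranked, sim, max_sentences=5, threshold=0.5):
    selected = []
    for sentence, _score in ranked:
        # Keep the sentence only if it is sufficiently different from all
        # sentences already placed in the summary.
        if all(sim(sentence, chosen) < threshold for chosen in selected):
            selected.append(sentence)
        if len(selected) == max_sentences:
            break
    return selected
```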
5

Geiss, Johanna. "Latent semantic sentence clustering for multi-document summarization." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609761.

Full text
6

Chellal, Abdelhamid. "Event summarization on social media stream : retrospective and prospective tweet summarization." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30118/document.

Full text
Abstract:
User-generated content on social media, such as Twitter, provides in many cases the latest news before traditional media, which allows having a retrospective summary of events and being updated in a timely fashion whenever a new development occurs. However, social media, while being a valuable source of information, can also be overwhelming given the volume and the velocity of published information. To shield users from being overwhelmed by irrelevant and redundant posts, retrospective summarization and prospective notification (real-time summarization) were introduced as two complementary tasks of information seeking on document streams. The former aims to select a list of relevant and non-redundant tweets that capture "what happened". In the latter, systems monitor the live post stream and push relevant and novel notifications as soon as possible. Our work falls within these frameworks and focuses on developing tweet summarization approaches for the two aforementioned scenarios. It aims at providing summaries that capture the key aspects of the event of interest, to help users efficiently acquire information and follow the development of long ongoing events on social media. Nevertheless, the tweet summarization task faces many challenges that stem from, on the one hand, the high volume, velocity and variety of the published information and, on the other hand, the quality of tweets, which can vary significantly. In prospective notification, the core task is relevance and novelty detection in real time. For timeliness, a system may choose to push new updates in real time or may choose to trade timeliness for higher notification quality. Our contributions address these levels. First, we introduce the Word Similarity Extended Boolean Model (WSEBM), a relevance model that does not rely on stream statistics and takes advantage of a word embedding model. We use word similarity instead of traditional weighting techniques; by doing this, we overcome the shortness and word-mismatch issues in tweets. The intuition behind our proposition is that a context-aware similarity measure such as word2vec is able to consider different words with the same semantic meaning and hence allows offsetting the word-mismatch issue when calculating the similarity between a tweet and a topic. Second, we propose to compute the novelty score of an incoming tweet with respect to all words of the tweets already pushed to the user instead of using pairwise comparison. The proposed novelty detection method scales better and reduces the execution time, which fits real-time tweet filtering. Third, we propose an adaptive learning-to-filter approach that leverages social signals as well as query-dependent features. To overcome the issue of setting a relevance threshold, we use a binary classifier that predicts the relevance of the incoming tweet. In addition, we show the gain that can be achieved by taking advantage of ongoing relevance feedback. Finally, we adopt a real-time push strategy and show that the proposed approach achieves promising performance in terms of quality (relevance and novelty) with low latency, whereas state-of-the-art approaches tend to trade latency for higher quality. This thesis also explores a novel approach to generate a retrospective summary that follows a different paradigm than the majority of state-of-the-art methods. We consider summary generation as an optimization problem that takes into account topical and temporal diversity.
Tweets are filtered and incrementally clustered into two cluster types, namely topical clusters based on content similarity and temporal clusters based on publication time. Summary generation is formulated as an integer linear program (see the sketch below) in which the unknown variables are binary, the objective function is maximized and the constraints ensure that at most one post per cluster is selected while respecting the defined summary length limit.
7

Linhares, Pontes Elvys. "Compressive Cross-Language Text Summarization." Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0232/document.

Full text
Abstract:
The popularization of social networks and digital documents has quickly increased the information available on the Internet. However, this huge amount of data cannot be analyzed manually. Natural Language Processing (NLP) analyzes the interactions between computers and human languages in order to process and analyze natural language data. NLP techniques incorporate a variety of methods, including linguistics, semantics and statistics, to extract entities and relationships and to understand a document. Among several NLP applications, we are interested, in this thesis, in cross-language text summarization, which produces a summary in a language different from the language of the source documents. We also analyzed other NLP tasks (word encoding representation, semantic similarity, sentence and multi-sentence compression) to generate more stable and informative cross-lingual summaries. Most NLP applications (including all types of text summarization) use some kind of similarity measure to analyze and compare the meaning of words, chunks, sentences and texts. One way to analyze this similarity is to generate a representation of these sentences that captures their meaning. The meaning of sentences is defined by several elements, such as the context of words and expressions, the order of words and the previous information. Simple metrics, such as the cosine metric and Euclidean distance, provide a measure of similarity between two sentences; however, they do not analyze the order of words or multi-word expressions. Analyzing these problems, we propose a neural network model that combines recurrent and convolutional neural networks to estimate the semantic similarity of a pair of sentences (or texts) based on the local and general contexts of words. Our model predicted better similarity scores than baselines by better analyzing the local and general meanings of words and multi-word expressions. In order to remove redundancies and non-relevant information from similar sentences, we propose a multi-sentence compression method that compresses similar sentences by fusing them into correct, short compressions that contain the main information of these similar sentences. We model clusters of similar sentences as word graphs. Then, we apply an integer linear programming model that guides the compression of these clusters based on a list of keywords: we look for a path in the word graph that has good cohesion and contains as many keywords as possible. Our approach outperformed baselines by generating more informative and correct compressions for the French, Portuguese and Spanish languages. Finally, we combine these methods to build a cross-language text summarization system. Our system is an {English, French, Portuguese, Spanish}-to-{English, French} cross-language text summarization framework that analyzes the information in both languages to identify the most relevant sentences. Inspired by compressive text summarization methods in monolingual analysis, we adapt our multi-sentence compression method to this problem in order to keep only the main information. Our system proves to be a good alternative for compressing redundant information and preserving relevant information, improving informativeness scores without losing grammatical quality for French-to-English cross-lingual summaries.
Analyzing {English, French, Portuguese, Spanish}-to-{English, French} cross-lingual summaries, our system significantly outperforms state-of-the-art extractive baselines for all these languages. In addition, we analyze the cross-language text summarization of transcript documents. Our approach achieved better and more stable scores even for these documents, which contain grammatical errors and missing or inexact information.
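The word-graph idea behind the multi-sentence compression method described above can be illustrated with a simplified, hypothetical sketch; the keyword list and the integer linear programming constraints of the actual method are omitted here.

```python
# Hedged sketch of word-graph multi-sentence compression: sentences sharing
# words are merged into one graph and a short, well-supported path from
# <start> to <end> is returned as the compression.
import networkx as nx

def compress(sentences):
    graph = nx.DiGraph()
    for sentence in sentences:
        tokens = ["<start>"] + sentence.lower().split() + ["<end>"]
        for a, b in zip(tokens, tokens[1:]):
            # Count how often each word transition occurs across sentences.
            if graph.has_edge(a, b):
                graph[a][b]["count"] += 1
            else:
                graph.add_edge(a, b, count=1)
    # Frequent transitions get cheaper edges, so the shortest path prefers
    # word sequences supported by several sentences.
    for a, b, data in graph.edges(data=True):
        data["weight"] = 1.0 / data["count"]
    path = nx.shortest_path(graph, "<start>", "<end>", weight="weight")
    return " ".join(path[1:-1])
```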
8

Kipp, Darren. "Shallow semantics for topic-oriented multi-document automatic text summarization." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27772.

Full text
Abstract:
There are presently a number of NLP tools available which can provide semantic information about a sentence. Connexor Machinese Semantics is one of the most elaborate of such tools in terms of the information it provides. It has been hypothesized that semantic analysis of sentences is required in order to make significant improvements in automatic summarization. Elaborate semantic analysis is still not particularly feasible. In this thesis, I will look at what shallow semantic features are available from an off-the-shelf semantic analysis tool that might improve the responsiveness of a summary. The aim of this work is to use the information made available as an intermediary approach to improving the responsiveness of summaries. While this approach is not likely to perform as well as full semantic analysis, it is considerably easier to achieve and could provide an important stepping stone in the direction of deeper semantic analysis. As a significant portion of this task, we develop mechanisms in various programming languages to view, process, and extract relevant information and features from the data.
9

Hennig, Leonhard [Verfasser], and Sahin [Akademischer Betreuer] Albayrak. "Content Modeling for Automatic Document Summarization / Leonhard Hennig. Betreuer: Sahin Albayrak." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1017593698/34.

Full text
10

Tsai, Chun-I. "A Study on Neural Network Modeling Techniques for Automatic Document Summarization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395940.

Full text
Abstract:
With the Internet becoming widespread, countless articles and multimedia content have filled our daily life. How to effectively acquire the knowledge we seek has become an unavoidable issue. To help people grasp the main theme of a document faster, many studies are dedicated to automatic document summarization, which aims to condense one or more documents into a short text while keeping as much of the essential content as possible. Automatic document summarization can be categorized into extractive and abstractive approaches. Extractive summarization selects the set of sentences most relevant to a target ratio and assembles them into a concise summary. Abstractive summarization, on the other hand, produces an abstract after understanding the key concepts of a document. The recent past has seen a surge of interest in developing deep neural network-based supervised methods for both types of automatic summarization. This thesis continues this line of work and exploits two kinds of frameworks, which integrate convolutional neural networks (CNN), long short-term memory (LSTM) and multilayer perceptrons (MLP), for extractive speech summarization. The empirical results suggest the effectiveness of neural summarizers compared with other conventional supervised methods. Finally, to further explore the ability of neural networks, we experiment with and analyze the results of applying sequence-to-sequence neural networks to abstractive summarization.
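As a rough illustration of the kind of neural extractive summarizer this abstract describes, the sketch below (PyTorch is an assumed choice and the layer sizes are placeholders, not the architecture evaluated in the thesis) encodes a sentence with an LSTM and scores it with an MLP.

```python
# Hedged sketch: an LSTM encodes a sentence given pre-computed word embeddings
# and an MLP outputs the probability that the sentence belongs in the summary.
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    def __init__(self, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, word_embeddings):
        # word_embeddings: (batch, n_words, emb_dim)
        _, (h_n, _) = self.lstm(word_embeddings)
        return torch.sigmoid(self.mlp(h_n[-1]))  # (batch, 1) inclusion probability

# Toy usage: 4 sentences of 20 words with 100-dimensional embeddings.
scores = SentenceScorer()(torch.randn(4, 20, 100))
```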
11

Keskes, Iskandar. "Discourse analysis of arabic documents and application to automatic summarization." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30023/document.

Full text
Abstract:
Within a discourse, texts and conversations are not just a juxtaposition of words and sentences. They are rather organized in a structure in which discourse units are related to each other so as to ensure both discourse coherence and cohesion. Discourse structure has been shown to be useful in many NLP applications, including machine translation, natural language generation and language technology in general. The usefulness of discourse in NLP applications mainly depends on the availability of powerful discourse parsers. To build such parsers and improve their performance, several resources have been manually annotated with discourse information within different theoretical frameworks. Most available resources are in English. Recently, several efforts have been undertaken to develop manually annotated discourse resources for other languages such as Chinese, German, Turkish, Spanish and Hindi. Surprisingly, discourse processing in Modern Standard Arabic (MSA) has received less attention despite the fact that MSA is a language with more than 422 million speakers in 22 countries. Computational processing of the Arabic language has received great attention in the literature for over twenty years. Several resources and tools have been built to deal with Arabic non-concatenative morphology and Arabic syntax, going from shallow to deep parsing. However, the field is still largely vacant at the discourse level. As far as we know, the sole effort towards Arabic discourse processing was made in the Leeds Arabic Discourse Treebank, which extends the Penn Discourse TreeBank model to MSA. In this thesis, we propose to go beyond the annotation of explicit relations that link adjacent units, by completely specifying the semantic scope of each discourse relation, making transparent an interpretation of the text that takes into account the semantic effects of discourse relations. In particular, we propose the first effort towards a semantically driven approach to Arabic texts following the Segmented Discourse Representation Theory (SDRT). Our main contributions are: a study of the feasibility of building recursive and complete discourse structures of Arabic texts. In particular, we propose: an annotation scheme for the full discourse coverage of Arabic texts, in which each constituent is linked to other constituents. A document is then represented by a directed acyclic graph, which captures explicit and implicit relations as well as complex discourse phenomena, such as long-distance attachments, long-distance discourse pop-ups and crossed dependencies. A novel discourse relation hierarchy. We study the rhetorical relations from a semantic point of view by focusing on their effect on meaning and not on how they are lexically triggered by discourse connectives, which are often ambiguous, especially in Arabic. A thorough quantitative analysis (in terms of discourse connectives, relation frequencies, proportion of implicit relations, etc.) and qualitative analysis (inter-annotator agreements and error analysis) of the annotation campaign. An automatic discourse parser, where we investigate both automatic segmentation of Arabic texts into elementary discourse units and automatic identification of explicit and implicit Arabic discourse relations. An application of our discourse parser to Arabic text summarization: we compare tree-based vs. graph-based discourse representations for producing indicative summaries and show that the full discourse coverage of a document is definitely a plus.
12

Qumsiyeh, Rani Majed. "Easy to Find: Creating Query-Based Multi-Document Summaries to Enhance Web Search." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2713.

Full text
Abstract:
Current web search engines, such as Google, Yahoo!, and Bing, rank the set of documents S retrieved in response to a user query Q and display each document with a title and a snippet, which serves as an abstract of the corresponding document in S. Snippets, however, are not as useful as they are designed to be, i.e., to assist search engine users in quickly identifying results of interest, if they exist, without browsing through the documents in S, since they (i) often include very similar information and (ii) do not capture the main content of the corresponding documents. Moreover, when the intended information need specified in a search query is ambiguous, it is difficult, if not impossible, for a search engine to identify precisely the set of documents that satisfy the user's intended request. Furthermore, a document title retrieved by web search engines is not always a good indicator of the content of the corresponding document, since it is not always informative. All these design problems can be addressed by our proposed query-based, web informative summarization engine, denoted Q-WISE. Q-WISE clusters the documents in S, which allows users to view segregated document collections created according to the specific topic covered in each collection, and generates a concise/comprehensive summary for each collection/cluster of documents. Q-WISE is also equipped with a query suggestion module that guides its users in formulating a keyword query, which facilitates the web search and improves the precision and recall of the search results. Experimental results show that Q-WISE is highly effective and efficient in generating a high-quality summary for each cluster of documents on a specific topic, retrieved in response to a Q-WISE user's query. The empirical study also shows that Q-WISE's clustering algorithm is highly accurate, that the labels generated for the clusters are useful and often reflect the topic of the corresponding clustered documents, and that the performance of the query suggestion module of Q-WISE is comparable to commercial web search engines.
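The clustering step described in this abstract, grouping retrieved documents by topic before producing one summary per cluster, can be pictured with a short, hedged sketch. Tf-Idf vectors and k-means are illustrative assumptions, not the system's own algorithm.

```python
# Hedged sketch: group retrieved documents into topical clusters; each cluster
# would then be summarized separately.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_results(documents, n_clusters=3):
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    clusters = {}
    for doc, label in zip(documents, labels):
        clusters.setdefault(label, []).append(doc)
    return clusters
```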
13

Karlsson, Simon. "Using semantic folding with TextRank for automatic summarization." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210040.

Full text
Abstract:
This master's thesis deals with automatic summarization of text and how semantic folding can be used as a similarity measure between sentences in the TextRank algorithm. The method was implemented and compared with two common similarity measures: cosine similarity of tf-idf vectors and the number of overlapping terms in two sentences. The three methods were implemented, and the linguistic features used in their construction were stop words, part-of-speech filtering and stemming. Five different part-of-speech filters were used, with different mixtures of nouns, verbs, and adjectives. The three methods were evaluated by summarizing documents from the Document Understanding Conference and comparing the results to gold-standard summaries created by human judges. Comparison between the system summaries and the gold-standard summaries was made with the ROUGE-1 measure. The algorithm with semantic folding performed worst of the three methods, but only 0.0096 worse in F-score than cosine similarity of tf-idf vectors, which performed best. For semantic folding, the average precision was 46.2% and recall 45.7% for the best-performing part-of-speech filter.
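The ROUGE-1 comparison used in this evaluation boils down to unigram overlap between a system summary and a gold-standard summary. The sketch below makes the precision, recall and F-score computation concrete; official evaluations use the ROUGE toolkit itself, so this is only an illustration.

```python
# Hedged sketch of ROUGE-1: unigram overlap between system and reference.
from collections import Counter

def rouge_1(system, reference):
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((sys_counts & ref_counts).values())  # clipped unigram matches
    precision = overlap / max(sum(sys_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f_score
```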
14

Aker, Ahmet. "Entity type modeling for multi-document summarization : generating descriptive summaries of geo-located entities." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/5138/.

Full text
Abstract:
In this work we investigate the application of entity type models in extractive multi-document summarization, using the automatic caption generation for images of geo-located entities (e.g. Westminster Abbey, Loch Ness, Eiffel Tower) as an application scenario. Entity type models contain sets of patterns aiming to capture the ways the geo-located entities are described in natural language. They are automatically derived from texts about geo-located entities of the same type (e.g. churches, lakes, towers). We collect texts about geo-located entities from Wikipedia because our investigation shows that the information humans associate with entity types positively correlates with the information contained in Wikipedia articles about the same entity types. We integrate entity type models into a multi-document summarizer and use them to address the two major tasks in extractive multi-document summarization: sentence scoring and summary composition. We experiment with three different representation methods for entity type models: signature words, n-gram language models and dependency patterns. We first propose that entity type models will improve sentence scoring, i.e. they will help to assign higher scores to sentences which are more relevant to the output summary than to those which are not. Secondly, we claim that summary composition can be improved using entity type models. We follow two different approaches to integrate the entity type models into our multi-document summarizer. In the first approach we use the entity type models in combination with existing standard summarization features to score the sentences. We also manually categorize the set of patterns by the information types they describe and use them to reduce redundancy and to produce better flow within the summary. The second approach aims to eliminate the need for manual intervention and to fully automate the process of summary generation. As in the first approach, the sentences are scored using standard summarization features and entity type models. However, unlike the first approach, we fully automate the process of summary composition by simultaneously addressing the redundancy and flow aspects of the summary. We evaluate the summarizer with integrated entity type models relative to (1) a summarizer using standard text-related features commonly used in summarization and (2) the Wikipedia location descriptions. The latter constitute a strong baseline for automated summaries to be evaluated against. The automated summaries are evaluated against human reference summaries using ROUGE and human readability evaluation, as is common practice in automatic summarization. Our results show that entity type models significantly improve the quality of output summaries over that of summaries generated using standard summarization features and over the Wikipedia baseline summaries. The representation of entity type models using dependency patterns is superior to the representations using signature words and n-gram language models.
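The simplest of the three entity type model representations mentioned above, signature words, can be pictured with a tiny hypothetical sketch: a sentence about a new entity is scored by the weighted type-specific words it contains. The word list and weights below are invented for illustration and are not the thesis data.

```python
# Hedged sketch of signature-word scoring for an entity type model.
def signature_word_score(sentence, signature_words):
    tokens = set(sentence.lower().split())
    return sum(weight for word, weight in signature_words.items() if word in tokens)

# Hypothetical "church" type model with illustrative weights.
church_model = {"nave": 2.0, "altar": 1.5, "built": 1.0, "gothic": 1.2}
print(signature_word_score("The gothic nave was built in 1245.", church_model))
```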
15

Fang, Yimai. "Proposition-based summarization with a coherence-driven incremental model." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287468.

Full text
Abstract:
Summarization models which operate on meaning representations of documents have been neglected in the past, although they are a very promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation. My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent the working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and then a forgetting mechanism is applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated using the propositions which are frequently retained. Originally, this model was only played through by hand by its inventors using human-created propositions. In this work, I turned it into a fully automatic model using current NLP technologies. First, I create propositions by obtaining and then transforming a syntactic parse. Second, I have devised algorithms to numerically evaluate alternative ways of adding a new proposition, as well as to predict necessary changes in the tree. Third, I compared different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains. In the first group of experiments, my summarizer realizes summary propositions by sentence extraction. These experiments show that my summarizer outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, which is a collaborative project. I have investigated the option of compressing extracted sentences, but generation from propositions has been shown to provide better information packaging.
16

Bost, Xavier. "A storytelling machine ? : automatic video summarization : the case of TV series." Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0216/document.

Full text
Abstract:
These past ten years, TV series have become increasingly popular. In contrast to classical TV series consisting of narratively self-sufficient episodes, modern TV series develop continuous plots over dozens of successive episodes. However, the narrative continuity of modern TV series directly conflicts with the usual viewing conditions: due to modern viewing technologies, the new seasons of TV series are being watched over short periods of time. As a result, viewers are largely disengaged from the plot, both cognitively and emotionally, when about to watch new seasons. Such a situation provides video summarization with remarkably realistic use-case scenarios, which we detail in Chapter 1. Furthermore, automatic movie summarization, long restricted to trailer generation based on low-level features, finds in TV series an unprecedented opportunity to address in well-defined conditions the so-called semantic gap: summarization of narrative media requires content-oriented approaches capable of bridging the gap between low-level features and human understanding. We review in Chapter 2 the two main approaches adopted so far to address automatic movie summarization. Chapter 3 is dedicated to the various subtasks needed to build the intermediary representations on which our summarization framework relies: Section 3.2 focuses on video segmentation, whereas the rest of Chapter 3 is dedicated to the extraction of different mid-level features, either saliency-oriented (shot size, background music) or content-related (speakers). In Chapter 4, we make use of social network analysis as a possible way to model the plot of modern TV series: the narrative dynamics can be properly captured by the evolution over time of the social network of interacting characters. Nonetheless, we have to address here the sequential nature of the narrative when taking instantaneous views of the state of the relationships between the characters. We show that standard time-windowing approaches cannot properly handle this case, and we detail our own method for extracting dynamic social networks from narrative media. Chapter 5 is dedicated to the final generation and evaluation of character-oriented summaries, able both to reflect the plot dynamics and to emotionally re-engage viewers in the narrative. We evaluate our framework by performing a large-scale user study in realistic conditions.
17

Mello, Rafael Ferreira Leite de. "A solution to extractive summarization based on document type and a new measure for sentence similarity." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/15257.

Full text
Abstract:
The Internet is an enormous and fast-growing digital repository encompassing billions of documents of diverse subjects, quality, reliability, etc. It is increasingly difficult to scavenge useful information from it. Thus, it is necessary to provide automatic techniques that allow users to save time and resources. Automatic text summarization techniques may offer a way out of this problem. Text summarization (TS) aims to automatically compress one or more documents to present their main ideas in less space. TS platforms receive one or more documents as input and generate a summary. In recent years, a variety of text summarization methods has been proposed. However, due to the different document types (such as news, blogs, and scientific articles), it has become difficult to create a general TS application that produces expressive summaries for each type. Another relevant related problem is measuring the degree of similarity between sentences, which is used in applications such as text summarization, information retrieval, image retrieval, text categorization, and machine translation. Recent works report several efforts to evaluate sentence similarity by representing sentences using bag-of-words vectors or a tree of the syntactic information among words. However, most of these approaches do not take into consideration the sentence meaning and the word order. This thesis proposes: (i) a new text summarization solution which identifies the document type before performing the summarization, and (ii) a new sentence similarity measure based on lexical, syntactic and semantic evaluation to deal with the meaning and word-order problems. The prior identification of the document type allows the summarization solution to select the methods that are most suitable for each type of text. This thesis also performs a detailed assessment of the most widely used text summarization methods to select those that create more informative summaries in the news, blog and scientific article contexts. The proposed sentence similarity measure is completely unsupervised and reaches results similar to human annotators on the dataset proposed by Li et al. The proposed measure was also satisfactorily applied to evaluate the similarity between summaries and to eliminate redundancy in multi-document summarization.
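To illustrate the flavour of a sentence similarity measure that goes beyond bag-of-words, the sketch below combines lexical overlap with a simple word-order component. The weighting and formula are assumptions made for illustration only; they are not the lexical-syntactic-semantic measure proposed in the thesis, and the semantic component is omitted for brevity.

```python
# Hedged sketch: similarity = weighted sum of lexical overlap and word order.
def sentence_similarity(s1, s2, alpha=0.8):
    t1, t2 = s1.lower().split(), s2.lower().split()
    shared = set(t1) & set(t2)
    lexical = len(shared) / max(len(set(t1) | set(t2)), 1)  # Jaccard overlap
    # Word-order component: how far apart the shared words appear.
    if len(shared) < 2:
        order = 0.0
    else:
        pos1 = [t1.index(w) for w in sorted(shared)]
        pos2 = [t2.index(w) for w in sorted(shared)]
        diffs = sum(abs(a - b) for a, b in zip(pos1, pos2))
        order = 1.0 - diffs / (len(shared) * max(len(t1), len(t2)))
    return alpha * lexical + (1 - alpha) * order
```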
18

Varadarajan, Ramakrishna R. "Ranked Search on Data Graphs." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/220.

Full text
Abstract:
Graph-structured databases are widely prevalent, and the problem of effective search and retrieval from such graphs has been receiving much attention recently. For example, the Web can be naturally viewed as a graph. Likewise, a relational database can be viewed as a graph where tuples are modeled as vertices connected via foreign-key relationships. Keyword search querying has emerged as one of the most effective paradigms for information discovery, especially over HTML documents in the World Wide Web. One of the key advantages of keyword search querying is its simplicity: users do not have to learn a complex query language and can issue queries without any prior knowledge about the structure of the underlying data. The purpose of this dissertation was to develop techniques for user-friendly, high-quality and efficient searching of graph-structured databases. Several ranked search methods on data graphs have been studied in recent years. Given a top-k keyword search query on a graph and some ranking criteria, a keyword proximity search finds the top-k answers, where each answer is a substructure of the graph containing all query keywords, which illustrates the relationship between the keywords present in the graph. We applied keyword proximity search on the Web and the page graph of web documents to find top-k answers that satisfy the user's information need and increase user satisfaction. Another effective ranking mechanism applied on data graphs is the authority-flow based ranking mechanism. Given a top-k keyword search query on a graph, an authority-flow based search finds the top-k answers, where each answer is a node in the graph ranked according to its relevance and importance to the query. We developed techniques that improve authority-flow based search on data graphs by creating a framework to explain and reformulate such searches, taking into consideration user preferences and feedback. We also applied the proposed graph search techniques to Information Discovery over biological databases. Our algorithms were experimentally evaluated for performance and quality, and the quality of our method was compared to current approaches by means of user surveys.
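As a rough illustration of keyword proximity search, the sketch below ranks candidate answer roots of a small data graph by the sum of their shortest-path distances to the nodes containing the query keywords. The adjacency-list graph, the node texts and the scoring are simplified assumptions, not the dissertation's algorithms.

```python
# Hedged sketch of keyword proximity ranking on a data graph: an answer root
# is scored by the sum of its shortest-path distances to nodes containing the
# query keywords (smaller is better). Graph, labels and scoring are
# illustrative, not the exact dissertation method.
from collections import deque

def bfs_distances(graph, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def proximity_search(graph, node_text, keywords, k=3):
    hits = {kw: [n for n, txt in node_text.items() if kw in txt.lower()] for kw in keywords}
    scores = {}
    for root in graph:
        dist = bfs_distances(graph, root)
        total = 0
        for kw, nodes in hits.items():
            reachable = [dist[n] for n in nodes if n in dist]
            if not reachable:          # this root cannot cover the keyword
                total = None
                break
            total += min(reachable)
        if total is not None:
            scores[root] = total
    return sorted(scores.items(), key=lambda kv: kv[1])[:k]

graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
node_text = {1: "graph databases", 2: "keyword search", 3: "ranking", 4: "search engines"}
print(proximity_search(graph, node_text, ["keyword", "ranking"]))
```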
APA, Harvard, Vancouver, ISO, and other styles
19

AbuRa'ed, Ahmed Ghassan Tawfiq. "Automatic generation of descriptive related work reports." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669975.

Full text
Abstract:
A related work report is a section in a research paper which integrates key information from a list of related scientific papers, providing context to the work being presented. Related work reports can be either descriptive or integrative. Integrative related work reports provide a high-level overview and critique of the scientific papers by comparing them with each other, providing fewer details of individual studies. Descriptive related work reports, instead, provide more in-depth information about each mentioned study, such as the methods and results of the cited works. In order to write a related work report, scientists have to identify, condense/summarize, and combine relevant information from different scientific papers. However, this task is complicated by the volume of available scientific papers. In this context, the automatic generation of related work reports appears to be an important problem to tackle. The automatic generation of related work reports can be considered an instance of the multi-document summarization problem where, given a list of scientific papers, the main objective is to automatically summarize those scientific papers and generate related work reports. In order to study the problem of related work generation, we have developed a manually annotated, machine-readable dataset of related work sections, cited papers (e.g. references) and sentences, together with an additional layer of papers citing the references. We have also investigated the relation between a citation context in a citing paper and the scientific paper it is citing, so as to properly model cross-document relations and inform our summarization approach. Moreover, we have investigated the identification of explicit and implicit citations to a given scientific paper, which is an important task in several scientific text mining activities such as citation purpose identification, scientific opinion mining, and scientific summarization. We present both extractive and abstractive methods to summarize a list of scientific papers by utilizing their citation network. The extractive approach follows three stages: scoring the sentences of the scientific papers based on their citation network, selecting sentences from each scientific paper to be mentioned in the related work report, and generating an organized related work report by grouping together the sentences of the scientific papers that belong to the same topic. The abstractive approach, on the other hand, attempts to generate citation sentences to be included in a related work report, taking advantage of current sequence-to-sequence neural architectures and resources that we have created specifically for this task. The thesis also presents and discusses automatic and manual evaluations of the generated related work reports, showing the viability of the proposed approaches.
La sección de trabajos relacionados de un artículo científico resume e integra información clave de una lista de documentos científicos relacionados con el trabajo que se presenta. Para redactar esta sección del artículo científico el autor debe identificar, condensar/resumir y combinar información relevante de diferentes artículos. Esta tarea es complicada debido al gran volumen disponible de artículos científicos. En este contexto, la generación automática de tales secciones es un problema importante a abordar. La generación automática de secciones de trabajo relacionados puede ser considerada como una instancia del problema de resumen de documentos múltiples donde, dada una lista de documentos científicos, el objetivo es resumir automáticamente esos documentos científicos y generar la sección de trabajos relacionados. Para estudiar este problema, hemos creado un corpus de secciones de trabajos relacionados anotado manualmente y procesado automáticamente. Asimismo, hemos investigado la relación entre las citaciones y el artículo científico que se cita para modelar adecuadamente las relaciones entre documentos y, así, informar nuestro método de resumen automático. Además, hemos investigado la identificación de citaciones implícitas a un artículo científico dado que es una tarea importante en varias actividades de minería de textos científicos. Presentamos métodos extractivos y abstractivos para resumir una lista de artículos científicos utilizando su red de citaciones. El enfoque extractivo sigue tres etapas: cálculo de la relevancia las oraciones de cada artículo en función de la red de citaciones, selección de oraciones de cada artículo científico para integrarlas en el resumen y generación de la sección de trabajos relacionados agrupando las oraciones por tema. Por otro lado, el enfoque abstractivo intenta generar citaciones para incluirlas en un resumen utilizando redes neuronales y recursos que hemos creado específicamente para esta tarea. La tesis también presenta y discute la evaluación automática y manual de los resúmenes generados automáticamente, demostrando la viabilidad de los enfoques propuestos.
Una secció d’antecedents o estat de l’art d’un article científic resumeix la informació clau d’una llista de documents científics relacionats amb el treball que es presenta. Per a redactar aquesta secció de l’article científic l’autor ha d’identificar, condensar / resumir i combinar informació rellevant de diferents articles. Aquesta activitat és complicada per causa del gran volum disponible d’articles científics. En aquest context, la generació automàtica d’aquestes seccions és un problema important a abordar. La generació automàtica d’antecedents o d’estat de l’art pot considerar-se com una instància del problema de resum de documents. Per estudiar aquest problema, es va crear un corpus de seccions d’estat de l’art d’articles científics manualment anotat i processat automàticament. Així mateix, es va investigar la relació entre citacions i l’article científic que es cita per modelar adequadament les relacions entre documents i, així, informar el nostre mètode de resum automàtic. A més, es va investigar la identificació de citacions implícites a un article científic, que és un problema important en diverses activitats de mineria de textos científics. Presentem mètodes extractius i abstractius per resumir una llista d’articles científics utilitzant el conjunt de citacions de cada article. L’enfoc extractiu segueix tres etapes: càlcul de la rellevància de les oracions de cada article en funció de les seves citacions, selecció d’oracions de cada article científic per a integrar-les en el resum i generació de la secció de treballs relacionats agrupant les oracions per tema. Per un altre costat, l’enfoc abstractiu implementa la generació de citacions per a incloure-les en un resum que utilitza xarxes neuronals i recursos que hem creat específicament per a aquesta tasca. La tesi també presenta i discuteix l’avaluació automàtica i la manual dels resums generats automàticament, demostrant la viabilitat dels mètodes proposats.
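A minimal sketch of the extractive stage described above, assuming a toy corpus: the sentences of each cited paper are scored by word overlap with the citation sentences that mention the paper, and the best one per paper is kept. The tokenization, the overlap score and the example data are illustrative, not the thesis pipeline.

```python
# Hedged sketch of the extractive stage: sentences of a cited paper are scored
# by their word overlap with the citation sentences that mention it, and the
# best one per paper is kept for the related-work draft. Tokenization and
# scoring are simplifications invented for illustration.
def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

def select_for_related_work(cited_papers, citation_contexts, per_paper=1):
    selected = {}
    for paper_id, sentences in cited_papers.items():
        contexts = citation_contexts.get(paper_id, [])
        scored = [(max((overlap(s, c) for c in contexts), default=0.0), s) for s in sentences]
        scored.sort(reverse=True)
        selected[paper_id] = [s for _, s in scored[:per_paper]]
    return selected

cited_papers = {
    "P1": ["We propose a graph ranking model.", "Experiments use three corpora."],
    "P2": ["A neural sequence model generates citation sentences."],
}
citation_contexts = {
    "P1": ["P1 introduces a graph ranking model for summarization."],
    "P2": ["P2 generates citation sentences with neural models."],
}
print(select_for_related_work(cited_papers, citation_contexts))
```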
APA, Harvard, Vancouver, ISO, and other styles
20

Camargo, Renata Tironi de. "Investigação de estratégias de sumarização humana multidocumento." Universidade Federal de São Carlos, 2013. https://repositorio.ufscar.br/handle/ufscar/5781.

Full text
Abstract:
Universidade Federal de Minas Gerais
The multi-document human summarization (MHS), which is the production of a manual summary from a collection of texts from different sources on the same subject, is a little-explored linguistic task. Considering that single-document summaries comprise information with recurrent features able to reveal summarization strategies, we aimed to investigate multi-document summaries in order to identify MHS strategies. For the identification of MHS strategies, the source-text sentences from the CSTNews corpus (CARDOSO et al., 2011) were manually aligned to their human summaries. The corpus has 50 clusters of news texts and their multi-document summaries in Portuguese. Thus, the alignment revealed the origin of the information selected to compose the summaries. In order to identify whether the selected information shows recurrent features, the aligned (and non-aligned) sentences were semi-automatically characterized considering a set of linguistic attributes identified in related works. These attributes translate the content selection strategies of single-document summarization and the clues about MHS. Through the manual analysis of the characterizations of the aligned and non-aligned sentences, we identified that the selected sentences commonly have certain attributes, such as sentence location in the text and redundancy. This observation was confirmed by a set of formal rules learned by a Machine Learning (ML) algorithm from the same characterizations. Thus, these rules translate MHS strategies. When the rules were learned and tested on CSTNews by ML, the precision rate was 71.25%. To assess the relevance of the rules, we performed 2 different intrinsic evaluations: (i) verification of the occurrence of the same strategies in another corpus, and (ii) comparison of the quality of summaries produced by the MHS strategies with the quality of summaries produced by different strategies. Regarding evaluation (i), which was automatically performed by ML, the rules learned from CSTNews were tested on a different newspaper corpus and their precision was 70%, which is very close to the precision obtained on the training corpus (CSTNews). Concerning evaluation (ii), the quality, manually evaluated by 10 computational linguists, was considered better than the quality of the comparison summaries. Besides describing features of multi-document summaries, this work has the potential to support multi-document automatic summarization, which may help it become more linguistically motivated. This task consists of automatically generating multi-document summaries and, so far, it has been based on the adaptation of strategies identified in single-document summarization or only on unconfirmed clues about MHS. Based on this work, the automatic content selection in multi-document summarization methods may be performed based on strategies systematically identified in MHS.
A sumarização humana multidocumento (SHM), que consiste na produção manual de um sumário a partir de uma coleção de textos, provenientes de fontes distintas, que abordam um mesmo assunto, é uma tarefa linguística até então pouco explorada. Tomando-se como motivação o fato de que sumários monodocumento são compostos por informações que apresentam características recorrentes, a ponto de revelar estratégias de sumarização, objetivou-se investigar sumários multidocumento com o objetivo de identificar estratégias de SHM. Para a identificação das estratégias de SHM, os textos-fonte (isto é, notícias) das 50 coleções do corpus multidocumento em português CSTNews (CARDOSO et al., 2011) foram manualmente alinhados em nível sentencial aos seus respectivos sumários humanos, revelando, assim, a origem das informações selecionadas para compor os sumários. Com o intuito de identificar se as informações selecionadas para compor os sumários apresentam características recorrentes, as sentenças alinhadas (e não-alinhadas) foram caracterizadas de forma semiautomática em função de um conjunto de atributos linguísticos identificados na literatura. Esses atributos traduzem as estratégias de seleção de conteúdo da sumarização monodocumento e os indícios sobre a SHM. Por meio da análise manual das caracterizações das sentenças alinhadas e não-alinhadas, identificou-se que as sentenças selecionadas para compor os sumários multidocumento comumente apresentam certos atributos, como localização das sentenças no texto e redundância. Essa constatação foi confirmada pelo conjunto de regras formais aprendidas por um algoritmo de Aprendizado de Máquina (AM) a partir das mesmas caracterizações. Tais regras traduzem, assim, estratégias de SHM. Quando aprendidas e testadas no CSTNews pelo AM, as regras obtiveram precisão de 71,25%. Para avaliar a pertinência das regras, 2 avaliações intrínsecas foram realizadas, a saber: (i) verificação da ocorrência das estratégias em outro corpus, e (ii) comparação da qualidade de sumários produzidos pelas estratégias de SHM com a qualidade de sumários produzidos por estratégias diferentes. Na avaliação (i), realizada automaticamente por AM, as regras aprendidas a partir do CSTNews foram testadas em um corpus jornalístico distinto e obtiveram a precisão de 70%, muito próxima da obtida no corpus de treinamento (CSTNews). Na avaliação (ii), a qualidade, avaliada de forma manual por 10 linguistas computacionais, foi considerada superior à qualidade dos demais sumários de comparação. Além de descrever características relativas aos sumários multidocumento, este trabalho, uma vez que gera regras formais (ou seja, explícitas e não-ambíguas), tem potencial de subsidiar a Sumarização Automática Multidocumento (SAM), tornando-a mais linguisticamente motivada. A SAM consiste em gerar sumários multidocumento de forma automática e, para tanto, baseava-se na adaptação das estratégias identificadas na sumarização monodocumento ou apenas em indícios, não comprovados sistematicamente, sobre a SHM. Com base neste trabalho, a seleção de conteúdo em métodos de SAM poderá ser feita com base em estratégias identificadas de forma sistemática na SHM.
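The rule-learning step can be pictured with the short sketch below, which fits a small decision tree over two sentence attributes (relative position and redundancy) and prints the learned rules. The toy feature values are invented for illustration; the thesis works with the CSTNews annotations and a richer attribute set.

```python
# Hedged sketch of learning selection "rules" from sentence attributes such as
# position in the text and redundancy, in the spirit of the ML step described
# above. The toy feature values below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [relative_position_in_text, redundancy_in_collection]
X = [[0.0, 0.9], [0.1, 0.8], [0.2, 0.7], [0.8, 0.2], [0.9, 0.1], [0.7, 0.3]]
y = [1, 1, 1, 0, 0, 0]   # 1 = sentence aligned to a human summary

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["position", "redundancy"]))
print(clf.predict([[0.05, 0.85]]))   # early, highly redundant sentence -> selected
```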
APA, Harvard, Vancouver, ISO, and other styles
21

Ermakova, Liana. "Short text contextualization in information retrieval : application to tweet contextualization and automatic query expansion." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20023/document.

Full text
Abstract:
La communication efficace a tendance à suivre la loi du moindre effort. Selon ce principe, en utilisant une langue donnée les interlocuteurs ne veulent pas travailler plus que nécessaire pour être compris. Ce fait mène à la compression extrême de textes surtout dans la communication électronique, comme dans les microblogues, SMS, ou les requêtes dans les moteurs de recherche. Cependant souvent ces textes ne sont pas auto-suffisants car pour les comprendre, il est nécessaire d’avoir des connaissances sur la terminologie, les entités nommées ou les faits liés. Ainsi, la tâche principale de la recherche présentée dans ce mémoire de thèse de doctorat est de fournir le contexte d’un texte court à l’utilisateur ou au système comme à un moteur de recherche par exemple. Le premier objectif de notre travail est d’aider l’utilisateur à mieux comprendre un message court par l’extraction du contexte d’une source externe comme le Web ou la Wikipédia au moyen de résumés construits automatiquement. Pour cela nous proposons une approche pour le résumé automatique de documents multiples et nous l’appliquons à la contextualisation de messages, notamment à la contextualisation de tweets. La méthode que nous proposons est basée sur la reconnaissance des entités nommées, la pondération des parties du discours et la mesure de la qualité des phrases. Contrairement aux travaux précédents, nous introduisons un algorithme de lissage en fonction du contexte local. Notre approche s’appuie sur la structure thème-rhème des textes. De plus, nous avons développé un algorithme basé sur les graphes pour le ré-ordonnancement des phrases. La méthode a été évaluée à la tâche INEX/CLEF Tweet Contextualization sur une période de 4 ans. La méthode a été également adaptée pour la génération de snippets. Les résultats des évaluations attestent une bonne performance de notre approche
Efficient communication tends to follow the principle of least effort. According to this principle, when using a given language, interlocutors do not want to work any harder than necessary to reach understanding. This fact leads to the extreme compression of texts, especially in electronic communication, e.g. microblogs, SMS and search queries. However, sometimes these texts are not self-contained and need to be explained, since understanding them requires knowledge of terminology, named entities or related facts. The main goal of this research is to provide a context to a user or a system from a textual resource. The first aim of this work is to help a user to better understand a short message by extracting a context from an external source like a text collection, the Web or Wikipedia by means of text summarization. To this end, we developed an approach for automatic multi-document summarization and applied it to short message contextualization, in particular to tweet contextualization. The proposed method is based on named entity recognition, part-of-speech weighting and sentence quality measuring. In contrast to previous research, we introduced an algorithm for smoothing from the local context. Our approach exploits the topic-comment structure of a text. Moreover, we developed a graph-based algorithm for sentence reordering. The method has been evaluated at the INEX/CLEF Tweet Contextualization track, and we provide the evaluation results over the 4 years of the track. The method was also adapted to snippet retrieval. The evaluation results indicate good performance of the approach
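A very rough sketch of the sentence-scoring idea follows: tokens that look like named entities receive a higher weight and the sentence score is the normalized sum. The capitalization heuristic, the stop-word list and the weights are placeholders for the actual named entity recognition and part-of-speech weighting used in the thesis.

```python
# Hedged sketch of weighted sentence scoring for contextualization: tokens that
# crudely look like named entities (capitalized, non-initial) get a higher
# weight. The heuristic and the weights are illustrative placeholders only.
def score_sentence(sentence, entity_weight=2.0, default_weight=1.0,
                   stop=("the", "a", "of", "in")):
    tokens = sentence.split()
    score = 0.0
    for i, tok in enumerate(tokens):
        word = tok.strip(".,;:!?")
        if word.lower() in stop:
            continue
        if i > 0 and word[:1].isupper():   # crude stand-in for NER
            score += entity_weight
        else:
            score += default_weight
    return score / max(len(tokens), 1)

candidate_context = [
    "The Eiffel Tower reopened in Paris after renovation.",
    "It reopened after some work was done on it.",
]
print(sorted(candidate_context, key=score_sentence, reverse=True)[0])
```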
APA, Harvard, Vancouver, ISO, and other styles
22

Boukadida, Haykel. "Création automatique de résumés vidéo par programmation par contraintes." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S074/document.

Full text
Abstract:
Cette thèse s’intéresse à la création automatique de résumés de vidéos. L’idée est de créer de manière adaptative un résumé vidéo qui prenne en compte des règles définies sur le contenu audiovisuel d’une part, et qui s’adapte aux préférences de l’utilisateur d’autre part. Nous proposons une nouvelle approche qui considère le problème de création automatique de résumés sous forme d’un problème de satisfaction de contraintes. La solution est basée sur la programmation par contraintes comme paradigme de programmation. Un expert commence par définir un ensemble de règles générales de production du résumé, règles liées au contenu multimédia de la vidéo d’entrée. Ces règles de production sont exprimées sous forme de contraintes à satisfaire. L’utilisateur final peut alors définir des contraintes supplémentaires (comme la durée souhaitée du résumé) ou fixer des paramètres de haut niveau des contraintes définies par l’expert. Cette approche a plusieurs avantages. Elle permet de séparer clairement les règles de production des résumés (modélisation du problème) de l’algorithme de génération de résumés (la résolution du problème par le solveur de contraintes). Le résumé peut donc être adapté sans qu’il soit nécessaire de revoir tout le processus de génération des résumés. Cette approche permet par exemple aux utilisateurs d’adapter le résumé à l’application cible et à leurs préférences en ajoutant une contrainte ou en modifiant une contrainte existante, ceci sans avoir à modifier l’algorithme de production des résumés. Nous avons proposé trois modèles de représentation des vidéos qui se distinguent par leur flexibilité et leur efficacité. Outre les originalités liées à chacun des trois modèles, une contribution supplémentaire de cette thèse est une étude comparative de leurs performances et de la qualité des résumés résultants en utilisant des mesures objectives et subjectives. Enfin, et dans le but d’évaluer la qualité des résumés générés automatiquement, l’approche proposée a été évaluée par des utilisateurs à grande échelle. Cette évaluation a impliqué plus de 60 personnes. Ces expériences ont porté sur le résumé de matchs de tennis
This thesis focuses on the issue of automatic video summarization. The idea is to create an adaptive video summary that takes into account a set of rules defined on the audiovisual content on the one hand, and that adapts to the user's preferences on the other hand. We propose a novel approach that considers the problem of automatic video summarization as a constraint satisfaction problem. The solution is based on constraint satisfaction programming (CSP) as the programming paradigm. An expert first defines a set of general rules for summary production; these production rules are related to the multimedia content of the input video and are expressed as constraints to be satisfied. The final user can then define additional constraints (such as the desired duration of the summary) or set high-level parameters of the constraints already defined by the expert. This approach has several advantages. It clearly separates the summary production rules (the problem modeling) from the summary generation algorithm (the problem solving by the CSP solver). The summary can hence be adapted without reviewing the whole summary generation process. For instance, our approach enables users to adapt the summary to the target application and to their preferences by adding a constraint or modifying an existing one, without changing the summary generation algorithm. We have proposed three models of video representation that are distinguished by their flexibility and their efficiency. Besides the originality related to each of the three proposed models, an additional contribution of this thesis is an extensive comparative study of their performance and of the quality of the resulting summaries using objective and subjective measures. Finally, in order to assess the quality of automatically generated summaries, the proposed approach was evaluated through a large-scale user evaluation involving more than 60 people. All these experiments were performed on the challenging application of automatic tennis match summarization
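The constraint-based formulation can be illustrated with the brute-force sketch below: shot subsets are filtered by the constraints (a duration bound and the presence of at least one rally shot) and the most interesting admissible subset is kept. A real system would rely on a CSP solver; the shots, scores and constraints here are invented for illustration.

```python
# Hedged sketch of casting summary generation as constraint satisfaction: an
# exhaustive search over shot subsets keeps only those satisfying the expert
# and user constraints, then picks the most interesting one. Data is invented.
from itertools import combinations

shots = [  # (shot_id, duration_s, interest, label)
    ("s1", 20, 0.9, "rally"), ("s2", 15, 0.4, "crowd"),
    ("s3", 25, 0.8, "rally"), ("s4", 10, 0.6, "serve"),
]

def satisfies(subset, max_duration=45):
    total = sum(d for _, d, _, _ in subset)
    has_rally = any(label == "rally" for _, _, _, label in subset)
    return total <= max_duration and has_rally

best = max(
    (c for r in range(1, len(shots) + 1) for c in combinations(shots, r) if satisfies(c)),
    key=lambda c: sum(i for _, _, i, _ in c),
)
print([s[0] for s in best])
```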
APA, Harvard, Vancouver, ISO, and other styles
23

Pitarch, Yoann. "Résumé de Flots de Données : motifs, Cubes et Hiérarchies." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20051/document.

Full text
Abstract:
L'explosion du volume de données disponibles due au développement des technologies de l'information et de la communication a démocratisé les flots qui peuvent être définis comme des séquences non bornées de données très précises et circulant à grande vitesse. Les stocker intégralement est par définition impossible. Il est alors essentiel de proposer des techniques de résumé permettant une analyse a posteriori de cet historique. En outre, un grand nombre de flots de données présentent un caractère multidimensionnel et multiniveaux que très peu d'approches existantes exploitent. Ainsi, l'objectif de ces travaux est de proposer des méthodes de résumé exploitant ces spécificités multidimensionnelles et applicables dans un contexte dynamique. Nous nous intéressons à l'adaptation des techniques OLAP (On Line Analytical Processing ) et plus particulièrement, à l'exploitation des hiérarchies de données pour réaliser cette tâche. Pour aborder cette problématique, nous avons mis en place trois angles d'attaque. Tout d'abord, après avoir discuté et mis en évidence le manque de solutions satisfaisantes, nous proposons deux approches permettant de construire un cube de données alimenté par un flot. Le deuxième angle d'attaque concerne le couplage des approches d'extractions de motifs fréquents (itemsets et séquences) et l'utilisation des hiérarchies pour produire un résumé conservant les tendances d'un flot. Enfin, les catégories de hiérarchies existantes ne permettent pas d'exploiter les connaissances expertes dans le processus de généralisation. Nous pallions ce manque en définissant une nouvelle catégorie de hiérarchies, dites contextuelles, et en proposant une modélisation conceptuelle, graphique et logique d'un entrepôt de données intégrant ces hiérarchies contextuelles. Cette thèse s'inscrivant dans un projet ANR (MIDAS), une plateforme de démonstration intégrant les principales approches de résumé a été mise au point. En outre, la présence de partenaires industriels tels que Orange Labs ou EDF RD dans le projet a permis de confronter nos approches à des jeux de données réelles
Due to the rapid development of information and communication technologies, the amount of generated and available data has exploded, and a new kind of data, stream data, has appeared. One possible and common definition of a data stream is an unbounded sequence of very precise data arriving at a high rate. Thus, it is impossible to store such a stream to perform a posteriori analysis. Moreover, more and more data streams concern multidimensional and multilevel data, and very few approaches tackle these specificities. Thus, in this work, we proposed practical and efficient solutions to deal with such particular data in a dynamic context. More specifically, we were interested in adapting OLAP (On-Line Analytical Processing) and hierarchy techniques to build relevant summaries of the data. First, after describing and discussing existing similar approaches, we proposed two solutions to build data cubes over stream data more efficiently. Second, we were interested in combining frequent patterns and the use of hierarchies to build a summary based on the main trends of the stream. Third, even though many types of hierarchies exist in the literature, none of them integrates expert knowledge during the generalization phase. However, such an integration could be very relevant to build semantically richer summaries. We tackled this issue and proposed a new type of hierarchy, namely contextual hierarchies. Together with this new type of hierarchy, we provide a new conceptual, graphical and logical data warehouse model, namely the contextual data warehouse. Finally, since this work was funded by the ANR through the MIDAS project, we evaluated our approaches on real datasets provided by the industrial partners of this project (e.g., Orange Labs or EDF R&D)
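A minimal sketch of the stream-cube idea: each incoming record is aggregated at every level of a small dimension hierarchy, so the history can be queried at several granularities without storing the raw stream. The hierarchy and the events are illustrative only.

```python
# Hedged sketch of maintaining a small "stream cube": each record is rolled up
# along a city -> country -> all hierarchy as it arrives. Purely illustrative.
from collections import defaultdict

hierarchy = {"Paris": "France", "Lyon": "France", "Berlin": "Germany"}
cube = defaultdict(float)   # (level, member) -> aggregated measure

def ingest(city, value):
    cube[("city", city)] += value
    cube[("country", hierarchy[city])] += value
    cube[("all", "*")] += value

for city, value in [("Paris", 3.0), ("Lyon", 2.0), ("Berlin", 5.0), ("Paris", 1.0)]:
    ingest(city, value)

print(cube[("city", "Paris")])      # 4.0
print(cube[("country", "France")])  # 6.0
print(cube[("all", "*")])           # 11.0
```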
APA, Harvard, Vancouver, ISO, and other styles
24

Bawakid, Abdullah. "Automatic documents summarization using ontology based methodologies." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/2896/.

Full text
Abstract:
When humans summarize a document, they usually read the text first, understand it and then attempt to write a summary. In essence, these processes require at least some basic level of background knowledge from the reader, the least of which is the natural language the text is written in. In this thesis, an attempt is made to bridge this gap in machine understanding by proposing a framework backed by knowledge repositories constructed by humans and containing real human concepts. I use WordNet, a hierarchically structured repository that was created by linguistic experts and is rich in its explicitly defined lexical relations. With WordNet, algorithms for computing the semantic similarity between terms were proposed and implemented. These algorithms proved especially useful when applied to Automatic Document Summarization, as shown by the obtained evaluation results. I also use Wikipedia, the largest encyclopedia to date. Because of its openness and structure, three problems had to be handled in this thesis: extracting knowledge and features from Wikipedia, enriching the representation of text documents with the extracted features, and using them in the application of Automatic Summarization. When applying the feature extractor to a summarization system, competitive evaluation results were obtained.
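A generic illustration of WordNet-based term similarity of the kind described above, taking the best path similarity over all synset pairs of two terms. It assumes NLTK with the WordNet data installed (via nltk.download("wordnet")) and is not the exact set of measures proposed in the thesis.

```python
# Hedged sketch of WordNet-based term similarity: the similarity of two terms
# is the best path similarity over their synset pairs. Requires NLTK with the
# WordNet corpus downloaded; a generic illustration, not the thesis' measures.
from nltk.corpus import wordnet as wn

def term_similarity(t1, t2):
    scores = [
        s1.path_similarity(s2)
        for s1 in wn.synsets(t1)
        for s2 in wn.synsets(t2)
        if s1.path_similarity(s2) is not None
    ]
    return max(scores, default=0.0)

print(term_similarity("car", "automobile"))   # close to 1.0
print(term_similarity("car", "banana"))       # much lower
```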
APA, Harvard, Vancouver, ISO, and other styles
25

Potapov, Danila. "Supervised Learning Approaches for Automatic Structuring of Videos." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM023/document.

Full text
Abstract:
L'Interprétation automatique de vidéos est un horizon qui demeure difficile a atteindre en utilisant les approches actuelles de vision par ordinateur. Une des principales difficultés est d'aller au-delà des descripteurs visuels actuels (de même que pour les autres modalités, audio, textuelle, etc) pour pouvoir mettre en oeuvre des algorithmes qui permettraient de reconnaitre automatiquement des sections de vidéos, potentiellement longues, dont le contenu appartient à une certaine catégorie définie de manière sémantique. Un exemple d'une telle section de vidéo serait une séquence ou une personne serait en train de pêcher; un autre exemple serait une dispute entre le héros et le méchant dans un film d'action hollywoodien. Dans ce manuscrit, nous présentons plusieurs contributions qui vont dans le sens de cet objectif ambitieux, en nous concentrant sur trois tâches d'analyse de vidéos: le résumé automatique, la classification, la localisation temporelle.Tout d'abord, nous introduisons une approche pour le résumé automatique de vidéos, qui fournit un résumé de courte durée et informatif de vidéos pouvant être très longues, résumé qui est de plus adapté à la catégorie de vidéos considérée. Nous introduisons également une nouvelle base de vidéos pour l'évaluation de méthodes de résumé automatique, appelé MED-Summaries, ou chaque plan est annoté avec un score d'importance, ainsi qu'un ensemble de programmes informatiques pour le calcul des métriques d'évaluation.Deuxièmement, nous introduisons une nouvelle base de films de cinéma annotés, appelée Inria Action Movies, constitué de films d'action hollywoodiens, dont les plans sont annotés suivant des catégories sémantiques non-exclusives, dont la définition est suffisamment large pour couvrir l'ensemble du film. Un exemple de catégorie est "course-poursuite"; un autre exemple est "scène sentimentale". Nous proposons une approche pour localiser les sections de vidéos appartenant à chaque catégorie et apprendre les dépendances temporelles entre les occurrences de chaque catégorie.Troisièmement, nous décrivons les différentes versions du système développé pour la compétition de détection d'événement vidéo TRECVID Multimédia Event Detection, entre 2011 et 2014, en soulignant les composantes du système dont l'auteur du manuscrit était responsable
Automatic interpretation and understanding of videos remains at the frontier of computer vision. The core challenge is to lift the expressive power of the current visual features (as well as features from other modalities, such as audio or text) to be able to automatically recognize typical video sections, with low temporal saliency yet high semantic expression. Examples of such long events include video sections where someone is fishing (TRECVID Multimedia Event Detection), or where the hero argues with a villain in a Hollywood action movie (Inria Action Movies). In this manuscript, we present several contributions towards this goal, focusing on three video analysis tasks: summarization, classification and localization. First, we propose an automatic video summarization method, yielding a short and highly informative video summary of potentially long videos, tailored for specified categories of videos. We also introduce a new dataset for the evaluation of video summarization methods, called MED-Summaries, which contains complete importance-scoring annotations of the videos, along with a complete set of evaluation tools. Second, we introduce a new dataset, called Inria Action Movies, consisting of long movies annotated with non-exclusive semantic categories (called beat-categories), whose definition is broad enough to cover most of the movie footage. Categories such as "pursuit" or "romance" in action movies are examples of beat-categories. We propose an approach for localizing beat-events based on classifying shots into beat-categories and learning the temporal constraints between shots. Third, we overview the Inria event classification system developed within the TRECVID Multimedia Event Detection competition and highlight the contributions made during the work on this thesis from 2011 to 2014
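The summary-assembly side of this setting can be sketched as a greedy selection of importance-scored shots under a duration budget, re-ordered chronologically at the end. The scores and durations below are invented for illustration.

```python
# Hedged sketch of assembling a video summary from importance-scored shots:
# shots are taken greedily by score until a duration budget is exhausted,
# then re-ordered by their original position. Data is invented.
shots = [  # (start_s, duration_s, importance)
    (0, 10, 0.2), (10, 8, 0.9), (18, 12, 0.7), (30, 6, 0.95), (36, 15, 0.3),
]

def summarize(shots, budget_s=20):
    chosen, used = [], 0
    for shot in sorted(shots, key=lambda s: s[2], reverse=True):
        if used + shot[1] <= budget_s:
            chosen.append(shot)
            used += shot[1]
    return sorted(chosen)   # restore chronological order

print(summarize(shots))     # [(10, 8, 0.9), (30, 6, 0.95)]
```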
APA, Harvard, Vancouver, ISO, and other styles
26

Zacarias, Andressa Caroline Inácio. "Investigação de métodos de sumarização automática multidocumento baseados em hierarquias conceituais." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/7974.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Automatic Multi-Document Summarization (MDS) aims at creating a single coherent and cohesive summary from a collection of texts from different sources on the same topic. The creation of these summaries, in general extracts (informative and generic), requires the selection of the most important sentences from the collection. To this end, one may use superficial (or statistical) linguistic knowledge or deep knowledge. It is important to note that deep methods, although more expensive and less robust, produce more informative extracts with higher linguistic quality. For Portuguese, the only deep methods that use lexical-conceptual knowledge rely on the frequency of occurrence of the concepts in the collection for content selection. Considering the potential of applying semantic-conceptual knowledge, we propose to investigate MDS methods that start from a representation of the lexical concepts of the source texts in a hierarchy and then exploit certain hierarchical properties able to distinguish the most relevant concepts (in other words, the topics of a collection of texts) from the others. Specifically, 3 of the 50 collections of CSTNews (the multi-document reference corpus for Portuguese) were selected, and the nouns occurring in the source texts of each collection were manually indexed to the concepts of the Princeton WordNet (WN.Pr), generating, at the end, a hierarchy with the concepts derived from the collection and other concepts inherited from WN.Pr for the construction of the hierarchy. The concepts of the hierarchy were characterized in terms of 5 graph (relevance) metrics potentially useful to identify the concepts that should compose a summary: Centrality, Simple Frequency, Cumulative Frequency, Closeness and Level. This characterization was analyzed manually and by Machine Learning (ML) algorithms with the purpose of verifying the most suitable measures to identify the relevant concepts of the collection. As a result, the Centrality measure was disregarded and the other ones were used to propose content selection methods for MDS. Specifically, 2 sentence selection methods were proposed, which make up the extractive methods: (i) CFSumm, whose content selection is based exclusively on the Simple Frequency metric, and (ii) LCHSumm, whose selection is based on rules learned by machine learning algorithms from the joint use of the 4 relevant measures as attributes. These methods were intrinsically evaluated concerning informativeness, by means of the ROUGE package of measures, and concerning linguistic quality, based on the criteria of the TAC conference. To this end, the 6 human abstracts available in each CSTNews collection were used. Furthermore, the summaries generated by the proposed methods were compared to the extracts generated by the GistSumm summarizer, taken as baseline. Both methods achieved satisfactory results when compared to the GistSumm baseline, and the CFSumm method stands out over the LCHSumm method.
Na Sumarização Automática Multidocumento (SAM), busca-se gerar um único sumário, coerente e coeso, a partir de uma coleção de textos, de diferentes fontes, que tratam de um mesmo assunto. A geração de tais sumários, comumente extratos (informativos e genéricos), requer a seleção das sentenças mais importantes da coleção. Para tanto, pode-se empregar conhecimento linguístico superficial (ou estatística) ou conhecimento profundo. Quanto aos métodos profundos, destaca-se que estes, apesar de mais caros e menos robustos, produzem extratos mais informativos e com mais qualidade linguística. Para o português, os únicos métodos profundos que utilizam conhecimento léxico-conceitual baseiam na frequência de ocorrência dos conceitos na coleção para a seleção de conteúdo. Tendo em vista o potencial de aplicação do conhecimento semântico-conceitual, propôs-se investigar métodos de SAM que partem da representação dos conceitos lexicais dos textos-fonte em uma hierarquia para a posterior exploração de certas propriedades hierárquicas capazes de distinguir os conceitos mais relevantes (ou seja, os tópicos da coleção) dos demais. Especificamente, selecionaram-se 3 das 50 coleções do CSTNews, corpus multidocumento de referência do português, e os nomes que ocorrem nos textos-fonte de cada coleção foram manualmente indexados aos conceitos da WordNet de Princeton (WN.Pr), gerando, ao final, uma hierarquia com os conceitos constitutivos da coleção e demais conceitos herdados da WN.Pr para a construção da hierarquia. Os conceitos da hierarquia foram caracterizados em função de 5 métricas (de relevância) de grafo potencialmente pertinentes para a identificação dos conceitos a comporem um sumário: Centrality, Simple Frequency, Cumulative Frequency, Closeness e Level. Tal caracterização foi analisada de forma manual e por meio de algoritmos de Aprendizado de Máquina (AM) com o objetivo de verificar quais medidas seriam as mais adequadas para identificar os conceitos relevantes da coleção. Como resultado, a medida Centrality foi descartada e as demais utilizadas para propor métodos de seleção de conteúdo para a SAM. Especificamente, propuseram-se 2 métodos de seleção de sentenças, os quais compõem os métodos extrativos: (i) CFSumm, cuja seleção de conteúdo se baseia exclusivamente na métrica Simple Frequency, e (ii) LCHSumm, cuja seleção se baseia em regras aprendidas por algoritmos de AM a partir da utilização em conjunto das 4 medidas relevantes como atributos. Tais métodos foram avaliados intrinsecamente quanto à informatividade, por meio do pacote de medidas ROUGE, e qualidade linguística, com base nos critérios da conferência TAC. Para tanto, utilizaram-se os 6 abstracts humanos disponíveis em cada coleção do CSTNews. Ademais, os sumários gerados pelos métodos propostos foram comparados aos extratos gerados pelo sumarizador GistSumm, tido como baseline. Os dois métodos obtiveram resultados satisfatórios quando comparados ao baseline GistSumm e o método CFSumm se sobressai ao método LCHSumm.
FAPESP 2014/12817-4
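A sketch in the spirit of the frequency-based selection described above (CFSumm-like): sentences are scored by the collection frequency of the concepts they mention and non-redundant top sentences are taken until a compression rate is reached. The word-to-concept map stands in for the WordNet annotation and is purely illustrative.

```python
# Hedged sketch of frequency-based concept selection: each sentence is scored
# by the collection frequency of the concepts it mentions; non-redundant top
# sentences are kept until the compression rate is reached. The word->concept
# map below fakes the WordNet annotation used in the thesis.
from collections import Counter

word2concept = {"earthquake": "quake", "quake": "quake", "tremor": "quake",
                "rescue": "rescue", "victims": "victim"}

def concepts(sentence):
    return {word2concept[w] for w in sentence.lower().split() if w in word2concept}

def cf_summarize(sentences, compression=0.5):
    freq = Counter(c for s in sentences for c in concepts(s))
    ranked = sorted(sentences, key=lambda s: sum(freq[c] for c in concepts(s)), reverse=True)
    summary, covered = [], set()
    limit = max(1, int(len(sentences) * compression))
    for s in ranked:
        if concepts(s) - covered:          # skip fully redundant sentences
            summary.append(s)
            covered |= concepts(s)
        if len(summary) >= limit:
            break
    return summary

docs = ["An earthquake hit the coast", "The quake injured many victims",
        "A tremor was felt downtown", "Rescue teams searched for victims"]
print(cf_summarize(docs))
```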
APA, Harvard, Vancouver, ISO, and other styles
27

Diot, Fabien. "Graph mining for object tracking in videos." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4009/document.

Full text
Abstract:
Détecter et suivre les objets principaux d’une vidéo est une étape nécessaire en vue d’en décrire le contenu pour, par exemple, permettre une indexation judicieuse des données multimédia par les moteurs de recherche. Les techniques de suivi d’objets actuelles souffrent de défauts majeurs. En effet, soit elles nécessitent que l’utilisateur désigne la cible a suivre, soit il est nécessaire d’utiliser un classifieur pré-entraîné à reconnaitre une classe spécifique d’objets, comme des humains ou des voitures. Puisque ces méthodes requièrent l’intervention de l’utilisateur ou une connaissance a priori du contenu traité, elles ne sont pas suffisamment génériques pour être appliquées aux vidéos amateurs telles qu’on peut en trouver sur YouTube. Pour résoudre ce problème, nous partons de l’hypothèse que, dans le cas de vidéos dont l’arrière-plan n’est pas fixe, celui-ci apparait moins souvent que les objets intéressants. De plus, dans une vidéo, la topologie des différents éléments visuels composant un objet est supposée consistante d’une image a l’autre. Nous représentons chaque image par un graphe plan modélisant sa topologie. Ensuite, nous recherchons des motifs apparaissant fréquemment dans la base de données de graphes plans ainsi créée pour représenter chaque vidéo. Cette approche nous permet de détecter et suivre les objets principaux d’une vidéo de manière non supervisée en nous basant uniquement sur la fréquence des motifs. Nos contributions sont donc réparties entre les domaines de la fouille de graphes et du suivi d’objets. Dans le premier domaine, notre première contribution est de présenter un algorithme de fouille de graphes plans efficace, appelé PLAGRAM. Cet algorithme exploite la planarité des graphes et une nouvelle stratégie d’extension des motifs. Nous introduisons ensuite des contraintes spatio-temporelles au processus de fouille afin d’exploiter le fait que, dans une vidéo, les objets se déplacent peu d’une image a l’autre. Ainsi, nous contraignons les occurrences d’un même motif a être proches dans l’espace et dans le temps en limitant le nombre d’images et la distance spatiale les séparant. Nous présentons deux nouveaux algorithmes, DYPLAGRAM qui utilise la contrainte temporelle pour limiter le nombre de motifs extraits, et DYPLAGRAM_ST qui extrait efficacement des motifs spatio-temporels fréquents depuis les bases de données représentant les vidéos. Dans le domaine du suivi d’objets, nos contributions consistent en deux approches utilisant les motifs spatio-temporels pour suivre les objets principaux dans les vidéos. La première est basée sur une recherche du chemin de poids minimum dans un graphe connectant les motifs spatio-temporels tandis que l’autre est basée sur une méthode de clustering permettant de regrouper les motifs pour suivre les objets plus longtemps. Nous présentons aussi deux applications industrielles de notre méthode
Detecting and following the main objects of a video is necessary to describe its content in order to, for example, allow for a relevant indexation of the multimedia content by search engines. Current object tracking approaches either require the user to select the targets to follow, or rely on pre-trained classifiers to detect particular classes of objects such as pedestrians or cars. Since those methods rely on user intervention or prior knowledge of the content to process, they cannot be applied automatically to amateur videos such as the ones found on YouTube. To solve this problem, we build upon the hypothesis that, in videos with a moving background, the main objects should appear more frequently than the background. Moreover, in a video, the topology of the visual elements composing an object is supposed to be consistent from one frame to another. We represent each image of the videos with plane graphs modeling their topology. Then, we search for substructures appearing frequently in the database of plane graphs thus created to represent each video. Our contributions cover both the fields of graph mining and object tracking. In the first field, our first contribution is an efficient plane graph mining algorithm, named PLAGRAM. This algorithm exploits the planarity of the graphs and a new strategy to extend the patterns. The next contributions consist in the introduction of spatio-temporal constraints into the mining process to exploit the fact that, in a video, the motion of objects is small from one frame to another. Thus, we constrain the occurrences of a same pattern to be close in space and time by limiting the number of frames and the spatial distance separating them. We present two new algorithms, DYPLAGRAM, which makes use of the temporal constraint to limit the number of extracted patterns, and DYPLAGRAM_ST, which efficiently mines frequent spatio-temporal patterns from the datasets representing the videos. In the field of object tracking, our contributions consist in two approaches using the spatio-temporal patterns to track the main objects in videos. The first one is based on a search for the shortest path in a graph connecting the spatio-temporal patterns, while the second one uses a clustering approach to group them in order to follow the objects for a longer period of time. We also present two industrial applications of our method
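The spatio-temporal constraint can be pictured with the sketch below, which only chains two occurrences of a pattern when they are close enough in time and in space. The occurrence tuples are invented; the actual algorithms (e.g. DYPLAGRAM_ST) operate on plane-graph patterns mined from the frames.

```python
# Hedged sketch of the spatio-temporal constraint on pattern occurrences: two
# occurrences may only be chained if they are at most `max_frames` apart in
# time and `max_dist` apart in space. Occurrence data is invented.
def chain_occurrences(occurrences, max_frames=3, max_dist=30.0):
    """occurrences: list of (frame, x, y), assumed sorted by frame."""
    chains = []
    for occ in occurrences:
        for chain in chains:
            last_f, last_x, last_y = chain[-1]
            close_in_time = occ[0] - last_f <= max_frames
            close_in_space = ((occ[1] - last_x) ** 2 + (occ[2] - last_y) ** 2) ** 0.5 <= max_dist
            if close_in_time and close_in_space:
                chain.append(occ)
                break
        else:
            chains.append([occ])     # start a new track for this occurrence
    return chains

occs = [(1, 10, 10), (2, 14, 12), (3, 200, 50), (6, 18, 15), (20, 20, 16)]
print(chain_occurrences(occs))
```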
APA, Harvard, Vancouver, ISO, and other styles
28

Maaloul, Mohamed. "Approche hybride pour le résumé automatique de textes : Application à la langue arabe." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4778.

Full text
Abstract:
Cette thèse s'intègre dans le cadre du traitement automatique du langage naturel. La problématique du résumé automatique de documents arabes qui a été abordée, dans cette thèse, s'est cristallisée autour de deux points. Le premier point concerne les critères utilisés pour décider du contenu essentiel à extraire. Le deuxième point se focalise sur les moyens qui permettent d'exprimer le contenu essentiel extrait sous la forme d'un texte ciblant les besoins potentiels d'un utilisateur. Afin de montrer la faisabilité de notre approche, nous avons développé le système "L.A.E", basé sur une approche hybride qui combine une analyse symbolique avec un traitement numérique. Les résultats d'évaluation de ce système sont encourageants et prouvent la performance de l'approche hybride proposée. Ces résultats, ont montré, en premier lieu, l'applicabilité de l'approche dans le contexte de documents sans restriction quant à leur thème (Éducation, Sport, Science, Politique, Reportage, etc.), leur contenu et leur volume. Ils ont aussi montré l'importance de l'apprentissage dans la phase de classement et sélection des phrases forment l'extrait final
This thesis falls within the framework of Natural Language Processing. The problem of automatic summarization of Arabic documents addressed in this thesis is based on two points. The first point relates to the criteria used to determine the essential content to extract. The second point focuses on the means to express the extracted essential content in the form of a text targeting the user's potential needs. In order to show the feasibility of our approach, we developed the "L.A.E" system, based on a hybrid approach which combines a symbolic analysis with a numerical processing. The evaluation results are encouraging and prove the performance of the proposed hybrid approach. These results showed, first, the applicability of the approach in the context of single documents without restriction as to their topic (Education, Sport, Science, Politics, Interaction, etc.), their content and their volume. They also showed the importance of machine learning in the phase of classification and selection of the sentences forming the final extract
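A hedged sketch of a hybrid symbolic/numerical sentence scorer in the spirit of the approach above: a symbolic component looks for rhetorical cue phrases while a numerical component uses position and length statistics, and the two are combined linearly. The cue list, weights and English example sentences are invented for illustration; the thesis works on Arabic text.

```python
# Hedged sketch of hybrid (symbolic + numerical) sentence scoring. The cue
# phrases, weights and example sentences are invented placeholders.
CUES = ("in conclusion", "the results show", "we propose")

def symbolic_score(sentence):
    s = sentence.lower()
    return 1.0 if any(cue in s for cue in CUES) else 0.0

def numerical_score(index, total, sentence, ideal_len=20):
    position = 1.0 - index / max(total - 1, 1)          # earlier sentences score higher
    length = 1.0 - abs(len(sentence.split()) - ideal_len) / ideal_len
    return 0.5 * position + 0.5 * max(length, 0.0)

def rank(sentences, w_sym=0.6, w_num=0.4):
    total = len(sentences)
    scored = [(w_sym * symbolic_score(s) + w_num * numerical_score(i, total, s), s)
              for i, s in enumerate(sentences)]
    return [s for _, s in sorted(scored, reverse=True)]

doc = ["The study examines automatic summarization of news articles in detail.",
       "Several background works are reviewed.",
       "In conclusion, the results show that the hybrid method performs best."]
print(rank(doc)[0])
```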
APA, Harvard, Vancouver, ISO, and other styles
29

Dias, Márcio de Souza. "Investigação de modelos de coerência local para sumários multidocumento." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11112016-084734/.

Full text
Abstract:
A sumarização multidocumento consiste na tarefa de produzir automaticamente um único sumário a partir de um conjunto de textos derivados de um mesmo assunto. É imprescindível que seja feito o tratamento de fenômenos que ocorrem neste cenário, tais como: (i) a redundância, a complementaridade e a contradição de informações; (ii) a uniformização de estilos de escrita; (iii) tratamento de expressões referenciais; (iv) a manutenção de focos e perspectivas diferentes nos textos; (v) e a ordenação temporal das informações no sumário. O tratamento de tais fenômenos contribui significativamente para que seja produzido ao final um sumário informativo e coerente, características difíceis de serem garantidas ainda que por um humano. Um tipo particular de coerência estudado nesta tese é a coerência local, a qual é definida por meio de relações entre enunciados (unidades menores) em uma sequência de sentenças, de modo a garantir que os relacionamentos contribuirão para a construção do sentido do texto em sua totalidade. Partindo do pressuposto de que o uso de conhecimento discursivo pode melhorar a avaliação da coerência local, o presente trabalho propõe-se a investigar o uso de relações discursivas para elaborar modelos de coerência local, os quais são capazes de distinguir automaticamente sumários coerentes dos incoerentes. Além disso, um estudo sobre os erros que afetam a Qualidade Linguística dos sumários foi realizado com o propósito de verificar quais são os erros que afetam a coerência local dos sumários, se os modelos de coerência podem identificar tais erros e se há alguma relação entre os modelos de coerência e a informatividade dos sumários. Para a realização desta pesquisa foi necessário fazer o uso das informações semântico-discursivas dos modelos CST (Cross-document Structure Theory) e RST (Rhetorical Structure Theory) anotadas no córpus, de ferramentas automáticas, como o parser Palavras e de algoritmos que extraíram informações do córpus. Os resultados mostraram que o uso de informações semântico-discursivas foi bem sucedido na distinção dos sumários coerentes dos incoerentes e que os modelos de coerência implementados nesta tese podem ser usados na identificação de erros da qualidade linguística que afetam a coerência local.
Multi-document summarization is the task of automatically producing a single summary from a collection of texts on the same subject. It is essential to treat several phenomena, such as: (i) redundancy, complementarity and contradiction of information; (ii) standardization of writing styles; (iii) treatment of referential expressions; (iv) different focuses and perspectives across the texts; (v) and the temporal ordering of information in the summary. The treatment of these phenomena contributes to the informativeness and coherence of the final summary. A particular type of coherence studied in this thesis is local coherence, which is defined by the relationships between statements (smaller units) in a sequence of sentences. Local coherence contributes to the construction of textual meaning as a whole. Assuming that the use of discursive knowledge can improve the evaluation of local coherence, this thesis investigates the use of discourse relations to develop local coherence models which are able to automatically distinguish coherent summaries from incoherent ones. In addition, a study on the errors that affect the linguistic quality of the summaries was conducted in order to verify which errors affect the local coherence of summaries, whether the coherence models can identify such errors, and whether there is any relationship between the coherence models and the informativeness of the summaries. For this research, it was necessary to use the semantic-discursive information of the CST (Cross-document Structure Theory) and RST (Rhetorical Structure Theory) models annotated in the corpora, automatic tools such as the Palavras parser, and algorithms that extract information from the corpus. The results showed that the use of semantic-discursive information was successful in distinguishing coherent from incoherent summaries, and that the coherence models implemented in this thesis can be used to identify linguistic quality errors that affect local coherence.
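As a contrast to the discourse-based models, the sketch below computes a very crude local-coherence proxy: the average word overlap between adjacent sentences, which tends to drop when a summary is shuffled. It is only a surface stand-in; the thesis models rely on CST/RST semantic-discursive information rather than word overlap.

```python
# Hedged sketch of a surface proxy for local coherence: average content-word
# overlap between adjacent sentences. Shuffled orderings usually score lower.
def local_coherence(sentences):
    def content(s):
        return {w.strip(".,;:!?") for w in s.lower().split() if len(w.strip(".,;:!?")) > 3}
    overlaps = [len(content(a) & content(b)) / (len(content(a) | content(b)) or 1)
                for a, b in zip(sentences, sentences[1:])]
    return sum(overlaps) / max(len(overlaps), 1)

coherent = ["The president visited the flooded city.",
            "The city received emergency funds after the visit.",
            "Those funds will rebuild the damaged schools."]
shuffled = [coherent[2], coherent[0], coherent[1]]
print(local_coherence(coherent) > local_coherence(shuffled))   # expected: True
```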
APA, Harvard, Vancouver, ISO, and other styles
30

Tosta, Fabricio Elder da Silva. "Aplicação de conhecimento léxico-conceitual na sumarização multidocumento multilíngue." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/5796.

Full text
Abstract:
Financiadora de Estudos e Projetos
Traditionally, Multilingual Multi-document Automatic Summarization (MMAS) is a computational application that, from a single collection of source texts on the same subject/topic in at least two languages, produces an informative and generic summary (extract) in one of these languages. The simplest methods automatically translate the source texts and, from a monolingual collection, apply content selection strategies based on shallow and/or deep linguistic knowledge. Therefore, MMAS applications need not only to identify the main information of the collection while avoiding redundancy, but also to deal with the problems caused by the machine translation (MT) of the full source texts. Looking for alternatives to this traditional MMAS scenario, we investigated two methods (Methods 1 and 2) that, being based on deep linguistic knowledge at the lexical-conceptual level, avoid the full MT of the source texts and generate informative and cohesive/coherent summaries. In both methods, content selection starts with the scoring and ranking of the original sentences based on the frequency of occurrence in the collection of the concepts expressed by their common nouns. In Method 1, only the best-scored and non-redundant sentences in the user's language are selected to compose the extract, until the compression rate is reached. In Method 2, the best-ranked and non-redundant original sentences are selected for the summary without privileging the user's language; when sentences that are not in the user's language are selected, they are automatically translated. In order to produce automatic summaries according to Methods 1 and 2 and subsequently evaluate them, the CM2News corpus was built. The corpus has 20 collections of news texts, each consisting of 1 original text in English and 1 original text in Portuguese on the same topic. The common nouns of CM2News were identified through morphosyntactic annotation, and the corpus was then semi-automatically annotated with Princeton WordNet concepts through the MulSen graphic editor, which was developed especially for this task. For the production of extracts according to Method 1, only the best-ranked sentences in Portuguese were selected until the compression rate was reached. For the production of extracts according to Method 2, the best-ranked sentences were selected without privileging the language of the user; when English sentences were selected, they were automatically translated into Portuguese by the Bing translator. Methods 1 and 2 were evaluated intrinsically, considering the linguistic quality and informativeness of the summaries. To evaluate linguistic quality, 15 computational linguists manually analyzed the grammaticality, non-redundancy, referential clarity, focus and structure/coherence of the summaries, and to evaluate informativeness, the summaries were automatically compared to reference summaries by the ROUGE measures. In both evaluations, the results show the better performance of Method 1, which might be explained by the fact that its sentences are selected from a single source text. Furthermore, we highlight the better performance of both methods based on lexical-conceptual knowledge compared to simpler MMAS methods, which adopt the full MT of the source texts. Finally, it is worth noting that, besides the promising results on the application of lexical-conceptual knowledge, this work has produced important resources and tools for MMAS, such as the CM2News corpus and the MulSen editor.
Tradicionalmente, a Sumarização Automática Multidocumento Multilíngue (SAMM) é uma aplicação que, a partir de uma coleção de textos sobre um mesmo assunto em ao menos duas línguas distintas, produz um sumário (extrato) informativo e genérico em uma das línguas-fonte. Os métodos mais simples realizam a tradução automática (TA) dos textos-fonte e, a partir de uma coleção monolíngue, aplicam estratégias superficiais e/ou profundas de seleção de conteúdo. Dessa forma, a SAMM precisa não só identificar a informação principal da coleção para compor o sumário, evitando-se a redundância, mas também lidar com os problemas causados pela TA integral dos textos-fonte. Buscando alternativas para esse cenário, investigaram-se dois métodos (Método 1 e 2) que, uma vez pautados em conhecimento profundo do tipo léxico-conceitual, evitam a TA integral dos textos-fonte, gerando sumários informativos e coesos/coerentes. Neles, a seleção do conteúdo tem início com a pontuação e o ranqueamento das sentenças originais em função da frequência de ocorrência na coleção dos conceitos expressos por seus nomes comuns. No Método 1, apenas as sentenças mais bem pontuadas na língua do usuário e não redundantes entre si são selecionadas para compor o sumário até que se atinja a taxa de compressão. No Método 2, as sentenças originais mais bem ranqueadas e não redundantes entre si são selecionadas para compor o sumário sem que se privilegie a língua do usuário; caso sentenças que não estejam na língua do usuário sejam selecionadas, estas são automaticamente traduzidas. Para a produção dos sumários automáticos segundo os Métodos 1 e 2 e subsequente avaliação dos mesmos, construiu-se o corpus CM2News, que possui 20 coleções de notícias jornalísticas, cada uma delas composta por 1 texto original em inglês e 1 texto original em português sobre um mesmo assunto. Os nomes comuns do CM2News foram identificados via anotação morfossintática e anotados com os conceitos da WordNet de Princeton de forma semiautomática, ou seja, por meio do editor gráfico MulSen desenvolvido para a tarefa. Para a produção dos sumários segundo o Método 1, somente as sentenças em português mais bem pontuadas foram selecionadas até que se atingisse determinada taxa de compressão. Para a produção dos sumários segundo o Método 2, as sentenças mais pontuadas foram selecionadas sem privilegiar a língua do usuário. Caso as sentenças selecionadas estivessem em inglês, estas foram automaticamente traduzidas para o português pelo tradutor Bing. Os Métodos 1 e 2 foram avaliados de forma intrínseca, considerando-se a qualidade linguística e a informatividade dos sumários. Para avaliar a qualidade linguística, 15 linguistas computacionais analisaram manualmente a gramaticalidade, a não-redundância, a clareza referencial, o foco e a estrutura/coerência dos sumários e, para avaliar a informatividade, os sumários foram automaticamente comparados a sumários de referência pelo pacote de medidas ROUGE. Em ambas as avaliações, os resultados evidenciam o melhor desempenho do Método 1, o que pode ser justificado pelo fato de que as sentenças selecionadas são provenientes de um mesmo texto-fonte. Além disso, ressalta-se o melhor desempenho dos dois métodos baseados em conhecimento léxico-conceitual frente aos métodos mais simples de SAMM, os quais realizam a TA integral dos textos-fonte. 
Por fim, salienta-se que, além dos resultados promissores sobre a aplicação de conhecimento léxico-conceitual, este trabalho gerou recursos e ferramentas importantes para a SAMM, como o corpus CM2News e o editor MulSen.
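Since the abstract above spells out the core of Methods 1 and 2 (concept-frequency scoring, sentence ranking, redundancy filtering and a compression-rate stopping criterion), a minimal sketch may help fix the idea. It assumes concept annotations are already available per sentence; all function and field names are illustrative, not taken from the thesis.

```python
from collections import Counter

def rank_and_select(sentences, user_lang="pt", compression_rate=0.3, overlap_threshold=0.5):
    """sentences: list of dicts {"text": str, "lang": str, "concepts": set of concept ids}."""
    # 1. Score every concept by its frequency of occurrence in the whole collection.
    concept_freq = Counter(c for s in sentences for c in s["concepts"])
    # 2. Rank sentences by the summed frequency of the concepts they express.
    ranked = sorted(sentences,
                    key=lambda s: sum(concept_freq[c] for c in s["concepts"]),
                    reverse=True)
    # 3. Greedily keep well-ranked, non-redundant sentences in the user's language
    #    until the compression rate (summary words / collection words) is reached.
    budget = compression_rate * sum(len(s["text"].split()) for s in sentences)
    summary, used = [], 0
    for s in ranked:
        if s["lang"] != user_lang:          # Method 1 keeps only user-language sentences
            continue
        if any(concept_overlap(s["concepts"], t["concepts"]) > overlap_threshold
               for t in summary):
            continue                         # skip redundant sentences
        summary.append(s)
        used += len(s["text"].split())
        if used >= budget:
            break
    return [s["text"] for s in summary]

def concept_overlap(a, b):
    """Fraction of shared concepts, used here as a crude redundancy test."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0
```

Method 2 would differ only in dropping the language filter and machine-translating any selected English sentences afterwards, as the abstract describes.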
APA, Harvard, Vancouver, ISO, and other styles
31

Laurent, Mario. "Recherche et développement du Logiciel Intelligent de Cartographie Inversée, pour l’aide à la compréhension de texte par un public dyslexique." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAL016/document.

Full text
Abstract:
Les enfants souffrant de troubles du langage, comme la dyslexie, rencontrent de grandes difficultés dans l'apprentissage de la lecture et dans toute tâche de lecture, par la suite. Ces difficultés compromettent grandement l'accès au sens des textes auxquels ils sont confrontés durant leur scolarité, ce qui implique des difficultés d'apprentissage et les entraîne souvent vers une situation d'échec scolaire. Depuis une quinzaine d'années, des outils développés dans le domaine du Traitement Automatique des Langues sont détournés pour être utilisés comme stratégie d'aide et de compensation pour les élèves en difficultés. Parallèlement, l'usage de cartes conceptuelles ou de cartes heuristiques pour aider les enfants dyslexiques à formuler leurs pensées, ou à retenir certaines connaissances, s'est développé. Ce travail de thèse vise à répertorier et croiser, d'une part, les connaissances sur le public dyslexique, sa prise en charge et ses difficultés, d'autre part, les possibilités pédagogiques ouvertes par l'usage de cartes, et enfin, les technologies de résumé automatique et d'extraction de mots-clés. L'objectif est de réaliser un logiciel novateur capable de transformer automatiquement un texte donné en une carte, celle-ci doit faciliter la compréhension du texte tout en comprenant des fonctionnalités adaptées à un public d'adolescents dyslexiques. Ce projet a abouti, premièrement, à la réalisation d'une expérimentation exploratoire, sur l'aide à la compréhension de texte grâce aux cartes heuristiques, qui permet de définir de nouveaux axes de recherche ; deuxièmement, à la réalisation d'un prototype de logiciel de cartographie automatique qui est présenté en fin de thèse
Children with language impairments, such as dyslexia, are often faced with significant difficulties when learning to read and during any subsequent reading task. These difficulties tend to compromise the understanding of the texts they must read during their time at school, which leads to learning difficulties and may lead to academic failure. Over the past fifteen years, general tools developed in the field of Natural Language Processing have been repurposed as aid and compensation strategies for language-impaired students. At the same time, the use of concept maps or heuristic maps to help dyslexic children formulate their thoughts, or retain certain knowledge, has become popular. This thesis aims to survey and cross-reference knowledge about the dyslexic public, the support it receives and the difficulties it faces; the pedagogical possibilities opened up by the use of maps; and the possibilities offered by automatic summarization and keyword extraction technologies. The aim of this doctoral research project was to create an innovative piece of software that automatically transforms a given text into a map, facilitating reading comprehension while including functionalities adapted to dyslexic teenagers. The project led, first, to an exploratory experiment on aiding text comprehension with heuristic maps, which made it possible to define new research directions, and, second, to a prototype of automatic mapping software, which is presented at the end of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
32

Hohm, Joseph Brandon 1982. "Automatic classification of documents with an in-depth analysis of information extraction and automatic summarization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29415.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004.
Includes bibliographical references (leaves 78-80).
Today, annual information production per capita exceeds two hundred and fifty megabytes. As the amount of data increases, classification and retrieval methods become more necessary for finding relevant information. This thesis describes a .Net application (named I-Document) that establishes an automatic classification scheme in a peer-to-peer environment and allows free sharing of academic, business, and personal documents. A Web service architecture for metadata extraction, Information Extraction, Information Retrieval, and text summarization is depicted. Specific details regarding the coding process, competition, business model, and technology employed in the project are also discussed.
by Joseph Brandon Hohm.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
33

Balahur, Dobrescu Alexandra. "Methods and resources for sentiment analysis in multilingual documents of different text types." Doctoral thesis, Universidad de Alicante, 2011. http://hdl.handle.net/10045/19437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Tsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-202687.

Full text
Abstract:
This article provides an overview of the first BioASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BioASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies.
APA, Harvard, Vancouver, ISO, and other styles
35

Pokorný, Lubomír. "Metody sumarizace textových dokumentů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236443.

Full text
Abstract:
This thesis deals with single-document summarization of text data. Part of it is devoted to data preparation, mainly normalization: several stemming algorithms are listed and lemmatization is described. The main part is devoted to Luhn's summarization method and its extension with the WordNet dictionary. The Oswald summarization method is described and applied as well. The designed and implemented application performs automatic generation of abstracts using these methods. A set of experiments was developed that verified the correct functionality of the application as well as of the extension of Luhn's summarization method.
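As a rough illustration of the Luhn-style scoring the abstract refers to, the sketch below scores each sentence by the density of its richest cluster of significant words; the WordNet extension (for instance, grouping synonyms before counting) is only hinted at in a comment, and all thresholds are assumptions rather than the thesis's actual settings.

```python
from collections import Counter

def luhn_scores(sentences, stopwords, min_freq=2, max_gap=4):
    """sentences: list of strings; returns one significance score per sentence."""
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in stopwords)
    # Significant words are frequent non-stopwords; a WordNet-based variant could
    # map synonyms to a single entry before counting.
    significant = {w for w, f in freq.items() if f >= min_freq}

    scores = []
    for sent in sentences:
        tokens = [w.lower() for w in sent.split()]
        positions = [i for i, w in enumerate(tokens) if w in significant]
        best, start = 0.0, 0
        for i in range(1, len(positions) + 1):
            # Close a cluster when the gap to the next significant word is too large.
            if i == len(positions) or positions[i] - positions[i - 1] > max_gap:
                span = positions[i - 1] - positions[start] + 1
                count = i - start
                best = max(best, count * count / span)   # Luhn's significance factor
                start = i
        scores.append(best)
    return scores
```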
APA, Harvard, Vancouver, ISO, and other styles
36

Tsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." BioMed Central, 2015. https://tud.qucosa.de/id/qucosa%3A29496.

Full text
Abstract:
This article provides an overview of the first BioASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BioASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies.
APA, Harvard, Vancouver, ISO, and other styles
37

Fuentes, Fort Maria. "A Flexible Multitask Summarizer for Documents from Different Media, Domain and Language." Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6655.

Full text
Abstract:
Automatic summarization has probably become crucial with the increase in document generation, particularly now that retrieving, managing and processing information have become decisive tasks. However, one should not expect perfect systems able to substitute human summaries. The automatic summarization process strongly depends not only on the characteristics of the documents, but also on the different needs of users. Thus, several aspects have to be taken into account when designing an information system for summarizing because, depending on the characteristics of the input documents and the desired results, several techniques can be applied. In order to support this process, the final goal of the thesis is to provide a flexible multitask summarizer architecture. This goal is decomposed into three main research purposes. First, to study the process of porting systems to different summarization tasks, processing documents in different languages, domains or media, with the aim of designing a generic architecture that permits the easy addition of new tasks by reusing existing tools. Second, to develop prototypes for several tasks involving aspects related to the language, media and domain of the document or documents to be summarized, as well as aspects related to the summary content: generic summaries, novelty summaries, or summaries that answer a specific user need. Third, to create an evaluation framework to analyze the performance of several approaches in the written news and scientific oral presentation domains, focusing mainly on intrinsic evaluation.
El resumen automático probablemente sea crucial en un momento en que la gran cantidad de documentos generados diariamente hace que recuperar, tratar y asimilar la información que contienen se haya convertido en una ardua y a su vez decisiva tarea. A pesar de ello, no podemos esperar que los resúmenes producidos de forma automática vayan a ser capaces de sustituir a los humanos. El proceso de resumen automático no sólo depende de las características propias de los documentos a ser resumidos, sino que es fuertemente dependiente de las necesidades específicas de los usuarios. Por ello, el diseño de un sistema de información para resumen conlleva tener en cuenta varios aspectos. En función de las características de los documentos de entrada y de los resultados deseados es posible aplicar distintas técnicas. Por esta razón surge la necesidad de diseñar una arquitectura flexible que permita la implementación de múltiples tareas de resumen. Este es el objetivo final de la tesis que presento dividido en tres subtemas de investigación. En primer lugar, estudiar el proceso de adaptabilidad de sistemas a diferentes tareas de resumen, como son procesar documentos producidos en diferentes lenguas, dominios y medios (sonido y texto), con la voluntad de diseñar una arquitectura genérica que permita la fácil incorporación de nuevas tareas a través de reutilizar herramientas existentes. En segundo lugar, desarrollar prototipos para distintas tareas, teniendo en cuenta aspectos relacionados con la lengua, el dominio y el medio del documento o conjunto de documentos que requieren ser resumidos, así como aspectos relacionados con el contenido final del resumen: genérico, novedad o resumen que de respuesta a una necesidad especifica. En tercer lugar, crear un marco de evaluación que permita analizar la competencia intrínseca de distintos prototipos al resumir noticias escritas y presentaciones científicas orales.
APA, Harvard, Vancouver, ISO, and other styles
38

沈健誠. "Multi-Document Summarization System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/67547214470615254060.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 89 (ROC calendar)
Most summarization systems at present are designed for a single document. These systems indicate the essence of an individual document, but do not merge similar documents into a single summary. Can we develop a multi-document summarization system that turns related documents about the same event into one summary? If so, the main points of the documents can be displayed clearly and simply in two or three sentences, and users can see in a minute whether these documents are what they want. This reduces the time needed to collect documents and enables users to gather information on the Internet more efficiently. Developing such a multi-document summarization system is the goal of this thesis. The summary produced by the system must satisfy two conditions: it should be indicative and topic-related, tailored to suit the user's query. To achieve this goal, we study the indicativeness and topic relevance of sentences, and the selection of sentences that are important and independent of each other. Finally, unimportant small clauses are deleted to make the final summary more concise. The system generated summaries for 248 documents and fifty topics from NTCIR, with a reduction rate over 95%. Overall, the quality of the summaries produced was satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
39

"Automatic bilingual text document summarization." 2002. http://library.cuhk.edu.hk/record=b5891141.

Full text
Abstract:
Lo Sau-Han Silvia.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 137-143).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Definition of a summary --- p.2
Chapter 1.2 --- Definition of text summarization --- p.3
Chapter 1.3 --- Previous work --- p.4
Chapter 1.3.1 --- Extract-based text summarization --- p.5
Chapter 1.3.2 --- Abstract-based text summarization --- p.8
Chapter 1.3.3 --- Sophisticated text summarization --- p.9
Chapter 1.4 --- Summarization evaluation methods --- p.10
Chapter 1.4.1 --- Intrinsic evaluation --- p.10
Chapter 1.4.2 --- Extrinsic evaluation --- p.11
Chapter 1.4.3 --- The TIPSTER SUMMAC text summarization evaluation --- p.11
Chapter 1.4.4 --- Text Summarization Challenge (TSC) --- p.13
Chapter 1.5 --- Research contributions --- p.14
Chapter 1.5.1 --- Text summarization based on thematic term approach --- p.14
Chapter 1.5.2 --- Bilingual news summarization based on an event-driven approach --- p.15
Chapter 1.6 --- Thesis organization --- p.16
Chapter 2 --- Text Summarization based on a Thematic Term Approach --- p.17
Chapter 2.1 --- System overview --- p.18
Chapter 2.2 --- Document preprocessor --- p.20
Chapter 2.2.1 --- English corpus --- p.20
Chapter 2.2.2 --- English corpus preprocessor --- p.22
Chapter 2.2.3 --- Chinese corpus --- p.23
Chapter 2.2.4 --- Chinese corpus preprocessor --- p.24
Chapter 2.3 --- Corpus thematic term extractor --- p.24
Chapter 2.4 --- Article thematic term extractor --- p.26
Chapter 2.5 --- Sentence score generator --- p.29
Chapter 2.6 --- Chapter summary --- p.30
Chapter 3 --- Evaluation for Summarization using the Thematic Term Approach --- p.32
Chapter 3.1 --- Content-based similarity measure --- p.33
Chapter 3.2 --- Experiments using content-based similarity measure --- p.36
Chapter 3.2.1 --- English corpus and parameter training --- p.36
Chapter 3.2.2 --- Experimental results using content-based similarity measure --- p.38
Chapter 3.3 --- Average inverse rank (AIR) method --- p.59
Chapter 3.4 --- Experiments using average inverse rank method --- p.60
Chapter 3.4.1 --- Corpora and parameter training --- p.61
Chapter 3.4.2 --- Experimental results using AIR method --- p.62
Chapter 3.5 --- Comparison between the content-based similarity measure and the average inverse rank method --- p.69
Chapter 3.6 --- Chapter summary --- p.73
Chapter 4 --- Bilingual Event-Driven News Summarization --- p.74
Chapter 4.1 --- Corpora --- p.75
Chapter 4.2 --- Topic and event definitions --- p.76
Chapter 4.3 --- Architecture of bilingual event-driven news summarization system --- p.77
Chapter 4.4 --- Bilingual event-driven approach summarization --- p.80
Chapter 4.4.1 --- Dictionary-based term translation applying on English news articles --- p.80
Chapter 4.4.2 --- Preprocessing for Chinese news articles --- p.89
Chapter 4.4.3 --- Event clusters generation --- p.89
Chapter 4.4.4 --- Cluster selection and summary generation --- p.96
Chapter 4.5 --- Evaluation for summarization based on event-driven approach --- p.101
Chapter 4.6 --- Experimental results on event-driven summarization --- p.103
Chapter 4.6.1 --- Experimental settings --- p.103
Chapter 4.6.2 --- Results and analysis --- p.105
Chapter 4.7 --- Chapter summary --- p.113
Chapter 5 --- Applying Event-Driven Summarization to a Parallel Corpus --- p.114
Chapter 5.1 --- Parallel corpus --- p.115
Chapter 5.2 --- Parallel documents preparation --- p.116
Chapter 5.3 --- Evaluation methods for the event-driven summaries generated from the parallel corpus --- p.118
Chapter 5.4 --- Experimental results and analysis --- p.121
Chapter 5.4.1 --- Experimental settings --- p.121
Chapter 5.4.2 --- Results and analysis --- p.123
Chapter 5.5 --- Chapter summary --- p.132
Chapter 6 --- Conclusions and Future Work --- p.133
Chapter 6.1 --- Conclusions --- p.133
Chapter 6.2 --- Future work --- p.135
Bibliography --- p.137
Chapter A --- English Stop Word List --- p.144
Chapter B --- Chinese Stop Word List --- p.149
Chapter C --- Event List Items on the Corpora --- p.151
Chapter C.1 --- "Event list items for the topic ""Upcoming Philippine election""" --- p.151
Chapter C.2 --- "Event list items for the topic ""German train derail"" " --- p.153
Chapter C.3 --- "Event list items for the topic ""Electronic service delivery (ESD) scheme"" " --- p.154
Chapter D --- The sample of an English article (9505001.xml). --- p.156
APA, Harvard, Vancouver, ISO, and other styles
40

Tsai, Erh-I., and 蔡而益. "Topic-Based Multi-Document Summarization System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11463643705190386674.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year 99 (ROC calendar)
With the explosion in the amount of information available electronically, information overload has become a major problem and people have to spend more and more time looking for the information they need. Automatic text summarization has drawn much attention in recent years and has shown its practicality in document management and search systems.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Cheng-Chang, and 劉政璋. "Concept Cluster Based News Document Summarization." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/22477396909604899181.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
Academic year 93 (ROC calendar)
A multi-document summarization system can reduce the time a user needs to read a large number of documents. A summarization system, in general, selects salient features from one or many documents to compose a summary, in the hope that the generated summary helps the user understand the meaning of the document(s). This thesis proposes a method to analyze the semantics of news documents. The method is divided into two phases. The first phase attempts to discover the subtle topics, called concepts, hidden in the documents. Because similar nouns, verbs, and adjectives usually co-occur with the same representative term, we describe a concept by the terms around it, and use a semantic network to help describe a concept more accurately. The second phase distinguishes the concepts discovered in the first phase by their word senses. The K-means clustering algorithm is used to gather concepts with the same sense into the same cluster. Clustering reduces the word sense ambiguity problem and merges concepts with similar senses. After these two phases, we choose five features to weight sentences and order the sentences according to their weights. The five features are the length of a cluster, the location of a sentence, tf*idf, the distance between a sentence and the center of the cluster to which it belongs, and the similarity between a sentence and that cluster. We use the news documents of the Document Understanding Conference 2003 (DUC 2003) and its evaluation tool to evaluate the performance of our method.
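To make the second phase more concrete, here is an illustrative sketch that combines the five features named in the abstract (cluster length, sentence location, tf*idf, distance to the cluster centre, similarity to the cluster) into a single sentence weight. The equal-weight sum and the sparse-vector representation are assumptions; the abstract does not specify how the features are actually combined.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def sentence_weight(sent_vec, sent_pos, n_sentences, sent_tfidf, cluster_vecs, max_cluster_size):
    """Combine the five features named in the abstract into one weight (equal weights assumed)."""
    centroid = {}
    for vec in cluster_vecs:
        for t, w in vec.items():
            centroid[t] = centroid.get(t, 0.0) + w / len(cluster_vecs)
    f_size = len(cluster_vecs) / max_cluster_size                 # length of the cluster
    f_pos = 1.0 - sent_pos / max(1, n_sentences)                  # earlier sentences weigh more
    f_tfidf = sent_tfidf                                          # tf*idf score of the sentence
    f_center = cosine(sent_vec, centroid)                         # closeness to the cluster centre
    f_cluster = max((cosine(sent_vec, v) for v in cluster_vecs), default=0.0)  # similarity to the cluster
    return f_size + f_pos + f_tfidf + f_center + f_cluster
```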
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Chih-Lung, and 林志龍. "Mining Association Words for Document Summarization." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/62018588118824230371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Alliheedi, Mohammed. "Multi-document Summarization System Using Rhetorical Information." Thesis, 2012. http://hdl.handle.net/10012/6820.

Full text
Abstract:
Over the past 20 years, research in automated text summarization has grown significantly in the field of natural language processing. The massive availability of scientific and technical information on the Internet, including journals, conferences, and news articles, has attracted the interest of various groups of researchers working in text summarization, including linguists, biologists, database researchers, and information retrieval experts. However, because the information available on the web is ever expanding, reading this sheer volume of information is a significant challenge, and users need appropriate summaries to manage their information needs more efficiently. Although many automated text summarization systems have been proposed in the past twenty years, none of them has incorporated the use of rhetoric; to date, most have relied only on statistical approaches, which do not take into account other features of language such as antimetabole and epanalepsis. Our hypothesis is that rhetoric can provide this type of additional information. This thesis addresses these issues by investigating the role of rhetorical figuration in detecting the salient information in texts. We show that automated multi-document summarization can be improved using metrics based on rhetorical figuration. A corpus of speeches by different U.S. presidents, including campaign, State of the Union, and inaugural speeches, was created to test our proposed multi-document summarization system. Various evaluation metrics have been used to test and compare the summaries produced by our proposed system and by another system. Our proposed multi-document summarization system using rhetorical figures improves the produced summaries and achieves better performance than the MEAD system in most cases, especially for antimetabole, polyptoton, and isocolon. Overall, the results of our system are promising and lead to future progress in this research.
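As a toy illustration of the kind of rhetorical-figure signal such a system could use, the sketch below scores a sentence for antimetabole (words of one clause repeated in reverse order in the next). It is only a simplistic stand-in for the thesis's actual figure detection, which the abstract does not describe.

```python
def antimetabole_score(sentence):
    """Count word pairs that appear in one clause and recur in reverse order in the next."""
    clauses = [c.strip().lower().split() for c in sentence.split(",") if c.strip()]
    best = 0
    for i in range(len(clauses) - 1):
        a, b = clauses[i], clauses[i + 1]
        shared = [w for w in dict.fromkeys(a) if w in b]   # keep a's order, drop duplicates
        inverted = sum(
            1
            for x in range(len(shared))
            for y in range(x + 1, len(shared))
            if b.index(shared[y]) < b.index(shared[x])
        )
        best = max(best, inverted)
    return best

# A positive score flags a candidate antimetabole:
print(antimetabole_score(
    "ask not what your country can do for you, ask what you can do for your country"))
```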
APA, Harvard, Vancouver, ISO, and other styles
44

Lamkhede, Sudarshan. "Multi-document summarization using concept chain graphs." 2005. http://proquest.umi.com/pqdweb?did=994252731&sid=19&Fmt=2&clientId=39334&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (M.S.)--State University of New York at Buffalo, 2005.
Title from PDF title page (viewed on Mar. 16, 2006). Available through UMI ProQuest Digital Dissertations. Thesis adviser: Srihari, Rohini K. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

June-Jei, Kuo. "A Study on Multiple Document Summarization Systems." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0507200616513700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

黃思萱. "Multi-Document Summarization Based on Keyword Clustering." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/53800112370126276087.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
Academic year 90 (ROC calendar)
With the rapid growth of the World Wide Web, more and more information is accessible on-line, and this explosion of information has resulted in an information overload problem. People have no time to read everything and have to decide which information to read. The technology of automatic text summarization is indispensable for dealing with this problem. Text summarization is the process of distilling the most important information from a source to produce an abridged version for a particular user and task. Recent research on multi-document summarization is based on document clustering technology. We propose a method of multi-document summarization based on keyword clustering. In our investigation we develop three keyword clustering methods to produce multi-document summaries: we distill representative keywords from all documents, and then cluster the keywords using connected components, weighted cliques, and a hybrid of both. The purpose of keyword clustering is to gather the information that discusses the same topic or event. Within each cluster, our system computes the weight of each sentence and ranks all sentences by weight; the sentences with the largest weights are chosen as the summary of the documents. Our experiments show that the stricter keyword clustering methods produce better summaries. The system we developed can help people save time and read the important documents.
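The connected-component variant of keyword clustering described above can be sketched as follows: link two keywords when they co-occur in enough sentences, and take each connected component of the resulting graph as one cluster. The co-occurrence criterion and threshold are illustrative assumptions; the weighted-clique and hybrid variants would change only the grouping step.

```python
from collections import defaultdict
from itertools import combinations

def keyword_clusters(sentences, keywords, min_cooccur=2):
    """Cluster keywords by the connected components of a co-occurrence graph."""
    cooccur = defaultdict(int)
    for sent in sentences:
        present = sorted({k for k in keywords if k in sent})
        for a, b in combinations(present, 2):
            cooccur[(a, b)] += 1
    graph = defaultdict(set)
    for (a, b), n in cooccur.items():
        if n >= min_cooccur:               # link keywords that co-occur often enough
            graph[a].add(b)
            graph[b].add(a)
    seen, clusters = set(), []
    for k in keywords:
        if k in seen:
            continue
        stack, component = [k], set()      # depth-first search for one component
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        clusters.append(component)
    return clusters
```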
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Sheng-Jyun, and 王聖竣. "A Study of Automatic Document Summarization Retrieval." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/30839849743972595166.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Information Management (Master's Program)
Academic year 99 (ROC calendar)
Automatic document summarization can help users quickly understand the content of an article: it can extract the important meaning and knowledge and filter out unnecessary information. This research combines a statistical method and a linguistic method to build an automatic document summarization retrieval model. The entropy method is used as the statistical method to extract important features, and the linguistic method is used to explore the relationships among features and determine their importance. First, we extract the features of the documents. Second, we use statistical methods to calculate the weights of the features. Finally, we score each sentence through the linguistic approach and obtain the important sentences of the documents. This research conducts two experiments to verify the proposed approach. The results reveal that the proposed model extracts better features and that the entropy method outperforms the other three statistical methods. The results show that the proposed framework is feasible and can serve as a reference. Keywords: Automatic Article Summary, Statistical Approach, Linguistic Approach.
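As an illustration of the entropy-style statistical weighting mentioned in the abstract, the sketch below weights a term by how unevenly its occurrences are spread over the documents; the exact formula used in the thesis is not given in the abstract, so this is an assumption.

```python
import math

def entropy_weights(term_doc_counts):
    """term_doc_counts: dict mapping a term to its per-document occurrence counts."""
    weights = {}
    for term, counts in term_doc_counts.items():
        n_docs, total = len(counts), sum(counts)
        if total == 0 or n_docs < 2:
            weights[term] = 0.0
            continue
        probs = [c / total for c in counts if c > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        # 1.0 for a term concentrated in one document, 0.0 for a term spread evenly.
        weights[term] = 1.0 - entropy / math.log(n_docs)
    return weights
```

Such term weights could then feed into the sentence scoring that the linguistic approach performs in the model described above.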
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Pei-Chen, and 楊佩臻. "Using Sentence Network to Automatic Document Summarization." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93066067306713516291.

Full text
Abstract:
Master's thesis
National Central University
Department of Information Management
Academic year 101 (ROC calendar)
This thesis proposes a graph-based summarization method that builds a sentence network in which the relations between sentences are derived from the Normalized Google Distance (NGD). The method removes the dependence on external resources such as corpora and lexical databases by using only the words in the documents and search results. A Wiki search engine is used to calculate NGD and find the relations between words, from which the keywords of the documents are identified. A vector space model is built from these keywords, and the similarity between sentences is calculated to build the sentence network. The most important sentences are then extracted using link analysis. The experimental results show that the ROUGE score of the proposed graph-based single-document summarization method is better than that of other machine-learning methods, and the ROUGE score of the proposed graph-based multi-document summarization method is only slightly lower than that of a few peers using machine-learning methods. This shows that the proposed method is an effective unsupervised document summarization approach that requires no external resources such as corpora or lexical databases.
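Two building blocks of the method described above can be sketched directly: the Normalized Google Distance computed from hit counts, and a PageRank-style link analysis over the sentence-similarity network. The hit counts would come from the Wiki search engine mentioned in the abstract; here they are simply function arguments, and the damping factor is a conventional choice rather than the thesis's setting.

```python
import math

def ngd(hits_x, hits_y, hits_xy, n_pages):
    """Normalized Google Distance from (positive) hit counts of x, y and of x together with y."""
    fx, fy, fxy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    return (max(fx, fy) - fxy) / (math.log(n_pages) - min(fx, fy))

def sentence_ranks(similarity, damping=0.85, iters=50):
    """PageRank-style scores over a sentence network given as a similarity matrix."""
    n = len(similarity)
    ranks = [1.0 / n] * n
    out_weight = [sum(row) or 1.0 for row in similarity]
    for _ in range(iters):
        ranks = [
            (1 - damping) / n
            + damping * sum(ranks[j] * similarity[j][i] / out_weight[j] for j in range(n))
            for i in range(n)
        ]
    return ranks   # higher-ranked sentences are candidates for the extract
```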
APA, Harvard, Vancouver, ISO, and other styles
49

Kuo, June-Jei, and 郭俊桔. "A Study on Multiple Document Summarization Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84838785413103052977.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 94 (ROC calendar)
In order to provide a generic summary that helps on-line readers absorb news information from multiple sources, this dissertation studies the issues related to multi-document summarization, e.g., event clustering, sentence selection, redundancy avoidance, sentence ordering and summary evaluation, and focuses on two major modules: event clustering and summary generation. Besides the conventional features, e.g., lexical information or part-of-speech, the term frequency, document frequency and paragraph dispersion of a word in a document are used to propose informative words, which can represent the corresponding document. In the event clustering module, to further understand a document we introduce semantic features such as event words and co-reference chains. Controlled vocabulary mining from co-reference chains is also proposed to solve the cross-document named entity unification issue. Meanwhile, we propose a novel dynamic threshold model to enhance the performance of event clustering. In the summary generation module, we propose a temporal tagger to handle temporal resolution and provide sentence dates for sentence ordering, and we introduce latent semantic analysis (LSA) to tackle the sentence selection issue. To tackle the summary length issue, a sentence reduction algorithm using both event constituent words and informative words is also proposed. The experimental results on both the content and the readability of the generated multi-document summaries are promising. To further investigate the performance of the proposed semantic features, headline generation and multilingual multi-document summarization are also studied. In addition, we tackle the automatic evaluation issue in summary evaluation by introducing question answering (QA), and promising results are obtained as well.
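For the LSA-based sentence selection step mentioned in the abstract, a compact sketch in the spirit of Gong and Liu's approach is shown below: take the SVD of a term-by-sentence matrix and pick the strongest sentence for each leading latent topic. Whether the dissertation selects sentences exactly this way is not stated, so the details are assumptions.

```python
import numpy as np

def lsa_select(term_sentence_matrix, n_topics=3):
    """Pick one sentence per leading latent topic from a term-by-sentence matrix."""
    # term_sentence_matrix: shape (n_terms, n_sentences), e.g. tf or tf*idf weights.
    _, _, vt = np.linalg.svd(term_sentence_matrix, full_matrices=False)
    chosen = []
    for topic in range(min(n_topics, vt.shape[0])):
        for idx in np.argsort(-np.abs(vt[topic])):
            if int(idx) not in chosen:     # avoid selecting the same sentence twice
                chosen.append(int(idx))
                break
    return chosen                          # indices of the selected sentences
```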
APA, Harvard, Vancouver, ISO, and other styles
50

Tsai, Bing-Hong, and 蔡秉宏. "Extractive Document Summarization based on BERT Model." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394076%22.&searchmode=basic.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 107 (ROC calendar)
Document summarization is an important application of natural language processing (NLP). In this thesis, we propose an extractive summarization model based on the BERT model. Our idea is to cast extractive document summarization as a key-sentence selection problem and adapt BERT to learn a classification model that predicts a score for each token; the scores of the tokens in a sentence are then aggregated and averaged to obtain the sentence score. The experimental evaluation on the CNN/DailyMail dataset shows that the proposed BERT adaptation improves performance on the extractive document summarization task from 42.99 to 43.42 in terms of ROUGE-1 score.
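The aggregation step described above (token scores summed per sentence and then averaged) can be sketched without reproducing the BERT model itself; the per-token scores are assumed to come from the classifier, and all names here are illustrative.

```python
def sentence_scores(token_scores, token_sentence_ids):
    """token_scores[i]: classifier score of token i; token_sentence_ids[i]: its sentence index."""
    sums, counts = {}, {}
    for score, sid in zip(token_scores, token_sentence_ids):
        sums[sid] = sums.get(sid, 0.0) + score      # aggregate token scores per sentence
        counts[sid] = counts.get(sid, 0) + 1
    return {sid: sums[sid] / counts[sid] for sid in sums}   # then average

def top_k_sentences(token_scores, token_sentence_ids, k=3):
    scores = sentence_scores(token_scores, token_sentence_ids)
    return sorted(scores, key=scores.get, reverse=True)[:k]  # indices of the extracted sentences
```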
APA, Harvard, Vancouver, ISO, and other styles