Dissertations / Theses on the topic 'Document Summarization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Document Summarization.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Tohalino, Jorge Andoni Valverde. "Extractive document summarization using complex networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24102018-155954/.
Due to the large amount of textual information available on the Internet, the task of automatic document summarization has gained significant importance. Document summarization has become important because its focus is the development of techniques aimed at finding relevant and concise content in large volumes of information without altering its original meaning. The goal of this Master's work is to use concepts from graph theory for extractive document summarization, both single-document summarization (SDS) and multi-document summarization (MDS). In this work, documents are modeled as networks in which sentences are represented as nodes, with the aim of extracting the most relevant sentences through ranking algorithms. The edges between nodes are established in different ways. The first approach for computing edges is based on the number of nouns shared by two sentences (network nodes). Another approach for creating an edge is the similarity between two sentences; to compute this similarity, the vector space model based on Tf-Idf weighting and word embeddings were used as vector representations of the sentences. In addition, for the multi-document summarization task we distinguish edges that link sentences from different documents (inter-layer) from those that connect sentences of the same document (intra-layer), using multilayer network models. In this approach, each layer of the network represents one document of the collection to be summarized. Besides the measurements typically used in complex networks, such as node degree, clustering coefficient and shortest paths, the characterization of the network is also guided by dynamical measurements of complex networks, including symmetry, accessibility and absorption time. The generated summaries were evaluated using different corpora for Portuguese and English, and the ROUGE-1 metric was used to validate them. The results suggest that simpler models, such as noun-based and Tf-Idf-based networks, performed better than the models based on word embeddings. Moreover, excellent results were obtained using the multilayer network representation of documents for MDS. Finally, we conclude that several measurements can be used to improve the characterization of networks for the summarization task.
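To make the sentence-network idea above concrete, the following is a minimal sketch of a graph-based extractive summarizer: sentences become nodes, edges are weighted by Tf-Idf cosine similarity, and a ranking algorithm selects the top sentences. The similarity threshold, the use of PageRank and the scikit-learn/networkx libraries are illustrative assumptions, not details taken from the thesis.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def extractive_summary(sentences, n_sentences=3, threshold=0.1):
    # Vectorize sentences with Tf-Idf and compute pairwise cosine similarity.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)

    # Build the sentence network: one node per sentence, edges above threshold.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > threshold:
                graph.add_edge(i, j, weight=sim[i, j])

    # Rank nodes (sentences) and keep the best ones in original order.
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n_sentences]
    return [sentences[i] for i in sorted(top)]
```

The same skeleton accommodates the other edge definitions mentioned in the abstract (shared nouns, word-embedding similarity) by swapping the similarity computation.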
Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.
Huang, Fang. "Multi-document summarization with latent semantic analysis." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419255.
Grant, Harald. "Extractive Multi-document Summarization of News Articles." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158275.
Geiss, Johanna. "Latent semantic sentence clustering for multi-document summarization." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609761.
Full textChellal, Abdelhamid. "Event summarization on social media stream : retrospective and prospective tweet summarization." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30118/document.
User-generated content on social media, such as Twitter, in many cases provides the latest news before traditional media, which makes it possible to obtain a retrospective summary of events and to be updated in a timely fashion whenever a new development occurs. However, social media, while being a valuable source of information, can also be overwhelming given the volume and the velocity of published information. To shield users from being overwhelmed by irrelevant and redundant posts, retrospective summarization and prospective notification (real-time summarization) were introduced as two complementary tasks of information seeking on document streams. The former aims to select a list of relevant and non-redundant tweets that capture "what happened". In the latter, systems monitor the live post stream and push relevant and novel notifications as soon as possible. Our work falls within these frameworks and focuses on developing tweet summarization approaches for the two aforementioned scenarios. It aims at providing summaries that capture the key aspects of the event of interest, to help users efficiently acquire information and follow the development of long ongoing events from social media. Nevertheless, the tweet summarization task faces many challenges that stem from, on the one hand, the high volume, velocity and variety of the published information and, on the other hand, the quality of tweets, which can vary significantly. In prospective notification, the core task is relevance and novelty detection in real time. For timeliness, a system may choose to push new updates in real time or may choose to trade timeliness for higher notification quality. Our contributions address these levels. First, we introduce the Word Similarity Extended Boolean Model (WSEBM), a relevance model that does not rely on stream statistics and takes advantage of a word embedding model. We use word similarity instead of the traditional weighting techniques; by doing this, we overcome the shortness and word mismatch issues in tweets. The intuition behind our proposition is that the context-aware similarity measure in word2vec is able to consider different words with the same semantic meaning and hence offsets the word mismatch issue when calculating the similarity between a tweet and a topic. Second, we propose to compute the novelty score of an incoming tweet against all words of the tweets already pushed to the user instead of using pairwise comparison. The proposed novelty detection method scales better and reduces the execution time, which fits real-time tweet filtering. Third, we propose an adaptive Learning to Filter approach that leverages social signals as well as query-dependent features. To overcome the issue of setting a relevance threshold, we use a binary classifier that predicts the relevance of the incoming tweet. In addition, we show the gain that can be achieved by taking advantage of ongoing relevance feedback. Finally, we adopt a real-time push strategy and show that the proposed approach achieves promising performance in terms of quality (relevance and novelty) at a low latency cost, whereas state-of-the-art approaches tend to trade latency for higher quality. This thesis also explores a novel approach to generate a retrospective summary that follows a different paradigm from the majority of state-of-the-art methods. We consider summary generation as an optimization problem that takes into account topical and temporal diversity.
Tweets are filtered and incrementally clustered into two cluster types, namely topical clusters based on content similarity and temporal clusters based on publication time. Summary generation is formulated as an integer linear program in which the unknown variables are binary, the objective function is to be maximized, and constraints ensure that at most one post per cluster is selected while respecting the defined summary length limit.
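The integer linear program sketched above can be illustrated with a toy formulation: binary variables select tweets, the objective rewards relevance, and constraints enforce at most one post per cluster and a length budget. The scores, cluster labels and the use of the PuLP solver below are illustrative assumptions, not details from the thesis.

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

# Hypothetical candidate tweets with a relevance score, a cluster id and a length.
tweets = [
    {"id": 0, "score": 0.9, "cluster": "topic_a", "length": 18},
    {"id": 1, "score": 0.7, "cluster": "topic_a", "length": 22},
    {"id": 2, "score": 0.8, "cluster": "topic_b", "length": 15},
]
max_length = 40

prob = LpProblem("tweet_summary", LpMaximize)
x = {t["id"]: LpVariable(f"x_{t['id']}", cat=LpBinary) for t in tweets}

# Objective: maximize the total relevance of the selected tweets.
prob += lpSum(t["score"] * x[t["id"]] for t in tweets)

# At most one tweet per (topical or temporal) cluster.
for cluster in {t["cluster"] for t in tweets}:
    prob += lpSum(x[t["id"]] for t in tweets if t["cluster"] == cluster) <= 1

# Respect the summary length budget.
prob += lpSum(t["length"] * x[t["id"]] for t in tweets) <= max_length

prob.solve()
summary = [t["id"] for t in tweets if x[t["id"]].value() == 1]
print(summary)
```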
Linhares, Pontes Elvys. "Compressive Cross-Language Text Summarization." Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0232/document.
The popularization of social networks and digital documents quickly increased the information available on the Internet. However, this huge amount of data cannot be analyzed manually. Natural Language Processing (NLP) analyzes the interactions between computers and human languages in order to process and analyze natural language data. NLP techniques incorporate a variety of methods, including linguistics, semantics and statistics, to extract entities and relationships and to understand a document. Among several NLP applications, we are interested, in this thesis, in cross-language text summarization, which produces a summary in a language different from the language of the source documents. We also analyzed other NLP tasks (word encoding representation, semantic similarity, sentence and multi-sentence compression) to generate more stable and informative cross-lingual summaries. Most NLP applications (including all types of text summarization) use some kind of similarity measure to analyze and compare the meaning of words, chunks, sentences and texts. One way to analyze this similarity is to generate a representation of these sentences that contains their meaning. The meaning of sentences is defined by several elements, such as the context of words and expressions, the order of words and the previous information. Simple metrics, such as the cosine metric and the Euclidean distance, provide a measure of similarity between two sentences; however, they do not analyze the order of words or multi-word expressions. Analyzing these problems, we propose a neural network model that combines recurrent and convolutional neural networks to estimate the semantic similarity of a pair of sentences (or texts) based on the local and general contexts of words. Our model predicted better similarity scores than the baselines by better analyzing the local and general meanings of words and multi-word expressions. In order to remove redundancies and non-relevant information from similar sentences, we propose a multi-sentence compression method that compresses similar sentences by fusing them into correct and short compressions that contain the main information of these similar sentences. We model clusters of similar sentences as word graphs. Then, we apply an integer linear programming model that guides the compression of these clusters based on a list of keywords. We look for a path in the word graph that has good cohesion and contains the maximum number of keywords. Our approach outperformed the baselines by generating more informative and correct compressions for the French, Portuguese and Spanish languages. Finally, we combine these previous methods to build a cross-language text summarization system. Our system is an {English, French, Portuguese, Spanish}-to-{English, French} cross-language text summarization framework that analyzes the information in both languages to identify the most relevant sentences. Inspired by compressive text summarization methods in monolingual analysis, we adapt our multi-sentence compression method to this problem in order to keep just the main information. Our system proves to be a good alternative to compress redundant information and to preserve relevant information. It improves informativeness scores without losing grammatical quality for French-to-English cross-lingual summaries. Analyzing {English, French, Portuguese, Spanish}-to-{English, French} cross-lingual summaries, our system significantly outperforms extractive baselines in the state of the art for all these languages. In addition, we analyze the cross-language text summarization of transcript documents. Our approach achieved better and more stable scores even for these documents, which have grammatical errors and missing information.
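A much simplified sketch of the word-graph idea behind the multi-sentence compression step described above: similar sentences are merged into a graph of word transitions, and a short path from start to end yields a candidate compression. The inverse-frequency edge weights and the surface-word node merging are simplifying assumptions; the thesis instead guides the compression with an integer linear programming model over keywords and cohesion.

```python
from collections import Counter
import networkx as nx


def compress(similar_sentences):
    # Count word-to-word transitions across all sentences of the cluster.
    edges = Counter()
    for sent in similar_sentences:
        tokens = ["<start>"] + sent.lower().split() + ["<end>"]
        for a, b in zip(tokens, tokens[1:]):
            edges[(a, b)] += 1

    # Frequent transitions get cheaper edges, so shared wording is favoured.
    graph = nx.DiGraph()
    for (a, b), freq in edges.items():
        graph.add_edge(a, b, weight=1.0 / freq)

    # A short path through the word graph is a candidate compression.
    path = nx.shortest_path(graph, "<start>", "<end>", weight="weight")
    return " ".join(path[1:-1])
```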
Kipp, Darren. "Shallow semantics for topic-oriented multi-document automatic text summarization." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27772.
Hennig, Leonhard [Verfasser], and Sahin [Akademischer Betreuer] Albayrak. "Content Modeling for Automatic Document Summarization / Leonhard Hennig. Betreuer: Sahin Albayrak." Berlin: Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1017593698/34.
Tsai, Chun-I. "A Study on Neural Network Modeling Techniques for Automatic Document Summarization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395940.
Full textKeskes, Iskandar. "Discourse analysis of arabic documents and application to automatic summarization." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30023/document.
Within a discourse, texts and conversations are not just a juxtaposition of words and sentences. They are rather organized in a structure in which discourse units are related to each other so as to ensure both discourse coherence and cohesion. Discourse structure has been shown to be useful in many NLP applications including machine translation, natural language generation and language technology in general. The usefulness of discourse in NLP applications mainly depends on the availability of powerful discourse parsers. To build such parsers and improve their performance, several resources have been manually annotated with discourse information within different theoretical frameworks. Most available resources are in English. Recently, several efforts have been undertaken to develop manually annotated discourse information for other languages such as Chinese, German, Turkish, Spanish and Hindi. Surprisingly, discourse processing in Modern Standard Arabic (MSA) has received less attention despite the fact that MSA is a language with more than 422 million speakers in 22 countries. Computational processing of the Arabic language has received great attention in the literature for over twenty years. Several resources and tools have been built to deal with Arabic non-concatenative morphology and Arabic syntax, going from shallow to deep parsing. However, the field is still largely unexplored at the discourse level. As far as we know, the sole effort towards Arabic discourse processing was done in the Leeds Arabic Discourse Treebank, which extends the Penn Discourse TreeBank model to MSA. In this thesis, we propose to go beyond the annotation of explicit relations that link adjacent units, by completely specifying the semantic scope of each discourse relation, making transparent an interpretation of the text that takes into account the semantic effects of discourse relations. In particular, we propose the first effort towards a semantically driven approach to Arabic texts following the Segmented Discourse Representation Theory (SDRT). Our main contributions are: a study of the feasibility of building recursive and complete discourse structures of Arabic texts; in particular, we propose an annotation scheme for the full discourse coverage of Arabic texts, in which each constituent is linked to other constituents. A document is then represented by an oriented acyclic graph, which captures explicit and implicit relations as well as complex discourse phenomena, such as long-distance attachments, long-distance discourse pop-ups and crossed dependencies. A novel discourse relation hierarchy: we study the rhetorical relations from a semantic point of view by focusing on their effect on meaning and not on how they are lexically triggered by discourse connectives, which are often ambiguous, especially in Arabic. A thorough quantitative analysis (in terms of discourse connectives, relation frequencies, proportion of implicit relations, etc.) and qualitative analysis (inter-annotator agreements and error analysis) of the annotation campaign. An automatic discourse parser, where we investigate both automatic segmentation of Arabic texts into elementary discourse units and automatic identification of explicit and implicit Arabic discourse relations. An application of our discourse parser to Arabic text summarization, where we compare tree-based vs. graph-based discourse representations for producing indicative summaries and show that the full discourse coverage of a document is definitely a plus.
Qumsiyeh, Rani Majed. "Easy to Find: Creating Query-Based Multi-Document Summaries to Enhance Web Search." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2713.
Karlsson, Simon. "Using semantic folding with TextRank for automatic summarization." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210040.
This thesis deals with automatic text summarization and how semantic folding can be used as a similarity measure between sentences in the TextRank algorithm. The method was implemented and compared with two common similarity measures: cosine similarity between tf-idf vectors and the number of overlapping terms in two sentences. The three methods were implemented, and the linguistic features used in their construction were stop words, part-of-speech filtering and a stemmer. Five different part-of-speech filters were used, with different mixtures of nouns, verbs and adjectives. The three methods were evaluated by summarizing documents from DUC and comparing these against gold summaries created by human judges. The comparison between system summaries and gold summaries was made with the ROUGE-1 measure. The algorithm with semantic folding performed worst of the three compared methods, although only 0.0096 lower in F-score than cosine similarity between tf-idf vectors, which performed best. For semantic folding, the average precision was 46.2% and recall 45.7% for the best-performing part-of-speech filter.
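ROUGE-1, the measure used for the comparison above, is essentially unigram overlap between a system summary and a gold summary. A minimal sketch, without the stemming or stopword handling that full ROUGE implementations may apply:

```python
from collections import Counter


def rouge_1(system_summary, reference_summary):
    # Count overlapping unigrams between the two summaries.
    sys_counts = Counter(system_summary.lower().split())
    ref_counts = Counter(reference_summary.lower().split())
    overlap = sum((sys_counts & ref_counts).values())

    precision = overlap / max(sum(sys_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f_score
```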
Aker, Ahmet. "Entity type modeling for multi-document summarization : generating descriptive summaries of geo-located entities." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/5138/.
Fang, Yimai. "Proposition-based summarization with a coherence-driven incremental model." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287468.
Full textBost, Xavier. "A storytelling machine ? : automatic video summarization : the case of TV series." Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0216/document.
These past ten years, TV series have become increasingly popular. In contrast to classical TV series consisting of narratively self-sufficient episodes, modern TV series develop continuous plots over dozens of successive episodes. However, the narrative continuity of modern TV series directly conflicts with the usual viewing conditions: due to modern viewing technologies, the new seasons of TV series are being watched over short periods of time. As a result, viewers are largely disengaged from the plot, both cognitively and emotionally, when about to watch new seasons. Such a situation provides video summarization with remarkably realistic use-case scenarios, which we detail in Chapter 1. Furthermore, automatic movie summarization, long restricted to trailer generation based on low-level features, finds in TV series an unprecedented opportunity to address, in well-defined conditions, the so-called semantic gap: summarization of narrative media requires content-oriented approaches capable of bridging the gap between low-level features and human understanding. We review in Chapter 2 the two main approaches adopted so far to address automatic movie summarization. Chapter 3 is dedicated to the various subtasks needed to build the intermediary representations on which our summarization framework relies: Section 3.2 focuses on video segmentation, whereas the rest of Chapter 3 is dedicated to the extraction of different mid-level features, either saliency-oriented (shot size, background music) or content-related (speakers). In Chapter 4, we make use of social network analysis as a possible way to model the plot of modern TV series: the narrative dynamics can be properly captured by the evolution over time of the social network of interacting characters. Nonetheless, we have to address here the sequential nature of the narrative when taking instantaneous views of the state of the relationships between the characters. We show that standard time-windowing approaches cannot properly handle this case, and we detail our own method for extracting dynamic social networks from narrative media. Chapter 5 is dedicated to the final generation and evaluation of character-oriented summaries, able both to reflect the plot dynamics and to emotionally re-engage viewers in the narrative. We evaluate our framework by performing a large-scale user study in realistic conditions.
MELLO, Rafael Ferreira Leite de. "A solution to extractive summarization based on document type and a new measure for sentence similarity." UNIVERSIDADE FEDERAL DE PERNAMBUCO, 2015. https://repositorio.ufpe.br/handle/123456789/15257.
The Internet is an enormous and fast-growing digital repository encompassing billions of documents of varying subjects, quality, reliability, etc. It is increasingly difficult to scavenge useful information from it. Thus, it is necessary to provide automatic techniques that allow users to save time and resources. Automatic text summarization techniques may offer a way out of this problem. Text summarization (TS) aims at automatically compressing one or more documents to present their main ideas in less space. TS platforms receive one or more documents as input and generate a summary of them. In recent years, a variety of text summarization methods have been proposed. However, due to the different document types (such as news, blogs, and scientific articles), it is difficult to create a general TS application that produces expressive summaries for every type. Another relevant related problem is measuring the degree of similarity between sentences, which is used in applications such as text summarization, information retrieval, image retrieval, text categorization, and machine translation. Recent works report several efforts to evaluate sentence similarity by representing sentences using bag-of-words vectors or a tree of the syntactic information among words. However, most of these approaches do not take into consideration sentence meaning and word order. This thesis proposes: (i) a new text summarization solution which identifies the document type before performing the summarization, and (ii) a new sentence similarity measure based on lexical, syntactic and semantic evaluation to deal with the meaning and word order problems. The prior identification of the document type allows the summarization solution to select the methods that are most suitable for each type of text. This thesis also performs a detailed assessment of the most widely used text summarization methods in order to select those that create more informative summaries in the news, blog and scientific article contexts. The proposed sentence similarity measure is completely unsupervised and reaches results similar to those of human annotators on the dataset proposed by Li et al. The proposed measure was satisfactorily applied to evaluate the similarity between summaries and to eliminate redundancy in multi-document summarization.
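As a rough illustration of a sentence similarity measure that combines lexical, semantic and word-order evidence in the spirit of this abstract, the sketch below mixes set overlap, WordNet path similarity and a simple word-order penalty. The equal weighting and the use of NLTK's WordNet are illustrative assumptions, not the measure proposed in the thesis.

```python
# Requires the WordNet data: nltk.download("wordnet")
from nltk.corpus import wordnet as wn


def word_sim(w1, w2):
    # Best path similarity over all synset pairs; identical words score 1.
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=1.0 if w1 == w2 else 0.0)


def sentence_similarity(s1, s2):
    t1, t2 = s1.lower().split(), s2.lower().split()
    # Lexical term: word overlap (Jaccard).
    lexical = len(set(t1) & set(t2)) / len(set(t1) | set(t2))
    # Semantic term: average best WordNet similarity of each word of s1 in s2.
    semantic = sum(max(word_sim(a, b) for b in t2) for a in t1) / len(t1)
    # Word-order term: penalize shared words appearing in different positions.
    shared = [w for w in t1 if w in t2]
    displacement = sum(abs(t1.index(w) - t2.index(w)) for w in shared)
    order = 1.0 / (1.0 + displacement)
    return (lexical + semantic + order) / 3.0
```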
Varadarajan, Ramakrishna R. "Ranked Search on Data Graphs." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/220.
AbuRa'ed, Ahmed Ghassan Tawfiq. "Automatic generation of descriptive related work reports." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669975.
Full textLa sección de trabajos relacionados de un artículo científico resume e integra información clave de una lista de documentos científicos relacionados con el trabajo que se presenta. Para redactar esta sección del artículo científico el autor debe identificar, condensar/resumir y combinar información relevante de diferentes artículos. Esta tarea es complicada debido al gran volumen disponible de artículos científicos. En este contexto, la generación automática de tales secciones es un problema importante a abordar. La generación automática de secciones de trabajo relacionados puede ser considerada como una instancia del problema de resumen de documentos múltiples donde, dada una lista de documentos científicos, el objetivo es resumir automáticamente esos documentos científicos y generar la sección de trabajos relacionados. Para estudiar este problema, hemos creado un corpus de secciones de trabajos relacionados anotado manualmente y procesado automáticamente. Asimismo, hemos investigado la relación entre las citaciones y el artículo científico que se cita para modelar adecuadamente las relaciones entre documentos y, así, informar nuestro método de resumen automático. Además, hemos investigado la identificación de citaciones implícitas a un artículo científico dado que es una tarea importante en varias actividades de minería de textos científicos. Presentamos métodos extractivos y abstractivos para resumir una lista de artículos científicos utilizando su red de citaciones. El enfoque extractivo sigue tres etapas: cálculo de la relevancia las oraciones de cada artículo en función de la red de citaciones, selección de oraciones de cada artículo científico para integrarlas en el resumen y generación de la sección de trabajos relacionados agrupando las oraciones por tema. Por otro lado, el enfoque abstractivo intenta generar citaciones para incluirlas en un resumen utilizando redes neuronales y recursos que hemos creado específicamente para esta tarea. La tesis también presenta y discute la evaluación automática y manual de los resúmenes generados automáticamente, demostrando la viabilidad de los enfoques propuestos.
Una secció d’antecedents o estat de l’art d’un articulo científic resumeix la informació clau d'una llista de documents científics relacionats amb el treball que es presenta. Per a redactar aquesta secció de l’article científic l’autor ha d’identificar, condensar / resumir i combinar informació rellevant de diferents articles. Aquesta activitat és complicada per causa del gran volum disponible d’articles científics. En aquest context, la generació automàtica d’aquestes seccions és un problema important a abordar. La generació automàtica d’antecedents o d’estat de l’art pot considerar-se com una instància del problema de resum de documents. Per estudiar aquest problema, es va crear un corpus de seccions d’estat de l’art d’articles científics manualment anotat i processat automàticament. Així mateix, es va investigar la relació entre citacions i l’article científic que es cita per modelar adequadament les relacions entre documents i, així, informar el nostre mètode de resum automàtic. A més, es va investigar la identificació de citacions implícites a un article científic que és un problema important en diverses activitats de mineria de textos científics. Presentem mètodes extractius i abstractius per resumir una llista d'articles científics utilitzant el conjunt de citacions de cada article. L’enfoc extractiu segueix tres etapes: càlcul de la rellevància de les oracions de cada article en funció de les seves citacions, selecció d’oracions de cada article científic per a integrar-les en el resum i generació de la secció de treballs relacionats agrupant les oracions per tema. Per un altre costat, l’enfoc abstractiu implementa la generació de citacions per a incloure-les en un resum que utilitza xarxes neuronals i recursos que hem creat específicament per a aquest tasca. La tesi també presenta i discuteix l'avaluació automàtica i el manual dels resums generats automàticament, demostrant la viabilitat dels mètodes proposats.
Camargo, Renata Tironi de. "Investigação de estratégias de sumarização humana multidocumento." Universidade Federal de São Carlos, 2013. https://repositorio.ufscar.br/handle/ufscar/5781.
Multi-document human summarization (MHS), which is the production of a manual summary from a collection of texts from different sources on the same subject, is a little explored linguistic task. Considering the fact that single-document summaries comprise information that presents recurrent features able to reveal summarization strategies, we aimed to investigate multi-document summaries in order to identify MHS strategies. For the identification of MHS strategies, the source text sentences from the CSTNews corpus (CARDOSO et al., 2011) were manually aligned to their human summaries. The corpus has 50 clusters of news texts and their multi-document summaries in Portuguese. The alignment thus revealed the origin of the information selected to compose the summaries. In order to identify whether the selected information shows recurrent features, the aligned (and non-aligned) sentences were semi-automatically characterized considering a set of linguistic attributes identified in related work. These attributes translate the content selection strategies of single-document summarization and the clues about MHS. Through the manual analysis of the characterizations of the aligned and non-aligned sentences, we identified that the selected sentences commonly have certain attributes, such as sentence location in the text and redundancy. This observation was confirmed by a set of formal rules learned by a Machine Learning (ML) algorithm from the same characterizations. These rules thus translate MHS strategies. When the rules were learned and tested on CSTNews by ML, the precision rate was 71.25%. To assess the relevance of the rules, we performed two different intrinsic evaluations: (i) verification of the occurrence of the same strategies in another corpus, and (ii) comparison of the quality of summaries produced by the MHS strategies with the quality of summaries produced by different strategies. Regarding evaluation (i), which was automatically performed by ML, the rules learned from CSTNews were tested on a different newspaper corpus and their precision was 70%, which is very close to the precision obtained on the training corpus (CSTNews). Concerning evaluation (ii), the quality, manually evaluated by 10 computational linguists, was considered better than the quality of the other summaries. Besides describing features of multi-document summaries, this work has the potential to support multi-document automatic summarization, helping it become more linguistically motivated. This task consists of automatically generating multi-document summaries and, therefore, has been based on the adjustment of strategies identified in single-document summarization or only on unconfirmed clues about MHS. Based on this work, the automatic process of content selection in multi-document summarization methods may be performed based on strategies systematically identified in MHS.
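The rule-learning step described above can be illustrated with a small sketch: each sentence is described by features such as its relative position and a redundancy score, a classifier is trained on whether the sentence was aligned to a human summary, and the learned tree is printed as explicit rules. The feature values and the use of scikit-learn's decision tree are illustrative assumptions, not the actual setup of the thesis.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per sentence: [relative position in its text, redundancy score].
X = [[0.0, 0.8], [0.1, 0.7], [0.9, 0.1], [0.5, 0.2], [0.05, 0.9], [0.8, 0.3]]
# 1 = sentence was aligned to a human summary, 0 = it was not.
y = [1, 1, 0, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The learned tree can be read as explicit selection rules.
print(export_text(clf, feature_names=["position", "redundancy"]))
```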
Ermakova, Liana. "Short text contextualization in information retrieval : application to tweet contextualization and automatic query expansion." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20023/document.
Efficient communication tends to follow the principle of least effort. According to this principle, interlocutors using a given language do not want to work any harder than necessary to reach understanding. This fact leads to the extreme compression of texts, especially in electronic communication, e.g. microblogs, SMS and search queries. However, sometimes these texts are not self-contained and need to be explained, since understanding them requires knowledge of terminology, named entities or related facts. The main goal of this research is to provide a context to a user or a system from a textual resource. The first aim of this work is to help a user to better understand a short message by extracting a context from an external source such as a text collection, the Web or Wikipedia by means of text summarization. To this end we developed an approach for automatic multi-document summarization and applied it to short message contextualization, in particular to tweet contextualization. The proposed method is based on named entity recognition, part-of-speech weighting and sentence quality measuring. In contrast to previous research, we introduced an algorithm for smoothing from the local context. Our approach exploits the topic-comment structure of a text. Moreover, we developed a graph-based algorithm for sentence reordering. The method has been evaluated at the INEX/CLEF tweet contextualization track. We provide the evaluation results over the 4 years of the track. The method was also adapted to snippet retrieval. The evaluation results indicate good performance of the approach.
Boukadida, Haykel. "Création automatique de résumés vidéo par programmation par contraintes." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S074/document.
This thesis focuses on the issue of automatic video summarization. The idea is to create an adaptive video summary that takes into account a set of rules defined on the audiovisual content on the one hand, and that adapts to the user's preferences on the other hand. We propose a novel approach that considers the problem of automatic video summarization as a constraint satisfaction problem. The solution is based on constraint satisfaction programming (CSP) as the programming paradigm. A set of general rules for summary production is initially defined by an expert. These production rules are related to the multimedia content of the input video and are expressed as constraints to be satisfied. The final user can then define additional constraints (such as the desired duration of the summary) or set high-level parameters relating to the constraints already defined by the expert. This approach has several advantages. It clearly separates the summary production rules (the problem modeling) from the summary generation algorithm (the problem solving by the CSP solver). The summary can hence be adapted without reviewing the whole summary generation process. For instance, our approach enables users to adapt the summary to the target application and to their preferences by adding a constraint or modifying an existing one, without changing the summary generation algorithm. We have proposed three models of video representation that are distinguished by their flexibility and their efficiency. Besides the originality related to each of the three proposed models, an additional contribution of this thesis is an extensive comparative study of their performance and of the quality of the resulting summaries, using objective and subjective measures. Finally, in order to assess the quality of automatically generated summaries, the proposed approach was evaluated through a large-scale user study involving more than 60 people. All these experiments were performed within the challenging application of automatic tennis match summarization.
Pitarch, Yoann. "Résumé de Flots de Données : motifs, Cubes et Hiérarchies." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20051/document.
Due to the rapid increase of information and communication technologies, the amount of generated and available data has exploded, and a new kind of data, stream data, has appeared. One possible and common definition of a data stream is an unbounded sequence of very precise data arriving at a high rate. Thus, it is impossible to store such a stream in order to perform a posteriori analysis. Moreover, more and more data streams concern multidimensional and multilevel data, and very few approaches tackle these specificities. Thus, in this work, we proposed practical and efficient solutions to deal with such particular data in a dynamic context. More specifically, we were interested in adapting OLAP (On-Line Analytical Processing) and hierarchy techniques to build relevant summaries of the data. First, after describing and discussing similar existing approaches, we proposed two solutions to build data cubes more efficiently on stream data. Second, we were interested in combining frequent patterns and the use of hierarchies to build a summary based on the main trends of the stream. Third, even though many types of hierarchies exist in the literature, none of them integrates expert knowledge during the generalization phase. However, such an integration could be very relevant for building semantically richer summaries. We tackled this issue and proposed a new type of hierarchy, namely contextual hierarchies. With this new type of hierarchy we provide a new conceptual, graphical and logical data warehouse model, namely the contextual data warehouse. Finally, since this work was funded by the ANR through the MIDAS project, we evaluated our approaches on real datasets provided by the industrial partners of this project (e.g., Orange Labs and EDF R&D).
Bawakid, Abdullah. "Automatic documents summarization using ontology based methodologies." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/2896/.
Full textPotapov, Danila. "Supervised Learning Approaches for Automatic Structuring of Videos." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM023/document.
Automatic interpretation and understanding of videos still remains at the frontier of computer vision. The core challenge is to lift the expressive power of the current visual features (as well as features from other modalities, such as audio or text) to be able to automatically recognize typical video sections, with low temporal saliency yet high semantic expression. Examples of such long events include video sections where someone is fishing (TRECVID Multimedia Event Detection), or where the hero argues with a villain in a Hollywood action movie (Inria Action Movies). In this manuscript, we present several contributions towards this goal, focusing on three video analysis tasks: summarization, classification and localisation. First, we propose an automatic video summarization method, yielding a short and highly informative video summary of potentially long videos, tailored for specified categories of videos. We also introduce a new dataset for the evaluation of video summarization methods, called MED-Summaries, which contains complete importance-scoring annotations of the videos, along with a complete set of evaluation tools. Second, we introduce a new dataset, called Inria Action Movies, consisting of long movies annotated with non-exclusive semantic categories (called beat-categories), whose definition is broad enough to cover most of the movie footage. Categories such as "pursuit" or "romance" in action movies are examples of beat-categories. We propose an approach for localizing beat-events based on classifying shots into beat-categories and learning the temporal constraints between shots. Third, we overview the Inria event classification system developed within the TRECVID Multimedia Event Detection competition and highlight the contributions made during the work on this thesis from 2011 to 2014.
Zacarias, Andressa Caroline Inácio. "Investigação de métodos de sumarização automática multidocumento baseados em hierarquias conceituais." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/7974.
Automatic Multi-Document Summarization (MDS) aims at creating a single, coherent and cohesive summary from a collection of texts, from different sources, on the same topic. The creation of these summaries, generally extracts (informative and generic), requires the selection of the most important sentences of the collection. For this, one may use superficial linguistic knowledge (or statistics) or deep knowledge. It is important to note that deep methods, although more expensive and less robust, produce more informative extracts with higher linguistic quality. For Portuguese, the only deep methods that use lexical-conceptual knowledge are based on the frequency of occurrence of the concepts in the collection for content selection. Considering the potential of applying semantic-conceptual knowledge, we propose to investigate MDS methods that start from a representation of the lexical concepts of the source texts in a hierarchy and then explore certain hierarchical properties able to distinguish the most relevant concepts (in other words, the topics of a collection of texts) from the others. Specifically, 3 of the 50 collections of CSTNews (the reference multi-document corpus for Portuguese) were selected, and the nouns occurring in the source texts of each collection were manually indexed to the concepts of the Princeton WordNet (WN.Pr), yielding, in the end, a hierarchy with the concepts derived from the collection and other concepts inherited from WN.Pr for the construction of the hierarchy. The concepts of the hierarchy were characterized by 5 graph (relevance) metrics potentially useful to identify the concepts that should compose a summary: Centrality, Simple Frequency, Cumulative Frequency, Closeness and Level. This characterization was analyzed manually and by machine learning (ML) algorithms in order to verify which measures were most suitable to identify the relevant concepts of the collection. As a result, the Centrality measure was discarded and the others were used to propose content selection methods for MDS. Specifically, 2 sentence selection methods were proposed, which make up the extractive methods: (i) CFSumm, whose content selection is based exclusively on the Simple Frequency metric, and (ii) LCHSumm, whose selection is based on rules learned by ML algorithms using the 4 relevant measures together as attributes. These methods were intrinsically evaluated for informativeness, by means of the ROUGE package of measures, and for linguistic quality, based on the criteria of the TAC conference. For this, the 6 human abstracts available in each CSTNews collection were used. Furthermore, the summaries generated by the proposed methods were compared to the extracts generated by the GistSumm summarizer, taken as the baseline. The two methods obtained satisfactory results when compared to the GistSumm baseline, and the CFSumm method outperforms the LCHSumm method.
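A minimal sketch of selection by concept frequency in the spirit of the CFSumm method described above: nouns are mapped to WordNet concepts, each concept is scored by how often it occurs in the collection, and sentences are ranked by the scores of the concepts they contain. The use of NLTK's WordNet and the naive first-synset mapping are illustrative assumptions, not the thesis' actual indexing procedure.

```python
# Requires the WordNet data: nltk.download("wordnet")
from collections import Counter
from nltk.corpus import wordnet as wn


def concepts(sentence):
    # Naively map every word that has a noun synset to its first WordNet concept.
    synsets = (wn.synsets(w, pos=wn.NOUN) for w in sentence.lower().split())
    return [s[0].name() for s in synsets if s]


def rank_by_concept_frequency(collection_sentences):
    # Score each concept by its frequency of occurrence in the whole collection.
    freq = Counter(c for sent in collection_sentences for c in concepts(sent))
    # Rank sentences by the cumulative frequency of the concepts they contain.
    scored = [(sum(freq[c] for c in concepts(s)), s) for s in collection_sentences]
    return [s for score, s in sorted(scored, reverse=True)]
```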
Diot, Fabien. "Graph mining for object tracking in videos." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4009/document.
Detecting and following the main objects of a video is necessary to describe its content in order to, for example, allow for a relevant indexation of the multimedia content by search engines. Current object tracking approaches either require the user to select the targets to follow, or rely on pre-trained classifiers to detect particular classes of objects such as pedestrians or cars. Since those methods rely on user intervention or prior knowledge of the content to process, they cannot be applied automatically on amateur videos such as the ones found on YouTube. To solve this problem, we build upon the hypothesis that, in videos with a moving background, the main objects should appear more frequently than the background. Moreover, in a video, the topology of the visual elements composing an object is assumed to be consistent from one frame to another. We represent each image of the videos with plane graphs modeling their topology. Then, we search for substructures appearing frequently in the database of plane graphs thus created to represent each video. Our contributions cover both the fields of graph mining and object tracking. In the first field, our first contribution is an efficient plane graph mining algorithm, named PLAGRAM. This algorithm exploits the planarity of the graphs and a new strategy to extend the patterns. The next contributions consist in the introduction of spatio-temporal constraints into the mining process to exploit the fact that, in a video, the motion of objects is small from one frame to another. Thus, we constrain the occurrences of a same pattern to be close in space and time by limiting the number of frames and the spatial distance separating them. We present two new algorithms: DYPLAGRAM, which makes use of the temporal constraint to limit the number of extracted patterns, and DYPLAGRAM_ST, which efficiently mines frequent spatio-temporal patterns from the datasets representing the videos. In the field of object tracking, our contributions consist in two approaches using the spatio-temporal patterns to track the main objects in videos. The first one is based on a search for the shortest path in a graph connecting the spatio-temporal patterns, while the second one uses a clustering approach to regroup them in order to follow the objects for a longer period of time. We also present two industrial applications of our method.
Maaloul, Mohamed. "Approche hybride pour le résumé automatique de textes : Application à la langue arabe." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4778.
This thesis falls within the framework of Natural Language Processing. The problem of automatic summarization of Arabic documents addressed in this thesis is based on two points. The first point relates to the criteria used to determine the essential content to extract. The second point focuses on the means of expressing the extracted essential content in the form of a text targeting the user's potential needs. In order to show the feasibility of our approach, we developed the "L.A.E" system, based on a hybrid approach which combines symbolic analysis with numerical processing. The evaluation results are encouraging and prove the performance of the proposed hybrid approach. These results showed, initially, the applicability of the approach in the context of single documents without restriction as to their topics (Education, Sport, Science, Politics, Interaction, etc.), their content or their volume. They also showed the importance of machine learning in the phase of classification and selection of the sentences forming the final extract.
Dias, Márcio de Souza. "Investigação de modelos de coerência local para sumários multidocumento." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11112016-084734/.
Full textMulti-document summarization is the task of automatically producing a single summary from a collection of texts derived from the same subject. It is essential to treat many phenomena, such as: (i) redundancy, complementarity and contradiction of information; (ii) writing styles standardization; (iii) treatment of referential expressions; (iv) text focus and different perspectives; (v) and temporal ordering of information in the summary. The treatment of these phenomena contributes to the informativeness and coherence of the final summary. A particular type of coherence studied in this thesis is the local coherence, which is defined by the relationship between statements (smallest units) in a sequence of sentences. The local coherence contributes to the construction of textual meaning in its totality. Assuming that the use of discursive knowledge can improve the evaluation of the local coherence, this thesis proposes to investigate the use of discursive relations to develop local coherence models, which are able to automatically distinguish coherent summaries from incoherent ones. In addition, a study on the errors that affect the Linguistic Quality of the summaries was conducted in order to verify what are the errors that affect the local coherence of summaries, as well as if the coherence models can identify such errors, and whether there is any relationship between coherence models and informativenessof summaries. For thisresearch, it wasnecessary theuseof semantic-discursive information of CST models (Cross-document Structure Theory) and RST (Rhetorical Structure Theory) annoted in the corpora, automatic tools, parser as Palavras, and algorithms that extract information from the corpus. The results showed that the use of semantic-discursive information was successful on the distinction between coherent and incoherent summaries, and that the information about coherence can be used in error detection of linguistic quality that affect the local coherence.
Tosta, Fabricio Elder da Silva. "Aplicação de conhecimento léxico-conceitual na sumarização multidocumento multilíngue." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/5796.
Full textFinanciadora de Estudos e Projetos
Traditionally, Multilingual Multi-document Automatic Summarization (MMAS) is a computational application that, from a single collection of source texts on the same subject/topic in at least two languages, produces an informative and generic summary (extract) in one of those languages. The simplest methods automatically translate the source texts and, from the resulting monolingual collection, apply content selection strategies based on shallow and/or deep linguistic knowledge. MMAS applications therefore need to identify the main information of the collection while avoiding redundancy, and also to deal with the problems caused by full machine translation (MT) of the source texts. Looking for alternatives to this traditional scenario, we investigated two methods (Methods 1 and 2) that, being based on deep linguistic knowledge at the lexical-conceptual level, avoid full MT of the source texts and generate informative and cohesive/coherent summaries. In these methods, content selection starts with scoring and ranking the original sentences based on the frequency of occurrence in the collection of the concepts expressed by their common nouns. In Method 1, only the best-scored and non-redundant sentences in the user's language are selected to compose the extract, until the compression rate is reached. In Method 2, the best-ranked and non-redundant original sentences are selected for the summary without privileging the user's language; when sentences that are not in the user's language are selected, they are automatically translated. To produce automatic summaries according to Methods 1 and 2 and to evaluate them, the CM2News corpus was built. The corpus has 20 collections of news texts, each with 1 original text in English and 1 original text in Portuguese on the same topic. The common nouns of CM2News were identified through morphosyntactic annotation and then semi-automatically annotated with Princeton WordNet concepts using the MulSen graphical editor, which was developed especially for the task. To produce the extracts according to Method 1, only the best-ranked sentences in Portuguese were selected until the compression rate was reached. To produce the extracts according to Method 2, the best-ranked sentences were selected without privileging the user's language; when English sentences were selected, they were automatically translated into Portuguese by the Bing translator. Methods 1 and 2 were evaluated intrinsically with respect to the linguistic quality and informativeness of the summaries. To evaluate linguistic quality, 15 computational linguists manually analyzed the grammaticality, non-redundancy, referential clarity, focus, and structure/coherence of the summaries; to evaluate informativeness, the summaries were automatically compared to reference summaries with the ROUGE measures. In both evaluations, the results showed the better performance of Method 1, which may be explained by the fact that its sentences are selected from a single source text. Furthermore, both methods based on lexical-conceptual knowledge outperformed simpler MMAS methods that rely on full MT of the source texts. Finally, besides the promising results on the application of lexical-conceptual knowledge, this work produced important resources and tools for MMAS, such as the CM2News corpus and the MulSen editor.
Tradicionalmente, a Sumarização Automática Multidocumento Multilíngue (SAMM) é uma aplicação que, a partir de uma coleção de textos sobre um mesmo assunto em ao menos duas línguas distintas, produz um sumário (extrato) informativo e genérico em uma das línguas-fonte. Os métodos mais simples realizam a tradução automática (TA) dos textos-fonte e, a partir de uma coleção monolíngue, aplicam estratégias superficiais e/ou profundas de seleção de conteúdo. Dessa forma, a SAMM precisa não só identificar a informação principal da coleção para compor o sumário, evitando-se a redundância, mas também lidar com os problemas causados pela TA integral dos textos-fonte. Buscando alternativas para esse cenário, investigaram-se dois métodos (Método 1 e 2) que, uma vez pautados em conhecimento profundo do tipo léxico-conceitual, evitam a TA integral dos textos-fonte, gerando sumários informativos e coesos/coerentes. Neles, a seleção do conteúdo tem início com a pontuação e o ranqueamento das sentenças originais em função da frequência de ocorrência na coleção dos conceitos expressos por seus nomes comuns. No Método 1, apenas as sentenças mais bem pontuadas na língua do usuário e não redundantes entre si são selecionadas para compor o sumário até que se atinja a taxa de compressão. No Método 2, as sentenças originais mais bem ranqueadas e não redundantes entre si são selecionadas para compor o sumário sem que se privilegie a língua do usuário; caso sentenças que não estejam na língua do usuário sejam selecionadas, estas são automaticamente traduzidas. Para a produção dos sumários automáticos segundo os Métodos 1 e 2 e subsequente avaliação dos mesmos, construiu-se o corpus CM2News, que possui 20 coleções de notícias jornalísticas, cada uma delas composta por 1 texto original em inglês e 1 texto original em português sobre um mesmo assunto. Os nomes comuns do CM2News foram identificados via anotação morfossintática e anotados com os conceitos da WordNet de Princeton de forma semiautomática, ou seja, por meio do editor gráfico MulSen desenvolvido para a tarefa. Para a produção dos sumários segundo o Método 1, somente as sentenças em português mais bem pontuadas foram selecionadas até que se atingisse determinada taxa de compressão. Para a produção dos sumários segundo o Método 2, as sentenças mais pontuadas foram selecionadas sem privilegiar a língua do usuário. Caso as sentenças selecionadas estivessem em inglês, estas foram automaticamente traduzidas para o português pelo tradutor Bing. Os Métodos 1 e 2 foram avaliados de forma intrínseca, considerando-se a qualidade linguística e a informatividade dos sumários. Para avaliar a qualidade linguística, 15 linguistas computacionais analisaram manualmente a gramaticalidade, a não-redundância, a clareza referencial, o foco e a estrutura/coerência dos sumários e, para avaliar a informatividade, os sumários foram automaticamente comparados a sumários de referência pelo pacote de medidas ROUGE. Em ambas as avaliações, os resultados evidenciam o melhor desempenho do Método 1, o que pode ser justificado pelo fato de que as sentenças selecionadas são provenientes de um mesmo texto-fonte. Além disso, ressalta-se o melhor desempenho dos dois métodos baseados em conhecimento léxico-conceitual frente aos métodos mais simples de SAMM, os quais realizam a TA integral dos textos-fonte. 
Por fim, salienta-se que, além dos resultados promissores sobre a aplicação de conhecimento léxico-conceitual, este trabalho gerou recursos e ferramentas importantes para a SAMM, como o corpus CM2News e o editor MulSen.
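To make the content-selection step of this entry concrete, here is a minimal sketch of concept-frequency sentence ranking in the spirit of Method 1. The tokenization, the use of lowercased content words as stand-ins for WordNet concepts, and the compression and redundancy thresholds are simplifying assumptions for illustration, not the thesis implementation.

```python
# Hedged sketch of Method-1-style selection: score sentences by the collection
# frequency of the "concepts" they contain, then greedily pick non-redundant,
# high-scoring sentences until a compression budget is reached.
# A "concept" is approximated here by a lowercased content word; the real method
# uses common nouns annotated with Princeton WordNet concepts.
from collections import Counter

def tokens(sentence):
    return [w.strip(".,;:!?\"'()").lower() for w in sentence.split() if len(w) > 3]

def rank_and_select(sentences, compression=0.3, redundancy=0.5):
    concept_freq = Counter(t for s in sentences for t in set(tokens(s)))
    scored = sorted(sentences,
                    key=lambda s: sum(concept_freq[t] for t in set(tokens(s))),
                    reverse=True)
    summary, budget = [], max(1, int(len(sentences) * compression))
    for sent in scored:
        cand = set(tokens(sent))
        # Skip sentences that overlap too much with what is already selected.
        if any(len(cand & set(tokens(s))) / max(1, len(cand)) > redundancy for s in summary):
            continue
        summary.append(sent)
        if len(summary) >= budget:
            break
    return summary

if __name__ == "__main__":
    docs = ["The new vaccine reduces severe cases by half.",
            "Officials say the vaccine reduces severe cases significantly.",
            "Distribution will start next month in rural areas.",
            "Weather delayed the announcement by one day."]
    print(rank_and_select(docs, compression=0.5))
```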
Laurent, Mario. "Recherche et développement du Logiciel Intelligent de Cartographie Inversée, pour l’aide à la compréhension de texte par un public dyslexique." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAL016/document.
Full textChildren with language impairments, such as dyslexia, often face significant difficulties when learning to read and during any subsequent reading task. These difficulties tend to compromise the understanding of the texts they must read during their time at school, which leads to learning difficulties and may result in academic failure. Over the past fifteen years, general tools developed in the field of Natural Language Processing have been turned into specific tools that help with and compensate for language-impaired students' difficulties. At the same time, the use of concept maps or heuristic maps to encourage dyslexic children to express their thoughts, or to retain certain knowledge, has become popular. This thesis aims to identify and explore knowledge about the dyslexic public, how society cares for them and what difficulties they face; the pedagogical possibilities opened up by the use of maps; and the opportunities created by the fields of automatic summarization and Information Retrieval. The aim of this doctoral research project was to create an innovative piece of software that automatically transforms a given text into a map. It was important that this software facilitate reading comprehension while including functionalities adapted to dyslexic teenagers. The project involved carrying out an exploratory experiment on aiding reading comprehension with heuristic maps, which made it possible to identify new research topics, and implementing a prototype of automatic mapping software, which is presented at the end of this thesis.
Hohm, Joseph Brandon 1982. "Automatic classification of documents with an in-depth analysis of information extraction and automatic summarization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29415.
Full textIncludes bibliographical references (leaves 78-80).
Today, annual information production per capita exceeds two hundred and fifty megabytes. As the amount of data increases, classification and retrieval methods become more necessary for finding relevant information. This thesis describes a .Net application (named I-Document) that establishes an automatic classification scheme in a peer-to-peer environment allowing free sharing of academic, business, and personal documents. A Web service architecture for metadata extraction, Information Extraction, Information Retrieval, and text summarization is described. Specific details regarding the coding process, competition, business model, and technology employed in the project are also discussed.
by Joseph Brandon Hohm.
M.Eng.
Balahur, Dobrescu Alexandra. "Methods and resources for sentiment analysis in multilingual documents of different text types." Doctoral thesis, Universidad de Alicante, 2011. http://hdl.handle.net/10045/19437.
Full textTsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-202687.
Full textPokorný, Lubomír. "Metody sumarizace textových dokumentů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236443.
Full textTsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." BioMed Central, 2015. https://tud.qucosa.de/id/qucosa%3A29496.
Full textFuentes, Fort Maria. "A Flexible Multitask Summarizer for Documents from Different Media, Domain and Language." Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6655.
Full textAutomatic summarization is probably crucial at a time when the sheer volume of documents generated daily makes retrieving, processing, and assimilating the information they contain an arduous yet decisive task. Even so, we cannot expect automatically produced summaries to replace human ones. The automatic summarization process depends not only on the characteristics of the documents to be summarized, but also strongly on the specific needs of users. Designing an information system for summarization therefore involves taking several aspects into account. Depending on the characteristics of the input documents and of the desired output, different techniques can be applied; hence the need for a flexible architecture that allows multiple summarization tasks to be implemented. That is the final goal of this thesis, divided into three research subtopics. First, to study how systems can be adapted to different summarization tasks, such as processing documents produced in different languages, domains, and media (speech and text), with the aim of designing a generic architecture that allows new tasks to be added easily by reusing existing tools. Second, to develop prototypes for different tasks, taking into account aspects related to the language, domain, and medium of the document or set of documents to be summarized, as well as aspects related to the final content of the summary: generic, novelty-oriented, or a summary answering a specific information need. Third, to create an evaluation framework for analyzing the intrinsic competence of different prototypes when summarizing written news and oral scientific presentations.
沈健誠. "Multi-Document Summarization System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/67547214470615254060.
Full text國立清華大學 (National Tsing Hua University)
資訊工程學系 (Department of Computer Science)
89 (ROC academic year, 2000-2001)
Most summarization systems are designed for a single document at present. These systems indicate the essence of an individual document, but do not merge similar documents into a single summary. Can we develop a multi-document summarization system that condenses related documents about the same event into one summary? If that is possible, the main points of the documents can be displayed clearly and simply in two or three sentences. Users can see in a minute whether these documents are what they want. This can reduce the time spent collecting documents and enable users to gather information on the Internet more efficiently. Developing such a multi-document summarization system is the goal of this thesis. The summary produced by the system must satisfy two conditions: it should be indicative and topic-related, and it should be tailored to the user's query. To achieve this goal, we study the indicativeness and topic relevance of sentences, and the selection of sentences that are important and independent of each other. Finally, unimportant small clauses are deleted to make the final summary more concise. The system generates summaries for 248 documents and fifty topics from NTCIR. The reduction rate is over 95%. Overall, the quality of the summaries produced was satisfactory.
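A common way to balance the two conditions named in this abstract, topic relevance and mutual independence of the selected sentences, is a greedy trade-off such as Maximal Marginal Relevance. The sketch below is a generic illustration of that idea using a plain bag-of-words cosine similarity; it is not the system described in the thesis, and the lambda and example data are assumptions.

```python
# Hedged sketch: greedy MMR-style selection that rewards similarity to the query
# (topic relevance) and penalizes similarity to already chosen sentences
# (independence). Bag-of-words cosine is used for illustration only.
from collections import Counter
from math import sqrt

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def mmr_select(sentences, query, k=2, lam=0.7):
    qv, vecs, chosen = bow(query), [bow(s) for s in sentences], []
    while len(chosen) < min(k, len(sentences)):
        best, best_score = None, float("-inf")
        for i, v in enumerate(vecs):
            if i in chosen:
                continue
            redundancy = max((cosine(v, vecs[j]) for j in chosen), default=0.0)
            score = lam * cosine(v, qv) - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return [sentences[i] for i in chosen]

if __name__ == "__main__":
    sents = ["The earthquake damaged several bridges in the city.",
             "Several bridges in the city were damaged by the earthquake.",
             "Relief teams arrived two days after the earthquake."]
    print(mmr_select(sents, query="earthquake damage to bridges"))
```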
"Automatic bilingual text document summarization." 2002. http://library.cuhk.edu.hk/record=b5891141.
Full textThesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 137-143).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Definition of a summary --- p.2
Chapter 1.2 --- Definition of text summarization --- p.3
Chapter 1.3 --- Previous work --- p.4
Chapter 1.3.1 --- Extract-based text summarization --- p.5
Chapter 1.3.2 --- Abstract-based text summarization --- p.8
Chapter 1.3.3 --- Sophisticated text summarization --- p.9
Chapter 1.4 --- Summarization evaluation methods --- p.10
Chapter 1.4.1 --- Intrinsic evaluation --- p.10
Chapter 1.4.2 --- Extrinsic evaluation --- p.11
Chapter 1.4.3 --- The TIPSTER SUMMAC text summarization evaluation --- p.11
Chapter 1.4.4 --- Text Summarization Challenge (TSC) --- p.13
Chapter 1.5 --- Research contributions --- p.14
Chapter 1.5.1 --- Text summarization based on thematic term approach --- p.14
Chapter 1.5.2 --- Bilingual news summarization based on an event-driven approach --- p.15
Chapter 1.6 --- Thesis organization --- p.16
Chapter 2 --- Text Summarization based on a Thematic Term Approach --- p.17
Chapter 2.1 --- System overview --- p.18
Chapter 2.2 --- Document preprocessor --- p.20
Chapter 2.2.1 --- English corpus --- p.20
Chapter 2.2.2 --- English corpus preprocessor --- p.22
Chapter 2.2.3 --- Chinese corpus --- p.23
Chapter 2.2.4 --- Chinese corpus preprocessor --- p.24
Chapter 2.3 --- Corpus thematic term extractor --- p.24
Chapter 2.4 --- Article thematic term extractor --- p.26
Chapter 2.5 --- Sentence score generator --- p.29
Chapter 2.6 --- Chapter summary --- p.30
Chapter 3 --- Evaluation for Summarization using the Thematic Term Approach --- p.32
Chapter 3.1 --- Content-based similarity measure --- p.33
Chapter 3.2 --- Experiments using content-based similarity measure --- p.36
Chapter 3.2.1 --- English corpus and parameter training --- p.36
Chapter 3.2.2 --- Experimental results using content-based similarity measure --- p.38
Chapter 3.3 --- Average inverse rank (AIR) method --- p.59
Chapter 3.4 --- Experiments using average inverse rank method --- p.60
Chapter 3.4.1 --- Corpora and parameter training --- p.61
Chapter 3.4.2 --- Experimental results using AIR method --- p.62
Chapter 3.5 --- Comparison between the content-based similarity measure and the average inverse rank method --- p.69
Chapter 3.6 --- Chapter summary --- p.73
Chapter 4 --- Bilingual Event-Driven News Summarization --- p.74
Chapter 4.1 --- Corpora --- p.75
Chapter 4.2 --- Topic and event definitions --- p.76
Chapter 4.3 --- Architecture of bilingual event-driven news summarization system --- p.77
Chapter 4.4 --- Bilingual event-driven approach summarization --- p.80
Chapter 4.4.1 --- Dictionary-based term translation applying on English news articles --- p.80
Chapter 4.4.2 --- Preprocessing for Chinese news articles --- p.89
Chapter 4.4.3 --- Event clusters generation --- p.89
Chapter 4.4.4 --- Cluster selection and summary generation --- p.96
Chapter 4.5 --- Evaluation for summarization based on event-driven approach --- p.101
Chapter 4.6 --- Experimental results on event-driven summarization --- p.103
Chapter 4.6.1 --- Experimental settings --- p.103
Chapter 4.6.2 --- Results and analysis --- p.105
Chapter 4.7 --- Chapter summary --- p.113
Chapter 5 --- Applying Event-Driven Summarization to a Parallel Corpus --- p.114
Chapter 5.1 --- Parallel corpus --- p.115
Chapter 5.2 --- Parallel documents preparation --- p.116
Chapter 5.3 --- Evaluation methods for the event-driven summaries generated from the parallel corpus --- p.118
Chapter 5.4 --- Experimental results and analysis --- p.121
Chapter 5.4.1 --- Experimental settings --- p.121
Chapter 5.4.2 --- Results and analysis --- p.123
Chapter 5.5 --- Chapter summary --- p.132
Chapter 6 --- Conclusions and Future Work --- p.133
Chapter 6.1 --- Conclusions --- p.133
Chapter 6.2 --- Future work --- p.135
Bibliography --- p.137
Chapter A --- English Stop Word List --- p.144
Chapter B --- Chinese Stop Word List --- p.149
Chapter C --- Event List Items on the Corpora --- p.151
Chapter C.1 --- Event list items for the topic "Upcoming Philippine election" --- p.151
Chapter C.2 --- Event list items for the topic "German train derail" --- p.153
Chapter C.3 --- Event list items for the topic "Electronic service delivery (ESD) scheme" --- p.154
Chapter D --- The sample of an English article (9505001.xml). --- p.156
Tsai, Erh-I., and 蔡而益. "Topic-Based Multi-Document Summarization System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11463643705190386674.
Full text國立交通大學 (National Chiao Tung University)
資訊科學與工程研究所 (Institute of Computer Science and Engineering)
99 (ROC academic year, 2010-2011)
With the explosion in the amount of information available electronically, information overload has become a major problem, and people must spend more and more time looking for the information they need. Automatic text summarization has drawn much attention in recent years and has shown its practicality in document management and search systems.
Liu, Cheng-Chang, and 劉政璋. "Concept Cluster Based News Document Summarization." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/22477396909604899181.
Full text國立交通大學 (National Chiao Tung University)
資訊科學系所 (Department of Computer and Information Science)
93 (ROC academic year, 2004-2005)
A multi-document summarization system can reduce the time a user needs to read a large number of documents. A summarization system, in general, selects salient features from one (or many) document(s) to compose a summary, in the hope that the generated summary helps the user understand the meaning of the document(s). This thesis proposes a method for analyzing the semantics of news documents. The method is divided into two phases. The first phase attempts to discover the subtle topics, called concepts, hidden in documents. Because similar nouns, verbs, and adjectives usually co-occur with the same representative term, we describe a concept by the terms around it and use a semantic network to make the description of a concept more accurate. The second phase distinguishes the concepts discovered in the first phase by their word senses. The K-means clustering algorithm is used to gather concepts with the same sense into the same cluster; clustering mitigates the word sense ambiguity problem and merges concepts with similar senses. After these two phases, we weight sentences with five features and order them by weight: the size of the cluster, the location of the sentence, tf*idf, the distance between the sentence and the center of its cluster, and the similarity between the sentence and its cluster. We use the news documents of the Document Understanding Conference 2003 (DUC 2003) and its evaluation tool to evaluate the performance of our method.
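The sentence-weighting step in this abstract can be pictured as a weighted combination of the five listed features. The minimal sketch below uses precomputed, normalized feature values and equal weights purely for illustration; it is not the scoring function used in the thesis.

```python
# Hedged sketch: combine five sentence features (cluster size, sentence position,
# tf*idf, distance to cluster centre, similarity to the cluster) into one weight.
# Feature values are assumed to be precomputed and normalized to [0, 1].

def sentence_weight(cluster_size, position, tfidf, dist_to_centre, sim_to_cluster,
                    weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Linear combination of the five features; distance to the centre counts negatively."""
    features = (cluster_size, position, tfidf, 1.0 - dist_to_centre, sim_to_cluster)
    return sum(w * f for w, f in zip(weights, features))

if __name__ == "__main__":
    # Two hypothetical sentences: the first sits near its cluster centre and ranks higher.
    print(sentence_weight(0.8, 0.9, 0.6, 0.1, 0.7))
    print(sentence_weight(0.5, 0.4, 0.3, 0.6, 0.4))
```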
Lin, Chih-Lung, and 林志龍. "Mining Association Words for Document Summarization." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/62018588118824230371.
Full textAlliheedi, Mohammed. "Multi-document Summarization System Using Rhetorical Information." Thesis, 2012. http://hdl.handle.net/10012/6820.
Full textLamkhede, Sudarshan. "Multi-document summarization using concept chain graphs." 2005. http://proquest.umi.com/pqdweb?did=994252731&sid=19&Fmt=2&clientId=39334&RQT=309&VName=PQD.
Full textTitle from PDF title page (viewed on Mar. 16, 2006) Available through UMI ProQuest Digital Dissertations. Thesis adviser: Srihari, Rohini K. Includes bibliographical references.
June-Jei, Kuo. "A Study on Multiple Document Summarization Systems." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0507200616513700.
Full text黃思萱. "Multi-Document Summarization Based on Keyword Clustering." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/53800112370126276087.
Full text國立臺灣科技大學 (National Taiwan University of Science and Technology)
資訊管理系 (Department of Information Management)
90 (ROC academic year, 2001-2002)
With the rapid growth of the World Wide Web, more and more information is accessible online. This explosion of information has resulted in an information overload problem: people have no time to read everything and must decide which information to attend to. The technology of automatic text summarization is indispensable for dealing with this problem. Text summarization is the process of distilling the most important information from a source to produce an abridged version for a particular user and task. Recent research on multi-document summarization is based on document clustering. We propose a method of multi-document summarization based instead on keyword clustering, and develop three keyword clustering methods to produce multi-document summaries. We distill representative keywords from all documents and then cluster the keywords using connected components, weighted cliques, and a hybrid of both. The purpose of keyword clustering is to gather information that discusses the same topic or event. Within each cluster, our system computes the weight of each sentence and ranks all sentences by weight. The highest-weighted sentences are chosen as the summary of the documents. Our experiments show that the stricter keyword clustering methods produce better summaries. The system we developed can help people save time and read the important news documents.
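The connected-component variant of keyword clustering described above can be sketched as finding components in a keyword co-occurrence graph. The co-occurrence criterion (shared sentences), the threshold, and the example data below are illustrative assumptions, not the thesis's configuration.

```python
# Hedged sketch: cluster keywords by the connected components of a graph whose
# edges link keywords that co-occur in at least `min_cooc` sentences.
from collections import defaultdict

def keyword_clusters(sentences, keywords, min_cooc=1):
    cooc = defaultdict(int)
    for sent in sentences:
        present = [k for k in keywords if k in sent.lower()]
        for i in range(len(present)):
            for j in range(i + 1, len(present)):
                cooc[frozenset((present[i], present[j]))] += 1
    # Union-find over keywords to extract connected components.
    parent = {k: k for k in keywords}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for pair, count in cooc.items():
        if count >= min_cooc:
            a, b = tuple(pair)
            parent[find(a)] = find(b)
    clusters = defaultdict(list)
    for k in keywords:
        clusters[find(k)].append(k)
    return list(clusters.values())

if __name__ == "__main__":
    sents = ["the vaccine trial showed strong results",
             "trial results were published yesterday",
             "the storm closed several schools"]
    print(keyword_clusters(sents, ["vaccine", "trial", "results", "storm", "schools"]))
```

A weighted-clique variant would keep an edge only when its co-occurrence weight is high enough and require every pair inside a cluster to be connected, which yields tighter (stricter) clusters than connected components.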
Wang, Sheng-Jyun, and 王聖竣. "A Study of Automatic Document Summarization Retrieval." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/30839849743972595166.
Full text中華大學 (Chung Hua University)
資訊管理學系碩士班 (Master's Program, Department of Information Management)
99 (ROC academic year, 2010-2011)
Automatic document summarization can help users quickly understand the content of an article: it can extract important meaning and knowledge and filter out unnecessary information. This research combines a statistical method and a linguistic method to build an automatic document summarization retrieval model. The entropy method is used as the statistical method to extract important features, and the linguistic method is used to explore the relationships among features and derive their importance. First, we extract the features of the documents. Second, we use statistical methods to calculate the weights of the features. Finally, we obtain a score for each sentence through the linguistic approach and identify the important sentences of the documents. This research conducts two experiments to verify the proposed approach. The results reveal that the proposed model extracts better features and that the entropy method outperforms the other three statistical methods, indicating that the proposed framework is feasible and useful as a reference. Keywords: Automatic Article Summary, Statistical Approach, Linguistic Approach.
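As a rough illustration of how an entropy measure can weight candidate features, the sketch below computes the Shannon entropy of a term's distribution over documents; a term concentrated in a few documents has low entropy and is often treated as more discriminative. This is a generic formulation, not the exact weighting scheme of the thesis.

```python
# Hedged sketch: entropy of a term's document-frequency distribution.
# A term spread evenly over all documents has high entropy (less discriminative);
# a term concentrated in a few documents has low entropy (more discriminative).
from math import log2

def term_entropy(counts_per_document):
    total = sum(counts_per_document)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts_per_document if c > 0]
    return -sum(p * log2(p) for p in probs)

if __name__ == "__main__":
    print(term_entropy([5, 5, 5, 5]))   # evenly spread -> 2.0 bits
    print(term_entropy([18, 1, 1, 0]))  # concentrated  -> low entropy
```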
Yang, Pei-Chen, and 楊佩臻. "Using Sentence Network to Automatic Document Summarization." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93066067306713516291.
Full text國立中央大學 (National Central University)
資訊管理學系 (Department of Information Management)
101 (ROC academic year, 2012-2013)
This thesis proposes a graph-based summarization method that builds a sentence network representing the relations between sentences using the Normalized Google Distance (NGD). The method removes the dependence on external resources such as corpora and lexical databases by using only the words in the documents and search results. A wiki search engine is used to calculate NGD and discover the relations between words, from which the keywords in the documents are identified. A vector space model is built from the keywords, and the similarity between sentences is computed to build a sentence network; the most important sentences are then extracted using link analysis. The experimental results show that the ROUGE score of the proposed graph-based single-document summarization method is better than that of other machine-learning methods, and the ROUGE score of the proposed graph-based multi-document summarization method is lower than only a few machine-learning peers. This shows that the proposed method is an effective unsupervised document summarization approach that needs no external resources such as corpora or lexical databases.
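Two pieces of the pipeline in this abstract translate directly into code: the Normalized Google Distance between two words given hit counts from a search index (a wiki index is assumed here), and a PageRank-style link analysis over the sentence similarity network. The sketch below is a generic formulation of both with made-up numbers, not the thesis implementation.

```python
# Hedged sketch: NGD from hit counts, and power-iteration PageRank over a
# sentence similarity matrix. Hit counts and the similarity matrix are assumed
# to come from earlier steps (e.g., a wiki search index and keyword vectors).
from math import log

def ngd(hits_x, hits_y, hits_xy, index_size):
    """Normalized Google Distance; smaller means the two words are more related."""
    fx, fy, fxy, n = map(log, (hits_x, hits_y, hits_xy, index_size))
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

def pagerank(sim, damping=0.85, iters=50):
    """Rank sentences by iterating over the row-normalized similarity graph."""
    n = len(sim)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            incoming = 0.0
            for j in range(n):
                out = sum(sim[j])
                if j != i and out > 0:
                    incoming += sim[j][i] / out * rank[j]
            new.append((1 - damping) / n + damping * incoming)
        rank = new
    return rank

if __name__ == "__main__":
    print(round(ngd(9000, 8000, 6000, 5_000_000), 3))  # illustrative hit counts
    sim = [[0.0, 0.8, 0.1],
           [0.8, 0.0, 0.3],
           [0.1, 0.3, 0.0]]
    print([round(r, 3) for r in pagerank(sim)])  # first two sentences rank highest
```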
Kuo, June-Jei, and 郭俊桔. "A Study on Multiple Document Summarization Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84838785413103052977.
Full text國立臺灣大學 (National Taiwan University)
資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
94 (ROC academic year, 2005-2006)
In order to provide a generic summary that helps online readers absorb news information from multiple sources, this dissertation studies the issues related to multi-document summarization, e.g., event clustering, sentence selection, redundancy avoidance, sentence ordering, and summary evaluation, focusing on two major modules: event clustering and summary generation. In addition to conventional features, e.g., lexical information and part of speech, the term frequency, document frequency, and paragraph dispersion of a word in a document are used to propose informative words, which can represent the corresponding document. In the event clustering module, to further understand a document we introduce semantic features such as event words and co-reference chains. Controlled-vocabulary mining from co-reference chains is also proposed to solve the cross-document named entity unification issue. Meanwhile, we propose a novel dynamic threshold model to enhance the performance of event clustering. In the summary generation module, we propose a temporal tagger to handle temporal resolution and provide sentence dates for sentence ordering, and we introduce latent semantic analysis (LSA) to tackle sentence selection. To address the summary length issue, a sentence reduction algorithm using both event constituent words and informative words is also proposed. The experimental results on both the content and the readability of the generated multi-document summaries are promising. To investigate the performance of the proposed semantic features, headline generation and multilingual multi-document summarization are also studied. Finally, we tackle automatic summary evaluation by introducing question answering (QA), with promising results as well.
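LSA-based sentence selection of the kind mentioned in the generation module is typically implemented by an SVD of the term-by-sentence matrix, picking for each of the top latent topics the sentence with the strongest weight (in the manner of Gong and Liu). The sketch below uses NumPy and a toy matrix and is only a generic illustration under that assumption, not the dissertation's exact procedure.

```python
# Hedged sketch: Gong-and-Liu-style LSA sentence selection.
# Rows of A are terms, columns are sentences; entries are (weighted) counts.
import numpy as np

def lsa_select(term_sentence_matrix, num_sentences=2):
    # SVD: A = U * S * Vt; row k of Vt describes how strongly each sentence
    # expresses the k-th latent topic (topics come in decreasing importance).
    _, _, vt = np.linalg.svd(term_sentence_matrix, full_matrices=False)
    chosen = []
    for topic in vt:
        idx = int(np.argmax(np.abs(topic)))   # sentence that best expresses this topic
        if idx not in chosen:
            chosen.append(idx)
        if len(chosen) >= num_sentences:
            break
    return chosen

if __name__ == "__main__":
    # Toy term-by-sentence matrix: 5 terms, 4 sentences.
    A = np.array([[2, 0, 1, 0],
                  [1, 0, 1, 0],
                  [0, 3, 0, 1],
                  [0, 1, 0, 2],
                  [1, 0, 0, 0]], dtype=float)
    print(lsa_select(A, num_sentences=2))  # indices of the selected sentences
```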
Tsai, Bing-Hong, and 蔡秉宏. "Extractive Document Summarization based on BERT Model." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394076%22.&searchmode=basic.
Full text國立中興大學 (National Chung Hsing University)
資訊科學與工程學系所 (Department of Computer Science and Engineering)
107 (ROC academic year, 2018-2019)
Document summarization is an important application of natural language processing (NLP). In this thesis, we propose an extractive summarization model based on the BERT model. Our idea is to cast extractive document summarization as a key-sentence selection problem and adapt BERT to learn a classification model that predicts a score for each token; the token scores within a sentence are then aggregated and averaged to obtain the sentence score. The experimental evaluation on the CNN/DailyMail dataset demonstrates the performance of the proposed BERT adaptation for the extractive document summarization task, improving the ROUGE-1 score from 42.99 to 43.42.
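The scoring flow described in this abstract (token scores averaged into sentence scores, then top sentences extracted) can be sketched with the Hugging Face transformers library. This is a minimal sketch under several assumptions: the model name, the two-label "keep/drop" interpretation, and the selection size are illustrative, and the token-classification head below is randomly initialized, so without fine-tuning on a summarization dataset the scores are not meaningful. It is not the thesis's training setup.

```python
# Hedged sketch: score sentences by averaging per-token scores from a BERT
# token-classification head, then pick the top-ranked sentences as the extract.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def sentence_scores(sentences):
    scores = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            logits = model(**inputs).logits              # shape: (1, num_tokens, 2)
        token_scores = logits.softmax(dim=-1)[0, :, 1]   # probability of the assumed "keep" label
        scores.append(float(token_scores.mean()))        # average token score = sentence score
    return scores

def extract_summary(sentences, k=2):
    scores = sentence_scores(sentences)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]    # keep original document order

if __name__ == "__main__":
    doc = ["The council approved the new budget on Tuesday.",
           "Spending on roads will rise by ten percent.",
           "A local bakery won a regional award the same week."]
    print(extract_summary(doc, k=2))
```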