Academic literature on the topic 'Document Summarization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Document Summarization.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Document Summarization"
Rahamat Basha, S., J. Keziya Rani, and J. J. C. Prasad Yadav. "A Novel Summarization-based Approach for Feature Reduction Enhancing Text Classification Accuracy." Engineering, Technology & Applied Science Research 9, no. 6 (December 1, 2019): 5001–5. http://dx.doi.org/10.48084/etasr.3173.
Singh, Sandhya, Kevin Patel, Krishnanjan Bhattacharjee, Hemant Darbari, and Seema Verma. "Towards Better Single Document Summarization using Multi-Document Summarization Approach." International Journal of Computer Sciences and Engineering 7, no. 5 (May 31, 2019): 695–703. http://dx.doi.org/10.26438/ijcse/v7i5.695703.
Kongara, Srinivasa Rao, Dasika Sree Rama Chandra Murthy, and Gangadhara Rao Kancherla. "An Automatic Text Summarization Method with the Concern of Covering Complete Formation." Recent Advances in Computer Science and Communications 13, no. 5 (November 5, 2020): 977–86. http://dx.doi.org/10.2174/2213275912666190716105347.
Diedrichsen, Elke. "Linguistic challenges in automatic summarization technology." Journal of Computer-Assisted Linguistic Research 1, no. 1 (June 26, 2017): 40. http://dx.doi.org/10.4995/jclr.2017.7787.
D’Silva, Suzanne, Neha Joshi, Sudha Rao, Sangeetha Venkatraman, and Seema Shrawne. "Improved Algorithms for Document Classification & Query-based Multi-Document Summarization." International Journal of Engineering and Technology 3, no. 4 (2011): 404–9. http://dx.doi.org/10.7763/ijet.2011.v3.261.
Vikas, A., Pradyumna G.V.N, and Tahir Ahmed Shaik. "Text Summarization." International Journal of Engineering and Computer Science 9, no. 2 (February 3, 2020): 24940–45. http://dx.doi.org/10.18535/ijecs/v9i2.4437.
Sirohi, Neeraj Kumar, Mamta Bansal, and S. N. Rajan. "Text Summarization Approaches Using Machine Learning & LSTM." Revista Gestão Inovação e Tecnologias 11, no. 4 (September 1, 2021): 5010–26. http://dx.doi.org/10.47059/revistageintec.v11i4.2526.
Manju, K., S. David Peter, and Sumam Idicula. "A Framework for Generating Extractive Summary from Multiple Malayalam Documents." Information 12, no. 1 (January 18, 2021): 41. http://dx.doi.org/10.3390/info12010041.
Mamidala, Kishore Kumar, and Suresh Kumar Sanampudi. "A Novel Framework for Multi-Document Temporal Summarization (MDTS)." Emerging Science Journal 5, no. 2 (April 1, 2021): 184–90. http://dx.doi.org/10.28991/esj-2021-01268.
Yadav, Avaneesh Kumar, Ashish Kumar Maurya, Ranvijay, and Rama Shankar Yadav. "Extractive Text Summarization Using Recent Approaches: A Survey." Ingénierie des systèmes d'information 26, no. 1 (February 28, 2021): 109–21. http://dx.doi.org/10.18280/isi.260112.
Full textDissertations / Theses on the topic "Document Summarization"
Tohalino, Jorge Andoni Valverde. "Extractive document summarization using complex networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24102018-155954/.
Due to the large amount of textual information available on the Internet, the task of automatic document summarization has gained significant importance. Document summarization has become important because it focuses on developing techniques for finding relevant, concise content in large volumes of information without altering its original meaning. The goal of this Master's work is to use concepts from graph theory for extractive document summarization, both for single-document summarization (SDS) and for multi-document summarization (MDS). In this work, documents are modeled as networks in which sentences are represented as nodes, with the aim of extracting the most relevant sentences through the use of ranking algorithms. Edges between nodes are established in different ways. The first approach to computing edges is based on the number of nouns shared by two sentences (network nodes). Another approach to creating an edge is through the similarity between two sentences. To compute this similarity, the vector space model based on Tf-Idf weighting and word embeddings were used for the vector representation of sentences. Furthermore, we distinguish between edges that link sentences from different documents (inter-layer) and those that connect sentences from the same document (intra-layer) by using multilayer network models for the multi-document summarization task. In this approach, each layer of the network represents one document of the document set to be summarized. In addition to measurements typically used in complex networks, such as node degree, clustering coefficient, and shortest paths, the network characterization is also guided by dynamical measurements of complex networks, including symmetry, accessibility, and absorption time. The generated summaries were evaluated using different corpora for Portuguese and English. The ROUGE-1 metric was used to validate the generated summaries. The results suggest that simpler models, such as Noun-based and Tf-Idf-based networks, performed better than the models based on word embeddings. Moreover, excellent results were obtained using the multilayer network representation of documents for MDS. Finally, we conclude that several measurements can be used to improve the characterization of networks for the summarization task.
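For readers who want a concrete picture of the kind of pipeline this abstract describes, the sketch below builds a sentence network with Tf-Idf cosine-similarity edges and ranks the nodes with PageRank. It is only an illustration under assumed choices: the 0.1 similarity threshold, the PageRank ranker, and all function names are ours, not the thesis's.

# Illustrative sketch of graph-based extractive summarization: sentences become
# nodes, Tf-Idf cosine similarity defines weighted edges, and a ranking
# algorithm selects the top sentences. Parameter values are assumptions.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, num_sentences=3, threshold=0.1):
    # Vector-space representation of each sentence (Tf-Idf weighting).
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)

    # Build the sentence network: an edge links two sentences whose
    # similarity exceeds the threshold; the similarity is the edge weight.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > threshold:
                graph.add_edge(i, j, weight=sim[i, j])

    # Rank the nodes and keep the top-scoring sentences in document order.
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:num_sentences]
    return [sentences[i] for i in sorted(top)]

The thesis additionally explores noun-overlap edges, multilayer networks for multi-document input, and dynamical network measurements, none of which are shown here.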
Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.
Huang, Fang. "Multi-document summarization with latent semantic analysis." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419255.
Grant, Harald. "Extractive Multi-document Summarization of News Articles." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158275.
Geiss, Johanna. "Latent semantic sentence clustering for multi-document summarization." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609761.
Chellal, Abdelhamid. "Event summarization on social media stream : retrospective and prospective tweet summarization." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30118/document.
User-generated content on social media, such as Twitter, often provides the latest news before traditional media, which allows a retrospective summary of events and timely updates whenever a new development occurs. However, social media, while being a valuable source of information, can also be overwhelming given the volume and velocity of published information. To shield users from irrelevant and redundant posts, retrospective summarization and prospective notification (real-time summarization) were introduced as two complementary tasks of information seeking on document streams. The former aims to select a list of relevant and non-redundant tweets that capture "what happened". In the latter, systems monitor the live post stream and push relevant and novel notifications as soon as possible. Our work falls within these frameworks and focuses on developing tweet summarization approaches for the two aforementioned scenarios. It aims at providing summaries that capture the key aspects of the event of interest, helping users to efficiently acquire information and follow the development of long-running events on social media. Nevertheless, the tweet summarization task faces many challenges that stem, on the one hand, from the high volume, velocity, and variety of the published information and, on the other hand, from the quality of tweets, which can vary significantly. In prospective notification, the core task is relevance and novelty detection in real time. For timeliness, a system may choose to push new updates in real time or may trade timeliness for higher notification quality. Our contributions address these levels. First, we introduce the Word Similarity Extended Boolean Model (WSEBM), a relevance model that does not rely on stream statistics and takes advantage of a word embedding model. We use word similarity instead of traditional weighting techniques. By doing this, we overcome the shortness and word-mismatch issues in tweets. The intuition behind our proposition is that the context-aware similarity measure in word2vec is able to consider different words with the same semantic meaning and hence offsets the word-mismatch issue when calculating the similarity between a tweet and a topic. Second, we propose to compute the novelty score of an incoming tweet with respect to all words of the tweets already pushed to the user, instead of using pairwise comparison. The proposed novelty detection method scales better and reduces the execution time, which fits real-time tweet filtering. Third, we propose an adaptive learning-to-filter approach that leverages social signals as well as query-dependent features. To overcome the issue of setting a relevance threshold, we use a binary classifier that predicts the relevance of the incoming tweet. In addition, we show the gain that can be achieved by taking advantage of ongoing relevance feedback. Finally, we adopt a real-time push strategy and show that the proposed approach achieves promising performance in terms of quality (relevance and novelty) with a low cost in latency, whereas state-of-the-art approaches tend to trade latency for higher quality. This thesis also explores a novel approach to generating a retrospective summary that follows a different paradigm from the majority of state-of-the-art methods. We consider summary generation as an optimization problem that takes into account topical and temporal diversity. Tweets are filtered and incrementally clustered into two cluster types, namely topical clusters based on content similarity and temporal clusters that depend on publication time. Summary generation is formulated as an integer linear program in which the unknown variables are binary, the objective function is to be maximized, and constraints ensure that at most one post per cluster is selected while respecting the defined summary length limit.
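As a rough illustration of the optimization step described above, the sketch below selects tweets with an integer linear program that enforces at most one post per cluster and a summary length budget. The relevance scores, the character-based length limit, and the PuLP formulation are our assumptions for illustration; the thesis's actual objective also models topical and temporal diversity.

# Minimal ILP sketch: pick at most one tweet per cluster under a length budget,
# maximizing a simple relevance objective. Inputs and scoring are hypothetical.
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def select_summary(tweets, clusters, scores, max_length=280 * 5):
    # tweets: list of tweet texts; clusters: cluster id per tweet;
    # scores: relevance score per tweet (same indexing).
    prob = LpProblem("tweet_summary", LpMaximize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(len(tweets))]

    # Objective: total relevance of the selected tweets.
    prob += lpSum(scores[i] * x[i] for i in range(len(tweets)))

    # At most one tweet per (topical or temporal) cluster.
    for c in set(clusters):
        prob += lpSum(x[i] for i in range(len(tweets)) if clusters[i] == c) <= 1

    # Overall summary length budget, counted in characters here.
    prob += lpSum(len(tweets[i]) * x[i] for i in range(len(tweets))) <= max_length

    prob.solve()
    return [tweets[i] for i in range(len(tweets)) if x[i].value() > 0.5]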
Linhares, Pontes Elvys. "Compressive Cross-Language Text Summarization." Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0232/document.
The popularization of social networks and digital documents has quickly increased the information available on the Internet. However, this huge amount of data cannot be analyzed manually. Natural Language Processing (NLP) analyzes the interactions between computers and human languages in order to process and analyze natural language data. NLP techniques incorporate a variety of methods, including linguistics, semantics, and statistics, to extract entities and relationships and to understand a document. Among several NLP applications, we are interested, in this thesis, in cross-language text summarization, which produces a summary in a language different from the language of the source documents. We also analyzed other NLP tasks (word encoding representation, semantic similarity, sentence and multi-sentence compression) to generate more stable and informative cross-lingual summaries. Most NLP applications (including all types of text summarization) use some kind of similarity measure to analyze and compare the meaning of words, chunks, sentences, and texts. One way to analyze this similarity is to generate a representation of these sentences that captures their meaning. The meaning of sentences is defined by several elements, such as the context of words and expressions, the order of words, and the preceding information. Simple metrics, such as the cosine metric and the Euclidean distance, provide a measure of similarity between two sentences; however, they do not analyze the order of words or multi-word expressions. Analyzing these problems, we propose a neural network model that combines recurrent and convolutional neural networks to estimate the semantic similarity of a pair of sentences (or texts) based on the local and general contexts of words. Our model predicted better similarity scores than baselines by better analyzing the local and general meanings of words and multi-word expressions. In order to remove redundancies and non-relevant information from similar sentences, we propose a multi-sentence compression method that fuses similar sentences into correct, short compressions containing their main information. We model clusters of similar sentences as word graphs. Then, we apply an integer linear programming model that guides the compression of these clusters based on a list of keywords: we look for a path in the word graph that has good cohesion and contains the maximum number of keywords. Our approach outperformed baselines by generating more informative and correct compressions for French, Portuguese, and Spanish. Finally, we combine these methods to build a cross-language text summarization system. Our system is an {English, French, Portuguese, Spanish}-to-{English, French} cross-language text summarization framework that analyzes the information in both languages to identify the most relevant sentences. Inspired by compressive text summarization methods in monolingual analysis, we adapt our multi-sentence compression method to this problem to keep just the main information. Our system proves to be a good alternative for compressing redundant information while preserving relevant information. It improves informativeness scores without losing grammatical quality for French-to-English cross-lingual summaries. Analyzing {English, French, Portuguese, Spanish}-to-{English, French} cross-lingual summaries, our system significantly outperforms extractive state-of-the-art baselines for all these languages. In addition, we analyze the cross-language summarization of transcript documents; our approach achieved better and more stable scores even for these documents, which contain grammatical errors and missing information.
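To make the word-graph idea above more tangible, here is a much-simplified sketch that merges similar sentences into a directed word graph and extracts a compression as a cheap path from a start marker to an end marker. The real method uses an integer linear programming model guided by keywords; the frequency-based edge weights and the shortest-path stand-in below are our simplifications, not the thesis's formulation.

# Simplified word-graph multi-sentence compression: identical lowercased tokens
# merge into one node, frequent transitions get cheaper edges, and the output
# is a short path that prefers wording shared by several input sentences.
import networkx as nx

START, END = "<start>", "<end>"

def compress(sentences):
    graph = nx.DiGraph()
    for sent in sentences:
        tokens = [START] + sent.lower().split() + [END]
        for a, b in zip(tokens, tokens[1:]):
            if graph.has_edge(a, b):
                graph[a][b]["count"] += 1
            else:
                graph.add_edge(a, b, count=1)

    # Turn transition counts into costs: shared wording becomes cheaper.
    for _, _, data in graph.edges(data=True):
        data["weight"] = 1.0 / data["count"]

    path = nx.shortest_path(graph, START, END, weight="weight")
    return " ".join(path[1:-1])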
Kipp, Darren. "Shallow semantics for topic-oriented multi-document automatic text summarization." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27772.
Hennig, Leonhard. "Content Modeling for Automatic Document Summarization." Supervised by Sahin Albayrak. Berlin: Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1017593698/34.
Tsai, Chun-I. "A Study on Neural Network Modeling Techniques for Automatic Document Summarization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395940.
Books on the topic "Document Summarization"
Hovy, Eduard. Text Summarization. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0032.
Innovative Document Summarization Techniques: Revolutionizing Knowledge Understanding. Idea Group, U.S., 2014.
Jacquemin, Christian, and Didier Bourigault. Term Extraction and Automatic Indexing. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0033.
Hirschman, Lynette, and Inderjeet Mani. Evaluation. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0022.
Book chapters on the topic "Document Summarization"
Torres-Moreno, Juan-Manuel. "Single-Document Summarization." In Automatic Text Summarization, 53–108. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781119004752.ch3.
Torres-Moreno, Juan-Manuel. "Evaluating Document Summaries." In Automatic Text Summarization, 243–73. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781119004752.ch8.
Torres-Moreno, Juan-Manuel. "Guided Multi-Document Summarization." In Automatic Text Summarization, 109–50. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781119004752.ch4.
Ramanathan, Krishnan, Yogesh Sankarasubramaniam, Nidhi Mathur, and Ajay Gupta. "Document Summarization using Wikipedia." In Proceedings of the First International Conference on Intelligent Human Computer Interaction, 254–60. New Delhi: Springer India, 2009. http://dx.doi.org/10.1007/978-81-8489-203-1_25.
Sonawane, Sheetal, Archana Ghotkar, and Sonam Hinge. "Context-Based Multi-document Summarization." In Advances in Intelligent Systems and Computing, 153–65. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1540-4_16.
Bathija, Richeeka, Pranav Agarwal, Rakshith Somanna, and G. B. Pallavi. "Multi-document Text Summarization Tool." In Evolutionary Computing and Mobile Sustainable Networks, 683–91. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5258-8_63.
Kumaresh, Nandhini, and Balasundaram Sadhu Ramakrishnan. "Graph Based Single Document Summarization." In Lecture Notes in Computer Science, 32–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27872-3_5.
Carrillo-Mendoza, Pabel, Hiram Calvo, and Alexander Gelbukh. "Intra-document and Inter-document Redundancy in Multi-document Summarization." In Advances in Computational Intelligence, 105–15. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-62434-1_9.
Wan, Xiaojun. "Document-Based HITS Model for Multi-document Summarization." In PRICAI 2008: Trends in Artificial Intelligence, 454–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89197-0_42.
Afantenos, Stergos D., Irene Doura, Eleni Kapellou, and Vangelis Karkaletsis. "Exploiting Cross-Document Relations for Multi-document Evolving Summarization." In Methods and Applications of Artificial Intelligence, 410–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24674-9_43.
Conference papers on the topic "Document Summarization"
Wang, Fu Lee, Tak-Lam Wong, and Aston Nai Hong Mak. "Organization of Documents for Multiple Document Summarization." In 2008 Seventh International Conference on Web-based Learning, ICWL. IEEE, 2008. http://dx.doi.org/10.1109/icwl.2008.6.
Jin, Hanqi, and Xiaojun Wan. "Abstractive Multi-Document Summarization via Joint Learning with Single-Document Summarization." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.231.
Christensen, Janara, Stephen Soderland, Gagan Bansal, and Mausam. "Hierarchical Summarization: Scaling Up Multi-Document Summarization." In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/p14-1085.
Ranjitha, N. S., and Jagadish S. Kallimani. "Abstractive multi-document summarization." In 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2017. http://dx.doi.org/10.1109/icacci.2017.8126086.
Hu, Meishan, Aixin Sun, and Ee-Peng Lim. "Comments-oriented document summarization." In the 31st annual international ACM SIGIR conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390334.1390385.
Kishore, V. V. Krishna, and Pramod Kumar Singh. "Multiple data document summarization." In 2017 Conference on Information and Communication Technology (CICT). IEEE, 2017. http://dx.doi.org/10.1109/infocomtech.2017.8340602.
Zhu, Junyan, Can Wang, Xiaofei He, Jiajun Bu, Chun Chen, Shujie Shang, Mingcheng Qu, and Gang Lu. "Tag-oriented document summarization." In the 18th international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1526709.1526925.
Wang, Feng, and Bernard Merialdo. "Multi-document video summarization." In 2009 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2009. http://dx.doi.org/10.1109/icme.2009.5202747.
Yapinus, Glorian, Alva Erwin, Maulhikmah Galinium, and Wahyu Muliady. "Automatic multi-document summarization for Indonesian documents using hybrid abstractive-extractive summarization technique." In 2014 6th International Conference on Information Technology and Electrical Engineering (ICITEE). IEEE, 2014. http://dx.doi.org/10.1109/iciteed.2014.7007896.
Naveen, Gopal K. R., and Prema Nedungadi. "Query-based Multi-Document Summarization by Clustering of Documents." In the 2014 International Conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2660859.2660972.
Reports on the topic "Document Summarization"
Sekine, Satoshi, and Chikashi Nobata. A Survey for Multi-Document Summarization. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada460234.
Siddharthan, Advaith, Ani Nenkova, and Kathleen McKeown. Syntactic Simplification for Improving Content Selection in Multi-Document Summarization. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada457833.
Kaplin, David B. Automatic Summarization with Sloth (Summarizes Lengthy Documents and Outputs The Highlights). Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada408523.